
English-Document-OCR-Qwen3.5-2B

by loay
Nexus Index
38.0 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 15
R: Recency 97
Q: Quality 50
Tech Context

  • Params: 2B
  • Context: 4,096 tokens
  • Downloads (30 days): 264
  • FNI Score: 38 (audited)
  • Est. VRAM: ~3GB (8GB GPU recommended)
  • License: CC-BY-NC-SA-4.0 (restricted, non-commercial)
Model Information Summary
Entity Passport
Registry ID hf-model--loay--english-document-ocr-qwen3.5-2b
License CC-BY-NC-SA-4.0
Provider huggingface
💾 Compute Threshold

~2.8GB VRAM*

* Static estimate for 4-bit quantization.

📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__loay__english_document_ocr_qwen3.5_2b,
  author = {loay},
  title = {English-Document-OCR-Qwen3.5-2B Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/loay/english-document-ocr-qwen3.5-2b}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
loay. (2026). English-Document-OCR-Qwen3.5-2B [Model]. Free2AITools. https://huggingface.co/loay/english-document-ocr-qwen3.5-2b

🔬 Technical Deep Dive


Quick Commands

🦙 Ollama Run
ollama run hf.co/loay/english-document-ocr-qwen3.5-2b
🤗 HF Download
huggingface-cli download loay/english-document-ocr-qwen3.5-2b

âš–ī¸ Nexus Index V2.0

38.0
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 15
Recency (R) 97
Quality (Q) 50

đŸ’Ŧ Index Insight

FNI V2.0 for English Document Ocr Qwen3.5 2b: Semantic (S:50), Authority (A:0), Popularity (P:15), Recency (R:97), Quality (Q:50).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live
---


English-Document-OCR-Qwen3.5-2B

I built this model as part of my ongoing work in document digitization and archival OCR. My goal was to create a small, locally-runnable model that punches above its weight class, and I'm happy to say it does: despite being only 2B parameters, it outperforms several larger frontier models on text extraction from complex document layouts, including dense multi-column newsprint, historical serif typefaces, and degraded archival scans.

This is the first release. I'll be sharing updated versions with broader language coverage and improved layout handling soon. If you try it on your documents, I'd love to hear how it performs; feel free to leave feedback in the Community tab.

License: This model is intended for personal and research use only. If you want to use this model in a product or service, or need to process documents commercially, contact ocr@loay.net.


Model Details

  • Fine-tuned by: loay
  • Base Model: unsloth/Qwen3.5-2B
  • Task: Document OCR
  • Training Data: 8,000 synthetic English document images with ground-truth Markdown transcriptions, featuring faded ink, bleed-through artifacts, skewed layouts, and historical serif typefaces
  • Output Format: Markdown text preserving paragraph flow and layout structure
  • Language Support: Optimized for English and other left-to-right (LTR) scripts. See my other fine-tuned OCR models for right-to-left document OCR.

Usage

The model does not require a specific prompt. It will perform OCR on any document image by default. To achieve the best results and prevent conversational hallucinations, use the exact instruction the model was fine-tuned on:

Extract all text from this document image and output it in markdown format.
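This instruction can be paired with a document image in the usual transformers-style chat message. A minimal sketch follows; the helper name and message structure are illustrative conventions for image-text-to-text models, not taken from the model card, and the actual processor/model classes depend on the Qwen3.5 base:

```python
# Illustrative only: builds the chat-message payload that a transformers
# processor's apply_chat_template() typically expects for
# image-text-to-text models. No model is loaded here.

INSTRUCTION = ("Extract all text from this document image "
               "and output it in markdown format.")

def build_ocr_message(image_path: str) -> list[dict]:
    """Single-turn user message carrying the image plus the exact
    instruction the model was fine-tuned on."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": INSTRUCTION},
        ],
    }]

messages = build_ocr_message("your_document.jpg")
```

You would then pass `messages` through the processor's chat template and generate as with any vision-language model.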


GGUF & Local Inference

Quantized GGUF files are available for use with llama.cpp, LM Studio, Ollama, and similar runtimes.

You must load mmproj-english-document-ocr-qwen3.5-2b-f16.gguf alongside your chosen weight file. Without the multimodal projector, the model cannot process images.

| File | Use Case |
|------|----------|
| english-document-ocr-qwen3.5-2b-f16.gguf | Full precision, maximum accuracy |
| english-document-ocr-qwen3.5-2b-q8_0.gguf | Best quality/size tradeoff for OCR precision |
| english-document-ocr-qwen3.5-2b-q6_k.gguf | High quality, lower VRAM |
| english-document-ocr-qwen3.5-2b-q5_k_m.gguf | Balanced quality and speed |
| english-document-ocr-qwen3.5-2b-q4_k_m.gguf | Fast, efficient local inference |
| mmproj-english-document-ocr-qwen3.5-2b-f16.gguf | Required multimodal projector (load with any weight above) |

Example with llama.cpp:

```bash
llama-cli \
  --model english-document-ocr-qwen3.5-2b-q4_k_m.gguf \
  --mmproj mmproj-english-document-ocr-qwen3.5-2b-f16.gguf \
  --image your_document.jpg
```

Limitations

  • Trained exclusively on synthetic data; quality may degrade on severe real-world scan artifacts outside the training distribution.
  • No handwriting support; relies on the base model's zero-shot ability for cursive or marginalia.
  • Does not extract mathematical formulas, charts, or scientific figures.
  • Optimized for LTR Latin scripts. For Arabic/RTL documents, see my other OCR models.
  • May hallucinate or break on very long context from dense pages. If your document is text-heavy, consider splitting it into sections before inference.
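The splitting suggestion above is mostly bookkeeping: compute overlapping horizontal crop boxes so text cut at one strip boundary appears whole in the neighbouring strip. The strip height and overlap values here are illustrative, and the actual cropping (e.g. with Pillow) is left to the caller:

```python
# Sketch: overlapping horizontal strips for OCR of tall, dense pages.
# Returns (top, bottom) pixel ranges; values are illustrative defaults.

def strip_boxes(page_height: int, strip_height: int = 1024,
                overlap: int = 128) -> list[tuple[int, int]]:
    """Cover the page with strips; each strip overlaps the previous
    one by `overlap` pixels so no line of text is lost at a cut."""
    boxes = []
    top = 0
    while True:
        bottom = min(top + strip_height, page_height)
        boxes.append((top, bottom))
        if bottom >= page_height:
            return boxes
        top = bottom - overlap
```

Each strip can then be sent through the model separately and the Markdown outputs concatenated, deduplicating the overlapped lines.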

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: CC-BY-NC-SA-4.0 is non-commercial; contact the author before any commercial use.

Social Proof

HuggingFace Hub
264 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--loay--english-document-ocr-qwen3.5-2b
slug: loay--english-document-ocr-qwen3.5-2b
source: huggingface
author: loay
license: CC-BY-NC-SA-4.0
tags: gguf, ocr, vision-language-model, document-understanding, qwen3, qwen3.5, lora, english, archival, text-extraction, merged, image-text-to-text, en, base_model:qwen/qwen3.5-2b, base_model:adapter:qwen/qwen3.5-2b, license:cc-by-nc-sa-4.0, endpoints_compatible, region:us, conversational

âš™ī¸ Technical Specs

architecture: null
params (billions): 2
context length: 4,096
pipeline tag: image-text-to-text
vram (gb): 2.8
vram is estimated: true
vram formula: VRAM ≈ (params × 0.75) + 0.8GB (KV) + 0.5GB (OS)
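Plugging this model's 2B parameter count into the stated formula reproduces the 2.8GB estimate:

```python
# Static VRAM estimate from the formula above:
# VRAM ≈ (params * 0.75) + 0.8 GB (KV cache) + 0.5 GB (OS/runtime)
params_b = 2.0
vram_gb = params_b * 0.75 + 0.8 + 0.5  # 1.5 + 0.8 + 0.5 = 2.8
```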

📊 Engagement & Metrics

downloads: 264
stars: 0
forks: 0

Data indexed from public sources. Updated daily.