🧠 Model: Qwen3.5 27b Tq3 1s

by YTan2000 (hf-model--ytan2000--qwen3.5-27b-tq3_1s)

Nexus Index: 43.5 (Top 100%)
S: Semantic 50 | A: Authority 0 | P: Popularity 44 | R: Recency 99 | Q: Quality 50

Tech Context: 27B params | 4,096-token context
Vital Performance: 10.6K downloads / 30 days
Hardware: ~22 GB est. VRAM (24 GB-class GPU)
License: MIT (commercial use permitted)

Model Information Summary

Entity Passport
Registry ID: hf-model--ytan2000--qwen3.5-27b-tq3_1s
License: MIT
Provider: huggingface
💾 Compute Threshold

~21.6 GB VRAM*

* Static estimate for 4-bit quantization.

📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__ytan2000__qwen3.5_27b_tq3_1s,
  author = {YTan2000},
  title = {Qwen3.5 27b Tq3 1s Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/ytan2000/qwen3.5-27b-tq3_1s}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
YTan2000. (2026). Qwen3.5 27b Tq3 1s [Model]. Free2AITools. https://huggingface.co/ytan2000/qwen3.5-27b-tq3_1s

🔬 Technical Deep Dive

Quick Commands

🦙 Ollama Run
ollama run qwen3.5-27b-tq3_1s
🤗 HF Download
huggingface-cli download ytan2000/qwen3.5-27b-tq3_1s

âš–ī¸ Nexus Index V2.0

43.5
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 44
Recency (R) 99
Quality (Q) 50

💬 Index Insight

FNI V2.0 for Qwen3.5 27b Tq3 1s: Semantic (S:50), Authority (A:0), Popularity (P:44), Recency (R:99), Quality (Q:50).

Free2AITools Nexus Index | Verification Authority | Unbiased Data Node Refresh: VFS Live

---

Qwen3.5-27B-TQ3_1S

Qwen3.5-27B-TQ3_1S is a GGUF quantization of Qwen/Qwen3.5-27B using TQ3_1S, a 3.5-bit Walsh-Hadamard-transform weight format.
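The TQ3_1S codec itself is not documented on this card, but the Walsh-Hadamard transform it builds on is easy to illustrate. Below is a minimal sketch (plain Python; the helper name `fwht` is mine, not part of the format): the transform is a cheap, invertible rotation, since applying it twice returns the input scaled by the vector length, which is why such rotations can be applied to weight blocks before quantization without losing information.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform in natural order (len must be a power of 2).

    O(n log n) butterfly passes; the transform matrix H satisfies
    H @ H == n * I, so applying fwht twice rescales the input by n.
    """
    a = list(a)  # work on a copy
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return a

vec = [1.0, 0.0, 1.0, 0.0]
once = fwht(vec)    # [2.0, 2.0, 0.0, 0.0]
twice = fwht(once)  # len(vec) * vec = [4.0, 0.0, 4.0, 0.0]
```

How TQ3_1S combines this rotation with its 3.5-bit code is not specified here; consult the TurboQuant fork linked below for the actual scheme.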

Files

  • Qwen3.5-27B-TQ3_1S.gguf
  • mmproj-BF16.gguf

Runtime Requirement

This model requires the public TurboQuant runtime fork:

  • https://github.com/turbo-tan/llama.cpp-tq3

It will not load correctly on stock llama.cpp or other runtimes that do not include TQ3_1S.

Text-Only Run

```bash
./build/bin/llama-server \
  -m /path/to/Qwen3.5-27B-TQ3_1S.gguf \
  -ngl 99 -c 8192 -np 1 \
  -ctk q8_0 -ctv q8_0 -fa on \
  --cache-ram 0 --no-warmup --jinja \
  --reasoning off --reasoning-budget 0 --reasoning-format deepseek
```
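Once running, `llama-server` exposes an OpenAI-compatible HTTP API (port 8080 is llama.cpp's default; this card does not specify one). A minimal client sketch, where `build_chat_payload` is a hypothetical helper of mine, not part of any library:

```python
import json

def build_chat_payload(prompt, model="Qwen3.5-27B-TQ3_1S", max_tokens=256):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_payload("Summarize GGUF in one sentence.")
body = json.dumps(payload).encode("utf-8")

# To actually send it (requires the server above running locally):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://127.0.0.1:8080/v1/chat/completions",
#       data=body, headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```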

Vision / Image Input

For image input, use the included projector:

```bash
./build/bin/llama-server \
  -m /path/to/Qwen3.5-27B-TQ3_1S.gguf \
  -mm /path/to/mmproj-BF16.gguf \
  -ngl 99 -c 8192 -np 1 \
  -ctk q8_0 -ctv q8_0 -fa on \
  --cache-ram 0 --no-warmup --jinja \
  --reasoning off --reasoning-budget 0 --reasoning-format deepseek \
  --no-mmproj-offload
```

If your frontend says image input is unsupported, it is almost always talking to an older server instance that was started without --mmproj.
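Over the same OpenAI-style API, image input is typically sent as a base64 data URI inside the message content. A sketch of the payload shape (field names follow the OpenAI chat format; whether the fork accepts exactly this shape is an assumption, so check its documentation):

```python
import base64

def build_image_message(image_bytes, question, mime="image/png"):
    """Wrap raw image bytes and a question into one multimodal chat message.

    Assumes OpenAI-style `image_url` content parts with a base64 data URI.
    """
    data_uri = "data:%s;base64,%s" % (
        mime, base64.b64encode(image_bytes).decode("ascii"))
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }

# Placeholder bytes; in practice read a real file with open(path, "rb").read()
msg = build_image_message(b"\x89PNG...", "What is in this image?")
```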

Quality

Gold-standard wiki.test.raw pass (context 512, all 580 chunks):

Format   PPL                Size
Q4_0     7.2431 +/- 0.0482  14.4 GB
TQ3_1S   7.2570 +/- 0.0480  12.9 GB
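To put the table in perspective, the relative differences can be computed directly (numbers taken from the table above):

```python
# Perplexity and file size from the quality table
q4_ppl, tq3_ppl = 7.2431, 7.2570
q4_gb, tq3_gb = 14.4, 12.9

ppl_increase_pct = (tq3_ppl - q4_ppl) / q4_ppl * 100  # quality cost vs Q4_0
size_saving_pct = (q4_gb - tq3_gb) / q4_gb * 100      # storage saved vs Q4_0

print(f"PPL increase: {ppl_increase_pct:.2f}%")  # → PPL increase: 0.19%
print(f"Size saving: {size_saving_pct:.1f}%")    # → Size saving: 10.4%
```

So TQ3_1S costs about 0.2% perplexity relative to Q4_0 while saving roughly 10% of the file size, well within the reported +/- error bars.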

Base Model

  • Qwen/Qwen3.5-27B

License

Same license terms as the base model apply.

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠️ License Unknown: Verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
10.6K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology | 📚 Knowledge Base | ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--ytan2000--qwen3.5-27b-tq3_1s
slug: ytan2000--qwen3.5-27b-tq3_1s
source: huggingface
author: YTan2000
license: MIT
tags: gguf, llama.cpp, qwen, qwen3.5, quantization, turboquant, wht, text-generation, en, base_model:qwen/qwen3.5-27b, base_model:quantized:qwen/qwen3.5-27b, license:mit, endpoints_compatible, region:us, imatrix, conversational, tq3_1s, multimodal, image-text-to-text

âš™ī¸ Technical Specs

architecture
null
params billions
27
context length
4,096
pipeline tag
image-text-to-text
vram gb
21.6
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
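Plugging the card's numbers into that formula gives the listed estimate. A sketch (the 0.75 GB-per-billion-parameters factor is the site's static heuristic, not a measurement, and the helper name is mine):

```python
def estimate_vram_gb(params_b, kv_gb=0.8, os_gb=0.5):
    """Static VRAM heuristic from the card: params * 0.75 + KV cache + OS overhead."""
    return params_b * 0.75 + kv_gb + os_gb

est = estimate_vram_gb(27)
print(f"~{est:.2f} GB estimated VRAM")  # ≈ 21.6 GB, matching the "vram gb" field
```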

📊 Engagement & Metrics

downloads: 10,615
stars: 0
forks: 0

Data indexed from public sources. Updated daily.