🧠 Model
Omnially R1 70b Merged I1 Gguf
by mradermacher
Nexus Index
39.8 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 37
R: Recency 98
Q: Quality 30
Tech Context
70B Params
4K Ctx
Vital Performance
3.7K DL / 30D
0.0%
FNI Score (Audited): 39.8
70B Params
4k Context
3.7K Downloads
H100+ ~55GB Est. VRAM
Model Information Summary
Entity Passport
Registry ID hf-model--mradermacher--omnially-r1-70b-merged-i1-gguf
Provider huggingface
💾 Compute Threshold
~55GB VRAM

* Static estimate for 4-bit quantization. [Multi-GPU / unified memory required]

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__mradermacher__omnially_r1_70b_merged_i1_gguf,
  author = {mradermacher},
  title = {Omnially R1 70b Merged I1 Gguf Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/mradermacher/omnially-r1-70b-merged-i1-gguf}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
mradermacher. (2026). Omnially R1 70b Merged I1 Gguf [Model]. Free2AITools. https://huggingface.co/mradermacher/omnially-r1-70b-merged-i1-gguf

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run hf.co/mradermacher/omnially-r1-70b-merged-i1-gguf
🤗 HF Download
huggingface-cli download mradermacher/omnially-r1-70b-merged-i1-gguf
📦 Install Lib
pip install -U transformers

⚖️ Nexus Index V2.0

39.8
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 37
Recency (R) 98
Quality (Q) 30

💬 Index Insight

FNI V2.0 for Omnially R1 70b Merged I1 Gguf: Semantic (S:50), Authority (A:0), Popularity (P:37), Recency (R:98), Quality (Q:30).

Free2AITools Nexus Index

---


About

Weighted/imatrix quants of https://huggingface.co/rsellman/omnially-r1-70b-merged.

For a convenient overview and download list, visit our model page for this model.

Static quants are available at https://huggingface.co/mradermacher/omnially-r1-70b-merged-GGUF.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
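
Older multi-part GGUF releases (the TheBloke-era convention) are plain byte splits, so joining them is simple binary concatenation, equivalent to `cat part1 part2 > model.gguf`. A minimal Python sketch of that join, with hypothetical part file names; note that shards produced by the newer `llama-gguf-split` tool should instead be loaded directly by llama.cpp without merging:

```python
def concat_parts(part_paths, out_path):
    """Join byte-split GGUF parts into one file.

    Parts must be passed in split order (part1of2, part2of2, ...).
    This is only valid for plain binary splits, not llama-gguf-split shards.
    """
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                # Copy in 1 MiB chunks so multi-GB parts never sit fully in RAM.
                while chunk := f.read(1 << 20):
                    out.write(chunk)

# Usage (hypothetical file names):
# concat_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```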

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

Link  Type         Size/GB  Notes
GGUF  imatrix          0.1  imatrix file (for creating your own quants)
GGUF  i1-IQ1_S        15.4  for the desperate
GGUF  i1-IQ2_XXS      19.2
GGUF  i1-Q2_K_S       24.6  very low quality
GGUF  i1-Q2_K         26.5  IQ3_XXS probably better
GGUF  i1-Q3_K_S       31.0  IQ3_XS probably better
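
As a rough sanity check on the sizes above, file size divided by parameter count gives the effective bits per weight of each quant. A small helper, assuming decimal gigabytes (1 GB = 10^9 bytes) and the 70B parameter count listed on this page:

```python
def bits_per_weight(size_gb: float, params_b: float = 70.0) -> float:
    """Effective bits per weight: total bits in the file over parameter count.

    Assumes decimal units on both sides, so the 1e9 factors cancel
    and this reduces to size_gb * 8 / params_b.
    """
    return (size_gb * 8e9) / (params_b * 1e9)

# The 26.5 GB i1-Q2_K file works out to roughly 3.0 bits per weight,
# i.e. a ~2-3 bit quant plus embedding/metadata overhead.
```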

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: ikawrakow's quant-type comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

⚠️ Incomplete Data

Some information about this model is not available. Use with caution: verify details from the original source before relying on this data.


📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License unknown: verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
3.7K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--mradermacher--omnially-r1-70b-merged-i1-gguf
slug: mradermacher--omnially-r1-70b-merged-i1-gguf
source: huggingface
author: mradermacher
license: (none listed)
tags: transformers, gguf, en, base_model:rsellman/omnially-r1-70b-merged, endpoints_compatible, region:us, imatrix, conversational

⚙️ Technical Specs

architecture: null
params (billions): 70
context length: 4,096
pipeline tag: (none listed)
vram gb: 55
vram is estimated: true
vram formula: VRAM ≈ (params * 0.75) + 2GB (KV) + 0.5GB (OS)
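
Plugging the listed parameter count into that formula reproduces the ~55GB figure shown elsewhere on this page. A minimal sketch, with coefficients taken verbatim from the formula (a static estimate, not a measurement):

```python
def estimate_vram_gb(params_b: float, kv_gb: float = 2.0, os_gb: float = 0.5) -> float:
    """Static VRAM estimate: params_b * 0.75 + 2 GB (KV cache) + 0.5 GB (OS).

    The 0.75 GB-per-billion-parameter factor and the fixed KV/OS terms
    come from the page's own formula; real usage varies with quant type,
    context length, and batch size.
    """
    return params_b * 0.75 + kv_gb + os_gb

# For this 70B model: 70 * 0.75 + 2 + 0.5 = 55.0 GB.
```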

📊 Engagement & Metrics

downloads: 3,741
stars: 0
forks: 0

Data indexed from public sources. Updated daily.