🧠 Model

Qwen2.5 0.5b Math

by Sepolian (hf-model--sepolian--qwen2.5-0.5b-math)

Nexus Index: 39.9 (Top 100%)
Semantic (S) 50 · Authority (A) 0 · Popularity (P) 23 · Recency (R) 100 · Quality (Q) 65

Tech Context: 0.5B params · 4,096-token context
Vital Performance: 681 downloads / 30 days
Est. VRAM: ~2GB (fits an 8GB GPU)
License: Other (restricted)
Model Information Summary
Entity Passport
Registry ID hf-model--sepolian--qwen2.5-0.5b-math
License Other
Provider huggingface
💾 Compute Threshold

~1.7GB VRAM

* Static estimation for 4-bit quantization.

📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__sepolian__qwen2.5_0.5b_math,
  author = {Sepolian},
  title = {Qwen2.5 0.5b Math Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/sepolian/qwen2.5-0.5b-math}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Sepolian. (2026). Qwen2.5 0.5b Math [Model]. Free2AITools. https://huggingface.co/sepolian/qwen2.5-0.5b-math

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run qwen2.5-0.5b-math
🤗 HF Download
huggingface-cli download sepolian/qwen2.5-0.5b-math
📦 Install Lib
pip install -U transformers

âš–ī¸ Nexus Index V2.0

39.9
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 23
Recency (R) 100
Quality (Q) 65

đŸ’Ŧ Index Insight

FNI V2.0 for Qwen2.5 0.5b Math: Semantic (S:50), Authority (A:0), Popularity (P:23), Recency (R:100), Quality (Q:65).

Free2AITools Nexus Index

---


Model Card for Qwen2.5-0.5B-SFT-math-mix-35k

This model is a fine-tuned version of Qwen/Qwen2.5-0.5B.

Version

0.1

A mix of small subsets from multiple math-domain datasets, about 35k examples in total.

| Component | Raw | Filtered | Removed | Final | Avg Tok | Tot Tok |
|---|---|---|---|---|---|---|
| A. GSM8K (train) | 7,473 | 7,473 | 0 | 7,473 | 181 | 1,355,305 |
| B. MathInstruct (filtered) | 262,039 | 189,889 | 72,150 | 10,000 | 256 | 2,555,928 |
| C. DeepMath-103K (Lvl 5-6) | 103,022 | 8,132 | 94,890 | 8,132 | 1,717 | 13,962,959 |
| D. Big-Math-RL-Verified | 251,122 | 210,659 | 40,463 | 4,000 | 74 | 294,586 |
| E. camel-ai/math | 50,000 | 6,880 | 43,120 | 3,000 | 523 | 1,567,709 |
| F. camel-ai/physics | 20,000 | 20,000 | 0 | 3,000 | 541 | 1,622,913 |
| TOTAL | | | | 35,605 | | 21,359,400 |
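The Final and Tot Tok columns do sum to the reported totals. A minimal verification sketch (numbers copied straight from the table; the dict keys are just labels):

```python
# Sanity-check the mixture table's reported totals.
final_counts = {
    "GSM8K": 7_473,
    "MathInstruct": 10_000,
    "DeepMath-103K": 8_132,
    "Big-Math-RL-Verified": 4_000,
    "camel-ai/math": 3_000,
    "camel-ai/physics": 3_000,
}
total_tokens = {
    "GSM8K": 1_355_305,
    "MathInstruct": 2_555_928,
    "DeepMath-103K": 13_962_959,
    "Big-Math-RL-Verified": 294_586,
    "camel-ai/math": 1_567_709,
    "camel-ai/physics": 1_622_913,
}
assert sum(final_counts.values()) == 35_605        # matches TOTAL Final
assert sum(total_tokens.values()) == 21_359_400    # matches TOTAL Tot Tok
print(sum(final_counts.values()), sum(total_tokens.values()))
```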

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 16
  • optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 1.0
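These values fit together as follows, assuming the usual Hugging Face Trainer relationship between per-device batch size, device count, and gradient accumulation; the `lr_at` helper sketches a generic cosine-with-warmup shape, not the Trainer's exact implementation:

```python
import math

# Effective batch size implied by the listed hyperparameters.
train_batch_size = 2             # per-device micro-batch
num_devices = 2                  # multi-GPU
gradient_accumulation_steps = 8
total = train_batch_size * num_devices * gradient_accumulation_steps
assert total == 32               # matches the reported total_train_batch_size

def lr_at(step, total_steps, base_lr=5e-06, warmup=100):
    """Linear warmup to base_lr, then cosine decay toward 0."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(50, 1000))   # mid-warmup: half of base_lr
print(lr_at(100, 1000))  # warmup complete: full base_lr
```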

Training results

Framework versions

  • Transformers 5.2.0
  • Pytorch 2.11.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.2

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License unknown: verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
681 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--sepolian--qwen2.5-0.5b-math
slug: sepolian--qwen2.5-0.5b-math
source: huggingface
author: Sepolian
license: Other
tags: transformers, safetensors, qwen2, text-generation, llama-factory, full, generated_from_trainer, conversational, base_model:qwen/qwen2.5-0.5b, base_model:finetune:qwen/qwen2.5-0.5b, license:other, text-generation-inference, endpoints_compatible, region:us, reinforcement-learning, math

âš™ī¸ Technical Specs

architecture
null
params billions
0.5
context length
4,096
pipeline tag
reinforcement-learning
vram gb
1.7
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
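The stated formula can be written as a tiny helper. This is a minimal sketch of a static estimate only, assuming the listed constants (0.75 GB per billion parameters, 0.8GB KV cache, 0.5GB OS/runtime overhead) are all in GB:

```python
def estimate_vram_gb(params_billions: float, kv_gb: float = 0.8, os_gb: float = 0.5) -> float:
    """Static estimate per the listed formula:
    VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)."""
    return params_billions * 0.75 + kv_gb + os_gb

# For this 0.5B model: 0.375 + 0.8 + 0.5 = 1.675, i.e. the ~1.7GB figure on this page.
print(estimate_vram_gb(0.5))
```

Actual usage will differ with quantization, context length, and batch size, as the limitations section notes.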

📊 Engagement & Metrics

downloads: 681
stars: 0
forks: 0

Data indexed from public sources. Updated daily.