🧠 LFM2-24B-A2B-Apex GGUF model by mudler
⭐ 39.3
🔬 Technical Deep Dive
🔄 Daily sync (03:00 UTC)
AI Summary: Based on Hugging Face metadata. Not a recommendation.
🛡️ Model Transparency Report
Technical metadata sourced from upstream repositories.
📋 Identity & Source
- id: hf-model--mudler--lfm2-24b-a2b-apex-gguf
- slug: mudler--lfm2-24b-a2b-apex-gguf
- source: huggingface
- author: mudler
- license: Other
- tags: gguf, quantized, apex, moe, mixture-of-experts, liquidai, lfm2, hybrid, base_model:liquidai/lfm2-24b-a2b, base_model:quantized:liquidai/lfm2-24b-a2b, license:other, endpoints_compatible, region:us, conversational
⚙️ Technical Specs
- architecture: null
- params (billions): 24
- context length: 4,096
- pipeline tag: (not set)
- vram (GB): 19.3
- vram is estimated: true
- vram formula: VRAM ≈ (params × 0.75) + 0.8 GB (KV cache) + 0.5 GB (OS overhead)
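The VRAM heuristic above can be sketched in Python. This is a minimal illustration of the listed formula only; the function name `estimate_vram_gb` and the parameter names are ours, and the 0.75 GB-per-billion-parameters factor comes from this listing's formula, not from any official sizing tool:

```python
def estimate_vram_gb(params_billions: float,
                     gb_per_billion_params: float = 0.75,
                     kv_cache_gb: float = 0.8,
                     os_overhead_gb: float = 0.5) -> float:
    """Rough VRAM estimate per the listing's heuristic:
    VRAM ≈ (params × 0.75) + 0.8 GB (KV cache) + 0.5 GB (OS overhead).
    """
    return (params_billions * gb_per_billion_params
            + kv_cache_gb
            + os_overhead_gb)

# For the 24B entry above: 24 × 0.75 + 0.8 + 0.5 = 19.3 GB
print(round(estimate_vram_gb(24), 1))
```

This reproduces the 19.3 GB figure shown in the specs; treat it as a ballpark, since actual usage depends on the quantization level of the chosen GGUF file and the configured context length.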
📊 Engagement & Metrics
- downloads: 2,683
- stars: 0
- forks: 0
Data indexed from public sources. Updated daily.