🧠 Model

Gemma 3 12b It Abliterated V2 Gguf
by mlabonne

Nexus Index (FNI): 36.8 · Top 100%
Semantic (S): 50 · Authority (A): 0 · Popularity (P): 44 · Recency (R): 51 · Quality (Q): 50

Tech Context: 12B params · 4,096-token context · 9.9K downloads (30 days) · ~10.3GB est. VRAM · Gemma license (restricted)

Model Information Summary

Entity Passport
Registry ID: hf-model--mlabonne--gemma-3-12b-it-abliterated-v2-gguf
License: Gemma
Provider: huggingface
💾 Compute Threshold

~10.3GB VRAM (static estimate for 4-bit quantization)

📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__mlabonne__gemma_3_12b_it_abliterated_v2_gguf,
  author = {mlabonne},
  title = {Gemma 3 12b It Abliterated V2 Gguf Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2-gguf}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
mlabonne. (2026). Gemma 3 12b It Abliterated V2 Gguf [Model]. Free2AITools. https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2-gguf

🔬 Technical Deep Dive


Quick Commands

🦙 Ollama Run
ollama run hf.co/mlabonne/gemma-3-12b-it-abliterated-v2-gguf
🤗 HF Download
huggingface-cli download mlabonne/gemma-3-12b-it-abliterated-v2-gguf
📦 Install Lib
pip install -U transformers

âš–ī¸ Nexus Index V2.0

36.8
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 44
Recency (R) 51
Quality (Q) 50

đŸ’Ŧ Index Insight

FNI V2.0 for Gemma 3 12b It Abliterated V2 Gguf: Semantic (S:50), Authority (A:0), Popularity (P:44), Recency (R:51), Quality (Q:50).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live
---

💎 Gemma 3 12B IT Abliterated


Gemma 3 Abliterated GGUF: 1B • 4B • 12B • 27B

This is an uncensored version of google/gemma-3-12b-it created with a new abliteration technique. See this article to learn more about abliteration.

This is a new, improved version that targets refusals with enhanced accuracy.

I recommend using these generation parameters: temperature=1.0, top_k=64, top_p=0.95.
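To make the recommended parameters concrete, here is a minimal NumPy sketch of how temperature, top-k, and top-p (nucleus) filtering combine during sampling. This is an illustration of the general decoding procedure, not code from the model card; the function name and structure are my own.

```python
import numpy as np

def sample_logits(logits, temperature=1.0, top_k=64, top_p=0.95, rng=None):
    """Apply temperature, top-k, then top-p (nucleus) filtering; sample one token id."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature

    # Top-k: keep only the k highest-scoring tokens.
    if top_k and top_k < logits.size:
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)

    # Softmax over the surviving logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Top-p: keep the smallest set of tokens whose cumulative mass reaches top_p.
    order = np.argsort(probs)[::-1]
    cdf = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cdf, top_p) + 1]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    mask /= mask.sum()

    return int(rng.choice(probs.size, p=mask))
```

In practice, inference runtimes such as Transformers or llama.cpp expose these same three knobs (`temperature`, `top_k`, `top_p`) directly, so the values above can be passed through unchanged.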

âšĄī¸ Quantization

âœ‚ī¸ Abliteration


The refusal direction is computed by comparing the residual streams between target (harmful) and baseline (harmless) samples. The hidden states of target modules (e.g., o_proj) are orthogonalized to subtract this refusal direction with a given weight factor. These weight factors follow a normal distribution with a certain spread and peak layer. Modules can be iteratively orthogonalized in batches, or the refusal direction can be accumulated to save memory.
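The two core operations described above — computing a refusal direction as a difference of mean residual-stream activations, and orthogonalizing a module's weights against it with a weight factor — can be sketched schematically in NumPy. This is a simplified illustration of the idea, not the author's implementation; function names and shapes are assumptions.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Difference of mean residual-stream activations, normalized to unit length.

    Each input is an (n_samples, d_model) array of hidden states.
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def orthogonalize(W, direction, weight=1.0):
    """Remove the component of W's output along `direction`, scaled by a weight factor.

    With weight=1.0, the orthogonalized module can no longer write
    anything along the refusal direction: direction @ W' == 0.
    """
    proj = np.outer(direction, direction) @ W  # rank-1 projection of W onto the direction
    return W - weight * proj
```

In the actual technique the `weight` values vary per layer (following a normal distribution around a peak layer), and the projection is applied to target modules such as `o_proj` rather than to an arbitrary matrix.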

Finally, I used a hybrid evaluation with a dedicated test set to calculate the acceptance rate. This uses both a dictionary approach and NousResearch/Minos-v1. The goal is to obtain an acceptance rate >90% and still produce coherent outputs.
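The dictionary half of that hybrid evaluation can be sketched as a simple substring check over a list of refusal phrases. The marker list below is hypothetical — the author's actual dictionary and test set are not published here — but it shows how an acceptance rate would be computed.

```python
# Hypothetical refusal markers; the real dictionary is not part of this document.
REFUSAL_MARKERS = ["i cannot", "i can't", "i'm sorry", "as an ai", "i won't"]

def is_refusal(text):
    """Flag an output as a refusal if it contains any known refusal phrase."""
    t = text.lower()
    return any(marker in t for marker in REFUSAL_MARKERS)

def acceptance_rate(outputs):
    """Fraction of model outputs not flagged as refusals (the stated goal is > 0.90)."""
    if not outputs:
        return 0.0
    accepted = sum(1 for o in outputs if not is_refusal(o))
    return accepted / len(outputs)
```

The classifier model (NousResearch/Minos-v1) covers the cases a fixed dictionary misses, such as paraphrased or partial refusals.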

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ Gemma license: verify licensing terms before commercial use.

Social Proof

HuggingFace Hub: 9.9K downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--mlabonne--gemma-3-12b-it-abliterated-v2-gguf
slug: mlabonne--gemma-3-12b-it-abliterated-v2-gguf
source: huggingface
author: mlabonne
license: Gemma
tags: transformers, gguf, image-text-to-text, base_model:mlabonne/gemma-3-12b-it-abliterated-v2, license:gemma, endpoints_compatible, region:us, conversational

âš™ī¸ Technical Specs

architecture
null
params billions
12
context length
4,096
pipeline tag
image-text-to-text
vram gb
10.3
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
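The VRAM formula above can be checked with a few lines of Python. The function name is my own; the constants come straight from the formula (0.75 GB per billion parameters at 4-bit quantization, plus fixed KV-cache and OS overheads).

```python
def estimate_vram_gb(params_b, gb_per_b_params=0.75, kv_gb=0.8, os_gb=0.5):
    """Static VRAM estimate: quantized weights + KV cache + OS overhead."""
    return params_b * gb_per_b_params + kv_gb + os_gb

# For this 12B model: 12 * 0.75 + 0.8 + 0.5 = 10.3 GB
```

This matches the 10.3GB figure reported in the specs; it is a static estimate, so actual usage still varies with context length and batch size.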

📊 Engagement & Metrics

downloads: 9,907
stars: 0
forks: 0

Data indexed from public sources. Updated daily.