🧠 Model

Xtts V2
by coqui · ID: hf-model--coqui--xtts-v2
Downloads: 6.4M · FNI Rank: 37 · Percentile: Top 0% · Activity: → 0.0%
ⓍTTS is a voice generation model that lets you clone voices into different languages using just a quick 6-second audio clip, with no need for hours of training data. This is the same or similar model to what powers Coqui Studio and Coqui API.

FNI Score: 37 (Audited)
Params: – (Tiny) · Context: –
Downloads: 6.4M (Hot)
Model Information Summary

Entity Passport
Registry ID: hf-model--coqui--xtts-v2
Provider: huggingface
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__coqui__xtts_v2,
  author = {coqui},
  title = {Xtts V2 Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/coqui/XTTS-v2}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
coqui. (2026). Xtts V2 [Model]. Free2AITools. https://huggingface.co/coqui/XTTS-v2

🔬 Technical Deep Dive

Full Specifications

⚡ Quick Commands

🤗 HF Download
huggingface-cli download coqui/XTTS-v2

βš–οΈ Free2AI Nexus Index

Methodology β†’ πŸ“˜ What is FNI?
37.0
Top 0% Overall Impact
πŸ”₯ Popularity (P) 0
πŸš€ Velocity (V) 0
πŸ›‘οΈ Credibility (C) 0
πŸ”§ Utility (U) 0
Nexus Verified Data

πŸ’¬ Why this score?

The Nexus Index for Xtts V2 aggregates Popularity (P:0), Velocity (V:0), and Credibility (C:0). The Utility score (U:0) represents deployment readiness, context efficiency, and structural reliability within the Nexus ecosystem.

Data Verified πŸ• Last Updated: Not calculated
Free2AI Nexus Index | Fair Β· Transparent Β· Explainable | Full Methodology
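The pillar aggregation described above can be sketched as a weighted mean. This is a hypothetical illustration only: the function name and the equal weights are assumptions, since the actual FNI weighting is defined in the linked methodology page, and this card's pillar values are all zero.

```python
def nexus_index(p, v, c, u, weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical FNI aggregation: weighted mean of the four
    pillar scores (Popularity, Velocity, Credibility, Utility)."""
    return sum(w * s for w, s in zip(weights, (p, v, c, u)))

# With this card's displayed pillar values (all zero),
# any weighting would yield 0.0:
print(nexus_index(0, 0, 0, 0))  # 0.0
```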
---


---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
- text: "Once when I was six years old I saw a magnificent picture"
---

ⓍTTS

ⓍTTS is a Voice generation model that lets you clone voices into different languages by using just a quick 6-second audio clip. There is no need for an excessive amount of training data that spans countless hours.

This is the same or similar model to what powers Coqui Studio and Coqui API.

Features

  • Supports 17 languages.
  • Voice cloning with just a 6-second audio clip.
  • Emotion and style transfer by cloning.
  • Cross-language voice cloning.
  • Multi-lingual speech generation.
  • 24 kHz sampling rate.
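Since cloning works from roughly a 6-second reference clip, it can help to validate the clip length before inference. A minimal sketch using only the Python standard library; the helper name is illustrative, not part of the Coqui API:

```python
import wave

def clip_is_long_enough(path: str, min_seconds: float = 6.0) -> bool:
    """Return True if the reference WAV is at least ~6 s long,
    the clip length this card quotes for voice cloning."""
    with wave.open(path, "rb") as f:
        duration = f.getnframes() / f.getframerate()
    return duration >= min_seconds
```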

Updates over XTTS-v1

  • 2 new languages: Hungarian and Korean.
  • Architectural improvements for speaker conditioning.
  • Enables the use of multiple speaker references and interpolation between speakers.
  • Stability improvements.
  • Better prosody and audio quality across the board.
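Speaker interpolation amounts to blending the conditioning vectors derived from several reference clips. The toy sketch below illustrates the idea in plain Python; the function and names are mine, not the actual coqui/TTS API.

```python
def interpolate_speakers(embeddings, weights=None):
    """Weighted average of equal-length speaker embedding vectors,
    illustrating how multiple references can be blended."""
    n = len(embeddings)
    weights = weights or [1.0 / n] * n  # default: equal blend
    dim = len(embeddings[0])
    return [sum(w * e[i] for w, e in zip(weights, embeddings))
            for i in range(dim)]

# Halfway between two toy 2-D "speakers":
print(interpolate_speakers([[1.0, 0.0], [0.0, 1.0]]))  # [0.5, 0.5]
```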

Languages

XTTS-v2 supports 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
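Because the language is passed to the model as a code string, a small guard against typos can fail fast before inference. The set mirrors the list above; the helper itself is illustrative, not part of the Coqui API:

```python
# The 17 language codes listed above
SUPPORTED_LANGS = {
    "en", "es", "fr", "de", "it", "pt", "pl", "tr", "ru",
    "nl", "cs", "ar", "zh-cn", "ja", "hu", "ko", "hi",
}

def check_language(code: str) -> str:
    """Raise early instead of handing an unsupported code to the model."""
    if code not in SUPPORTED_LANGS:
        raise ValueError(f"XTTS-v2 does not support language {code!r}")
    return code
```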

Stay tuned as we continue to add support for more languages. If you have any language requests, feel free to reach out!

Code

The code-base supports inference and fine-tuning.

Demo Spaces

  • XTTS Space: see how the model performs on the supported languages, and try it with your own reference audio or microphone input.
  • XTTS Voice Chat with Mistral or Zephyr: experience streaming voice chat with Mistral 7B Instruct or Zephyr 7B Beta.
πŸΈπŸ’¬ CoquiTTS coqui/TTS on Github
πŸ’Ό Documentation ReadTheDocs
πŸ‘©β€πŸ’» Questions GitHub Discussions
πŸ—― Community Discord

License

This model is licensed under the Coqui Public Model License (CPML). There's a lot that goes into a license for generative models; you can read more of the origin story of the CPML here.

Contact

Come and join in our 🐸Community. We're active on Discord and Twitter.
You can also mail us at info@coqui.ai.

Using 🐸TTS API:

from TTS.api import TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

# generate speech by cloning a voice using default settings
tts.tts_to_file(
    text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
    file_path="output.wav",
    speaker_wav="/path/to/target/speaker.wav",
    language="en",
)

Using 🐸TTS Command line:

 tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
     --text "Bugün okula gitmek istemiyorum." \
     --speaker_wav /path/to/target/speaker.wav \
     --language_idx tr \
     --use_cuda true

Using the model directly:

from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
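The `outputs` dict above contains the synthesized waveform under the `"wav"` key. Assuming it is a sequence of floats in [-1, 1] at the model's 24 kHz rate, it can be written to disk with only the standard library; this helper is a sketch of mine, not part of coqui/TTS:

```python
import struct
import wave

def write_wav(path, samples, sample_rate=24_000):
    """Write mono float samples in [-1, 1] as 16-bit PCM."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        ))

# write_wav("direct_output.wav", outputs["wav"])  # with `outputs` from above
```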

πŸ“ Limitations & Considerations

  • β€’ Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • β€’ VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • β€’ FNI scores are relative rankings and may change as new models are added.
  • β€’ Source: Unknown

Social Proof

HuggingFace Hub
3.2K Likes · 6.4M Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

πŸ›‘οΈ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

πŸ†” Identity & Source

id
hf-model--coqui--xtts-v2
source
huggingface
author
coqui
tags
coquitext-to-speechlicense:otherregion:us

βš™οΈ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag
text-to-speech

πŸ“Š Engagement & Metrics

likes
3,218
downloads
6,395,645

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)