@misc{hf_model__poltextlab__xlm_roberta_large_pooled_cap_v3,
author = {poltextlab},
title = {Xlm Roberta Large Pooled Cap V3 Model},
year = {2026},
howpublished = {\url{https://huggingface.co/poltextlab/xlm-roberta-large-pooled-cap-v3}},
note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
poltextlab. (2026). Xlm Roberta Large Pooled Cap V3 [Model]. Free2AITools. https://huggingface.co/poltextlab/xlm-roberta-large-pooled-cap-v3
Because access to the model is gated, you must pass the token parameter when loading it. With earlier versions of the Transformers package, use the use_auth_token parameter instead.
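The token/use_auth_token switch can be wrapped in a small compatibility helper. This is a sketch, not part of the model card: the version cutoff (4.26) used below is an illustrative assumption, so check the release notes of your installed Transformers version.

```python
def auth_kwargs(token: str, transformers_version: str) -> dict:
    """Return the keyword argument dict for gated-model authentication.

    Newer transformers releases accept `token`; older ones only accept
    the deprecated `use_auth_token`. The 4.26 cutoff below is an
    assumption for illustration -- verify against your release notes.
    """
    major, minor = (int(p) for p in transformers_version.split(".")[:2])
    if (major, minor) >= (4, 26):
        return {"token": token}
    return {"use_auth_token": token}

# Usage: pipeline(..., **auth_kwargs("hf_...", transformers.__version__))
```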
How to use the model
python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="",
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
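The pipeline call above returns a list of predictions, which (assuming the text-classification pipeline's default output shape of {"label": str, "score": float} dicts) can be reduced to the top class with a small helper. The labels in the example below are mocked, not real CAP output:

```python
def top_prediction(results):
    """Return (label, score) of the highest-scoring class.

    Assumes the text-classification pipeline's output format:
    a list of {"label": str, "score": float} dicts.
    """
    best = max(results, key=lambda r: r["score"])
    return best["label"], best["score"]

# Example with mocked pipeline output (labels are illustrative only):
mock = [
    {"label": "Health", "score": 0.91},
    {"label": "Macroeconomics", "score": 0.04},
]
print(top_prediction(mock))  # → ('Health', 0.91)
```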
Inference platform
This model is used by the CAP Babel Machine, a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.
Debugging and issues
This architecture uses the sentencepiece tokenizer. With transformers versions earlier than 4.27, you need to install sentencepiece manually.
If you encounter a RuntimeError when loading the model with from_pretrained(), passing ignore_mismatched_sizes=True should resolve the issue.
⚠️ Incomplete Data
Some information about this model is not available.
Use with caution: verify details against the original source before relying on this data.