We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
How to use the model
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-german-media-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token=""  # the model is gated: paste your Hugging Face access token here
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from model output labels to CAP codes is the following:
We include a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes; it indicates that the given text contains no relevant policy content. We use the label 999 for these cases.
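Continuing from the pipeline example above, a minimal post-processing sketch for turning predictions into CAP codes, assuming the default LABEL_&lt;n&gt; naming and a LABEL_TO_CAP dictionary that you fill in from the translation table (the entries below are placeholders, not the actual mapping):

```python
# Hypothetical post-processing: map the pipeline's numeric label to a CAP code.
# Populate LABEL_TO_CAP from the translation table; these entries are placeholders.
LABEL_TO_CAP = {0: 1, 21: 999}

result = pipe(text)[0]                         # e.g. {'label': 'LABEL_17', 'score': 0.87}
label_id = int(result["label"].split("_")[-1]) # assumes default 'LABEL_<n>' naming
cap_code = LABEL_TO_CAP.get(label_id)          # None while the mapping is incomplete
```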
Gated access
Because access to the model is gated, you must pass the token parameter when loading it. In earlier versions of the Transformers package, you may need to use the use_auth_token parameter instead.
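A minimal sketch of both loading variants (the token string is a placeholder, not a real credential):

```python
from transformers import AutoModelForSequenceClassification

# Recent Transformers releases: authenticate with the `token` argument.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-german-media-cap-v3",
    token="<your_hf_read_only_token>",
)

# Older releases: the same argument was named `use_auth_token`.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-german-media-cap-v3",
    use_auth_token="<your_hf_read_only_token>",
)
```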
Model performance
The model was evaluated on a test set of 1259 examples (10% of the available data).
Model accuracy is 0.62.
| label | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.57 | 0.64 | 0.60 | 95 |
| 1 | 0.21 | 0.07 | 0.10 | 44 |
| 2 | 0.63 | 0.60 | 0.61 | 25 |
| 3 | 0.36 | 0.48 | 0.41 | 21 |
| 4 | 0.72 | 0.42 | 0.53 | 31 |
| 5 | 0.65 | 0.92 | 0.76 | 12 |
| 6 | 0.33 | 0.27 | 0.30 | 15 |
| 7 | 0.69 | 0.64 | 0.62 | 14 |
| 8 | 0.72 | 0.60 | 0.66 | 35 |
| 9 | 0.66 | 0.69 | 0.68 | 59 |
| 10 | 0.42 | 0.33 | 0.37 | 39 |
| 11 | 0.56 | 0.33 | 0.42 | 15 |
| 12 | 0.67 | 0.22 | 0.33 | 9 |
| 13 | 0.51 | 0.38 | 0.44 | 71 |
| 14 | 0.70 | 0.68 | 0.69 | 151 |
| 15 | 0.71 | 0.52 | 0.60 | 23 |
| 16 | 0.33 | 0.18 | 0.23 | 17 |
| 17 | 0.60 | 0.70 | 0.64 | 254 |
| 18 | 0.62 | 0.74 | 0.68 | 252 |
| 19 | 0.00 | 0.00 | 0.00 | 3 |
| 20 | 0.64 | 0.41 | 0.50 | 17 |
| 21 | 0.90 | 1.00 | 0.95 | 57 |
| accuracy | 0.62 | 0.62 | 0.62 | 1259 |
| macro avg | 0.55 | 0.49 | 0.51 | 1259 |
| weighted avg | 0.61 | 0.62 | 0.60 | 1259 |
Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters (a configuration sketch follows the list):
Number of Training Epochs: 10
Batch Size: 8
Learning Rate: 5e-06
Early Stopping: enabled with a patience of 2 epochs
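A minimal training sketch mirroring these settings, assuming the Hugging Face Trainer with EarlyStoppingCallback; train_dataset and eval_dataset are placeholders, and the actual training script is not published in this card:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
    EarlyStoppingCallback,
)

# 21 CAP major topic codes plus the 'None' class -> 22 labels.
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=22
)

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-german-media-cap-v3",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    learning_rate=5e-6,
    eval_strategy="epoch",        # `evaluation_strategy` in older Transformers
    save_strategy="epoch",
    load_best_model_at_end=True,  # required for early stopping
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: your tokenized CAP-coded data
    eval_dataset=eval_dataset,    # placeholder: held-out evaluation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```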
Inference platform
This model is used by the CAP Babel Machine, an open-source, free natural language processing tool designed to simplify and speed up projects for comparative research.
Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (from any domain or language), either by email at poltextlab{at}poltextlab{dot}com or through the CAP Babel Machine.
Debugging and issues
This architecture uses the sentencepiece tokenizer. To run the model with Transformers versions earlier than 4.27, you need to install sentencepiece manually.
If you encounter a RuntimeError when loading the model using the from_pretrained() method, adding ignore_mismatched_sizes=True should solve the issue.
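A combined sketch of both workarounds (the token string is again a placeholder):

```python
# pip install sentencepiece   # required before transformers==4.27

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-german-media-cap-v3",
    token="<your_hf_read_only_token>",
    ignore_mismatched_sizes=True,  # works around the RuntimeError described above
)
```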