
PhoBERT VSA Restaurant 8k

by pqthinh232
Model Information Summary
Entity Passport
Registry ID hf-model--pqthinh232--phobert-vsa-restaurant-8k
License MIT
Provider huggingface
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__pqthinh232__phobert_vsa_restaurant_8k,
  author = {pqthinh232},
  title = {Phobert Vsa Restaurant 8k Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/pqthinh232/phobert-vsa-restaurant-8k}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
pqthinh232. (2026). Phobert Vsa Restaurant 8k [Model]. Free2AITools. https://huggingface.co/pqthinh232/phobert-vsa-restaurant-8k

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download
huggingface-cli download pqthinh232/phobert-vsa-restaurant-8k
📦 Install Library
pip install -U transformers
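After downloading, the checkpoint can be loaded for inference. A minimal sketch, assuming the `transformers` text-classification pipeline; the lazy import and the sample sentence are illustrative choices, and the label set returned comes from the model's own config, which is not documented on this card:

```python
MODEL_ID = "pqthinh232/phobert-vsa-restaurant-8k"

def classify(texts):
    """Run the fine-tuned classifier over a list of strings.

    The pipeline is built lazily so importing this sketch does not require
    transformers to be installed or the checkpoint to be downloaded.
    """
    from transformers import pipeline
    clf = pipeline("text-classification", model=MODEL_ID)
    return clf(texts)

# Usage (requires network access to fetch the checkpoint):
#   classify(["Đồ ăn ngon, phục vụ nhiệt tình!"])  # illustrative Vietnamese review
```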

⚖️ Nexus Index V2.0

39.6
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 6
Recency (R) 97
Quality (Q) 65

---


phobert-vsa-restaurant-8k

This model is a fine-tuned version of vinai/phobert-base; the training dataset is not specified in the model card (listed as "None"). It achieves the following results on the evaluation set:

  • Loss: 0.2661
  • Accuracy: 0.9024
  • F1: 0.9019
  • Precision: 0.9017
  • Recall: 0.9024
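The identical Accuracy and Recall values suggest support-weighted averaging, where weighted recall reduces exactly to plain accuracy. A minimal pure-Python sketch of weighted precision/recall/F1; the toy labels below are illustrative, not the model's actual label set:

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Support-weighted precision, recall, and F1.

    Each per-class metric is weighted by that class's share of the true
    labels; weighted recall therefore equals plain accuracy, matching the
    card's identical Accuracy and Recall values."""
    support = Counter(y_true)
    n = len(y_true)
    precision = recall = f1 = 0.0
    for label, count in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        pred_pos = sum(1 for p in y_pred if p == label)
        p_l = tp / pred_pos if pred_pos else 0.0
        r_l = tp / count
        f_l = 2 * p_l * r_l / (p_l + r_l) if (p_l + r_l) else 0.0
        w = count / n
        precision += w * p_l
        recall += w * r_l
        f1 += w * f_l
    return precision, recall, f1

# Toy example (hypothetical sentiment labels, not from this model):
y_true = ["pos", "pos", "neg", "neg", "neu"]
y_pred = ["pos", "neg", "neg", "neg", "neu"]
p, r, f = weighted_prf(y_true, y_pred)
```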

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 5
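These settings map directly onto the Hugging Face `Trainer` API. A minimal configuration sketch under current `transformers` conventions; `output_dir` and the exact `optim` string are assumptions, not taken from the original training script:

```python
from transformers import TrainingArguments

# Sketch reproducing the hyperparameters listed above.
args = TrainingArguments(
    output_dir="phobert-vsa-restaurant-8k",  # placeholder path (assumption)
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",   # ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```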

Training results

Training Loss  Epoch  Step  Validation Loss  Accuracy  F1      Precision  Recall
0.3904         1.0    400   0.3558           0.8473    0.8525  0.8656     0.8473
0.2636         2.0    800   0.3491           0.8773    0.8766  0.8766     0.8773
0.1570         3.0    1200  0.4544           0.8623    0.8601  0.8604     0.8623
0.1329         4.0    1600  0.4998           0.8686    0.8699  0.8718     0.8686
0.0893         5.0    2000  0.5671           0.8648    0.8651  0.8659     0.8648

Framework versions

  • Transformers 5.0.0
  • Pytorch 2.10.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.2

⚠️ Incomplete Data

Some information about this model is not available. Use with caution: verify details against the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠️ License: the registry lists MIT, but verify licensing terms at the original source before commercial use.

Social Proof

HuggingFace Hub
66 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--pqthinh232--phobert-vsa-restaurant-8k
slug: pqthinh232--phobert-vsa-restaurant-8k
source: huggingface
author: pqthinh232
license: MIT
tags: transformers, safetensors, roberta, text-classification, generated_from_trainer, base_model:vinai/phobert-base, base_model:finetune:vinai/phobert-base, license:mit, endpoints_compatible, region:us

⚙️ Technical Specs

architecture: null
params (billions): null
context length: 8,192
pipeline tag: text-classification

📊 Engagement & Metrics

downloads: 66
stars: 0
forks: 0

Data indexed from public sources. Updated daily.