Helm Bert
Entity Passport

| Field | Value |
|---|---|
| Registry ID | hf-model--flansma--helm-bert |
| License | MIT |
| Provider | huggingface |
Cite this model
Academic & Research Attribution
@misc{hf_model__flansma__helm_bert,
  author = {Flansma},
  title = {Helm Bert Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/flansma/helm-bert}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
Quick Commands
huggingface-cli download flansma/helm-bert
⚖️ Nexus Index V2.0
💬 Index Insight
FNI V2.0 for Helm Bert: Semantic (S:50), Authority (A:0), Popularity (P:33), Recency (R:96), Quality (Q:65).
Technical Deep Dive
HELM-BERT
A peptide language model using HELM (Hierarchical Editing Language for Macromolecules) notation, compatible with Hugging Face Transformers.
Model Description
HELM-BERT is built on the DeBERTa architecture and pre-trained on ~75k peptides from four databases (ChEMBL, CREMP, CycPeptMPDB, Propedia) using Masked Language Modeling (MLM) with a Warmup-Stable-Decay (WSD) learning rate schedule. Key architectural features:
- Disentangled Attention: Decomposes attention into content-content and content-position terms
- Enhanced Mask Decoder (EMD): Injects absolute position embeddings at the decoder stage
- Span Masking: Contiguous spans of tokens are masked, with span lengths drawn from a geometric distribution (a minimal sketch follows this list)
- nGiE: n-gram Induced Encoding layer (1D convolution, kernel size 3)
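The span-masking objective can be illustrated with a short sketch. This is a generic, SpanBERT-style implementation using NumPy; the geometric parameter and the span-length cap are illustrative assumptions, and only the overall masking rate (p=0.15) comes from the specifications below.

```python
import numpy as np

def span_mask(token_ids, mask_token_id, mask_prob=0.15, geom_p=0.2, max_span=10, rng=None):
    """Mask contiguous spans whose lengths follow a geometric distribution.

    A minimal sketch of span masking for MLM pre-training; geom_p and max_span
    are illustrative assumptions, only mask_prob=0.15 comes from the model card.
    """
    rng = rng or np.random.default_rng()
    ids = np.asarray(token_ids, dtype=np.int64)
    n = len(ids)
    budget = max(1, round(n * mask_prob))       # total number of tokens to mask
    masked = np.zeros(n, dtype=bool)
    while masked.sum() < budget:
        span_len = min(int(rng.geometric(geom_p)), max_span)
        start = int(rng.integers(0, n))
        masked[start:start + span_len] = True
    labels = np.where(masked, ids, -100)        # -100 is ignored by the MLM loss
    corrupted = np.where(masked, mask_token_id, ids)
    return corrupted.tolist(), labels.tolist()
```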

Model Specifications
| Parameter | Value |
|---|---|
| Parameters | 54.8M |
| Hidden size | 768 |
| Layers | 6 |
| Attention heads | 12 |
| Vocab size | 78 |
| Max token length | 512 |
| Pre-training data | ~75k peptides (ChEMBL, CREMP, CycPeptMPDB, Propedia) |
| Pre-training objective | MLM (span masking, p=0.15) |
| LR schedule | Warmup-Stable-Decay (WSD) |
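As a quick sanity check, most of these values should be visible on the loaded configuration. A sketch, assuming the custom config exposes the standard DeBERTa-style field names (hidden_size, num_hidden_layers, and so on):

```python
from transformers import AutoConfig

# Field names below are the standard DeBERTa-style keys; the custom HELM-BERT
# config may name some of them differently.
config = AutoConfig.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
print(config.hidden_size)              # expected: 768
print(config.num_hidden_layers)        # expected: 6
print(config.num_attention_heads)      # expected: 12
print(config.vocab_size)               # expected: 78
print(config.max_position_embeddings)  # expected: 512
```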
How to Use
from transformers import AutoModel, AutoTokenizer

# trust_remote_code=True is needed because the repository ships custom model/tokenizer code.
model = AutoModel.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Flansma/helm-bert", trust_remote_code=True)

# Cyclosporine A in HELM notation (a cyclic peptide; the connection record after "$" closes the ring)
inputs = tokenizer("PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # per-token hidden states: (batch, seq_len, hidden_size)
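To get a single fixed-size embedding per peptide, one common option is attention-mask-aware mean pooling over the token embeddings; this is a generic sketch, not a pooling strategy prescribed by the model card.

```python
import torch

# Mean-pool the last hidden state over real (non-padding) tokens.
mask = inputs["attention_mask"].unsqueeze(-1).float()               # (batch, seq_len, 1)
peptide_embedding = (embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(peptide_embedding.shape)                                       # (batch_size, 768)
```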
Training Data
Pre-trained on deduplicated peptide sequences from:
- ChEMBL: Bioactive molecules database
- CREMP: Cyclic peptide conformational ensemble database
- CycPeptMPDB: Cyclic peptide membrane permeability database
- Propedia: Protein-peptide interaction database
Downstream Performance
Permeability Regression (CycPeptMPDB)
Single-Assay (mixed PAMPA/Caco-2 target):
| Split | R² | Pearson | RMSE | MAE |
|---|---|---|---|---|
| Random | 0.769 | 0.878 | 0.388 | 0.269 |
| Scaffold | 0.643 | 0.812 | 0.380 | 0.284 |
Multi-Assay (separate PAMPA and Caco-2 heads):
| Split | Assay | R² | Pearson | RMSE | MAE |
|---|---|---|---|---|---|
| Random | PAMPA | 0.711 | 0.844 | 0.426 | 0.298 |
| Random | Caco-2 | 0.772 | 0.878 | 0.402 | 0.305 |
| Scaffold | PAMPA | 0.584 | 0.788 | 0.393 | 0.299 |
| Scaffold | Caco-2 | 0.701 | 0.846 | 0.381 | 0.287 |
Train/test split 9:1, with 10% of the training set held out for validation. Scaffold splits group peptides by Bemis–Murcko scaffolds (a minimal sketch follows below).
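For reference, a scaffold split of the kind described here can be sketched with RDKit's Bemis–Murcko scaffolds. This assumes a SMILES string is available for each peptide and is not the authors' exact splitting code.

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, test_frac=0.1):
    """Group molecules by Bemis-Murcko scaffold so that every member of a
    scaffold lands on the same side of the train/test split."""
    groups = defaultdict(list)
    for idx, smi in enumerate(smiles_list):
        groups[MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)].append(idx)
    # Fill the training set with the largest scaffold groups first; the
    # remaining (rarer) scaffolds form the held-out test set.
    n_train = int(len(smiles_list) * (1 - test_frac))
    train_idx, test_idx = [], []
    for group in sorted(groups.values(), key=len, reverse=True):
        (train_idx if len(train_idx) < n_train else test_idx).extend(group)
    return train_idx, test_idx
```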

PPI Classification (Propedia v2)
| Split | ROC-AUC | PR-AUC | F1 | MCC | Balanced Acc |
|---|---|---|---|---|---|
| Random | 0.972 | 0.912 | 0.859 | 0.824 | 0.911 |
| aCSM | 0.868 | 0.702 | 0.613 | 0.559 | 0.735 |
Train/test 8:2, val 10% from train, 1:4 positive:negative ratio.
- Random: random split
- aCSM: clustering-based split on aCSM-ALL complex signatures with protein overlap pruning
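The classification metrics reported above can be computed from predicted probabilities with scikit-learn. A sketch, assuming y_true holds the binary interaction labels and y_prob the model's positive-class probabilities; the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import (average_precision_score, balanced_accuracy_score,
                             f1_score, matthews_corrcoef, roc_auc_score)

def ppi_metrics(y_true, y_prob, threshold=0.5):
    """Compute the classification metrics used in the table above."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "ROC-AUC": roc_auc_score(y_true, y_prob),
        "PR-AUC": average_precision_score(y_true, y_prob),  # a common PR-AUC estimator
        "F1": f1_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "Balanced Acc": balanced_accuracy_score(y_true, y_pred),
    }
```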

SST2 (Somatostatin Receptor 2) Binding Affinity (pChEMBL)
| Split | R² | Pearson | RMSE | MAE |
|---|---|---|---|---|
| Random | 0.312 | 0.600 | 0.742 | 0.499 |
| Scaffold | 0.006 | 0.236 | 0.632 | 0.551 |
Train/test 8:2, val 10% from train. Scaffold split by Murcko scaffolds.
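The regression metrics used throughout this section (R², Pearson r, RMSE, MAE) can be computed as follows; a generic sketch, not the authors' evaluation script.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_metrics(y_true, y_pred):
    """Compute the regression metrics reported in the tables above."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    r, _ = pearsonr(y_true, y_pred)
    return {
        "R2": r2_score(y_true, y_pred),
        "Pearson": r,
        "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "MAE": mean_absolute_error(y_true, y_pred),
    }
```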

Citation
@article{lee2025helmbert,
  title={HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction},
  author={Seungeon Lee and Takuto Koyama and Itsuki Maeda and Shigeyuki Matsumoto and Yasushi Okuno},
  journal={arXiv preprint arXiv:2512.23175},
  year={2025},
  url={https://arxiv.org/abs/2512.23175}
}
License
MIT License
⚠️ Incomplete Data
Some information about this model is not available. Use with caution and verify details from the original source before relying on this data.
📝 Limitations & Considerations
- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- License: listed as MIT; verify licensing terms from the original source before commercial use.
🛡️ Model Transparency Report
Technical metadata sourced from upstream repositories.
🆔 Identity & Source
- id: hf-model--flansma--helm-bert
- slug: flansma--helm-bert
- source: huggingface
- author: Flansma
- license: MIT
- tags: safetensors, helmbert, peptide, biology, drug-discovery, helm, helm-notation, cyclic-peptide, peptide-language-model, fill-mask, custom_code, en, arxiv:2512.23175, license:mit, region:us
⚙️ Technical Specs
- architecture: null
- params billions: null
- context length: null
- pipeline tag: fill-mask
📊 Engagement & Metrics
- downloads: 2,508
- stars: 0
- forks: 0
Data indexed from public sources. Updated daily.