Helm Bert

by Flansma
Nexus Index
43.6 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 33
R: Recency 96
Q: Quality 65
Model Information Summary
Entity Passport
Registry ID hf-model--flansma--helm-bert
License MIT
Provider huggingface
📜 Cite this model

Academic & Research Attribution

BibTeX

```bibtex
@misc{hf_model__flansma__helm_bert,
  author = {Flansma},
  title = {Helm Bert Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/flansma/helm-bert}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
```
APA Style
Flansma. (2026). Helm Bert [Model]. Free2AITools. https://huggingface.co/flansma/helm-bert

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download

```shell
huggingface-cli download flansma/helm-bert
```

---
HELM-BERT

A peptide language model using HELM (Hierarchical Editing Language for Macromolecules) notation, compatible with Hugging Face Transformers.

GitHub

Model Description

HELM-BERT is built upon the DeBERTa architecture, pre-trained on ~75k peptides from four databases (ChEMBL, CREMP, CycPeptMPDB, Propedia) using Masked Language Modeling (MLM) with a Warmup-Stable-Decay (WSD) learning rate schedule.

  • Disentangled Attention: Decomposes attention into content-content and content-position terms
  • Enhanced Mask Decoder (EMD): Injects absolute position embeddings at the decoder stage
  • Span Masking: Contiguous token masking with geometric distribution
  • nGiE: n-gram Induced Encoding layer (1D convolution, kernel size 3)
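
The span-masking scheme listed above (contiguous spans with geometrically distributed lengths, as popularized by SpanBERT) can be sketched as follows. This is an illustrative sketch, not code from the HELM-BERT repository; the function name and default hyperparameters are assumptions.

```python
import random

def sample_span_mask(seq_len, mask_prob=0.15, p_geom=0.2, max_span=10, rng=None):
    """Choose contiguous token spans to mask until ~mask_prob of positions are covered."""
    rng = rng or random.Random()
    budget = max(1, round(seq_len * mask_prob))
    masked = set()
    while len(masked) < budget:
        # span length ~ geometric(p_geom), truncated at max_span
        length = 1
        while length < max_span and rng.random() > p_geom:
            length += 1
        start = rng.randrange(seq_len)
        for i in range(start, min(start + length, seq_len)):
            masked.add(i)
            if len(masked) >= budget:
                break
    return sorted(masked)
```

With `mask_prob=0.15` and a 512-token sequence, this masks 77 positions per example, grouped into contiguous spans rather than independent tokens.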

Model Specifications

| Parameter | Value |
|---|---|
| Parameters | 54.8M |
| Hidden size | 768 |
| Layers | 6 |
| Attention heads | 12 |
| Vocab size | 78 |
| Max token length | 512 |
| Pre-training data | ~75k peptides (ChEMBL, CREMP, CycPeptMPDB, Propedia) |
| Pre-training objective | MLM (span masking, p=0.15) |
| LR schedule | Warmup-Stable-Decay (WSD) |
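
The Warmup-Stable-Decay (WSD) schedule in the table has no single canonical form; a common variant uses a linear warmup, a long constant plateau, and a final linear decay to zero. The fractions and base learning rate below are illustrative assumptions, since the card gives no schedule details:

```python
def wsd_lr(step, total_steps, base_lr=1e-4, warmup_frac=0.1, decay_frac=0.1):
    """Warmup-Stable-Decay: linear warmup, constant plateau, linear decay to zero."""
    warmup = int(total_steps * warmup_frac)
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup:                      # linear warmup
        return base_lr * (step + 1) / warmup
    if step < decay_start:                 # stable plateau
        return base_lr
    remaining = total_steps - decay_start  # linear decay to zero
    return base_lr * max(0.0, (total_steps - step) / remaining)
```

The appeal of WSD over cosine decay is that the plateau phase can be extended (or checkpointed mid-run) without committing to a total step count in advance; only the short decay tail depends on it.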

How to Use

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Flansma/helm-bert", trust_remote_code=True)

# Cyclosporine A in HELM notation
inputs = tokenizer("PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
```
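
The card does not specify how to reduce the per-token hidden states to a single fixed-size peptide embedding; a common choice (an assumption here, not something the model prescribes) is attention-mask-aware mean pooling:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # (batch, 1)
    return summed / counts

# e.g. peptide_vec = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
```

Masking out padding matters whenever peptides are batched to a common length; pooling over raw `last_hidden_state` would otherwise dilute the embedding with pad-token states.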

Training Data

Pre-trained on deduplicated peptide sequences from:

  • ChEMBL: Bioactive molecules database
  • CREMP: Cyclic peptide conformational ensemble database
  • CycPeptMPDB: Cyclic peptide membrane permeability database
  • Propedia: Protein-peptide interaction database

Downstream Performance

Permeability Regression (CycPeptMPDB)

Single-Assay (mixed PAMPA/Caco-2 target):

| Split | Pearson | Spearman | RMSE | MAE |
|---|---|---|---|---|
| Random | 0.769 | 0.878 | 0.388 | 0.269 |
| Scaffold | 0.643 | 0.812 | 0.380 | 0.284 |

Multi-Assay (separate PAMPA and Caco-2 heads):

| Split | Assay | Pearson | Spearman | RMSE | MAE |
|---|---|---|---|---|---|
| Random | PAMPA | 0.711 | 0.844 | 0.426 | 0.298 |
| Random | Caco-2 | 0.772 | 0.878 | 0.402 | 0.305 |
| Scaffold | PAMPA | 0.584 | 0.788 | 0.393 | 0.299 |
| Scaffold | Caco-2 | 0.701 | 0.846 | 0.381 | 0.287 |

Train/test 9:1, val 10% from train. Scaffold split by Murcko scaffolds.
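
A scaffold split like the one above groups molecules by their Murcko scaffold (usually computed with RDKit's `MurckoScaffold` utilities) and assigns whole scaffold groups to either train or test, so no scaffold appears on both sides. A minimal sketch, assuming scaffold strings have already been computed upstream (the function and its greedy assignment policy are illustrative, not the paper's exact procedure):

```python
from collections import defaultdict

def scaffold_split(scaffolds, test_frac=0.1):
    """Assign whole scaffold groups to train/test index lists, largest groups first."""
    groups = defaultdict(list)
    for idx, scaf in enumerate(scaffolds):
        groups[scaf].append(idx)
    # visit largest scaffolds first: big groups overflow the test budget
    # and land in train, so test collects the rarer scaffolds
    ordered = sorted(groups.values(), key=len, reverse=True)
    n_test = int(len(scaffolds) * test_frac)
    train, test = [], []
    for grp in ordered:
        if len(test) + len(grp) <= n_test:
            test.extend(grp)
        else:
            train.extend(grp)
    return train, test
```

Because entire scaffold groups move together, test-set molecules are structurally novel relative to training, which is why scaffold-split metrics above are consistently lower than random-split ones.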

PPI Classification (Propedia v2)

| Split | ROC-AUC | PR-AUC | F1 | MCC | Balanced Acc |
|---|---|---|---|---|---|
| Random | 0.972 | 0.912 | 0.859 | 0.824 | 0.911 |
| aCSM | 0.868 | 0.702 | 0.613 | 0.559 | 0.735 |

Train/test 8:2, val 10% from train, 1:4 positive:negative ratio.

  • Random: random split
  • aCSM: clustering-based split on aCSM-ALL complex signatures with protein overlap pruning

SST2 Binding Affinity (pChEMBL)

| Split | Pearson | Spearman | RMSE | MAE |
|---|---|---|---|---|
| Random | 0.312 | 0.600 | 0.742 | 0.499 |
| Scaffold | 0.006 | 0.236 | 0.632 | 0.551 |

Train/test 8:2, val 10% from train. Scaffold split by Murcko scaffolds.

Citation

```bibtex
@article{lee2025helmbert,
  title={HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction},
  author={Seungeon Lee and Takuto Koyama and Itsuki Maeda and Shigeyuki Matsumoto and Yasushi Okuno},
  journal={arXiv preprint arXiv:2512.23175},
  year={2025},
  url={https://arxiv.org/abs/2512.23175}
}
```

License

MIT License

⚠️ Incomplete Data

Some information about this model is unavailable. Use with caution: verify details against the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • License: listed as MIT upstream; still verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
2.5K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


🛡️ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--flansma--helm-bert
slug: flansma--helm-bert
source: huggingface
author: Flansma
license: MIT
tags: safetensors, helmbert, peptide, biology, drug-discovery, helm, helm-notation, cyclic-peptide, peptide-language-model, fill-mask, custom_code, en, arxiv:2512.23175, license:mit, region:us

⚙️ Technical Specs

architecture: null
params billions: null
context length: null
pipeline tag: fill-mask

📊 Engagement & Metrics

downloads: 2,508
stars: 0
forks: 0

Data indexed from public sources. Updated daily.