📄 Paper

E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning

by Alperen Görmez · ID: arxiv-paper--2103.01148


High Impact (Citations) · Year: 2021 · Venue: arXiv · FNI Rank: Top 19%

Paper Information Summary (Entity Passport)
Registry ID: arxiv-paper--2103.01148
Provider: arxiv
📜 Cite this paper

Academic & Research Attribution

BibTeX
@misc{arxiv_paper__2103.01148,
  author = {Alperen Görmez and Venkat R. Dasari and Erdem Koyuncu},
  title = {E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning},
  year = {2021},
  howpublished = {\url{https://arxiv.org/abs/2103.01148v3}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Görmez, A., Dasari, V. R., & Koyuncu, E. (2021). E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning [Paper]. Free2AITools. https://arxiv.org/abs/2103.01148v3

🔬 Technical Deep Dive


โš–๏ธ Free2AI Nexus Index

FNI Score: 0.0 (Top 19% Overall Impact)
🔥 Popularity (P): 0
🚀 Velocity (V): 0
🛡️ Credibility (C): 0
🔧 Utility (U): 0

💬 Why this score?

The Nexus Index for E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning aggregates Popularity (P:0), Velocity (V:0), and Credibility (C:0). The Utility score (U:0) represents deployment readiness, context efficiency, and structural reliability within the Nexus ecosystem.


๐Ÿ“ Executive Summary

"State-of-the-art neural networks with early exit mechanisms often need considerable amount of training and fine tuning to achieve good performance with low computational cost. We propose a novel early exit technique, Early Exit Class Means (E$^2$CM), based on class means of samples. Unlike most existing schemes, E$^2$CM does not require gradient-based training of internal classifiers and it does not modify the base network by any means. This makes it particularly useful for neural network tra..."

โ Cite Node

@article{Gormez2021E2CM,
  title={E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning},
  author={Alperen Görmez and Venkat R. Dasari and Erdem Koyuncu},
  journal={arXiv preprint arXiv:2103.01148},
  year={2021}
}

👥 Collaborating Minds

Alperen Görmez · Venkat R. Dasari · Erdem Koyuncu

Abstract & Analysis

State-of-the-art neural networks with early exit mechanisms often need a considerable amount of training and fine-tuning to achieve good performance with low computational cost. We propose a novel early exit technique, Early Exit Class Means (E$^2$CM), based on class means of samples. Unlike most existing schemes, E$^2$CM requires no gradient-based training of internal classifiers and does not modify the base network in any way. This makes it particularly useful for neural network training on low-power devices, as in wireless edge networks. We evaluate the performance and overheads of E$^2$CM over various base neural networks such as MobileNetV3, EfficientNet, and ResNet, and datasets such as CIFAR-100, ImageNet, and KMNIST. Our results show that, given a fixed training time budget, E$^2$CM achieves higher accuracy than existing early exit mechanisms. Moreover, if there are no limitations on the training time budget, E$^2$CM can be combined with an existing early exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to decrease the computational cost in unsupervised learning tasks.
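The abstract describes the core mechanism: class means of intermediate-layer features serve as lightweight exit classifiers, with no gradient training and no modification of the base network. Below is a minimal sketch of that idea; the random-projection "layers", the distance-threshold exit rule, and all function names are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a base network: each "layer" is a fixed random
# projection. In the real method these would be a trained network's
# intermediate feature maps (e.g. ResNet blocks).
NUM_LAYERS, IN_DIM, FEAT_DIM, NUM_CLASSES = 3, 8, 4, 2
PROJECTIONS = [rng.normal(size=(IN_DIM, FEAT_DIM)) for _ in range(NUM_LAYERS)]

def layer_features(x, layer):
    # Feature map of `x` at the given layer.
    return x @ PROJECTIONS[layer]

def fit_class_means(X_train, y_train):
    """Per layer, average the feature vectors belonging to each class.
    No gradient-based training: only forward passes and averaging."""
    return [np.stack([layer_features(X_train[y_train == c], layer).mean(axis=0)
                      for c in range(NUM_CLASSES)])
            for layer in range(NUM_LAYERS)]

def e2cm_predict(x, means, threshold):
    """Run layers in order; exit as soon as some class mean is closer
    than `threshold` (an assumed exit rule, for illustration only)."""
    for layer in range(NUM_LAYERS):
        dists = np.linalg.norm(means[layer] - layer_features(x, layer), axis=1)
        if dists.min() < threshold or layer == NUM_LAYERS - 1:
            return int(dists.argmin()), layer  # (predicted class, exit layer)

# Toy data: two shifted Gaussian clusters.
X = rng.normal(size=(20, IN_DIM)); X[:10] += 3.0
y = np.array([0] * 10 + [1] * 10)
means = fit_class_means(X, y)
pred, exit_layer = e2cm_predict(X[0], means, threshold=4.0)
```

A looser threshold makes more samples exit at shallow layers (lower cost, possibly lower accuracy); a stricter one pushes samples deeper, which is the cost/accuracy trade-off the abstract refers to.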


๐Ÿ›ก๏ธ Paper Transparency Report

Verified data manifest for traceability and transparency.


🆔 Identity & Source

id: arxiv-paper--2103.01148
source: arxiv
author: Alperen Görmez
tags: arxiv:cs.LG, arxiv:cs.NE, arxiv:stat.ML

โš™๏ธ Technical Specs

architecture
null
params billions
null
context length
null

📊 Engagement & Metrics

likes: 0
downloads: 0

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)