Vision Transformers for Image Classification: A Comparative Survey
Transformers were initially introduced for natural language processing, leveraging the self-attention mechanism. They require minimal inductive biases in their design and can function effectively as set-based architectures. Additionally, transformers excel at capturing long-range dependencies and […]
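As a rough illustration of the self-attention operation the abstract refers to, the sketch below computes scaled dot-product attention over a toy set of patch embeddings, showing how every token attends to every other token (the source of the long-range dependencies mentioned above). The function name, shapes, and single-head setup are illustrative assumptions, not the survey's implementation.

```python
# Minimal sketch (not from the surveyed paper): single-head scaled
# dot-product self-attention over a small set of "patch" embeddings.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (n_tokens, d_model); w_q/w_k/w_v: (d_model, d_head)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all tokens
    return weights @ v                               # each output mixes all inputs

# Toy usage: 4 patch tokens with 8-dim embeddings and one 8-dim head.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```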
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Series: | Technologies |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2227-7080/13/1/32 |