Efficient Structured Prediction with Transformer Encoders
Finetuning is a useful method for adapting Transformer-based text encoders to new tasks but can be computationally expensive for structured prediction tasks that require tuning at the token level. Furthermore, finetuning is inherently inefficient in updating all base model parameters, which prevent...
| Main Author: | Ali Basirat |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Linköping University Electronic Press, 2024-12-01 |
| Series: | Northern European Journal of Language Technology |
| Online Access: | https://nejlt.ep.liu.se/article/view/4932 |
Similar Items

- An Efficient Framework for Configurable Video Encoder
  by: Wei Li, et al.
  Published: (2009-01-01)
- An Efficient Frequency Encoding Scheme for Optical Convolution Accelerator
  by: Gongyu Xia, et al.
  Published: (2024-12-01)
- Arabic Speech Recognition Based on Encoder-Decoder Architecture of Transformer
  by: Mohanad Sameer, et al.
  Published: (2023-03-01)
- Ensemble graph auto-encoders for clustering and link prediction
  by: Chengxin Xie, et al.
  Published: (2025-01-01)
- Efficient Hyperspectral Video Reconstruction via Dual-Channel DMD Encoding
  by: Mingming Ma, et al.
  Published: (2025-01-01)