Erratum: Measuring and Improving Consistency in Pretrained Language Models
| Main Authors: | Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The MIT Press, 2022-01-01 |
| Series: | Transactions of the Association for Computational Linguistics |
| Online Access: | http://dx.doi.org/10.1162/tacl_x_00455 |
Similar Items
- Geographic Adaptation of Pretrained Language Models
  by: Valentin Hofmann, et al.
  Published: (2024-04-01)
- oLMpics-On What Language Model Pre-training Captures
  by: Alon Talmor, et al.
  Published: (2021-03-01)
- Text-based NP Enrichment
  by: Yanai Elazar, et al.
  Published: (2022-08-01)
- Erratum: “ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs”
  by: Wenpeng Yin, et al.
  Published: (2021-03-01)
- Pretrained Language Models as Containers of the Discursive Knowledge
  by: Rafal Maciag
  Published: (2024-01-01)