Experts ou (foule de) non-experts ? la question de l’expertise des annotateurs vue de la myriadisation (crowdsourcing)
Experts or (crowd of) non-experts? The question of the annotators’ expertise viewed from crowdsourcing.

Manual corpus annotation is increasingly performed through crowdsourcing: produced by a crowd of people, through the Web, for free or for a very small remuneration. Our experiments question the...
| Main Author: | Karën Fort |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Cercle linguistique du Centre et de l'Ouest - CerLICO, 2017-02-01 |
| Series: | Corela |
| Subjects: | |
| Online Access: | https://journals.openedition.org/corela/4835 |
Similar Items
- Data preparation in crowdsourcing for pedagogical purposes
  by: Tanara Zingano Kuhn, et al.
  Published: (2022-12-01)
- Your Cursor Reveals: On Analyzing Workers’ Browsing Behavior and Annotation Quality in Crowdsourcing Tasks
  by: Pei-Chi Lo, et al.
  Published: (2025-01-01)
- Les milles visages de l’expertise. Savoir expert, savoir profane dans les procès pour infanticide à Florence au début du XXe siècle
  by: Silvia Chiletti
  Published: (2016-05-01)
- Organizing the Net-Wide Public Expert Evaluation Based on Collective Intelligence Technologies
  by: B. B. Slavin, et al.
  Published: (2018-08-01)
- The Browser-Based GLAUx Treebank Infrastructure: Framework, Functionality, and Future
  by: Keersmaekers Alek, et al.
  Published: (2024-12-01)