AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva

The development of artificial intelligence (AI) is one of the greatest technological revolutions in recent human history. AI technology is widely used in various fields, including education, where AI is both studied as a discipline and used as a tool to overcome social barriers. Like any human...


Bibliographic Details
Main Author: Martina Mattiazzi
Format: Article
Language: Spanish
Published: Universidad de Alicante 2025-01-01
Series: Feminismo/s
Subjects:
Online Access: https://feminismos.ua.es/article/view/27077
_version_ 1832592225175863296
author Martina Mattiazzi
author_facet Martina Mattiazzi
author_sort Martina Mattiazzi
collection DOAJ
description The development of artificial intelligence (AI) is one of the greatest technological revolutions in recent human history. AI technology is widely used in various fields, including education, where AI is both studied as a discipline and used as a tool to overcome social barriers. Like any human revolution, however, it calls for caution: the growing use of these new computing systems also entails risks. One of these risks is the reinforcement of gender stereotypes and discrimination against women through linguistic feedback. Through an experimental analysis conducted on common AI-integrated apps, Google Translate and Canva, we investigate linguistic behaviours such as responses to command prompts. The results obtained demonstrate the existence of gender biases in the AI's output, in both textual and visual language. These biases are consequences of the structural inequalities present in society: it is not the technology that is sexist, but the dataset on which it is based, which in turn derives from content produced by users and published on the internet. In a society founded on democracy and equality, it is important to ensure that a technology as widespread as AI does not perpetuate existing stereotypes and does not become a new means of reinforcing discrimination. From a linguistic perspective, this means paying attention to the linguistic outputs, both textual and visual, provided by the AI, and examining the dataset it has been trained on. Given their central role in the education of new generations, schools and institutions should prepare students to view the phenomenon critically and provide them with the tools to counter it. This path could begin with teaching students how AI works and the ethics of technology, and with using inclusive language in the educational context.
format Article
id doaj-art-8a8e95504c1d44deaef10ca412e5618a
institution Kabale University
issn 1989-9998
language Spanish
publishDate 2025-01-01
publisher Universidad de Alicante
record_format Article
series Feminismo/s
spelling doaj-art-8a8e95504c1d44deaef10ca412e5618a2025-01-21T12:32:02ZspaUniversidad de AlicanteFeminismo/s1989-99982025-01-014511813810.14198/fem.2025.45.0535289AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and CanvaMartina Mattiazzi0https://orcid.org/0009-0005-7294-4814Consiglio Nazionale delle Ricerche, Milanohttps://feminismos.ua.es/article/view/27077artificial intelligencebiascanvagoogle translateinclusive languagelinguisticslinguistic sexismethnic stereotypes
spellingShingle Martina Mattiazzi
AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
Feminismo/s
artificial intelligence
bias
canva
google translate
inclusive language
linguistics
linguistic sexism
ethnic stereotypes
title AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
title_full AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
title_fullStr AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
title_full_unstemmed AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
title_short AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
title_sort ai and language new forms for old discriminations a case study in google translate and canva
topic artificial intelligence
bias
canva
google translate
inclusive language
linguistics
linguistic sexism
ethnic stereotypes
url https://feminismos.ua.es/article/view/27077
work_keys_str_mv AT martinamattiazzi aiandlanguagenewformsforolddiscriminationsacasestudyingoogletranslateandcanva