Natural Language Processing (NLP): Identifying Linguistic Gender Bias in Electronic Medical Records (EMRs)

With the rise of feminism, more women report experiencing doubt or discrimination in medical settings. This study aims to explore the linguistic mechanisms by which physicians express disbelief toward patients and to investigate gender differences in the use of negative medical descriptions. A content analysis of 285 electronic medical records was conducted to identify four linguistic bias features: judging, reporting, quoting, and fudging. Sentiment classification and an ICD-11-based knowledge graph were used to determine the prevalence of these features in the medical records, and logistic regression was applied to test gender differences. A total of 2354 descriptions were analyzed, with 64.7% of the patients identified as male. Descriptions of female patients contained fewer judgmental linguistic features but more fudging-related linguistic features compared to male patients (judging: OR 0.69, 95% CI 0.54-0.88, p < 0.01; fudging: OR 1.38, 95% CI 1.09-1.75, p < 0.01). No significant differences were found in the use of reporting (OR 0.95, 95% CI 0.61-1.47, p = 0.81) or quoting (OR 0.99, 95% CI 0.72-1.36, p = 0.96) between male and female patients. This study highlights how physicians may express disbelief toward patients through linguistic biases, particularly through the use of judging and fudging language. Both male and female patients may face different types of systematic bias from physicians, with female patients experiencing more fudging-related language and less judgmental language compared to male patients. These differences point to a potential mechanism through which gender disparities in healthcare quality may arise, underscoring the need for further investigation and action to address these biases.

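The abstract reports its gender comparisons as odds ratios from logistic regression. As an illustration only (the counts below are synthetic, not the study's data): with a single binary predictor such as patient gender, the logistic-regression odds ratio coincides with the 2×2-table odds ratio, so a figure like the reported "judging" OR of 0.69 can be reproduced directly, along with a Wald 95% confidence interval.

```python
# Hypothetical sketch (not the authors' code): computing an odds ratio
# and Wald 95% CI for one binary linguistic feature ("judging") by
# patient gender. Counts are synthetic, chosen to roughly match the
# abstract's reported OR of 0.69 for female vs male patients.
import math

# rows: feature present / absent; columns: female / male (synthetic)
a, b = 211, 503    # judging present: female, male
c, d = 620, 1020   # judging absent:  female, male

or_hat = (a * d) / (b * c)                       # odds ratio, female vs male
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)  # Wald 95% CI lower bound
hi = math.exp(math.log(or_hat) + 1.96 * se_log)  # Wald 95% CI upper bound
print(f"judging OR {or_hat:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# prints: judging OR 0.69, 95% CI 0.57-0.83
```

The study's CI (0.54-0.88) is somewhat wider than this unadjusted calculation, which is consistent with the model including covariates beyond gender.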

Bibliographic Details
Main Authors: Site Xu MPH, Mu Sun MD
Format: Article
Language: English
Published: SAGE Publishing, 2025-01-01
Series: Journal of Patient Experience
ISSN: 2374-3743
Online Access: https://doi.org/10.1177/23743735251314843