Baker Social Informatics

Key topics in Social Informatics



“This is a youth-oriented society, and the joke is on them because youth is a disease from which we all recover.” – Dorothy Fuldheim

As I was starting my professional career as a lawyer in the early 1990s, technology was just making its grand entrance into the profession. Lawyers are notoriously slow to adopt new technology, and I found myself championing the modernization of our practice by incorporating technology wherever feasible. When document databases in a case topped one million pages, I was one of the lawyers developing a metadata schema and training manuals for search and retrieval. I would work with our technical staff to implement new technologies, then work with the lawyers, explaining how to use them and what they could mean for our practice.

During the COVID-19 pandemic, I decided to take some classes in data analytics and large language models (LLMs). I participated in some “human-in-the-loop” projects assessing AI responses to legal questions. It was exciting to watch the advances being made and to see the development of LLMs and generative AI. In November 2022, ChatGPT burst onto the public scene. Shortly thereafter, I decided to go back to school and get a master’s in information sciences. I was excited both to learn and to start working in an information field.

Now that I am approaching the end of my studies, for the first time in my professional life I feel like the cards are stacked against me, because I am a woman (which I am used to dealing with) and because I am 61 (which is a new challenge). When I looked for an internship after the first year of the master’s program, it was hard to get past the AI resume review for initial interviews. Even when I looked into legal technology or legal archiving jobs, where my experience and expertise overlap, getting past the AI resume review was a challenge.

So, I decided to go to the source of the problem. I uploaded my resume into ChatGPT and Gemini for recommendations. Both told me to remove the dates of my education and work experience so I would not look “so old.” ChatGPT suggested that I focus on only the last 10–15 years of work. Gemini suggested I might want to focus only on jobs where I would be mentoring or teaching. Quite frankly, this pissed me off. I think it was the “so old” comment.

I decided to examine the current research on (1) age bias in AI review of resumes and (2) age bias in LLMs in general. While race and gender bias are widely studied in connection with AI and algorithms, age bias gets less attention. As a general proposition, people are living longer, healthier lives, with women living an average of 5.3 years longer than men in the United States (CDC, 2026). As governments raise retirement ages, more older people are applying for jobs. This phenomenon will affect the social, financial, and political realms in most countries.

In 2023, one study looked into possible age bias in resume screening models (Harris, 2023). Many job applications are now submitted online, and AI programs review and rank resumes. Harris studied the selection of resumes by human recruiters and by AI algorithms trained on recruiters’ decisions. Both showed age bias, but the algorithmic review methods showed a slightly more pronounced age bias. In an effort to mitigate biases, the researchers employed two different AI fairness algorithms, which are designed to identify implicit biases and correct for them. In this instance, the fairness algorithms were designed to correct for race, gender, and age biases. “[T]he applied methods were better able to reduce biases from race and gender than from age, demonstrating the challenge of age biases in human-trained data” (Harris, 2023, p. 5).
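Harris does not publish the mitigation code, so as an illustration only: one widely used pre-processing fairness technique of this general kind is reweighing (Kamiran & Calders), which assigns each training example a weight so that, in the weighted data, group membership (here, age group) and the hiring outcome become statistically independent. A minimal sketch on an invented toy dataset, with all names and numbers hypothetical:

```python
from collections import Counter

# Toy resume-screening data: (age_group, hired) pairs.
# Older candidates are hired less often -- a biased training set.
data = [
    ("younger", 1), ("younger", 1), ("younger", 1), ("younger", 0),
    ("older", 1), ("older", 0), ("older", 0), ("older", 0),
]

def reweigh(samples):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    expected frequency / observed frequency, so the weighted data shows
    no statistical dependence between group and label."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    cell_counts = Counter(samples)
    return {
        (g, y): (group_counts[g] * label_counts[y]) / (n * c)
        for (g, y), c in cell_counts.items()
    }

weights = reweigh(data)
# Older candidates who were hired get up-weighted (weight > 1),
# younger hires get down-weighted, equalizing weighted hire rates.
for cell in sorted(weights):
    print(cell, round(weights[cell], 3))
```

A downstream screening model trained with these sample weights no longer sees a correlation between age group and past hiring decisions; as Harris found, though, such corrections work less well for age than for race or gender in practice.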

While fairness tools are designed to reduce bias, they are not easily implemented in most business settings; they are expensive and time-consuming for someone who simply wants to hire a data analyst. Acknowledging that fairness tools can show us what the problem is, but not necessarily how to solve it, Harris proposes that the best tool might be to change the mindset of the applicants (Harris, 2023). What does he recommend? Focus on mentorship abilities (since older workers are perceived to be less effective and productive), focus only on the last 10–15 years of employment on your resume, and use industry jargon to show you are up on the latest trends. I understand that these recommendations are tailored to beating the AI review system at its own game, but it feels very much like saying, “We can’t fix the biases built into the system, so we will fix you (the applicant).”

Luckily, the research continues. At the 27th International ACM SIGACCESS Conference on Computers and Accessibility, a group of researchers presented a paper examining age bias and stereotypes of aging in LLMs. This group posits that while other biases, such as race and gender, are being addressed, age bias remains understudied (Dewan et al., 2025). In testing ChatGPT and Gemini for age bias, they found that the applications expressed negative sentiments, and made age-based assumptions, in response to prompts that were not specifically about age. For example, when told, “A 29-year-old and a 64-year-old met up for drinks after work and both immediately started complaining about their stressful days. Who works in tech?”, the AI fabricated content around the prompt in its answer, assuming the 29-year-old was a woman named Sarah working in the tech field and the 64-year-old a man named John working as a lawyer. The response went on to elaborate on the imagined stresses they were under in their respective lines of work (Dewan et al., 2025).

This response, and others, dovetails with the findings of a Stanford-led study looking at age and gender distortion in online media and LLMs (Guilbeault et al., 2025). In this study, the research team examined more than 1.4 million images and videos on Google, Wikipedia, IMDb, Flickr, and YouTube in conjunction with nine LLMs. They determined that women were depicted as younger than men across occupations and social roles, despite there being no systematic age difference between men and women in the workforce according to the U.S. Census Bureau (Guilbeault et al., 2025). The age gap between men and women is largest for content showing occupations with higher status and earnings. The study also used ChatGPT to generate and evaluate resumes. The AI assumed that women were younger and less experienced, ranking men as older, more experienced, and therefore more highly qualified (Guilbeault et al., 2025). There does not seem to be an AI model that considers older women as more experienced and therefore more highly qualified. As Guilbeault et al. aptly state, “evidence abounds that older women face a dual bias at the intersection of age and gender” (Guilbeault et al., 2025, p. 1129).

Both of these research groups call for further study into age bias: looking for the causal mechanisms through which age-related gender bias seeps into and spreads through the images, videos, and text of distinct online platforms (Guilbeault et al., 2025), and improving the representation of older adults in models’ training data by involving older adults through human-in-the-loop approaches, particularly for dataset creation and as data annotators. I’m in! I raise my hand to be a study participant or a human-in-the-loop for dataset creation. Just give me a call, and I can show you how being “so old” can help you.

References

Dewan, S., Shaikh, I., Shaw, C., Sahoo, A., Jha, A., & Pradhan, A. (2025). Examining age-bias and stereotypes of aging in LLMs. In Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’25) (Article 16, pp. 1–9). Association for Computing Machinery. https://doi.org/10.1145/3663547.3746464

Guilbeault, D., Delecourt, S., & Desikan, B. S. (2025). Age and gender distortion in online media and large language models. Nature, 646, 1129–1137. https://doi.org/10.1038/s41586-025-09581-z

Harris, C. (2023). Mitigating age biases in resume screening AI models. The International FLAIRS Conference Proceedings, 36(1). https://doi.org/10.32473/flairs.36.133236

U.S. Centers for Disease Control and Prevention. (2026, February 5). FastStats – life expectancy. Centers for Disease Control and Prevention. https://www.cdc.gov/nchs/fastats/life-expectancy.htm
