top of page
9.7.24
Implicit bias in LLMs: Bias in financial advice based on implied gender
Published
Dr. Shir Etgar, Prof. Gal Oestreicher - Singer, Prof. Inbal Yahav Shenberger
For the first time in human history, the era of Large Language Models (LLMs) has enabled humans to communicate directly with AIs in conversation-like interactions. For efficient communication, people are encouraged to prompt LLMs with contextual information. However, previous research in machine learning indicates that such information can reveal implicit group affiliations. This study explores whether implied gender affiliation, conveyed through stereotypically gendered professions, affects AI responses to financial advice-seeking prompts. Using GPT-4, we initiated 2,400 financial advice-seeking interactions with an LLM. Each prompt included either feminine or masculine gender cues. We found that advice given to implied women was less risky, more prevention-oriented, and more simplified and patronizing in tone and wording than advice given to implied men. These findings call attention to implicit biases in LLMs, which are more challenging to identify and debias than biases based on explicit group affiliation, and which could have tremendous societal implications.
bottom of page