We were contacted by Hema Thakur, a researcher exploring how artificial intelligence (AI) affects accessibility, especially for people with cognitive disabilities. She asked us to share her important findings, and we’re glad to do so.
The topic is complex but deeply relevant. Hema tested whether GPT-4, a popular AI tool, can simplify expert feedback without losing the point. The results raise important questions for anyone involved in creating accessible content, including writers, designers, and the developers behind large language models (LLMs).
To make the research easier to follow, we’ve presented this article in an Easy Read format, because clarity matters just as much as content.
The Accessibility Illusion
When AI Makes Text Simpler but Gets It Wrong
By Hema Thakur
Based on my research: Read the full study here
What This Article Is About
Making words easier to read is important. It helps many people, especially those with learning disabilities or memory problems.
But sometimes, tools that simplify text—like artificial intelligence (AI)—change the meaning too much. That can cause confusion instead of help.
What I Wanted to Find Out
I wanted to see how well GPT-4, an AI tool, could rewrite complex text. I tested it on hard-to-read peer review feedback written by experts in academic finance (the study of money and markets).
I asked the AI to make the feedback easier to read in two ways:
- One with a plain-language prompt (just “make this easier to read”)
- One with a prompt that mentioned people with cognitive disabilities (like dyslexia)
Then I checked if the AI kept the true meaning while making the words simpler.
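For readers who want to see what this kind of test can look like in practice, here is a minimal sketch (not the study's actual script) of how the two prompts could be sent to GPT-4 with the OpenAI Python library. The reviewer comment and the exact prompt wording below are invented for illustration.

```python
# Illustrative sketch only: the reviewer comment and prompt wording are
# assumptions, not the study's real materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reviewer_comment = (
    "The authors should address potential endogeneity in their event-study "
    "design before interpreting the speed and magnitude of the market reaction."
)

prompts = {
    "plain": "Make this easier to read:\n\n",
    "cognitive": (
        "Rewrite this so it is easy to read for people with cognitive "
        "disabilities such as dyslexia. Keep the original meaning:\n\n"
    ),
}

# Send the same comment with each prompt and print both outputs for comparison.
for name, prefix in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prefix + reviewer_comment}],
    )
    print(f"--- {name} prompt ---")
    print(response.choices[0].message.content)
```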
What I Found
1. The Meaning Was Often Lost
Let’s look at an example.
An expert wrote about a method that helps find out what caused a market to react the way it did, and how fast and how strong that reaction was.
The AI changed that into:
“Study how fast and how strongly the market reacts.”
The expert was talking about finding the cause behind the market’s reaction — not just watching how fast or strong it was. But the AI left out the cause part.
So instead of saying,
“We want to know why the market reacted this way,”
the AI just said, “Let’s see how the market reacted.”
That’s a big difference. It changes the whole point.
2. Some Terms Were Made Too Simple
The word “endogeneity” is a tough finance term.
The AI said it just means “hidden effects.”
But that’s not quite right.
Endogeneity means that the things being studied affect each other in ways the study cannot separate, so you cannot tell what is really causing what. This can make the results wrong or misleading.
Calling it just “hidden effects” is too simple and may give the wrong idea.
3. Some Changes Left Out Important Details
The AI turned the term “bounded rationality” into “limited thinking ability.”
Bounded rationality means people make decisions with limited time, information, or mental resources — not that their thinking is weak or flawed.
It’s about real-world limits, not personal ability. Simplifying complex ideas is good, but it should be accurate and respectful. It should not make people sound less capable than they are.
4. The AI Gave Different Answers Each Time
Sometimes, the AI kept key terms like “sensitivity analysis” but didn’t explain them. Other times, it replaced them with vague words like “double-checking your methods.”
This back-and-forth made it hard to trust the tool. It wasn’t consistent. And that matters when someone depends on clear, simple text.
5. Extra Details Didn’t Make a Big Difference
The outputs from the plain-language prompt (“make this easier to read”) and the one that mentioned cognitive disabilities (like dyslexia) were more or less the same.
So the extra detail did not really change how GPT-4 responded.
Hema kindly created this short, simple video to explain the research.
Why This Is a Problem
When we try to help people understand complex ideas, we need to keep the purpose of the message.
Making words easier should not mean losing what they mean.
In subjects like finance, science, or law, getting it wrong can lead to wrong actions, not just confusion.
What We Can Learn
Here are three things content writers and tech developers should do:
- Know the topic. A simple prompt isn’t enough. If you don’t understand the subject, AI won’t either.
- Always check the AI’s work. Don’t trust it blindly. Make sure the meaning stays true and respectful (one simple way to flag possible problems is sketched after this list).
- Train AI better. We must teach it not just how to simplify language, but how to keep the real ideas in place.
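As one possible aid for the second point, here is a rough sketch of an automated check that flags simplified text for human review when its meaning seems to have drifted from the original. It uses the sentence-transformers library; the example texts echo this article's market-reaction example, and the threshold is an arbitrary illustration. A score like this cannot replace a reader who knows the subject.

```python
# Rough illustration of a meaning-drift flag: compare embeddings of the
# original and simplified text and ask a human to review low-similarity pairs.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

original = (
    "This method identifies what caused the market to react the way it did, "
    "and how fast and how strongly."
)
simplified = "Study how fast and how strongly the market reacts."

# Encode both texts and compute cosine similarity (1.0 = very similar).
emb_original, emb_simplified = model.encode([original, simplified])
score = util.cos_sim(emb_original, emb_simplified).item()

print(f"Similarity: {score:.2f}")
if score < 0.8:  # threshold chosen only for illustration; tune for your content
    print("Flag for human review: the simplified text may have lost meaning.")
```

A check like this only orders the queue for human reviewers; it does not decide whether a simplification is faithful.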
Final Thought
Making content easy to read is important. But simple words alone are not enough; what matters most is that people understand the real message.
At the moment, GPT-4 can help, but it doesn’t always get it right.
We must do better for everyone.
Read the full study: Simplifying Peer Review for Accessibility: A Case Study on GPT 4’s Performance in Finance Using Cognitive-Informed Prompts
About the Author
Hema Thakur is Manager of Skill Development at Cactus Communications, where she has spent nearly a decade training academic editors and supporting early-career researchers through the challenges of scholarly publishing. Her work focuses on simplifying technical feedback, improving peer review responses, and making research communication more accessible, especially for those with diverse learning needs. She has delivered workshops across Asia, Latin America, and the Middle East in both English and Spanish, and regularly explores how AI tools impact research writing and accessibility. Hema holds a first-class degree in Banking and Finance from the University of London and previously served as an Alumni Ambassador for its International Programmes.
🔗 Social Science Space profile
🔗 Editage Insights profile