Stay Current on Political News—The US Future
Education

Feedback Bias? How AI Adjusts Replies Based on Race and Gender, Research Finds

Sarah Mitchell
Published April 28, 2026

The AI models addressed female students with more warmth and used more first-person pronouns ("I love your confidence in expressing your opinion!"). Students labeled as unmotivated were greeted with optimistic encouragement. In contrast, students described as high-performing or motivated were more likely to receive direct suggestions and criticism aimed at improving their work.

Different words for different students.

These are the 20 most statistically significant words that the AI models used in feedback for students of different races and genders. The words shown to black, Hispanic, and Asian students are compared with those shown to white students; the words shown to women are compared with those shown to men. Underlined words indicate evaluative judgments of the writing, italicized words reflect the tone used to address the student, and plain words refer to the content of the feedback. (Source: Table 4, "Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback" by Mei Tan, Lena Phalen, and Dorottya Demszky)

In other words, the AI feedback differed both in its tone and in the expectations it set for the student. The paper, "Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback," has not yet been published in a peer-reviewed journal, but it was nominated for best paper at the 16th International Conference on Learning Analytics and Knowledge in Norway, where it is scheduled to be presented on April 30.

The researchers describe the results as showing a "positive feedback bias" and a "feedback withholding bias": more praise and less criticism for some groups of students. While the differences in any single written comment might be hard to notice, the patterns were evident across hundreds of essays.

The researchers believe the AI changes its feedback on identical essays because the models are trained on large amounts of human language. Human teachers, too, may soften criticism when responding to students from certain backgrounds, sometimes because they don't want to seem unfair or discouraging. "They are reproducing the biases that humans exhibit," said Mei Tan, the study's lead author and a doctoral student at the Stanford Graduate School of Education.

At first glance, differences in feedback may not seem harmful. More encouragement could boost a student’s confidence. Many educators maintain that culturally responsive teaching (recognizing students’ identities and experiences) can increase student engagement in school.

But there is a trade-off.

If some students are constantly protected from criticism while others are pressured to sharpen their arguments, the result can be an unequal opportunity to improve. Praise can motivate, but it doesn’t replace the kind of direct, specific feedback that helps students grow as writers. Tanya Baker, executive director of the nonprofit National Writing Project, recently heard a presentation of this study and said she was concerned that black and Hispanic students were not “pushed to learn” how to write better.

This raises a difficult question for schools as they adopt AI tools: When does helpful personalization cross the line into harmful stereotypes?

Of course, teachers are unlikely to explicitly tell AI systems a student’s race or background like the researchers did in this experiment. But that doesn’t solve the problem, the Stanford researchers said. Many educational databases and learning platforms already collect detailed information about students, from their previous achievements to their linguistic status. As AI is integrated into these systems, it can have access to much more context than a teacher would consciously provide. And even without explicit labels, AI can sometimes infer aspects of identity from the writing itself.

The bigger problem is that AI systems are not neutral tutors. Even the default feedback (given when the researchers did not describe any personal characteristics of the student) takes a particular approach to teaching writing, one Tan described as fairly daunting and focused on corrections. "Maybe one conclusion is that we shouldn't leave pedagogy to the large language model," Tan said. "Humans should be in control."

Tan recommends that teachers review written comments before forwarding them to students. But one of the strengths of AI feedback is that it is instantaneous. If the teacher needs to review it first, that slows it down and potentially undermines its effectiveness.

AI also offers potential for personalization. The risk is that, without careful attention, such personalization could lower the bar for some students and raise it for others.

This story about AI bias was produced by The Hechinger Report, an independent, nonprofit news organization covering education. Sign up for Proof Points and other Hechinger newsletters.
