As the researchers analyzed how students completed the work on their computers, they noticed that students who had access to AI or to a human were less likely to refer back to the reading materials. These two groups revised their essays mainly by interacting with ChatGPT or chatting with the human. Those who had only the checklist spent most of their time revising their own texts.
The AI group spent less time evaluating their essays and making sure they understood what the assignment asked them to do. The AI group was also prone to copying and pasting text that the bot had generated, even though the researchers had instructed the bot not to write directly for the students. (It was easy for students to get around this guardrail, even in a controlled laboratory setting.) The researchers mapped all the cognitive processes involved in writing and saw that the AI students were more focused on interacting with ChatGPT.
“This highlights a crucial issue in human-AI interaction,” the researchers wrote: “potential metacognitive laziness.” By that, they mean a dependence on AI assistance, offloading thought processes to the bot rather than engaging directly with the tasks needed to synthesize, analyze and explain.
“Students could become dependent on ChatGPT, using it to easily complete specific learning tasks without fully engaging in learning,” the authors write.
The second study, by Anthropic, was released in April at the ASU+GSV education investor conference in San Diego. In this study, Anthropic’s in-house researchers looked at how college students actually interact with the company’s AI chatbot, Claude, a ChatGPT competitor. This methodology is a big improvement on surveys of students, who may not remember exactly how they used AI.
The researchers began by collecting all the conversations over an 18-day period from people who had created Claude accounts using their university email addresses. (The study’s description says the conversations were anonymized to protect students’ privacy.) Then the researchers filtered those conversations for signs that the person was likely a student seeking help with academic work, such as research or studying. The researchers ended up with 574,740 conversations to analyze.
The results? Students mainly used Claude to create things (40 percent of conversations), such as building a coding project, and to analyze things (30 percent of conversations), such as analyzing legal concepts.
Creation and analysis are the most popular tasks that university students ask Claude to do for them
Anthropic’s researchers noted that these were higher-order cognitive functions, not basic ones, according to a hierarchy of skills known as Bloom’s taxonomy.
“This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems,” Anthropic’s researchers wrote. “There are legitimate worries that AI systems may provide a crutch for students, stifling the development of the foundational skills needed to support higher-order thinking.”
Anthropic’s researchers also noticed that students asked Claude for direct answers almost half the time, with minimal back-and-forth engagement. The researchers described how even when students did collaborate with Claude, the conversations may not have been helping students learn more. For example, a student might ask Claude to “solve probability and statistics homework problems with explanations.” That could involve “multiple conversational turns between the AI and the student, but still offload significant thinking to the AI,” the researchers wrote.
Anthropic couldn’t be sure whether it was seeing direct evidence of cheating. The researchers cited an example of students requesting direct answers to multiple-choice questions, but Anthropic had no way of knowing whether it was a take-home exam or a practice test. The researchers also found examples of students asking Claude to rewrite texts to avoid plagiarism detection.
The hope is that AI can improve learning through instant feedback and instruction customized for each student. But these studies show that AI is also making it easier for students not to learn.
AI advocates say that educators need to redesign assignments so that students cannot complete them by asking AI to do the work for them, and to teach students how to use AI in ways that maximize learning. To me, this seems like wishful thinking. Real learning is hard, and if there are shortcuts, it is human nature to take them.
Elizabeth Wardle, director of the Howe Center for Writing Excellence at Miami University, worries about what this means for both writing and human creativity.
“Writing is not about correctness or avoiding error,” she posted on LinkedIn. “Writing is not just a product. The act of writing is a way of thinking and learning.”
Wardle warned about the long-term effects of too much reliance on AI. “When people use it for everything, they are not thinking or learning,” she said. “And then what? Who will build, create and invent when we rely on AI to do everything?”
It is a warning that we should all heed.