For better or worse, AI tools have steadily become a reality of the academic landscape since ChatGPT launched in late 2022. Anthropic is studying what that looks like in real time.
On Tuesday, shortly after launching Claude for Education, the company released data on which tasks university students use its AI chatbot Claude for and which majors use it the most.
Also: The work tasks people use Claude AI for most, according to Anthropic
Using Clio, the company's privacy-preserving data analysis tool, Anthropic analyzed 574,740 anonymized conversations between Claude and Free- and Pro-tier users with higher education email addresses. All of the conversations appeared to relate to coursework.
The company found that computer science students made up the largest group of Claude users, accounting for nearly 37%, compared to much lower adoption among business, health, and humanities students.
This is somewhat unsurprising, given that programming students are predisposed to know about AI tools and that Claude bills itself as a coding assistant. However, based on internal testing, our resident experts do not recommend Claude for programming compared to other chatbots.
[Chart: Common Claude queries based on discipline]
Anthropic categorized students' conversations with Claude into four roughly equally represented types: Direct Problem Solving, Direct Output Creation, Collaborative Problem Solving, and Collaborative Output Creation. The first two describe students seeking answers to a question or requesting finished content; the latter two describe students working back and forth with Claude to solve problems and create content.
Also: AI will change the trades too - and field service technicians cannot wait
Almost half of all conversations fell into the Direct categories, indicating students were "seeking answers or content with minimal engagement." In 39% of conversations, students appeared to use Claude to "create and improve educational content across disciplines," including by "designing practice questions, editing essays, or summarizing academic material." The next largest group, at 34%, showed students asking Claude to explain technical assignments or provide solutions, such as debugging code or breaking down math problems.
Students also used Claude to analyze data, develop tools, design research, make technical diagrams, and translate content.
Usage also varied by discipline: STEM students typically tapped Claude for problem solving and collaborative queries, while humanities, business, and health students split their time between collaborating with Claude and seeking direct outputs. Those in Education, a smaller category that likely includes teachers, used Claude to generate content in nearly 75% of conversations, such as creating lesson plans and other teaching materials.
Also: Microsoft is offering free AI skills training for all - and it is not too late to sign up
The findings also include some insights about how students might be using AI to cheat, a common concern within educational institutions. Anthropic flagged queries that asked for answers to multiple-choice questions about machine learning and responses to English test questions, as well as requests to rewrite texts so they would not be detected by plagiarism checkers.
That said, several examples show how a use case could indicate cheating as much as it could indicate routine study prep. "For instance, a Direct Problem Solving conversation could be for cheating on a take-home exam -- or for a student checking their work on a practice test," Anthropic notes. "Whether a Collaborative conversation constitutes cheating may also depend on specific course policies."
Anthropic also clarified that it would need to know the educational context in which Claude's responses were used to determine whether a given conversation constituted cheating.
Claude usage indicates several realities about AI and education -- some with more potential than others.
The company adapted Bloom's Taxonomy, an education framework that organizes cognitive processes from simple (lower-order) to complex (higher-order), to understand what Claude's uses mean for student skill development.
Overall, the data shows students used Claude to create in nearly 40% of queries and to analyze in about 30%. Both are considered complex cognitive functions, meaning students asked Claude to execute higher-order thinking in a combined 70% of queries.
"There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking," Anthropic's report warns.
Also: OpenAI research suggests heavy ChatGPT use might make you feel lonelier
While there is no way to tell whether using Claude is wholly replacing critical thinking for students, the company adds that it plans to continue its research to "better discern which [interactions] contribute to learning and develop critical thinking."
But it is not all bad. According to Anthropic, the way educators use Claude to create teaching materials "suggests that educational approaches to AI integration would likely benefit from being discipline-specific." Mapping the variations in how students in different fields use Claude could lead to more insights on this front in the future.
AI is very good at personalization -- using it to tailor lesson plans and better serve individual students, for example, has emerged as a strong potential use case. "Whereas traditional web search typically only supports direct answers, AI systems enable a much wider variety of interactions, and with them, new educational opportunities," Anthropic says of the findings, arguing Claude could be used to explain philosophical concepts or muscle anatomy, or create comprehensive chemistry study material.
Also: 5 reasons I turn to ChatGPT every day - from faster research to replacing Siri
That said, in practice, the quality of chatbot outputs is heavily reliant on training data. Discipline-specific AI may help with accuracy overall, but hallucinations are always a possibility. Chatbots also routinely distort news articles; users should always fact-check chatbot outputs by verifying that any cited links are real rather than hallucinated.
Anthropic noted it is "experimenting with a Learning Mode that emphasizes the Socratic method and conceptual understanding over direct answers," as well as partnering with universities.