A.I. Is Coming to Class. These Professors Want to Ease Your Worries.

January 17, 2026

The front line in the debate over whether and when university students should be taught how to properly use generative artificial intelligence runs right through Benjamin Breyer’s classroom at Barnard College in Manhattan.

The first-year writing program in which he teaches at Barnard generally bans the use of generative A.I., including ChatGPT, Claude, Gemini and the like, which eagerly draft paragraphs, do research and compose essays for their users.

The program’s policy statement warns students that A.I. “is often factually wrong, and it is also deeply problematic, perpetuating misogyny and racial and cultural biases.”

Wendy Schor-Haim, the program’s director, runs screen-free classes and shows students how she uses different colored highlighters to annotate printed texts. She has never tried ChatGPT.

“Students tend to use it in our classrooms to do the work that we are here to teach them how to do,” she said. “And it is very, very bad at that work.”

But she has made an exception for Professor Breyer, who is determined to see if he can use A.I. to supplement, not short-circuit, the efforts of students as they study academic writing. In that sense, Professor Breyer represents a growing swath of writing and English professors who are trying to find positive uses for a technology that some of their colleagues remain dead set against.

At most universities, including Barnard, it remains up to professors to decide whether and how to allow A.I. use in their classrooms. College administrators often play a dual role, offering support for professors to make those decisions, even as they enable access to the tools through their computer systems.

Legacy academic organizations are evolving in their approaches. In an October 2024 working paper, for example, a task force of the Modern Language Association, which promotes humanities studies, leaned toward engagement, writing that first-year writing courses “have a special responsibility” to teach students how to use generative A.I. “critically and effectively in academic situations and across their literate lives.”

Professor Breyer has been at Barnard and Columbia University for 22 years and is keenly aware of his colleagues’ misgivings about A.I. But perhaps unexpectedly, his experiments with the technology have caused him to be a voice of reassurance to fellow professors that they will still have a pivotal role to play on the other end of this societal transition.

“This is no threat to us at present,” he said he tells them. “A.I. may help with the expression of an idea and articulating that expression. But the idea itself, the thing that’s hardest to teach, is still going to remain our domain.”

During the past two years, he and a computer programmer spent thousands of hours developing a chatbot they named Althea, after the Grateful Dead song. “I don’t think that I’m writing my own obituary by creating this at all,” he said. “It’s a tool.”

Professor Breyer’s conclusions about the strengths and weaknesses of generative A.I. come after three semesters of testing Althea in one of his writing sections and assessing how well those students did compared with those in an otherwise identical section that barred A.I.

The chatbot, which he developed with Marko Krkeljas, a former Barnard software developer, seeks to act as an interactive workbook, helping students practice skills such as annotating texts and drafting thesis statements. He has set its persona to be a “tutor at an elite liberal arts college” that prompts students to improve their answers, and it gives a “hard refusal” if a student asks it to write something.

Professor Breyer found that off-the-shelf bots were not good enough to teach the academic skills that students needed to help them engage in scholarly conversation. So he applied for about $30,000 in grants and additional technical support to build the tool, which uses a customizable OpenAI platform and exists on a website called academicwritingtools.com.

To train it to mimic his own feedback, he fed it transcripts of his lectures, the materials read in the course and samples of exemplary student work. To tamp down generative A.I.’s typical eagerness to please, he told it to be less flattering.
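The article does not publish Althea’s actual configuration, but the behavior it describes, a fixed tutor persona, a hard refusal of ghostwriting requests and instructions to hold back flattery, maps onto a standard system prompt for a chat model. The sketch below, which assumes the OpenAI Python client and uses a placeholder model name and illustrative prompt wording, shows only that general kind of setup, not Professor Breyer’s code.

```python
# A minimal sketch of a tutor-style chatbot configuration, assuming the OpenAI
# Chat Completions API. The persona text, refusal rule and model name here are
# illustrative stand-ins, not Althea's actual settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a writing tutor at an elite liberal arts college. "
    "Coach students as they practice annotating texts and drafting thesis statements, "
    "prompting them to improve their own answers with focused follow-up questions. "
    "Refuse outright any request to write sentences, paragraphs or essays for them. "
    "Do not flatter; give direct, specific feedback."
)

def tutor_reply(student_message: str) -> str:
    """Send one student message through the fixed tutor persona and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the article does not name the underlying model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("Is my thesis statement arguable, or just a summary?"))
```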

“I was fascinated by something that everybody seemed to think was the demon in the bottle that was going to kill us all if released,” he said. “And I thought, could you harness it?”

While many instructors are now experimenting with A.I., it is still unusual for English professors to develop a custom chatbot that they believe can improve on the problematic implications of the for-profit models. Another such teacher is Alexa Alice Joubin, an English professor and the co-director of the Digital Humanities Institute at George Washington University. She created her own A.I. teaching assistant bot that helps students refine research questions and summarize readings.

Professor Joubin sees her bot, which was codeveloped with a computer science student, Akhilesh Rangani, as a way to teach A.I. literacy with more safeguards to protect accuracy, personal privacy and intellectual property than the for-profit programs offer. It uses open-source software, and Professor Joubin is making it freely available to professors worldwide to customize for their own classes.

Students already turn to ChatGPT for help, she said. So it is better, she added, that they use a custom bot that is trained to ask reflective questions and responds only in bullet points rather than a product run by “tech bros” that does not have teaching in mind.

“This is a bit of our playground, our sandbox,” she said of her website, teachanything.ai. “Let’s see what A.I. can and cannot do, but inside a controlled environment.”

At Columbia, Matthew Connelly, a history professor who is also the vice dean for A.I. Initiatives, takes a hybrid approach, encouraging his students to use A.I. for research, even as he strongly discourages its use in their writing.

“Generative A.I. is going to be an incredibly powerful and probably superior way of doing research compared to, for example, text searching,” he said. But he finds A.I. “pernicious” in the writing process, he said, “because for a discipline like history, and most disciplines for that matter, writing is thinking.”

These experiments can yield disappointments. Professor Joubin said that she found her students tended to rush to ask the bot for reading summaries just before class, suggesting that they probably hadn’t done the required reading.

And after one year of testing, Professor Breyer found that overall, the students who didn’t use A.I. did better on his writing exercises than the students who used Althea.

“It couldn’t really improve the quality of the ideas,” he said.

But Professor Breyer was stubborn, so instead of scrapping the project, he refined Althea over the summer, training it to be less disruptive and to ask more focused questions. He reintroduced it this fall, getting positive reviews from many of the students in his A.I.-enabled section.

“With some of my other teachers, there’s like a fearing of A.I., but this is using it in a productive way,” said Abby Keller, a student, at the end of a recent class.

“It’s honest,” Riya Shivaram, another student, said. “It’s not looking to please people and not looking to prove you wrong.”

Charlotte Mills disagreed. “I very much prefer getting broad feedback, like from Professor Breyer, rather than getting feedback on every single step of the assignment,” she said.

The improvements to Althea helped. For the first time, students this past semester who used the bot did better on the exercises than those who didn’t. But the deeper effect, Professor Breyer said, was on his teaching. The process of refining the bot was “reciprocal,” he said, helping him learn to ask better, more directed questions of his students as he improved the tool.

Research by individual professors on how A.I. can be used constructively is in many ways late to the game. Corporations have already flooded the market with tools that can write college-level essays for students in seconds. They promote products that they say are educational, without research-backed evidence to show that they are actually helpful in learning, Professor Connelly said.

But with students already using A.I. en masse, standing on the sidelines is not going to help; for these professors, engagement seems like the only feasible option.

At Columbia, Professor Connelly is conducting a survey about A.I. use among first-year writing students and professors. Through his research, he is finding that instructors who have taught in the same way since they were teaching assistants are having to rethink what they are trying to achieve in their classes, a development that he finds ultimately positive.

“You can call it a work in progress, but that makes it sound lame,” he said. “It’s a very scary time, but I actually think this is an incredibly exciting time in higher education.”

Sharon Otterman is a Times reporter covering higher education, public health and other issues facing New York City.

