As educators, we are duty bound to defend — and advance — human intelligence. To do that, we first need to recognize that it is under attack. Science fiction has long depicted a future in which artificial intelligence becomes so strong, it overpowers humanity. In fact, the battle between bots and brains has already begun, and educators can see how it might end. Young people are quickly becoming so dependent on A.I. that they are losing the ability to think for themselves. And rather than rallying resistance, academic administrators are aiding and abetting a hostile takeover of higher education.
Hoping to win recognition as leaders in A.I. or fearful of being left behind, more and more colleges and universities are eagerly partnering with A.I. companies, despite decades of evidence that education technology often fails to deliver measurable improvements in student learning and therefore needs rigorous testing. A.I. companies are increasingly exerting outsize influence over higher education and using these settings as training grounds to further their goal of creating artificial general intelligence (A.I. systems that can substitute for humans).
Given the increased use of A.I. tools, university leaders like me have little choice but to negotiate terms for student and faculty A.I. access, if only because we are legally required to protect sensitive student information. But this doesn’t constitute a real partnership of equals.
In diplomacy, you know you are dealing with an adversary when it sows division in your ranks. Anthropic, for instance, is demanding exorbitant fees for enterprise accounts and paying “campus ambassadors” to promote the use of its Claude A.I. tools in schools. Other companies promise cash bonuses when students meet marketing goals. That creates conflicts of interest, especially when these paid ambassadors hold elected positions in student government.
A Columbia undergrad, Roy Lee, bragged about developing an A.I. tool to cheat his way through online tech job interviews. The venture capital firm Andreessen Horowitz expressed admiration for his “bold approach,” explaining that “behind the scenes his moves are rooted in deliberate strategy and intentionality.” The firm helped raise $15 million to help start the company Mr. Lee co-founded, which said it wants to help users “cheat on everything.”
Some in Silicon Valley say they want to see their tools used responsibly and in ways that maintain integrity in education. OpenAI’s chief executive, Sam Altman, has said educators “should lead this next shift with A.I.,” and OpenAI claims it is building “tools for educators” that “help them lead the way.” But behind the scenes, its actions say otherwise. Years ago it developed a technology that was 99.9 percent accurate in detecting work generated by the company’s ChatGPT. Senior executives had internal debates about whether to let educators have access to the tool. They opted not to. Among the reasons: A survey showed that putting invisible patterns called watermarks on ChatGPT-generated text might lead some users to switch to a competing product.
In reality, A.I. companies seem to look at college students as a strapped customer base to hook when they are most stressed. In April, Mr. Altman announced that ChatGPT Plus would be free for college students during finals. Two weeks later, Google offered free access to its premium A.I. service for the full academic year. Perplexity staged a competition in 2024 in which students at colleges with a high number of sign-ups got its top-tier A.I. program free for a year. Over the past year, some professors have reported a sharp decline in student questions.
The A.I. industry’s ambitions go further. OpenAI wants an army of bots to “become part of the core infrastructure of higher education,” which, from an administrator’s point of view, could mean a role in everything from admissions decisions to academic advising. Google tells my students they will “learn faster and deeper” if they upload lecture recordings to NotebookLM (Columbia and other institutions prohibit recording lectures without permission). Universities have no access to the data that students, faculty and other staff members upload to these systems. This makes it impossible to ensure that A.I. tools on campus are being used responsibly, both to support education and to prevent harm.
It is still too early to know how A.I. usage affects young people’s ability to learn. But research suggests that students using A.I. do not read as carefully when doing research and that they write with diminished accuracy and originality. Students do not even realize what they are missing. But educators and employers know. Reading closely, thinking critically and writing with logic and evidence are precisely the skills people need to realize the bona fide potential of A.I. to support lifelong learning.
Some educators are finding ways to harness A.I. to boost intellectual engagement and encourage creative exploration. Others are now so mistrustful of Silicon Valley that they prohibit any use of A.I., leaving students to figure out on their own how it can be used ethically and effectively. The headlong pursuit of A.G.I. has not just diminished the education of young people, the foundation of future progress; it has significantly hampered building support for developing systems that might help make students smarter.
History shows that wars can be lost before they are even declared if defenders surrender strategic terrain without a struggle. For universities, that terrain is the ultimate high ground: human intelligence itself. If we do not fight for it now, those who come after us will face an even more unequal struggle.
Matthew Connelly is a professor of history and a vice dean for A.I. initiatives at Columbia University.