On Thursday morning, I sat down with one of my chatbots and asked it to round up the best takes on a recent social media controversy. The results were unsatisfying — hallucinations, apologies and search results that weren’t what I’d asked for. After several prompts and corrections, the chatbot seemed to give up. Shortly thereafter, so did I.
Fortunately, I was intimately familiar with this controversy, since I touched it off. In social media parlance, I was “the main character,” so I already had plenty of raw material and could see how badly ChatGPT had failed.
But if you’re hoping for a column on why artificial intelligence is useless, I regret to disappoint. I rarely read its summaries and never let it touch my copy directly, but it’s still enormously helpful as a super search engine, data downloader and interlocutor to steelman opposing views. It also works as a supplementary fact-checker (before it goes to the human ones) and to suggest clarifications and cuts so I can hand my editor cleaner copy.
Ironically, saying this on X is what caused all the trouble: Many people think that using AI at any stage of the writing process amounts to outsourcing your thinking to a machine, and they reacted badly to a journalist suggesting some AI use might be all right.
Obviously, I disagree, but I recognize those folks are grappling with important questions, such as “What is writing for?” and “Which uses of AI serve those purposes, and which undermine them?”
The people who want AI to be off-limits are right that technology changes how you think and write. I am old enough to have done creative writing in longhand and then on a typewriter, before I got my first computer. Something was lost in each transition, because the slowness and forced rewriting of the old methods improved the text in certain ways. But they also raised the cost (in time and effort) of making changes, and ultimately most writers decided the new ways were worth it.
Most writers have already made that same decision with machine learning. If you’ve used Google, you’ve allowed a complex algorithm to shape what you know and how you think. We’re not arguing about whether machines can ever touch our work. We’re arguing about where to draw the line.
My line is that I outsource tedious tasks such as “searching the web” or “finding data buried in the footnotes” or “clicking through janky websites.” I use AI judiciously to play roles that other humans have always played for writers, such as sounding board or fact-checker, but never involve it in outlining or writing a column or editorial. I draw the line there because my answer to the question “What is writing for?” is that writing — my kind of writing, at least — is a way that humans learn together.
Relying on AI summaries or using AI copy short-circuits the work essential to real learning. College term papers have so little value that people must be paid to read them, yet we make students write them because the merit is in the struggle: developing opinions, trying to lay them out in order, discovering what’s missing or wrong, and tearing down the whole framework and rebuilding it several times.
Used properly, AI can be a way to struggle harder — with better data, more reading, firmer comprehension or sharper criticism — and to do more of those things within a project’s limited time frame. Unfortunately, what makes AI an excellent struggle machine also makes it a top-notch struggle avoider. Like most professional writers, I’m appalled that British journalist Alex Preston used AI to pad out a New York Times book review — even though I’d have been fine if he’d just used it to change “petrol” to “gas.” Using it to provide the actual copy violated the trust of readers, who could presumably have queried a chatbot themselves if they wanted a machine’s opinion.
No one wants journalism to end up like those “hand-highlighted” Thomas Kinkade paintings: a flat expanse of mass-produced schlock sprinkled with a dusting of human glitter in the final touch-up. That makes a hard no very appealing — if you aren’t using AI at all, you can’t be tempted to use it the wrong way. But I doubt that particular line can hold.
Machine learning is simply too useful, and it will tempt even hardcore AI opponents in a thousand ways — searching for half-remembered citations, reading untranslated archives in languages you don’t speak, collecting documents scattered across dozens of badly designed webpages. Each of those uses will shape what we know and how we think, just as search and social media algorithms have. Each successful use will invite more use.
There will be artisanal holdouts who reject all those possibilities, but I doubt they’ll be a majority. So for the foreseeable future, the rest of us will be figuring out where to draw the lines, knowing that some lines will be crossed by others, if not erased entirely. The best we can hope for is that in the struggle to draw and redraw them, we’ll learn where they belong.
The post My AI admission started a firestorm. Here’s some water. appeared first on Washington Post.