It’s becoming clearer with every passing day that the only people making a serious effort to come to grips with the implications of artificial intelligence for society aren’t legislators, or business leaders, or AI promoters themselves. They’re judges.
Indeed, in recent weeks, judges in two federal cases have drawn a line that seems to have eluded many others contemplating AI. The cases relate to copyright law and attorney-client privilege.
In both cases, the judges have effectively declared that AI bots are not human. They don’t have rights reserved for people, and their outputs don’t deserve to be treated as though they come from human intelligence or have any special high-tech standing.
There’s more to those cases than that. Both cases, including one that got as far as the Supreme Court, underscore the determination of AI promoters and users to push the new technology ever deeper into society.
Start with the more recent case. On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the U.S. Court of Appeals for the District of Columbia Circuit, which held that art created by non-humans can’t be copyrighted.
The case revolved around a 2012 painting titled “A Recent Entrance to Paradise,” depicting train tracks running under a bridge and disappearing into vegetation. Thaler wrote in his application for a copyright that the “author” of the work was his “Creativity Machine,” an AI tool, and that the work was “created autonomously by machine.”
The appellate ruling didn’t engage in artistic criticism, but the work’s artificial origin might be manifest to the discerning eye — its landscape is busy yet indistinct, sort of a melange of green and purple, and the framing doesn’t have any artistic logic — the eye doesn’t know what it’s supposed to be following. But Thaler says it’s the AI bot’s creation and wasn’t generated in response to any user prompt.
In any event, for Judge Patricia A. Millett, who wrote the opinion for a unanimous three-judge panel, the case wasn’t a close one. She cited longstanding regulations of the Copyright Office requiring that “for a work to be copyrightable, it must owe its origin to a human being.”
Millett noted that Thaler hadn’t bothered to conceal the non-human origin of “A Recent Entrance,” acknowledging in court papers that the painting “lacks human authorship.” She rejected Thaler’s argument, as had the federal trial judge who first heard the case, that the Copyright Office’s insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed.
Thaler told me he didn’t see the Supreme Court’s turndown as a “legal defeat.” In a LinkedIn post about the case, he wrote that the decision “represents a philosophical milestone — one that exposes how deeply our intellectual property system struggles to confront autonomous machine creativity.”
As that suggests, Thaler believes we shouldn’t distinguish how we view human creations from machine outputs. “Intelligence, creativity, and invention are not limited to human products,” he told me by email. Autonomous computational systems such as his AI program, he said, “can generate these functions independently.”
Millett’s ruling actually opened the door to admitting AI into the copyright world — but only when it’s used as a tool by a human author. What set Thaler’s case apart from those uses, she wrote, was his insistence that his AI bot was the “sole author of the work” (emphasis hers), “and it is undeniably a machine, not a human being.”
That brings us to the second case, which involved the question of whether an AI bot’s work should be protected under attorney-client privilege. Federal Judge Jed S. Rakoff of New York ruled, concisely, “The answer is no.”
As I’ve written in the past, Rakoff is one of our most percipient jurists about the impact of new technologies on the law. In his occasional essays for the New York Review of Books, he’s examined how a secret AI algorithm has skewed the sentencing of criminal defendants (especially Black defendants), how cryptocurrency advocates have made a tangle of existing laws on fraud, and how the misuse of cognitive neuroscience has resulted in convictions based on false memories.
In other words, Rakoff isn’t a judge you should try snowing with technological flapdoodle.
The case involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded not guilty and was released on $25-million bail. The case is pending.
According to a ruling Rakoff issued on Feb. 17, the issue before him concerned exchanges that Heppner had with Claude, the chatbot developed by the AI firm Anthropic, written versions of which were seized by the FBI when it executed a search warrant at Heppner’s property.
Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner’s lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn’t be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers’ notes and other similar material.)
That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude’s responses with his lawyers.
Rakoff made short work of this argument. First, he ruled, the AI documents weren’t communications between Heppner and his attorneys, since Claude isn’t an attorney. All such privileges, he noted, “require, among other things, ‘a trusting human relationship,’” say between a client and a licensed professional subject to ethical rules and duties.
“No such relationship exists, or could exist, between an AI user and a platform such as Claude,” Rakoff observed.
Second, he wrote, the exchanges between Heppner and Claude weren’t confidential. In its terms of use, Anthropic claims the right to collect both a user’s queries and Claude’s responses, use them to “train” Claude, and disclose them to others.
Finally, Rakoff wrote, Heppner wasn’t asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to “consult with a qualified attorney.”
In his ruling, Rakoff did make an effort to address the broader questions judges face in dealing with AI. “Only three years after its release,” he wrote, “one prominent AI platform is being used by more than 800 million people worldwide every week. Yet the implications of AI for the law are only beginning to be explored.”
He concluded that generative artificial intelligence “presents a new frontier in the ongoing dialogue between technology and the law…. But AI’s novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege and the work product doctrine.”
In this case and elsewhere, Rakoff has shown a superb grasp of technology issues. In his 2021 essay about the AI algorithm capable of sending people to jail, he put his finger on the factor that makes the very term “artificial intelligence” a misnomer.
The term, he wrote, tends to “conceal the importance of the human designer….It is the designer who determines what kinds of data will be input into the system and from what sources they will be drawn. It is the designer who determines what weights will be given to different inputs and how the program will adjust to them. And it is the designer who determines how all this will be applied to whatever the algorithm is meant to analyze.”
He’s right. That’s why judges have had so much trouble determining whether the AI engineers feeding information into chatbots to make it seem like they’re “creative” and even “sentient” are infringing the copyrights of the original creators of that information, or creating something new.
The problem is that they’re asking the wrong question. Everything an AI bot spews out is, at the most fundamental level, the product of human creativity. The AI bots are machines, and portraying them as though they’re thinking creatures like artists or attorneys doesn’t change that, and shouldn’t.
The post In two new court cases, judges find that AI does not have human intelligence appeared first on Los Angeles Times.