Damien Charlotin is a senior research fellow at HEC Paris and the author of the Substack “Artificial Authority.”
Pity the lawyers: Whenever people want to dramatize the impact of artificial intelligence on white-collar work, they tend to reach for the legal profession as Exhibit A. Mustafa Suleyman, head of Microsoft AI, opined in February that legal tasks “will be fully automated by AI within the next 12 to 18 months.” A headline on a March op-ed from Richard Susskind, an authority on the intersection of tech and law, blared that “artificial intelligence could replace traditional lawyers by 2035.” And as newer, more advanced tools promising to revolutionize the profession just keep coming — Anthropic recently expanded its AI assistant’s legal offerings — it’s easy to assume lawyers are on the brink of extinction.
And yet: Lawyers have never been more numerous in the United States. Their ranks have swelled since 2020, with law school applications jumping nearly 40 percent over the last two cohorts. According to the American Bar Association, last year also saw the highest overall employment rate for law graduates ever recorded and the highest employment rate in jobs that require bar admission.
Amid the breathless hype and warnings of technological catastrophe, lawyers are doing fine, and they will continue to do so.
The replacement story often rests on a particular picture of what a lawyer does: reading documents, applying rules and producing text. Since AI can read, apply rules and produce text, the argument goes, lawyers are cooked. That picture is not entirely wrong, but it is the perception engineers have always had of the legal domain: Feed in the facts, apply the rule, return the output. Yet the reason lawyers exist (and command high prices for their services) is that law is shot through with ambiguity. If the rules ran themselves, no one would need us. Every step in the chain — reading, applying, producing — involves choices, some of which are genuinely difficult.
A better way to think about jobs is as bundles of tasks. Some bundles are loose: A job composed of a handful of discrete, repetitive, well-specified tasks can be peeled apart and the tasks automated one by one. Other bundles are tight, because the tasks reinforce one another and cannot be cleanly separated. The key example here is offered by radiologists, long predicted to be facing extinction due to AI. Despite the dire forecasts, their numbers keep growing, and they keep commanding ever-higher salaries.
Legal work is also hard to neatly separate. For instance, doing legal research and evaluating an argument are, for an experienced lawyer, often the same mental activity: A lawyer checks the argument by writing it. Pull those tasks apart, hand the writing to a machine, and verification suddenly becomes a separate, deliberate, expensive act — at least if you want to avoid landing in my database of courts sanctioning parties for filing “hallucinated” material. In fact, an irony is that automating the easy parts of a job often makes the hard parts harder, not easier.
This experiment has been run before. A decade ago, the emergence of e-discovery — software that could sift through millions of documents in litigation — was supposed to gut the contract attorney workforce and make junior litigators redundant; as a 2011 New York Times headline put it, “Armies of Expensive Lawyers, Replaced by Cheaper Software.”
This did not happen. Instead, the volume of discoverable material grew faster than the software could process it; those tools became part of the practice rather than a substitute for it; and the lawyers moved up the value chain or acquired new jobs and responsibilities. And, of course, the new procedural rules designed to govern e-discovery ultimately created additional opportunities for legal chicanery. This is the Jevons paradox in action: When something becomes cheaper, people use more of it, not less. Cheaper legal advice may simply result in more of it — used in places, by people and for matters that previously could not justify the cost.
Of course, “this has not happened before” does not mean it will never happen. Many consider AI to be fundamentally different from previous technologies. There is strength to this argument: Large language models not only automate discrete tasks but also produce plausible legal outputs in one go. But the bottleneck of legal practice was never the production of plausible-looking text; it was, and remains, deciding which legal answer is the right one for a given client in a given moment. That is a question of judgment, and judgment has not become cheaper.
There are extrinsic reasons the demand for law and legal services — and the professionals supplying these services — will grow. The growth of the legal profession in modern societies has tracked the ever-greater presence of laws and norms in our day-to-day lives. Every state, every jurisdiction, is afflicted by legislative inflation, a phenomenon unlikely to abate. And more rules mean more interpretation, more disputes, more advice, more filings — and more lawyers.
Moreover, were AI to prove so revolutionary as to create mass unemployment, the lawyers may do quite well from the wreckage. Disruption is, historically, excellent for the legal profession. The mass unemployment thesis often presupposes societies taking the blow lying down, an assumption that is unwarranted: The response to economic dislocation is rarely quiet acceptance. Someone has to draft the contracts, manage the flurry of lawsuits, litigate the liabilities and lobby the legislatures for new rules — every one of which is a billable event.
A more sophisticated take on the impact of AI on the legal profession focuses on entry-level jobs. This is a much more defensible position, one that seems to be borne out by the data, at least in the coding and software industry. But young lawyers are hired for all sorts of reasons, many of which have little to do with the marginal value of their legal research or drafting: They are an investment in the firm’s future partnership, a signaling device to clients and a bench for unpredictable surges of work. If the law firms of 2040 are going to need any humans at all — and they will — they need to keep hiring humans now.
All this is not to say nothing will change. The pyramidal law firm, built on the leverage of armies of associates billing out research and drafting, will have to find other things for those associates to do, and those things may look quite different from what juniors do today. Some of the work will be re-bundled; some will be repriced; some will migrate from law firms to in-house teams or to clients themselves. But reorganization is not replacement.
The profession will certainly look different in 2035, but the lawyer is here to stay. There will be more of them, not fewer. They will use AI — they would be fools not to — and they will, of course, charge you for the privilege.
This article, “AI won’t replace lawyers. It will create more of them,” appeared first in The Washington Post.