In college I took a medieval studies seminar on the origins of British common law. There I learned how chancery judges in 14th-century England decided property disputes by applying their own reasoning when testimonies conflicted and records were unclear or incomplete. It was a formative class for me and one whose lessons I return to often. It has remained critical in my work advising companies, even though it didn’t contain a single number, management principle or economic theory.
Artificial intelligence is poised to profoundly affect American business, and executives are rethinking the skills they seek in colleagues and advisers. For my part, I’m reminded of the value of an education deep in reading, research and analysis — one that teaches how to approach problems with no definitive answers.
My college education taught me how to think, not what to think. That is what proves most useful when navigating strategic and financial decisions in a technology-driven world. Today, A.I. is near the center of every boardroom discussion I witness. Roughly half the S&P 500 companies mentioned A.I. on their most recent earnings calls, and they speak of the new technology as a force that no serious institution can ignore.
There is little doubt A.I. will be transformative. And yet, for all the disruption it promises, I am struck by how much will remain unchanged. The most consequential decisions in business have never been about processing information faster or detecting patterns more efficiently. The most salient concerns are questions such as what kind of enterprise a firm should aspire to be, what culture it should embrace, what risks it should tolerate and how its leaders can plan when the path forward is unclear. These are questions of judgment, and judgment cannot be automated — at least not any time soon.
What do I mean by “judgment”? The capacity to arbitrate among competing values and differences of opinion, to weigh considerations that matter independently but cannot be satisfied all at once, to consider several paths of action and alight on the best one. Judgment is the faculty we rely on when trade-offs are unavoidable and the right answer is not waiting to be computed. It is a uniquely human skill.
Not long ago, a client was seeking to sell a portion of its business to raise capital. The company faced a choice: offload a high-growth unit that was volatile and not central to the business or a slower-growth unit that better aligned with the company’s mission. An A.I.-assisted analysis recommended selling the slow-growing business, which made sense in purely mathematical terms. But judgment dictated the other path: Sell the volatile unit and improve the other one. Even though it’s too soon to know which path will be correct, I was struck by the certainty of A.I.’s answer.
A.I. excels at pattern recognition, code generation and the synthesis of large volumes of text and data. But someone still has to decide whether an analysis is trustworthy, what implications it carries and whether acting on it would be wise. Someone has to connect the output of a model to the broader context, such as the political environment, the regulatory landscape and the human dynamics at play.
Last year a board asked how much it should offer for a business it hoped to acquire. We fed a huge amount of data into an A.I. model, which suggested a valuation that seemed low to me. The chief executives of the two companies had a fraught relationship, and we debated whether the deal price would need to be higher. Sure enough, when the company offered the A.I.-recommended price, it was rejected. A higher offer was later accepted, and the transaction was secured. The model could not account for the interpersonal dynamics. Judgment could.
This is why I doubt that A.I. will soon match human cognition or that the defining skills of the next generation of professionals will be narrowly technological. Technical fluency matters, of course. But in a world of abundant machine intelligence, the most durable advantage will be broad intellectual range.
There is a tendency in higher education and in business to push people toward specialization. A.I. accelerates that pressure. If a machine can do the general work, the conventional wisdom goes, humans should retreat to the specific. I believe the opposite. As routine analysis becomes automated, what distinguishes professionals is the ability to synthesize across domains, to see patterns that specialists miss, to exercise judgment.
Today, when hiring, leaders I know look for what might be called a generalist with judgment: someone analytical and adaptable, nimble enough to learn new skills and become reasonably conversant in unfamiliar subjects. The best candidates share a quality no machine can replicate. They think independently, navigate ambiguity without waiting for instruction, analyze the questions that were not asked but should have been and own their decisions. They use A.I. as a tool, not a crutch.
That medieval law seminar of long ago prepared me for this world better than my courses in finance and economics did. Just as those judges centuries ago rendered decisions amid reasonable disagreement, with evidence that was mixed and incomplete, professionals today must exercise judgment where machines cannot.
Blair Effron is a co-founder of Centerview Partners, an investment bank.