“I’m not going to respond to that,” Siri responded. I had just cursed at it, and this was my passive-aggressive chastisement.
The cursing was, in my view, warranted. I was in my car, running errands, and had found myself in an unfamiliar part of town. I requested “directions to Lowe’s,” hoping to get routed to the big-box hardware store without taking my eyes off the road. But apparently Siri didn’t understand. “Which Lowe?” it asked, before displaying a list of people with the surname Lowe in my address book.
Are you kidding me? Not only was the response incoherent in context, but only one of the Lowe entries in my contacts even included an address, and it was 800 miles away—an unlikely match compared with the store's. AI may never accomplish all of the things the tech companies say it will—but it seems that, at the very least, computers should be smarter now than they were 10 or 15 years ago.
It turns out that I would have needed an entirely new phone for Siri to have surmised that I wanted to go to the store. Craig Federighi, Apple’s senior vice president of software engineering, said in an interview last month that the latest version of Siri has “better conversational context”—the sort of thing that should help the software know when I’m asking to be guided to the home-improvement store rather than to a guy called Lowe. But my iPhone apparently isn’t new enough for this update.
I would need cutting-edge artificial intelligence to get directions to Lowe’s. This is effectively Apple’s entire pitch for AI. When it launched Apple Intelligence (the company’s name for the AI stuff in its operating systems) last year, the world’s third-most-valuable company promised a rich, contextual understanding of all your data, and the capacity to interact with it through ordinary phrases on your iPhone, iPad, or Mac. For example, according to Apple, you would be able to ask Siri to “send the photos from the barbecue on Saturday to Malia.”
But in my experience, you cannot ask even the souped-up Siri to do things like this. I embarked on a modest test of Apple Intelligence on my Mac, which can handle the feature. It failed to search my email, no matter how I phrased my command. When I tried to use Siri to locate a PDF of a property-survey report that I had saved onto my computer, it attempted to delegate the task to ChatGPT. Fine. But ChatGPT provided only a guide to finding a survey of a property in San Francisco, a city in which I do not live. Perhaps I could go more general. I typed into Siri: “Can you help me find files on my computer?” It directed me to open Finder (the Mac’s file manager) and look there. The AI was telling me to do the work myself. Finally, I thought I would try something like Apple’s own example. I told Siri to “show me photos I have taken of barbecue,” which resulted in a grid of images—all of which were stock photos from the internet, not pictures from my library.
These limitations are different from ChatGPT’s tendency to confidently make up stories and pass them off as fact. At least that error yields an answer to the question posed, albeit an inaccurate one. Apple Intelligence doesn’t even appear to understand the question. This might not seem like a problem if you don’t use Apple products or are content to rawdog your way to Lowe’s. But it does reveal a sad state of affairs for computing. For years, we’ve been told that frictionless interactions with our devices will eventually be commonplace. Now we’re seeing how little progress has been made toward this goal.
I asked Apple about the problems I'm having with Apple Intelligence, and it more or less confirmed that the product doesn't work—yet. Apple's position is that the 2024 announcement, featuring Malia and the cookout, represents a vision for what Siri can and should do. The company expects that work on functionality of this kind will continue into 2026, and it showed me a host of other forthcoming AI tools, including one that can recognize an event in a screenshot of a text message and add the info to a calendar, and another that can highlight an object in a photo and search for similar ones on Google or Etsy. I also saw a demo of live language translation on a phone call, updated AI-created emoji, and tools to refine what you've written inside emails and in Apple software. Interesting, but in my mind, all of these features change how you can use a computer; they don't improve the existing ways.
After rolling around in my head the idea that Apple Intelligence represents a vision for how a computer should work, I remembered that Apple first expressed this vision back in 1987, in a concept video for a product called Knowledge Navigator. The short film depicts a university professor carrying out the actions of daily and professional life by speaking directly to a personified software assistant on a tablet-like computer—all of the things I long to do with my computer 38 years later. Knowledge Navigator, per the video, could synthesize information from various sources, responding to a user's requests to pull up papers and data. "Let me see the lecture notes from last semester," the professor said, and the computer carried out the task. While the professor perused articles, the computer was able to identify one by a colleague, find her contact info, and call her at his request.
Although obscure outside computer-history circles, Knowledge Navigator is legendary in Silicon Valley. It built on previous, equally fabled visions for computing, including Alan Kay's 1972 proposal for a tablet computer he called the Dynabook. Apple would eventually realize the form of that idea in the iPad. But the vision of Knowledge Navigator wasn't really about how a device would look or feel. It was about what it would do: allow one to integrate all the aspects of a (then-still-theoretical) digital life by speaking to a virtual agent, Star Trek style. Today, this dream feels technologically feasible, yet it is still, apparently, just out of reach. (Federighi promised in the June interview that a better Siri was right around the corner, with "much higher quality and much better capability.")
Apple Intelligence—really, generative AI overall—emphasizes a sad reality. The history of personal-computer interfaces is also a history of disappointments. At first, users had to type to do things with files and programs, using esoteric commands to navigate up and down the directory structures that contained them. The graphical user interface, which Apple popularized, adapted that file-and-folder paradigm into an abstraction of a desktop, where users would click and move those files around. But progress produced confusion. Eventually, as hard disks swelled and email collected, we ended up with so much digital stuff that finding it through virtualized rummaging became difficult. Text commands returned via features such as Apple’s Spotlight, which allows a user to type the name of a file or program, just as they might have done 50 years ago.
But now the entire information space is a part of the computer interface. The location of and route to Lowe's get intermixed with the people named Lowe in my personal address book. A cookout might be a particular event I attended, or it might be an abstraction tagged in online images. This is nothing new, of course; for decades now, using a computer has meant being online, and the conglomeration of digital materials in your head, on your hard disk, and on the internet often causes trouble. When you're searching the web, Google asks whether you're really looking for the thing it deems more common, based on other people's behavior, rather than the thing you typed. And iCloud Drive helpfully uploads your files to the cloud to save disk space, but then you can't access them on an airplane without Wi-Fi service. We are drowning in data but somehow unable to drink from its wellspring.
In principle, AI should solve this. Services such as ChatGPT, built on large language models that are trained on vast quantities of online and offline data, promised to domesticate the internet's wilds. And for all their risk of fabrication and hallucination, LLMs really do deliver on that front. If you want to know whether a lens with specific properties exists for a particular camera model, or you want advice on how to carry out a plumbing repair, ChatGPT can probably be of use. But ChatGPT is much less likely to help you make sense of your inbox or your files, partly because it hasn't been trained on them—and partly because it aspires to become a god rather than a servant.
Apple Intelligence was supposed to fill that gap, and to do so distinctively. Knowledge Navigator never got built, but it was massively influential within the tech industry as a vision of a computing experience; it shows that Apple has expressed this goal for decades, if under different technological conditions and executive leadership. Other companies, including Google, are now making progress toward that aim too. But Apple is in a unique position to carry out the vision. It is primarily a personal-computer-hardware business focused on the relationship between the user and the device (and their own data) instead of the relationship between the user and the internet, which is how nearly every other Big Tech company operates. Apple Intelligence would make sense of all your personal information and grant new-and-improved access to it via Siri, which would finally realize its purpose as an AI-driven, natural-language interface to all that data. As the company has already done for decades, Apple would leave the messy internet mostly to others and focus instead on the device itself.
That idea is still a good one. Using a computer to navigate my work or home life remains strangely difficult. Calendars don’t synchronize properly. Email search still doesn’t work right, for some reason. Files are all over the place, in various apps and services, and who can remember where? If computationalists can’t even make AI run computing machines effectively, no one will ever believe that they can do so for anything—let alone everything—else.