I recently learned of a new way people are using artificial intelligence. “Based on everything you know about me,” they ask ChatGPT, “draw a picture of what you think my current life looks like.”
Like any capable carnival mind reader, ChatGPT appears to mix safe bets with more specific details. It often produces images of people sitting in a home office with a computer. Perhaps an acoustic guitar sits in the corner or an orange cat prowls in the background. But on occasion, something like, say, a large head of broccoli sits in the middle of the desk.
Off-kilter elements like that are what give these portraits not just their quirky charm but also flashes of epiphany. By absorbing the wide-ranging mix of work questions, personal goals and everything else that makes up our ChatGPT history, the system teases out patterns and connections that may not be readily apparent. In this way, these portraits don’t just reflect. They also reveal. Presented with such depictions, a user may be compelled to ask: Am I really mentioning cruciferous vegetables in my chats so often that ChatGPT thinks they’re a central part of my life?
As a board member at Microsoft and an early funder of ChatGPT’s developer, OpenAI, I have a significant personal stake in the future of artificial intelligence. But my stake is more than just financial. I truly believe that by giving billions of people access to A.I. tools they can use in whatever ways they choose, we can create a world where A.I. augments and amplifies human creativity and labor instead of simply replacing it.
That’s why I find these ChatGPT portraits so fascinating: They clarify and dramatize enduring concerns about identity and privacy in the digital age. How much, exactly, is ChatGPT remembering? they implicitly ask. How judiciously is it processing these memories, and who benefits most when it does? As a user of these technologies, do you sense that you’re being monitored in ways that make you feel as if you’re being exposed, controlled and manipulated? Or do you feel seen?
Few truly powerful technologies come without any risks. Perhaps third parties with different motives and values from your own will somehow gain access to the data. Once made aware of your past patterns, these third parties might be able to effectively anticipate and influence your future decisions. While I recognize that some people see such risks as disqualifying, what I’ve found through my own experiences is that sharing more information in more contexts can also improve people’s lives.
In our concern about potential harms, it can be easy to overlook the many positive effects technology has had. I co-founded LinkedIn, a professional social network, more than two decades ago, but I still get a steady flow of missives from people who have found jobs, started businesses or made promising career changes because of interactions they’ve had on the platform. And this is all because they’re willing to share information about their work experiences and skills in ways that were once considered both imprudent and impractical.
Tech skeptics have long used the adjective “Orwellian” to cast everything from a video recommendation feature to turn-by-turn navigation apps as threats to individual autonomy, but the history of technological innovation in the 21st century tells a different story. In “1984,” George Orwell’s classic novel of state oppression, powerful telescreens enable a totalitarian regime to rule over dispossessed proles with unchecked omnipotence. But today we live in a world where individual identity is the coin of the realm — where plumbers and presidents alike aspire to be social media influencers and cultural power flows increasingly to self-made operators, including the one-man podcasting empire Joe Rogan, the YouTube megastar MrBeast and the human rights activist Malala Yousafzai.
I believe A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it.
Imagine A.I. models that are trained on comprehensive collections of your own digital activities and behaviors. This kind of A.I. could possess total recall of your Venmo transactions and Instagram likes and Google Calendar appointments. The more you choose to share, the more this A.I. would be able to identify patterns in your life and surface insights that you may find useful.
Decades from now, as you try to remember exactly what sequence of events and life circumstances made you finally decide to go all-in on Bitcoin, your A.I. could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we’re often barely aware of as we create them, much less days, months or years after the fact.
When you’re trying to decide if it’s time to move to a new city, your A.I. will help you understand how your feelings about home have evolved through thousands of small moments — everything from frustrated tweets about your commute to subtle shifts in how often you’ve started clicking on job listings 100 miles away from your current residence.
For those who choose to pursue this new reality, the tools that make it possible are multiplying and evolving rapidly. Developers large and small have been introducing apps and features that enable you to automatically record, store and analyze virtually anything — or everything — you do on your PC, phone and other devices. In doing so, they turn such data into the material for a de facto second self, one that can endow even the most scatterbrained among us with a capacity for revisiting the past with a level of detail even the novelist Marcel Proust might envy.
There’s more to this shift. While critics of Big Tech often emphasize how A.I. can empower corporations to use people’s data for manipulation or discrimination, we can also deliberately design A.I. to give individuals greater facility to derive insights from their own data. What if you had an A.I. that could analyze your browsing patterns and alert you when advertising algorithms were successfully manipulating your purchasing decisions? Or one that could detect when social media algorithms were steering your attention toward increasingly extreme content?
Do we lose something of our essential human nature if we start basing our decisions less on hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism? Or do we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment?
To some degree, we all self-track and always have. We make to-do lists and keep journals of our daily activities. We weigh ourselves and record our daily steps or the number of miles we jog, generally in pursuit of some kind of self-improvement or at least self-awareness. Ultimately, ongoing cycles of reflection, action, assessment and refinement are how humanity progresses and expands what it even means to be human.
So imagine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6.
Instead of functioning as a means of top-down compliance and control, A.I. can help us understand ourselves, act on our preferences and realize our aspirations. In this way, perfect recall isn’t just a tool for remembering the past. It’s also a compass that provides a clearer understanding of our goals and improves our decision-making. It transforms our digital trails from passive records of who we were into dynamic resources, empowering us to shape who we wish to become — with greater self-awareness and freedom to live lives of our own choosing.
The post A.I. Will Empower Humanity appeared first on New York Times.