In a literary flourish long ago, Shantideva, an eighth-century Indian monastic, divulged what he called the “holy secret” of Buddhism: The key to personal happiness lies in the capacity to reject selfishness and accustom oneself to accepting others. A cornerstone of the Buddhist worldview ever since, Shantideva’s verse finds new, albeit unacknowledged, expression in two recent books: Jeff Sebo’s provocative, if didactic, “The Moral Circle” and Webb Keane’s captivating “Animals, Robots, Gods.”
Much like Shantideva, both authors make a selfish case for altruism: asking the reader, in Keane’s words, “to broaden — and even deepen — your understanding of moral life and its potential for change.” Sebo, an associate professor of environmental studies at N.Y.U. and an animal-rights activist, centers his argument on human exceptionalism and our sometimes contradictory desire to live an ethical life.
Those within the “moral circle” — be it ourselves, families, friends, clans or countrymen — matter to us, while those on the outside do not. In asking us to expand our circles, Sebo speeds past pleas to consider other people’s humanity, past consideration of chimpanzees, elephants, dolphins, octopuses, cattle or pets and heads straight to our moral responsibility for insects, microbes and A.I. systems.
A cross between a polemic and that introductory philosophy course you never took, Sebo’s tract makes liberal use of italics to emphasize his reasoning. Do A.I. systems have a “non-negligible” — that is, at least a one in 10,000 — chance of being sentient? he asks. If so (and Sebo never quite establishes that there is such a chance), we owe them moral consideration.
Reading his argument, however, one feels talked at rather than to. That is too bad, because we are in new territory here, and it could be interesting. People are falling in love with their virtual companions, getting advice from their virtual therapists and fearing that A.I. will take over the world. We could use a good introductory humanities course on the overlap of the human and the nonhuman and the ethics therein.
Luckily, Webb Keane, a professor in the department of anthropology at the University of Michigan, is here to fill the breach. Keane explores all kinds of fascinating material in his book, most of it taking place “at the edge of the human.” His topics range from self-driving cars to humans tethered to life support, animal sacrifice to humanoid robots, A.I. love affairs to shamanic divination.
Like Shantideva, he is interested in what happens when we adopt a “third-person perspective,” when we rise above our usual self-centered identities, expand our moral imaginations and take “the viewpoint of anyone at all, as if you were not directly involved.” Rather than drawing the boundary of the moral circle crisply, as Sebo would have it, Keane is interested in the circle’s permeability. “What counts as human?” he asks. “Where do you draw the line?” And, crucially, “What lies on the other side?”
Several vignettes stand out. Keane cites a colleague, Scott Stonington, a professor of anthropology and practicing physician, who did fieldwork with Thai farmers some two decades ago. End-of-life care for parents in Thailand, he writes, often forces a moral dilemma: Children feel a profound debt to their parents for giving them life, requiring them to seek whatever medical care is available, no matter how expensive or painful.
Life, precious in all its forms, is supported to the end and no objections are made to hospitalization, medical procedures or interventions. But to die in a hospital is to die a “bad death”; to be able to let go, one should be in one’s own bed, surrounded by loved ones and familiar things. To this end, a creative solution was needed: Entrepreneurial hospital workers concocted “spirit ambulances” with rudimentary life support systems like oxygen to bear dying patients back to their homes. It is a powerful image — the spirit ambulance, ferrying people from this world to the next. Would that we, in our culture, could be so clear about how to negotiate the imperceptible line between body and soul, the confusion that arises at the edge of the human.
Take Keane’s description of the Japanese roboticist Masahiro Mori, who, in the 1970s, likened the development of a humanoid robot to hiking toward a mountain peak across uneven terrain. “In climbing toward the goal of making robots appear like a human, our affinity for them increases until we come to a valley,” he wrote. When the robot comes too close to appearing human, people get creeped out — it’s real, maybe too real, but something is askew.
What might be called the converse of this, Keane suggests, is the Hindu experience of darshan with an inanimate deity. Gazing into a painted idol’s eyes, one is prompted to see oneself as if from the god’s perspective — a reciprocal sight — from on high rather than from within that “uncanny valley.” The glimpse is itself a blessing in that it lifts us out of our egos for a moment.
We need relief from our self-centered subjectivity, Keane suggests — hence the attraction of A.I. boyfriends, girlfriends and therapists. The inscrutability of an A.I. companion, like that of an Indian deity, encourages a surrender, a yielding of control, a relinquishment of personal agency that can feel like the fulfillment of a long-suppressed dream. Of course, something is missing here too: the play of emotion that can only occur between real people. But A.I. systems, as new as they are, play into a deep human yearning for relief from the boundaries of self.
Could A.I. ever function as a spirit ambulance, shuttling us through the uncanny valleys that keep us, as Shantideva knew, from accepting others? As Jeff Sebo would say, there is at least a “non-negligible” — that is, at least a one in 10,000 — chance that it might.
“Can A.I. Heal Our Souls?” appeared first in The New York Times.