A lot of humans are feeling very down on humanity these days. Maybe you’ve met them. Or maybe you’re one of them.
I’m talking about those who look around and say: Humans are destroying the planet — causing climate change, making other species go extinct. Soon enough we’ll be mucking up the cosmos, too — polluting it with still more space junk, colonizing the moon, even exporting data centers into the heavens. The world would be better off if we ourselves just went extinct!
One reader recently exemplified this rising anti-humanism by writing in to my philosophical advice column, Your Mileage May Vary, and telling me bluntly: “I’m disgusted to be a human.” I responded by reminding them that hating on humanity is neither a new nor an enlightened position. It lets us off the hook too easily, because it expects nothing of us.
But I’m also aware that this distaste for humanity isn’t only motivating old-school misanthropy these days.
It’s also motivating transhumanism, the movement that says we should use tech to proactively evolve our species into Homo sapiens 2.0. Transhumanists — who run the gamut from Silicon Valley tech bros to academic philosophers — do want to keep some version of humanity going, but definitely not running on the current hardware. They imagine us with chips in our brains, or with AI telling us how to make moral decisions more objectively, or with digitally uploaded minds that live forever in the cloud. All of this, they assert, will someday usher us into a utopian future where we transcend suffering and become as perfect and immortal as gods.
To better understand why a distaste for humanity is driving some people into the arms of transhumanism these days, I reached out to Shannon Vallor, a philosopher of technology at the University of Edinburgh and author of The AI Mirror. Vallor is a devoted humanist — but not a naive one. To her, being pro-human doesn’t mean being anti-technology. We talked about how classical humanism has failed to offer a compelling vision for the 21st century and beyond — and how we can still do better. Our conversation, edited for length and clarity, is below.
What’s driving transhumanism to become more popular these days?
We’re living in a world that digital technologies and social media have made more fragmented and alienating. We are busier, more tired, more lonely, more uncertain than ever about the future and what it holds. So we’re at a low point in our ability to place faith in our fellow humans. And instead of looking at the deeper causes of that — the breakdown of the social fabric and of institutions and of local networks of care — there is an attempt to normalize and naturalize anti-humanism.
It’s an attempt to treat it not as a symptom of some disease or malaise in society — which is how I see it — but rather to treat it as a new, more enlightened frame of mind. To say: If you’re a humanist, you’re somehow stuck in the past, you have this overly romantic attachment to humans, you’re committing a fallacy of exceptionalism.
And there is a history of humanism being inappropriately exceptionalist — for example, imagining that other living things can’t have feelings or intelligence or moral standing. So as we’ve surpassed those errors, it’s very easy to think: Oh, you just go one step further and decide that humans don’t really need to be part of the story, or they don’t need to be writing the story. And if you quiver or flinch at the notion of machines writing the story of the future, that’s just your parochial attachment.
Right, this is the accusation of “speciesism” that we hear a lot these days.
Exactly. At a very superficial intellectual level, this is all very plausible and appealing and seems very enlightened, right? But it’s rooted in a deep misconception of what it is to be human.
The reason why it’s mistaken for humans to place themselves at the center of all value and to see other living beings as mere tools has nothing to do with humans somehow being unimportant, or humans somehow being insignificant in the broad story. It’s rather a failure to understand that to be human is to be dependent upon this much bigger living system, and our value is inseparable and intertwined with the value of other living things. It’s not that humans are something to be cast aside.
Do you think the classical humanism that we’ve inherited from the Renaissance and the Enlightenment era is enough to meet the current moment? Or do we need a new humanism?
No. I do think we need a new humanism. And one of the reasons, of course, is because classical humanism, in addition to suffering from the flaws of speciesism, had a vision of the human that was itself heavily gendered and racialized. It was very much an ideal that is both unattainable and undesirable in its naive form: the idea of the individual, rational agent that is entirely self-determining and surpassing the more basic networks of care and concern that hold communities together. This Enlightenment version of humanism, which carried with it many of the flaws of European Enlightenment thinking more broadly — that’s not the kind of humanism that’s going to carry us into a sustainable future.
The most common pro-human response to AI that I see nowadays is this style of humanism that tries to say there are certain fixed traits that make humans unique, and that tries to locate value only in humans as they currently exist. It says: Let’s use tech to alleviate problems like disease but not try to augment the species.
To me, that feels insufficient as a guide. Because we’re all already transhuman in some sense, right? “Human” has never been a static category. Homo sapiens has always been evolving and augmenting itself, with everything from meditation and fasting to eyeglasses and antidepressants. A humanism that refuses to recognize that feels like it doesn’t offer a compelling vision for the future.
That’s the naive version of humanism. It’s the idea that there’s this blueprint for what a human is and that somehow technology, or any things that change us, take us away from that blueprint, when in fact we’ve been changing ourselves with language, with tools, with architecture, with culture, from the moment we climbed down from the trees.
I wrote about this in The AI Mirror, where I talked about the existentialist José Ortega y Gasset’s notion of “autofabrication” [literally, self-making]. From the beginning, humans have had to invent and reinvent themselves again and again. If there is anything unique about the human, it’s that as far as we know there’s no other creature that has to get up in the morning and decide if it’s going to live differently than it did the day before, or if it’s going to maintain the commitments and promises it’s made to itself or others.
This kind of identity construction is something that our cognitive makeup has given us, both as a blessing and a bit of a curse. It’s the responsibility to choose — and to not fall back on this idea that there’s a blueprint for what a human is supposed to be and that we’re just supposed to follow that blueprint.
I think people really crave a positive vision for the future that they can get behind. To you, what is the positive, humanist-but-not-naive-humanist vision?
Sometimes I think about this demand for a positive vision and I think about how unfair and unreasonable that demand is when the mere homeostasis of life on this planet, and of human life, is fragile. For a being whose future is threatened, survival is a positive future! Maintaining the strength and resilience of our form of life is a victory. And in a way, I think there’s a danger in the desire to immediately leap past that.
We have to look at the fundamental structural causes of the scarcity we face, and see the positive, exciting, mobilizing, motivating work as addressing those deficiencies. We should be able to be excited about doing that work.
I have two simultaneous reactions to this. The first is: Yes, we should be able to get excited about that. And I think if we had a cultural narrative that taught us that just the dynamism of being alive is itself the gift, we’d be better placed to think of sustainability as the thing to treasure.
My second reaction is: But people have this persistent hunger for a story about how we can overcome suffering and make things better than ever before — a transcendence narrative!
And that’s okay. What I want to say is, if you meet people’s basic needs, both as individuals and in community, they will naturally generate the instruments of transcendence.
When you give people the ability to be free from fear and free from imminent threat, and you get them out of this feeling that they’re in a lifeboat situation — that’s when people’s creative energy really kicks in.
I’m someone who loves animals — I’m a big birder, I’m obsessed with snorkeling, I just love exploring different kinds of minds. So I could feel excited about a future where we have a multitude of diverse intelligences — animals, conscious AIs, augmented humans, etc. Do you think part of a positive vision for the future could be an expanded space of different kinds of minds? Does that excite you at all?
Yeah! Look, I’m a giant sci-fi nerd. I spent my whole childhood living in imaginary worlds with other kinds of minds: talking animals, various hybrid human-animal creations, robots, artificial intelligences. There is nothing about my humanism that blocks a future where humans share the planet with many more kinds of minds than we have today.
What I resent is the exploitation of that excitement by tech companies to sell and impose harmful, unsafe technologies that pretend to be minds, that are disguised as minds. Claude is not [a mind]. Claude is a language model built to roleplay that.
I have no assurance that it’s possible to create a machine mind. But I also have no principled reason to think it’s impossible. And the vision that you described sounds wonderful. The problem is that it’s very easy for the AI industry to say: Ah, but that’s what we’re already giving you!
You said in a talk last year that you think maybe we should take a break from a certain kind of philosophizing about humanity’s future. But looking around at the political landscape, that feels like a luxury we can’t afford. The tech broligarchs have links to the authoritarian right. Some of them want to escape the control of democratic governments, so they’re trying to create their own sovereign colonies — whether that’s space colonies or “network states.” Given their influence, taking a break from trying to steer the future feels like capitulation at a time when capitulation is very dangerous.
I hear you. It does seem very dangerous to say that there shouldn’t be some kind of counter-philosophical-movement opposing that. But when I was saying that maybe we need to pause, what I was speaking of is the kinds of philosophical preoccupations that jump ahead of the obvious needs of the moment and serve as a perpetual distraction from those needs.
There is a certain kind of philosophy that I think we need to perhaps put on hold: It’s the philosophy of forget the present, forget the problems of the moment, think bigger, think about the universal point of view.
What I’m suggesting is that we need to ground ourselves in an ethos of sustainability, of care, of solidarity and mutual aid and repair of the systems that we need in order to have a future. That can be its own philosophy.
But it’s not a utopian kind of move. Utopia is very often used as an instrument of authoritarianism and it’s used as a way to rip people away from their present commitments and needs, and to distract them with a dream that relieves the pressure to address our current circumstances. I think that’s the opposite of what we need right now.
Yeah, this is the classic point made about Christendom — how it tells us: Just focus on getting to a good afterlife, don’t expect anything good from your life on Earth. Malcolm X called it “pie in the sky and heaven in the hereafter.” It’s one of the ways I often feel like transhumanism is weirdly doing Christendom’s bidding.
Oh absolutely, 100 percent. It’s strangely regressive, right? It’s bringing us back precisely to that worldview: Don’t worry about the feudal circumstances that you are presently in, because that’s going to be a distant memory soon, when the world of infinite abundance is delivered unto you. That story was effective for millennia. But it was one that we ultimately managed to break ourselves free from.
Right, and that was one of the genuinely great innovations of humanism: Let’s not just put all our faith in the beautiful hereafter, but let’s actually care about human lives here on Earth, now.