DNYUZ
Can We Stop Our Digital Selves From Becoming Who We Are?

December 7, 2025

“Attention is not neutral,” Antón Barba-Kay, a philosopher at University of California, San Diego, writes in “A Web of Our Own Making: The Nature of Digital Formation.” “It is the act by which we confer meaning on things and by which we discover that they are meaningful, the act through which we bind facts into cares.” When we cede control of our attention, we cede more than what we are looking at now. We cede, to some degree, control over what we will care about tomorrow.

The politics of attention are on my mind because a recent court case has sharpened the need to describe what, exactly, has gone wrong in our digital lives. In 2020, the Federal Trade Commission sued Meta for creating an illegal monopoly in the personal social networking market. Last month, a Federal District Court in Washington ruled in Meta’s favor.

The F.T.C. argued that there was a discrete market of personal social networking in which the only competitors were Facebook, Instagram, Snapchat and an app I’ve never heard of called MeWe. Meta’s rebuttal was simple: It also competes with TikTok and YouTube, among others. It might have begun life as a social network, but it is, today, something else entirely. Only 17 percent of time spent on Facebook is spent viewing content posted by friends. On Instagram, it’s 7 percent.

The ruling includes images showing how all these apps have come to resemble each other: Reels on Instagram and Facebook are virtually indistinguishable from Shorts on YouTube or videos on TikTok. I can look at Meta’s products and see they have responded agilely to the innovations of their competitors. It’s made me like Meta’s apps less, but I can’t deny that when I open them, I’m likelier to be drawn into a scrolling hole that I need to wrench myself back from. Competition seems, to me, to have made these apps better at fulfilling their corporate purpose and worse for human flourishing.

“Meta’s goal is to get users to spend as much time on its apps as possible, and it tunes its algorithms to show users the content they most want to see,” the court writes. I think that’s generous. What Meta shows me is what Meta most wants me to see, which is whatever its prediction models believe will get me to spend as much time on its apps as possible. The algorithms serve the company’s ends, not my ends.

If Meta wanted to know what I want to see, it could ask me. The technology has long existed for users to shape their own recommendations. These companies do not offer us control over what we see because they do not want us to have it. They do not want to be bound by who we seek to be tomorrow.

Attention is sometimes an act. But it is first an instinct. This is why even the most basic attempt at mindfulness — watching 10 breaths go by, without your attention wandering — requires such concentration. Algorithmic media companies exploit the difference between our attentional instincts and aspirations. In so doing, they make it harder for us to become who we might wish to be.

Seeing these companies as seeking a form of control over our attention reveals, I think, the inadequacy of antitrust law for this particular task. The point of antitrust policy is typically to increase competition in a market, unlocking entrepreneurial ferocity and genius by lowering the barrier to entry. But is fiercer competition for my attention, or my children’s attention, desirable?

There are many markets, from meatpacking to hospitals, in which corporate concentration is choking off competition, raising prices and retarding innovation. But there are many kinds of products in which more innovation can lead to more destruction. Do we need vapes that are more compulsively usable? Is it good that online gambling firms are spending so much on slick marketing to find new users? Do we really want A.I. companies competing to create the most addictive pornbot? The question, I think, is under what conditions algorithmic media becomes such a product.

Max Read, a technology critic, wrote an insightful essay in his Substack newsletter arguing that the ideas I’m circling here are best understood as a modern temperance movement, “positioning the rise of social media and the platform giants as something between a public-health scare and a spiritual threat, rather than (solely) a problem of political economy or market design.” This approach, he goes on to say, is “distinctly not ‘populist’ … so much as progressive in the original sense, a reform ideology rooted in middle-class concerns for general social welfare in the wake of sweeping technological change.”

I think there’s truth in all of that. TikTok’s effect on our wallets matters less, to me, than its effect on our souls. But I don’t see the division here as between populists and progressives — groups that substantially overlap anyway. The F.T.C. lost the Meta case because it is limited in its mission and its tools, but at least it was trying to do something about the power these platforms exert over our society. Where was everyone else?

The division I see here is between progressivism and liberalism as we now understand it. Modern liberalism is built around the idea that the government should make it possible for people to pursue their happiness as they see fit, so long as they are not harming others. It has much to say about individual rights and little to say about the common — or even the individual — good.

Liberalism carries, at its core, a trust that social experimentation will lead to better forms of social organization. That has freed it — and freed us — from the shackles of repressive traditions. But it can be confounded when adults are freely making decisions that don’t harm others but perhaps harm themselves. And it has created a loophole that algorithmic media companies have driven a truck through: We’re just giving people what they want, they say. Who are you to judge what they want?

It’s not an easy question to answer.

In his book “Democracy’s Discontent,” Michael Sandel, a political philosopher at Harvard, argues that American history has been split between two competing public philosophies. There’s the American strain of liberalism, which he describes much as I have, and which has dominated in recent decades. And there is “republicanism,” which rests on “a formative politics, a politics that cultivates in citizens the qualities of character that self-government requires.”

This, to me, comes closer than anything else to capturing my sense of what is wrong with the digital world in which so many of our selves and so much of our society are now formed. About half of teenagers say they are online “almost constantly.” A plurality say these algorithmic platforms are bad for their peers (though fewer say they are bad for them personally). About two-thirds of adults say social media has been bad for the country. When our attention is held for hours each day by black-box algorithms that feed us not what we want so much as what we find it hard to look away from, we are being, in a sense, deformed.

But sensing that the present digital environment harms many of its users is a long way from knowing what would be better or what the government should do about it. As compelled as I am by the idea of bringing ideas of human flourishing back to the center of our politics, I turn queasy when I read the history of movements that have tried to do so.

Sandel tracks the longstanding belief in American life that yeoman farmers and small-business owners were more virtuous citizens than, say, factory workers. I don’t think that’s true, and it’s telling that so many now yearn for the stable, pro-social nature of yesteryear’s manufacturing jobs. The Progressive movement scored many victories, but there’s much in its history — from its embrace of phrenology and eugenics to forcing Native American children into boarding schools — that is repulsive. The past offers little succor to those who claim to know how to perfect, or even improve, the characters of others.

But it feels to me like the outlines of an agenda — or at least ideas worth debating and trying — are coming more clearly into focus. Much of it revolves around two ideas: First, children should be more insulated from the ubiquity of digital temptations. Second, companies that want to shape so much human attention need to take on more responsibility, and liability, for what might go wrong.

States all across the country are banning cellphones in public schools. A number of states have forced age verification on porn sites, and rather than comply, Pornhub simply blocked access in those states. Senators Brian Schatz, a Hawaii Democrat, and Ted Cruz, a Texas Republican, were among the sponsors of the Kids Off Social Media Act, which would ban social media companies from offering accounts to children under 13, and ban them from delivering algorithmic recommendations to kids under 17.

Representative Jake Auchincloss, a Democrat from Massachusetts, recently introduced a series of bills that I think are promising: The Deepfake Liability Act would condition the immunity these platforms now have from lawsuits on their efforts to beat back deepfake porn and cyberstalking; the Education Not Endless Scrolling Act would place a 50 percent tax on digital advertising revenue over $2.5 billion; and the Parents Over Platforms Act would strengthen age verification.

In an essay for The Argument, Joel Wertheimer, a plaintiffs’ attorney, suggests reworking Section 230, the law that gives digital platforms immunity from being sued over user-generated content. “Specifically, lawmakers should remove protections for platforms that actively promote content using reinforcement learning-based recommendation algorithms,” he writes. His idea intrigues me because it would hold harmless the old internet while demanding a far higher level of accountability from companies that want to use algorithms we do not understand or control to shape what we see.

But will any of these bills pass? And even if they do, is all of this just fighting the last war? Just as social networks became algorithmic feeds, now personalized A.I. systems are upending our digital lives once again. The algorithms that Meta uses to serve up online video were but a rest station on the path to the A.I. chatbots that are weaving their way into our lives as assistants, teachers, counselors, lovers and friends.

“The average American has fewer than three friends, fewer than three people they would consider friends,” Mark Zuckerberg, the chief executive of Meta, has said. “And the average person has demand for meaningfully more. I think it’s something like 15 friends or something. At some point you’re like, ‘All right, I’m just too busy, I can’t deal with more people.’” A.I., he suggested, could fill that gap.

None of us know how it will change adults to fall into intimate relationships with A.I.s, to say nothing of what it will mean for children to grow up in a world where A.I. companionship is omnipresent. It could be better than today’s opaque algorithms, offering us the ability to ask for what we want and actually get it. But are we so certain what teenagers will want from A.I. companions is something they should have? And what happens when corporations find it is more profitable to have the A.I.s we treat as friends manipulate what we want to better serve their bottom lines?

Which is why, in the end, I don’t believe it will be possible for society to remain neutral on what it means to live our digital lives well. Absent some view of what human flourishing is, we will have no way to judge whether it is being helped or harmed. This line from Barba-Kay might be corny, but it has the virtue of being true: “If the present technological age has a lasting gift for us, it is to urge as decisive the question of what human beings are for.”


The post Can We Stop Our Digital Selves From Becoming Who We Are? appeared first on New York Times.

DNYUZ © 2025