In 2018, my brother and I sat in a product review at Snap’s Santa Monica offices staring at a chart we’d grown too familiar with. Our daily active users had stalled. Instagram (which had launched a near-identical version of our core Stories feature two years earlier) was accelerating. Snap had built a genuinely better product. It had a loyal, passionate user base. It had once famously turned down $3 billion from Facebook when most people would have taken the money and left. None of it was enough.
Meta controlled the layer beneath Snap: the social graph, the feed, the audience. Once you own that layer, everything built on top bends toward you. We’ve spent more than two decades in tech. We’ve learned to recognize when a pattern repeats.
In social networks, concentration formed at the platform level. Distribution, data, and audiences clustered around whoever had the most users, and everything else followed. Today, two of the largest social platforms on earth (Facebook and Instagram) belong to the same company. The cloud infrastructure that powered them followed the same logic: Amazon, Microsoft, and Google now control roughly 63% of global cloud capacity. AI is following the same geometry, only faster.
One Level Deeper
You could always spin up a new social platform. TikTok proved it was still possible. In AI, you cannot build anything without compute: the GPUs, the data centers, the raw processing capacity that determines who gets to participate in this economy at all. Access to compute has become a prerequisite for existence, not just for scale. A startup without cloud contracts cannot train. A country without chip access cannot compete. The gate is already in place.
The numbers describe how closed that gate already is. NVIDIA holds 85% of the data center GPU market. Three American cloud companies control 63% of the infrastructure on which most AI runs. At the model layer, just two companies capture the overwhelming majority of revenue from AI-native companies, according to The Information. The geographic picture is starker than most people realize. The United States controls approximately 75% of global high-performance AI compute capacity. China holds around 15%. The remaining countries share the last 10% between them, most holding a fraction too small to measure. The rest of the world is not competing for AI infrastructure. It is dependent on whoever wins. There is no antitrust mechanism for countries.
There is also a quieter form of inequality embedded in the technology itself. Large language models are trained overwhelmingly on English-language data, which means prompts in other languages consume more tokens for the same output. Non-English users pay more, hit context limits faster, and receive lower-quality results. One price does not mean equal access.
This has stopped being a business concentration story. When AI shapes what gets built, how economies function, and how information flows, whoever controls the compute layer controls something closer to a geopolitical chokepoint than a market position. The United States has already demonstrated this with export controls on advanced chips, restricting which nations can develop certain AI capabilities. Two countries are now setting the terms for 191 others. Those 191 are, in varying degrees, already dependent, and that dependency is growing.
The consequences are concrete and current. Services become unavailable because of sanctions, regulatory shifts, or local law. AI models can be degraded or quietly redirected without notice. Businesses that have built products on centralized infrastructure face existential dependency on a handful of providers whose terms can change overnight, and who have demonstrated they will change them. We have already seen major AI providers retire popular models despite user backlash, restrict API access without warning, and throttle developer capabilities under the cover of safety policies that no independent body can audit.
The Alternative
Bitcoin and Ethereum did not solve financial concentration by building a better bank. They rebuilt the layer underneath: open protocols where anyone could participate as a contributor, not a customer. Thousands of independent operators, no central gatekeeper, no single point of failure or control.
The same logic applies to compute. An open, decentralized network for AI infrastructure (one where GPU capacity can be contributed and accessed across organizational and geographic boundaries) does not need to beat hyperscalers on their own terms. It makes the hyperscalers optional. It puts a floor under the market, and a ceiling on the power any single actor can extract.
That is what led us to build Gonka: a decentralized, community-governed network for AI compute, designed specifically for inference. It is our attempt to rebuild the layer, not simply compete on top of it.
What we’ve carried from that experience: in technology, whoever controls the infrastructure decides what gets built on top, and what gets pushed out. In AI, that layer is compute. Right now, it belongs to very few people. If we wait until the infrastructure is fully locked in, the next generation will not be choosing between models. It will be choosing between permissions.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.