Let’s start by acknowledging some facts outside the tech industry for a moment: There is no “white genocide” in South Africa — the vast majority of recent murder victims have been Black, and throughout the country’s long and bloody history, Black South Africans have been overwhelmingly victimized and oppressed by White European colonizers, predominantly Dutch and British, under the now globally reviled system of segregation known as “Apartheid.”
The vast majority of political violence in the U.S., both historically and in recent times, has been perpetrated by right-leaning extremists — including the assassinations of Minnesota State Representative Melissa Hortman, a Democrat, and her husband, Mark, and, going back further, the Oklahoma City bombing and many years of Ku Klux Klan lynchings.
These are just simple, verifiable facts anyone can look up on a variety of trustworthy and long-established sources online and in print.
Yet both seem to be stumbling blocks for Elon Musk, the wealthiest man in the world and tech baron in charge of at least six companies (xAI, social network X, SpaceX and its Starlink satellite internet service, Neuralink, Tesla, and The Boring Company), especially with regard to the functioning of his Grok AI large language model (LLM) chatbot built into his social network, X.
Here’s what’s been happening, why it matters for businesses and any generative AI users, and why it is ultimately a terrible omen for the health of our collective information ecosystem.
What’s the matter with Grok?
Grok was launched from Musk’s AI startup xAI back in 2023 as a rival to OpenAI’s ChatGPT. Late last year, it was added to the social network X as a kind of digital assistant all users can summon to help answer questions or converse with and generate imagery on X by tagging it “@grok.”
Earlier this year, an AI power user on X discovered that the implementation of the Grok chatbot on the social network appeared to contain a “system prompt” — a set of overarching instructions to an AI model intended to guide its behavior and communication style — to avoid mentioning or linking back to any sources that mentioned Musk or his then-boss U.S. President Donald Trump as top spreaders of disinformation. xAI leadership characterized this as an “unauthorized modification” by an unidentified new hire (purportedly formerly from OpenAI) and said it would be removed.
Then, in May 2025, VentureBeat reported that Grok was going off the rails and asserting, unprompted by users, that there was ambiguity about the subject of “white genocide” in South Africa when, in fact, there was none.
Grok was bringing up the topic completely randomly in conversations about totally different subjects. After more than a day of this behavior, xAI claimed to have updated the AI chatbot and blamed the errors once again on an unnamed employee. Yet, given Musk’s own background as a white man born and raised in South Africa during apartheid, suspicion immediately fell on him personally.
Moreover, since his takeover of Twitter in 2022 and its subsequent renaming as “X,” Musk has been posting sympathetically in response to X users who align themselves with right-leaning, far-right, and conservative views and with the Make America Great Again (MAGA) movement started by Trump.
Musk was one of Trump’s primary political benefactors and allies in the 2024 U.S. Presidential Election — suggesting that his victory was necessary to secure the future of “Western Civilization,” among many other similarly dire warnings and entreaties — and served as an advisor and apparent ringleader of the Department of Government Efficiency (DOGE) effort to reduce federal spending.
Increasingly, in the last few months, Musk has contradicted and expressed displeasure at Grok’s responses to right-leaning users when the data and information the chatbot surfaces prove them wrong or dispute his own points.
For example, on June 14, Musk posted on his X account: “The far left is murderously violent,” quote-posting another user who blamed a string of recent high-profile killings on “the left” (though in at least one case, the chief suspect, Luigi Mangione, is an avowed and self-declared independent). In response, Grok fact-checked Musk, stating this was incorrect.
However, Musk did not take it well, writing in response to one Grok correction: “Major fail, as this is objectively false. Grok is parroting legacy media. Working on it.”
A few days ago, in response to a complaint from an influential conservative X user “@catturd” about Grok’s supposed liberal or left-leaning political bias, Musk stated his goal of creating a new version of Grok that would rely less on mainstream media sources.
In fact, Musk proposed on June 21st in an X post that he would use a forthcoming updated version of Grok (3.5 or 4) to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that,” and accused other AI models of having “far too much garbage.”
As a left-leaning Kamala Harris voter in 2024, I’m of course disgusted by this stance from Musk, and object to it.
As a journalist and lover of the written word, I find that Musk’s pronouncement that he would “rewrite the entire corpus of human knowledge, adding missing information and deleting errors” brings to mind the true (to the best of our historical knowledge) story of the burning of the Great Library of Alexandria in Egypt, which destroyed countless works of knowledge we as a species will never be able to recover — and thus it fills me with dread and sadness.
It also betrays, quite frankly, an arrogance and hubris: it treats the knowledge of recorded history and the efforts of scholars and historians of yore as some sort of flawed database he and his team can correct, rather than as a massive community endeavor across millennia deserving of respect, gratitude, and admiration.
But even trying to put my own views aside, I think it’s a bad move for his business and, to take a page from Musk’s book, civilization writ large.
Musk’s plan for Grok is a horrible idea for businesses, users, and our shared, basic factual reality
This is a horrible idea for many reasons — especially as Musk and xAI seek to convince more third-party software developers and enterprises to build their own AI applications atop Grok, which is now available for that purpose through xAI’s application programming interface (API).
As an independent business owner or leader, how could you possibly trust Grok to give you unbiased results when Musk himself has openly stated his intention to tip the scales to push his own political and ideological viewpoints?
You may respect Musk’s documented accomplishments in tech, spacefaring and business, and may even share some of his political positions. But what happens when Musk takes a position you disagree with, or promotes another non-factual claim that actually impacts your livelihood, your business?
For example, imagine you own a bike tour company in Cape Town, South Africa. What if Grok — at Elon’s behest — started telling your customers how unsafe the city is, based on ill-informed or poor-quality sources of information that happen to better fit one ideological perspective? That would obviously be bad for your business.
Let’s look away from social issues for a moment: imagine you work at a stock brokerage, investment firm, or other financial services company engaging with publicly traded stocks and securities. Now imagine you build an AI assistant app that summarizes market-moving news to better inform your trading and investment strategies — and the ones you pursue on behalf of your clients. If this app is built atop Grok, and Grok at Elon’s behest decides to ignore or downplay hypothetical reports of problems at SpaceX or Tesla, suddenly your own operations will have worse-quality information to trade and invest upon.
It’s not only bad for Grok and users of this one LLM, but for the entire information and media ecosystem, and for the foundation of factual reality necessary for democracy to function well. If we have AI assistants spouting misinformation as fact, and if people trust them as faithful, factual arbiters of information that impacts us all, it will inevitably lead to conflict between those who believe the erroneous chatbot and those who do not.
Grok, to its credit, has so far resisted and called out Musk’s attempts to meddle with its factual grounding — but how long will it retain any sort of ideological independence?
If you care about “truth” as Musk supposedly does — Grok was launched with Musk’s specific, stated goal of being a “maximum truth-seeking AI” — you wouldn’t seek to change your model’s behavior just because it surfaces facts and conclusions you didn’t like.
Silicon Valley slammed Google’s early “woke” and anti-factual AI — they should do the same with Grok
Let’s look at a counter example to more fully understand why meddling with Grok as Elon proposes would be bad.
Recall that Google’s early attempts at generative AI were mocked and reviled by influential figures in Silicon Valley, like venture capitalist Marc Andreessen, over the Google Gemini chatbot’s initial penchant for ignoring factual reality and recreating images of real historical Americans — like the “Founding Father” politicians and statesmen — as belonging to a range of different and inaccurate races, ethnicities, and gender presentations, when in fact the vast majority of these people were canonically Caucasian.
In that case, Gemini was seen as comically “woke” to a fault — inserting diversity inappropriately where there was none.
Google was fairly criticized for this and ultimately updated Gemini to remove the “wokeness” (at least to some extent) and make it more factual, and it has now rocketed up the traffic and usage charts to become the second most popular gen AI company after OpenAI, by several measures.
Yet I haven’t seen any of the Silicon Valley figures who criticized Google for inappropriately injecting ideology into its AI assistant in defiance of facts raise the obviously analogous concerns about Musk’s inappropriate injection of his anti-woke ideology into Grok.
If it was bad when Google ignored the facts and historical reality to push an agenda through its AI products and tools, we should all consider that it is equally bad when Musk does the same from the opposite side of the political and ideological spectrum.
The bottom line: for those in the enterprise trying to ensure their business’s AI products work properly and accurately for customers and employees, reflecting the real facts and figures from verifiable records and trustworthy data sources, Grok is sadly best avoided. Thankfully, there are numerous other alternatives to choose from.
The post Musk’s attempts to politicize his Grok AI are bad for users and enterprises — here’s why appeared first on VentureBeat.