From time to time, Twitter thinks about things. And then it goes on thinking about them for a very long time.
It thought about killing off third-party applications for six years, and when it finally decided to act, it followed through only halfway.
It thought about banning Alex Jones, and then decided not to, and then held a meeting where it debated the nature of “dehumanizing speech,” and then banned Alex Jones, and then asked users for their input.
It thought about changing “the core of how Twitter works,” and discussed this idea on podcasts for the better part of last summer, and then released a beta app that threads replies. (Presumably the core-rethinking continues.)
Anyway, today came the news that Twitter is thinking about getting rid of the many vocal white supremacists on the platform. Jason Koebler and Joseph Cox talk to Vijaya Gadde, Twitter’s head of trust and safety, legal and public policy, about the company’s recent discussions with academic researchers on the subject:
“We’re working with them specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better?” she said.
“Is it the right approach to deplatform these individuals? Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?” she added.
On one hand, it’s great that Twitter is asking these questions. On the other, as various academics quoted in Vice’s piece will tell you, it’s hard to take any of it seriously.
When Motherboard described Twitter’s plans on the phone, two of the academics laughed before responding.
“That’s wild,” Becca Lewis, who researches networks of far right influencers on social media for the nonprofit Data & Society, said. “It has a ring of being too little too late in terms of launching into research projects right now. People have been raising the alarm about this for literally years now.”
“I mean, these quotes are a disaster, I’m going to be honest,” Angelo Carusone, president of Media Matters, a progressive group that studies conservative disinformation, said. “The idea that they are looking at this matter seriously now as opposed to the past indicates the callousness with which they’ve approached this issue on their platform.”
I understand that it’s easy — too easy, honestly — to dunk on Twitter over stuff like this. And the company paying lip service to fighting white supremacy on its platform is better than nothing.
But the thing is — it’s only a little bit better than nothing. Because reading Twitter’s constant proclamations about what it might change about itself someday, you get the feeling that the company is seeking credit in the court of public opinion for its good intentions. And if there’s one thing we have learned about the effects of social networks on society, it’s that good intentions just don’t matter.
So what could the company do instead?
One obvious answer: Twitter could enforce its own rules. While Gadde is seeking academic confirmation that removing Nazis from the platform is a good and useful thing to do, the fact remains that they aren’t supposed to be there in the first place. Recall this January interview with Jack Dorsey in Rolling Stone:
BRIAN HIATT: Technically, being a professed white nationalist isn’t grounds for removal, right? Someone has to make specific threats?
JACK DORSEY: It actually is. If they align themselves with a violent extremist group, like the American Nazi Party, we suspend their account. There are not self-professed Nazis. If you can show them, I would love to see them, and figure out why we haven’t taken action on them, but…
I can confirm that there are Nazis on Twitter.
A lot of the calls for “remove the Nazis” are also due to the fact our enforcement operates on reporting. A lot of people don’t report. They see things, but it’s easier to tweet out “get rid of the Nazis” than to report it.
This conflict gets to the heart of the trouble with Twitter. In one interview, an executive will low-key brag about the intellectual rigor with which the company is approaching actually-not-that-difficult questions about what to do with users who favor varying degrees of genocide to achieve their political aims. And in another, the CEO will acknowledge that the question has basically already been resolved, but the company lacks the technical competence to find all the bad actors on its platform.
In the Dorsey interview, he goes on to say that Twitter needs to be more proactive about finding white nationalists. It’s a good idea, now five months old, and we’ve heard nothing about any concrete steps that Twitter might take to implement it. Instead, as ever, the company wants some time to think. And while I understand why the academics quoted in Vice’s article are laughing, I can’t say I find it all that funny.
From time to time, Twitter thinks about things. And then it goes on thinking about them for a very long time.
In yesterday’s issue, I endorsed Alexios Mantzarlis’ suggestion that Facebook should tell people that the distorted Pelosi video was fake, rather than simply that there is “additional reporting available.” A smart person wrote to remind me that when Facebook actually did label content as false, people were more likely to share it.
House Speaker and distorted video star Nancy Pelosi had strong words for Facebook yesterday in the wake of its decision not to remove digitally altered videos that appear to show her slurring her words:
“We have said all along, poor Facebook, they were unwittingly exploited by the Russians. I think wittingly, because right now they are putting up something that they know is false. I think it’s wrong,” she said. “I can take it … But [Facebook is] lying to the public.”
Pelosi added, “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”
Everyone has a content policy plan until the Nazis actually show up. Ben Makuch and Jordan Pearson report on the social network Minds:
In the neo-Nazi Minds group with over 350 followers that was not banned, images depicting overtly hateful and neo-Nazi content are currently visible, while some posts were hidden behind an age block. One post, which appears to dox someone described as a “race traitor,” is plainly visible. Minds’ content policy says that doxing is grounds for a ban.
“The extreme right is always going to look for loopholes in content policies when it comes to propaganda and encouraging violence. The variable is where social media companies draw the line and decide that they don’t want to assist in this endeavor,” Joshua Fisher-Birch, of the Counter Extremism Project, told Motherboard in an email.
Megha Rajagopalan reports that US companies, including IBM, are helping to build China-style surveillance systems in the United Arab Emirates:
At a recent government-organized conference on artificial intelligence in Dubai, representatives from technology companies including Huawei, which the Trump administration recently put on a trade blacklist as a threat to national security, China’s Hikvision, and IBM said they saw the UAE and other countries in the Persian Gulf as an exciting market to sell their video analysis platforms, which they say can do everything from analyzing the behavior of groups to automatically blacklisting individuals based on their faces.
Other governments in the Persian Gulf, such as Bahrain and Saudi Arabia, are also using cellphone hacking and other high-tech surveillance measures to monitor and intimidate dissidents, including exiles. The UAE meanwhile has poured money into developing its surveillance capabilities.
Patrick Klepek reports:
Carl “Sargon of Akkad” Benjamin, a YouTube figure who rose to prominence railing against feminism during the GamerGate movement, will not become part of the European Parliament. Benjamin ran as part of the far-right UKIP party, a group almost exclusively focused on racist anti-immigration policies. UKIP did especially poorly, according to BuzzFeed, following their attempt to inject energy into the movement by recruiting loud voices from the Internet.
The thing about a shareholder proposal at a company where the founder and CEO retains majority voting control is that it usually doesn’t go very far. Kurt Wagner and Paula Dwyer report on the latest:
There’s a problem with Goodridge’s plan: She can’t advance her proposal at Thursday’s meeting to limit Zuckerberg’s monopoly voting power because Zuckerberg has monopoly voting power. He controls 88% of these more powerful shares, which gives him nearly 58% of Facebook’s voting power. In order to change Facebook’s voting structure and reclaim some of Zuckerberg’s control, Goodridge needs the support of the one man who has the most to lose by changing it.
Which is why, even before any of the votes are tallied, Goodridge knows she will fail for the fifth time.
Pretty dumb headline — Facebook engagement is still quite healthy by any measure — but it is on the decline. (As an aside, it seems strange to predict engagement for a social app two years out. Who knows what will have launched by then?)
Engagement with Facebook is set to decline or remain flat for the foreseeable future, according to a new report from eMarketer.
Daily time spent on Facebook declined by 3 minutes among U.S. users in 2018, according to the firm. Users spent an average of 38 minutes per day on the platform in 2018, the report says, down from 41 minutes a day in 2017. eMarketer expects usage to further decline to 37 minutes a day by 2020 and remain flat in 2021.
People were creating new accounts to broadcast porn and murder on Twitch, so Twitch started limiting streaming privileges, Julia Alexander reports:
The streaming service’s Artifact category was brigaded all weekend by trolls, who reportedly first fled to the section as a way of joining in on a recent meme. Artifact, a card game developed by Valve, was recently named the least popular game on Twitch. Twitch users, and the broader gaming community, have been dunking on the game’s failure for months, both before and after Valve announced it was taking time to redesign the title in late March. But this weekend that changed with a flood of new streams using the category’s low visibility to stream content violating Twitch’s policies.
Twitch’s statement acknowledged that they “became aware of a number of accounts targeting the Artifact game directory” over the weekend. Twitch’s team also recognized trolls were using the category “to share content that grossly violates our terms of service.” The majority of the accounts that “shared and viewed content were automated.”
There are few things I enjoy reading about more than brands angrily storming off social media, so thank you CrossFit:
CrossFit, the branded workout regimen, deleted its Facebook and Instagram pages earlier this week and explained the reasoning through an impassioned press release. The announcement lists various reasons for the indefinite suspension of its accounts, including accusations that Facebook’s News Feeds are “censored and crafted to reflect the political leanings of Facebook’s utopian socialists.”
The issue stemmed from the deletion of a South Africa-based Facebook group, Banting 7 Day Meal Plans, which the company says happened without warning or explanation. The group, which is unrelated to CrossFit but has 1.6 million members espousing the benefits of a low-carb, high-fat diet like CrossFit’s recommended nutritional regimen, has since been reinstated. But the damage was done, and the deletion was the final straw in addition to CrossFit’s wariness over how Facebook handles user data.
Susie Cagle profiles a company that created a huge database of banned bar patrons, alarming civil liberties advocates. It’s interesting to see how China-style social-credit systems are evolving in America; see also this CNET piece on Uber banning customers with low scores.
The same report indicates that PatronScan collected and retained information on over 10,000 patrons in Sacramento in a single day. Within a five-month period, that added up to information on over 500,000 bargoers. PatronScan claims to have a networked list of more than 40,000 banned customers, many of whom may not even know about their eighty-sixed status until they try to gain entry into another bar covered by the system.
To some onlookers, PatronScan’s product raises a number of concerns about privacy, surveillance, and discrimination. PatronScan’s reports reveal the company logged where customers live, the household demographics for that area, how far each customer travelled to a bar, and how many different bars they had visited. According to the company’s own policies, the company readily shares the information it collects on patrons, both banned and not, at the request of police. In addition to selling its kiosks to individual bars and nightlife establishments, PatronScan also advertises directly to cities, suggesting that they mandate the adoption of their service.
There’s a Portal app now so you can call your Facebook Portal from your home when you are not around your Portal!
What exactly are game developers doing on Facebook these days? Haven’t seen one in ages:
Facebook game players will soon encounter more advertising tied to in-game rewards. The social media platform will be giving publishers more options to monetize their mobile games. Developers can now display in-app “rewarded video” ads and advertise their games with previews that play right on News Feed.
In the wake of the Pelosi video controversy, Farhad Manjoo says that Fox News should worry us more than Facebook:
Whatever Facebook decides to do with this weird little video is a big meh, because if you were to rank the monsters of misinformation that American society now faces, amateurishly doctored viral videos would clock in as mere houseflies in our midst. Worry about them, sure, but not at the risk of overlooking a more clear and present danger, the million-pound, forked-tongue colossus that dominates our misinformation menagerie: Fox News and the far-flung, cross-platform lie machine that it commands.
And that’s exactly what happened last week. In going after Facebook, many observers forgot about Rupert Murdoch’s empire, whose Fox Business spinoff aired a similarly misleading Pelosi hit job on “Lou Dobbs Tonight.” This was upside down. While newfangled digital manipulations should raise some concern, they are still emerging, long-range threats, and social networks are at least experimenting with ways to mitigate their negative impact on society. But we don’t have much hope nor many good ideas for limiting the lies of old-media outlets like Fox News, which still commands the complete and slavish attention of tens of millions of Americans every night, polluting the public square with big and small lies that often ricochet across every platform, from cable to YouTube to Facebook to Google, drowning us all in a never-ending flood of fakery.
Here are some smart thoughts on misinformation from one of my favorite new newsletters:
If you look at the spam problem long enough, and squint a bit, it starts to resemble the fake news problem. Replace Eudora with Facebook and Nigerian princesses with some Russian-government trolling, and you have a system where the cost of distributing material is cheaper than the returns, and the entire thing flies off the wheel. This isn’t really a new line of thinking, and I’ll credit some Benedict Evans tweets (who ironically blocked me on Twitter) for some of the terminology I’m using here.
Anyway. It’s natural to think that the previous approaches should work on this problem too; 1) centralize to get better data and leverage (i.e. one tweak fixes everything) 2) apply machine learning. Rinse, repeat. Simple enough, really.
And finally …
Whoever they hire can’t do any worse than the current one, who personally attacked me on Wednesday:
FYI: there IS an edit button. (In your brain)
— Twitter (@Twitter) May 29, 2019
Talk to me
Send me tips, comments, questions, and something you plan to think about but never actually act on: [email protected].
The post Forget new research on Nazis — Twitter should just enforce its existing ban appeared first on The Verge.