Twitter headquarters in San Francisco, California. (Photo: Justin Sullivan, AFP)
Social media companies came under fire again last week after another mass shooting was linked to a hate-filled manifesto posted online.
For years, Twitter and Facebook largely reacted to horrific events after the fact. Now they’re being asked by politicians to take a more proactive approach to preventing domestic terrorism. But while the companies certainly want to make their platforms safe and not be seen as nests for violence or hatred, the proposals from politicians wouldn’t be easy to achieve and could even backfire on some of the proponents.
The attacker in El Paso, Texas, who killed 22 in a rampage at a Walmart store last Saturday, posted a racist screed on message site 8chan minutes before the attack began, laced with words and phrases President Donald Trump has used in reference to immigrants and the media.
On Monday, the president ordered federal officials to work with social media companies to identify people who may perpetrate mass shootings before they can act. He asked tech companies to find “red flags” in social media postings that could help stop shooters before they strike, and the FBI has solicited bids for social media monitoring technology that would parse public user postings to predict threats.
On Friday, the White House called a meeting with tech companies to discuss violent online extremism. The group “focused on how technology can be leveraged to identify potential threats, to provide help to individuals exhibiting potentially violent behavior, and to combat domestic terror,” according to White House spokesman Judd Deere.
The requests are complicated to put in place. FBI monitoring would go against Facebook’s and Twitter’s rules that bar the use of data from their sites for government surveillance. And efforts to take down posts that espouse the anti-immigrant and anti-minority sentiment often expressed by mass shooters could also end up capturing posts from politicians, including Trump himself. Social media companies are already accused by Trump and other politicians of harboring anti-conservative bias.
Still, the companies may be able to come up with strategies for thwarting future attacks if they monitor the online forums popular with people with extreme views that have been linked to violent events, and collaborate on what they find. The proclivity for mass murder might be near-impossible to predict in an individual, especially when combined with other factors like easy access to guns. But Facebook and others have already built up systems to counter Islamic terrorist content. Here’s a look at how they could apply similar tactics to domestic white nationalists, without FBI surveillance:
1. Sharing a hash database
Facebook, Alphabet’s Google, Twitter, Microsoft Corp. and others are part of the Global Internet Forum to Counter Terrorism, which shares information about terrorist threats. If one of the companies discovers terrorist content, it can put a unique identifier of the image or video, called a “hash,” into a database, which helps every other company quickly find it on their own sites and take it down. Both Facebook and Twitter said the systems have helped them remove more than 90% of ISIS and al-Qaeda content before any users report it.
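The hash-sharing mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the forum’s actual system: real deployments use perceptual hashes (such as Microsoft’s PhotoDNA) so that re-encoded copies of an image or video still match, whereas the cryptographic SHA-256 hash used here for simplicity only catches exact duplicates. All function and variable names are invented for the sketch.

```python
import hashlib

# Hypothetical shared database of fingerprints for flagged media.
# One member company adds a hash; every other member can then check
# new uploads against the same set and take matches down quickly.
shared_hashes = set()

def hash_media(data: bytes) -> str:
    """Compute a fingerprint for an uploaded file (exact-match only)."""
    return hashlib.sha256(data).hexdigest()

def report_terrorist_content(data: bytes) -> None:
    """One company flags content; its hash is shared with all members."""
    shared_hashes.add(hash_media(data))

def should_block(data: bytes) -> bool:
    """Another company screens a new upload against the shared database."""
    return hash_media(data) in shared_hashes
```

Under this scheme no company ever shares the offending media itself, only its fingerprint, which is why the database can cross company boundaries without redistributing the content.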
After the attacks on two mosques in Christchurch, New Zealand, in March, the companies said they would expand the group’s mandate to include “violent extremism in all its forms.” But it’s unclear if the system is working yet.
Facebook, Twitter and YouTube wouldn’t say specifically whether they shared hashes related to the El Paso shooting. Facebook said it used the system after the attack occurred to find and ban clips of the shooter’s manifesto and anything praising the attack, but only on the company’s own properties.
2. Shared Definitions
Tech companies have agreed on definitions for certain kinds of illegal content, like that which exploits children. They have a system for categorizing the content by level of severity, and can then signal their peers so other companies can also move to take it down. But there is no such standard set of rules for white supremacist content.
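If shared definitions did exist, a peer signal would amount to a small, structured record that every member company interprets the same way. The sketch below is purely illustrative: the category label and severity tiers are invented for the example and are not an actual industry standard.

```python
from enum import Enum
from dataclasses import dataclass

class Severity(Enum):
    # Hypothetical tiers; a real scheme would be negotiated by members.
    LOW = 1      # e.g. borderline glorification
    MEDIUM = 2   # e.g. explicit praise of an attack
    HIGH = 3     # e.g. footage of the attack itself

@dataclass(frozen=True)
class PeerSignal:
    content_hash: str  # fingerprint of the flagged media, not the media
    category: str      # agreed-upon label, e.g. "violent-extremism"
    severity: Severity

def make_signal(content_hash: str, category: str, severity: Severity) -> PeerSignal:
    """Package a flag so every member company acts on the same facts."""
    return PeerSignal(content_hash, category, severity)
```

The point of the structure is Stamos’s: without a precise shared definition behind the `category` field, each company would classify the same post differently and the signal would carry no common meaning.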
“You can’t just say something is white supremacist unless you define really specifically what that is,” said Alex Stamos, a former Facebook security executive, now an adjunct professor at Stanford University. “That’s one of the real challenges of the system.”
While companies can move quickly to share information about posts glorifying or depicting an attack so that they can be taken down, it’s difficult to do without catching legitimate news organizations in the mix as they report on the events.
That shouldn’t be an excuse, said Hany Farid, a professor at the University of California at Berkeley and a senior adviser at the Counter Extremism Project.
“This is the technology industry — they solve enormous problems,” Farid said. “I don’t buy the argument that it’s too complicated.”
3. Blocking links to 8chan
Before the El Paso attack, the killer posted a manifesto on 8chan. It was the third instance of white supremacist mass violence linked to the controversial message board this year. Internet infrastructure providers including Cloudflare have cut off 8chan as a customer. Facebook, YouTube and Twitter could sever direct links to that website.
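Severing direct links is the simplest of these interventions to implement: a platform checks the domain of each outbound URL against a blocklist before rendering it as a clickable link. The sketch below assumes a plain domain blocklist, with `8chan.example` as a stand-in domain.

```python
from urllib.parse import urlparse

# Hypothetical blocklist; "8chan.example" is a placeholder domain.
BLOCKED_DOMAINS = {"8chan.example"}

def is_blocked_link(url: str) -> bool:
    """Return True if the URL points at a blocklisted domain or any
    of its subdomains, so the platform can refuse to hyperlink it."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)
```

Matching on the hostname rather than the raw string matters: a substring check would miss links routed through subdomains or flag unrelated URLs that merely mention the name.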
“If you’re Facebook or YouTube, there’s not a lot of things you can do to stop 8chan,” said Stamos, “but you can at least increase the gap between users of your site and of that site, so that there are fewer opportunities to lure people there.”
Making the platform harder to find would make it more difficult to recruit people to extremist groups, according to Eshwar Chandrasekharan, a Ph.D. student in computer science at Georgia Tech. He has studied the effect of Reddit banning some of its most hateful groups, and found the members didn’t simply regroup elsewhere on the site.
But, Stamos said, the companies are unlikely to take this step, out of concern regulators would consider it anticompetitive.
4. Reconsidering encrypted messaging
After years of scrutiny over how it handles user data, Facebook is shifting its product development toward encrypted messaging, with content so private that not even the company can see it. If Facebook does that, neither the company nor investigators will be able to follow terrorists, human traffickers, illegal drug sales, or the radicalization of mass shooters on those channels.
“Do not let them do this,” Farid said. “It will be incredibly dangerous.”