
Facebook needs to referee more hate speech after Charlottesville | Opinion

If internet forums fail to police themselves, the government may get involved, and that's not always the best thing.


On Thursday, Mark Zuckerberg released a message on the horrible events in Charlottesville. The Facebook CEO and founder's statement came later than Donald Trump's response to last weekend's events, but it was more profound than anything the president has said so far. Clearing that bar may not be hard, but Zuckerberg's words are noteworthy all the same.

He wrote: "We aren't born hating each other. We aren't born with such extreme views." He went on to describe a responsibility everyone shares to do what they can. "I believe we can do something about the parts of our culture that teach a person to hate someone else. It's important that Facebook is a place where people with different views can share their ideas. Debate is part of a healthy society. But when someone tries to silence others or attacks them based on who they are or what they believe, that hurts us all and is unacceptable."

An important statement, coming from the CEO of the world's leading social network. Facebook's main task may not be to protect and unite the country — which is the role of the president — but the company does have a social responsibility when it comes to its two billion worldwide users. Facebook and others must be fully aware of their influence on society.

Sadly, Facebook and other social media platforms have a long and dark history when it comes to handling hate speech, comments endorsing violence, posts echoing racist ideology, and content used to organize criminal acts or terrorist attacks. They have rightly been under pressure for their lax stance toward all kinds of hate on the internet. When it comes to terror attacks, the platforms are often simply unable to cope with the huge volume of propaganda. And sometimes Facebook's internal rules are so convoluted that they end up penalizing people who are actually fighting hate by exposing it.

From my standpoint as a European, it is always puzzling how much more strictly Facebook, Twitter, Google, and others police even low-level nudity (until recently, even pictures of breastfeeding) than they police hate and violence. It is imperative that Facebook and other social networks take more responsibility for what goes on on their platforms; it is not enough to just provide the technology. At a sports match, there are not just two teams on the field. There is also a referee watching what is going on. Social media should take a more attentive look at what is happening on their field.

"There is no place for hate in our community," Zuckerberg's statement continues. "That's why we've always taken down any post that promotes or celebrates hate crimes or acts of terrorism — including what happened in Charlottesville. With the potential for more rallies, we're watching the situation closely and will take down threats of physical harm."

Facebook did indeed remove content from white supremacists and neo-Nazis last week.

Other tech companies acted as well: Reddit announced it was banning groups and pages linked to far-right extremists, and GoDaddy, the internet domain registrar and web-hosting service, took down the website of the Daily Stormer, which helped organize the violent neo-Nazi gathering in Virginia.

These steps, taken by technology companies themselves, are welcome; otherwise, the state may take matters into its own hands, which does not always lead to the best results.

Germany, for instance, passed a highly controversial law this summer to fine social media companies over hate speech. Under the new Network Enforcement Act, companies face fines of up to $57 million for failing to remove illegal content within 24 hours, including racist or slanderous comments and incitements to violence. Digital rights activists have criticized the new law for infringing on free speech, and the initiative has also been faulted as a unilateral move by one government where a common approach would be better.

As technology evolves, there must be a discussion — with all players involved — about how to deal with hate speech in practice. Many questions must be resolved: What counts as freedom of speech, and where is that line drawn? Where does censorship start? Will these actions violate the First Amendment? Can companies tackle some extremist views while ignoring others? Do the companies need more workers, or more sophisticated artificial intelligence, to improve their response to the problem and proactively search for hate speech?

"We won't always be perfect, but you have my commitment that we'll keep working to make Facebook a place where everyone can feel safe," Zuckerberg wrote.

A closer look will be a start, at least. Will these actions by Facebook, Reddit, and other tech companies be permanent, or are they a one-time change in the wake of Charlottesville? Time will tell. Taking more responsibility is just one step in the right direction, with many more to go.