Much of our online life these days is centered on social media. We treat these platforms—sites like Facebook and Twitter, YouTube and Google+—as the “public sphere,” engaging in debate and activism on them the way our predecessors might have in the town square. But unlike the town square, these platforms are privately owned, with their own rules and systems of governance that control users’ content.
Social media companies are also subject to content removal and user data requests from governments around the world. Over the past few years, this important issue has been brought to the fore by groups such as the Global Network Initiative—which seeks to ensure companies respect the principles of privacy and free expression—and by the companies themselves, many of which have issued transparency reports showing exactly what content they take down and what user data they hand over to governments.
This is important, but the focus on it has obscured another issue: the rules that companies use to govern speech are often applied unevenly, and can be used to stifle what is otherwise protected speech.
With millions of users, companies rely on “community policing” to apply their rules. This means that a piece of content is unlikely to be taken down unless someone else reports it for violating a company’s community guidelines or terms of service. As such, the rules that are meant to apply to everyone equally are skewed in practice, used to regulate only those individuals and groups whose detractors report them.
This issue is what my co-founder, Ramzi Jaber, and I had in mind when we created OnlineCensorship.org. Ramzi, who is from Palestine, noticed that content from his local networks seemed to be taken down at a disproportionate rate compared with content from other locales. We sought to test that pattern, and to find out what other patterns might exist. After two years of building the platform, I’m thrilled to say we’re a 2014 Knight News Challenge winner!
OnlineCensorship.org—currently in alpha and open to suggestions—seeks to capture user-generated reports of content removal and account deactivation on private platforms. We’ve started small with five major US sites—Facebook, Twitter, YouTube, Flickr, and Google+—but eventually hope to expand to include others. Once developed, the site will be available in several languages.
We hope that the data we collect will give us a better sense of how social media companies regulate speech around the world. We’ll analyze the data to better understand how companies are applying their own rules; for example, most companies ban “hate speech,” but how do they define it in practice? We’ll also look for edge cases—removals that may fall outside a company’s stated rules (such as a 2011 incident in which Flickr deleted a paying Egyptian customer’s account after he posted photos taken during a raid on State Security offices). Finally, with the help of Ramzi’s team at Visualizing Impact, we’ll present anonymized data in graphic form, to give a sense of what regulation looks like around the world.