In February, the ACLU of Massachusetts released a damning report detailing prejudice in social media surveillance efforts by the Boston Police Department (BPD). The report revealed that between 2014 and 2016, the BPD had tracked keywords on Facebook and Twitter in an effort to identify potential terrorist threats. The BPD labeled as “Islamist extremist terminology” keywords like “ISIS” and “Islamic State,” but also phrases like “#MuslimLivesMatter” and “ummah,” the Arabic word for community.
Christopher Raleigh Bousquet (@chrisrbousquet) is a researcher at the Ash Center for Democratic Governance and Innovation, a think tank out of Harvard Kennedy School.
These practices by the BPD reflect a growing trend in law enforcement called social media mining. Using natural language processing tools, police departments scan social platforms for keywords they believe indicate danger. According to the Brennan Center for Justice at the NYU School of Law, all large cities, and many smaller ones, have made significant investments in social media monitoring tools. A 2016 survey by the International Association of Chiefs of Police and Urban Institute revealed that 76 percent of officers use social media to gain tips on crime, 72 percent to monitor public sentiment, and 70 percent for intelligence gathering.
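Despite the "natural language processing" label, the matching at the core of many of these tools can be as crude as a case-insensitive substring search against a watchlist. A minimal sketch of that approach (the watchlist terms and function name here are hypothetical, not drawn from any actual vendor product):

```python
# Illustrative toy version of a keyword-based social media scanner.
# The watchlist and logic are hypothetical, for explanation only.
WATCHLIST = {"isis", "islamic state"}

def flag_posts(posts, watchlist=WATCHLIST):
    """Return the posts that contain any watchlist term, case-insensitively."""
    flagged = []
    for post in posts:
        text = post.lower()
        if any(term in text for term in watchlist):
            flagged.append(post)
    return flagged

posts = [
    "News analysis of the Islamic State today",
    "Lunch at the park with friends",
]
print(flag_posts(posts))  # flags only the first post
```

Crude matching of this kind is exactly why benign speech gets swept up: a scanner that treats a hashtag or an everyday Arabic word as a threat indicator will flag journalists, activists, and ordinary community members alongside any genuine danger.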
Until recently, companies like Geofeedia, SnapTrends, and Media Sonar peddled their products from city to city, advertising their ability to prevent crimes and catch perpetrators. However, a 2016 report from the ACLU of California presented a major setback for these companies, revealing that cities were using their products to target words like “#blacklivesmatter” and “police brutality” following the killings of Michael Brown and Freddie Gray.
And the way police departments in these cities accessed this user data is reminiscent of practices that inspired outrage over Cambridge Analytica. Law enforcement agencies partnered with companies that gained access to backend data streams via APIs, a privilege that Facebook, Twitter, and Instagram revoked shortly thereafter.
Yet police departments have quietly continued to employ social media mining, partnering with a number of startups. While Facebook and Instagram announced in 2017 that they were banning developers from using their data for surveillance, privacy advocates suspect police departments maintained access to these data streams via third parties that didn’t advertise their surveillance intentions to social media companies. In the wake of the Cambridge Analytica scandal, however, Facebook made it more difficult for anyone to access the back-end data that law enforcement has relied on for social mining.
But even without access to back-end data, police departments have persisted with social mining. In the last couple of months alone, law enforcement officials have proposed efforts to use social monitoring to identify potential school shooters and secure borders using readily available information from users’ news feeds.
Police departments should continue to monitor social media to inform law enforcement. After all, social media sites are full of data that can make police interventions more effective, from posts about crimes in progress to damning evidence offered freely by criminals and even live videos of crimes. However, in designing these initiatives, police departments need to pay closer attention to the Constitution as well as the needs of citizens.
For one, law enforcement agencies must place more emphasis on privacy. The Fourth Amendment protects citizens from warrantless searches in areas in which they have a reasonable expectation of privacy. For example, police can’t search someone’s house without a compelling reason that justifies a search warrant, because citizens expect to have privacy in their homes.
Do citizens have a reasonable expectation of privacy regarding social media posts? One might think that because this information may be publicly available to anyone on the internet, users would abandon any privacy expectations when posting, liking a page, or checking into a location. And yet, while they might expect their friends to see a few of their posts, very few users expect someone to track every single piece of their social media activity over the course of a week, month, year, or longer—as police departments often do with social mining. Under the “mosaic theory,” which the Court has not yet accepted, any one social post might be public, but citizens retain a reasonable expectation of privacy over the whole of their social media activity during an extended period.
With the Facebook-Cambridge Analytica debacle in mind, cities need to pursue public engagement campaigns to educate residents, gain feedback on social mining efforts, and help individuals understand how their data might be used. Through community meetings and online discussions, city governments can learn whether residents object to practices like tapping into back-end data and bring public expectations in line with technological realities. Rather than always curbing technology to fit citizens’ existing expectations, government can educate the public to improve their understanding of current technology, allowing citizens to take the necessary privacy precautions. This strategy not only protects privacy, but it can also help cities avoid political resistance, like the backlash that followed surveillance during the Michael Brown and Freddie Gray protests.
The other Constitutional issue that social mining raises is free speech. The ACLU has argued that the practice has a chilling effect, discouraging free expression. With the knowledge that law enforcement is constantly watching, citizens may be less likely to express themselves online. In fact, a study in Journalism & Mass Communication Quarterly showed that Facebook users are less likely to weigh in on controversial issues when reminded about government surveillance.
Yet just because social mining has a chilling effect does not mean that it’s unconstitutional. According to the Constitutional doctrine of strict scrutiny, governments can pursue practices that burden free speech if they are “narrowly tailored to serve a compelling state interest.” In other words, if a practice like social media mining effectively addresses an important policy goal—reducing violent crime, for instance—it is Constitutionally acceptable even if it restricts speech.
To establish that social mining serves such a compelling state interest, cities need to be able to make a case that the practice promotes public safety. This means that police departments need to more rigorously test their social mining initiatives and implement only those practices that have a proven effect on policing. In the Boston case uncovered by the ACLU, there was no evidence that scanning for terms like #MuslimLivesMatter thwarted terrorist activities, and therefore no good reason to pursue this strategy.
By analyzing data from other cities and from independent tests, police departments can identify social media activity that does in fact correlate with crime, and design initiatives to target these posts only.
This testing process would also help cities avoid perhaps the most pointed criticism of social mining: that it is biased against certain racial or religious groups. Analyzing social mining practices from cities across the country would reveal whether these strategies rely on tainted data, employ ineffective algorithms, or produce inequitable outcomes. In some cases this testing might not be enough—if it turns out that a racially loaded phrase does indeed correlate with crime, a city would need to deliberate further on the consequences of targeting such posts.
Prioritizing citizens’ well-being over arrest or conviction numbers can make social mining a valuable tool for improving residents’ lives and the safety of communities, rather than perpetuating the most harmful aspects of the criminal justice system.
The author thanks Mason Kortz, Wendy Seltzer, and Fred Cate for their help with this piece.
WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.