
Network Modeling Suggests Novel Strategies to Eliminate Online Hate

Researchers from George Washington University and the University of Miami propose a novel approach, based on their analysis.

Photo by JJ Ying on Unsplash

Online hate is a rampant problem on social media. Not only can it create a toxic environment of discriminatory harassment, but it can also incite real-world violence and hate crimes. With the ease of creating anonymous accounts, the sheer number of haters, and the difficulty of tracking every incident down, policing the platforms is hard. The search for new solutions continues.

Anti-Hate Approaches Today

The typical approach for combating online hate is local or platform-wide content moderation and enforcement of policies, sometimes augmented by machine learning tools. Content moderation is costly and time-consuming since most of the time, it is best done manually. Yet even when each hate incident is reviewed by a staff member, human moderators are far from infallible. Moderation can take a tremendous toll on mental health, and moderators can even come to believe the very conspiracy theories they were hired to combat. As a result, it is nearly impossible to run an effective anti-hate effort at platform scale, let alone at Internet scale.

A number of analyses have been performed, recommending different strategies and policies for social media platforms. Often, the focus is on monitoring or banning individual bad actors, or on suppressing entire ideologies on the premise that they are hateful. The former is too fine-grained to scale effectively, and in many cases, shifts in public opinion are properties of groups rather than individuals. The latter leaves the platforms open to accusations of censorship.

Photo by Alexander Shatov on Unsplash

Modeling the online social media communities as networks seems to be one of the most productive avenues of anti-hate research. Although human behavior is complex, human social networks share many properties with biological networks, computer networks, and other types of networks. Network analysis allows researchers to test different strategies for combating hate, under a simplified set of conditions.
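As a loose illustration of this kind of modeling (the tiny graph below is invented, not taken from any study), a community can be represented as a simple adjacency structure, and a candidate intervention can be scored by how it changes connectivity:

```python
# Minimal sketch: users are nodes, ties are edges, and an intervention
# (banning accounts) is evaluated by how it fragments the network.
# All account names and ties here are hypothetical.
from collections import deque

def components(adj):
    """Return the connected components of an undirected graph."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def remove_nodes(adj, banned):
    """The graph that remains after banning a set of accounts."""
    return {u: {v for v in nbrs if v not in banned}
            for u, nbrs in adj.items() if u not in banned}

# Two small communities bridged by a single account, "c".
graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
print(len(components(graph)))                       # 1: one connected group
print(len(components(remove_nodes(graph, {"c"}))))  # 2: banning the bridge splits it
```

Real studies work with far larger graphs and richer node attributes, but the workflow — build the network, apply a simulated intervention, measure the structural change — is the same.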

Novel Strategies From Johnson et al.

A particularly interesting network analysis of online hate was published by Johnson et al. in 2019, in the journal Nature. In this study, online hate is modeled as a network of networks – networks of hate with connections spanning platforms, countries, languages, and types of hate.

Johnson et al.’s modeling approach uses a somewhat different paradigm from most. They represent each cluster of hate as having a specific probability of spreading hate to another cluster, with a cost incurred when hateful content passes to a better-policed platform. They use publicly available data, focusing especially on the flow of hate between the highly monitored Facebook and the less tightly monitored VKontakte.
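A toy rendering of that paradigm might look like the following. This is not the authors’ actual model; the cluster names, link probabilities, and cost value are all invented for illustration.

```python
# Toy sketch: hate clusters linked by per-step spread probabilities,
# with a cost charged whenever content crosses onto a better-policed
# platform. All names and numbers below are illustrative assumptions.
import random

spread_prob = {
    ("vk_1", "vk_2"): 0.6,   # within the less-policed platform
    ("vk_2", "fb_1"): 0.2,   # crossing onto the better-policed platform
    ("fb_1", "fb_2"): 0.1,
}
POLICED = {"fb_1", "fb_2"}   # clusters on the heavily moderated platform
CROSSING_COST = 1.0          # cost paid each time content enters POLICED

def simulate(steps, rng):
    """Spread content outward from vk_1, tallying the cost of policed crossings."""
    infected, cost = {"vk_1"}, 0.0
    for _ in range(steps):
        for (src, dst), p in spread_prob.items():
            if src in infected and dst not in infected and rng.random() < p:
                infected.add(dst)
                if dst in POLICED:
                    cost += CROSSING_COST
    return infected, cost

infected, cost = simulate(steps=50, rng=random.Random(0))
print(sorted(infected), cost)
```

Running many such simulations under different moderation settings is one way to compare policies without touching any individual’s data.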

Photo by Jason Leung on Unsplash

Johnson et al. found that comprehensive, platform-specific policing merely leads users to migrate to less well-policed platforms, creating "dark pools" of online hate. In disguised form, the hate often spreads back to the original better-policed platform. So, for combating online hate, Johnson et al. suggest somewhat different strategies than are currently used. Based on their model, they recommend four approaches, which are largely based on isolating or fragmenting hate groups:

  1. Shut down smaller hate clusters. The study found that the sizes of hate clusters follow a power-law distribution. The rationale for shutting down clusters once they reach a certain size, rather than targeting the largest, is that removing the largest simply allows new hate clusters to re-form, whereas intervening before clusters grow too large was shown to prevent that growth. Smaller hate clusters are also easier to find, simply because there are more of them.
  2. Randomly ban hateful users. Attempting to ban all the hateful users on a platform can drive them to migrate to another platform and invite accusations of censorship. A random ban, on the other hand, need not be based on personally identifying information and can break up online hate groups. The randomly banned users can be drawn from the same country as the platform, to best leverage the social media company’s in-house legal knowledge. It is possible, however, that groups would react explicitly to a random ban; how they might react was not studied.
  3. Engage the hate clusters with anti-hate clusters. A significant number of Internet users dedicate themselves to combating hate. Engaging the hate clusters with these groups disunites them: members become less likely to take the next steps toward radicalization and may be less likely to carry out hateful real-world actions. While this approach may require less moderation on the part of the platform, the study estimated that it would take a long time to drive off the last haters. Other disadvantages are that people may refuse to engage, or their positions may harden when they encounter opposing views; the anti-hate users may themselves radicalize. These possibilities were not analyzed in the study.
  4. Engage the hate clusters with other hate clusters that disagree. This approach exploits a characteristic of many hate groups: their tendency to have serious disagreements with one another. For example, some white supremacists support a unified Europe while others oppose one. The study suggested that anti-hate users could bring white supremacists with opposing views on European unification into contact with each other; their infighting could prevent further radicalization and further growth of the hate clusters. Only some hate clusters show this property, however, and the possibility that views harden when opposing views are encountered was not studied.
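The first strategy can be illustrated with a back-of-the-envelope simulation. Every parameter below is invented, not taken from the paper: clusters grow by merging, and shutting down any cluster that crosses a modest size threshold keeps giant clusters from ever forming.

```python
# Toy coalescence model (all parameters hypothetical): clusters merge at
# random; with a threshold set, any merged cluster that reaches it is
# shut down before it can grow further.
import random

def grow(n_clusters, steps, threshold, rng):
    """Merge random cluster pairs; shut down clusters crossing `threshold`."""
    sizes = [1] * n_clusters              # everyone starts in a singleton cluster
    for _ in range(steps):
        if len(sizes) < 2:
            break
        i, j = rng.sample(range(len(sizes)), 2)
        merged = sizes[i] + sizes[j]
        sizes = [s for k, s in enumerate(sizes) if k not in (i, j)]
        if threshold is None or merged < threshold:
            sizes.append(merged)          # the cluster survives and may keep growing
        # otherwise the merged cluster crossed the threshold and is shut down
    return sizes

unchecked = grow(500, 400, None, random.Random(42))
policed = grow(500, 400, 10, random.Random(42))
print(max(unchecked), max(policed))       # early shutdowns cap cluster size
```

With the threshold in place, no surviving cluster can ever reach size 10, whereas unchecked merging concentrates members into ever-larger clusters.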
Photo by dole777 on Unsplash

Johnson et al.’s paradigm and approaches have a number of advantages. Their model and proposed strategies do not require information about individuals, of the sort that would be a privacy concern. They also do not require explicit coordination between social networks, which may view one another as industry competitors and may be concerned about the sharing of confidential information.

The Future of Combating Online Hate

While Johnson et al.’s approach looks promising, further studies under more realistic conditions are needed to evaluate its effectiveness. Relatively few studies take a network-modeling approach, and additional network models can help elucidate methods of combating online hate.

Johnson et al., for example, significantly focus on KKK hate clusters because their symbols are easily identifiable. Not all types of online hate show this property – Israel-Palestine hate, for example, can take many different forms. Analyzing other types of hate, other platforms, and other model representations can complement this study.

The moderation approaches also need to be tested in the wild – on real-world social media platforms. This can be done at a pilot scale, and if successful and economical, increasingly widely. Methods that do not require large-scale coordination between platforms, such as those recommended in Johnson et al., lend themselves better to smaller-scale testing.

Photo by Chirayu Trivedi on Unsplash

Even if Johnson et al.’s approaches or their variants work, they are not a silver bullet: they would reduce rather than eliminate online hate. Even so, cutting down online hate, like cutting down crime, keeps the Internet and the world safer and healthier for everyone.

