Social Media and Political Turmoil

Protesters and counter-protesters square off and descend into violence as arrests are made. A flash mob assembles and ransacks a major retailer, making off with thousands of dollars in merchandise. Observers look on as a hastily organized rally spouts misinformation and hate speech. An active shooter appears on campus.

What do all these situations have in common? The likelihood that they were at least partially organized or inspired by social media and online communities.

Whether you’re a school administrator or a police chief, you know your community is more online than ever – and also more volatile, with online anger spilling into the real world more often. 

But what’s the extent of the problem, what are its leading causes, and what can companies do to keep their online communities safe? We answer these questions and more below.

Living in the Age of Online Volatility

A recent Anti-Defamation League (ADL) survey found that around one-third of Americans experienced online hate in 2023, up from 23 percent the year prior. The trend is even starker for teens, 51 percent of whom reported experiencing online hate this year (up from 36 percent in 2022).

“Online hate and harassment is a really serious problem,” said the ADL’s Jordan Kraemer in USA Today. “Even when it stays online, it’s hugely damaging.”

The poll found that Facebook users experience the most frequent harassment, with 54 percent of respondents reporting it (down from 66 percent in 2021). But other social platforms have seen year-over-year increases in hate and harassment in 2023:

  • Reddit: Increased from 5 percent to 15 percent 
  • TikTok: Increased from 15 percent to 19 percent
  • X (Twitter): Increased from 21 percent to 27 percent

The trend is unmistakable: Hate speech and bad behavior online are prevalent and, in most cases, increasing.

The news aggregator AllSides illustrates this growing divide by explicitly categorizing media outlets along the political spectrum (left, center/moderate, and right).

We already know that news content helps drive divisions on social media. If a group gets wound up over a particular news story, they’ll click, share, and post about it, fueling already-existing divisions and animosity. 

Why Is This Happening?

There are three main reasons why threat levels in digital communities are on the rise: 

  • Increasing social and political turmoil in the real world
  • Reduced content moderation by Internet companies
  • Increasing amounts of time spent online 

Let’s break each causal factor down in more detail.

Increased Social and Political Turmoil

You’d be forgiven for coming away with the impression that the entire world is angry after scrolling social media or news feeds.

Whether it’s the war in Ukraine, the Israel-Hamas war, the Rohingya genocide, the growing China-U.S. rivalry, trans and LGBTQ issues, diversity issues, or election denialism in the U.S., the tension online – and the misinformation that so often feeds it – is palpable.

Even lecturers and university professors have recently engaged in questionable and, in some cases, criminal behavior over issues of global politics.

While much of this turmoil is due to real-world problems, it’s impossible to deny that the online world amplifies much of today’s brand of anger.

And all that anger is a feature, not a bug. Studies show that surfacing content that stirs deep emotions – like anger – is baked into the design of social media platforms. Social media rewards people for sharing misinformation through likes, comments, and shares from like-minded people. And newsfeed algorithms amplify the content that gets the most reactions.

This kind of manipulative growth hacking is all you need to create a vicious circle of wound-up and angry content (and users). 
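
To see how that feedback loop works mechanically, here is a minimal illustrative sketch in Python. The post data and scoring weights are entirely hypothetical – no platform publishes its real ranking formula – but the pattern of weighting high-effort reactions more heavily is the one researchers describe:

```python
# Minimal sketch of an engagement-weighted feed ranker. The weights and
# post data below are hypothetical, not any platform's real values.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Higher-effort reactions (comments, shares) are weighted more heavily,
    # so content that provokes people rises fastest.
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0

posts = [
    Post("Calm local news update", likes=120, comments=4, shares=2),
    Post("Outrage-bait hot take", likes=80, comments=95, shares=60),
]

# Rank the feed: whatever provoked the most reactions comes first,
# regardless of its accuracy or tone.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Run it, and the outrage-bait post outranks the straight news update by more than three to one – not because anyone chose anger, but because anger generates reactions and reactions drive the ranking.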

Reduced Social Media Moderation 

None of this is a big deal, you might tell yourself. Social media platforms have moderation teams that scan and eliminate hateful content.

On that count, you’d be wrong. Or at least partially wrong. 

That’s because the increase in hate-related content has coincided with notable reductions in moderation staff at social media companies:

  • After cutting more than 20,000 staffers in late 2022 and early 2023, Facebook now relies mainly on automated content moderation systems.
  • Data from USC, UCLA, UC Merced, and Oregon State show that hate speech on X nearly doubled after its current owner took over the company; X has reduced its overall workforce by nearly 75 percent since October 2022.
  • According to the ADL, only 28 percent of antisemitic tweets reported by the organization were removed or sanctioned.
  • The Reddit moderator strike led to the replacement of many of the site’s former volunteer moderators, which critics say has led to safety concerns for users.

It’s no wonder some critics have predicted that the age of social media is ending.

Time Spent Online Is Growing

The final ingredient in this long-festering online cocktail is the sheer amount of time most of us now spend online. 

More than 90 percent of U.S. households now have internet access. And during the first quarter of 2023, the average time spent online was nearly seven hours per day – the highest daily average recorded over the past year.

Indeed, people now spend far more time engaging and organizing themselves online than ever before. That’s generally a good thing, but not when unruly, law-breaking mobs appear seemingly out of nowhere based on an influencer’s social media post.

Even more disturbing is the connection between social media activity and radicalization. According to Profiles of Individual Radicalization in the United States (PIRUS) data, social media was a factor in the radicalization of nearly 90 percent of those identified as extremists.

Tracking these users and events is crucial for campus and community safety organizations, facilities managers, and compliance officers.

What Can Companies Do to Keep Their Online Communities Safe?

Organizations can be held responsible if public threats are made and then carried out – especially if the organization failed to detect the threat or ignored it.

When threats are made, or community members engage with your social channels negatively or dangerously, your organization needs to know – and quickly.
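
What does knowing quickly look like in practice? At its simplest, automated monitoring flags risky posts for human review. The short Python sketch below is purely illustrative – a hypothetical keyword watchlist and toy posts, not Fullintel’s actual technology – but it shows the basic shape of that first, automated pass:

```python
# Purely illustrative keyword-based flagging (hypothetical watchlist and posts;
# real monitoring services layer human analysts and far richer signals on top).
THREAT_TERMS = {"shoot", "bomb", "attack", "kill"}

def flag_for_review(posts):
    """Return posts containing any watchlist term, for escalation to a human analyst."""
    flagged = []
    for post in posts:
        words = {word.strip(".,!?").lower() for word in post.split()}
        if words & THREAT_TERMS:
            flagged.append(post)
    return flagged

incoming = [
    "Great game on campus tonight!",
    "I'm going to attack the rec center at noon.",  # hypothetical threatening post
]

for post in flag_for_review(incoming):
    print("ESCALATE TO ANALYST:", post)
```

Keyword matching alone misses coded language and buries analysts in false positives, which is exactly why credible monitoring pairs automation with trained human reviewers.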

Using Fullintel’s tried-and-tested PR crisis and event reporting processes, our new Issues Management Service can help. It provides 24/7/365 monitoring of any social channel, digital community, or media source. Media analysts then put their listening into action by proactively informing stakeholders – including your teams, campus police, or emergency services – about incoming risks.

The service has already won an AMEC Award for helping to keep a U.S. postsecondary institution secure via real-time social media monitoring and threat assessment. 

For more information on Fullintel’s Issues Management Service, please contact your account representative or reach out to us to request a 30-minute, interactive demo.

Jim Donnelly is a former journalist and content marketing and communications consultant who works with clients across a range of industries, including mobile technology, IT security, enterprise IT consulting, and media monitoring and intelligence. His previous roles have included Chief Media Officer at Canada’s largest IT and technology association, a director at a major media intelligence firm and, prior to that, editor-in-chief at a regional business publication.