A great experience for whom and at what cost?

Four steps to address Meta's flawed political content policy

In February, Meta announced an update to their approach to what they recognize as political content on Instagram and Threads: they will no longer recommend anything they identify as political content. What they call an “approach” we recognize as a broad-based and clumsy elimination of information, much of which is vital to effective public discourse.

The social media giant says this shift is a continuation of its approach to political content and will help “make a great experience for everyone.” But information that fits their definition of political content is far-reaching, including anything related to voting, elections and social issues. Meta defines social issues very broadly, offering few details beyond high-level topics that include civil rights, education, environmental politics, health and immigration, among others. This change likely means that Meta isn’t recommending any content that nonprofits and foundations post about social issues, and that new audiences won’t be seeing the information you share. Ahead of a consequential election, this could have serious implications for get-out-the-vote efforts and organizing at state and local levels.

We all know that social media can be a confusing and ugly place. Cat videos mingle with hate speech. Mis- and disinformation spread like wildfire and breed fear and division. But social media also has radically changed how we engage, educate and activate audiences to spark change and build more just futures. We’re seeing this play out in real time as people share updates from Gaza, Haiti and Sudan and call attention to the fighting and heartbreak widely underreported by mainstream news media. And that’s why this shift in Meta’s approach is unacceptable.

Meta asserts that this shift is part of their strategy to limit the spread of disinformation on their platforms, a claim that is suspect at best. Over the last two years, Meta’s leaders have fired hundreds of employees responsible for content moderation, even amid growing incidents of violence fueled by transphobia, antisemitism and Islamophobia. These leaders made cuts to Meta’s election security team, despite the January 6 insurrection and a major party’s continued march toward authoritarianism. Flagging content for removal, one of the best ways people online could fight disinformation, has lost its potency. Meta leaders’ decision to limit paid content makes it impossible for organizations to buy ads using words like “abortion” or “Gaza.” Content creators, especially Black and queer creators, have accused the platform of shadow banning their content, meaning the algorithm deprioritizes it until it’s basically invisible.

We stand alongside advocates who seek an approach that addresses mis- and disinformation, but this effort from Meta is a poor substitute for an actual policy. Meta’s cursory claim that this policy will promote effective discourse is nothing but a bid for positive attention. Any social media approach must be nuanced enough to curb the biggest threats without stifling public discourse. The way to change an increasingly partisan and divisive country is not to shut down conversation or eliminate content that challenges the status quo, and especially not to remove vital information on voting and elections. Meta, as a public square for at least 120 million Instagram users in the U.S., has a responsibility to safely and conscientiously moderate content in ways that help democracy endure and ensure accurate information is accessible to voters.

So what can communicators do about it? 

  1. Spread the word about the change and share how users can update their preferences to continue to see political content. The Washington Post’s Instagram post about the change outlines the simple steps to take. 

  2. Put pressure on platforms to take immediate action to ensure they will allow broad sharing of important GOTV information ahead of the upcoming election.
     
    • Voice your concerns to Meta representatives that support your organization’s ad buys.
    • Consider organizing a coalition sign-on letter to reverse efforts to curb content about social issues. 
    • Urge members of Congress to take action and mobilize grassroots supporters to do the same.
       
  3. Learn more about how disinformation spreads online and how your organization can help limit it by visiting JustTruthGuide.org.
     
  4. Bolster efforts to reach audiences through different channels. We can’t be too reliant on privately owned platforms to reach our audiences and effect change. Remember to use the full gamut of tried-and-true organizing and communication tactics. Leverage trusted messengers and build connections and trust with the communities you’re trying to reach for your campaigns.

Although this abrupt change in policy is concerning, it’s not set in stone. Social media tech giants are known to clarify, update or even change course in response to public outcry. For example, prolonged and concerted criticism from organizations and individuals has forced Facebook to change its privacy settings and “real name” policy in the last decade. Even Elon Musk updated what displays in the For You feed on X after many users spoke up.

No information is not the same as good information, and organizations and activists must be able to discuss social and political issues on the social media platforms that are so central to our digital lives.


This entry was posted on Wednesday, April 10, 2024 at 11:18 am and is filed under Combating disinformation and Digital strategy.