Content moderation and advertising in social media platforms

The paper “Content moderation and advertising in social media platforms”, published in the Journal of Economics and Management Strategy by Leonardo Madio (Department of Economics and Management, University of Padova) and Martin Quinn (Rotterdam School of Management, Erasmus University, Rotterdam), studies the delicate trade-off faced by social media platforms that make money primarily from advertising and host user-generated content.

The core issue is the presence of unsafe content: content that is not necessarily illegal but may be harmful or controversial, and that might appear next to advertisements, creating a brand-safety risk for the companies paying for those ads. For example, after Elon Musk took over Twitter/X, the world’s largest media buyer, GroupM, labeled the platform as “high risk” for advertisers, and many brands, including Balenciaga, paused or stopped their ads. Similar concerns arose on YouTube in 2018, causing a major loss of advertisers in what became known as the “Adpocalypse.”

Think of the platform as operating in a “two-sided market” connecting two groups: users who spend time on the site (often for free) and advertisers who pay the platform to reach those users.

Here's the conflict: Advertisers want their brands to appear in a safe environment and are less willing to pay if their ads sit next to unsafe content. Users, on the other hand, consume the content and may either enjoy or dislike the unsafe part of it.

The platform makes money from advertisers and decides how much unsafe content to remove (its moderation policy) and how much to charge advertisers. The platform faces a trade-off when deciding how much to moderate: more moderation makes the site safer for advertisers, which is good for the platform's revenue, but it might drive away users who enjoy unsafe content.
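To make the trade-off concrete, here is a deliberately stylized numerical sketch, not the paper's model: moderation is assumed to shrink the user base linearly while raising the per-user ad price linearly, and all parameter values are purely illustrative.

```python
import numpy as np

# Hypothetical illustration of the moderation trade-off (not the paper's model):
# m in [0, 1] is the share of unsafe content the platform removes.
n0, n1 = 1.0, 0.6      # user base N(m) = n0 - n1*m: shrinks as moderation rises
p0, p1 = 0.2, 1.0      # ad price per user p(m) = p0 + p1*m: rises with brand safety

m_grid = np.linspace(0.0, 1.0, 1001)
users = n0 - n1 * m_grid
price = p0 + p1 * m_grid
profit = users * price          # revenue comes only from advertisers

m_star = m_grid[np.argmax(profit)]
print(f"profit-maximizing moderation level: {m_star:.2f}")
# With these illustrative numbers the platform moderates only partially (about 0.73),
# balancing advertisers' brand-safety concerns against the loss of users.
```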

What are the main findings?

The paper finds that because the platform's revenue comes from advertisers, its moderation choices are heavily influenced by advertiser needs, but also by the platform's need to keep users. As a result, the platform may not moderate at the level that would be best for society as a whole: depending on user and advertiser preferences and on the degree of brand-safety risk, it may moderate "too little" or "too much."

What are the policy implications?

When governments mandate stricter content moderation (for example through the EU's Digital Services Act, which requires greater accountability from online platforms), platforms typically respond by raising their advertising prices. This generally benefits advertisers by enhancing brand safety, as their ads are less likely to appear next to risky content. For users, however, the outcome is more nuanced: while they might see less genuinely harmful content, they would also be exposed to more ads, which can worsen their experience.

Government taxes on social media platforms also have distinct implications depending on their design. A tax on advertising revenues can reduce a platform's incentive to moderate content, making it less strict, because the tax decreases the profitability of attracting advertisers. Conversely, a tax based on the number of users a platform has can sometimes lead to more content moderation, particularly if it prompts the platform to prioritize attracting advertisers who prefer a safer environment.

Finally, increased competition among social media platforms for user attention can unexpectedly lead to looser content moderation policies. Platforms might become less strict about what is allowed to avoid losing users to rivals, especially if those users prefer less-filtered content. This competitive dynamic can create a "race to the bottom" in online safety, potentially conflicting with the goal of fostering a secure online environment.