Enhancing Brand Safety: Understanding Self-Regulation vs. Independent Industry Self-Regulation

Jan 11, 2022 by Eric D. Reicin, President & CEO, BBB National Programs

With copious amounts of content proliferating across a growing number of platforms and websites, it is an ongoing challenge for advertisers and platforms to ensure that digital ads are not placed next to harmful content.

This challenge took center stage at the beginning of the pandemic, when life moved even more online. Advertisers worried about their brands being associated with disinformation around Covid-19, and soon after, concerns shifted to alarm about ads appearing next to hate speech on social media and news platforms. The World Federation of Advertisers (WFA) established the Global Alliance for Responsible Media (GARM) to begin peeling back the layers of this challenge, starting with a framework for defining safe and harmful content online and a way for advertisers to assess brand safety on various platforms. Platforms are providing GARM data to shine a light on their efforts.

Platforms are devoting considerable funds and personnel to this seemingly insurmountable challenge but have not yet solved it. Published reports suggest that Facebook has spent $13 billion and employs 40,000 people to keep users safe on its platform. Though this is not an entirely new problem (advertisers have for decades used clauses keeping their ads away from breaking news stories), what we are now seeing is a growing focus on these issues and a monumental struggle to address them.

On more than one occasion, this focus has expanded to include speculation on the “failure” of self-regulation. Stephan Loerke, CEO of the WFA, recently published an op-ed on this subject, writing, “Frankly speaking, the self-regulatory approach alone is failing when it comes to the digital ad market. This frustrates our ability to be better marketers, it impacts the reputation of companies and, most importantly, it can end up causing damage to society.” 

In a separate context, but related to this narrow perception of self-regulation, the Chairman of the House Energy and Commerce Committee, U.S. Rep. Frank Pallone (D-NJ) said to big tech company leaders in a recent hearing, "Your business model itself has become the problem. The time for self-regulation is over. It is time we legislate to hold you accountable.” 

In this conversation, there is a key distinction few are making — the difference between ‘self-regulation’ and ‘independent, industry-wide self-regulation.’

Self-regulation, as some are discussing it, is when companies monitor themselves without much guidance from regulators or the industry. Company compliance programs are one example of such “self-monitoring,” the most practiced form of self-regulation.

By contrast, independent industry self-regulation, when done right, provides the active collaboration between industry and regulators necessary for credible accountability. It is a system with three critical components built in: industry-wide accountability to agreed-upon guidelines, independent monitoring, and coordinated enforcement mechanisms.

As the CEO of an organization that builds industry self-regulation programs, I think the best “systems” of industry self-regulation are developed by the companies in the industry, deciding together how to solve the challenges they collectively face and finding the appropriate third party to hold them accountable for the guidelines and standards they agree upon. Industry self-regulation programs often rely on and align with the Federal Trade Commission and other government agencies and industry codes to strengthen their ability to act, particularly when it comes to enforcement.

On a large scale, an independent self-regulatory body could play a similar role to address disputes over harmful online content. Of course, the First Amendment and Section 230 limit the ability of the government to perform the same backstop function for harmful online content as it does with false advertising. And while creating that type of accountability mechanism might be seen as a heavy lift, it is prudent to keep in mind that the First Amendment hurdles to government action can be considerably lower when children are the audience.

In the meantime, while business leaders are wise to keep one eye on brand safety challenges in the headlines and in conversations on Capitol Hill, they should watch what is happening at the FTC, where the Chair recently outlined eight priority enforcement areas for the year ahead.

Here are two to pay attention to: 

  • Acts or practices harming children: From the FTC’s perspective, this is an escalation of an existing focus on this space. It applies to everything from the data companies collect from children to the way kids are targeted. The FTC may try to argue that when platforms use algorithms that promote — not simply disseminate — harmful content to minors, platforms effectively become the speaker.
  • Deceptive and manipulative conduct on the internet: The FTC put advertisers on notice that if they are deceiving consumers with misleading endorsements, they will be held accountable. Additionally, the FTC announced its intention to increase enforcement against illegal dark patterns that trick consumers into subscriptions. As applied to children and teens, FTC action may be even more vigorous.


The FTC’s priority enforcement areas are not a surprise to me. They reflect a consistent call over the years for companies to incorporate a consumer-first approach into the way they design products, collect data and advertise.

These enforcement areas align with existing industry self-regulation guidelines for advertising and for the children’s space. Indeed, these FTC enforcement areas are a demonstration of how independent industry self-regulation can work to protect consumers and enhance trust and fairness in the marketplace.

Originally published on Forbes.
