How Are Social Media Publishers Guaranteeing Brands Safety?

David Geithner
4 min read · Dec 20, 2021

Social media users were the first to demand control over content, but now marketers are pressuring platforms to guarantee their brands’ safety. They don’t want to pay for nullified advertising — advertising that actively brings the brand into disrepute because it appears alongside inappropriate content. An ad appearing next to controversial or objectionable content can imply support of, or unintentionally lend credibility to, that content.

Independent Arbiters and Improved Suitability Tools

TikTok, the Chinese-owned video-sharing app, has been actively taking steps to appease its advertisers. The app recently passed the 1 billion active user milestone, and while that is less than a third of Meta's platforms combined, users spend more time on TikTok than on Facebook. TikTok was the first to introduce third-party verification last year, when it partnered with OpenSlate to screen ad placement on its "For You" page, the app's main personalized feed. In September, it added a second independent ad vetter, Integral Ad Science (IAS). OpenSlate has since been acquired by DoubleVerify, which also provides digital auditing services for the advertising industry.

TikTok was also the second platform, after YouTube, to introduce adjacency controls, installing "inventory filters" that let marketers set their risk tolerance within its feed and control where their ads appear.

In November, Meta announced that it, too, will introduce third-party verification for appropriate ad placement. It has added new brand-appropriateness controls, specifically "topic exclusions," for advertisers using Facebook's News Feed, which reaches almost 3 billion users a month, and it has promised to bring the same tools to Instagram. More controls are likely to arrive in 2022, and Meta has committed to updating them continually during the first quarter. Users will also gain greater control over their News Feed content: they will be able to adjust what they see and from whom, and to disconnect, reconnect, unfollow, and snooze more easily.

Twitter has also joined the race to promise brands protection, saying it will shortly provide adjacency controls and third-party verification for its timeline. This and other algorithmically driven channels, such as TikTok's For You and Facebook's News Feed, stream personalized content to millions of users daily and are highly complicated to moderate. The vetting process represents significant overhead, since it involves classifying video, audio, and text frame by frame. While it can improve suitability, it also potentially shrinks audiences by limiting the number of places ads appear: the more topics excluded, the more users excluded. It also affects costs, by as much as 15% in the case of Facebook's News Feed controls.

Platforms Are Increasingly Being Held to Account for Harmful Effects

As a result of Frances Haugen's whistleblowing, Meta is being publicly taken to task for actions suggesting the company knew about, but ignored, the harm its apps impose on some teens and on society at large. The leaked documents show the organization knew its algorithm was promoting "malicious and divisive content," and it may face legal action as a result.

TikTok, which has a predominantly young user base, has also come under fire for exposing kids to undesirable imagery, such as content promoting extreme weight loss or drug use. Brands and agencies are now joining disenchanted users and parents in expressing their lack of faith in any platform's ability to self-regulate and self-monitor. They want every platform held to the same level of accountability and want independent arbiters to oversee the process.

Advertising industry bodies such as the Global Alliance for Responsible Media (GARM) are working with agency and brand representatives and the various platforms to establish suitability standards for topics such as hate speech and fake news. Meta has said in a blog post that it will design and test suitability tools and functionality in line with the GARM brand suitability framework.

But How Effective Are These Suitability Tools?

TikTok already cites a case study in which tech giant HP worked with OpenSlate to vet its in-feed ad campaign; 99.5% of the campaign's impressions were verified as brand-safe and suitable.

Meta allows News Feed advertisers to exclude three topic categories in their Ad Preferences settings: news and politics, social issues, and crime and tragedy. Its early testing shows 94% successful avoidance for the news and politics category, 99% for crime and tragedy, and 95% for social issues.

Changes in the way social networks are used have slowly shifted them from platforms to publishers in recent years. With this change have come louder calls for them to take greater responsibility for users' content. Social media forces brands to be more transparent, nondiscriminatory, and authentic. Now the platforms themselves must do the same.


David Geithner

David Geithner is a senior finance executive who draws upon nearly three decades of experience to serve as EVP and COO, IMG Events and On Location.