Facebook is developing tools that advertisers can use to keep their ad placements away from certain topics in its news feed.
The company announced that it would begin testing topic-exclusion controls with a small group of advertisers. For example, a children’s toy company could avoid “Crime & Tragedy” content if it wished. Other exclusion topics include “News & Politics” and “Social Issues”.
The company said it would take “much of the year” to develop and test the tools.
Facebook, along with players like Google’s YouTube and Twitter, has worked with marketers and agencies through a group called the Global Alliance for Responsible Media (GARM) to develop standards in this area. Together they have worked on measures to support “consumer and advertiser safety,” including establishing definitions of harmful content, reporting standards, and independent oversight, and agreeing to develop tools to better manage ad adjacency.
Facebook’s news feed tools build on controls that already run in other areas of the platform, such as in-stream videos and the Audience Network, which lets mobile software developers deliver in-app advertisements to users based on Facebook data.
The concept of “brand safety” matters to any advertiser who wants to ensure its ads don’t appear next to certain topics. But the advertising industry has also put increasing pressure on platforms like Facebook to make them safer overall, not just in the vicinity of ad placements.
The CEO of the World Federation of Advertisers, which founded GARM, told CNBC last summer that the industry was shifting from “brand safety” toward a broader focus on “societal safety”. The point is that even if ads don’t appear in or next to certain videos, many platforms are essentially funded by advertising dollars; in other words, ad money helps subsidize all of the content on a platform, including content ads never run against. Many advertisers say they feel a responsibility for what happens on the ad-supported web.
This became especially evident last summer, when a number of advertisers temporarily pulled their advertising dollars from Facebook and urged it to take stronger steps to stop the spread of hate speech and misinformation on its platform. Some of these advertisers wanted more than to keep their ads away from hateful or discriminatory content; they also wanted a plan to ensure such content was removed from the platform entirely.
Twitter said in December that it is working on its own in-feed brand safety tools.