Facebook is working on topic controls for advertisers

Facebook is building tools to help advertisers keep their ad placements away from certain topics in its News Feed.

The company said it will start testing “topic exclusion” controls with a small group of advertisers. It said, for example, that a children’s toy company could avoid content related to “crime and tragedy” if it wanted to. Other topics will include “news and politics” and “social issues”.

The company said that the development and testing of the tools would take “much of the year”.

Facebook, along with players like Google’s YouTube and Twitter, has been working with marketers and agencies through a group called the Global Alliance for Responsible Media, or GARM, to develop standards in this area. They are working on measures that support “consumer and advertiser safety”, including draft definitions of harmful content, reporting standards, independent oversight and an agreement to create tools that better manage ad adjacency.

The tools for the Facebook News Feed build on tools that already run in other areas of the platform, such as in-stream video or its Audience Network, which lets mobile software developers deliver in-app ads targeted to users based on Facebook’s data.

The concept of “brand safety” matters to any advertiser that wants to make sure its ads don’t appear near certain topics. But there has also been a growing push from the ad industry to make platforms like Facebook safer overall, not just in the slots adjacent to their ads.

The CEO of the World Federation of Advertisers, which created GARM, told CNBC last summer that the effort marked a shift from “brand safety” toward a broader focus on “societal safety”. The crucial point is that, even when ads don’t appear in or alongside specific videos, many platforms are funded substantially by advertising dollars. In other words, ad-supported content helps subsidize the content that carries no ads. And many advertisers say they feel responsible for what their advertising dollars fund on the web.

This became very clear last summer, when a series of advertisers temporarily pulled their advertising dollars from Facebook, pressing it to take more stringent measures against the spread of hate speech and misinformation on its platform. Some of these advertisers didn’t just want their ads kept away from hateful or discriminatory content; they wanted a plan to ensure that such content is removed from the platform entirely.

Twitter is working on its own brand safety tools for the feed, the company said in December.
