Setting New Standards
Facebook has made a raft of changes to its advertising policies. Not big news, you’d think, but under the surface, it reveals a complex web of issues now facing the web giant. Has the social network found a happy balance between revenue and responsibility?
In response to the rise of fake news and the use of social media to promote hate speech, Chief Operating Officer Sheryl Sandberg stated that “last week we temporarily disabled some of our ads tools following news reports that slurs or other offensive language could be used as targeting criteria for advertising”. These policy changes will have a widespread effect on internet advertising, given that Facebook and Google account for around two-fifths of online ads.
The immediate, automated nature of Facebook advertising has drawn criticism from marketing executives, with major advertisers claiming that their adverts were sporadically placed alongside objectionable content. In future, content creators will have to comply with “community standards”, which seek to ensure that content is not offensive and adheres to Facebook guidelines. Creators and publishers who post fake news, or who give their content “clickbait” or sensationalist titles, could become ineligible to profit from Facebook advertising.
Similarly, Facebook will stop advertisers from targeting customers using topics that go against the company’s standards. Sandberg noted that under the new guidelines, targeting categories that include attacks on people based on ethnicity, religious affiliation, sexual orientation or disability are not acceptable. Facebook also aims to employ more people to catch offenders.
Facebook found itself embroiled in scandal when ProPublica reported that companies could place ads in front of users whom Facebook’s behavioural data identified as “Jew haters”. These categories had been created by Facebook’s automated system. To test whether the ad categories were real, ProPublica paid $30 to target individuals within hate-based groups. Facebook approved the ads within 15 minutes.
The automated nature of online advertising has prompted criticism of how ads target individuals, and of how products and services can at times appear next to videos promoting hateful messages. In the wake of violent protests by right-wing groups in Charlottesville, Facebook and other companies have vowed to strengthen their monitoring of hate speech.
The fast-growing, ever-changing nature of digital content continues to be a minefield for advertisers. With an estimated 300 hours of video uploaded to YouTube every minute, attempts to automate monitoring and advertising systems at this scale have inevitably resulted in ads appearing next to videos their owners do not approve of. Clearly, advertisers want more control over where their ads appear online. The lightning-fast growth of online advertising is still experiencing growing pains, and the cure may not be as simple as some would like.