Why is advertising by big brands appearing alongside inappropriate content such as extremist videos?
As odd as it may sound, many brands do not know exactly where their online advertising is running. Digital advertising has become increasingly automated, with machines rather than people deciding where on the internet ads should appear. This process is called programmatic advertising.
So what is programmatic advertising?
Think eBay, but quicker and more advanced. Until relatively recently, for an ad campaign to appear – on TV, radio or in print, for example – it would be booked by sales teams and ad agencies picking up the telephone and striking a deal over where it would go, when it would run and how much it would cost. The rise of digital media means this is rapidly being replaced by computerised, or programmatic, advertising systems, in which the parties transact digitally, much as buyers and sellers do on the auction site eBay.
How does it work?
Media owners, such as YouTube and many thousands of other publishers, make their advertising slots available within the programmatic system for advertisers to bid on. This process is handled through digital trading desks used by media agencies, which plan, book and execute campaigns on behalf of their clients. These desks connect with ad exchanges such as Google-owned AdX to run ads around media such as videos on YouTube. Google also delivers ads to many other third-party sites.
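The eBay comparison can be made concrete. The sketch below is a minimal illustration of a second-price auction of the kind used in programmatic exchanges, where the highest bidder wins an ad slot but pays the runner-up's price; the names `Bid` and `run_auction` are invented for the example and do not reflect any real exchange's API.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount_pence: int  # bid per thousand impressions (CPM), in pence

def run_auction(bids):
    """Simplified second-price auction: the highest bidder wins the
    slot but pays the second-highest bid, as in eBay proxy bidding."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.amount_pence, reverse=True)
    winner = ranked[0]
    # With a single bidder there is no second price to fall back on
    price = ranked[1].amount_pence if len(ranked) > 1 else winner.amount_pence
    return winner.advertiser, price

result = run_auction([
    Bid("car_brand", 250),
    Bid("holiday_firm", 180),
    Bid("charity", 120),
])
print(result)  # ('car_brand', 180)
```

All of this happens in the milliseconds before a page or video loads, which is why no human is checking each placement.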
What is going wrong?
The key is a fundamental shift in how ads are targeted: digital systems now aim messages at audiences rather than at places, whether the message is selling a car, a holiday deal or a charity appeal. Previously, advertisers knew the environment where their ads would run online – targeting readers of the Guardian website, for example, because they fit a particular demographic. With programmatic buying, there is a wealth of data on audiences but little visibility of the specific website or content that audience might be visiting. So a jihadi video might appear to offer a valuable audience based on, say, age data, but in reality be nothing of the sort.
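The gap described above can be pictured as the shape of the information the buyer actually sees. This is purely illustrative – the field names are invented for the sketch, not any real exchange's schema:

```python
# Illustrative only: what a programmatic buyer sees at bid time.
# Rich audience data, but no meaningful page or video context.
bid_request = {
    "audience": {"age_range": "18-24", "interests": ["news", "politics"]},
    "placement": {"site": None, "video_content": None},  # unknown to the buyer
}

# The buyer bids on the audience profile alone; at this point an extremist
# video and a legitimate news clip can look identical.
worth_bidding = bid_request["audience"]["age_range"] == "18-24"
print(worth_bidding)  # True
```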
Why is Google the villain here?
With great power comes great responsibility. Or, as Google’s critics say issues such as this prove, a total lack of the latter. Google – and Facebook, of which more later – have a near-duopoly in the digital advertising market. The two Silicon Valley giants control almost 60% of the £11bn UK digital ad market, according to eMarketer, and almost 90p of every new £1 of digital ad spend is going to these two players. Programmatic advertising has gone from zero to almost 80% of the £3.3bn spent on the display advertising part of the market. As well as raking in the cash, Google provides much of the infrastructure that delivers digital advertising. “Google provides most of the plumbing that enables programmatic,” said Scott Moorhead, founder of media consultancy Aperto One. “Google is not controlling the inventory coming in well enough in advance of making it available. And buyers of ads for clients, the media agencies, are not vetting it themselves.”
What was that about more on Facebook?
The programmatic advertising furore is the latest in a string of issues that have put the spotlight on the digital advertising market, which had hitherto been viewed as providing brands with the most accurate and measurable means of reaching consumers. Last year, a damning study found that a potentially vast number of the ads that brands were being charged for were in fact viewed by “bots”, computer programs that mimic the behaviour of internet users. This was followed by Facebook admitting to a string of measurement errors, such as miscounting how many people were watching its videos. Sir Martin Sorrell, chief of the world’s largest marketing group, WPP, said the issue was akin to Facebook “marking its own homework”. Keith Weed, marketing chief at Unilever, which owns brands including Dove and Lynx, said the lack of transparency around the efficacy of digital ads was akin to having “billboards underwater”. More recently, Facebook and Google have been taken to task for not cracking down on fake news, which came to prominence during the US election.
What are YouTube’s policies on advertising and controversial material?
Google knows that advertisers don’t like their brands appearing next to a whole host of controversial topics and tries to head off problems like this before they occur with its “advertiser-friendly content guidelines”.
“Content that may be acceptable for YouTube under YouTube policies may not be appropriate for Google advertising,” the site warns film-makers, before reeling off a long list of content which it would consider inappropriate, including (but not limited to): sexually suggestive content; violence; inappropriate language; promotion of drugs; and “controversial or sensitive subjects and events”, including “war, political conflicts, natural disasters and tragedies”.
How does YouTube enforce those policies?
The video platform says it uses “technology and policy enforcement processes” to determine whether a video is suitable for advertising. A substantial portion of the work is done automatically, by scanning the video title, metadata and imagery to try to get a sense of how appropriate the video is.
As well as the automatic tools, Google also relies on a crowdsourced approach, asking its users and advertisers to flag up content they consider inappropriate. That then undergoes manual review, which can result in the advertising being pulled from the video, or the video being removed. But controversial videos with narrow audiences – such as a piece of Britain First propaganda with fewer than 20,000 views – often will never reach users who consider the content controversial, limiting the usefulness of such an approach.
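The enforcement flow described above – an automated pre-screen, then crowdsourced flags routed to manual review – can be sketched roughly as follows. This is a speculative simplification: the function names, the keyword list and the decision logic are invented for illustration, not YouTube's actual system.

```python
# Stand-in keyword list; the real automated screen is far more sophisticated
SENSITIVE_TERMS = {"war", "violence", "drugs", "extremist"}

def ad_suitable(title, metadata):
    """Automated pass over title and metadata only -- no human viewing."""
    text = f"{title} {metadata}".lower()
    return not any(term in text for term in SENSITIVE_TERMS)

def review_pipeline(video):
    """Flagged videos go to manual review; unflagged ones keep their ads."""
    if not ad_suitable(video["title"], video["metadata"]):
        return "ads_withheld"      # caught by the automated screen
    if video["flags"] > 0:
        return "manual_review"     # humans decide: pull the ads or the video
    return "ads_running"           # the narrow-audience gap the article notes

print(review_pipeline({"title": "Travel vlog", "metadata": "", "flags": 0}))
# → "ads_running"
```

The last branch is the weakness the article identifies: a video that no viewer flags, and whose metadata gives nothing away, sails through with its ads intact.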
What Google doesn’t do is manually check every video for controversial content. To do so would be a mammoth task: 300 hours of video are uploaded to the site every minute, which would require more than 50,000 full-time staff doing nothing but watching videos for eight hours a day.
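That staffing estimate follows from simple arithmetic, which can be checked directly:

```python
# 300 hours of video uploaded every minute, around the clock
hours_uploaded_per_day = 300 * 60 * 24   # = 432,000 hours per day
reviewer_hours_per_day = 8               # one full-time reviewer's shift
staff_needed = hours_uploaded_per_day // reviewer_hours_per_day
print(staff_needed)  # 54000 -- i.e. "more than 50,000 full-time staff"
```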
What happens to film-makers who break the rules?
The current system was introduced in September 2016 and rapidly attracted controversy among YouTubers, many of whom rely on the site as their sole source of income. The automated system errs on the side of caution: film-makers are often forced to appeal against false positives, and funding has been cut from film-makers working on subjects such as LGBT history and even skincare for acne sufferers.
Hank Green, one of the site’s biggest stars, lost advertising on two videos at once: Zaatari: Thoughts from a Refugee Camp, and Vegetables that look like Penises.
guardian.co.uk © Guardian News & Media Limited 2010