Op Ed, Drop Dead!

The rise of the opinion columnist is one side effect of the decline of newspaper sales. But are ‘op-eds’ (opinion columns) really a substitute for genuine journalism? What are the dangers of the new ‘opinion economy’, and can they be avoided?

There’s little doubt that newspapers are dying. Late last year, a ‘briefing’ in The Week summed up the parlous state of the British press. 2016 was a bad year: The Independent ceased to exist as a print publication, and went online; while a new title called New Day, launched by Trinity Mirror, one of our leading newspaper groups, lasted just two months. The Sun, formerly the country’s most popular newspaper, has seen its circulation halved over the past six years; while the same has occurred at The Guardian, the leading liberal broadsheet. Since 2005, more than 300 UK local newspapers have closed.

UK Newspaper Circulation (Wikipedia)

Perhaps predictably, young people are the least likely age group to read newspapers, while over-55s are the most likely. A YouGov report from a few years ago predicted that newspaper readership would die out with the older generation (although it didn’t actually compare readership over time). Even so, the report found that young people were the most inclined to seek news online, including from newspaper websites: the problem, it seems, is not so much with people’s desire for news itself, but with the form of the printed newspaper.

The same appears to be true elsewhere, at least in Europe and the USA. John Oliver’s excellent satire of the situation in the US summed up some of the consequences for journalism. As its parody of the Spotlight trailer implied, the chances of old-style investigative reporting surviving in a much more commercially-driven, cost-cutting environment are pretty minimal. A few years ago, Nick Davies’ book Flat Earth News made some similar points through its detailed study of the changing working practices of British journalists. And as the Guardian’s Katharine Viner has argued, ‘clickbait’ and ‘churnalism’ – the recycling of press releases designed to attract attention online – are bound to take precedence over serious journalism.


Spotlight: A Different World?

The internet is generally seen as the culprit here, although (as The Week explained), the situation is more complex. The advertising on which newspapers depend has increasingly moved online, and online advertising on newspaper sites doesn’t make enough money to offset the decline in revenue. Most online advertising (and commercial data gathering) is in the hands of technology companies like Google and Facebook; while smaller ‘classified’ ads have now migrated to specialist websites. While some newspapers have resorted to paywalls – most notably The Times – this significantly reduces traffic, and tends to work better for upmarket specialist papers like The Financial Times.

While we might hope that journalism – across a variety of media – will survive, it’s quite possible that the physical form of the printed newspaper will not (although you might not think that as you wade through the piles of discarded freesheet papers on the London underground at the end of an evening). However, the history of media suggests that something more complex might occur. New media do not necessarily replace old media: what more often happens is that the functions of those old media – the reasons for which we use them – are subtly changed. As our repertoire of media choices widens, we tend to turn to particular media for more specific purposes.

In a world of instant communication, newspapers are less and less useful as a medium for news – that is, as a means of finding out about events that are happening right now. When there is a major news event, most of us turn to radio and 24-hour rolling news channels, and now to social media. We might read all about it the following day in the newspaper, but what we expect from newspapers now (and what we get) is rather different.

As a result, the newspaper is increasingly becoming a medium, not so much of news, but of commentary on news. More disturbingly perhaps, it is becoming a medium of opinion rather than a medium of information. I don’t have any research on this in relation to British newspapers, but my sense is that the proportion of commentary has grown while the proportion of news reporting has declined. Most newspapers continue to run ‘editorial’ columns: generally a single column, without a named author, that is seen to represent the newspaper’s position on the issues of the day. These seem to have changed very little over the years. However, what has changed is the number of authored commentaries, where a named journalist will produce a regular (often weekly) column on a range of issues that have come to their attention during the week – a practice that in the US is often called the ‘op ed’ (the page opposite the editorial). While some are focused on particular areas of content – politics, social issues, ‘women’s issues’ – many op eds seem to cover a wide range of topics.


Of course, this isn’t a new development: it goes back to the origins of the newspaper as a medium. Even so, the amount of ‘opinion journalism’ has grown over time, and the boundaries between news and opinion have become steadily blurred. With the advent of social media, we have seen the emergence of a kind of ‘opinion economy’, in which competing providers vie for readers’ attention (and the advertising revenue that follows it) by offering ever-stronger doses of opinionated prose. The claim that ‘bad news sells’ may be somewhat outdated: today, it is strong opinions that sell.

Certainly, such opinion columnists seem to have proliferated in recent years, and many have attained the status of celebrities. There are numerous examples in the British popular press: Tony Parsons, Kelvin Mackenzie, Katie Hopkins, Piers Morgan, Rod Liddle, Richard Littlejohn, Melanie Phillips… All of them, without exception, are virulently right-wing. We might like to think that liberal-minded, ‘quality’ newspapers don’t do this sort of thing – but of course they do. In the newspapers I read, I am irritated on a weekly basis by the insistently strong opinions of the likes of Nick Cohen, Suzanne Moore, Barbara Ellen, Polly Toynbee, and several others.


These writers are members of what is sometimes called ‘the commentariat’. They are hired to generate content, typically in the form of several hundred words of opinions every week – and the stronger their opinions are, it would seem, the better. There is little space here for balance, nuance or speculation. In general, such columns are not based on journalistic research of any kind; and their writers are mostly not experts or specialists in any particular field. They are simply writers with strong personal opinions, who are considered qualified to comment on just about any issue that catches their attention. They are aptly parodied in the satirical magazine Private Eye, with its regular columnists Phil Space and Philippa Column.

There is a kind of continuum between the 500-word opinionated rants of these newspaper writers and the 140-character bursts of vitriol that can be found on Twitter. Indeed, many of these writers also broadcast their opinions on Twitter, thereby driving readers to their columns online and simultaneously generating advertising revenue. What’s required here is clickbait: sensational assertions are much more effective than accuracy or thoughtful discussion. Once readers arrive at the newspaper sites, they are further encouraged to comment ‘below the line’, adding to what often seems like a torrent of opinion – and which often seems to fan the flames of debate even further. While columnists themselves are sometimes encouraged to respond, they rarely do so.


Advocates of journalism have traditionally pointed to its role as a ‘fourth estate’ – an independent forum that holds the powerful to account. Others have argued that the press plays a major role in the modern ‘public sphere’, in which citizens come together to engage in rational debate about the issues of the day. For some enthusiasts, the internet is now the ‘networked fourth estate’, or a ‘virtual public sphere’, in which democratic political communication will thrive. Reading below the line on newspaper websites, or following some of the Twitter feeds of noted columnists, it is very hard to share such optimism.

Of course, there’s an irony here. I am blogging about this issue, and the blog is perhaps the ultimate medium of the highly opinionated. As a kind-of academic, I like to believe that this blog is a little more considered than some. But perhaps not. As the saying goes, opinions are like arseholes: everyone’s got one, and most of them stink.


Should we ban advertising to children?

A proposal currently under consideration by the European Parliament would result in a complete ban on advertising and ‘commercial communications’ targeting children. But what is the logic for such a ban – and what unintended consequences might it have, especially for children’s television?

The Audiovisual Media Services Directive, which is the key legislative instrument of European media and communications policy, is currently being revised to take account of changes in the wider media landscape. A new draft was accepted by the Commission last May, and amendments will be considered by the Culture Committee of the European Parliament in March, before returning to the full session.


Amendment 144 is very broad in scope, but among other things, it proposes that ‘audiovisual commercial communications, during and between programmes, targeting children’s audience shall be prohibited’ [sic]. If passed in this form, the amendment will result not just in a ban on advertising to children (presumably aged under 16), but also on licensing, merchandising, sponsorship and other forms of marketing. Irrespective of Brexit, it will have significant consequences for the UK media industries.

Calls to ban advertising to children have been growing in recent years, not least in the wake of concerns about childhood obesity, which I have considered elsewhere on this blog. Yet while a ban on ‘commercial communications’ might seem to offer valuable protection to children, it is worth unpicking some of the arguments here, and considering some potential unintended consequences.

One of the problems with the debate about children and advertising is that it tends to conflate very different issues. It isn’t always clear whether the concern is about advertisers selling ‘bad’ things (junk food, for example); about the attitudes or values they may invoke in doing so (such as gender stereotypes); or about the very idea of selling things to children in the first place. In the last case, the concern typically feeds into a broader narrative about the ‘commercialisation of childhood’, and fears of a more general cultural decline.

The call to ban advertising to children presses some powerful emotional buttons, and has appeal right across the political spectrum. For many anti-capitalist campaigners, advertising is an instrument of mass deception: it is how capitalism does its evil work. For many on the political right, it is a symbol of everything that’s wrong with the modern world. Needless to say, both sides of the argument would be significantly weaker if they focused on adults: if we can claim that advertising (and the media more generally) are destroying childhood, we have framed the debate in a way that neatly undermines potential criticism.


Yet the debate raises some complex questions about our conceptions both of childhood and of ‘commercial communications’, and of the relationships between them. We need to consider some fundamental issues here.

Firstly, why does the debate focus almost exclusively on children? And which children are we talking about here? Children are often assumed to be especially gullible when it comes to advertising, but the evidence here is very mixed. While some researchers argue that children cannot fully understand advertising before the age of twelve, others suggest that they are aware of its commercial intentions (the fact that ads are intending to sell you things) from around the age of six or seven.

Of course, we might well ask whether understanding advertising makes any difference: we can assume that adults understand these commercial intentions, but that doesn’t mean they are not influenced. Even so, it does suggest that lumping everyone under 16 together as ‘minors’ – to use the European Commission’s terminology – might be rather misguided.

Secondly, are we talking simply about advertising, or about ‘commercial communications’ more broadly? Advertising is merely part of a much wider world of commercial promotion and marketing. It makes up a declining proportion of promotional budgets, as companies increasingly use other means to get their messages across. Some of these means are fairly obvious (such as sponsorship) but others are much less overt. This is especially true of digital media like Google and Facebook, where commercial intentions may be much less apparent to users. If we consider the wider context of children’s media (the world of Harry Potter, Pokemon, Lego or Disney), the line between content, advertising and merchandising is very hard to draw: everything is effectively ‘advertising’ everything else.

When we conducted our government review of these issues some years ago, I argued that commercial forces are acting on children’s lives in all sorts of ways. The marketisation and privatisation of public services in areas such as welfare provision, education and play has had significant implications for the opportunities that are available to children. Much of the evidence here suggests that these things exacerbate inequalities. Yet for some strange reason, government officials seemed reluctant for us to consider these issues…

Thirdly, we need to consider the evidence about the effects of commercial communications. Child obesity, for example, is a complex phenomenon with multiple causes: the reasons why people are obese are partly genetic, partly to do with how much they exercise, and partly to do with diet. What children eat is partly influenced by advertising, but research suggests that advertising plays a relatively insignificant role. In fact, one of the biggest factors determining child obesity is poverty: poorer children are much more likely to be obese than more affluent children.


For all their apparent concern, governments seem largely unable or unwilling to do anything about these other factors. Selling off school playing fields and privatizing leisure facilities – as UK governments have done over the past two decades – obviously reduces children’s opportunities to get exercise. Meanwhile, child poverty is growing at an alarming rate. In this context, blaming advertising provides a nice means of distracting attention from the more inconvenient issues at stake. This kind of scapegoating is apparent in many other areas of debate about children and media, not least in relation to the perennial topics of sex and violence.

Finally, we need to consider the unintended consequences of any ban on advertising to children. Of course, it would be possible to institute a ban, at least in principle. Sweden is one of very few countries that have done so, albeit only for children under twelve, and only on terrestrial TV channels. In the UK, there has been a ban in force since 2006 on advertising for foods high in fat, salt and sugar (HFSS) during programmes where a high proportion of children are watching.

The direct effects of these moves have been limited, however. The UK ban has reduced the amount of food ads children are exposed to, but children still see a lot of advertising during adult programmes, and especially through online and social media. The Advertising Standards Authority has recently introduced new guidelines on these latter areas, although a full-on ban seems unlikely. Meanwhile, children are surrounded by a whole range of other forms of food marketing and promotion (think about who sponsored the Olympics, for example).

More importantly, there is no evidence that banning such advertising has reduced levels of child obesity. Such evidence would be very difficult to establish in any case, but the latest statistics seem to show that obesity is rising, after having flat-lined or slightly fallen over the past decade.

However, the ban on advertising has had some very significant consequences for the production of children’s television. The problems here began with the 2003 Communications Act, which downgraded the public service quota requirements for children’s TV, meaning that commercial terrestrial broadcasters were no longer required to produce a fixed number of hours of original programming. Children’s television is expensive to produce and not especially lucrative in terms of generating advertising revenue: it is much cheaper to buy in programmes from overseas (such as US cartoons), and more profitable simply to target adults. In research conducted in the mid/late 1990s, we found that ITV – the UK’s leading commercial terrestrial broadcaster – was seriously considering abandoning children’s television production. The advent of the HFSS ban in 2006, in combination with the 2003 Act, effectively gave them the alibi to justify this.

Recent research by PACT and others clearly demonstrates the consequences of this. The production of UK-originated children’s programming is now in steep decline: over the past decade, spending by the commercial terrestrial channels has fallen by 95%. The BBC is now effectively the only player in this market, producing no less than 97% of UK children’s programmes – which is the very opposite of the plurality the government claims is necessary for a healthy market. Without competition, and under pressures of its own, the BBC’s expenditure on children’s programmes is now also falling. Groups within the industry have recently begun a campaign against these developments, arguing that intervention is needed to save children’s content.


Horrid Henry, one of very few remaining ITV original commissions

Of course, there is no convincing evidence that children actually need good-quality television, any more than there is evidence that they need good-quality books. It is impossible to prove that they would somehow be harmed if their TV diet consisted entirely of wall-to-wall US cartoons or sitcoms. To argue that British children are entitled to good-quality, home-produced programmes that represent aspects of their own lives is ultimately a normative judgment about the kind of culture we want to have.

So, should we ban advertising to children? In the present situation, I don’t believe this is either necessary or realistic. The simple fact is that, outside the BBC, it is advertising that pays for children’s content. While we might fondly wish for children’s television to be publicly funded, this is most unlikely to happen: at a time when governments are privatizing the health service and the education system, it is implausible that they will give more money to public broadcasting.

If a European ban extends to licensing – for example, to programmes that feature commercial merchandise such as toys – it will completely undermine the basic business model of children’s television (including, one might add, the funding of a great many BBC productions, which rely on merchandising income as well). These programmes will continue to need buyers in European markets, even after Brexit. Ultimately, I would prefer us to have commercial television for children than very little children’s television at all.

There are good reasons to be concerned about children’s exposure to ‘commercial communication’, but a ban of this kind is merely a form of symbolic politics. It allows politicians to look as though they are doing something, while actually ignoring the real issues at stake. My alternative – which will come as no surprise to readers of this blog – is that we should enable children to deal with the challenges of a commercial world, rather than vainly trying to keep them away from it. For better or worse, advertising is a fact of life: we live in a commercial world, and children (like adults) need the means to understand that, to critically analyse it, and to make their own choices.


Some of these ideas were presented at a ‘citizens’ jury’ event at Warwick University. Thanks to the participants for their observations, especially Rosina Robson of PACT, and to David Rowthorn for the invitation.

Can we still teach about media bias in the post-truth age?

Bias – along with related ideas like objectivity, impartiality and balance – is a staple issue in public debates about media, and in media literacy education. Yet in the wake of the Brexit referendum campaign, the victory of Donald Trump, and the attacks on the Labour Party leader Jeremy Corbyn, many have argued that we are entering a ‘post-truth’ era. In this context, is bias still a useful and meaningful concept in media literacy education? And if so, how should we teach it?

Two weeks ago, the BBC Trust – the body that regulates the UK’s national public service broadcaster – ruled that its chief political correspondent, Laura Kuenssberg, had breached impartiality and accuracy guidelines in her reporting of a story involving the British Labour Party leader, Jeremy Corbyn. Given that the Trust is not exactly the most ferocious of media regulators, the ruling might seem rather surprising; although in light of Kuenssberg’s persistent misrepresentations of Corbyn’s views, it was by no means undeserved.

In my previous post, I wrote about the phenomenon of ‘fake news’, which has continued to generate headlines. On Monday, an official UK parliamentary inquiry into the topic was announced. Yet, as I argued, the focus on fake news rather leaves aside the question of ‘non-fake’ news – the news reporting provided by professional journalists. Clearly, it is not going to be possible – even for Mark Zuckerberg and Larry Page – to eradicate ‘fake news’; but even if it were, it would be a mistake to believe that this would lead to a new era of reliable, trustworthy news reporting. Indeed, in some respects, the phenomenon of ‘fake news’ and claims about ‘alternative facts’ places an even greater responsibility on professional journalists.


Famously, ‘post-truth’ was Oxford Dictionaries’ word of the year for 2016. The editors defined it as ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’. As this implies, the term post-truth represents a critical, even sarcastic, claim about the world, rather than a celebratory one. It describes a situation in which politicians in particular have not only made false promises or tried to manipulate public debate, but actually told outright lies, and managed to do so with impunity. While this is most apparent with Trump – as in his claims about climate change or voter fraud, or about Obama’s place of birth – there have been similar instances in the Brexit campaign, and in the challenges to Corbyn’s leadership.

One leading UK figure in this respect – and one of the first British politicians to meet with Trump after his election – is the former Education and Justice Secretary, Michael Gove. Gove was party to the false claim made during the Brexit campaign that Britain was paying £350 million a week to the EU, and that after Brexit this money would be diverted into the National Health Service. When challenged by leading economists, Gove claimed that ‘people in this country have had enough of experts’ – as though established sources of knowledge carried no weight against the superior wisdom of ‘the people’.


It may not come as a surprise to readers of this blog if I admit that I voted for Jeremy Corbyn, and that I remain a (somewhat ambivalent) supporter. Yet in claiming that Corbyn has been subject to a campaign of biased media coverage from the outset – not just in the right-wing press, but also in more ‘liberal’ news outlets such as the Guardian and the BBC – I am not simply giving voice to my own bias. The BBC Trust ruling confirms what several people have been arguing for some time about the BBC’s coverage, and specifically about Laura Kuenssberg.

About six months ago, the London School of Economics published a detailed research report that substantiated such claims, at least as regards the newspapers. Of course, there are bound to be questions raised about such research. The researchers used content analysis to calculate the proportions of positive and negative stories. Yet what counts as positive or negative obviously depends upon the reader’s existing views; and enumerating negative comments does not in itself tell us anything about their overall meaning, let alone their effects.
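To make the method concrete: content analysis of this kind typically involves human coders assigning each story a category, after which the proportions of each category are calculated. The following is a purely illustrative sketch – the labels and coded stories here are invented, and the LSE study’s actual coding scheme was far more elaborate:

```python
from collections import Counter

# Hypothetical coder judgments: one label per news story about the same subject
coded_stories = [
    "negative", "negative", "neutral", "positive",
    "negative", "neutral", "negative", "negative",
]

# Tally the labels and convert counts into proportions of the total sample
counts = Counter(coded_stories)
total = len(coded_stories)
proportions = {label: count / total for label, count in counts.items()}

for label in ("positive", "neutral", "negative"):
    print(label, proportions.get(label, 0))
```

As the post goes on to note, the numbers such a tally produces depend entirely on the coders’ judgments of what counts as ‘positive’ or ‘negative’ in the first place.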


Counting instances of bias is also much easier in situations where there is a straightforward binary choice – are we for or against Corbyn, or Brexit? – but it is obviously much harder when it comes to more complex issues. Even such seemingly simple choices play out against a background of wider social and political issues: so how we understand Corbyn’s line on nuclear missiles, for example, calls into question how we understand Britain’s place and status in the world.

Even so, it is impossible to deny that newspapers like the Sun, the Daily Express and the Daily Mail have systematically ridiculed and vilified Corbyn. In some instances, this has involved manipulation and misrepresentation of a kind that definitely amounts to ‘fake news’: the story about Corbyn apparently ‘dancing a jig’ while on his way to a memorial service at the Cenotaph in London comes to mind. ‘Quality’ newspapers like the Telegraph or the Times may be less crude and blatant in their approach; but apparently neutral, factual reports in such papers are often designed to press particular readers’ buttons.


As the media academic Robert Entman has argued, bias is ‘a curiously undertheorized staple of public discourse about the media’. Like ‘stereotype’, it is a commonsense concept that frequently recurs in everyday discussion. We might wish to replace it with something more academically ‘theorized’, or at least more precise; while some might argue that it is simply old-fashioned and irrelevant in a ‘post-truth’ era. Yet the idea is unlikely to disappear. Accusations of bias remain a persistent charge in debates about news media, and they have recurred throughout the debate about ‘fake news’. Ultimately, bias remains a useful umbrella term, although it is one that teachers and students need to work with and interrogate.

So what might we teach about media bias? Firstly, like other key concepts in media theory, we need to question and problematise the idea. Alongside bias, there is a cluster of terms that are frequently used interchangeably, such as objectivity, impartiality, fairness and balance. Yet while these terms may overlap, they do not mean exactly the same thing. Carefully separating these out, and looking at instances of each, would seem to be a necessary first step. There are several taxonomies of types of media bias that might be useful here – although again, they need to be used carefully and critically.

At the same time, we need to recognize the inevitability of bias. What we perceive as bias in the first place obviously depends upon our own biases – our own prejudices, assumptions or preconceptions. In the age of the ‘filter bubble’, there is a risk that this is becoming easier to ignore. We are increasingly able to select and customise our media environment in ways that are likely to confirm those biases. Encouraging students to reflect on their own media practices in this respect, and the sources of their own information and judgments, would help to illuminate the complexities at stake.

Secondly, we need to refine the idea of bias in order to apply it to a wider range of situations. Bias may be evident not just in overt commentary, but also (and perhaps more powerfully) in what a given report chooses to focus on and to ignore. It may be apparent in the connections that are made (explicitly or implicitly) across news stories. There is bound to be bias in how the news agenda is set – in what is seen to count as news in the first place – and in how particular topics are framed – that is, in what is included or excluded from consideration, or seen to be relevant or irrelevant. These two ideas – agenda-setting and framing – have generated a considerable amount of research. One could well argue that these less overt forms of bias are actually more influential, because they are harder for readers to notice, and hence to resist.

Thirdly, there is the issue of how we understand the institutional causes of bias. In some instances, bias may be a result of the direct interference of media owners or proprietors – most notoriously, of course, Rupert Murdoch. Yet this kind of institutional bias is often less direct: it is about the ‘culture’ of a news organisation, and how its staff are encouraged to behave. It may also reflect what the owners and the employees believe about the relationship between the institution and its audience. Journalists often argue that the bias of their own reporting simply reflects that of their readers. In this sense, there can be an economic motivation for bias: people will not pay to consume things that conflict with their own biases. Bias sells.

Institutional connections across and between media are also important here. Bias in one medium may well be reinforced by other media: even if specific positions vary, the overall agenda that is set is frequently shared, as news organisations compete with each other. However, it is important to beware of a monolithic view of media – epitomized by the blanket term ‘mainstream media’ (MSM). This term is becoming increasingly fashionable, not only on the political left (among supporters of Corbyn, for example), but also on the right: Donald Trump’s strategist Steve Bannon has apparently referred to the ‘mainstream media’ as the new administration’s ‘opposition party’.


This idea seems to rest on a very old-fashioned, conspiratorial account of the media simply brainwashing the masses. It assumes that ‘The Media’ are all the same; and it also implies that something preferable (the truth, perhaps) is only to be found outside the mainstream. Some on the Left – including some of Jeremy Corbyn’s media strategists – look to social media as a more effective vehicle for progressive ideas, ignoring the fact that the political Right (and not least Trump) have used such media with considerable success. Meanwhile, there’s a new irony as some are now looking to the same mainstream media to correct the excesses (fake news, hate speech) of social media.

I believe we can and should still teach about media bias. The changing political and media context – the so-called ‘post-truth’ age – makes this more complex and problematic, but it also makes it more necessary. However, it is important to avoid simply blaming the media, or overestimating their power. Aside from anything else, this can lead to a situation where the mediation of politics comes to be seen as more important than politics itself. In the process, the apparent lack of media appeal of particular politicians (such as Corbyn) is seen to define their electability – as if getting the packaging and presentation right is the fundamental issue. Ultimately, such arguments reinforce a generalized distrust and cynicism that is increasingly shared across the political spectrum. It’s not something I would regard as a good outcome for media literacy education.




Fake news: is media literacy the answer?

Growing concerns about ‘fake news’ have led to calls for young people to be taught critical media literacy skills. Yet while media literacy would obviously be useful, it isn’t enough to address the problem. Media educators need to frame the issue more broadly, and join forces with those calling for media reform.

My apologies for the length of this post (believe me, I’m trying). I hope it won’t take you more than ten minutes to read.



In the wake of the election of Donald Trump, there has been considerable debate about the problem of so-called ‘fake news’. Trump’s opponents have accused his supporters – including the Russian government – of circulating fabricated news stories in order to gain support. Yet Trump himself has frequently used the term to discredit what he claims is false information about him.

The problem is somewhat less evident here in the UK, although the Brexit campaign and the attempts to unseat Jeremy Corbyn as Labour leader have raised similar issues about media bias and misrepresentation. There may have been fewer outright lies, but there have been plenty of examples of the media – and, of course, of politicians themselves – manipulating the truth.

Back in 1710, the satirist Jonathan Swift noted that ‘falsehood flies, and the truth comes limping after it.’ ‘Fake news’ has always been with us. Yet developments in technology, and in the wider media landscape, are undoubtedly making it a more urgent problem.

The dangers of fake news are fairly self-evident. The democratic political process depends upon the circulation of reliable information. If information can no longer be trusted, citizens have little basis on which to make political decisions. As such, there is now high-level concern about fake news among governments around the world. Barack Obama has gone so far as to describe it as a threat to democracy.

In the US in particular, there has been an interesting debate about the potential of media literacy education in this respect. Some difficult issues are at stake here – about trust and credibility, about epistemology (what’s fake? what’s truth?), about the role of educators, and about public knowledge more broadly.

These are not new issues for media educators. Media Studies teaching (at least in the UK) has always addressed questions about news bias and representation – to the point where it almost seems like an old-fashioned concern. The advent of the internet raised new questions about credibility, and about the need for information literacy: yet these have also been on the agenda for educators for many years.

Nor are these issues confined to new media. They also apply to ‘old’ media, and to the behaviour of politicians themselves. Politicians have always made false claims, and indeed told blatant lies – although the success of people like Donald Trump, Nigel Farage and Boris Johnson would suggest that they are increasingly able to get away with it.

Yet while these may be well-established phenomena, they are taking on a new form in the digital age. Social media has made it much easier for lies and misinformation to be circulated from person to person, bypassing the gatekeepers and regulators who controlled old media. So what is the role of media literacy education in what some are calling this ‘post-truth’ era?


What is fake news?

Most simply, fake news is news that is fabricated, and deliberately intended to mislead or deceive. As such, it’s important to distinguish it from satirical parodies of news (sites such as The Onion in the US) – although some readers may not always appreciate this distinction. Fake news typically appears on sites that masquerade as genuine news sites, although it is often picked up and re-circulated by mainstream media.

Fake news often has a political dimension: it is intended as a form of misinformation or propaganda that is designed to exert political influence. It may even constitute a form of ‘cyberwarfare’ between nations (although again, the history of the Cold War suggests that such activities are far from new).

However, in some cases, it may have a primarily economic motivation. Fake news often functions as ‘clickbait’, which will generate revenue through advertising and the selling of user data. Networking services – and in particular Facebook – will do most of this work for you: their entire business model depends upon it. As such, the fake news phenomenon needs to be understood in relation to the wider political economy of the internet.

These political and economic motivations may also be blurred. Although the Russian government may have been involved, much of the pro-Trump fake news generated during the 2016 election campaign apparently came from a cluster of sites run by teenagers in the Macedonian town of Veles. These entrepreneurial young people claimed that they were making easy money simply by giving Trump supporters what they wanted to hear.

There are now so many examples of fake news stories – Wikipedia has an interesting list – that it is bound to become harder to identify them, or to tell fake from true. Of course, many are blatantly and obviously absurd. Perhaps my favourite example was the spoof story about North Korean leader Kim Jong Un being voted the sexiest man alive. The story was posted on the US satirical site The Onion in 2012, but it was taken up by China’s official newspaper, the People’s Daily, which featured a 55-page photo spread on its website.


However, there are limits to such absurdity. During the 2016 Presidential campaign, the so-called ‘Pizzagate’ story made ludicrous claims that Hillary Clinton was involved in a paedophile sex trafficking ring run out of a Washington restaurant. Yet the laughter turned sour when a Trump supporter turned up at the pizza parlour firing an automatic weapon.

While there is little doubt that fake news was used extensively in support of Trump’s campaign, and is particularly driven by the political right, the charge runs across the political spectrum. Return of Kings is just one extreme right-wing site that identifies ‘fake news’ purportedly circulated by so-called liberal media. Trump’s latest claim that the US Intelligence Services have been circulating fake news about his exploits in Russian hotel rooms is a further example. It may have already happened, but in the near future we are bound to see fake news stories about fake news.


In some ways, the debate about fake news can be seen as a further deflation of the bubble of internet hype – especially the idea that networked technology would lead to a flowering of civic participation and democratic engagement. While some of us were sceptical about such claims from the outset, others – including early technology enthusiasts such as Timothy Wu and Wired magazine – have taken a little longer to get there.

While such technology might well be a great resource for progressive political activists, our research showed that it is also a valuable tool for anti-democratic forces, including the resurgent extreme right and those who peddle racism and other forms of abuse. The so-called ‘alt-right’ can also play the game of creating memes, optimizing search engines, trolling and tagging and going viral. Indeed, present evidence would suggest that they are much better at these things than those on the political left.


Technical solutions

Potentially, there are various ways of dealing with the problem of fake news. As with other problematic aspects of online content, such as pornography, some have called for a system of labelling. Sites might be encouraged – or even required – to obtain some kind of official certificate of approval from fact-checkers. ‘White lists’ of trustworthy sites might be established, or repeat offenders warned and then taken down by internet service providers.


Such responses would clearly require collaboration on the part of the technology companies. Facebook’s Mark Zuckerberg initially denied that his company had anything to do with the spread of fake news, although he has subsequently backtracked a little. Facebook has undertaken to block advertising by self-evidently fraudulent sites, and Google has claimed it will do this via its AdSense service. Facebook has also partnered with fact-checking websites on an initiative that will tag articles whose veracity is ‘disputed’.

However, any further steps along these lines seem unlikely. Such actions would imply that these companies are not merely technology companies, but media companies – and this is something they are very keen to avoid. Requiring them to take responsibility for content would completely undermine their basic economic model, which is premised on the claim that they are simply technological services that users are free to use in any way they choose.


Media literacy

With governments unlikely or unwilling to challenge this free-market argument, commentators often look to media literacy education as some kind of alternative solution. For example, when a recent study by researchers at Stanford University found that most young people were unable to distinguish between real and fake news, there were calls for them to be taught ‘internet literacy’. (It’s not clear why this argument applied only to young people, but we can let that pass…). The influential website Vice, for example, ran the headline: ‘we need to teach kids how to be skeptical of the internet’.

Most media literacy educators are likely to respond to this with a degree of weariness. Duh! Isn’t that what we’ve been trying to do for decades – despite the fact that we have been consistently marginalized within the mainstream curriculum?

Yet there is a broader problem here. Media literacy is often invoked in a spirit of ‘solutionism’. When media regulation seems impossible, media literacy is often seen as the acceptable answer – and indeed a magical panacea – for all media-related social and psychological ills. Are you worried about violence, sexualisation, obesity, drugs, consumerism? Media literacy is the answer! Let the teachers deal with it!

This argument clearly frames media literacy as a protectionist enterprise, a kind of prophylactic. It oversimplifies the problems it purports to address, overstates the influence of media on young people, and underestimates the complexity of media education. Thus, violence in society is not simply caused by media violence, and it will not be reduced by simply telling kids that movies are teaching them the wrong lessons – or indeed by stopping them watching TV, as some propose.

Similar arguments apply to fake news. Fake news is a symptom of much broader tendencies in the worlds of politics and media. People (and not just children) may be inclined to believe it for quite complex reasons. And we can’t stop them believing it just by encouraging them to check the facts, or think rationally about the issues.

Of course, this is not to say we shouldn’t try. The US media educator Frank Baker has assiduously gathered a set of checklists and recommendations for educators trying to tackle fake news in the classroom. Students are encouraged to cross-check online information, to verify and compare sources, to analyse the design and construction of sites, to check the provenance of the material, and to think about the producers’ motivations. There are several lesson plans and videos on Frank’s site, many of them undoubtedly useful.

However, there has also been some debate about the role of media literacy here. Microsoft’s Danah Boyd has claimed that, far from being any kind of solution, media literacy might actually be the cause of the fake news problem – a strange and muddled argument that has understandably exasperated practitioners.

However, there have been some other contributions that point to the need for a broader view. They argue that fake news is merely a manifestation of much broader problems that apply to ‘real’ news as well; and that while media literacy is obviously important to pursue, it might not prove to be a sufficient solution. So what are the problems here?


Problems with media literacy

Firstly, there are some significant pedagogical problems in how we might deal with fake news in the classroom. Assessing the reliability and credibility of sources is arguably much more difficult with online media than used to be the case with ‘old’ media, although this is something that experienced media educators can probably handle. However, in this case, judging truth and falsehood must depend to some extent on knowing about content as well as form – understanding the topics at hand, rather than just how they are presented. This is particularly hard in a context where most of one’s students are unlikely to be interested in political or social issues, let alone knowledgeable about them (although again, this applies to adults just as much as young people).

Even if these critical skills can be cultivated, there is a further question about how far they will actually be applied outside the classroom. How many people are willing to routinely evaluate the reliability of online sources, or to cross-check information – especially in an age when we have become used to instant access to information? Personally, I am sorry to say that I rarely do this, and I doubt that I could persuade an average sixteen-year-old student to do so either.

Beyond this, there is the problem of epistemology. You don’t have to be a complete relativist to acknowledge that a given ‘fact’ can be interpreted in many different ways by different people in different contexts. There are some absolute truths and some absolute falsehoods, but between them lies a very large grey area.

And interpretation is a complicated business. As Maha Bali has pointed out, ‘real’ news often requires a great deal of critical and emotional energy to deal with. As such, discerning true from false – or, rather, identifying and coming to terms with the elements of truth and falsehood in most representations of the world – may only be the beginning of a much more complex educational journey.

There’s a danger here of assuming that we are dealing with a rational process – or at least one that can, by some pedagogical means, be made rational. But from an educational perspective, we surely have to begin with the question of why people might believe apparently ‘fake’ news in the first place. By no means all media use is rational. Where we decide to place our trust is as much to do with fantasy, emotion and desire, as with rational calculation. All of us are inclined to believe what we want to believe.

This is arguably much more complex at a time when we can exercise much greater control over the media and sources to which we are exposed. In terms of digital media, this has led to growing concerns about the ‘filter bubble’, or the ‘echo chamber effect’. We can easily filter out things we dislike or do not agree with, and thereby remain in a comfortable world where everything appears to confirm our existing world-view. Research suggests that people positively want to remain in such filter bubbles – and, more generally, that news that plays to already-established positions or prejudices is much more inclined to be ‘liked’ (and hence to generate more income for social media companies). While this is partly a consequence of the proliferation and fragmentation of media, it is also a symptom of growing political polarization, and of diminishing trust in authority much more broadly.

False beliefs may derive from exposure to fake news, or be simply reconfirmed by it. But either way, it seems unlikely that they will be easily dispelled by a good dose of facts, or the force of rational argument. The history of media education certainly tells us this. Racism, for example, is unlikely to be dispelled by appeals to rationality, or by critically analyzing racism in the media. Indeed, it often proves extremely resistant to such teacherly strategies.

There is a further danger here, which media literacy education can easily play into. A critical perspective can easily slide into a generalized, superficial cynicism – a blanket distrust of everything and everyone, and especially of the media. This might appear nihilistic, but it is also quite a comforting position to adopt; and it is a stance that unites conspiracy theorists of all political persuasions.

Ultimately, media literacy is an individualistic solution. Policy makers accept, however regretfully, that the media are not doing a good job in informing citizens and promoting democracy, and that regulating them is impossible. And so they pass responsibility down to the individual consumer: it’s their problem to sort it out.

I have argued elsewhere that this approach informs many governmental endorsements of media literacy, and the work of regulatory bodies: it is key to understanding the history of media literacy policy in the UK, for example. While it appears to be about empowering consumers, it effectively absolves governments of responsibility for addressing problems that arise in a media landscape that is increasingly driven by the imperatives of the free market.

I am not arguing that media literacy is not the answer – or at least an answer – but simply that it is not enough on its own. Media literacy needs to be linked to wider campaigns for media reform. This is partly about professional practices. As Charlie Beckett has argued, there is now a growing need for professional journalists to be more sceptical, and more explicit, in their use of evidence, and more transparent themselves. Especially in the era of President Trump, mainstream journalism has a much greater responsibility to perform its traditional role of ‘speaking truth to power’.

However, there is a broader challenge posed by the ‘non-professional’ journalism of social media. As Evgeny Morozov argues, fake news ultimately exists because of the business model of what he calls ‘digital capitalism’: it exists because it’s profitable. If we want to challenge fake news, we have to challenge the enormous power of digital advertising and the global companies that thrive on it.

Like Paul Mason, Morozov calls for better government regulation, and ultimately for the break-up of the big data companies. This might seem an unlikely outcome. But surely understanding this bigger picture is what critical media literacy education should be all about…



The end of innocence?

Channel 4’s new drama National Treasure has brought the issue of celebrity paedophiles back to public attention. What does the most notorious and well-documented of these cases – that of Jimmy Savile – tell us about the role of media celebrity and children’s culture?

When it comes to holiday reading, most sensible, well-adjusted people pick up a pile of three-for-the-price-of-two novels at the airport bookshop. For better or worse, I tend to take things that will give my brain something to chew on. And so it was that I found myself in some beautiful locations in Southern Italy this summer reading all 600 pages of In Plain Sight: The Life and Lies of Jimmy Savile by Dan Davies.


While it’s eminently readable, this is not a book you want to escape into – more one you want to escape from. I felt almost soiled while reading it, and I kept hoping that its subject would die prematurely and put me out of my misery. It’s clear that the book’s author also hated Savile, despite his extensive contact with him, which makes the book interestingly different from a great many biographies.

For non-UK readers of this blog, I should explain that Jimmy Savile was a very famous British DJ and children’s TV presenter. He is one of a series of celebrity paedophiles who have been exposed retrospectively – and in Savile’s case, not until after his death in 2011. Others include the once much-loved children’s TV presenter Rolf Harris, along with several other DJs, TV presenters and pop music personalities who came to fame in the 1960s and 1970s. There have also been several wrongful accusations, although it remains clear that the pop culture of the time was by no means as innocent as some of us might nostalgically wish to imagine.

The term paedophile literally means ‘lover of children’. As such, it’s not clear quite how well this would apply to Savile: he pursued and molested countless teenage girls, but Davies’ biography shows that he positively hated children.

The causes of paedophilia are far from clear. Most public discussion of the issue (not least in relation to children and social media) defines paedophiles as simply ‘evil’ or ‘monstrous’ – a summary judgment that seems quite unhelpful, not least in terms of identifying how we might prevent or treat such behaviour. In Plain Sight offers little insight in this respect. The ‘real’ Jimmy Savile remains obscure throughout; and Davies admits that he never really gets behind his ‘wacky’ public persona. Indeed, it would seem that Savile’s inscrutability was critical: as the title of the book suggests, he was never publicly exposed in his lifetime, even though he ran astonishing risks.

How might we understand the role of media in this phenomenon? In light of my interest in children’s media – and specifically in children’s television – I might have been inclined to read the Savile case almost metaphorically, as a kind of challenge to the sentimental nostalgia that tends to infect discussions of the history of children’s media. Here, one might think, is the grim underside of the fake, manufactured innocence of so much adult-produced media for children.


This isn’t an unfamiliar argument. It colours almost everything we might say today about Lewis Carroll, for example, or Shirley Temple or Michael Jackson. Writers like Ann Higonnet and James Kincaid have traced the history of images of children being used as objects of erotic contemplation by adults. Indeed, paedophile desire would seem to depend precisely on the notion of innocence that it seeks to defile.

The cases of Savile and Harris are part of a broader history of children’s media – perhaps especially in the BBC, although not only there. One of the producers of Doctor Who, John Nathan-Turner, appears to have been a prodigious child molester; and, even more disturbingly, so were some of the ‘uncles’ who hosted the very earliest children’s radio and TV programmes of the 1940s and 1950s. In these contexts, paedophile activity was not just present, but apparently rife, and perhaps even tolerated by a management that preferred to turn a blind eye. It’s certainly very hard to believe the protestations of some of Savile’s managers in the BBC that they were unaware of his activities, or at least not suspicious about his ‘creepy’ behaviour.

This is a fairly repulsive history, but generalizing it – and suggesting that it tells us something broader about relations between adults and children’s culture – could run the risk of over-reading it. Rather, as I will argue later, Savile’s case might actually tell us more about the nature of celebrity, and what can happen when people with enormous power find themselves with an almost infinite opportunity for abuse.

An alternative, perhaps quite opposing, view would be to see the whole thing as some kind of media-driven moral crusade. This is the line taken by the polemicist Frank Furedi. A former Marxist turned libertarian, Furedi is the leading guru of the so-called Institute of Ideas, whose predictable liberal-baiting has a small but very vociferous cult following. (The prominence of the IoI is a revealing symptom of the culture of opinion in contemporary media – a topic to which I shall return…).

Furedi’s booklet, published at the height of the Savile scandal in 2013, challenges what he calls ‘the hysterical climate of child-abuse obsession’. (There’s a summary of the key points on his website.) Furedi makes some interesting points about the deeper anxieties that are being played out here – not least the way the scandal provides a means for an ongoing rewriting of history, and of the legacy of the 1960s and 1970s. He’s also correct to challenge some of the hyperbolic rhetoric of such moral crusades, and to emphasise the danger of the authorities immediately believing all allegations of this kind. However, Furedi was justifiably reviled by several critics for his apparent lack of interest in the experience of Savile’s victims.

I might be prepared to agree that DJs having sex with fifteen-year-old girls who queue up outside their dressing rooms is not quite the same as abusing toddlers – although a power-relationship obtains in both cases. However, the claim that Savile was a victim of a media witch-hunt is hard to sustain when reading about his abuse of hospital patients and vulnerable teenagers in institutions, or his acts of necrophilia in hospital mortuaries. (Although Furedi’s cohorts would undoubtedly accuse me of moral authoritarianism for saying so…)


Most instances of child abuse are committed by people well known to the victim, and many occur in the context of the family home. But the key point about Savile’s abuse is that much of it happened in institutional settings where he was granted power by virtue of representing those institutions. As I have suggested, it is simply implausible that the people who allowed him that power would not have known about what he was doing, or at least strongly suspected it.

Ultimately, it was Savile’s celebrity that gave him the opportunity to commit his crimes, and made it difficult to challenge him. Yet it was not simply his fame or his status as a ‘national treasure’ that protected him, but his institutional position, and his close relationships with powerful people. One of the most astonishing things about Davies’ book is his recounting of Savile’s relationships with his powerful friends – regularly spending New Year’s Eve with Margaret Thatcher, being Prince Charles’s private mentor, even being ‘knighted’ by the Vatican. It was this that enabled him to operate ‘in plain sight’ – while giving plenty of public indications (both positively and by protesting too much) as to his behind-the-scenes behaviour.


So if the Savile case might not tell us much about childhood or children’s culture, it does seem to reveal a good deal about media, celebrity and power. It’s also interesting to revisit in light of Channel 4’s new drama about historic sexual abuse, National Treasure, which started this week – and which (having done my background reading) I hope to review once it’s completed.



Whatever happened to media literacy?

Media literacy policy seems to have become one of the living dead. It is still contained in statute, but no longer displays any discernible signs of life.


During the noughties, media literacy was one of the flavours of the decade in policy circles. The 2003 Communications Act bestowed on Ofcom – the new converged media regulator – a responsibility for ‘promoting’ media literacy. A similar requirement was contained in the 2006 BBC Charter. High-profile representatives of the media industries were falling over themselves to join the conversation, and many educators were keen to get on board. Yet in 2016, media literacy is barely more than a whisper. So what happened?

Last week saw the publication of the second of two articles co-authored by Richard Wallis of Bournemouth University and myself tracing the rise and fall of media literacy policy in the UK. You can find the latest one here, and the previous one here. The articles are based on Richard’s PhD research, which involved extensive analysis of policy documents as well as interviews with a great many key players. I was Richard’s supervisor, but I had also been very much involved as a participant in many of the discussions and initiatives he studied.

The appearance of media literacy in the Communications Act was quite a surprise for many. In our first article, we traced its rather strange and uneven journey into policy, beginning from a concern about the effects of media violence raised in the wake of the murder of London headteacher Philip Lawrence in 1995. While its origins were thus in very traditional ideas about protecting children from harmful influences, the place of media literacy in the legislation reflected a more modern approach.

Although it was responsible for creating a new ‘super-regulator’, the Communications Act was essentially about deregulation. In various ways, it sought to open up the British media to market forces, and to roll back centralized control. Yet the government was also seemingly aware of the potentially negative consequences of such a move: individuals would need help in handling the new risks and challenges of a more commercially driven, more technologically complex environment. Centralised state regulation was increasingly seen to be impossible; and so individual consumers needed to learn to take care of themselves.

This strategy of ‘responsibilisation’ – making the individual responsible for their own welfare, rather than the state – is increasingly prevalent in many areas of social policy. According to neoliberal doctrine, the market is the most efficient means of addressing the needs of the citizen/consumer, and the ‘burden’ of government regulation has to be avoided as far as possible. ‘Empowering’ the individual to make responsible choices, and to avoid risks, might be seen as a democratic move; although in the process, any wider concept of the social good tends to disappear. To this extent, media literacy can be seen as a classic piece of Blairite cultural policy, of the kind I have discussed elsewhere on this blog.

However, for some Labour politicians (and indeed for many educators) media literacy seemed to imply a broader educational project: it was a matter of developing critical awareness about the media and their role in society. Thus, in 2004, Culture Secretary Tessa Jowell predicted that ‘…in the modern world, media literacy will become as important a skill as maths or science. Decoding our media will become as important to our lives as citizens as understanding literature is to our cultural lives’. Jowell pointed to the need for media literacy to be embedded in the National Curriculum, and identified several spaces in which that might be achieved. For those of us who had been pressing for such things for several decades, this was hair-raising stuff.


Yet by 2009, only five years later, the government’s report Digital Britain dismissed media literacy as ‘a technocratic and specialist term, understood by policy makers but not really part of everyday language’; and it proposed to jettison it in favour of ‘a National Plan for Digital Participation’. Even before the end of the New Labour government, media literacy was already dead in the water; and with the advent of the Coalition, large cuts in Ofcom’s budget significantly reduced the scope of its media literacy work. Its current activities in ‘promoting media literacy’ come down to little more than an occasional newsletter. It still commissions useful research, but this mainly seeks to describe the changing behaviour of consumers in the communications market, with very little attention to people’s understanding of media – let alone any application to education or learning. Media literacy has become what we call an ‘undead’ cultural policy – a policy that remains in force, but rarely shudders into life.


The demise of media literacy can partly be explained in terms of the dynamics of policy-making itself. The term itself was not defined in the Communications Act, and Ofcom was left to determine the focus and scope of its activities in this area. It did this in an open and careful way, but its own remit as a regulator was inevitably narrower than the broader educational approach envisaged by Tessa Jowell and others. Those in the media industries, who tended to attract the most attention from Ofcom and the DCMS, were generally happy with a form of media literacy that involved children imitating the work of professional broadcasters, and with feel-good celebrations of digital creativity, but wanted little more than that.

Like many other policy ideas, media literacy was a solution looking for a problem. And the problems that increasingly presented themselves as the decade progressed were those of online paedophiles and pornography, and the need to encourage uptake of technology among marginalised groups. Over time, media literacy was steadily reduced to what Ofcom’s head of media literacy, Robin Blake, called ‘internet safety and getting grannies online’. Any extended educational conception of media literacy ultimately gave way to a much more reductive, functional notion of technological skill; and to the need to be seen to be ‘doing something’ about online safety.



Ofcom HQ

As such, the abandonment of media literacy might be seen as simply a matter of ‘wrong time, wrong place’: there were too many competing interests at stake in it, and other policy imperatives eventually became more pressing. Yet the most striking reason was the failure – if not outright refusal – of educational policy-makers to engage with the idea. Civil servants in the Department for Children, Schools and Families (as it was then) had other priorities, and were largely indifferent to the demands of the DCMS, a much smaller and less powerful department of government. They were mostly resistant, and in some instances overtly hostile, towards media education, especially in the form of the specialist subject of Media Studies – a hostility that has gathered force in recent years, as I have argued elsewhere. Literacy itself was steadily being redefined during the New Labour years, but in ways that were quite opposed to the notions of critical literacy that are familiar to media educators. There were many places where media literacy could easily have been incorporated into the mainstream curriculum, but it failed to make any headway whatsoever.

Ultimately, the abandonment of media literacy raises the question of how far governments really want citizens to adopt a critical stance towards media, however much they might pay lip-service to the idea from time to time. It might be worth speculating – in the wake of the Leveson Report in particular, but also the increasingly confrontational nature of social media – quite how the zombie policy of media literacy might be prodded back into life.


How does Google make so much money?

Research published last week by the UK media regulator Ofcom found that half of adults (51%) who use search engines are not aware that the top items on many results pages are adverts or sponsored links. While the research suggests that users are increasingly sceptical about the reliability of online information (less than one in five trust results from search engines to be accurate or unbiased), people’s understanding of the commercial dimensions of online search is limited.

What do we need to know about the commercial dimensions of internet use, and particularly of its most basic function, search? How does Google – far and away the market leader – manage to make so much money from what seems to be such a basic, straightforward service?

Google’s recently renamed parent company Alphabet is currently the most valuable company in the world. Alphabet also owns YouTube (bought by Google in 2006) and Android (the market leader in mobile phone software). It has a market value of US$565 billion, marginally ahead of Apple. The company’s quarterly revenue is over 20 billion US dollars, while its operating profit is almost six billion – an extraordinary rate of return compared with many other businesses.

At present, the company is losing money on what market analysts call ‘moonshots’ – speculative ventures such as self-driving cars, ‘smart homes’ and robotics. It is also threatened by the European Commission’s attempts to prosecute the company for favouring its own apps and services on its Android operating system (currently on 80% of smartphones), and thereby driving out the competition.


[Source: thenextweb.com]

While a good slice of Google’s profit comes from advertising on YouTube and games sales in its Google Play store, online search – especially on mobile devices – provides far and away its main source of income. It is the area in which Google massively dominates the market, with around 90% market share. As early as 2006, the verb ‘google’ was added to the Oxford English Dictionary: it seems to have become a generic term for internet search, whichever search engine one actually uses.

Ofcom’s research suggests that few people really understand online advertising. On the face of it, this might seem puzzling. Many internet searches return links at the top of the page that are clearly (though not very prominently) labelled ‘sponsored’. Of course, it’s possible that users don’t understand the term ‘sponsored’, and few are likely to click on the information button that explains it:

Based on your search query, we think you are trying to find a product. Clicking in this box will show you results from providers who can fulfil your request. Google may be compensated [i.e. paid] by some of these providers…

Depending on the nature of your search, you may also find further results that are unambiguously labelled ‘Ad’ with a small orange tab.

Even so, advertising on Google is much less overt than on YouTube, where pop-up ads routinely precede and interrupt the videos. Google’s home page has never contained overt advertising – and in the early days of internet search, this significantly distinguished it from other (now largely deceased) engines such as Yahoo and AltaVista. The plain design of the page, with its large expanse of white, helps to promote its brand image as a free, non-commercial utility.


However, advertising in the form of sponsored results is only part of the broader economics of online search. Much less obvious to ordinary users is the way in which information about them – gleaned from their online search behaviour – is gathered, used and sold. We may know something of this from our everyday experience: for example, if I search on Google for ‘black suede shoes’ (as I have been doing recently), I am likely to be targeted with adverts for such shoes on other platforms I happen to use.

Compared with traditional advertising (for example on TV), this kind of ‘personalised’ advertising is claimed to be more efficient for companies and less annoying for the consumer. In principle, we see the ads that are most relevant to us, without having to suffer ads for things that we are never likely to buy; meanwhile, advertisers don’t have to waste time and money bombarding people with irrelevant marketing. This kind of targeted advertising is obviously much easier with digital technology; but it depends upon advertisers being able to track very closely what we do online – the search terms we use, the sites we visit and the clicks and keystrokes we make.

This is mainly achieved by the use of ‘cookies’ that are permanently planted on our computers and mobiles. This process used to be entirely hidden, and it was not until 2011 that an EU Directive required users to be informed about it and agree to it before they use a particular site. However, other forms of consent – such as our agreement to our personal data being made available to ‘third parties’ (that is, sold to companies) – are mostly hidden in the terms and conditions of service that few users will be bothered to read.
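The mechanics are simple enough to sketch. In technical terms, a site ‘plants’ a cookie by sending the browser a Set-Cookie instruction, and the browser then sends the cookie back with every later request to that site – which is what allows the same user to be recognised across visits. Here is a minimal illustration using Python’s standard library (the identifier and domain are invented for the example):

```python
from http.cookies import SimpleCookie

# The server 'plants' a cookie via a Set-Cookie header.
# A far-future expiry date is what makes it effectively permanent.
server_header = SimpleCookie()
server_header["uid"] = "user-8d41"  # a hypothetical tracking identifier
server_header["uid"]["expires"] = "Fri, 31 Dec 2038 23:59:59 GMT"
server_header["uid"]["domain"] = ".example-adnetwork.com"

print(server_header.output())  # the header the server sends to the browser

# On every later visit, the browser returns the cookie,
# letting the server recognise the same user across sessions.
browser_request = SimpleCookie()
browser_request.load("uid=user-8d41")
print(browser_request["uid"].value)
```

The point is not the code itself, but the asymmetry it reveals: the exchange happens invisibly, on every page load, without the user doing anything at all.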

Sometimes this information is aggregated – that is, combined from different sources and accumulated into bigger data-sets, which can then be analysed using a computerised process known as data mining. It’s not always possible here for analysts to identify specific individuals – which could have major implications for privacy – but it can be useful for identifying broader trends. Thus, the people who analyse this data might discover that people who like black suede shoes also tend to like fruit smoothies or exotic adventure holidays, and so they will be inclined to target them with ads for such products as well. Google’s own Google Analytics is the market leader in this business, and the company runs several related services, such as AdSense (advertising) and Google Trends (search data); its PageRank algorithm, meanwhile, determines how search results are ranked. It recently partnered with Twitter in its efforts to track consumer behaviour and attention on a more immediate, minute-by-minute basis.
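The underlying logic of this kind of data mining can be shown with a toy example (the records below are invented): count how often pairs of interests appear together across aggregated user data, and then target people accordingly.

```python
from collections import Counter
from itertools import combinations

# Invented, aggregated records of users' apparent interests
records = [
    {"black suede shoes", "fruit smoothies", "adventure holidays"},
    {"black suede shoes", "fruit smoothies"},
    {"fruit smoothies", "gardening"},
    {"black suede shoes", "adventure holidays"},
]

# Count how often each pair of interests co-occurs across the data-set
pair_counts = Counter()
for interests in records:
    for pair in combinations(sorted(interests), 2):
        pair_counts[pair] += 1

# Interests that co-occur with 'black suede shoes' - candidates for ad targeting
related = {}
for (a, b), n in pair_counts.items():
    if "black suede shoes" in (a, b):
        other = b if a == "black suede shoes" else a
        related[other] = n

print(related)  # {'adventure holidays': 2, 'fruit smoothies': 2}
```

Real systems work on billions of records with far more sophisticated statistics, but the principle is the same: patterns in aggregate behaviour become a saleable commodity.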

All these issues need to be seen in the context of Google’s (or Alphabet’s) overall business strategy. In the early days of the internet, enthusiasts like Nicholas Negroponte predicted that it would decentralise the media environment, leading to a ‘cottage industry’ of small producers. While there have undoubtedly been elements of this, digital media have been increasingly dominated by a small number of mega-corporations, on whom all users have to depend. This is a ‘winner take all’ industry, where smaller players – and sometimes those who were a little ahead of their time – frequently go to the wall, while larger companies get ever larger.


[Source: wisdomquarterly.blogspot.co.uk]

This is partly a result of economies of scale, which are apparent across the media industries in general. Unlike many other commodities, media products and services are intangible goods: they are not used up when they are consumed. The makers of baked beans have to make new cans of baked beans for everybody who wants to buy them. By contrast, the makers of Avatar or Lemonade – or indeed the designers of Google’s platform and the algorithms that it uses – only have to make one original. The initial cost of production may be very large indeed, but the more people who use or buy the product, the more the cost is spread. The economic logic here favours larger companies who are in a better position to spend the money upfront.
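The arithmetic behind these economies of scale can be sketched in a few lines of Python (the figures are invented for illustration):

```python
# Invented figures: one large up-front cost, near-zero cost per extra user
fixed_cost = 100_000_000   # e.g. producing the film, or building the platform
marginal_cost = 0.01       # cost of serving each additional copy or user

def average_cost(users: int) -> float:
    """Average cost per user: the one-off fixed cost is spread across everyone."""
    return fixed_cost / users + marginal_cost

for users in (1_000, 1_000_000, 1_000_000_000):
    print(f"{users:>13,} users: {average_cost(users):,.2f} per user")
```

With a thousand users the product costs a fortune per head; with a billion, it costs pennies – which is why scale wins, and why the biggest players keep getting bigger.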

This tendency towards monopoly or oligopoly – the dominance of a few powerful players – is also evident in the pattern of company take-overs. Small start-up companies will often do a great deal of the speculative research-and-development work, pioneering and testing out new products and services. Once the product appears to be commercially viable – and ideally, before it becomes too much of a threat – these companies will then be bought by large companies like Apple, Facebook and Google/Alphabet (all of whom were small start-ups in their time, of course).

For example, back in the 2000s, Google developed its AdWords advertising system and bought the advertising company DoubleClick, which together gave it the means to successfully combine search and advertising. Its purchase of YouTube in 2006 was greeted with scepticism by some commentators, but it has proven extremely lucrative, not least because users provide all the content while simultaneously creating the opportunity for the company to advertise and track consumer preferences. (I’ll be writing another post on the so-called ‘sharing economy’ shortly.) According to an article on Wikipedia, Google has bought an average of one company a week since 2010.


[Source: techweez.com]

The larger companies also have the ability to undertake speculative projects themselves, and to diversify into other areas. Some of Google’s ‘moonshots’ may not seem so promising, but the company has more or less successfully diversified or bought into e-mail (Gmail), browsers (Chrome), maps (Google Maps, Street View), social networking (Google Plus, Google Hangouts) and document sharing and editing (Google Docs), as well as older media such as photography (Picasa), books (Google Books) and of course television and video (via YouTube). It even provides the leading academic search engine, in the form of Google Scholar. Like Facebook, Google has been accused of attempting world domination – a charge that might be a little melodramatic, but maybe isn’t too far from the truth.

Finally, a more cynical – but entirely accurate – answer to my question would be to note that Google (like Facebook and Amazon, for example) has been extremely skilful in the business of tax avoidance. There have been many news stories about this over the past five years, but it is only very recently that the company’s tax affairs have been challenged by the UK parliament and the European Commission. It hasn’t broken any rules, but it has paid very little in ‘corporation tax’ (taxes on businesses) over the years, not least because it uses off-shore tax havens, often with the collusion of governments.

Nothing I have described here is necessarily ‘evil’, in the words of Google’s famous corporate motto (‘don’t be evil’). Some people would argue that it’s just how modern capitalism works. Allowing a company to gather data about our every move online can certainly be seen as a threat to privacy, but some would see it as a reasonable price to pay for services that are convenient to use, and apparently ‘free’. The key question, in my view, is whether we really understand what is happening. The Ofcom figures are fairly alarming, but I would argue that very few users understand what goes on ‘under the hood’ of Google and other companies like it. And that’s another reason why we need Media Studies.


NOTE: I’m currently experimenting on this blog with writing that’s targeted more at students and at a general readership. The aim is to take current media phenomena and link them to Media Studies research and theory, but with a light touch. I’ll continue to write pieces aimed mainly at specialist teachers and researchers, but I’m also trying to write things that will be more widely accessible. Like my earlier posts about Rihanna and Selfies, this is one of those posts. Feedback is most welcome!