Artificial Intelligence in Education: A Media Education Approach

What are the implications of artificial intelligence for media educators, and for the media themselves?

Last weekend, a letter by a group of British headteachers was published in the Times newspaper, with the front page headline ‘AI is clear and present danger to education’. Calling recent developments in the area ‘bewildering’, it urged the government to set up an independent enquiry to establish which of them are ‘beneficial’ and which are ‘damaging’. The letter’s lead author was Sir Anthony Seldon, a well-known private school head and a noted political biographer, whose popular book on the subject is fairly even-handed – despite the dramatic headline.

Coincidentally, I was at an international conference on ‘21st Century Learning’ at the time, hearing some much more enthusiastic accounts of the issue. I doubt that my general scepticism about technology-in-education (familiar to readers of this blog) went down well with many of the delegates. While one or two of the other presentations were somewhat critical, the overall tone was almost overwhelmingly celebratory. As is commonly the case in such discussions, the breathless excitement often resembled a sales promotion, as presenters marvelled at the swathes of ‘cool stuff’ now coming onto the market.

The use of AI is already widespread in many arenas: image recognition software, search engines, file-sharing platforms and retailers’ ‘recommender’ systems all depend upon it. However, what we’re talking about here is something more specific: generative AI. Like the algorithms developed by Google, YouTube, Spotify or Amazon, this is software that gathers, processes, analyses and synthesises vast amounts of data. In this case, however, the outputs are not just a few more-or-less useful recommendations or suggestions, but extensive new texts, produced in response to a user’s prompt, which the user can subsequently refine or edit.
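To make this concrete, here is a minimal sketch of what that transaction looks like from the user’s side, using OpenAI’s own Python library (the 0.x version of the openai package, as it stood at the time of writing). The API key, the model name and the prompt are all placeholders of my own, not anything prescribed by OpenAI.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: requires an OpenAI account

# A single prompt goes in; an extensive, editable text comes back.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write a 200-word overview of the debate about AI in education.",
    }],
)

print(response["choices"][0]["message"]["content"])
```

The point is simply that the ‘interaction’ boils down to sending a prompt and receiving a synthesised text, which the user is then free to refine or edit.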

The debate has gathered pace significantly since the launch last November of ChatGPT, created by the US company OpenAI (and now powered by the fourth iteration of its underlying model, GPT-4). Like some other Silicon Valley start-ups, OpenAI began life in 2015 as a not-for-profit venture, but it has quickly become a hot commercial enterprise: it partners with Microsoft, and is currently (some might say conservatively) projecting annual revenues of $1 billion by 2024. Within two months of its launch, ChatGPT apparently became the fastest-growing consumer application of all time, reaching 100 million users by the start of this year. However much some commentators may fantasise about ‘banning’ such products, they are very much here to stay.

ChatGPT has attracted most attention, but it is only one of many such applications. Notable rivals in the field of verbal text (or ‘large-language-model’) generation include Google’s Bard and Microsoft’s Bing Chat. Others – such as OpenAI’s own DALL-E, along with packages like Stable Diffusion and Midjourney – are capable of producing high-quality images from fairly simple verbal prompts. This issue hit the headlines most recently when the creative photo category in the Sony World Photography Awards was won by an image (pictured here) that turned out to have been generated with DALL-E. (The winner, Boris Eldagsen, had entered precisely in order to draw attention to the issue, and turned down the prize.)
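The image generators work in much the same way, as the brief sketch below suggests: a verbal prompt goes in, and a ‘photographic’ image comes back. Again this uses the 0.x openai package’s image endpoint; the prompt is my own invention, not the one Eldagsen used.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A fairly simple verbal prompt is enough to produce a convincing image.
response = openai.Image.create(
    prompt="A grainy 1940s-style black-and-white portrait of two women",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # link to the generated image
```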

Contrary to OpenAI’s rhetoric of ‘deep learning’, when it comes to education, the fear is that this technology is actually about the very opposite: cheating. In the UK, the debate about AI has been dominated by an overwhelming anxiety about assessment – a phenomenon that reflects the absolutely determining role of high-stakes assessment in our current system. This takes on a particular force in higher education, where institutions are massively (and precariously) dependent on high-paying international students, and where language requirements have been progressively reduced.

Of course, plagiarism is by no means a new phenomenon – nor indeed are ‘essay farms’ that will sell sufficiently competent and seemingly original work. Like most academics, I could tell some amusing and depressing tales about this that go back several decades. Yet while packages like Turnitin are now routinely used to detect cut-and-paste plagiarism, the threat with ChatGPT is that AI-generated work may prove impossible to detect. While some maintain that new services (like gptzero.me) can already perform this task, others are less convinced.
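Detection services such as GPTZero are reported to rely on statistical signals – ‘perplexity’ and ‘burstiness’ among them – rather than any certain fingerprint. The snippet below is only a crude illustration of one such signal (how much sentence lengths vary), written by me for this post; it is emphatically not a working detector.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude proxy for 'burstiness': how much sentence lengths vary.
    Human prose tends to mix long and short sentences; unusually
    uniform lengths are one (weak) signal detectors reportedly use."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Paste any essay here in place of this stand-in paragraph.
essay = (
    "On the one hand, AI offers clear benefits. On the other hand, it poses "
    "risks. Many commentators agree. Others do not. The debate continues."
)
print(f"Sentence-length variation: {burstiness(essay):.2f}")
```

Needless to say, a single number of this kind proves nothing; it merely gives a sense of why ‘others are less convinced’.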

Judging from results I have seen, packages like ChatGPT are pretty good at mimicking the bland ‘on the one hand this, on the other hand that’ style of conventional essay writing. The essays that emerge are typically dutiful and mechanical: they are rarely personal or passionate or humorous or inspired, and of course they cannot include first-hand research.  In the face of what seems like undetectable plagiarism, some institutions are already desperately turning back to sit-down written examinations; yet if the advent of AI encourages educators to develop more creative and challenging approaches to assessment than the predictable essay format, that may be no bad thing.

At my conference, I was asked how I thought the advent of AI would affect the media themselves; and how media educators might respond. Of course, the technology is changing so quickly that it may be too early to say very much about this. As with earlier technologies, we need to be wary both of excitable hype and of doom-saying: AI is unlikely to transport us to a creative utopia any more than it will take us all to hell. This was easy to say; yet as I listened to the hucksters enthusiastically promoting all that ‘cool stuff’, I felt the need to go further. What follows are some early thoughts on these two questions – from the position of one who is evidently far from an expert.

In some respects, the implications for media themselves might seem easier to identify. The use of AI represents a form of automation that seems bound to lead to job losses, or at least to a significant restructuring of working practices. Of course, this is an ongoing process in the media industries in any case; and it may be that the kinds of work that will be replaced will be fairly low-level. Applications like ChatGPT are good at generating generic or descriptive content, and the kind of bland ‘space filling’ reports that journalists routinely have to produce: they are much less good at being innovative or creative, and they don’t do original research. Yet if routine entry-level jobs disappear, where will creative careers begin?

This kind of software initially ‘trains’ itself by scraping or gathering existing online data, and it does so largely without the consent or knowledge of those who originally created it. It does not require permission; nor indeed is it currently possible to prevent this from happening. Inevitably, this has massive implications for copyright; and beyond that, for workers’ livelihoods. Once again, this is not a new issue; but the potential for appropriating (or stealing) existing content from multiple sources and then reworking it is vastly increased when it is possible to do so invisibly.
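A few lines of code are enough to show students how straightforwardly public text can be harvested – which is, in hugely scaled-up form, what ‘training’ on scraped data involves. This sketch assumes the requests and beautifulsoup4 packages are installed, and the URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/"  # placeholder: any publicly accessible page

# No permission is sought: if the page is publicly served, it can be fetched.
page = requests.get(URL, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

# Keep just the visible paragraph text - the kind of material that
# large-scale crawls sweep up as training data.
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
print("\n\n".join(paragraphs))
```

Conventions like robots.txt ask crawlers to stay away from particular pages, but they are requests rather than enforceable rules – which is partly why consent is so difficult to secure.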

Meanwhile, packages like ChatGPT – like Google in its early days – do not outwardly appear to depend upon advertising. They appear to talk to you, to be your friend, to help you, all for free. Yet, as the saying goes, if the service seems to be free, then you are the product. The profitability of such packages depends upon the recording and gathering – and eventually selling – of personal data. The more ‘interactive’ they seem, the greater the potential for surveillance.

A further issue that has generated concern has carried over from debates about so-called ‘fake news’. Generative AI packages currently have no meaningful way of checking the veracity of the original data on which they depend, or indeed of assessing the reliability of sources. This renders them vulnerable to all sorts of ‘algorithmic bias’ that further privilege particular social groups, and raise the potential for the dissemination of disinformation – not least in political campaigns. Self-evidently, the outcomes will only be as good as the existing data on which the application depends.

Critics gleefully report on the basic factual mistakes that can be made by applications like ChatGPT (errors that some refer to as ‘hallucinations’) – although, as I’ve argued before, factual errors are generally easier to spot (and correct) than more complex and subtle forms of misinformation and bias. We might well ask whether misinformation here is likely to be a much greater hazard than it already is with existing forms of journalism. Likewise, visual AI applications are capable of producing convincing ‘deepfakes’, although the potential for this has existed for several years.

Somewhat less obvious to most critics – but arguably much more concerning – are the environmental implications of such technologies. Some reports have suggested that a single ChatGPT dialogue of 20-50 questions consumes half a litre of clean drinking water; and with 100 million users already, the environmental impact is potentially vast. It has been estimated that ‘training’ the earlier iteration of ChatGPT (version 3) may have required as much as 700,000 litres of water to power and cool the vast data centres that the industry euphemistically refers to as the ‘cloud’.
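Taking the figures quoted above at face value (and they are contested estimates rather than measurements), a back-of-envelope calculation gives a sense of the scale involved.

```python
# Back-of-envelope arithmetic using the contested estimates quoted above.
litres_per_dialogue = 0.5        # reported estimate for one 20-50 question chat
users = 100_000_000              # reported user base at the start of the year

# If every user held just one such dialogue:
total_litres = litres_per_dialogue * users
print(f"{total_litres:,.0f} litres")                      # 50,000,000 litres

# Compared with the reported 700,000 litres for training GPT-3:
print(f"{total_litres / 700_000:.0f} times the training estimate")
```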

To some extent, all these problems might be seen merely as extrapolations of existing trends, rather than entirely new developments; all that AI does is to automate (and thus accelerate) processes that have already been under way for some time. Nevertheless, they do undoubtedly reinforce the need for regulation. When Sam Altman, the CEO of OpenAI, recently testified before Congress, he appeared to accept the need for government regulation, and even proposed that the most powerful new systems should be licensed by a dedicated agency.

While it might seem extraordinary for technology companies to be asking for their own products to be regulated in this way, Altman’s performance recalled that of Mark Zuckerberg making similar requests to Congress in the wake of the Cambridge Analytica scandal – as the New Yorker correspondent pointed out. With this new market still in its infancy, companies like OpenAI are trying very hard to establish themselves as early brand leaders, and to steal a march on the competition. They may be keen to manicure their public image, but they surely recognise the likelihood that any meaningful regulation will be lost in the miasma of bureaucracy.

So to the second question: how might media educators respond to these developments? On one level, the advent of AI provides us with yet more new curriculum content. The crucial thing here, in my view, is to build on the critical frameworks and classroom approaches that we have developed over time. Just as with social media, it is both possible and productive to extend media education ‘key concepts’ to this new medium. My observations above suggest a series of questions about the political economy of AI that students might well explore: basic questions about business, work and regulation will all be familiar from the approaches we have developed in relation to ‘older’ media. Likewise, well-established questions about representation, reliability and trust, and about the wider social implications of AI, can usefully be explored and debated by students.

However, there are other questions about how we might actively use AI in our teaching. Here I begin with a quotation adapted from the semiologist Umberto Eco’s arguments about television, from almost fifty years ago: if you want to use media (or technology) to teach somebody, you first have to teach them how to understand media. Media education is not primarily about teaching with or through media, but teaching about media: it is not to be confused with educational media, or educational technology. This takes us well beyond the use of technology as an instrumental tool for learning – an approach that seems to be taken for granted in debates about AI, just as it has been in relation to earlier educational technologies. Arguably, the more effective and seamless the technology becomes, the harder (and yet more necessary) it is to take a critical distance from it: we need to slow it down, to de-familiarise it, and to consider how it might be otherwise.

There are numerous resources that explain the basic principles of how AI works, at least on a technical level. At my conference, I was impressed by some very straightforward materials produced by BSD Education, which are quite usable at primary school level; and from there, it would not be difficult to move on to more critical questions about algorithmic bias. (There are numerous others, notably Day of AI, which comes out of MIT; and a large collection here.)

However, ‘teaching the algorithm’ should entail more than just showing how it works. One long-standing approach in media education is the practical simulation, in which students experiment systematically to create different versions of a media text, in line with a defined brief, and then analyse and compare them. Thus, different groups within a class might be given slightly different parameters (different audiences, for example), or be encouraged to create different ‘prompts’ for the software, and then assess the resulting outcomes. It may also be possible to ask an application like ChatGPT about the sources on which it has drawn, which would allow some further interrogation (although its answers on this score are not always reliable). It should also be interesting to compare the outputs of different applications.
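As a minimal sketch of how such a simulation might be run – assuming access to the OpenAI API (the 0.x openai package again), and with the brief, the audiences and the model name all invented for illustration – each group’s prompt is sent to the same model and the outputs saved side by side for comparison.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The same brief, rewritten by each group for a different audience.
prompts = {
    "broadsheet": "Write a 150-word news report on a school banning phones, for a broadsheet newspaper.",
    "teen_app": "Write a 150-word news report on a school banning phones, for a teenage news app.",
    "parents_newsletter": "Write a 150-word news report on a school banning phones, for a parents' newsletter.",
}

for audience, prompt in prompts.items():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = response["choices"][0]["message"]["content"]
    with open(f"{audience}.txt", "w") as f:  # save each version for class comparison
        f.write(text)
    print(f"--- {audience} ---\n{text}\n")
```

The analysis that follows – of register, of assumptions about the audience, of what the software gets subtly wrong – is where the media education happens.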

This could lead on to a more open-ended creative approach, provided it is handled thoughtfully. The most interesting insights on this at my conference came in a presentation by Glenn Malcolm, which can be accessed as a blog here. Glenn’s overall approach is refreshingly critical; but he also offers some engaging and productive ways to use the technology to create media such as comics and animation. Of course, this by no means precludes the need to teach the ‘languages’ of media (codes and conventions, as we used to call them); and there is always a danger of being seduced by the gloss and professionalism of the finished products, and forgetting the need for critical reflection. The work Glenn presents arises from the context of an elite international school: I’m not sure it is within the reach of most of us yet, although it surely won’t be long before we have the simple AI equivalents of iMovie or Garage Band.

Teaching almost always uses media and technology in some form. The book is a medium; the chalkboard is a technology. If a particular technology can accomplish certain mechanical tasks, freeing us up to think more deeply, that is surely a good thing; although we also need to weigh up the downsides. If we learn to collaborate with technology, then it is unlikely to replace us. As ever, the question is whether teachers can learn to use technologies critically and creatively.

(NOTE: This blog post was not created by ChatGPT.)
