AI Tools Must Be More Than Just Fuel for Fake News

‘Pope in a coat’ and former U.S. President Donald Trump behind bars; welcome to the internet in 2023. These two headline-grabbing ‘stories’ were not so much hot off the presses as they were the manifestations of Midjourney, an AI programme which promises to ‘expand the imaginative powers of the human species’.

It’s certainly not short of ambition. Midjourney and other AI-based image generators are products of the new-age information arms race we now find ourselves in the thick of, torn between fake news peddlers and their increasingly complex mechanisms to shape our perceptions of what is real, and what is not, when browsing online.

Hyperreal images of Trump’s arrest and triumphant escape, or the Pope sporting a white puffer jacket, are recent examples of synthetic media and viral misinformation becoming one and the same.

But how do we know what is real and what isn’t? And given that we don’t know what the AI of tomorrow will look like, how does the industry regulate and legislate to ensure transparency across the board?

What we are seeing is social platforms beginning to put an obligation on users to label AI-generated material for what it is: synthetic media. Even the new hotbed for viral social content, TikTok, now requires all realistic deepfakes to be ‘clearly disclosed’, creating a scenario where the terms ‘fake’, ‘not real’ or ‘altered’ are starting to seep into news feeds the world over.

It is abundantly clear to me that we need a process that can keep up with the changes happening in the sector right now and, reading between the lines, it’s clear the battles with tech companies around regulation have only just begun. The new AI chatbot tools from Google and OpenAI, Bard and ChatGPT respectively, can mimic human language so convincingly that we are surely on course for a deluge of online misinformation the likes of which we haven’t experienced before. Not just in terms of volume, but sophistication.

Indeed, such is the rapid growth trajectory of AI that the UK’s Science, Innovation and Technology Secretary warned last month that any legislation drafted now will be out of date by the time it’s implemented. Artificial intelligence is the genie that’s already out of the bottle, but it’s not one that’s beyond our control, despite what the sci-fi narratives will tell you.

Yes, we are now facing something of an ‘infodemic’ of fake news and forged videos. However, it is artificial intelligence that also holds the power to help separate fact from fiction and, in effect, curb the spread of misinformation: fighting fire with fire in the annals of the internet.

From deepfake detection to preventive technologies, these tech solutions can serve as a shield for cyberspace, and the better the collaboration between government, tech companies and academia, the better chance we’ll have at staying perhaps not ahead of the curve, but at least acutely aware of where it’s headed.

Education will also be vitally important in the years to come. Through digital literacy and public awareness campaigns, we can teach generations old and new about the watch-outs for deepfakes and generative AI deliberately designed to dupe the end user, not to mention the potential consequences.

‘Pope in a coat’ may have been fairly benign at first glance, but the doctored image is now being called the “first real mass-level AI misinformation case”, whereby experimental technology is pushed into the wider culture without any real oversight or regulation. How will future historians decipher which parts of the 2020s were real? And that’s without treading into the many more malicious examples which pose a threat to people’s privacy, safety and sanity.

Gone are the days when being fooled online amounted to Rick Astley’s “Never Gonna Give You Up” appearing spontaneously at the start of a video clip. There’s still a time and a place for playful content, of course, but when the tools of generative AI are used to maliciously mislead, steps must be taken to turn the tide on misinformation and help protect the integrity of online news.

Because fool me once, artificial intelligence, and shame on you. Fool me twice – well, when the content rendered is so uncanny that it’s capable of duping bona fide experts, perhaps the parameters for ‘being fooled online’ ought to be redrawn on the internet of tomorrow.
