As if we needed more proof we’re living inside a Black Mirror episode, political parties are now using the near-infinite power of artificial intelligence to bolster their campaigns.
The National Party in New Zealand has admitted to using AI to produce images for their attack advertisements against the ruling Labour Party.
The ads featured pictures of a group of thieves raiding a jewellery store, two nurses, and a supposed crime victim staring out of a window. One of the ads even appeared to show the cast of the Fast and Furious movies.
The images raised suspicions due to their peculiarities, such as a woman with unusually large eyes, nurses with plasticine-looking skin, and balaclava-wearing thieves whose eye holes didn’t line up with their faces.
The arrival of ChatGPT and similar consumer applications has given ordinary people access to some of the most advanced language models ever built.
Within a few short months of its release, people have already used the free service to generate income, among many other inventive uses.
However, its use in politics has raised a number of ethical questions, with governments facing increasing pressure to prepare for the inevitable grey areas that will arise.
When queried about the images, party leader Christopher Luxon expressed uncertainty and stated that he needed to consult with the team before providing a response.
However, the National Party later confirmed that the nurses, crime victims and robbers were indeed created using AI.
A spokesman for the party referred to it as an “innovative way to drive our social media” and emphasised their commitment to responsible use of AI.
While AI-generated images are becoming increasingly sophisticated, they often exhibit visual anomalies like extra fingers, strange features, or distorted details, which can give them an eerie quality.
As AI programs continue to advance, concerns are growing that the public will find it increasingly difficult to distinguish AI-created pictures, videos and audio recordings from real ones. This raises questions about whether political parties should be required to disclose their use of AI.
In the UK, experts have expressed worries about a potential influx of AI-driven misinformation during upcoming elections and are advocating for regulations governing the use of AI in political advertising.
The New Zealand election is scheduled for October, and the country currently has no laws specifically addressing the use of AI in political advertising.
Similar concerns have been raised in the United States after the GOP released a video attack ad featuring AI-generated images of President Joe Biden and computer-generated depictions of social collapse.
In response, politician Yvette Clarke introduced a bill in Congress that would require disclosures of AI-generated content in political ads.
The bill highlights the potential of generative AI to exacerbate and spread misinformation and disinformation at a large scale and unprecedented speed, aiming to inform the public when AI-generated images are used in political advertisements.
Even experts are alarmed by the escalating pace of AI development.
Geoffrey Hinton, an AI pioneer known as the “godfather of artificial intelligence”, recently announced his resignation from Google, citing growing concerns about the potential dangers of artificial intelligence.
Hinton, 75, expressed regret about his work in a statement to The New York Times, warning that chatbots powered by AI are “quite scary” and could soon surpass human intelligence.
He explained that AI systems like GPT-4 already eclipse humans in terms of general knowledge and could soon surpass them in reasoning ability as well.
He described the “existential risk” AI poses to modern life, highlighting the possibility for corrupt leaders to interfere with democracy, among several other concerns.
Even the CEO of OpenAI, the company behind ChatGPT, has admitted that the technology poses real dangers.
“We’ve got to be careful here,” Sam Altman told ABC News last month.
“I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation.
“Now that they’re getting better at writing computer code, it could be used for offensive cyberattacks.”