Barrage of State-Backed Cyber Attacks Threatens Britain's 2024 Elections, AI a Key Risk

Britain faces cyber threats ahead of 2024 elections, including AI-driven disinformation and manipulation by nation-states, posing a risk to the democratic process.

Barrage of Cyber Attacks Expected in 2024 UK Elections

Britain is expected to face a barrage of state-backed cyber attacks and disinformation campaigns as it heads to the polls in 2024. Cybersecurity experts who spoke to CNBC identified artificial intelligence as a key risk in the upcoming elections.

Upcoming Elections and Risks

Brits are set to vote on May 2 in local elections, with a general election expected in the second half of the year. The country faces several challenges, including a cost-of-living crisis and stark divisions over immigration and asylum. Cybersecurity risks are expected to emerge in the months leading up to election day, when the majority of U.K. citizens will cast their votes in person at polling stations.

Previous Disruptions and State-Backed Attacks

In 2016, both the U.S. presidential election and the U.K. Brexit referendum were found to have been disrupted by disinformation spread on social media platforms, allegedly by Russian state-affiliated groups. Since then, state actors have routinely launched attacks in various countries in attempts to manipulate election outcomes.

Last week, the U.K. alleged that a Chinese state-affiliated hacking group attempted to access U.K. lawmakers' email accounts, although the attempts were unsuccessful. In response, the U.S., Australia, and New Zealand imposed their own sanctions. China, however, denied the allegations of state-sponsored hacking, calling them "groundless."

Malicious Interference and Artificial Intelligence

Cybersecurity experts expect malicious actors to interfere in the upcoming elections in several ways, and they anticipate that disinformation, especially AI-generated disinformation, will be even more prevalent this year. Synthetic images, videos, and audio produced with generative AI, commonly referred to as "deepfakes," are expected to be commonplace. Threat actors are also likely to use AI to sharpen identity-based attacks such as phishing and social engineering, as well as ransomware and supply chain compromises, targeting politicians, campaign staff, and election-related institutions.

Heightened Awareness and International Cooperation

The cybersecurity community has called for heightened awareness of AI-generated misinformation and stressed the importance of international cooperation to mitigate the risk of such malicious activity. China, Russia, and Iran are considered highly likely to conduct misinformation and disinformation operations against elections worldwide, aided by generative AI and other tools. Experts emphasize how fragile the democratic process is in the face of these threats, particularly when hostile nation-states can craft compelling false narratives using generative AI and deepfakes.

Impact of AI on Cybersecurity

A major concern is that AI is lowering the barrier to entry for criminals seeking to exploit people online. Examples range from scam emails to more advanced, personalized attacks crafted with AI models trained on individuals' publicly available social media data. In one case, a fake AI-generated audio clip purporting to capture Keir Starmer, leader of the opposition Labour Party, verbally abusing party staffers was posted on a social media platform and garnered as many as 1.5 million views. The rise of deepfakes has heightened cybersecurity experts' concerns about the upcoming U.K. elections.

Role of Tech Companies in Mitigating Misinformation

The local elections are also expected to serve as a test for tech giants such as Facebook owner Meta, Google, and TikTok in their efforts to keep their platforms free of misinformation. Meta has already taken steps to label AI-generated content, but deepfake technology is advancing quickly, leaving tech companies in a "cat and mouse game" of using AI to detect deepfakes and blunt their impact.

Verifying Authenticity

With authentic content increasingly difficult to distinguish from fabricated material, cybersecurity experts stress the importance of verifying the authenticity of content before sharing it. While deepfakes are expected to be prevalent throughout the election cycle, such verification remains a critical step in countering the spread of misinformation.


Copyright ©2025 All rights reserved | PrimeAi News