Ministers are to step up plans to protect the next general election from interference or manipulation by artificial intelligence, it has emerged.
Risks from the technology – including the threat posed to democracy by bad actors – have shot to the top of the government’s priority list in the past few weeks, after senior figures in the AI sector warned it needed more regulation.
Rishi Sunak, at the G7 summit in Japan, has promised that the UK would be at the forefront of international attempts to put “guardrails” on AI to stop it getting out of control.
He said the UK was in a “natural position” to lead international work on regulating AI, pointing to the “huge benefits” of the technology, but warning it would need co-ordinated action to prevent dangerous risks from emerging.
Inside Whitehall, officials plan to use new legislation to tackle AI threats, alongside a series of measures to stop future local and general elections being hijacked by deepfakes and misinformation spread by hostile forces.
Sam Altman, the chief executive of OpenAI, the company behind ChatGPT, told the US Congress this week that next year’s presidential election risked being manipulated by AI.
A special Election Cell will be stood up in the run-up to the next general election, expected in 2024, to respond to any threats, including foreign interference.
The cell was created for the 2019 election, but ministers and officials believe the rapid progress of AI technology since then means it will be all the more vital when voters go to the polls next time.
Whitehall will also bolster its Counter Disinformation Unit, which will support the Election Cell, to identify and rebut harmful or false content that emerges during an election campaign.
The Online Safety Bill currently going through parliament gives Ofcom new powers to take action against social media companies if elections come under threat from bad actors, including those using AI. The National Security Bill, now in the Lords, includes measures to limit foreign interference in UK elections, including deepfakes proliferated by hostile state actors.
A government spokesperson said: “The government recognises the threat that digitally manipulated content can pose, and takes the issue very seriously. Our priority is always to protect our elections and take action to respond to any threats to the UK’s democratic processes and institutions.
“Under the Online Safety Bill, all companies subject to the safety duties will be required to remove illegal content from their platforms when they become aware of it. This will include the unlawful use of deepfakes or manipulated media.”
In March the government published a white paper setting out its strategy on AI, but MPs and experts have urged ministers to move faster to set up new regulation and protections against the technology.
A Downing Street spokesman said: “Of course the integrity of elections is something we will always continue to monitor and look at and that’s something that the Cabinet Office does frequently.
“We know AI is an evolving technology that is regularly changing and is moving at pace, which is why we’ve set out the white paper – I believe we were one of the first nations to set out a blueprint for safe and responsible development of AI. So that’s something that we will continue to work on and take forward.”
Speaking at the close of the G7 summit, Mr Sunak said: “A new theme of this summit was AI. AI can bring huge benefits for our economy, society, and public services. But of course – it needs to be developed safely, securely, and fairly. And that will require international co-operation, something the UK is in a natural position to lead.”
The leaders of the G7 countries agreed in their official communique that they would work together to keep AI “trustworthy”, guarding against threats such as disinformation and copyright theft.
Concern over AI’s potential impact has spread across several industries, with education leaders the latest to voice fears about its possibly detrimental effects.
A letter sent to The Times earlier this week, signed by more than 60 education figures, said schools are “bewildered” by the rate of change of AI and believe it is moving “far too quickly”.
Sir Anthony Seldon, headteacher of Epsom College, said risks of plagiarism and deepfakes could cause “moral damage” to young people.
Mr Sunak had previously said “guardrails” are to be put in place to maximise the benefits of AI while minimising the risks to society.

Source: i News