You can tell just by the way he talks about it – Rishi Sunak is very excited about the benefits of AI.
Today he spoke about the transformation it will bring – one that is “as far-reaching as the Industrial Revolution”.
But at a podium adorned with the slogan “long-term decisions for a brighter future”, the PM also wanted to confront the risks.
The government, he said, had taken the “highly unusual step” of publishing its analysis of the risks of AI – including those to national security.
Three reports were drawn up by a team of expert advisers – drawn from tech start-ups, Whitehall and academia – to lay the groundwork for next week’s AI Safety Summit at Bletchley Park.
It makes for sobering reading.
“A smorgasbord of doom,” said one political journalist.
Biggest threats exposed
The reports describe the risks that artificial intelligence can pose through deliberate misuse or purely by accident.
They range from enabling cybercriminals to create more dangerous viruses, to crashing the economy – and, in the most extreme case, the “loss of control” of some future artificial general intelligence that is more capable than humans across a range of tasks and could ultimately destroy society.
The Prime Minister announced a new AI safety institute that will be dedicated to assessing these risks from the most powerful “frontier AI” models currently available – such as OpenAI’s ChatGPT, Meta’s Llama or Google’s PaLM2 – and those expected soon to replace them.
But how hard is the UK prepared to come down on tech companies over security?
I asked Mr. Sunak if he would force tech companies to hand over the code for their models as well as the data used to train them.
“In an ideal world, what you say is true,” he said. “These are all the types of conversations we have to have with the companies.”
Not exactly a yes.
Risk v reward at the heart of the AI dilemma
Sunak chose his words carefully, because his summit is as much about encouraging big tech companies to do business in the UK as it is about confronting the risks.
It aims to develop a regulatory environment that does not discourage investment or innovation.
There is also another reason.
When dealing with “potential risks” in an exponentially growing area of technology, it’s hard to know what it is you’re actually regulating.
Then there’s the fact that big tech is multinational, and drawing up a set of rules here might be pointless if the same doesn’t apply elsewhere.
The best many hope for from the summit is that it serves as a profile-raising exercise – the beginning of a conversation.
But some in the AI world say certain red lines could be drawn now.
A ban, for example, against the pursuit of artificial general intelligence (AGI) models that are able to perform multiple tasks and are superior to humans in each of them.
Rules could be drawn up now to, in principle, prevent a future AI model that can control the majority of the world’s industrial robots from talking to the AI that dominates our office software or drives our cars.
Tech companies have made no secret of their desire to develop AGI. They have also said they want to make sure they are safe and are willing to be regulated.
But next week, Rishi Sunak will walk a technological tightrope – encouraging the development of the best AI has to offer (preferably in the UK), without limiting that potential by looking like someone who wants to regulate too hard.
We might come out of the AI Safety Summit with a better idea of what the biggest threats are – and of the options we have for avoiding them, so that the true benefits of AI can be realised.
But if you are expecting to see any long-term decisions for a brighter future, don’t hold your breath.