The world is “nowhere close” to artificial intelligence (AI) becoming an existential threat, according to the Pentagon’s chief digital and artificial intelligence officer, who predicts future supremacy in the field will all come down to data.
“It’s not a singular technology where if we have AI we’ll be successful and if the other guys have AI we’ll be at risk,” he said.
“It is neither a panacea nor a Pandora’s box.”
Dr. Martell was in London at Defence and Security Equipment International (DSEI), one of the world’s largest arms fairs.
After months of hype around large language models and predictions of human replacement by machines, it’s no surprise that AI featured prominently at the event.
But according to Dr. Martell, AI’s power to plan and execute wars is misunderstood.
“There isn’t an AI system in the modern world that doesn’t take data from the past, build a model from it and use it to predict the future,” he said.
“So the moment the world is different, that model is no longer maximally effective.”
And in the fog of war, that might not make AI very useful at all.
The role of AI in future warfare
But among the weapons systems and surveillance tools at the London Arms Show, AI stands out.
Weapons are increasingly autonomous, missiles can move faster than a human’s ability to make decisions, and the amount of data available from satellites and drones is increasing exponentially.
Arms companies tout artificial intelligence as the tool to give commanders the edge, but will the US military allow artificial intelligence to make life-and-death decisions?
“Well, if I have my way, we won’t,” said Dr. Martell. “There will always be a responsible human making the decisions that govern these systems.”
But that doesn’t mean the Pentagon isn’t aggressively pursuing AI.
It is under pressure from Congress to evaluate and integrate AI into its operations before its rivals do.
China has made it clear that military applications of artificial intelligence are part of its strategy to become a world leader in the field.
Vladimir Putin once declared that the nation that achieves dominance in AI “will rule the world”.
But according to Dr. Martell, large language models, while technically exciting, are currently too unreliable to be used in anything but the lowest-risk activities of the Defense Department – perhaps writing the first draft of a memo.
Other AI tools, such as computer vision technology or pattern recognition tools, are already widely used in the military and businesses — but each must be evaluated on a case-by-case basis.
Last month, the Pentagon launched Taskforce Lima, a program to assess the suitability and safety of the latest generative AIs.
But Dr. Martell says his main goal right now is to collect the “exabytes” of data that the US military has access to.
Just as an army marches on its stomach, its AIs will only work if fed high-quality data.
“The value of that technology will be completely dependent on the quantity and quality of the data we have,” said Dr. Martell.
“A lot of uncurated, unlabeled data isn’t information, it’s noise.”
The Pentagon has requested $1.4 billion (£1.1 billion) from Congress to centralize its data – everything from satellite surveillance images to troops’ fuel and food consumption.
The road to AI supremacy sounds more like an IT systems overhaul than a surge of robot warriors.
And while the US military is pursuing a strategy of AI superiority, comparisons to an arms race like that of nuclear weapons are misplaced.
“Unlike nuclear weapons, where either you know how to do it – in which case you’ve unleashed a Promethean fire – or you don’t, I don’t think this [AI] is close to that,” Dr. Martell said.