The nation’s biggest technology leaders loosely endorsed the idea of government regulation of artificial intelligence on Wednesday at an unusual closed-door session of the US Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.
Executives attending the meeting included Tesla’s CEO Elon Musk, Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting “may go down in history as being very important to the future of civilization.”
First, however, the legislators must agree on whether and how to regulate.
Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including nearly two dozen tech executives, advocates and skeptics — whether the government should have a role in AI oversight, and “every single person raised their hands, even if they had different views.”
Among the ideas discussed were whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the United States can stay ahead of China and other countries.
“The key point was really that it’s important for us to have a referee,” Musk said during a break in the daylong forum. “It was actually a very civilized discussion among some of the smartest people in the world.”
Schumer won’t necessarily take the advice of tech executives as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting, hoping they would give senators a realistic direction for meaningful regulation.
Congress should do what it can to maximize AI’s benefits and minimize its downsides, Schumer said, “whether it’s fixing bias, or job losses, or even the kinds of doomsday scenarios that were mentioned in the room. And only government can be there to put a guardrail in.”
Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown largely unchecked by government in recent decades. Many lawmakers point to the failure to pass any legislation on social media, such as stricter privacy standards.
Schumer, who has made artificial intelligence one of his top issues as leader, said regulating artificial intelligence will be “one of the hardest problems we’ll ever tackle,” and he listed some of the reasons why: It’s technically complicated, it keeps changing, and it “has such a broad, broad impact all over the world,” he said.
Sparked by the release of ChatGPT less than a year ago, companies have rushed to deploy new generative AI tools that can compose human-like passages of text, write computer code, and create new images, audio, and video. The hype over such tools has accelerated concerns about their potential societal harms and prompted calls for more transparency about how the data behind the new products is collected and used.
South Dakota Republican Sen. Mike Rounds, who chaired the meeting with Schumer, said Congress must get ahead of fast-moving AI by ensuring it continues to develop “on the positive side” while also addressing potential issues around data transparency and privacy.
“AI is not going away, and it can do some really great things, or it can be a real challenge,” Rounds said.
The technology leaders and others outlined their views at the meeting, where each participant was given three minutes to speak on a topic of their choice. Schumer and Rounds then led a panel discussion.
During the discussion, Musk and former Google CEO Eric Schmidt raised existential risks associated with AI, according to attendees who spoke about it, and Zuckerberg brought up the issue of closed vs. “open source” AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.
As for a potential new agency for regulation, “that’s one of the biggest questions that we have to answer and that we will continue to discuss,” Schumer said. Musk said afterward that he believes the creation of a regulatory agency is likely.
Outside the meeting, Google CEO Pichai declined to elaborate on the specifics, but generally supported the idea of Washington’s involvement.
“I think it’s important that the government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion,” he said.
Some senators were critical of the public being shut out of the meeting, arguing that the tech executives should testify publicly.
Republican Sen. Josh Hawley of Missouri said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Democratic Sen. Richard Blumenthal of Connecticut to require tech companies to apply for licenses for high-risk AI systems.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it off to the public,” Hawley said.
While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer’s event risked emphasizing the concerns of big business over everyone else.
Sarah Myers West, executive director of the nonprofit AI Now Institute, estimated Wednesday that the room’s combined net worth was $550 billion, and it was “hard to imagine that a room like that represented in any meaningful way the interests of the broader public.” She did not participate.
In the US, major tech companies have expressed support for AI regulations, although they don’t necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.
Some concrete proposals have already been introduced, including legislation from Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with misleading images and sounds. Schumer said they discussed “the need to do something fairly immediately” before next year’s presidential election.
Hawley and Blumenthal’s broader approach would create a government regulator with the power to audit certain AI systems for harm before issuing a license.
Some of those invited to Capitol Hill, such as Musk, have expressed deep concern and evoked popular science fiction about the possibility of humanity losing control of advanced AI systems if the right safeguards are not in place. But the only academic invited to the forum, Deborah Raji, a researcher at the University of California, Berkeley who has studied algorithmic bias, said she was trying to emphasize that real-world harm is already occurring.
“There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be,” Raji said.
What remains to be seen, she said, is which voices senators will listen to and which priorities they elevate as they work to pass new laws.
Some Republicans have been wary of following the path of the European Union, where lawmakers in June signed off on the world’s first comprehensive set of artificial intelligence rules. The EU’s AI law would regulate any product or service that uses an AI system and classify it according to four levels of risk, from minimal to unacceptable.
A group of European companies has urged EU leaders to rethink the rules, arguing they could make it harder for companies in the 27-nation bloc to compete with rivals abroad in the use of generative AI.