If the burgeoning artificial intelligence industry has a spokesman, it is Sam Altman, the CEO of OpenAI, the maker of ChatGPT. On Tuesday, Altman testified before the Senate Judiciary Committee in what turned out to be a wide-ranging, big-picture conversation about the future of artificial intelligence.
The stunning speed with which artificial intelligence has advanced, in only a matter of months, has inspired Congress with a rare bipartisan zeal to keep Silicon Valley's innovations from outpacing Washington again.
For decades, policymakers had deferred to the promises coming out of Palo Alto and Mountain View. But as evidence has grown that reliance on digital technology comes with serious social, cultural and economic downsides, both parties have shown willingness — for different reasons — to regulate technology.
Integrating artificial intelligence into American society presents a major test for lawmakers to show the high technology sector that it cannot escape the scrutiny to which other industries have long been accustomed.
Testifying alongside Altman were IBM’s Christina Montgomery, chair of the company’s AI ethics board, and Gary Marcus, a critic of artificial intelligence who teaches at New York University.
In his opening remarks, which were partially generated by artificial intelligence, Sen. Richard Blumenthal, D-Conn., acknowledged Congress’s struggle to impose meaningful regulations on social media.
“Congress failed to meet the moment on social media,” he said. “Now we have an obligation to do it on AI before the threats and risks become real.”
After the 2016 election, many Democrats charged platforms like Twitter and Facebook with disseminating misinformation that they claimed helped Donald Trump defeat Hillary Clinton for the presidency. Republicans, meanwhile, accused those same platforms of suppressing right-leaning content or “shadow banning” conservatives.
Political grievances aside, it has become clear that social media is harmful to teens, facilitates the spread of bigoted views and leaves people more anxious and isolated. It may be too late to address those concerns in the case of social media companies, which have become mainstays of the corporate and cultural landscape. But there is still time, members of Congress agreed, to make sure that AI does not generate the same social ills.
IBM’s Montgomery acknowledged the obvious risks that artificial intelligence poses to workers in a variety of industries, including those whose jobs had previously been seen as safe from automation.
“Some jobs will transition away,” she said.
Altman, who has emerged as a kind of industry elder statesman and seemed eager to embrace that role on Capitol Hill on Tuesday, offered a different take.
“I think it’s important to understand and think about GPT-4 as a tool, not a creature,” he said, referring to OpenAI’s latest generative AI model. Such models, he said, were “good at doing tasks, not jobs” and would therefore make work easier for people without replacing them altogether.
Anticipating the senators’ regrets about having missed the chance to regulate social media, Altman presented artificial intelligence as an altogether different development, one likely to be far more transformative and beneficial than a feed of cat memes (or, for example, racist messages).
“This is not social media,” he said. “This is different.”
Altman and Montgomery agreed that regulation was required, but neither they nor lawmakers could say, at this relatively early stage in the policy conversation, what such regulation should look like.
“The era of AI cannot be another era of ‘move fast and break things,’” Montgomery said, alluding to the worn-out Silicon Valley mantra. “But we don’t have to slam the brakes on innovation, either.”