Sam Altman, AI’s biggest star, sure hopes someone figures out how not to destroy humanity
Analysis by Allison Morrow, CNN
New York (CNN) — Sam Altman, the P.T. Barnum of the AI industry, has a message for the folks concerned about the technology he’s dedicated his life to advancing: Don’t worry, the nerds are on it.
Let’s back up a bit.
Altman, the 39-year-old venture capitalist and CEO of OpenAI, was speaking with journalist Andrew Ross Sorkin at the New York Times’ Dealbook Summit on Wednesday. Altman was his usual mellow but disarmingly kind self, almost making you forget he’s a billionaire doomsday prepper who has also repeatedly warned about the risks of artificial intelligence.
At one point, Sorkin asked: “Do you have any faith that the government, or somebody, is going to figure out how to avoid” the existential threats posed by “superintelligent” AI systems?
Cue the shy-guy deflection.
“I have faith that researchers will figure out how to avoid that,” Altman replied. “I think there’s a set of technical problems that the smartest people in the world are going to work on. And, you know, I’m a little bit too optimistic by nature, but I assume that they’re going to figure that out.”
He went on to suggest, without elaborating, that perhaps the AI itself will be so smart that it will just figure out how to rein itself in.
“We have this magic —” Altman said, before correcting himself. “Not magic. We have this incredible piece of science called deep learning that can help us solve these very hard problems.”
Ah, yes. And ExxonMobil will solve the climate crisis…
Look, it’s hard not to be charmed by Altman, who did not respond to a request for comment. He carries himself with the coolness of knowing that even if his technology wrecks the global economy, he’ll be safe in his coastal California bunker. (“I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to,” he told The New Yorker in 2016.)
But for the rest of us, it’d be nice to hear Altman, or any of his fellow AI boosters, explain what exactly they mean when they say things like, “we’ll figure it out.”
Even AI researchers admit that they still don’t understand precisely how the technology actually works. AI systems are essentially black boxes that pose “an extinction-level threat to the human species,” according to a report commissioned by the US State Department.
Even if researchers could sort out the technical mumbo jumbo and resolve what they refer to as the “alignment problem” — making sure AI models don’t become world-destroying monster bots — Altman admits there would still be problems for someone, or some government, to fix.
At the Dealbook Summit, Altman once again punted responsibility for regulating the technology to some made-up international body of rational adults who don’t want to kill one another. He told Sorkin: “Even if we can make that (super-intelligent model) technically safe, which I assume we’ll figure out, we are going to have to have some faith in our governments… There is going to have to be global coordination… I assume we’ll rise to the occasion, but it seems challenging.”
That’s a lot of assuming, and it reflects a myopic understanding of, like, how policymaking and global coordination actually work, which is to say: slowly, inefficiently and often not at all.
It’s a naivete that must be pumped into the water of Silicon Valley’s one-percenters, who are keen on jamming AI into every device we use despite the technology’s shortcomings. Which is not to say it’s useless! AI is being used to do all kinds of cool stuff, like help people who are disabled or elderly, as my colleague Clare Duffy has reported. And some AI models are doing exciting things with biochemistry (which are frankly over my head, but I trust the honest-to-God scientists who took home the Nobel Prize for it earlier this year).
Still, the brightest stars in AI, who understand the spectrum of the tech’s potential better than anyone, seem shockingly blasé about the lack of regulations around it.
Perhaps the spectacle of naivete is all part of the Altman image. In the same interview Wednesday, the OpenAI CEO made some other hard-to-believe statements, like when he suggested that he isn’t motivated by the billions of dollars in equity he could get from the company — he just loves the work.
And later, Altman also tried to tamp down speculation about his dramatic falling-out with his OpenAI co-founder, Elon Musk, who has since founded his own AI company, xAI, and ascended to President-elect Donald Trump’s inner circle.
When asked whether he was worried about Musk abusing his newfound influence and potentially shutting out competitors to xAI and his other tech businesses, Altman responded, quaintly, that he was not losing sleep over it.
“I believe, pretty strongly, that Elon will do the right thing,” he said. “It would be profoundly un-American to use political power, to the degree that Elon has it, to hurt your competitors and advantage your own businesses.”