- OpenAI CEO Sam Altman said AI’s risks include “disinformation problems or economic shocks.”
- Altman said he empathizes with people who are very afraid of advanced AI.
- OpenAI has said it taught GPT-4 to avoid answering questions seeking “illicit advice.”
OpenAI CEO Sam Altman is still sounding the alarm about the potential dangers of advanced artificial intelligence, saying that despite its “tremendous benefits,” he also fears the potentially unprecedented scope of its risks.
His company — the creator of hit generative AI tools like ChatGPT and DALL-E — is keeping that in mind and working to teach its AI systems to avoid putting out harmful content, Altman said on tech researcher Lex Fridman’s podcast, in an episode posted on Saturday.
“I think it’s weird when people think it’s like a big dunk that I say, I’m a little bit afraid,” Altman told Fridman. “And I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”
“The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for,” he added. “And that doesn’t require superintelligence.”
As a hypothetical, he raised the possibility that large language models, known as LLMs, could influence the information and interactions social media users experience on their feeds.
“How would we know if on Twitter, we were mostly having like LLMs direct whatever’s flowing through that hive mind?” Altman said.
Twitter CEO Elon Musk did not respond to Insider’s emailed request for comment. Representatives for OpenAI did not respond to a request for comment beyond Altman’s remarks on the podcast.
OpenAI released its latest model, GPT-4, this month, saying it outperforms earlier versions on standardized tests like the bar exam for lawyers. The company also said the updated model can understand and comment on images, and can teach users by engaging with them like a tutor.
Companies like Khan Academy, which provides online classes, are already tapping into the technology, using GPT-4 to build AI tools.
But OpenAI has also been upfront about kinks that still need to be worked out with these types of large language models. AI models can “amplify biases and perpetuate stereotypes,” according to a document by OpenAI explaining how it addressed some of GPT-4’s risks.
Because of this, the company tells users not to use its products where the stakes are more serious, like “high risk government decision making (e.g., law enforcement, criminal justice, migration and asylum), or for offering legal or health advice,” according to the document.
Meanwhile, the model is also learning to be more judicious about answering queries, according to Altman.
“In the spirit of building in public and bringing society along gradually, we put something out, it’s got flaws, we’ll make better versions,” Altman told Fridman. “But yes, the system is trying to learn questions that it shouldn’t answer.”
For instance, an early version of GPT-4 had less of a filter about what it shouldn’t say, according to OpenAI’s document on its approach to AI safety. It was more inclined to answer questions about where to buy unlicensed guns, or about self-harm, whereas the launched version declined to answer those types of questions, according to the same document.
“I think we, as OpenAI, have responsibility for the tools we put out into the world,” Altman told Fridman.
“There will be tremendous benefits, but, you know, tools do wonderful good and real bad,” he added. “And we will minimize the bad and maximize the good.”