How OpenAI’s CEO Just Shattered the Myth of Giant AI Models
If you’re like me, you’ve probably been amazed by the stunning capabilities of ChatGPT, the chatbot from startup OpenAI that can converse with you on almost any topic. ChatGPT is powered by GPT-4, one of the largest and most powerful artificial intelligence models ever created. It was trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.
But what if I told you that giant AI models like GPT-4 are not the future of artificial intelligence? What if I told you that they are actually overrated, inefficient, and unsustainable? What if I told you that there is a better way to make AI smarter and more useful?
That’s exactly what Sam Altman, the CEO of OpenAI, told an audience at an event held at MIT late last week. Altman said that the research strategy that birthed ChatGPT is played out and that future strides in artificial intelligence will require new ideas.
“I think we’re at the end of the era where it’s going to be these, like, giant, giant models,” he said. “We’ll make them better in other ways.”
Altman’s declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with the new breed of chatbot for work and personal tasks. Meanwhile, numerous well-funded startups are pouring enormous resources into building ever larger algorithms in an effort to catch up with OpenAI’s technology.
But Altman says that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place.
In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling.
“There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says.
So what does this mean for you and me? Well, it means that we might soon see a new wave of innovation in artificial intelligence that will make ChatGPT look like a toy. It means that we might soon have AI models that don’t just chat with us but also help us solve problems, create content, learn new skills, and have fun. It means that we might soon have AI models that are not just giant and expensive, but also smart and efficient.
But it also means that we need to be prepared for the challenges and opportunities this new era of artificial intelligence will bring. It means that we need to be aware of the ethical and social implications of having powerful AI models at our fingertips. It means that we need to be responsible and respectful users of these amazing technologies.
And it means that we need to stay tuned for more updates from OpenAI and other leading AI researchers who are working hard to make artificial intelligence better for everyone.
If you enjoyed this article and want to learn more about artificial intelligence, please follow me here on Medium or on my blog at Octobreak. I write about AI topics every week and I would love to hear your feedback and suggestions. Thank you for reading!