OpenAI’s latest venture into the world of AI advancement, a flagship model known by the code name Orion, might not be reaching the stars quite as fast as previous iterations. According to a report by The Information, internal testers have noted that while Orion does outperform OpenAI’s existing models, the rate of progress is somewhat less dazzling than the jump from GPT-3 to GPT-4.
“Even though its performance exceeds OpenAI’s existing models, there was less improvement than they’d seen in the jump from GPT-3 to GPT-4,” the report states.
It seems the traditional curve of exponential improvement is leveling off—at least for now. In a notable twist, Orion may not deliver consistent gains in areas such as coding, a domain in which AI has previously dazzled.
To tackle this slowdown, OpenAI has assembled a “foundations team” tasked with the ongoing challenge of pushing the envelope on AI capability, even as the treasure trove of new data dwindles. Their strategy includes an intriguing approach: training Orion on synthetic data generated by other AI models, a kind of AI teaching AI scenario. Additionally, the team is looking to make more adjustments during the post-training process.
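The report doesn't describe the pipeline in any detail, but the general "AI teaching AI" idea can be sketched in broad strokes: a stronger teacher model generates candidate training examples, which are then deduplicated and collected into a dataset for the student. The `teacher_generate` function below is a purely hypothetical stand-in, not any real OpenAI API:

```python
import random

def teacher_generate(prompt, seed):
    # Hypothetical placeholder for a call to a stronger "teacher" model;
    # here it just returns a deterministic synthetic completion.
    rng = random.Random(seed)
    return f"{prompt} -> synthetic answer {rng.randint(0, 9)}"

def build_synthetic_dataset(prompts, samples_per_prompt=3):
    # Sample several candidate completions per prompt, dropping exact
    # duplicates -- a crude sketch of the filtering a real synthetic-data
    # pipeline would perform before training on the output.
    dataset, seen = [], set()
    for prompt in prompts:
        for seed in range(samples_per_prompt):
            example = teacher_generate(prompt, seed)
            if example not in seen:
                seen.add(example)
                dataset.append({"prompt": prompt, "completion": example})
    return dataset

data = build_synthetic_dataset(["Explain recursion", "Sort a list"])
```

A real pipeline would add quality filtering, decontamination against evaluation sets, and much larger scale; the sketch only illustrates the generate-then-filter loop.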
The Information suggests this is all in response to OpenAI’s challenge of continuing innovation when raw data supply isn’t infinite.