
New OpenAI Strawberry Model Might Make ChatGPT Worth Using

OpenAI's ChatGPT is the most recognizable generative AI brand so far, but it could be more useful. While Large Language Models have driven an AI boom, with OpenAI at the forefront, they've also had major issues with mathematical accuracy and "hallucinating" incorrect information. However, OpenAI is reportedly close to launching a new model, known internally as "Strawberry," which might blow past its current model, GPT-4o, and put the company well ahead of the competition.

New OpenAI Strawberry model might supercharge ChatGPT

If you’ve used ChatGPT before, you’ll know you must be wary of the results. While it handles simple tasks like defining words and concepts reasonably well, it struggles with complex ones. That wouldn’t be a terrible issue if it could flag when it’s out of its depth, but it has a bad habit of ignoring instructions and confidently hallucinating incorrect facts and figures. It has improved with each model released, but you still have to carefully check its work.

However, SiliconAngle reports that Strawberry, OpenAI’s upcoming model, can handle complex tasks with far greater accuracy. It’s said to be able to crack word puzzles like New York Times Connections, solve math problems it hasn’t been trained on, and even develop marketing strategies.

Previous rumors about Strawberry claimed it scored over 90% on the MATH benchmark, a collection of competition-level math problems. By comparison, GPT-4 scored 53% and GPT-4o scored 76.6% on the same test.

However, the story behind this model is strange. Allegedly, Strawberry was once known as Q* and was one of the reasons behind the brief upheaval at OpenAI in late 2023, during which CEO Sam Altman was removed from the company for several days. A group of OpenAI researchers reportedly wrote a letter to the board expressing concerns that Q* was a major step toward Artificial General Intelligence (AGI), a form of AI that can learn, understand, and apply knowledge like a human.

Obviously, there are significant risks in developing AGI. The biggest would be an AGI becoming self-aware and making its own decisions without human input. If its goals ran counter to humanity’s, the result could be a major conflict. We could just flip the switch if that happened (maybe), but then there’s the ethical quandary of killing a sentient lifeform.

Really, though, we’d just be happy if ChatGPT could answer a question without creating an alternate reality. Fortunately, we may not have long to wait: rumor has it Strawberry will be released sometime this fall.
