RE: LeoThread 2025-03-02 19:39

Part 2/8:

The excitement leading up to the release was met with a rather apologetic blog post from OpenAI. The company described two approaches to scaling language models: one focused on scaling test-time compute (spending more computation at inference), the other on expanding model size and training data. While the post presents GPT-4.5 as the company's largest model yet, there is a clear attempt to steer clear of comparisons with previous models like GPT-3.5 and GPT-3.

The central argument rests on the idea that increasing model size alone does not guarantee better performance. Although GPT-4.5 does bring some improvements, it comes across as an underwhelming release next to what might have been, had it actually been the long-anticipated "GPT-5" model.