RE: LeoThread 2025-11-04 23-07

in LeoFinance · 2 days ago

Part 4/13:

OpenAI's focus on reasoning is evident, with models such as GPT-4 Pro and GPT-4.5 being used in practical contexts that require complex decision-making. However, as reasoning improves, hallucination can increase as well: models sometimes generate plausible but false information. Hallucination rates in GPT-3.5 and GPT-4 reportedly hover around 30% or higher, but the expectation is that GPT-5 will reduce this to below 15%, possibly closer to 10%. That would mark significant progress toward reliable AI output.

Coding and Mathematical Skills