Altman's view, at least based on what he claims, is that models must be kept closed to protect them from being duplicated by nefarious players.
Does this really even make sense? Say some bad guys get ahold of the model weights and the data used to train it. What then? They fine-tune the model to be a bit better, faster, whatever?
Or are they worried that somehow, someone else will take their model and develop AGI?
Meanwhile, there are bad guys USING OpenAI's already-trained model to do bad stuff. But that's OK because business.
This is obviously just to stifle competition, nothing more.
Yeah.
And even taking him at face value, we have to presume that he and OpenAI are trustworthy.
History shows Big Tech is anything but.