The problem I see is that most people can't figure out what exactly the model used to generate the result.
Yes. So at some point, the output has to be traceable and attributable to a source, and that source itself has to be trusted. That requires a web of trust, and a web of trust only comes about in one way.
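[Editor's note: a minimal sketch of the "web of trust" idea being described. The names (`TRUSTED_ROOTS`, `vouches_for`, `is_trusted_source`) are hypothetical and for illustration only; in practice the edges would be cryptographically signed attestations rather than a plain dictionary.]

```python
# Toy web-of-trust check: a source is trusted if it can be reached
# from a set of trusted root identities through a chain of vouches.

from collections import deque

# Root identities we trust unconditionally (assumption for illustration).
TRUSTED_ROOTS = {"standards-body", "primary-publisher"}

# Who vouches for whom: in a real system each edge would be a signed attestation.
vouches_for = {
    "standards-body": ["primary-publisher", "mirror-archive"],
    "primary-publisher": ["journalist-a"],
    "mirror-archive": [],
    "journalist-a": ["blog-x"],
}

def is_trusted_source(source: str) -> bool:
    """Breadth-first search from the trusted roots to see if `source` is reachable."""
    seen = set(TRUSTED_ROOTS)
    queue = deque(TRUSTED_ROOTS)
    while queue:
        current = queue.popleft()
        if current == source:
            return True
        for endorsed in vouches_for.get(current, []):
            if endorsed not in seen:
                seen.add(endorsed)
                queue.append(endorsed)
    return source in seen

print(is_trusted_source("blog-x"))        # True: roots -> publisher -> journalist-a -> blog-x
print(is_trusted_source("random-site"))   # False: no chain of vouches reaches it
```

The point of the sketch is the structure, not the data: attribution only means something if every hop from the model's cited source back to a trusted root can be verified.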