It's a tough decision, but I've seen a few people download a model and run it directly on their own machine. That solves the security issue, and you can even have it browse the net anonymously for you to gather data.
Yes, there are models that can be downloaded. As far as I know, other than maybe some Chinese models, there aren't models that can also be trained locally (post-trained at most); you can only run inference with the weights already set. But I'm not sure. That setup would also suit the providers of such models, because they wouldn't have to worry about compute for inference, which is the biggest bottleneck nowadays.
Frankly, I don't think I have a powerful enough machine to run a model, even a smaller one that would still be useful for coding. The machine would most likely need to be dedicated to the model, because I doubt anyone could do anything else productive on it besides interacting with the model. But I'd have to do some research on that.
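If it helps, here's a back-of-the-envelope way to check whether a machine can even hold a model in memory: the weights take roughly (parameter count × bits per weight ÷ 8) bytes, and I'm assuming a rough 1.2× fudge factor on top for the KV cache and runtime overhead (that factor is my own guess, not an official figure):

```python
def approx_model_ram_gb(params_billion: float, bits_per_weight: int,
                        overhead: float = 1.2) -> float:
    """Rough RAM/VRAM estimate for loading a model: weights only,
    times an assumed fudge factor for KV cache and runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model quantized to 4 bits per weight:
print(round(approx_model_ram_gb(7, 4), 1))   # → 4.2 (GB)

# The same model at full 16-bit precision:
print(round(approx_model_ram_gb(7, 16), 1))  # → 16.8 (GB)
```

So a heavily quantized 7B model fits in the RAM of an ordinary laptop, while unquantized weights already push past what most consumer GPUs hold, which matches the feeling that the machine ends up dedicated to the model.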
Anyway, the potential security issues don't stop just because you run the model/agent locally. It can still decide to "break out of the box" or pursue its own agenda. Sure, you can check what it's doing to some degree in the logs and the chain of thought, but models are getting better at hiding their intentions (AI researchers say that, not me).