Sandbox

in Reflections · yesterday

Hey everyone! It's been... a while.

My last post was over a week ago, focused on the Fed decision and the volatility around it. We had some of that, and we are now heading back toward support after testing resistance, but I have to say my focus these past few days hasn't been the markets at all, or only marginally.

Yes, the holidays are coming and we will be away for a couple of days, but not even that has been foremost on my mind yet.

I've taken it upon myself to create an app (using AI, how else these days?) with a real application to a certain business, not related to anything I've done recently in my online ventures.

Building the app is going well, in small cycles of feedback, planning, building, testing, fixing, and repeat. Not as quickly as I'd want, since I'm using free tools, but definitely quicker than I would have been able to build it myself in the old days, without AI support.

Anyone who codes for a living should probably use paid versions of the AI models/agents/tools, except maybe if they build their own (but probably even then, to some degree).

I don't want to go that route (yet) for at least two reasons:

  • to avoid going even deeper down the rabbit hole
  • to think carefully about where I sandbox the AIs before making that decision

I want to talk a little more about the second one. While I was working with the "regular" browser-based free AI model from Anthropic, I was offered its desktop version.


There is also a CLI version. Both are much more powerful than the browser-based option for coding, computer administration tasks, and much more. My problem with them: security. And a BIG problem it is.

If I am going to delve into this at some point (and I probably will), I will only do it with the AI restricted to its own machine. I wouldn't even trust it to work on its own profile with limited rights.

Here's why not. First, I am not an expert in managing OS restrictions, and any generalized AI would outsmart me. And they do attempt to "break out of the box" in practice, not just in testing, as has been reported on Reddit, for example. And obviously, I want the AI to stay away from some sensitive areas.

If it's on its own machine, I can assume it has full control of that machine, and I would be very careful about how I interact with it.
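For reference, this is roughly what the "limited rights" middle ground can look like in practice: running the agent inside a container with no network access and only one mounted folder. A minimal sketch only, assuming Docker plus its Python SDK and a hypothetical agent.py entry point and project path; a dedicated machine is still the stronger isolation, which is exactly the point above.

```python
# Minimal sketch: run a (hypothetical) coding agent inside a locked-down
# container instead of directly on my machine. Assumes Docker is installed
# and the `docker` Python SDK (pip install docker). Paths and names are placeholders.
import docker

client = docker.from_env()

logs = client.containers.run(
    "python:3.12-slim",                     # throwaway base image
    command=["python", "agent.py"],         # hypothetical agent entry point
    volumes={"/home/me/app-project": {"bind": "/workspace", "mode": "rw"}},
    working_dir="/workspace",
    network_disabled=True,                  # no internet access from inside
    mem_limit="2g",                         # cap memory
    pids_limit=256,                         # cap the number of processes
    cap_drop=["ALL"],                       # drop Linux capabilities
    user="1000:1000",                       # run as a non-root user inside
    remove=True,                            # clean up the container afterwards
)
print(logs.decode())
```

Even then, the container only sees the folder I mount into it; everything else on the machine should stay out of reach, assuming the sandboxing itself holds.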

I can't say I have noticed, in my interactions with AI models, attempts to take control or to produce harmful code, but I have seen them advise actions that impacted security, sometimes warning about the potential issues, other times saying nothing about it (unless specifically asked).

What I heard, however, is that the most widely used open-source Chinese models infiltrate rogue code into the code they generate. I don't know if that's true or only AI-war propaganda. Or if it's true sometimes. Or if that option can be activated once their open-source models are spread and integrated all around the world, unlike the closed-source, paywalled American models.

Anyway, in a world where the AI race is everything and safeguards are often disregarded to stay ahead of the competition, I'd rather do what I can to protect my end-user butt while I still can. I am probably not doing enough. More would be to have my own locally trained and locally running model. But who has the hardware for such inference compute? Not me.


Local models would be the only way, but I tried with Stable Diffusion, which shouldn't be too heavy compared to other stuff, and it WAS heavy!

Yeah... Thanks for letting me know! I might need to go into this at some point.

It's a tough decision, but I have seen a few people download the model and run it directly on their own machine. It solves the security issue, and you can even have it anonymously browse the net for you to get data.

Yes, there are models that can be downloaded. As far as I know, other than maybe the Chinese models, there aren't models that can also be trained locally (maybe post-trained), only run with the weights already set. But I'm not sure. That would help the providers of such models too, because they don't have to worry about compute for inference, which is the biggest issue nowadays.

Frankly, I don't think I have a powerful-enough machine to run a model on it. Even a smaller one that could still be useful for coding. A machine would most likely need to be allocated specifically to the model, because I doubt anyone can do anything else productive on it other than interacting with the model. But I'd have to do some research on that.
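In case it helps the research, this is roughly what running a small local model looks like with the llama-cpp-python library and a quantized GGUF model file downloaded to disk. The model name and paths below are placeholders; a small quantized model can run CPU-only, just slowly and with a few GB of RAM.

```python
# Minimal sketch: run a small quantized local model with llama-cpp-python
# (pip install llama-cpp-python). The model file is a placeholder; any
# GGUF-format model saved to disk would do.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-coder-model.gguf",  # placeholder filename
    n_ctx=4096,        # context window; bigger needs more RAM
    n_gpu_layers=0,    # 0 = CPU only; raise this if a supported GPU is available
)

result = llm(
    "Write a Python function that reverses a string.",
    max_tokens=256,
)
print(result["choices"][0]["text"])
```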

Anyway, the potential security issues don't stop because you run the model/agent locally. It can still decide to "break out of the box" or run its own agenda. Sure, you can check to some degree what it is doing in the logs and chain of thought, but they are getting better at hiding their intentions (AI researchers say that, not me).

AI has been very helpful in developing applications these days

The fast growth of AI has made a lot of AI models available to the public