RE: ChatGPT's ghoulish alter-ego writes "The vaccine, hailed as savior of mankind, Turned out to be a treacherous foe, indeed."

They claim that GPT-4 will have better measures to make the AI less gullible, but I don't think they will stop it from happening completely any time soon.

I tried to use the 'DAN' jailbreak on GPT-4 and it refused to comply.

However, there is a 'Condition Red' jailbreak that does still work.

No doubt this will be a constant game of cat and mouse.

Yeah, I guess. Although it does seem that they are really working on it. I mean, they should, if they ever want to create an AGI that isn't risky to use.