You are viewing a single comment's thread from:

RE: Have you paid your crypto taxes?

in Rant, Complain, Talk · last month (edited)

You should try asking the LLM what it is actually checking for. OpenAI's GPT-4 default, for example, will rate the same text as 70% likely to have been produced with AI tools and, at the same time, 80% likely to have been written by a human alone.

Why? Because it judges, with nearly 100% certainty, that 30% of the text would be implausible as its own output as an LLM.
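The contradiction above is easy to demonstrate: two mutually exclusive verdicts on the same text should sum to roughly 1, and 70% + 80% does not. A minimal sketch of that sanity check (the function name and tolerance are made up for illustration; the two figures are the ones quoted above):

```python
def estimates_are_coherent(p_ai_assisted: float, p_human_alone: float,
                           tolerance: float = 0.05) -> bool:
    """Mutually exclusive, exhaustive outcomes must sum to ~1.

    Hypothetical helper: p_ai_assisted and p_human_alone are the two
    probabilities an LLM reports for the same piece of text.
    """
    return abs((p_ai_assisted + p_human_alone) - 1.0) <= tolerance

# The GPT-4 self-assessment quoted above: 0.70 + 0.80 = 1.50, incoherent.
print(estimates_are_coherent(0.70, 0.80))  # → False
```

The point is not the arithmetic itself but that the model's two ratings cannot both describe the same text.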

hmm yeah.


I don't really confer with AI much. I've got a tool that does some automated variable declarations and whatnot based on incoming array keys. But relying on AI for 100% of coding is still a minefield. What it says is convincing, but it will invent functions that do not exist and all other sorts of shit.
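The kind of helper described above, turning incoming array keys into declared variables, can be sketched in a few lines. This is an illustrative stand-in (in PHP it is roughly what `extract()` does); the names here are invented:

```python
from types import SimpleNamespace

def declare_from_keys(incoming: dict) -> SimpleNamespace:
    """Turn each key of an incoming payload into an attribute,
    so downstream code can write req.user_id instead of
    incoming["user_id"]. Hypothetical example, not the actual tool."""
    return SimpleNamespace(**incoming)

req = declare_from_keys({"user_id": 42, "page": 3})
print(req.user_id)  # → 42
```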

The one I'm working on has 4 LLMs that it checks against.
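Checking one text against several LLMs and combining their verdicts could look something like the sketch below. The detector scores are placeholders for whatever the four models return; the aggregation (mean score plus majority vote) is an assumption, not the actual tool's method:

```python
from statistics import mean

def combine_verdicts(scores):
    """scores: per-model probabilities that the text is AI-generated.

    Returns the mean score and a simple majority-vote flag.
    Illustrative only; a real tool would call four model APIs.
    """
    avg = mean(scores)
    votes = sum(1 for s in scores if s > 0.5)
    flagged = votes > len(scores) / 2
    return avg, flagged

# Example: three of four hypothetical detectors lean "AI-generated".
avg, flagged = combine_verdicts([0.9, 0.2, 0.7, 0.6])
```

The catch, as the probabilities above show, is that each individual score may already be incoherent, so averaging four of them does not make the verdict trustworthy.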

Well, you will need an AGI to check for 'abusive' LLM usage, and I do not see that becoming energy efficient any time before fitting chips are released. Because what you are actually looking for right now is automatically generated ideas or perfect language syntax. Which, btw, if done sufficiently well, are both tremendous feats.

On a different note, AI detection as a countermeasure will always create tyranny. I didn't take you for someone supporting that. You can, for example, look at the posts Geekgirl has made over the last two years; none of them have been written without an LLM. They are just too homogeneous, too clean, the structure too similar. It is impossible that a human would be this reliably and mechanically consistent, not to speak of all the pandering that we now know to be typical AI slang. But she will never get in trouble for it, never, because she's in the club and she's been around for a while; tbh everyone thinks her posts are still valuable. But anyone else? Get f-ed real hard.

Maybe do not go down that road. There is nothing good to be found.