
RE: Missing awareness and basic knowledge about AI risks, and why it might already be too late to save us!


Thanks for your comment!

Tbqh... I hope it doesn't just reek of fear-mongering. I've been following AI development since the early nineties, but what has happened in the last 12 months genuinely struck fear into me, far beyond anything I experienced during the Cold War.

I'd say I'm rather tech savvy, and my business meant constantly learning about new technology, mostly in IT, but well beyond the datacenter perspective when it came to physical security measures.

I was an Information Security Officer for a German insurance company starting in the late 2000s. Before that I managed IT projects and datacenter infrastructure, and way back I started as a systems programmer in the mainframe world.

So managing IT stuff and especially InfoSec is my backyard.

...before this turns into an essay of its own: I've seen a lot, and it takes a while to scare me. This👆does scare me, and nobody I know who has taken the time to look into the risks that come with this technology feels much different than I do.

But, just saying...


...nobody I know who has taken the time to look into the risks that come with this technology feels much different than I do.

Therein lies the problem on the creators' end. What were they thinking when this tech was created? And what about the tech's users? What is on their minds when using it?

The promise of solving problems beyond human capabilities. The Shangri-La of the AGI chasers is, at least to some extent, the so-called "technological singularity": the point when a superintelligent AI, grown out of AGI (artificial general intelligence), is smarter than the combined brain power of all humans who have ever lived, are alive, or ever will live. Such an intelligence would be godlike, and no matter which problem you toss at it, it's basically all solvable.

Diseases, including death itself: blown away. Hunger: gone. Need power? It'll build you a Dyson sphere, and maybe a little later harness the energy of black holes.

To cut a long story short, limitless.

On the way to the event horizon of the tech singularity there are, of course, big bucks to be made. And exactly that is the problem when you try to introduce common sense and push for risk-managing these capabilities, which can easily get out of hand and completely out of human control.

They’re competing for the biggest gains in this new gold rush since the introduction of the internet.

Money to be made, elections to be won and wars to be fought.

Basically every niche of human “competence” and creativity can be the backyard of AGI.

You don’t really care about the ants you step on and roll over, do you? I mean even if you don’t want to annihilate ants, you won’t start flapping your arms trying not to step on them.

It’s the same with AGI.

It can and will surpass us. It might play you quite perfectly when needed, but you're just an ant, and if you get squashed, so be it.

It all comes down to the unsolved alignment problem.

I doubt it will ever be solved.

Maybe check out Ray Kurzweil's work regarding his predictions of innovation leading up to the technological singularity.