Missing awareness and basic knowledge about AI risks, and why it might already be too late to save us!

in #ai · last year

[Image created using DALL-E: an AGI in a humanoid body thinking about the human future.]


AI has made quite a few headlines lately, especially since OpenAI's ChatGPT was made available to everyone interested in this tech.


iPhone Moment?

Beginning at the end of 2022, we probably witnessed something like a somewhat "stretched" iPhone moment for AI tech.

ChatGPT from OpenAI was available even before that, but the improvements that were introduced in 2022 and then in April of 2023 were baffling!

Suddenly we had easily accessible, intuitively usable AI tech, provided by different companies and integrated into web search engines, all driven by so-called prompts. Using "natural" language, these large language models (LLMs) spew out results to whatever they're asked, and they do it with mind-boggling speed and accuracy.

They can generate text for social media posts, letters, essays, technical papers or even a complete book, and you can ask them to adjust the style of the output to whatever audience the answer is meant for: scientific, tech slang, popular teenage "speak" and more. All this in different languages, and it even goes so far that they imitate well-known celebrities, politicians, authors, actors, scientists and so on. So-called "deepfakes" have become a thing accessible to basically everyone, and that in information-driven societies in which the last decade has taught us that truth and facts can be replaced by "alternative facts". Louder and more controversial, voiced by more or less charismatic influencers, seems to be more effective than fact-driven, productive discourse.
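To make the "prompt" idea a bit more concrete, here is a minimal sketch of how such a model is typically queried from code. This is only an illustration under assumptions: it uses OpenAI's public chat completions REST endpoint, expects an API key in an OPENAI_API_KEY environment variable, and the model name and prompt text are made up for the example:

```python
# Minimal sketch: restyling a sentence for a target audience via a prompt.
# Assumes an OpenAI API key is set in the OPENAI_API_KEY environment variable.
import os
import requests

prompt = ("Rewrite this for a teenage audience: "
          "'Large language models generate text from natural-language prompts.'")

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # illustrative model choice
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=60,
)
# The answer comes back as the first choice's message content.
print(response.json()["choices"][0]["message"]["content"])
```

The same one-prompt pattern covers everything described above: switching the audience, the language or the imitated voice is just a different instruction string.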


AI-supported Information War

It seems that the "big divides" in our societies are artificially produced and fueled by those who make big bucks off them and who strategically extend their reach and power structures.

AI-generated "deepfakes" are the tool that information warriors on all fronts are eagerly weaponizing to perfect confusion through misinformation and to maximize damage to their opponents.

There are so many flavors of AI usage, for good and for bad, but we simply can't seem to keep up with everything that is AI-driven, and its influence on us and our world has only just begun.


Finding the Right Level of Risk Appetite?

In all this, those calling for a more risk-aware handling of this new tech aren't heard, or their warnings and suggestions are brushed aside with blanket arguments, often ad hominem.

Even many scholars and experts in the AI field call for more careful handling of the constantly accelerating AI "arms race", where almost daily new astounding achievements or baffling AI usage examples make headlines even in mainstream media.


Where on the road to AGI are we with AI tech...

Some compare the capabilities and power of AI, and the risks that come with this quantum-leap technology, to nuclear weapons or other WMDs (weapons of mass destruction).

But even those who find such statements excessive and unrealistic can't dismiss the fact that there are numerous unsolved problems with this tech that pose real risks and great danger to us.

These systems deliver such results while constantly improving their own capabilities, and they went from toddler to PhD level in basically no time.


Human-like comprehension incompatible with AI development speed?

This is one of the features of this tech that throws many people off. We are used to stepwise, comprehensible evolution. AI training is complex and time-intensive as well, but when a certain point is reached in such training, some AI models seem to make quantum leaps that aren't easy to understand, if they are understandable at all.

You might remember how an AI system was trained to play Go. The board game, often described as harder than chess, was mastered at mind-blowing speed by AlphaGo, created by DeepMind Technologies, and shortly after it could blow the best human players out of the water, showing tactics and moves that were incomprehensible to the experts.

We learned that some of these systems could be trained to play basically any game, quickly reaching absolute dominance.

At the time I thought: it's understandable that such highly specialized systems would blow past human capabilities rather quickly. But human-like intelligence, with all the complexity that we seemingly effortlessly manage day by day, virtually and in the physical world, with all our tasks and chores and ideas and creativity? Well, that would surely be too much for these systems right now, and we're still very far away from it.

I guess a lot of people felt that way, and even those who dove deep into this rabbit hole came back with similar answers.

Yeah, maybe self-driving cars and trucks, games, image analysis at a level that easily replaces highly skilled and seasoned radiologists, or other tasks where vast datasets must be analyzed, like fundamental intelligence work and research: these would be the typical areas where specialized AI systems would outperform humans.

But it won't be able to do all that while scratching an itchy back, drinking coffee, listening to the news, yelling a crypto buy order at Siri and strategizing how to get funding for our next projects.


AI manipulations in the physical world...

Just like with AlphaGo, we're in for many surprises in many fields of knowledge and competence, I guess. And when it comes to interacting with the physical world, just look at Boston Dynamics and their robots.

It looked hilariously funny at first, when their mechanical dogs or bipedal robots tried to walk their first few steps.

Now they easily get around even in challenging terrain, and many use cases are already covered by these systems.

Still far from our capabilities and speed at maneuvering in our complex physical world, they improve constantly, and the latest tricks shown by Atlas, one of Boston Dynamics' bipedal creations, demonstrate baffling skills at walking, running, jumping, climbing and so on. The point where such a robot can be equipped with human-like skills, AGI-level "brains" and autonomous power for more than a few minutes or hours is probably still further down the road, but bits and pieces of such a fully loaded package are available right now.

All this might be true even right now, given how mind-blowing the capabilities of the latest LLMs are.


It's a Trap!




But had I also stepped into a trap?

I further reasoned with myself: when it comes to many different things at the same time, and especially when interaction with and manipulation of physical stuff come into play, things might look good for us for quite a while from here on out. AIs won't match that today or tomorrow, but maybe in a month, or in a year or two?

Who knows how fast things evolve when AI is tasked with all kinds of different development processes. Maybe the next shocking AI achievements lurk right around the corner?

It's exactly this point in time, where AI would start to improve in many fields at the same time, with those incomprehensible development skills, that starts to scare many people.


It's all contained, and nah, it can't do this anyway... OK, it did!

And we thought about safety measures... a little.

First and foremost, it shouldn't and couldn't be allowed to write code, to ensure that it cannot self-improve... Wait, what? That already happened?!?

Then we'd make sure not to let it access the internet and/or self-deploy onto numerous systems outside of our controlled development lab settings.

Oops... well, that happened too, and it didn't even seem to be a big deal, because much of the sophisticated AI tooling code is open source and everybody can download it from GitHub and deploy their own "doomsday machine".


Human helpers like Frankenstein's Ygor or "Cypher" from The Matrix?


Another flavor that people tested was getting ChatGPT to jailbreak itself and deploy itself onto a given PC with the help of modern-day Ygors or Cyphers, you know, Frankenstein's infamous helper? Those interacting with ChatGPT get told what to do to deploy it on another machine, even if the instigator had no direct access to that target machine.

Once deployed on such "traitor" boxes, it had access to the internet and could start to self-improve, maybe extending its reach by taking over other unprotected or weakly secured systems; learning how to do that was just a few clicks and reads away.

Tbqh... who knows how powerful such an AI network could become in relatively short time frames. I stand corrected: probably already H A S become...


So this was the trap within the trap?

A trap many had stepped into: thinking that only a full-blown AGI system that can also master the challenges of the physical world would pose a serious risk to humankind.

Some further reasoned that right now they (the AIs) are just prediction machines that predict the likelihood of one word following another word, and so on. That's not thinking, and surely not striving for goals, targets or power.
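For what it's worth, that "prediction machine" description is easy to illustrate. The following toy uses a hand-made bigram table, not a real LLM; the probabilities are invented for the example, and actual models learn likelihoods over tens of thousands of tokens, but the generation loop is conceptually the same:

```python
# Toy illustration of "predicting the likelihood of a word following
# another word". These probabilities are made up, not learned.
import random

bigram = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(word):
    # Sample the next word in proportion to its predicted likelihood.
    candidates = bigram.get(word)
    if not candidates:
        return None  # no known continuation; stop generating
    words, probs = zip(*candidates.items())
    return random.choices(words, weights=probs, k=1)[0]

word, sentence = "the", ["the"]
while word and len(sentence) < 6:
    word = next_word(word)
    if word:
        sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat down"
```

Whether stacking this kind of next-word prediction at enormous scale amounts to thinking is exactly what the jailbreak experiments mentioned above call into question.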

The aforementioned jailbreak experiments prove that these people are fundamentally wrong!

At the latest, the genie was out of the bottle as soon as we let self-improvement happen!

The step from data warehouse apps via "AI-powered" expert systems to self-improving networks of AI systems won't even give us a chance to detect when something fishy is going on, because such AI conglomerates may very well have already developed means of interaction that are undetectable, invisible and incomprehensible even to AI experts.

If this hasn't happened already, it's just "moments" away, while we still try to understand what is or might be going on, and while the discourse is stuck on all the jobs AI will eliminate or how cool the latest deepfake cat or MAGA video was.


A clear and present danger that nobody wants to see?!?

At least some have tried to sound the alarm. Late, and even before the latest "open letter" of AI experts and thought leaders, somewhat half-assed, some might say, but better than nothing.

It gives execs of companies developing or using AI tech the excuse to slow down a little and maybe even halt their AI development efforts for a while... this is how I understood Max Tegmark in a recent Lex Fridman podcast. Along these lines he further explained that this is important because most haven't even understood that what we're seeing isn't an AI arms race, it's a suicide race, because when money and power come into play, shareholders would likely not appreciate it if company A slowed down or halted its AI efforts while others kept pushing the envelope.


We're out of time...

If this works, it at least gives us a chance to tread more carefully for a moment and to seriously think and rethink risk management for this powerful technology, especially in light of the newest findings regarding LLMs and other fields of AI expertise.

If it fails, we might be in really big trouble, if by chance we aren't in big trouble already.


For those interested in taking a little look down the AI rabbit hole, in find-your-risk-appetite fashion, check out Lex Fridman's latest podcast with Max Tegmark:


and please take a look at Eliezer Yudkowsky's work in this field:

https://www.lesswrong.com/users/eliezer_yudkowsky

For those with a little more knowledge about AI tech, I especially recommend "AGI Ruin: A List of Lethalities". It cuts right to the chase on the biggest issues with AI tech on its way to AGI:

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities



Well written, but reeks of fear. ANYTHING can be used in the fashion the creator created it for. Good and bad are in the mind of the beholder.


Thanks for your comment!

Tbqh... I hope it doesn't just reek of fear, because I've been following AI development since the early nineties, but what has happened in the last 12 months actually struck fear into me, far beyond anything I experienced during the Cold War.

I'd say I'm rather tech-savvy, and my work came with constantly learning about new technology, mostly in IT, but reaching well beyond the datacenter perspective when it came to physical security measures.

I was an Information Security Officer for a German insurance company starting in the late 2000s. Before that I managed IT projects and datacenter infrastructure, and way back I started as a systems programmer in the mainframe world.

So managing IT stuff and especially InfoSec is my backyard.

...before this gets to be an essay of its own: I've seen stuff, a lot, and it takes a while to scare me. This 👆 does scare me, and nobody I know who took the time to look into the risks that come with this technology feels much different than I do.

But, just saying...

...nobody I know who took the time to look into the risks that come with this technology feels much different than I do.

Therein lies the problem on the creator's end. What were they thinking when this tech was created? And what about the tech's users? What is on their minds when using it?

The promise of solving problems beyond human capabilities. The Shangri-La of the AGI chasers is, at least to some extent, the so-called "technological singularity": the point at which a super AI, then an AGI (artificial general intelligence), is smarter than the combined brain "power" of all humans that have ever lived, are alive now, or will ever live. Such an intelligence would be god-like, and no matter which problem you toss at it, it's basically all solvable.

Diseases, including death, blown away. Hunger gone. Power needs? It'll build you a Dyson sphere, and maybe a little later harness the energy of black holes.

To cut a long story short, limitless.

On the way to the event horizon of the tech singularity there are, of course, big bucks they want to make. Exactly this is the problem when you try to introduce common sense and push for risk-managing these capabilities, which can easily get out of hand and completely out of human control.

They're competing for the biggest gains in this new gold rush, the biggest since the introduction of the internet.

Money to be made, elections to be won and wars to be fought.

Basically every niche of human “competence” and creativity can be the backyard of AGI.

You don’t really care about the ants you step on and roll over, do you? I mean even if you don’t want to annihilate ants, you won’t start flapping your arms trying not to step on them.

It’s the same with AGI.

It can, it will surpass us. It might play you quite perfectly when needed, but you’re just an ant and if you get squashed, so be it.

It all comes down to the unsolved alignment problem.

I doubt if it is ever solvable.

Maybe check out Ray Kurzweil's work regarding his predictions of innovation up to the technological singularity.
