Do you support a 6-month pause on AI development?

in Proof of Brain · last year

Ever since AI was first introduced as a field of research in the 1950s, there have been concerns about how far it could go and how much influence it could have on humanity. Some went as far as predicting an outright war between humans and AI over resources, dominance, and control.

I think most of us watched The Matrix once or twice in our childhood and were struck by how far its science fiction went. But let's be honest, no one took those concerns seriously at the time. We believed we were decades or even centuries away from the point where artificial intelligence could compete with human intelligence; no one thought we were anywhere near the point where we should fear AI at all. It was only last year, when OpenAI introduced ChatGPT, that these fears started being taken seriously.

In case you've been living under a rock, ChatGPT is basically an AI-based chatbot that uses natural language processing to hold humanlike conversations. It can take on human tasks: writing essays, songs, and even code. OpenAI, which released ChatGPT in November last year, is relentlessly developing ever more advanced versions, and with each new release, concerns grow about its influence and its impact on human jobs.

This concern was evident in a recent petition published by the Future of Life Institute (FOLI), calling for a 6-month "halt" on the development of "human-competitive intelligence" models more powerful than GPT-4.

The petition urged people to add their signatures in support, and you might be surprised to learn that tech heavyweights such as Elon Musk, Steve Wozniak (co-founder of Apple), and Emad Mostaque (CEO of Stability AI) were among the early signatories. The list includes many other well-known names and keeps growing every day.

The authors claim that continuing to develop AI unchecked will eventually lead to the "downfall" of civilization, due to the large-scale automation of many jobs. They contend that such advances in AI technology will lead to nothing but widespread unemployment, and they fired a barrage of skeptical questions at the ongoing development of AI. Let me quote:

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?




As we all know, however, there are always two ends of the spectrum, and at the other end, critics have spoken out against the FOLI petition, accusing OpenAI and its allies of attempting to consolidate their dominance by building an extended innovation lead over their competitors.

They contend that the call is just a deceptive move to freeze OpenAI's rivals in place while it keeps its lead over the AI industry. While this may look like just another silly conspiracy theory, Elon Musk (the most prominent signatory of the petition) has reportedly bought about 10,000 graphics processing units (GPUs) to build out Twitter's AI infrastructure, which, quite frankly, directly contradicts his support for halting AI development.

Now, to be honest, I think the idea of pausing AI development might look wise in principle, but it's much easier said than done.

Who guarantees that superpowers like China, Russia, and the United States, which have had conflicting interests for decades, can trust each other now and pause AI development simultaneously for the common good?

If they could trust each other, they would have stopped developing nuclear weapons, which, needless to say, are far more destructive to humanity than AI. But as we all know, they just can't.

Even if these nations publicly pledge to stop AI development, they will just keep doing it under the table. No one will pass up the chance at a 1000x productivity boost, and no one will trust the others to do so either. It's not a utopia, after all.


If you ask me, I can't say for sure how much of the fear surrounding AI is warranted. Maybe some of the concerns are valid, while others are unfounded. What I'm really sure about, however, is that such pushback is common every time we approach a revolutionary technology.
The internet, e-commerce, and our beloved crypto all encountered huge resistance in their early days; in crypto's case, the resistance continues to this day.

History teaches us that technology keeps moving forward no matter what skeptics, naysayers, and critics say, and I have a feeling the same applies to AI. For better or worse, I believe it will continue to develop and advance faster than ever before. Time doesn't turn back, and that's how we got here in the first place...

What do you guys think about the proposed "pause" on AI development?
I'm keen to read your thoughts and opinions...


THANK YOU FOR READING




You can also find me on these platforms:

Twitter
Publish0x
