GPT-3... rather a sidenote to most, but this seems like a major breakthrough in AI research to me!

in #ai · 3 years ago


Could it be that an AGI could solely be based on predictive processing rather than on other mechanisms?

Okokok... I jumped the gun here. Let's start a little before the latest, very careful messages from the people around OpenAI regarding GPT-3.

What is GPT-3?

It is: Generative Pre-trained Transformer 3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020 and is in beta testing as of July 2020, is part of a trend in natural language processing systems toward pre-trained language representations. Prior to the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters, less than 10 percent of GPT-3's. The quality of the text generated by GPT-3 is so high that it is difficult to distinguish from that written by a human, which has both benefits and risks.

My understanding is that they basically fed it large parts of the "Internet", gave it a relatively extended "learning" period and plenty of computing resources, so that you can now throw all kinds of questions at it that its neural network will try to answer.

It does this by continuously predicting "the next word" following the words or sentences that were spoken/written/"thought" before. Sounds simple, kind of.
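To make the "predict the next word, append it, repeat" idea concrete, here is a minimal toy sketch. It stands in for the real thing with a simple bigram word counter; an actual GPT model replaces these counts with a Transformer network over billions of parameters, but the autoregressive generation loop is the same in spirit. All names and the tiny corpus here are made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_words=10):
    """Autoregressive loop: repeatedly predict the most likely
    next word given the last one, and append it to the output."""
    out = [start]
    for _ in range(max_words - 1):
        followers = counts.get(out[-1])
        if not followers:  # no known continuation: stop
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat slept")
print(generate(model, "the", max_words=4))  # → "the cat sat on"
```

The real model differs mainly in scale and in conditioning on the whole preceding context rather than just one word, but this is the core trick: generation is nothing more than prediction applied to its own output, over and over.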

The results are fascinating though!

You can task GPT-3 with writing an essay or a poem if you ask it to and give it some additional info on the direction or style it should take.

It does this right now at the level of a high school or even college student, which is clearly beyond my capabilities, for instance. ;-)

There were even moments in human/GPT-3 dialogues that show signs of irony and other eerie stuff!

For further reading on this, I recommend this article here, which may make you an AI-risk believer, or not!

https://machinelearningknowledge.ai/openai-gpt-3-demos-to-convince-you-that-ai-threat-is-real-or-is-it/

What would it look like when AGI came into existence?

To me it always seemed natural that once we reach a sufficient point in technological advancement, we could take ANI (artificial narrow intelligence) components, do a little composing, and give this conglomerate of AI systems time to learn plus loads of data (aka the internet), until it reached a point where it would start changing its own setup while finding more efficient ways to learn.

This would then be the point where humans would have a hard time even following what it does. In the blink of an eye it would surpass "natural" evolution through machines coding machines. Vast numbers of trial-and-error iterations, still possibly mimicking evolutionary processes, would lead to constantly smarter and more capable AIs at staggering speed.

Given sufficient computing resources, it might have started as a bunch of not very capable ANIs in the morning and ended up as a god in the afternoon.

Once this AGI has learned what hinders its further development, it would start producing the tech needed to speed this process up 10, 100, 1,000, a billion times. By extending its capabilities further and further through massive parallelism, it would very soon clash with resource issues, which would just be another set of problems it needs to solve... and will solve.

En passant, it would very likely solve all energy, engineering and material resource issues.

At which point along the way, probably days or at most weeks after it became an ASI (artificial superintelligence), these ASIs would have forgotten who made their first baby steps possible, having eliminated that threat, the humans competing for resources, is irrelevant anyway.

As irrelevant as it was for us when we stepped on ants while walking the Earth as its apex predator.

I guess this is why this point in time, where an ASI comes into existence, is very fittingly named the technological singularity.

Isn't a singularity that thing that makes it impossible to look beyond, also because no information can cross the event horizon?

If there's nothing you could give an AI to take along on its way to becoming an ASI, it makes no sense to write "don't step on us!" signs anyway. ;-)

Maybe Elon's approach of assimilating with them is very smart and the only way for our species to survive this? But at the end of the day, wouldn't we only be parasitic to them? What do we tend to do with parasites again? 🤣

Maybe you'd like to check this out as well (listening to it made me write this little post!):

Is it much too early to speculate on a possible outcome for humankind with AI? Or is it maybe already too late? So, what do you think? Do we even have a very slight chance of coexisting with ASI gods?

Cheers!
Lucky

