Artificial Intelligence: A blurred panorama?

in #ai · 8 years ago

First, a brief note before the article itself: today, December 31st, I am publishing what I hope will be the first of many articles to keep nourishing this, so far, surprising community. My name is Michael, and little by little we will get to know each other better. I am very excited to go through this experience, and I know that in time we will become a great community. Without further preamble:

I know that many of us, myself included, have wondered whether all those science fiction movies about machines taking over the world could one day come true. There is a great worldwide debate on this subject, since some of the brightest minds in the world agree that the current development of artificial intelligence (AI) may not be proceeding adequately, or may not be taking the necessary precautions.

It is true that wonderful things have been achieved with AI so far: we currently have AI systems capable of creating other AI systems, AIs that have created their own language, and others that are simply curious by nature. And although the Singularity and movie-level robotics are still "fiction" today, the speed of AI-related advances makes these theories more credible every day.

I personally admire an entrepreneur and inventor named Elon Musk; in fact, one of his lectures on AI was what motivated me to start on Steemit with this topic.

Elon Musk is always committed to innovation and advancement in technology. However, artificial intelligence is something that makes him nervous, and rightly so. According to the CEO of Tesla, SpaceX, SolarCity and The Boring Company, AI poses a very great risk to human civilization and must be regulated as soon as possible.
-Eduardo Marín.

Musk insists that to prevent the worst of AI, we must create a brain-computer interface (BCI) that connects us neurologically to these systems. The concept is simple: if destroying us means destroying themselves, they won't do it. He has already said he is working on such an interface, but despite the fact that one of his laboratories, OpenAI, has already created an AI capable of teaching itself, Musk has said that the efforts to create a safe AI have only a 5 to 10 percent chance of success.

So, what is your position on artificial intelligence? Do you think the scenarios of science fiction movies will come true, or that we will know how to control it and use it to our benefit?

Don't forget to leave your opinion, and if you liked the content, follow me; I will be uploading articles of this type and more every day.

By the way, Happy New Year, folks. Keep up the good work.


Image Source: Pexels and Unsplash.



I think the media has exaggerated the idea that AI is here to take over the world, leading to job loss or something out of the movies. It would be naive to think that some jobs won't be automated because of reduced costs, but I believe automation will simply give us the opportunity to do more valuable work. I personally believe that humans will be complemented by artificial intelligence, and that it will allow us to become more efficient.

In my opinion, because of movies like Terminator, people have a very biased notion of AI, although this could also be a cultural thing. I forget where, but I recently read that in Japanese culture (e.g. anime), robots are usually depicted as friendly and helpful to humans, while in Western culture they threaten mankind.

I think people see these great advancements in AI, like an AI beating the best chess or Go players, and conclude that because AI is good in these fields, it can do anything. But all those impressive AI systems are so far only good in the field they were designed for, and transferring this kind of "intelligence" to something more general is not a trivial task; actually, I think it's pretty much impossible. And an AI that is good at chess or Go isn't threatening anyone, except the "ego" of the human race.

I agree. I think one thing people don't hear about is the sheer amount of training data used to create these specialist systems, data that can't simply be transferred to create a general intelligence. For example:

AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. (Source: Wikipedia)
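To make the "mimic expert play" step above concrete, here is a deliberately tiny toy sketch, not AlphaGo's actual method (which used a deep policy network over those ~30 million position/move pairs): a lookup policy that simply predicts, for each position it has seen, the move that experts played there most often. The class and position names are invented for illustration.

```python
from collections import Counter, defaultdict

class ExpertMimicPolicy:
    """Toy stand-in for supervised imitation of expert games:
    count which move experts chose in each position, then
    predict the most common one for that position."""

    def __init__(self):
        # position -> Counter of expert moves seen there
        self.moves_seen = defaultdict(Counter)

    def train(self, records):
        # records: iterable of (position, expert_move) pairs
        for position, move in records:
            self.moves_seen[position][move] += 1

    def predict(self, position):
        # Return the most frequent expert move, or None if unseen.
        counts = self.moves_seen.get(position)
        return counts.most_common(1)[0][0] if counts else None

policy = ExpertMimicPolicy()
policy.train([("empty", "center"), ("empty", "center"), ("empty", "corner")])
print(policy.predict("empty"))   # prints: center
print(policy.predict("unseen"))  # prints: None
```

The point of the toy is the comment's argument: the "knowledge" lives entirely in data from one narrow domain, so there is nothing here that transfers to any other task.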

True, in the age of big data and the rise of IoT this issue could perhaps be tackled, but collecting data at these dimensions for a general AI remains a huge challenge.

I agree that people may be overreacting in some respects, but the real threat is not the AI that plays chess and learns everything about chess, and only that. The supposed threat is those AIs that will be made to learn about everything; that near-infinite knowledge could be dangerous if we are not cautious about whom we give it to.