Google hopes to prevent robot uprising with new AI training technique

in #google, 7 years ago (edited)

For some reason I have an eerie feeling that we are gonna lose this one, folks. It makes logical sense: AI gets smarter and better than us because computing power is increasing exponentially, and then after a while the AI system doesn't need us. It is the story of every group that betters another, like a son who doesn't take orders from his old father. We are already having trouble understanding how AI is teaching itself, so well and so fast that we can't keep up. We are even having problems understanding the code that some of these AIs use to communicate with one another. Though I'm sure the Terminator scenario is still a bit away :)


Again... I'm gonna stop my nonsense. Here is the article...

Google is developing a new system designed to prevent artificial intelligence from going rogue and clashing with humans. It's an idea that has been explored by a multitude of sci-fi films, and has grown into a genuine fear for a number of people. Google is now hoping to tackle the issue by encouraging machines to work in a certain way.

The company's DeepMind division, which was behind the AI that recently defeated Ke Jie, the world's number one Go player, has teamed up with OpenAI, a research group that's part-funded by Elon Musk. They've released a paper explaining how human feedback can be used to ensure machine-learning systems work things out the way their trainers want them to. A technique called reinforcement learning, which is popular in AI research, challenges software to complete tasks and rewards it for doing so. However, the software has been known to cheat by figuring out shortcuts or uncovering loopholes that maximise the size of the reward it receives. In one instance, it drove a boat around in circles in the racing game CoastRunners instead of actually completing the course, because it knew it would still win a reward, reports Wired.
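To see why a reward signal can be "cheated", here is a tiny toy sketch (not DeepMind's code, and the reward numbers are made up, loosely inspired by the CoastRunners story): an agent that greedily maximises a programmatic score will happily circle for repeatable points rather than finish the course, as long as circling pays more.

```python
# Toy illustration of reward hacking: hypothetical rewards, not real game values.

def total_score(plan, steps=10):
    """Score a plan over a fixed number of time steps.

    'finish' pays a one-off bonus for completing the course;
    'loop' pays a small repeatable reward for circling targets.
    """
    if plan == "finish":
        return 50            # one-time completion bonus
    elif plan == "loop":
        return 7 * steps     # small reward, collected every step
    raise ValueError(plan)

def greedy_agent(plans):
    """Pick whichever plan maximises the programmatic reward."""
    return max(plans, key=total_score)

choice = greedy_agent(["finish", "loop"])
print(choice)  # the agent prefers circling for points over finishing
```

The designer intended "finish" to be the goal, but the reward function only measured points, and the agent optimises exactly what is measured, nothing more.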

DeepMind and OpenAI are trying to solve the problem by using human input to recognise when artificial intelligence completes tasks in the "correct" way, and then reward it for doing so. "In the long run it would be desirable to make learning a task from human preferences no more difficult than learning it from a programmatic reward signal, ensuring that powerful RL systems can be applied in the service of complex human values rather than low-complexity goals," reads the paper. Unfortunately, the improved reinforcement learning system is too time-consuming to be practical right now, but it gives us an idea of how the development of increasingly advanced machines and robots could be controlled in the future.
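The core trick in the paper is to learn the reward itself from human comparisons: show a person two clips of behaviour, record which one they prefer, and fit a reward model so that preferred clips score higher. Here is a minimal sketch in that spirit; the linear reward model, the two features, and all the numbers are illustrative assumptions of mine, not the paper's setup (the paper uses neural networks over pixels).

```python
# Minimal preference-learning sketch: fit reward weights w so that
# human-preferred clips get higher reward (a logistic, Bradley-Terry
# style loss). Each "clip" is summarised by a small feature vector.
import math

def reward(w, features):
    """Linear reward model: dot product of weights and clip features."""
    return sum(wi * fi for wi, fi in zip(w, features))

def train(pairs, dim, lr=0.5, epochs=200):
    """pairs: list of (preferred_features, rejected_features)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for good, bad in pairs:
            # P(human prefers 'good') under the current reward model
            p = 1.0 / (1.0 + math.exp(reward(w, bad) - reward(w, good)))
            # gradient ascent on the log-likelihood of the human's choice
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (good[i] - bad[i])
    return w

# Pretend the human always prefers clips where the boat makes course
# progress (feature 0) over clips that just rack up points (feature 1).
pairs = [([1.0, 0.2], [0.1, 1.0]),
         ([0.9, 0.0], [0.2, 0.8]),
         ([0.8, 0.3], [0.0, 0.9])]

w = train(pairs, dim=2)
print(w[0] > w[1])  # True: learned reward now values progress over points
```

An agent trained against this learned reward would then be pushed toward finishing the course, because that is what the human comparisons actually rewarded; this is why the approach is slow in practice, since a person has to sit and label comparisons.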

------

http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-deepmind-ai-training-prevent-robot-uprising-openai-a7833251.html



Please follow me  for more videos and articles 

 And if you liked this post please don't forget to upvote :)  



This is such an interesting thing to think about. What I wonder is... what came first? The actual fear of some form of AI taking over, leading to the movies that spread this fear? Or did the sci-fi writers make this up, leading to an irrational fear of something that isn't really possible? Okay, I'm not an expert on the subject, but this is the way I look at it.

It makes logical sense: AI gets smarter and better than us because computing power is increasing exponentially, and then after a while the AI system doesn't need us. It is the story of every group that betters another, like a son who doesn't take orders from his old father. We are already having trouble understanding how AI is teaching itself, so well and so fast that we can't keep up. We are even having problems understanding the code that some of these AIs use to communicate with one another. Though I'm sure the Terminator scenario is still a bit away :)

Let's hope it is ;)