How Fast Is Artificial Intelligence Progressing? Stanford's AI Index Attempts To Answer That Question - SingularityHub

 We are essentially “flying blind” in our conversations and decision-making related to Artificial Intelligence.  -Stanford

This article is based on a report from Stanford University, with Steering Committee members from MIT and elsewhere. The AI Index is an open, not-for-profit project to track activity and progress in AI. It aims to facilitate an informed conversation about AI that is grounded in data. The Index tracks everything from academic research papers to AI-related startups, AI funding, and job openings for people with AI experience. Most importantly, it tracks technical performance in areas such as vision, natural language understanding, theorem proving, and SAT solving.

You can read the original article this post was based on at Singularity Hub. I've become a big deep learning/artificial intelligence nerd recently as I've begun taking courses on coding and programming. There's a tremendous amount of news coming out of this field, and for those who are interested, there's enormous opportunity, because people who can work in it are still scarce.

Here are other interesting articles from Singularity Hub on this topic...

 This Neural Network Built by Japanese Researchers Can ‘Read Minds’ 

 Will Artificial Intelligence Become Conscious?

 Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI

Here is the full article...

How Fast Is AI Progressing? Stanford’s New Report Card for Artificial Intelligence

By Thomas Hornigold

January 18, 2018

 When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems remain unsolved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index adopts a unique approach, and tries to aggregate data across many regimes. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, an explosive growth in startups focused around AI, and corresponding venture capital investment. The issue with this metric is that it measures AI hype as much as AI progress. The two might be correlated, but then again, they may not.
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other site in the world. This makes it possible to track the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about an apocalypse and an employment crisis, articles considered “positive” outweigh the “negative” by three to one.
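As a rough illustration of that kind of metric, here's a minimal sketch of how one might pull interest numbers for those packages from GitHub's public REST API. The repository names are real, but the script is my own toy example, not the AI Index's actual methodology.

```python
# Toy sketch: gauge interest in ML packages via GitHub star counts.
# Uses GitHub's public REST API; unauthenticated requests are rate-limited.
import requests

for repo in ["tensorflow/tensorflow", "keras-team/keras"]:
    data = requests.get(f"https://api.github.com/repos/{repo}",
                        timeout=10).json()
    print(f"{repo}: {data['stargazers_count']:,} stars, "
          f"{data['forks_count']:,} forks")
```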
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute the fact that we’re in an age of considerable AI hype, but the progress of AI is littered with booms and busts in hype, growth spurts that alternate with AI winters. So the AI Index attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, although systems that combine natural language processing and image recognition still can’t answer open-ended questions about images very well.) Speech recognition on phone calls is almost at parity with humans.
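To make the image-annotation benchmark concrete, here's a minimal sketch of the kind of task ILSVRC measures, using a pretrained ImageNet classifier from Keras. The file name "cat.jpg" is a placeholder; this illustrates the task itself, not the Index's evaluation code.

```python
# Toy sketch: ImageNet-style image annotation with a pretrained model.
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")  # downloads pretrained weights

img = image.load_img("cat.jpg", target_size=(224, 224))  # placeholder image
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# ILSVRC-style scoring asks whether the true label appears among the
# model's top five guesses.
for _, label, prob in decode_predictions(model.predict(x), top=5)[0]:
    print(f"{label}: {prob:.3f}")
```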
In other narrow fields, AIs are still catching up to humans. Machine translation might be good enough that you can usually get the gist of what’s being said, but it still scores poorly on the BLEU metric for translation accuracy.
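For the curious, here's a minimal sketch of what a BLEU score looks like in practice, using NLTK's implementation; the sentences are invented for illustration. BLEU measures n-gram overlap between a candidate translation and one or more human references, with 1.0 being a perfect match.

```python
# Toy sketch: scoring a candidate translation with BLEU via NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Tokenized human reference translation(s).
references = [["the", "cat", "sat", "on", "the", "mat"]]
# A hypothetical machine translation to score.
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids zero scores when higher-order n-grams never match.
score = sentence_bleu(references, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.2f}")
```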
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate the best method of assessing translation or natural language understanding. The Loebner Prize, a simplified question-and-answer Turing Test, recently adopted Winograd Schema-type questions, which rely on contextual understanding (for example, “The trophy doesn’t fit in the suitcase because it’s too big”: what does “it” refer to?). AI has more difficulty with these.
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even at a more complex game like Go. Even the braver predictors who came up with timelines found AlphaGo’s success faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide, as more and more tasks are turned into algorithms and accomplished by software, rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. One possibility might be to measure an AI system’s ability to learn and adapt to the work routines of humans in office-based tasks.
The AI Index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of Computer Science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about overhyped “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.

In a Nutshell

  • The A.I. Index doesn't offer a timeline for general artificial intelligence.
  • In some areas, A.I.s are still catching up to humans. Language-to-language translation is still subpar.
  • AI can now predict skin cancer with more accuracy than dermatologists.
  • In some visual areas, such as object detection, the best A.I. systems have surpassed humans at identifying objects, but their ability to answer open-ended questions about images still lags far behind.
  • Speech recognition is nearly on par with that of humans. We can tell, obviously, from all the annoying voice response systems major companies use to keep you from talking to an actual representative, lol.

The Response From Critics of Artificial Intelligence

Elon Musk has been on a tear against artificial intelligence, saying everything from "A.I. poses an existential threat to human civilization" to "A.I. technology could lead to WW3."

He hasn't been alone, either: Stephen Hawking has joined Musk in fearing that the advent of A.I. could mean the end of human civilization.

Many big-name scientists have issued dire warnings about this technology, and let's not forget that popular culture has already warned us about this too...for what it's worth. The Terminator and The Matrix, anyone?

The Problem

The problem right now is the lack of regulation and the question of whether we'd really be able to control machines that could eventually think for themselves. I mean, after all, if we're talking about conscious intelligence on par with human intelligence, then the issue of control is a major one.

Elon Musk has raised several issues himself, such as whether we'd want the Facebooks and Googles of the world controlling that much power. Privacy has become a hot-button topic in recent years, and the advent of A.I. could take that discussion to a whole new level.

There is also the issue of job security. The predictions vary wildly, and they are only predictions, but some run as high as 50% of jobs lost to automation. This is not outside the realm of possibility; we've already seen the impact relatively simple technology has had on today's workforce, before even accounting for artificial intelligence. As mentioned above, A.I. has already all but replaced certain contact center jobs.

Lastly, there is the issue of ethics. Should we do this? Personally, I don't think it can be stopped. Technology always advances despite potential risks; history has shown this. But do we start talking about "robot rights" in the future if A.I.s become conscious? How do we protect our national infrastructures from cyber attacks? The days of worrying about just a group of hackers may be over once we have to worry about legions of self-replicating conscious bots attacking entire financial, government, and potentially military systems at once.

The Solution

I kind of alluded to it above, but I think the answer is regulation. Not so much that it hampers growth, but enough to establish a set of rules about what can and can't be done.

This technology is coming, and A.I. is not the only thing people should keep their eyes on; developments in nanotechnology and genetic engineering deserve attention too. Technology is moving at a pace that has the potential to dramatically alter the human race. It's a great time to fantasize about the future, but we should also be realistic about how these changes will shift the balance of power in our world.

I'm not a fearmonger or a pessimist, so I don't think A.I.s are going to go all Skynet on us and create an army of killer robots like The Terminator. However, what once seemed like a far-fetched science fiction story is becoming disturbingly plausible.

On a positive note, there is tremendous potential here for people looking to jump on a ship that hasn't yet sailed. What I mean is that demand for programmers and developers knowledgeable about neural networks, deep learning, and A.I. will only increase, and likely exponentially.

Let's talk about this, Steemians. Tell me what you think.

Have a nice day!

IMAGE USED:

https://pixabay.com/en/artificial-intelligence-ai-robot-2228610/
