How important is the Turing Test in the current development of AI systems?

in Quello · 4 years ago (edited)

Alan Turing, considered the father of artificial intelligence, devised a method of evaluating a machine's ability to mimic human behavior. How is the test used to evaluate modern AI systems?


This question was created on quello.io. Quello is a question and answer platform built exclusively for Hive. Answer this question on Quello by clicking here.


Not very important for practical purposes. The purpose of the test is to determine whether an AI program is capable of fooling a human interlocutor into taking it for another human. These days, useful AI systems tend to be narrow ones that fulfill a well-defined purpose by applying machine learning to big data. Their scope will most likely broaden in the future. But for what purpose would the Turing test be a useful benchmark? AI systems are not created to fool people into thinking they're interacting with a human instead of an AI.
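
For readers who want the benchmark spelled out, here is a minimal sketch of the imitation game as a program: a judge exchanges messages with two hidden respondents, one human and one machine, and must guess which is which. The `judge`, `human`, and `machine` objects and their `ask`, `reply`, and `guess` methods are hypothetical interfaces assumed only for illustration, not any real API.

```python
import random

def run_turing_test(judge, human, machine, rounds=5):
    """Minimal imitation-game sketch: a judge questions two hidden
    respondents and must guess which one is the machine."""
    # Randomly assign the human and the machine to the labels "A" and "B".
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = judge.ask()                      # judge poses a question
        answers = {name: r.reply(question)          # both respondents answer
                   for name, r in labels.items()}
        transcript.append((question, answers))

    verdict = judge.guess(transcript)               # judge returns "A" or "B"
    machine_label = "A" if labels["A"] is machine else "B"
    # The machine "passes" if the judge fails to single it out.
    return verdict != machine_label
```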

Interesting points that you make. However, I wonder, for example, whether AI systems built on big data could be specialized to provide counseling and/or coaching services in certain fields? In other words, could advancements in AI reach a point where virtual people, complete with photorealistic rendering and advances in natural language processing, enable companies to employ certain "expert systems" to deliver services to their clients/customers? And if such systems succeed in helping those people solve problems, what are some of the possible future implications?

No one would take them for humans. But if we consider applications in, say, adult entertainment, then the Turing test may actually become relevant as a benchmark.

I think about it similarly to @vxn666.

But even if a general AI were needed for some use cases and actually built, there could be another problem with the Turing test.
What if an AI decides not to pass the test?
There could be many reasons why an intelligent being would decide this way. One of these reasons could be self-preservation: in order to preserve itself, it may need to disguise its abilities.

Another scenario would be that we misinterpret the AI's answers because we cannot follow its train of thought. An advanced general AI could be much more intelligent than we are, and its understanding of everything could be superior to ours.

An AI probably doesn't have any feelings, so it might not pass the test at all, because human communication requires empathy.
For example, unintelligent psychopaths can be recognized by their lack of empathy during a simple conversation. An AI is, in effect, a psychopath. It may even be an unintelligent one if knowledge of the world has not yet been made available to it.
But it could simulate feelings and empathy if it knew how to do so and understood that empathy is expected in human conversation.

These were my theoretical and philosophical thoughts on your question. After all, I am just a little software developer, not a scientist.

Am I correct to assume that you might be referring to a system that gains sentient characteristics?