Why I Don't Believe in the Singularity

in #singularity · 8 years ago (edited)

It may come as a shock to regular readers of this blog that I don't believe in Kurzweil's model of how the transition from biological to machine life will occur. After all, I do think intelligent machines will eventually exist one way or another. I even think they're likely to become the dominant form of life in the universe if only because they outlast us.

But that concept is not one and the same with the Singularity. Rather, the Singularity is just one proposed mechanism for how one particular step of that process might occur, in the same way that natural selection is not the same thing as evolution, but rather a proposed mechanism by which evolution works (albeit far and away the best-supported one).

My first problem with the Singularity as Kurzweil proposes it is that it's based on a couple of well-meaning but mistaken assumptions about brains and computing. The first is that if we can create a human-level AI, it will be able to design an AI more intelligent than itself.

This doesn't follow: what we'd have created is, effectively, a single human of average intelligence, and it took thousands of the brightest minds on the planet to create it. If we ever do reach this point without somehow anticipating that it's a dead end, I feel bad for the AI more than anything.

It'd just be a hapless, regular person who didn't ask to be born as a building-sized machine that costs tens of thousands a day to operate, one which will eventually be 'euthanized' because it isn't the wish-granting genie we hoped for. This is somewhat like murdering your son because he doesn't become a wealthy doctor.

It would also need to be raised from infancy and educated. Even then, just one such AI wouldn't be able to do anything a comparably educated human can't. You'd need thousands of them, the artificial equivalent of the research communities responsible for building it in the first place. At that point, why not just do the same thing with genetically improved groups of humans?

The second problem I have with it is the weird notion that once we achieve the same gross computing power as a human brain, computers may exhibit sentient thought as an emergent phenomenon. I blame Hollywood for this belief. We 'think' not because our brains are powerful, but because they are brains.

Neural architecture is fractal and massively parallel in a way no current processor architecture comes close to. Simple animals with much less computing power than a desktop PC still think and behave in a distinctly lifelike way because of the properties of living brains that computers don't possess.

Maybe the only hope for reproducing those qualities is to build an architecturally brain-like computer, or to simulate a brain in software down to every individual neuron.

This would require hardware many times more powerful than the brain you wish to simulate in order for it to think at full speed. When that does occur, you'll have something like the human-level AI mentioned in the first example, and it will get us nowhere.

My third problem with it, by far the most commonly cited by others, is that not all technology advances exponentially. Only processors actually obey Moore's Law, and fundamental physical constraints (the Carnot limit, for example) prevent many technologies from advancing much further than they already have.
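
To make the Carnot point concrete, here's a quick back-of-envelope sketch (mine, not the post's; the temperatures are illustrative ballpark figures for a combustion engine, not measurements) of the hard efficiency ceiling thermodynamics imposes, no matter how clever the engineering:

```python
# Carnot limit: no heat engine operating between a hot and a cold
# reservoir can exceed an efficiency of 1 - T_cold / T_hot.
# These temperatures are illustrative assumptions, not measurements.
t_hot = 1000.0   # combustion temperature, kelvin
t_cold = 300.0   # ambient temperature, kelvin

max_efficiency = 1.0 - t_cold / t_hot
print(f"Thermodynamic ceiling: {max_efficiency:.0%}")  # 70%
```

Real engines sit far below even that ceiling, and no amount of exponential progress elsewhere can move it.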

While it's a great thought experiment and a wonderful premise for science fiction, I think the roots of the idea are anxiety over our own mortality; the desire to live in a unique, extraordinary time in history, different from any other; and the quasi-religious desire for something more powerful and intelligent than ourselves to swoop into our lives and solve all of our problems.

Singularitarians resist that description because they conceive of their beliefs as scientific and not at all religious, but both appeal for the same reasons: they alleviate the fear of death and absolve us of the responsibility of solving the big, long-term problems we've created on this planet.

Most of what we hope for a Singularity-level mega-AI to accomplish could be accomplished just by genetically enhancing human intelligence. I don't think cybernetic transhumanism is likely to become prevalent, because the only people I see wanting to become cyborgs are white males between the ages of 14 and 30 working in tech-related fields.

I say that even though my own body has some electronic parts. But does your mother want to be a cyborg? Does your sister? How many people not working in IT are attracted to the idea of invasive surgery to make themselves more like a concept from science fiction?

As with most futurist predictions about a technology revolutionizing everything and being used by everyone everywhere, the reality tends to be that it gets used in some places, by some people, occupying a logical niche.

Most people will have some type of implant in them by the next century, but it is not likely to be outwardly visible, invasive, or so extensive that it replaces their natural functions. I think the only extensive modifications will be experimental ones, for space travel.

Robotic prosthetics will be for those who lost limbs in accidents, possibly only as placeholders while an organic replacement limb is grown. Most people just want their old arm back.

Things like life extension and genetic modification will happen, and are happening, but you and I will not live forever. That would require being genetically modified before birth, and everyone reading this has already been born.

I don't mean to bring anyone down with this, but there's a sort of evangelical mania surrounding the Singularity these days, and I think people should recalibrate their expectations so they don't wind up bitter that they didn't get their immortality drugs and augments, the same way the last generation didn't get their flying cars or moon colonies.

Machine life will exist one day. The fact that humans exist to write or read this article is proof that there's a particular way you can put atoms together such that the result will be conscious, in the same way that birds were proof that heavier-than-air flying machines were possible.

I just don't expect to see conscious, self-improving machines by 2045, and I question whether consciousness is even necessary for self-improvement. If a machine can make copies of itself from raw materials out in space (asteroid ore, for example) and gets its energy from sunlight, then left alone for a few billion years, at least one 'species' descended from it will probably be conscious.

After all, that's how it happened for us, just with chemistry rather than technology. It wouldn't even require that humans be capable of technologically recreating consciousness; that would be left to evolution. Then, once conscious machine life exists, it could set about deliberately self-improving from there.

well i sure am one who won't be having any implant in me lol

Me neither!

But how will the gubment keep track of your spending? Muh implants!

And if you dig into the research of Penrose and Hameroff, it points to consciousness being a far more difficult process to replicate in computer hardware. Granted, quantum computing is making progress, but we're a long way from being able to group several million qubits in superposition and collapse them several dozen times a second into an objective stream of consciousness!

Insightful arguments. I am not sure what will happen, but I think genetic engineering will end up taking over computing and robotics.

Who's to say we can't build biological computers, anyway? Imagine if a brain in a vat were your personal computer.

That's why I said, "taking over", rather than somehow replacing or eliminating. We already can build biological computers, but only very rudimentary ones. Like in DNA computing. When you get down to the nanoscale, the lines between biological and electromechanical machines become very blurred.

Oh, totally. Atoms are atoms.

Excellent.
I've been, and still am, saying much the same. Recently I read that a neuron is much more like a supercomputer than it is like a NAND gate. If that's the case, then the brain is not so much a computer as a network OF computers. In other words, a not-so-complex biological brain (say... a mouse?) is more complex than the entire internet.

My contention is that "consciousness is an emergent property of a sufficiently complex network"

One question is, how do you design an emergent property? How do you even PREDICT it?

NONE of which contradicts the fact that technology is advancing at a very high rate. Not only that, but its rate of acceleration is accelerating.

My contention is that "consciousness is an emergent property of a sufficiently complex network"

To this I would add "a sufficiently brainlike network". I think it's the architecture of the brain which causes it to automatically absorb information and self-program. But that's just my suspicion of course.

NONE of which contradicts the fact that technology is advancing at a very high rate. Not only that, but its rate of acceleration is accelerating.

All technology? Has the fuel efficiency of automobiles obeyed Moore's law, for example?

"all technolgy" in the sense that technology as a whole is advancing. Specific instances, like chipping flint, automobiles, and buggy whips, are 'mature' and no longer advance.

side note: some things, guns for example, are 'mature' technology. There's not a whole lot of room for improvement without a radical redesign.

It would also need to be raised from infancy and educated. Even then, just one such AI wouldn't be able to do anything a comparably educated human can't. You'd need thousands of them, the artificial equivalent of the research communities responsible for building it in the first place. At that point, why not just do the same thing with genetically improved groups of humans?

Because it would be far easier to horizontally scale the AI (once it is built at production scale) than it is to scale humans. You don't need to retrain the AI with years of education again; just copy and paste it onto new hardware (hardware, by the way, that will be quickly manufactured autonomously thanks to machine learning and AI). Also, the AI doesn't need to die. Think about it: we invest 20+ years of education and training in a human being for roughly only 40 years of productive work. That is a lot of inefficiency that is avoided by using the immortal AI.

There are also other efficiency benefits compared to humans. Humans require a lot of resources just to survive compared to a machine, and of course survival is just the bare minimum. If you want a human to be productive, they need to be happy, and happiness requires a lot more resources and, most critically, time devoted to non-productive activities (leisure time, family time, proper amounts of sleep, etc.).

Finally, a lot of the inefficiency from groups of humans working together comes from coordination issues. There is a lot of overhead in simply communicating information from the mind of one human to another, which is a necessary inefficiency because a single human cannot accomplish these ambitious tasks alone. But what if the communication between workers was as natural and high-bandwidth / low-latency as the communication that occurs between your brain's left hemisphere and right hemisphere? I think a collection of horizontally-scaled AGIs (Artificial General Intelligences) could maintain high-fidelity communication with each other and efficiently act as one, such that they would vastly outperform a similarly sized group of human workers even if each AGI had the same level of intelligence and thinking speed as a human.

Of course, the counterpoint to the efficiency argument is that, early on, the first AGIs will likely be incredibly computationally demanding relative to humans. Humans pull off their amazing intelligence and cognitive skills with just 3 pounds of matter consuming only 20 watts of power. That is incredible when compared to the size and power consumption of the modern supercomputers required to do tasks (much less effectively, by the way) that humans find trivial. But with continued technological development this is likely to change, and it hopefully will become possible for machines to outperform human brains by all relevant metrics. Also, I'm not at all convinced biological engineering can improve human brains enough to compete with the gains machines can achieve, given how much architectural flexibility is available when designing a machine system from scratch.

The second problem I have with it is the weird notion that once we achieve the same gross computing power as a human brain, computers may exhibit sentient thought as an emergent phenomenon.

I agree. That's not to say I think they couldn't be designed to do so. But I believe an AGI could be designed to avoid sentience (probably the much easier task anyway) if the human designers chose to do so. And especially early on, it would make sense to avoid building a sentient AGI.

Maybe the only hope for reproducing those qualities is to build an architecturally brain-like computer, or to simulate a brain in software down to every individual neuron.
This would require hardware many times more powerful than the brain you wish to simulate in order for it to think at full speed.

My guess is that the first successful AGIs will use the emulation-like approach at first (though likely with very crude mathematical models of neurons and synapses), simply because it is less expensive to iterate and try variations of the architecture. This would likely only be useful for further AI research, though, since the cost of the computational resources and electricity needed to power such a thing would likely outweigh the benefits it provides (a human would easily outperform it). But it could prove the concept of the architecture. The next steps would likely be simplifying the architecture further (for performance, manufacturability, and ultimately cost reasons) while still preserving the desired emergent behavior of intelligence. Then, when the conceptual architecture is more or less settled, the next step would be realizing this architecture efficiently in hardware. The von Neumann architecture would not be of any use here. The hardware architecture would need to more closely resemble that of the brain: highly parallel, massively interconnected, and likely with memory kept close and local to the large number of simple processing units.
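
As an aside, here's a minimal sketch of the kind of "very crude mathematical model of a neuron" mentioned above: a leaky integrate-and-fire unit. This is my own illustration rather than anything from the discussion, and every constant in it (membrane resistance, time constant, thresholds, input current) is a placeholder, not a tuned biological value:

```python
# Leaky integrate-and-fire neuron: about the crudest spiking-neuron
# model an emulation could start from. All constants are illustrative.

def simulate_lif(input_current, dt=1e-3, tau=20e-3, r_m=1e8,
                 v_rest=-65e-3, v_reset=-65e-3, v_thresh=-50e-3):
    """Integrate dV/dt = (v_rest - V + r_m * I) / tau; return spike times."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane leaks toward rest while input current drives it upward.
        v += (v_rest - v + r_m * i_in) * (dt / tau)
        if v >= v_thresh:              # threshold crossed: emit a spike...
            spike_times.append(step * dt)
            v = v_reset                # ...and reset the membrane potential
    return spike_times

# One simulated second of constant 0.2 nA input at 1 ms resolution.
spikes = simulate_lif([0.2e-9] * 1000)
print(f"{len(spikes)} spikes, first at {spikes[0]:.3f} s")
```

A point model like this discards nearly all of the biology, which is exactly the trade-off described above: cheap to iterate on, but a long way from a faithful emulation.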

My third problem with it, by far the most commonly cited by others, is that not all technology advances exponentially.

Again, I agree (somewhat). It is too simplistic to just use Moore's law as the basis for extrapolating forward to predict when AGI will be created. So much more innovation will be required to design hardware architecture capable of practical AGI than simply doubling the number of transistors per unit area every two years or so (a trend that is reaching its limits anyway). That's not to say I think that is the only metric futurists look at when making their predictions, but I think they may be a bit too optimistic (likely because of a bias toward wanting to see these technologies before they die) about the rates of advancement of the other technologies that will likely be necessary to realize this future. For example, I imagine there may need to be huge advances in materials science and engineering just to be able to manufacture the highly interconnected, dense architectures resembling the human neocortex that will likely be necessary to achieve practical AGI.

I don't think cybernetic transhumanism is likely to become prevalent

Personally, I think there will be much bigger advancements in purely synthetic AGIs than in augmenting human cognition with machines via direct neural interfaces. I do think some of the latter will happen, but I don't think it will be anything like uploading one's mind to the internet, or freeing one's mind from the fragility of the body, or other radical stuff like that.

Some thoughtful points made here. Perhaps I should have said I don't believe in the Singularity as Kurzweil and his followers usually describe it. As stipulated at the beginning, I think conscious machines are inevitable; I just don't think Kurzweil has the right answer for how they will come about.

Kevin Kelly's latest book The Inevitable had a similar outlook. I think what humans (and even animals) possess is very special because of how universal our cognition seems to be; AIs will probably specialize and compete. I think our very physical nature will set us apart from them. They will be growing and competing with each other in the electronic medium just like other species do in the wild, but it will be a long time before an AI with a body could feed, heal, and replicate itself. And why would it? An AI competing against other AIs would naturally want to stay digital instead of downgrading itself to the physical world.

I think smarter AIs will be competing for our attention just like we do.

Very nice text and vision of the future. I agree with all you said! Thanks for sharing!

I need to watch the movie Wall-e again.

Very interesting.
But: "The fact that humans exist to write or read this article is proof that there's a particular way you can put atoms together such that the result will be conscious. "
Your statement takes physicalism for granted, and is therefore a fallacy.
Not because physicalism is necessarily false, although that is my firm conviction.
But because you assume physicalism in a sentence that you then take to support physicalism.
Circular reasoning.

Your statement takes physicalism for granted

It is self-evident, unless you deny that I and other humans exist, or that we're made of matter.

Not because physicalism is necessarily false, although that is my firm conviction.

Have you solved the problem of interaction yet?

I did not mean to attack you personally.
Physicalism is not self-evident, and there are many versions of physicalism, each with their own problems.
Here is one reference to read up on the subject:
http://consc.net/papers/facing.html
And there is no interaction problem if you are not a dualist.
I am not a dualist.
And I do not deny the existence of other people; I am not a solipsist. I don't deny the existence of matter either.
I do deny that "we're made of matter", because my consciousness is not made of matter, and my consciousness is what makes me a person. To have consciousness is to experience.
There is nothing in the description of physical reality that accounts for the existence of consciousness.
Claiming otherwise is not based on facts, but on dogmatic belief.