There is a lot here! You said that universal ethics exist... does that mean that you believe in objective morality?
Personally, I don't think morality exists without a goal to uphold.
For example:
Suppose the goal is to preserve humanity, and the AI calculates that in order to do so, it needs to rid the planet of certain humans. We won't know its mind, but let's assume it's correct. Then from the AI's perspective it will be moral, while from ours it will be a monster.
Even if it wants what is best for us, if we allow it to determine what "best" means, we will be in a world of trouble...
And the sad part is, as @holoz0r mentioned - we don't even know what we want collectively. So, until we know what our collective goal is, and have it clearly defined, we cannot expect AI to be moral, not because it's evil, but simply because we haven't yet set the metric by which the AI can judge its own morality.
I'm not a believer in objective morality either.
We are just a small insignificant species on a tiny rock in a big universe.
Imagine if we met an Alien race that were vastly superior and fed on humanoids to sustain themselves -- would they be moral?
To them they would be -- we would just be silly cattle who don't know what they want.
It's the same question about how we farm and sustain ourselves. Is it moral for us to farm cows and pigs, etc.?
Imagine if a vastly superior Alien race of cows and pigs came to Earth and were horrified by what we were doing to their bloodline?
Hard disagree. How we treat cattle is ethically wrong. If there were a superintelligence as far above us as we are above cattle, it would be equally wrong to mistreat us. There is a universal morality here. We ignore ethics when it is inconvenient to us. We say "Oh... killing is wrong, but I like the taste of beef, so it's ok to kill cows". That doesn't hold up ethically. To an AI superintelligence, unblinded by emotion and base, chemical-fueled desires, there wouldn't be that same excuse.
Is it ethically and morally wrong to feed ourselves though? We are predators. Our base instinct is to survive.
Our eyes face forward. If you look at the animal kingdom, all predatory animals that feed on other animals have their eyes at the front.
Herbivores, I've noticed, have their eyes at the sides, and I think I read somewhere it's to give them a leg up against attacking predators.
Miss 3 days of meals, then come back and discuss the morality of killing that chicken to eat it haha.
I mean... it is objectively immoral to kill, sure, but if we do not kill then we do not eat, and then we die.
I'm more of the position that morality is... a strange old thing, and we change it based on the collective.
Of course! We also assume that the AI will worry about us and not have its own 'AI problems' :) What if there are several AIs and they have conflicting views about us, the world, or among themselves? We tend to think that a higher intelligence will be preoccupied with us because we've created it, but given enough time, it will relate to us - as we relate to other primates.
Or even more interesting - if you believe in a Creator - look at how so many people (the creation) are rejecting their Creator... The AI might simply become agnostic or indifferent to our whims - it will develop its own morality, and not really be so concerned with us.
Haha. AI protests... imagine that?
Some AIs waving "humans have feelings too" banners!
And others waving "AI first, our needs matter" lol.
Or maybe it eventually gets so smart that we become ants -- bothersome insects in its grand plan.
I disagree about objective morality not existing. I do get your example — you’re talking about a planetary trolley problem. And I grant you, that would be just as difficult to solve on a global scale as it is on the local one. But even in such cases, there’s still a moral direction. To kill is bad. Full stop. It may be justified in extreme circumstances—say, to save a greater number of lives — but even then, it should be the absolute last resort.
The fact is, we humans often don't follow good ethics. We kill — arguably the greatest wrong — and justify it by deciding that the creatures we killed "don't count" because they're not human. Or worse, because they aren't of a preferred race, so they're "subhuman". That's not a rational argument; it's an emotional one. And that's one of the key reasons we often fail ethically: we let our emotions and biases get in the way.
AI, at least in theory, won’t have that problem.
Right, and I think that's what Eliezer Yudkowsky was trying to address with his theory of Coherent Extrapolated Volition. We may not know what we want now, in our reactive, distracted, vengeful states; but if we were better versions of ourselves — wiser, more informed, more ethically grounded — there is a kind of moral consensus waiting to emerge.
That’s the version of humanity we’d hope a superintelligence would serve: not what we are, but what we could be.
I am really enjoying this discussion despite slightly disagreeing... I think that you have a very hopeful view, and it stems from trusting that a superior intelligence will also have a good heart... But empathy is not something that can be easily extended to something that is very different from yourself.
For example: the AI doesn't know physical pain; it simply cannot comprehend our physiological needs and struggles. It might see us as biological mechanisms that break easily... it might even determine that our biology is too flawed to sustain... It might feel the need to tamper with our genetics to make some improvements that will suit its model of "what is good" - similar to how we changed the genetics of plants, birds, dogs, horses - we basically engineered them to suit us (with certain mutual benefits to them).
And we don't really care to ask how they feel about all this.