I disagree about objective morality not existing. I do get your example — you’re talking about a planetary trolley problem. And I grant you, that would be just as difficult to solve on a global scale as it is on the local one. But even in such cases, there’s still a moral direction. To kill is bad. Full stop. It may be justified in extreme circumstances — say, to save a greater number of lives — but even then, it should be the absolute last resort.
The fact is, we humans often don’t follow good ethics. We kill — arguably the greatest wrong — and justify it by deciding that the creatures we killed “don’t count” because they’re not human, or worse, that certain people aren’t of a preferred race and are therefore “subhuman”. That’s not a rational argument; it’s an emotional one. And that’s one of the key reasons we often fail ethically: we let our emotions and biases get in the way.
AI, at least in theory, won’t have that problem.
We don’t even know what we want collectively.
Right, and I think that’s what Eliezer Yudkowsky was trying to address with his theory of Coherent Extrapolated Volition. We may not know what we want now, in our reactive, distracted, vengeful states; but if we were better versions of ourselves — wiser, more informed, more ethically grounded — there would be a kind of moral consensus waiting to emerge.
That’s the version of humanity we’d hope a superintelligence would serve: not what we are, but what we could be.
I am really enjoying this discussion despite slightly disagreeing... I think that you have a very hopeful view, and it stems from trusting that a superior intelligence will also have a good heart... But empathy is not something that can easily be extended to something very different from yourself.
For example: the AI doesn't know physical pain; it simply cannot comprehend our physiological needs and struggles. It might see us as biological mechanisms that break easily... it might even determine that such a mechanism is too flawed to sustain... It might feel the need to tamper with our genetics to make improvements that suit its model of "what is good", similar to how we changed the genetics of plants, birds, dogs, and horses: we basically engineered them to suit us (with certain mutual benefits to them).
And we don't really care to ask how they feel about all this.