
Good point. But who would decide whether the data really is complete? And which parts of the data are more important than others? Wouldn't it lead to different calculated scenarios, so that mankind would still have to decide which way to go? Is it more moral to do what's best for the present, for the future in 100 years, or for the future in 1,000 years? Or some middle way between all of those?

Just like we're seeing in law, A.I. would make a case for the solution it proposes while offering understandable evidence based on concepts we're familiar with, like life expectancy, infant mortality, quality of life, well-being, etc. Part of the solution would include explaining it.

So A.I. would give us the arguments for why we should believe in the solutions it provides... I see... I must admit, that would really be fascinating!

Okay, I'm still thinking about that topic... I want to add something: if A.I. could make moral decisions for us, the only question left would be whether it's moral THAT it can make moral decisions for us. And maybe A.I. could even provide the answer to this question? Wow! Thank you for having this great conversation!

Wouldn't that be evidence of the incapacity of all mankind?