
RE: Can science answer moral questions? I don't think so

in #science · 7 years ago

I'd love to hear a reply from Sam Harris on your thoughts.

When you say some forms of morality can be entirely reduced to mathematics, what does that actually mean? You gave consequentialism as an example; does that mean one act has one outcome, as in a logical, mathematical outcome? You can see I'm struggling to stretch my intellectual capabilities here :-) Enjoyed your post though!


If we all agree on what "social utility" means, say the sum of human happiness, then we have turned morality into mathematical calculation. In essence, this ethical calculus was formalized by Bentham. In addition, under consequentialist ethics, even without agreeing on "social utility", you can start from your own "self-interest" and use cost-benefit analysis to decide which decisions provide the most expected utility. Consequentialist ethics are mathematics, and utilitarianism is one kind of consequentialist ethics.
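To make that concrete, here is a rough Python sketch of what a Bentham-style calculation could look like; the people, actions, and happiness numbers are invented purely for illustration:

```python
# A minimal sketch of utilitarian "ethical calculus": score each action
# by the total happiness it produces across everyone affected, then
# pick the action with the highest total. All names and numbers here
# are made up for illustration.

# happiness change each action causes for each affected person
outcomes = {
    "tell the truth": {"alice": -2, "bob": +5, "carol": +1},
    "tell a white lie": {"alice": +3, "bob": -1, "carol": 0},
}

def social_utility(effects):
    """Bentham-style aggregate: the sum of everyone's happiness change."""
    return sum(effects.values())

best = max(outcomes, key=lambda action: social_utility(outcomes[action]))
print(best)  # the action with the greatest total happiness
```

Once everyone agrees on the utility numbers, "which act is right" reduces to picking the maximum; the hard part is agreeing on the numbers, not the math.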

You can be an ethical egoist consequentialist and simply rely on game theory to determine the correct course of action. Emotions do play a role, because they determine what we value, but while value is the subjective part of the equation, the calculations themselves are pure mathematics. Social exchange theory shows how the same idea works in economics, which is also a discipline of mathematics rather than science.
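For example, here is a toy version of that game-theoretic reasoning, using a standard prisoner's dilemma payoff matrix; the belief about the other player is an assumption I made up:

```python
# A toy decision for an ethical egoist: given a payoff matrix and a
# belief about how likely the other player is to cooperate, choose the
# action that maximizes your own expected payoff. The payoffs are a
# standard prisoner's dilemma; the 0.6 belief is purely an assumption.

# my payoff for (my_action, their_action)
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

p_they_cooperate = 0.6  # my assumed belief about the other player

def expected_payoff(my_action):
    return (p_they_cooperate * payoffs[(my_action, "cooperate")]
            + (1 - p_they_cooperate) * payoffs[(my_action, "defect")])

choice = max(("cooperate", "defect"), key=expected_payoff)
print(choice, expected_payoff(choice))  # "defect", 3.4 in this toy game
```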

Known consequences can be weighted by probability and computed, with a calculator or a supercomputer, to inform or advise someone on which actions to take, just as legal or medical advice can be delivered by an AI doing the calculations. If you value your life or certain interests, or have some utility function, then that determines the right and wrong outcomes for you, but the calculation is always the same process: cost-benefit analysis, probability distributions, and so on. The reason consequentialism isn't currently fashionable for humans is that it's a burden to calculate it all, but over time AI will reduce that burden.
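As a sketch of what that advisory calculation looks like, here is expected-utility cost-benefit analysis in a few lines of Python; the scenario, probabilities, and utilities are all made up:

```python
# A sketch of expected-utility cost-benefit analysis: each action leads
# to several possible outcomes with known probabilities, and the advisor
# (human or AI) recommends the action with the highest expected utility.
# The medical scenario and all numbers here are illustrative.

actions = {
    "take the surgery": [(0.9, +50), (0.1, -100)],  # (probability, utility)
    "decline surgery":  [(1.0, -10)],
}

def expected_utility(outcomes):
    # probability-weighted sum of utilities: E[U] = sum(p_i * u_i)
    return sum(p * u for p, u in outcomes)

recommendation = max(actions, key=lambda a: expected_utility(actions[a]))
print(recommendation)  # "take the surgery" (E[U] = 35 vs. -10)
```

The utility function is where your personal values enter; everything after that is arithmetic a machine can do on your behalf.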

References

  1. https://en.wikipedia.org/wiki/Ethical_calculus
  2. https://en.wikipedia.org/wiki/Expected_utility_hypothesis
  3. https://en.wikipedia.org/wiki/Social_exchange_theory
  4. https://en.wikipedia.org/wiki/Consequentialism
  5. https://en.wikipedia.org/wiki/Probability_distribution
  6. https://en.wikipedia.org/wiki/Ethical_egoism
  7. https://en.wikipedia.org/wiki/Game_theory

Thanks, I'll have a read through the references. I'm struggling to fault Harris on this, to be honest, so I'm going to have another watch.

Having said that, I remember him discussing the morality of AI more recently. His example was a driverless car that finds itself nearing an unavoidable collision with some children. Does it swerve off the road, killing the driver, to save the children, or plough into the children and save the driver? I think his moral jury was out on that!

This has been solved. In my opinion, the owner of the car should decide whether to sacrifice the car to save others in the event of an accident. I don't think the AI specifically should be able to self-sacrifice to save others while a human is in the vehicle, unless the human owner accepts the same morality.

From a utilitarian perspective, where all the lives involved are those of complete strangers, it's better to save more lives than fewer. So it becomes a mere calculation (the Trolley Problem case) where you win by saving as many lives as possible. Taking one life to save five is a bigger win than saving one life and losing five.

However, if that one life is not a stranger and those five lives are complete strangers, the values of those lives are no longer equal. So in practice it isn't always ethical to save the maximum number of people in a situation, because at the end of the day human beings do not value all people equally. Human beings like or love some people.
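Here is a small sketch of both versions of that calculation; the weights are invented, since "how much you love someone" is exactly the subjective part:

```python
# The trolley calculation in both versions discussed above: with equal
# weights, saving five beats saving one; but once the one person is a
# loved one with a higher subjective weight, the arithmetic can flip.
# The 10x weight is of course an invented number.

def lives_saved_value(saved, weights):
    """Sum the (subjective) worth of the people saved; default weight 1.0."""
    return sum(weights.get(person, 1.0) for person in saved)

strangers = ["s1", "s2", "s3", "s4", "s5"]

# equal worth: diverting the trolley saves 5.0 > 1.0
print(lives_saved_value(strangers, {}))           # 5.0
print(lives_saved_value(["stranger"], {}))        # 1.0

# unequal worth: the one person is a loved one weighted 10x
weights = {"loved_one": 10.0}
print(lives_saved_value(strangers, weights))      # 5.0
print(lives_saved_value(["loved_one"], weights))  # 10.0
```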

The jury isn't out on the Trolley Problem, or on the self-driving-car variant of the same experiment. Under utilitarianism the solution is always calculated to save the most lives at the lowest cost, provided all lives are of equal worth.

So in the real world, how would that work? Maybe the car owner would sit through a morality interview at the car dealership, and their preferences would define the outcome of any accident. Surely there will be future laws to govern this, and it would make sense to follow the utilitarian approach, with no exception for whether the driver has any connection to the lives that would be lost.

The car could have a screen, put forth the question in the form of a video, and simply ask the owner what they would like their car to do in that situation.
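To illustrate, here is one hypothetical way the owner's answer could be stored and applied; every name and field here is invented, not a real system:

```python
# A sketch of how an owner's "morality interview" answer might be
# stored as a policy and consulted by the car in an unavoidable crash.
# The class, fields, and maneuver names are all hypothetical.

from dataclasses import dataclass

@dataclass
class OwnerAccidentPolicy:
    sacrifice_self_to_save_more: bool  # set during the dealership interview

def choose_maneuver(policy, lives_at_risk_outside, lives_inside=1):
    """Return the maneuver the car takes when a collision is unavoidable."""
    if policy.sacrifice_self_to_save_more and lives_at_risk_outside > lives_inside:
        return "swerve off the road"  # owner accepted self-sacrifice
    return "stay on course"

policy = OwnerAccidentPolicy(sacrifice_self_to_save_more=True)
print(choose_maneuver(policy, lives_at_risk_outside=3))  # "swerve off the road"
```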