
Agreed! It is far more difficult than I thought. I was on an executive-level working group tasked with defining a manifesto of sorts for AI ethics. Lots of complex discussions. Fortunately, there were some brilliant people on the team whom I took the opportunity to learn from. I recall a two-hour discussion on the difference between "equality" and "equity". I learned so much! Suffice it to say, it gets complex when you try to codify abstract thoughts.

Equality and equity, I'm aware of that debate. I think perfect equality is simply impossible. Equity is possible, though.


Is it ethical to sacrifice some level of privacy for security, safety for convenience, time for accuracy, etc.? What about systems that determine justice? Is it ethical to provide different levels of service, or should resources be distributed equally in all cases? Should one person's life be valued more than another's? Are you sure, or are your answers couched in the words "it depends"? That is the challenge of codifying ethics into a binary/digital system.

And this is why I don't think it can be hard coded nor can you or I or any small group of us come up with what we think is best for everyone else. Everyone has to have some say in it because everyone has a stake in the outcome of it.

And a "code of ethics" cannot be set in stone, and in my opinion has to be formulated from the most current data. In this case it's a data driven process, requiring observation, requiring deep understanding of the social dynamics in different communities, and such as disciplines like anthropology, psychology, which need to be applied.

Developing a code of ethics for an organization is a very hard process when we are talking about artificial intelligence. It's as hard as trying to come up with ethics for a global government. How do you know you got it right? With the amount of power AI has, you cannot afford to get it wrong.

So the best you can probably do is map it to current mass opinion. That way, if it is wrong, it's because society and all the people were wrong too. The other thing you can do is try to limit the amount of damage it can cause by taking the most conservative approach: focusing on the most fundamental values humans have and trying to reach agreement on those.
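To make that conservative approach concrete, here is a minimal sketch in Python of encoding only the values that clear a very high consensus bar. The values, endorsement shares, and threshold are all invented for illustration, not real survey data.

```python
# Sketch of the conservative approach: only encode values where mass
# opinion clears a high consensus bar. All numbers are invented.

consensus = {  # hypothetical share of people endorsing each value
    "preserve life": 0.97,
    "avoid suffering": 0.95,
    "property is absolute": 0.58,
}

THRESHOLD = 0.90  # deliberately high: encode only near-universal values

core_values = {v for v, share in consensus.items() if share >= THRESHOLD}
print(core_values)  # only the most fundamental, widely shared values survive
```

Whatever survives that bar is wrong only if near-universal opinion was wrong too, which is exactly the damage-limiting property described above.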

That's exactly what ethics are: a set of dos and don'ts written down.

Those are heuristics, rules for a better life. Ethics are more sophisticated than a mere list of dos and don'ts. Utilitarian ethics, for example, are not a list of dos and don'ts; they're an algorithm.

The focus of the utilitarian algorithm is to produce a certain outcome. The dos and don'ts only matter insofar as they produce that outcome. The focus of the algorithm is to maximize happiness, so there doesn't need to be any fixed list, because the items on the list are variables. The right thing to do is whichever item is deemed most likely to produce the desired outcome of the algorithm (maximum happiness).
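A minimal sketch of that algorithm, assuming happiness per action could be predicted and scored at all; the candidate actions and their scores are invented for illustration:

```python
# A minimal sketch of the utilitarian "algorithm" described above.
# The actions and happiness scores are hypothetical, not a real model.

def utilitarian_choice(actions, expected_happiness):
    """Pick whichever available action maximizes expected happiness.

    There is no fixed list of dos and don'ts: the candidate actions
    are variables, and only the predicted outcome matters.
    """
    return max(actions, key=lambda action: expected_happiness[action])

# Hypothetical outcome predictions for three candidate actions.
expected_happiness = {
    "tell_the_truth": 0.7,
    "tell_a_white_lie": 0.4,
    "say_nothing": 0.2,
}

best = utilitarian_choice(list(expected_happiness), expected_happiness)
print(best)  # -> "tell_the_truth" under these made-up numbers
```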

You make it seem like artificial intelligence is anything but programming. It's not, so in the end it's not a problem of "which ethics" to define in code or of defining ethics for "a global government"; it's a matter of implementing ethics, which aren't opinions or open to opinion but simple, universally recognized (observed = observation) principles.

I know it's programming, but my point is that in this area things shouldn't be hard-coded. You cannot hard-code a list of dos and don'ts and expect it to apply to every possible configuration of situations. Instead you have to encode the knowledge itself, from which the principles can be derived.

So, for example, people value life, and from this knowledge the machines can avoid contradictions. If life is valuable, then the machine can deduce on its own, using mere logic, that preserving life is better than not preserving life. My point is that the axioms or principles are not set in stone; they are determined by the data given to the AI.
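Here is a toy sketch of that idea: conduct falls out of encoded values rather than a hard-coded rule list. The value weights and the effect format are hypothetical stand-ins for whatever data the AI is actually given.

```python
# Toy illustration of deriving rules from encoded values instead of
# hard-coding a list of dos and don'ts. The "values" here are
# hypothetical data the system has been given, not a real knowledge base.

values = {"life": +1.0, "suffering": -1.0}  # supplied by data, not fixed

def evaluate(action_effects):
    """Score an action by how it affects the things the data says are valued.

    action_effects maps a valued quantity to the change the action causes,
    e.g. {"life": +1} for an action that preserves a life.
    """
    return sum(values.get(thing, 0.0) * delta
               for thing, delta in action_effects.items())

# "Preserving life is better than not preserving life" falls out of the
# encoded value of life; it was never written down as an explicit rule.
preserve = evaluate({"life": +1})
neglect = evaluate({"life": -1})
assert preserve > neglect
```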

So ultimately the AI has to be data driven; it requires data from outside sources, from humans, and so ultimately we have to tell the AI our current understanding of our values. And this is the source of the problem, because how do we actually agree, as a community of billions, on what these values should be?

> It's not, so in the end it's not a problem of "which ethics" to define in code or of defining ethics for "a global government"; it's a matter of implementing ethics, which aren't opinions or open to opinion but simple, universally recognized (observed = observation) principles.

Universally recognized by what, though? We still need to agree on the process for distinguishing valid from invalid ethical principles. For example, not everyone is utilitarian, and not everyone is consequentialist, which means some people will be more concerned about the afterlife than about happiness in the here and now. Both would be ethical according to the logic of their own moral systems, but that doesn't mean they'll agree.

A Christian, for example, can believe that stealing is a sin, and that this is absolute. There is no situation where stealing becomes right in Christianity; stealing is always wrong. In consequentialism, in utilitarianism, and in some other ethics, stealing might not always be wrong; it would depend on the consequences of the action, on the amount of happiness or misery it could create. And right there we'd have a conflict between hardline Christians and hardline utilitarians. To the machines, neither would be unethical, because both would be logical, each following its own principles. So how would the machine determine which of these is true?

Is stealing right or wrong? It's going to depend on who you ask, the circumstances, etc.


The terms right and wrong are relative to the person and their moral structures. What is right for you may be wrong for others, and vice versa.

We agree that programming can include "it depends" capabilities, but the more complex you go, the more difficult and convoluted it becomes. This introduces risks of errors, inconsistencies, and corner cases that require human intervention.

The "do no harm" and "treat others how you want to be treated" are good rules-of-thumb (we all have different thumbs) and are very problematic to program as the terms 'harm' and 'how you want to be treated' are different from person to person and can change quite often even for an individual.

"Treat others how you want to be treated" is a bad heuristic because it's not data driven: "you" doesn't exist in the data, and shouldn't, because it would bias things. "Treat others how they want to be treated," on the other hand, is data driven and can leverage big data.
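A minimal sketch of the difference between the two heuristics; the people and their preferences are invented:

```python
# "Treat others how you want to be treated": one hard-coded default,
# biased toward whoever wrote it.
MY_PREFERENCE = "direct feedback"

# "Treat others how they want to be treated": look the person up in data.
observed_preferences = {
    "alice": "direct feedback",
    "bob": "gentle suggestions",
}

def how_to_treat(person):
    # Fall back to gathering more data rather than projecting
    # our own preference onto them.
    return observed_preferences.get(person, "ask them first")

print(how_to_treat("bob"))    # gentle suggestions
print(how_to_treat("carol"))  # ask them first
```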

Once again this shows why it's hard to do ethics. People want to put themselves and their views into the ethics, but this biases things. To do it right, it has to be data driven, in my opinion: the ethics have to be based on the current views of the world, the consensus of the different demographics represented, according to clear rules.

It doesn't matter what we say. It matters what the world says. We represent others and if we are talking about global projects, global companies, global AI, etc, then it would be elitist and selfish to program only our own opinions and feelings into it. Why should we think we know what is or isn't right for the whole world?

The world has to decide for itself what is or isn't right and the responsibility shouldn't be on some elites in an ivory tower but on the people in the world to decide what they think justice is. I don't even agree with a lot of other people on a lot of different things but I recognize that we have to serve other people and represent the interests of other people in the global context.

How do you determine the value of an ethical policy if you're not basing it on the values of either your community, or of the global community as a whole?

My point is that it is not up to us, or me, to decide what is best for the entire global community. The global community is the only demographic that can decide its own values, and ethics in the context of AI has to represent the values of different demographics.

There is a global community of human beings who share similar values. I could say there are "communities" which generate a shared consensus around what the majority of communities of that time believe in. This is also called the zeitgeist. Global sentiment can reveal the current zeitgeist, and its nature, to some extent.

There's no such thing as a "global community," much like there's no such thing as "which ethics" or conflicting ethics. There are universally recognized principles; look up Universal Ethics.

There is nothing to look up. I don't learn ethics from books; I learn ethics from observation. What works and what doesn't? What does the data show about how people really think and feel? If you can't cite actual practical data showing that people think a certain way, then what are your views on "universal ethics" backed by? Your own feelings?

I see ethics the way a weather forecaster sees cloud formations: it's merely the current arrangement of mass sentiment on many different topics, issues, etc. I don't get to decide how you or others think about a question like abortion or anything else. I only get to ask you questions, to see if you'll tell me what you think in some anonymous, safe fashion, or I can observe your behavior and deduce from your actions what you really think.

People who claim they value a certain thing? Their behavior should align with it to give the claim some weight. And this is how I know what someone believes in and what they might think is or isn't ethical. Do that for every person in the community and you get community sentiment plus behavioral data. This data can inform us of what society really thinks and feels, and from that we can come up with an ethics which we think currently best represents the values our communities hold.
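A rough sketch of that weighting idea, assuming claimed values and behavioral alignment could each be scored on a 0-to-1 scale; all the numbers are invented:

```python
# Weight what people claim by how well their behavior aligns with it,
# then aggregate into a community sentiment score. All data is invented.

people = [
    # claimed value of "honesty" on [0, 1], and how consistently their
    # observed behavior matched that claim, also on [0, 1]
    {"claimed": 0.9, "behavior_alignment": 0.8},
    {"claimed": 1.0, "behavior_alignment": 0.3},  # talks, doesn't walk
    {"claimed": 0.6, "behavior_alignment": 0.9},
]

def community_sentiment(people):
    """Weight each claim by behavioral alignment, so stated values that
    people actually act on count for more."""
    weighted = sum(p["claimed"] * p["behavior_alignment"] for p in people)
    total_weight = sum(p["behavior_alignment"] for p in people)
    return weighted / total_weight if total_weight else 0.0

print(round(community_sentiment(people), 2))  # 0.78 with these numbers
```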

If you have anything to offer in regard to "which ethics," do so; otherwise I'd rather not spend my time responding to vague comments that lose track of the conversation and take issue not with what I said but only with what I implied or insinuated, or whatever justified the direction of the responses.

Different demographics of people have very different ethics. The ethics which help people survive in prison don't necessarily work in every environment: they clearly worked in prison, but then people get out and find that suddenly things work very differently.

One side of ethics is that it has to actually work in the real world. It's not about some hard-coded rules; it has to actually improve societal well-being, raise the sum of human happiness, or move some similar metric by which we can say that following these rules makes the world a better place.
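As a sketch of that kind of metric-based test, assuming individual well-being could be scored at all; the before/after numbers are invented:

```python
# Judge a rule by outcomes rather than by fiat: keep it only if
# adopting it raises the chosen well-being metric. Data is invented.

def total_wellbeing(population):
    # the "sum of human happiness" metric mentioned above
    return sum(population)

before = [0.6, 0.5, 0.7]  # well-being scores without the rule
after = [0.7, 0.6, 0.7]   # scores observed after adopting it

rule_works = total_wellbeing(after) > total_wellbeing(before)
print(rule_works)  # True: by this metric, the rule makes things better
```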

Many people believe their holy book provides the best source of ethics, so when I ask "which ethics," it's an obvious question. Not everyone is going to agree with each other on most things, so reaching universal agreement among billions of people is pretty difficult. And if it does happen, it will likely have to be about the most fundamental human values.