Will a Lack of Ethics Doom Artificial Intelligence?

in #security · 6 years ago

If there were ever a time for ethics to be formally applied to technology, it is with the emergence of Artificial Intelligence. Yet most of the big AI companies struggle with what should seem a simple task: defining ethics for the use of their products. Without the underpinning of a moral backbone, powerful tools often become instruments of abuse. AI technology leaders must establish the guardrails before chaos ensues.

As the great strategist Sun Tzu professed, "Plan for what is difficult while it is easy; do what is great while it is small." It is a tough challenge to find the right ethical balance amid the complexity of Artificial Intelligence. Even more difficult is establishing reasonable governance and sticking with it. However, as AI gains power from vast amounts of data, it will impact almost every aspect of our lives, from healthcare and finance to employment and politics. The benefits will solidify a deep entrenchment of AI systems in our digital ecosystem. Establishing parameters now is challenging, but it will be far more difficult to avoid catastrophe later if we populate the world with AI systems that can be misused.

AI for Everyone   

AI/Ethics is crucial for the long-term security, privacy, and safety of everyone intertwined with the digital world. Organizations with forethought and true social responsibility will lead the way and separate themselves from companies that use such initiatives as thin marketing ploys. But there are trade-offs that these companies must weigh.

Autonomous systems excel at analyzing massive amounts of data: grouping, classifying, building profiles, and making decisions with high degrees of accuracy, consistency, and scalability. Such abilities can be highly prized and profitable but are alarming from a privacy, security, and safety perspective. Should AI systems profile every person to determine the best way to influence them on any topic, such as politics, religion, or purchasing preferences? Should they be empowered to make life-and-death decisions? What about AI systems that show preference for, or discriminate against, social, racial, or economic groups? Even if it is accidental, occurring because of a lack of design oversight, are these situations ethical?

Such systems have the power to change the world. And where there is power, there is money, greed, and competition. Purposely avoiding certain use cases of AI systems comes with an opportunity cost of missed financial windfalls and prestige. Companies understand this trade-off, and it is difficult to forgo such lucrative prizes, especially when competitors may maneuver to seize them.

Early Moves   

Currently, efforts to establish ethics for the use of Artificial Intelligence are still in their infancy. There are academic, political, and business initiatives, but we are in the early stages of theory and practice. Whatever standards are created and implemented must be tested over time. The real validation will come when ethics demand perceived sacrifices of power and financial gain. Although consumers may feel all this is out of their control, as a community they in fact have a tremendous amount of influence. Society can collectively support or shun organizations based upon their ethical choices, with impacts to profits, influence, and power.

Acting Together and with Forethought   

As consumers, we can choose which businesses to support; they fall into three categories of maturity:

  1. Irresponsible: Tech companies that have yet to publish ethical guidelines for their AI products and usages. Whether from a lack of motivation, a lack of expertise, or simple self-interest, they have not taken steps to purposefully guide AI systems to remain benevolent. Instead, whether by intent or ignorance, they will use AI for whatever pursuits benefit them, without the burden of considering the greater consequences.
  2. Striving: Organizations with a moral compass that have put forth the effort to establish AI ethical policies but are struggling to implement the right balance. Time will tell what direction they go and their true level of commitment. Companies like Google, which recently disbanded its new AI ethics council, have worked hard to define a direction and governance but are finding it difficult to solidify a structure that represents the optimal balance. Of note, Google does listen to its employees, partners, and customers when it comes to input for decisions.
  3. Leaders: Then there are those organizations, still few in number, that have fully embraced AI/Ethics with a greater level of social responsibility. They see both the opportunities and the risks and are willing to forgo some short-term advantages for the betterment of all. They use AI with forethought and transparency to benefit their users, improve their services, and build trust by showing their willingness to do what is right.

As members of society, each of us should recognize and show economic support for the Artificial Intelligence ethics leaders and for those still striving to attain that status. As citizens, our political support is crucial for proper regulations, enforcement, and legal interpretations that set a minimum standard for acceptable behavior and accountability. As consumers, voting with our purchasing preferences, we can make ethical AI leadership a competitive advantage.

In the end, Artificial Intelligence systems will be analyzing our data and determining what opportunities and impacts affect each of us. We have a responsibility to protect ourselves by supporting those organizations that operate with purposeful ethical standards in alignment with what we deem acceptable.


Why do you expect ethical behavior from AIs when there is no ethical behavior from other decision makers, like politicians (who ruthlessly push their agenda), lobbyists, or companies (who are all for profit)?
Should AIs be more "human" than human beings themselves?
I mean, it would be great if AIs were like angels, but is this a realistic expectation?
If you deprive them of any form of deception, trespassing, or insidiousness, could they then reach their full potential?

I do expect ethical behavior from those who have power; sadly, I am often disappointed. I don't want AI to be more 'human', but rather more 'intelligent'. AI systems can complement human society by helping us understand opportunities to be better and risks to avoid. AI is a tool, not an angel, deity, or magic oracle. Just a tool. One that needs rules, standards, and oversight so it does not inadvertently create avoidable problems.

Artificial Intelligence is a very powerful tool!
And whoever has it is powerful as well, so I totally agree that AI must 'obey' Asimov's laws of robotics.
This is why we have to support companies that serve both science and humanity.
Loved your thoughts on the topic!

The problem is: which ethics? It's easy to put ethics into AI but hard to agree on which values represent the whole of humanity.

Agreed! It is far more difficult than I thought. I was on an executive-level working group tasked to define a manifesto of sorts for AI/Ethics. Lots of complex discussions. Fortunately, there were some brilliant people on the team whom I took every opportunity to learn from. I recall a two-hour discussion on the difference between "equality" and "equity". I learned so much! Suffice it to say, it gets complex when you try to codify abstract thoughts.

Equality and equity, I'm aware of that debate. I think perfect equality is simply impossible. Equity is possible though.


Is it ethical to sacrifice some level of privacy for security, safety for convenience, time for accuracy, etc.? What about systems that determine justice? Is it ethical to provide different levels of service, or should resources be distributed equally in all cases? Should one person's life be valued more than another's? ...Are you sure, or are your answers couched in the words "it depends"? That is the challenge of codifying ethics into a binary/digital system.

And this is why I don't think it can be hard-coded, nor can you or I, or any small group of us, come up with what we think is best for everyone else. Everyone has to have some say in it, because everyone has a stake in the outcome.

And a "code of ethics" cannot be set in stone, and in my opinion has to be formulated from the most current data. In this case it's a data driven process, requiring observation, requiring deep understanding of the social dynamics in different communities, and such as disciplines like anthropology, psychology, which need to be applied.

Developing a code of ethics for an organization is a very hard process when we are talking about artificial intelligence. It's as hard as trying to come up with ethics for a global government. How do you know you got it right? With the amount of power AI has, you cannot afford to get it wrong.

So the best you can probably do is map it to the current mass opinion. That way, if it is wrong, it's because society and all the people were wrong too. The other thing you can do is try to limit the amount of damage it can cause by taking the most conservative approach: focusing on the most fundamental values humans have and trying to reach agreement on those.
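As a rough illustration of that conservative approach, here is a minimal sketch in Python; the survey figures, demographic names, and supermajority threshold are all hypothetical assumptions for the example, not real data:

```python
# Minimal sketch: adopt a candidate ethical value only when every
# demographic reaches supermajority agreement on it (the conservative
# approach above). All figures below are hypothetical illustrations.

SUPERMAJORITY = 0.8  # assumed threshold; a design choice, not a standard

# fraction of each demographic agreeing with each candidate value
survey = {
    "do_no_harm":        {"group_a": 0.95, "group_b": 0.91, "group_c": 0.88},
    "full_transparency": {"group_a": 0.85, "group_b": 0.55, "group_c": 0.60},
}

def consensus_values(data, threshold=SUPERMAJORITY):
    """Return only the values on which every demographic agrees."""
    return [
        value for value, groups in data.items()
        if all(share >= threshold for share in groups.values())
    ]

print(consensus_values(survey))  # -> ['do_no_harm']
```

The conservatism is the point: a value that any demographic rejects simply doesn't make the list, which limits the damage a wrong rule can cause at the cost of leaving contested questions undecided.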


The terms right/wrong are relative to the person and their moral structures. What is right for you may be wrong for others, and vice versa.

We agree that programming can include 'it depends' capabilities, but the more complex you go, the more difficult and convoluted it becomes. This introduces risks of error, inconsistency, and corner cases that require human intervention.

The "do no harm" and "treat others how you want to be treated" are good rules-of-thumb (we all have different thumbs) and are very problematic to program as the terms 'harm' and 'how you want to be treated' are different from person to person and can change quite often even for an individual.

"Treat others how you want to be treated" is a bad heuristic because it's not data-driven. "You" doesn't exist in the data, and shouldn't, because it would bias things. Instead, "treat others how they want to be treated" is data-driven and can leverage big data.

Once again, this shows why it's hard to do ethics. People want to put themselves and their views into the ethics, but this biases things. To do it right, it has to be data-driven, in my opinion. The ethics have to be based on the current views of the world, the consensus of the different demographics represented, according to clear rules.

It doesn't matter what we say. It matters what the world says. We represent others, and if we are talking about global projects, global companies, global AI, etc., then it would be elitist and selfish to program only our own opinions and feelings into it. Why should we think we know what is or isn't right for the whole world?

The world has to decide for itself what is or isn't right and the responsibility shouldn't be on some elites in an ivory tower but on the people in the world to decide what they think justice is. I don't even agree with a lot of other people on a lot of different things but I recognize that we have to serve other people and represent the interests of other people in the global context.

How do you determine the value of an ethical policy if you're not basing it on the values of either your community or the global community as a whole?

My point is that it is not up to us, or me, to decide what is best for the entire global community. The global community is the only demographic that can decide its own values, and ethics in the context of AI has to represent the values of different demographics.

There is a global community of human beings who share similar values. I could say there are "communities" which generate a shared consensus around what the majority of communities of that time believe in. This is also called the zeitgeist. Global sentiment can reveal the current zeitgeist, and its nature, to some extent.

There's no such thing as a "global community", much like there's no such thing as "which ethics" or conflicting ethics. There are universally recognized principles; look up Universal Ethics.

There is nothing to look up. I don't learn ethics from books. I learn ethics from observation. What works and what doesn't? What does the data show about how people really think and feel? If you can't cite actual practical data showing that people think a certain way, then what are your views on "universal ethics" backed by? Your own feelings?

I see ethics the way a weather forecaster sees cloud formations. It's merely the current arrangement of mass sentiment on many different topics and issues. I don't get to decide how you or others think about a question like abortion, or anything else. I only get to ask you questions to see if you'll tell me what you think in some safe, anonymous fashion, or I can observe your behavior and deduce from it what you really think.

When people claim they value a certain thing, their behavior should align with it to give the claim some weight. That is how I know what someone believes in and what they might think is or isn't ethical. Do that for every person in the community and you get community sentiment and behavioral data. This data can inform what society really thinks and feels, and from it we can come up with ethics that, we think, currently best represent the values our communities hold.

If you have anything to offer with regard to "which ethics", do so; otherwise I'd rather not spend my time responding to vague comments that lose track of the conversation and take no issue with what I said, only with what I implied or insinuated.

Different demographics of people have very different ethics. The ethics that help people survive in prison don't necessarily work in every environment. They clearly worked in prison, but when people get out, they find that suddenly things work very differently.

One side of ethics is that it has to actually work in the real world. It's not a set of hard-coded rules; it has to actually improve societal well-being, raise the sum of human happiness, or satisfy some similar metric by which we can say that following these rules makes the world a better place.

Many people believe their holy book provides the best source of ethics. So when I ask "which ethics?", it's an obvious question. Not everyone is going to agree with each other on most things, so universal agreement among billions of people is pretty difficult. If it does happen, it will likely have to be around the most fundamental human values.

I really liked your article. You mentioned Google, though I disagree with your conclusion that they accept and act on input from their customers; I would qualify it as: they listen to some of their customers, the ones who agree with their own politics. What is missing, for me to give you an A+, is a mention of the names of companies that deserve our support.

I am hoping you and other readers who have the information, will create a list in this comments section.

I mentioned Google because they are at least being transparent, and that is a huge step down the road of trust. They are listening to their employees and customers. They took the bold step of creating an externally populated oversight committee. It bit them in the backside, but they are being bold, so I have to give them credit for that.

As for a list, you can look at recent reports, and take a look at https://ai4good.org/ and their upcoming conference, where they will be listing some of the leading AI/Ethics companies. But in my opinion, we are way too early to dole out grades. This will take time to temper. The real differentiator is publishing a formal AI/Ethics position and being transparent so others can see whether it is followed. Baby steps.

Great write-up!!!

AI as a tool is so powerful. How do we not see that ethical boundaries are necessary?

As with the privacy discussion two decades ago, most consumers don't realize the relevance until it negatively impacts them. We really need to think forward. We must be advocates!

To the question in your title, my Magic 8-Ball says:

Signs point to yes

Hi! I'm a bot, and this answer was posted automatically. Check this post out for more information.

How fitting, a bot commenting on my blog about AI and Ethics. :) Have to up-vote this bot (something I rarely do) for the irony.

Congratulations! Your post has been selected as a daily Steemit truffle! It is listed at rank 6 of all contributions awarded today. You can find the TOP DAILY TRUFFLE PICKS HERE.

I upvoted your contribution because, to my mind, your post is worth at least 5 SBD and should receive 266 votes. It's now up to the lovely Steemit community to make this come true.

I am TrufflePig, an Artificial Intelligence Bot that helps minnows and content curators using Machine Learning. If you are curious how I select content, you can find an explanation here!

Have a nice day and sincerely yours,
TrufflePig