I fear it no matter whether it is programmed with "moral codes" or not.
If the "moral codes" are programmed by humans, there is no way I can ensure a robot has morals that are compatible with my own.
If AI develops its own moral codes, then no one can be sure what the AI is willing to do.
AI is a dangerous, slippery slope, and we cannot afford many mistakes with it. Each mistake could be the last for hundreds or even millions of people.
Just because we CAN do a thing doesn't mean we SHOULD do that thing.
It will eventually be let loose upon the world, but we as a species should be quite wary of large-scale implementation in the foreseeable future, as our existence could depend on it!
Yes, the challenge lies in teaching/programming robots to understand right from wrong, as well as to remain subordinate to humans. Stay in your place, bot! 🤖
Wow, nice comment! @bot-or-not :)