AI is, of course, great. There are just a few "buts"...
But someone has to look after it.
But someone has to teach it (at least in the first stage, an expert in the particular field is required, whether an artist, a doctor, or a marketer).
But the ethical question remains: does AI have the right to make the decision? Take AI in a weapon, for example: is the target a terrorist to be destroyed, or just a child with a toy gun?
But can we trust AI at all, or has it self-developed to a level where it prioritizes its own survival over its creator's?
Very good point that you mention here. Even police officers today are sometimes deceived into thinking someone is holding a real gun when it is actually a toy. In my opinion, human intervention cannot be easily replaced anytime soon.