I consider the increasing application of AI to killer battlefield drones and weapons to be one of the likelier points of failure, as it's among the earliest and most experimental applications being undertaken. There are also just so many things about our social relationships that are very difficult for people themselves to sort out, and waging war is a social relationship. Friendly fire is a thing. If we can't reliably sort out who we're supposed to kill, I suspect AI will not find it any easier to get right 100% of the time.