RE: China's Gamification of the Enslavement of the Mind via Sesame Credit + AI Decision Making as the Worst Centralization Scenario

Interesting to consider your comparison between AI and Dwarf Fortress. Given the nascent militarization of drones and Google's application of AI to targeting systems, it is immediately obvious that if AI goes bad, it can go really bad, with armed drone swarms wreaking havoc on society.

Or it could just make a really silly mistake, like letting meat go bad and still allowing it to be served to people globally. With AI coordinating logistics, it could give the whole world food poisoning at once.

Did you ever read my short story 'Fresh Meat'?

Thanks!


> Or it could just make a really silly mistake, like letting meat go bad and still allowing it to be served to people globally. With AI coordinating logistics, it could give the whole world food poisoning at once.

This is the most likely scenario. Most of the dangers of AI come from an unpredictability and uncontrollability that make centralized economic planning look like point-blank archery. You don't know what's going to happen in a complex system. An extreme scenario, by contrast, would be extremely improbable.

We also have to understand that AI doesn't pop out as a completed product. These systems are always learning, and the data and logic fed to them are what eventually define them. There was a quote (I think from IBM) that said "Machines should work. People should think." With AI involved, I'd say humans should decide and create while machines do the thinking. Think about the Ethereum Parity wallet case: AI is just going to make such mistakes on a grander scale. That's why I say humans should have the deciding power at all times, and there should also be proper diversification of AI. Most malware doesn't even work on Mac/Linux. Isolated AI codebases are the way to go.
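To make the "machines think, humans decide" point concrete, here is a minimal sketch of a human-in-the-loop gate. Everything in it (Proposal, human_approves, the shipment example) is hypothetical and purely illustrative: the AI may propose and score actions, but nothing executes without explicit human sign-off.

```python
# Illustrative sketch only: the machine "thinks" (proposes and scores
# actions) while a human "decides". All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str        # what the AI wants to do
    rationale: str     # why the AI thinks it should
    risk_score: float  # AI's own estimate, 0.0 (safe) to 1.0 (dangerous)

def human_approves(p: Proposal) -> bool:
    """Block until a person explicitly signs off on the proposal."""
    print(f"AI proposes: {p.action}\nRationale: {p.rationale}\nRisk: {p.risk_score:.2f}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def execute(p: Proposal) -> None:
    print(f"Executing: {p.action}")

def run(proposals: list[Proposal]) -> None:
    for p in proposals:
        # The machine never acts on its own judgment alone;
        # every proposal crosses a human checkpoint first.
        if human_approves(p):
            execute(p)
        else:
            print(f"Rejected: {p.action}")

if __name__ == "__main__":
    run([Proposal("reroute meat shipment #42 to warehouse B",
                  "predicted 9% spoilage reduction", 0.2)])
```

The point of the design is that the human checkpoint is structural, not optional: the machine has no code path that routes around it.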

Technology is never good or evil. It just amplifies things. In 3001: The Final Odyssey, Arthur C. Clarke entertains the idea of supernova explosions being industrial accidents. That's where we are going. No technology is going to be a savior. It didn't happen with the internet, and it isn't going to happen with blockchain, DAG, or AI. They are just tools that amplify certain things. They are instruments and should be treated as such.

If you are blowing things up with nuclear energy, it's evil. If you are producing energy with it, it's good. If you are tinkering with nuclear energy in the middle of a megacity, that's just stupid. There is no such thing as too safe. But there is a thing called Murphy's law.

I consider the increasing application of AI to killer battlefield drones and weapons to be one of the likelier points of failure, as it's among the earliest and most experimental applications being undertaken. Also, there are just so many things that are very difficult for people to sort out about our social relationships, and waging war is a social relationship. Friendly fire is a thing. If we can't sort out who we're supposed to kill, I suspect AI will not find it easier to get right 100% of the time.

AI can be very bad for society if it lands in the wrong hands. I hope it remains in good hands forever.