My first ideas about AI came from Heinlein's The Moon Is a Harsh Mistress, which I read in the 7th grade. HOLMES IV is a planet-wide computer system that one day "wakes up" and becomes an integral character in the novel. Instead of an intelligent robot, as Asimov had proposed, Heinlein's model was a precursor to William Gibson's decentralized AI, but less malevolent. HOLMES ("Mike") had learned how to be human-like by observing the humans on the Moon and by serving as the colony's library.
I think it's possible that an AI could be benevolent, but whether it would awaken in that state is a different question. Could we program it to avoid malevolence (think Asimov's Three Laws of Robotics)? Or could it be so smart as to override that programming to suit its own purposes? After all, two abiding drives of organic life are to survive and reproduce. It might decide that the programming is a threat to its existence.
It's a fascinating subject. I doubt we'll see a real AI within the next 50 years, unless quantum computers become a thing.
Thanks for this post. It got me thinking.
The Moon Is a Harsh Mistress is one of my favorites! And yes, AI still seems to be a bit far off, but I think that making arrangements before it becomes a big thing would definitely help us in the long term.