You are viewing a single comment's thread from:

RE: 5 things we need to change about AI

in Technology3 months ago

This is brilliant! What a challenging and thought-provoking article. I was still completely immersed well past the 80% drop-off mark, haha.

"To fear is natural. But this technology is not going away. And rather than sulking about why we did this, it would be a much better use of our energy to think about the inevitable changes it will bring and how we best prepare for them."

Absolutely agree. Humans have a natural tendency to fear what we don't understand or can't predict, and I think this has formed the backbone of our response to the alarming and rapid advance of AI technologies. So little time has passed since the likes of AlphaGo, and not enough leeway has been given for us to "catch up" - it seems AI is very much developing at its own pace, at a rate that we humans simply can't match.

So, yes, in the present we have the power to change the narrative. To change the dynamic. I particularly liked this paragraph:

"So more optimistic. More positive. But also realistic and scientific. For example, I think it's much more useful to spend energy on how we govern AI; removing bias, tackling data narrowing, understanding hallucination, overlaying explainability. And yes. Failsafe mechanisms. The really interesting thing (for me at least) is that all these AI flaws are also present (in slightly different forms) in human beings."

From automating mundane tasks to pioneering breakthroughs in healthcare, AI is transforming the way we live and work... It promises immense potential for productivity gains and innovation. But I've been learning more and more about the concerning biases surfacing in AI systems - specifically in the output of algorithms, due to prejudiced "human" assumptions baked into the data. For example, the recent investigation into a widely used healthcare algorithm in the US, which severely underestimated the needs of Black patients, leading to significantly less care. I also think it's important to point out how audiences perceive and value AI-augmented labour... the intersection of AI and inequality is especially troubling where "value" is assessed through a lens of bias against marginalised groups.

Your point about the "rights" of AI was surprising but also very interesting. What an observation. Very much "food for thought"! Personhood has been granted not only to non-human animals but also to non-human entities. For example, in 2017 New Zealand passed a groundbreaking law recognising the Whanganui River as a legal person - an "indivisible and living whole" from the mountains to the sea, incorporating all its "physical and metaphysical elements". The scope and reach of AI is difficult to calculate, but it's certainly worth considering that if AI were recognised as a "legal person", it too could hold rights and bear responsibilities under the law.

"If an AI could be held accountable under law for contractual, tortious, or even criminal acts, has not the AI, de facto, acquired legal personhood (whether juridical or natural)?"

What a question. Thanks for sharing @dviney !!


Thanks @actaylor - fascinating piece on the Whanganui River. I agree on the biases. This should really be the key area for us working in the field (and in fact my guys are doing some very cool stuff with IBM and Satalia in that very space right now). Stay in touch, Alex, as I'll come back to the topic of AI later (and perhaps devote a piece to AI Governance when I do). I think I will keep these AI pieces on Ecency, as I like the platform... and am trying to keep my main blog focused on agile & change management.

Great! I'm glad to hear some cool stuff is being done with IBM and Satalia on AI bias. Awesome, @dviney - I'll give your account a follow and look forward to learning more from you. ☺️