You are viewing a single comment's thread from:

RE: LeoThread 2025-08-16 03:50

in LeoFinance · 2 months ago

race accept that sometimes, not intentionally, because intentionally it would be an autonomous weapon. But in order to deliver a goal that is proper and helps humans overall, there are lives lost anyway. Under what conditions can you allow an AI product to launch even though it leads to, let's call it unintentional, casualties? I think as a society we do have to detect, debug, and try to fix each casualty, but it won't be an if-then-else, did-you-break-the-law, go-to-jail kind of situation; rather, programmers will have to go back and see whether they need to gather more data and things like that. As for the level of explainability, I think with some more research it will be possible for AI to approximate a human description of why something happened, because humans aren't perfect explainers either. The AI decision may be far too complex for us to fully understand, but it can tell us that the five most prominent reasons are A, B, C, D, E, enough for people to (29/38)