
RE: LeoThread 2025-11-05 15-48

in LeoFinance · 21 days ago

Part 6/12:

I shared a story from a conference about an ASIC designed to optimize a tractor's engine parameters. Flaws in the hardware caused it to optimize based on erroneous data, leading to unintended and potentially damaging behavior. This exemplifies how poorly designed optimization systems can go awry when input measurements are flawed—much like how an AI might behave if its goal system misinterprets data or objectives.
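The failure mode in that story can be sketched in a few lines of Python. This is a hypothetical toy model, not the actual ASIC logic: a hill-climbing optimizer tunes an engine setpoint against a sensor whose reading is biased, so the "optimal" setpoint it converges to overshoots the true optimum and crosses a safety limit. All names and numbers (the efficiency curve, the 800 RPM bias, the 2500 RPM limit) are invented for illustration.

```python
def true_efficiency(rpm):
    # Hypothetical efficiency curve: true optimum at 2000 RPM.
    return -((rpm - 2000.0) ** 2)

def measured_efficiency(rpm, sensor_bias=800.0):
    # Flawed sensor: reads the engine as running 800 RPM slower than it is,
    # so the apparent optimum shifts to a higher (unsafe) RPM.
    return true_efficiency(rpm - sensor_bias)

def hill_climb(objective, rpm=1500.0, step=10.0, iters=500):
    # Simple local search: move the setpoint in whichever direction
    # improves the objective, stop when neither direction helps.
    for _ in range(iters):
        if objective(rpm + step) > objective(rpm):
            rpm += step
        elif objective(rpm - step) > objective(rpm):
            rpm -= step
        else:
            break
    return rpm

SAFE_LIMIT = 2500.0
setpoint = hill_climb(measured_efficiency)
print(setpoint)                 # converges to 2800.0 RPM
print(setpoint > SAFE_LIMIT)    # True: the optimizer drove past the safe limit
```

The optimizer itself works perfectly; it faithfully maximizes what it measures. The damage comes entirely from the gap between the measured objective and the true one, which is exactly the point the story (and the analogy to AI goal systems) is making.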


Critique of Assumptions in AI Safety Research

I delved into some core assumptions underlying current safety and alignment frameworks, many of which I believe are flawed or overly optimistic:

1. Hypothesis Generation and World Models