
AI has been compared to electricity and the industrial revolution; a particularly apt analogy treats AI as a new computing paradigm (Software 2.0), since both center on automating digital information processing.

When forecasting computing's effect on jobs in the 1980s, the most predictive feature of a task was how fixed its algorithm was — whether it consisted of mechanically transforming information according to rote, easy-to-specify rules (typing, bookkeeping, manual calculation).

Those were the kinds of programs that could be written by hand then.

With modern AI, it has become possible to produce programs that could never be written manually.

Instead of writing the program, the approach specifies an objective (classification accuracy, a reward function) and searches program space via gradient descent for a neural network that meets it. This is the core of the Software 2.0 framing.
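The shift from hand-written rules to objective-driven search can be sketched in a few lines. This is a toy illustration, not code from the text: instead of coding the function directly, we specify a loss and let gradient descent search a (here trivially small) space of parameterized programs. The data and parameter names are hypothetical.

```python
# Hypothetical example: "program" the function y = 2x + 1 by objective,
# not by hand. The parameters (w, b) define the program; gradient descent
# on mean squared error finds them.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # samples of y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # analytic gradients of mean squared error with respect to w and b
    gw = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    gb = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges near w=2, b=1
```

The same recipe scales from two parameters to billions: the objective stays explicit while the "source code" of the program is discovered by optimization rather than written.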

Under this new paradigm, verifiability becomes the key predictive feature. If a task is verifiable, it can be optimized directly or via reinforcement learning, allowing neural nets to perform extremely well.

Verifiability is about the extent to which an AI can "practice" a task: the environment must be resettable (a fresh attempt is always possible), efficient (many attempts can be made cheaply), and rewardable (an automated process can score each attempt).
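Those three properties can be made concrete as an environment interface. The sketch below is illustrative only; the class and method names are hypothetical, not from any real RL library, and the task (guessing a hidden digit) is deliberately trivial.

```python
# Hypothetical sketch of a verifiable task: resettable, efficient, rewardable.
import random

random.seed(0)  # fixed seed so the run is reproducible

class GuessEnv:
    """Toy task: guess a hidden integer in [0, 9]."""

    def reset(self):
        # resettable: every call starts a fresh, independent attempt
        self.target = random.randrange(10)

    def score(self, guess):
        # rewardable: an automated check scores the attempt, no human needed
        return 1.0 if guess == self.target else 0.0

env = GuessEnv()
wins = 0
for _ in range(1000):  # efficient: thousands of cheap attempts
    env.reset()
    wins += env.score(random.randrange(10))

success_rate = wins / 1000
print(success_rate)  # a random policy succeeds roughly 10% of the time
```

Anything that exposes this kind of loop — reset, attempt, automatic score — can be hammered on by optimization; anything that needs a human to judge each attempt cannot.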

The more verifiable a task is, the more amenable it is to automation in Software 2.0.

Non-verifiable tasks must rely on generalization or imitation, which is why progress in large language models looks "jagged": verifiable tasks (math, coding, puzzle-like problems with clear answers) advance rapidly, while creative, strategic, and context-rich real-world tasks lag behind.

Software 1.0 automates what can be specified. Software 2.0 automates what can be verified.