1969
Arthur Bryson and Yu-Chi Ho introduce backpropagation as a method for optimizing multi-stage dynamic systems. While originally developed for control systems, the algorithm becomes crucial for training multilayer neural networks. Backpropagation gains prominence in the mid-1980s, after Rumelhart, Hinton, and Williams popularize it for neural networks; later advances in computing power enable the rise of deep learning in the 2000s and 2010s.
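In modern terms, backpropagation applies the chain rule layer by layer to compute the gradient of a loss with respect to every weight, then updates the weights by gradient descent. The following is a minimal sketch in Python with NumPy, not Bryson and Ho's control-theoretic formulation; the two-layer sigmoid network, XOR task, learning rate, and loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, solvable with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative architecture: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass: propagate error derivatives layer by layer
    # via the chain rule -- the essence of backpropagation.
    d_out = (out - y) * out * (1 - out)    # squared-error loss gradient
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(out.ravel(), 3))  # approaches [0, 1, 1, 0]
```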
Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, which critically analyzes the limitations of single-layer neural networks, showing, for example, that a single-layer perceptron cannot compute XOR. Their work is often blamed for the decline of interest in neural network research that followed. In the 1988 edition, they argue that progress had already stalled by the mid-1960s: despite numerous experiments with perceptrons, the field lacked the theoretical understanding needed to advance.
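The XOR case is the book's best-known limitation: XOR's classes are not linearly separable, so no single-layer perceptron can represent it. A minimal sketch in Python with NumPy, assuming the classic perceptron learning rule; the helper name and training parameters are illustrative, not taken from the book.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Single-layer perceptron with the classic update rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += (yi - pred) * xi   # perceptron update rule
            b += (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
for name, y in [("AND", np.array([0, 0, 0, 1])),
                ("XOR", np.array([0, 1, 1, 0]))]:
    w, b = train_perceptron(X, y)
    preds = [(1 if xi @ w + b > 0 else 0) for xi in X]
    print(name, "learned:", preds, "target:", list(y))
```

Running this, the perceptron learns AND (linearly separable) but keeps misclassifying XOR no matter how long it trains, which is the geometric point Minsky and Papert made rigorous.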