
A recent paper by Hinton and colleagues (2017) has generated considerable enthusiasm in the deep learning community, as it challenges the dominance of CNNs:
"We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. " [source]
Eugenio Culurciello, on the MLReview blog on Medium, has posted an insightful summary of the key points behind this research, well suited to readers who find the technicalities of Hinton's paper hard to follow. From the post:
"Deep neural nets learn by back-propagation of errors over the entire network. In contrast real brains supposedly wire neurons by Hebbian principles: “units that fire together, wire together”. Capsules mimic Hebbian learning..." [source]
Culurciello also discusses pooling in standard deep neural nets versus the more dynamic routing mechanism in capsules, prediction, the resemblance of capsules to cortical columns in the human brain, and more. For illustration, a high-level overview of the capsule network architecture is provided.
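That dynamic form of pooling is the routing-by-agreement procedure from the paper. Below is a minimal NumPy sketch of it, assuming toy shapes and random prediction vectors purely for illustration; the name u_hat follows the paper's notation for prediction vectors, and the squash function is the nonlinearity defined there.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity: short vectors shrink toward 0, long vectors
    saturate to length ~1, and direction is preserved."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement over prediction vectors u_hat with shape
    (num_in_capsules, num_out_capsules, out_dim)."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per output capsule
        v = squash(s)                                         # output capsule vectors
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement updates the logits
    return v

# Toy usage: 6 input capsules routing to 2 output capsules of dimension 4.
u_hat = np.random.default_rng(1).normal(size=(6, 2, 4))
v = dynamic_routing(u_hat)
print(np.linalg.norm(v, axis=-1))  # lengths in (0, 1) encode entity presence
```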
So, if you don't have time to fully digest Hinton's paper, start with Culurciello's summary:
To stay in touch with me, follow @cristi
Cristi Vlad, Self-Experimenter and Author
I actually dug into that paper last week. If I've understood it correctly, it's a kind of machine inside a machine, dedicated to a given task. I still have to digest it further. The amazing thing is that the math looks very easy :)