Take a pattern of bits, apply some math to it (some of which may depend on what is next door to each bit), and arrive at a number.
Take that number and push it onto a "layer", as they call it. You can have one or more hidden layers, as well as input and output layers which accept the input and provide the result vector.
You vary the weights of the layer nodes (the decision points) and train them by iteration.
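The "vary weights and train by iteration" loop can be sketched very roughly like this. Everything here (the XOR toy data, one hidden layer of 4 nodes, the learning rate) is illustrative, not from the text, and it assumes numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: XOR, the classic problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 nodes; the weights are the "decision points" we vary.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train by iteration: forward pass, then nudge each layer's weights
# down the error gradient (backpropagation).
for _ in range(5000):
    h = sigmoid(X @ W1)      # hidden layer activations
    out = sigmoid(h @ W2)    # output layer
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(sigmoid(sigmoid(X @ W1) @ W2))
```

After training, the printed predictions should head toward the XOR targets [0, 1, 1, 0], though how close they get depends on the random initialization.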
You can operate on images this way, but they are hard to train: you need to iterate over all the layers and all the pixels. It can be done in hardware too, of course.
Operators akin to Sobel can be used to locate edges, along with the gradient magnitude and direction, to pull features out of the image space.
To find a bridge, for instance: a long straight line plus a quick change in color or shading.
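A minimal sketch of the Sobel idea, assuming numpy; the 6x6 test image with a vertical edge is made up for illustration. The two 3x3 kernels are convolved over each pixel's neighborhood to get the horizontal and vertical gradient, and from those the edge strength and slope direction:

```python
import numpy as np

# The standard 3x3 Sobel kernels: KX responds to horizontal intensity
# change, KY to vertical.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel(img):
    """Return gradient magnitude and direction (radians) per interior pixel."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(KX * patch)
            gy[i, j] = np.sum(KY * patch)
    mag = np.hypot(gx, gy)          # edge strength
    direction = np.arctan2(gy, gx)  # slope direction
    return mag, direction

# Synthetic image: dark on the left, bright on the right -- a vertical edge.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
mag, direction = sobel(img)
print(mag)  # strong response only in the columns straddling the edge
```

The direction comes out near zero along that edge, meaning the gradient points in the +x direction, i.e. the brightness increases left to right.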
Neural nets are loosely akin to decision trees: both map an input through a series of learned decision points to an output.
https://en.wikipedia.org/wiki/Sobel_operator
In this case we can look at a machine lever or arm, see the edges, and say it is moving... or it is still.
However, we might miss the fact that it is broken and moving, or broken and static.
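One way to make the moving-vs-still call is to compare edge maps between two frames: edges that shift imply motion. A minimal sketch assuming numpy; the `edge_map` and `is_moving` helpers, the difference-based edge detector standing in for a full Sobel pass, and the threshold values are all made up for illustration. Note that, as the text says, this only answers "moving or still", not "broken or working":

```python
import numpy as np

def edge_map(img, thresh=1.0):
    # Cheap edge detector: absolute horizontal/vertical pixel
    # differences stand in for a full Sobel pass.
    gx = np.abs(np.diff(img, axis=1))[:-1, :]
    gy = np.abs(np.diff(img, axis=0))[:, :-1]
    return (gx + gy) > thresh

def is_moving(frame_a, frame_b, min_changed=1):
    """Compare edge maps of two frames; changed edge pixels imply motion."""
    changed = np.count_nonzero(edge_map(frame_a) ^ edge_map(frame_b))
    return changed >= min_changed

# A bright bar (the "arm") at one column, then shifted one column over.
a = np.zeros((6, 6)); a[:, 2] = 10.0
b = np.zeros((6, 6)); b[:, 3] = 10.0
print(is_moving(a, b))  # the bar moved
print(is_moving(a, a))  # identical frames: still
```

A broken arm that still sweeps back and forth would pass this check just like a working one, which is exactly the blind spot described above.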