Great article about gradient descent. Many modern machine learning algorithms (e.g., feedforward neural networks) use stochastic gradient descent to update their parameters. There are also modifications (e.g., mini-batch stochastic gradient descent) that speed up training while preserving accuracy. Looking forward to more of your articles about those techniques.
I briefly mentioned some of those techniques at the end.
You're correct that stochastic gradient descent has largely overtaken traditional gradient descent, if for no other reason than each update is much cheaper to compute.
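To make the speed difference concrete, here's a minimal sketch on a toy least-squares problem (all names and values here are illustrative, not from the article): the full gradient touches every example on each step, while the stochastic gradient uses a single randomly chosen example, so each SGD update is far cheaper.

```python
import numpy as np

# Toy least-squares problem: minimize mean of (x_i . w - y_i)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=1000)

def full_gradient(w):
    # Gradient of the mean squared error over the ENTIRE dataset: O(n) per step.
    return 2 * X.T @ (X @ w - y) / len(y)

def stochastic_gradient(w, i):
    # Gradient estimated from a single example i: O(1) per step.
    xi, yi = X[i], y[i]
    return 2 * xi * (xi @ w - yi)

lr = 0.01
w = np.zeros(3)
for step in range(2000):
    i = rng.integers(len(y))       # pick one example at random
    w -= lr * stochastic_gradient(w, i)  # SGD update

# w should now be close to w_true, up to SGD noise.
print(w)
```

Despite using noisy one-example gradients, the iterates still converge to (a neighborhood of) the true weights, which is why SGD trades a little per-step accuracy for a large per-step speedup.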
Anyway, there will definitely be more articles about these other techniques in the future.