The integration to steady state of many initial value ODEs and PDEs using the forward Euler method can alternatively be viewed as gradient descent for an associated minimization problem. Greedy algorithms such as steepest descent for determining the step size are as slow to reach steady state as forward Euler integration with the best uniform step size. But other, much faster methods using bolder step size selection exist. Various alternatives are investigated from both theoretical and practical points of view. The steepest descent method is also known for the regularizing or smoothing effect that its first few steps have for certain inverse problems, amounting to a finite time regularization. We further investigate the retention of this property using the faster gradient descent variants in the context of two applications. When the combination of regularization and accuracy demands more than a dozen or so steepest descent steps, the alternatives offer an advantage, even though (indeed because) the absolute stability limit of forward Euler is carefully yet severely violated.
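As a minimal illustration of the equivalence referred to above (the notation $f$, $u_k$, $h_k$ is assumed here, not taken from the text): applying forward Euler with step size $h_k$ to the gradient flow for an objective $f$ yields exactly a gradient descent iteration, and the greedy step size choice corresponds to exact line search.
\[
\frac{du}{dt} = -\nabla f(u), \quad u(0)=u_0
\quad\Longrightarrow\quad
u_{k+1} = u_k - h_k\,\nabla f(u_k),
\]
where the steepest descent (greedy) choice is $h_k = \arg\min_{h>0} f\bigl(u_k - h\,\nabla f(u_k)\bigr)$, while the faster variants alluded to above use bolder choices of $h_k$ that may exceed the forward Euler absolute stability limit on individual steps.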