Neural networks have become a widely adopted tool for tackling a variety of problems in machine learning and artificial intelligence. In this contribution, we use the mathematical framework of local stability analysis to gain a deeper understanding of the learning dynamics of feedforward neural networks. We derive equations for the tangent operator of the learning dynamics of three-layer networks learning regression tasks; the results hold for an arbitrary number of nodes and arbitrary choices of activation functions. Applying these results to such a network, we investigate numerically how stability indicators relate to the final training loss. Although the specific outcomes vary with the choice of initial conditions and activation functions, we demonstrate that the final training loss can be predicted by monitoring finite-time Lyapunov exponents during training.
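As a rough illustration of the last point, the sketch below (not taken from the paper) estimates the largest finite-time Lyapunov exponent of plain gradient-descent learning dynamics for a small three-layer network on a toy regression task, using a standard Benettin-style two-trajectory renormalization scheme. The architecture, task, activation function, learning rate, and all identifiers are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: monitoring the largest finite-time Lyapunov exponent (FTLE)
# of gradient-descent learning dynamics via two nearby trajectories.
# All choices below (network size, task, eta, T, eps) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

def init_params(n_in=1, n_hidden=16, n_out=1):
    # Three-layer network: input -> hidden -> output.
    return {
        "W1": rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def loss_and_grads(p, X, Y):
    # Forward pass with tanh activation (one possible choice).
    h = np.tanh(X @ p["W1"] + p["b1"])
    out = h @ p["W2"] + p["b2"]
    err = out - Y
    loss = 0.5 * np.mean(err ** 2)
    # Backward pass (mean-squared-error loss).
    dout = err / X.shape[0]
    dh = (dout @ p["W2"].T) * (1 - h ** 2)  # tanh derivative
    grads = {"W1": X.T @ dh, "b1": dh.sum(0),
             "W2": h.T @ dout, "b2": dout.sum(0)}
    return loss, grads

def gd_step(p, X, Y, eta):
    # One step of the learning dynamics: w <- w - eta * grad L(w).
    loss, g = loss_and_grads(p, X, Y)
    return {k: p[k] - eta * g[k] for k in p}, loss

def flatten(p):
    return np.concatenate([p[k].ravel() for k in sorted(p)])

eta, T, eps = 0.5, 2000, 1e-7
p = init_params()

# Perturbed copy of the parameters, with separation normalized to eps.
q = {k: v + rng.normal(size=v.shape) for k, v in p.items()}
d0 = flatten(q) - flatten(p)
q = {k: p[k] + (eps / np.linalg.norm(d0)) * (q[k] - p[k]) for k in p}

log_growth = 0.0
for t in range(T):
    p, loss = gd_step(p, X, Y, eta)
    q, _ = gd_step(q, X, Y, eta)
    dist = np.linalg.norm(flatten(q) - flatten(p))
    log_growth += np.log(dist / eps)
    # Renormalize the perturbed trajectory back to distance eps.
    q = {k: p[k] + (eps / dist) * (q[k] - p[k]) for k in p}

ftle = log_growth / T  # per-step largest finite-time Lyapunov exponent
print(f"final loss = {loss:.4e}, largest FTLE = {ftle:+.4f}")
```

In this picture, a negative FTLE signals contraction of the learning dynamics toward a minimum, while an exponent near or above zero indicates slow or unstable convergence, which is the kind of stability indicator the abstract refers to.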