This technical report by Barak A. Pearlmutter of Carnegie Mellon University discusses methods for learning state space trajectories in recurrent neural networks. The central result is the computation of the gradient of an error functional with respect to the network's weights and time constants, which enables gradient descent to minimize the error. The report derives a forward-backward technique for approximating these derivatives and presents simulations demonstrating the network's ability to follow limit cycles. It also explores extensions such as mutable time delays and teacher forcing, and includes a complexity analysis. The research highlights the suitability of these networks for tasks in signal processing, control, and speech. The report concludes with plans for future work and comparisons to related work.
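To make the training scheme described above concrete, here is a minimal sketch, not the report's code: it Euler-discretizes a common continuous-time formulation of such units, T_i dy_i/dt = -y_i + sigma(sum_j w_ji y_j), and trains both the weights and the time constants by gradient descent on the integrated squared error between an output unit and a desired trajectory. Reverse-mode autodiff over the unrolled dynamics stands in for the report's analytically derived forward-backward (adjoint) computation; the network size, step size, target trajectory, and learning rate are illustrative assumptions.

```python
# Sketch only: Euler-discretized continuous-time recurrent network trained by
# gradient descent on a trajectory-following error. Autodiff over the unrolled
# dynamics replaces the report's analytic backward pass; all constants are assumed.
import jax
import jax.numpy as jnp

N, STEPS, DT = 4, 200, 0.05           # units, Euler steps, step size (assumed)

def simulate(params, y0):
    """Integrate T_i dy_i/dt = -y_i + sigmoid(x_i) forward with Euler steps."""
    w, T = params                      # weights and per-unit time constants
    def step(y, _):
        x = w.T @ y                    # net input x_j = sum_i w_ij y_i
        y = y + DT * (-y + jax.nn.sigmoid(x)) / T
        return y, y
    _, traj = jax.lax.scan(step, y0, None, length=STEPS)
    return traj                        # state trajectory, shape (STEPS, N)

def loss(params, y0, desired):
    """Integrated squared error of unit 0 against the desired trajectory."""
    traj = simulate(params, y0)
    return 0.5 * DT * jnp.sum((traj[:, 0] - desired) ** 2)

key = jax.random.PRNGKey(0)
w = 0.5 * jax.random.normal(key, (N, N))
T = jnp.ones(N)                        # time constants, also trained below
y0 = jnp.zeros(N)
# A simple oscillatory target standing in for a desired limit cycle.
desired = 0.5 + 0.4 * jnp.sin(jnp.linspace(0.0, 4 * jnp.pi, STEPS))

grad_fn = jax.jit(jax.grad(loss))      # gradients w.r.t. weights and time constants
for _ in range(500):
    gw, gT = grad_fn((w, T), y0, desired)
    w = w - 0.1 * gw                   # plain gradient descent (learning rate assumed)
    T = jnp.clip(T - 0.1 * gT, 0.2, None)  # keep time constants positive
print("final error:", loss((w, T), y0, desired))
```

The finite step size and the autodiff-based backward pass are conveniences of this sketch; the report itself works with the continuous-time error functional and derives the corresponding adjoint equations directly.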