We are often given time series data from which we would like to learn the governing laws. How can we learn a dynamical system that reproduces not only the short-term but also the long-term patterns in the data? In this talk, we address this question for chaotic time series generated by an unknown ergodic dynamical system. Under certain conditions, we prove that regression of the one-step dynamics suffices to emulate ergodic (long-term) behavior, provided we also learn the first-order derivative (Jacobian) of the one-step dynamics.

In the second half, we study the problem of sampling from a target probability distribution in the setting of simulation-based Bayesian inference. Here, either the target is known up to a normalization constant, or its score function (the gradient of its log density) can be evaluated anywhere. We propose to learn the zero of a score residual operator: the difference between the target score and the score of the pushforward of a known source distribution through a transport map. The desired transport map is an invertible function that takes samples from an easy-to-sample source density and produces samples distributed according to the target density. We compare this score-operator Newton-Raphson method to existing approaches for sampling via measure transport. Finally, we turn to the generative modeling setting and discuss how the dynamical systems approach to measure transport can yield new insights.

The first part of the talk is joint work with Jeongjin Park (Georgia Tech) and the second with Youssef Marzouk (MIT) and Adriaan de Clercq (UChicago).
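The idea in the first half, regressing the one-step map together with its Jacobian, can be sketched in one dimension. The example below is a hypothetical illustration, not the talk's method: it fits a polynomial one-step model to logistic-map data by least squares over stacked value and derivative residuals, with derivative targets computed from the known map for simplicity (in practice they would be estimated).

```python
import numpy as np

# Illustrative sketch: fit a one-step model to logistic-map data,
# regressing both the map values and their derivatives (scalar Jacobians).
r = 3.9
f = lambda x: r * x * (1.0 - x)           # true one-step dynamics
df = lambda x: r - 2.0 * r * x            # true Jacobian (scalar in 1-D)

rng = np.random.default_rng(0)
x = rng.uniform(0.05, 0.95, size=200)     # sampled states
y, dy = f(x), df(x)                       # one-step and Jacobian targets

# Model f_hat(x) = c0 + c1*x + c2*x^2, with derivative c1 + 2*c2*x.
A_val = np.stack([np.ones_like(x), x, x**2], axis=1)
A_der = np.stack([np.zeros_like(x), np.ones_like(x), 2.0 * x], axis=1)
lam = 1.0                                 # weight on the Jacobian-matching term
A = np.vstack([A_val, lam * A_der])
b = np.concatenate([y, lam * dy])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(coef, 3))                  # recovers [0, r, -r], i.e. r*x*(1-x)
```

Because the model class contains the true map, the joint regression recovers it exactly; the point of the sketch is only the structure of the loss, which penalizes both value and Jacobian mismatch.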
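The score-residual idea from the second half can also be illustrated in a toy setting. The sketch below is an assumption-laden 1-D example, not the general operator method: the source is N(0,1), the target is N(mu, sig^2), and the transport map is affine, T(x) = a*x + b, so the pushforward is N(b, a^2) and its score is available in closed form. A Newton-Raphson iteration drives the score residual (target score minus pushforward score) to zero at two collocation points.

```python
import numpy as np

# Toy sketch of zeroing a score residual by Newton-Raphson.
mu, sig = 2.0, 0.5
target_score = lambda y: -(y - mu) / sig**2       # score of the target density
push_score = lambda y, a, b: -(y - b) / a**2      # score of T_# N(0,1), T = a*x + b

ys = np.array([0.0, 1.0])                         # collocation points

def residual(p):
    a, b = p
    return target_score(ys) - push_score(ys, a, b)

def jacobian(p):                                  # analytic d(residual)/d(a, b)
    a, b = p
    return np.stack([-2.0 * (ys - b) / a**3, np.full(2, -1.0 / a**2)], axis=1)

p = np.array([1.0, 0.0])                          # start from the identity map
for _ in range(20):
    p = p - np.linalg.solve(jacobian(p), residual(p))
a, b = p
print(abs(a), b)                                  # |a| ≈ sig, b ≈ mu
```

The iteration converges to |a| = sig and b = mu, at which point the pushforward density matches the target and the residual vanishes; the talk's method replaces this finite-dimensional root-find with a Newton-Raphson iteration on the score residual operator itself.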