Diffusion models are a prevailing generative AI technology, and this talk will focus on understanding their performance, with the goals of certifying their results, choosing their hyperparameters, and assessing their applicability to downstream tasks. A major part of the talk will quantify the generation accuracy of diffusion models. The importance of this problem has already led to a rich and substantial literature; however, most existing theoretical investigations assume that an epsilon-accurate score function is given by an oracle, and focus only on the inference process of the diffusion model. I will instead describe a first quantitative understanding of the end-to-end generative modeling protocol, including both score training (optimization) and inference (sampling). The resulting error analysis yields insights on how to design the training and inference processes for efficacious generation. The talk will then turn to the generalization of diffusion models: when a model is not memorizing its training data set, what exactly does it generate? This question is pertinent not only to privacy and copyright considerations, but also to the creation of new knowledge. Some quantitative results on the inductive biases of diffusion model generation will be described.