Dr. Jinyoung Choi
Postdoctoral Researcher
Seoul National University
Jinyoung Choi is a Postdoctoral Researcher at the Computer Vision Lab, Seoul National University. Her primary research interests lie in computer vision and deep learning, with a strong focus on generative models. Her work aims to bridge mathematical theory with practical applications by developing novel training algorithms and by enhancing the controllability and efficiency of data generation. She is also interested in applying a generative perspective to a wider range of problems, and she has experience in learned image and video compression. She has published in top-tier computer vision and machine learning conferences such as CVPR, NeurIPS, and ICLR. She received a Ph.D. in Electrical and Computer Engineering from Seoul National University, as well as an M.S. in Mathematics and a B.S. in Industrial & Management Engineering and Mathematics from POSTECH.
Enhancing Efficiency-Performance Trade-offs of Diffusion Probabilistic Models
Diffusion probabilistic models (DPMs) have emerged as the state of the art in generative AI, but their slow sampling speed remains a critical bottleneck for real-world applications. This dissertation introduces methodologies that address the fundamental trade-off between sampling efficiency and generation performance. The overall challenge of DPM efficiency is systematically addressed across three core research scopes, Training, Sampling, and Distillation, with one methodology developed under each perspective. First, a novel training objective is proposed to fundamentally enhance the model's capacity. This methodology corrects the prediction errors that arise when sampling with few steps, enabling the model to achieve stable performance at reduced inference time. Second, an advanced sampling method is introduced to increase the numerical accuracy of diffusion sampling. This general acceleration algorithm reduces the truncation error by optimally combining multiple ODE solutions at intermediate steps, thereby generating higher-quality samples for a given number of time steps. Third, an efficient and novel distillation method is devised to create a fast single-step generative model from a pretrained multi-step model. The technique matches the distributions of the two models via the characteristic function, allowing rapid convergence and the generation of diverse samples with greatly improved training efficiency. Collectively, these complementary methodologies offer a comprehensive strategy for achieving efficient, high-fidelity generative models.
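The idea behind the second contribution, combining several numerical solutions so that their leading truncation errors cancel, can be illustrated on a toy ODE. The sketch below is not the dissertation's algorithm; it shows the classical Richardson-extrapolation version of the idea, with hypothetical helper names, applied to dy/dt = -y whose exact solution at t = 1 is e^(-1).

```python
import numpy as np

def euler(f, y0, t0, t1, n):
    # Explicit Euler integration of y' = f(t, y) with n uniform steps.
    y, t = y0, t0
    h = (t1 - t0) / n
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: -y          # toy ODE: dy/dt = -y
exact = np.exp(-1.0)         # exact solution y(1) for y(0) = 1

coarse = euler(f, 1.0, 0.0, 1.0, 50)    # step size h
fine = euler(f, 1.0, 0.0, 1.0, 100)     # step size h/2
combined = 2.0 * fine - coarse          # cancels the leading O(h) error term

print(abs(coarse - exact), abs(fine - exact), abs(combined - exact))
```

Because Euler's global error is O(h), halving the step roughly halves the error, and the weighted combination 2*fine - coarse removes the first-order term entirely, leaving a much smaller higher-order error; the dissertation's method pursues an analogous gain by combining ODE solutions at intermediate diffusion time steps.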
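The third contribution matches the output distributions of a multi-step teacher and a single-step student through their characteristic functions. As a minimal, purely illustrative sketch (the function names and the choice of Gaussian samples as stand-ins for model outputs are assumptions, not the dissertation's objective), the empirical characteristic function E[exp(i<t, X>)] can be estimated from samples and compared across two sample sets:

```python
import numpy as np

def empirical_cf(samples, freqs):
    # Empirical characteristic function E[exp(i <t, X>)] estimated from
    # samples of shape (n, d), evaluated at frequency vectors of shape (m, d).
    return np.exp(1j * samples @ freqs.T).mean(axis=0)

def cf_matching_loss(x, y, freqs):
    # Squared distance between the empirical CFs of two sample sets,
    # averaged over the chosen frequency vectors.
    diff = empirical_cf(x, freqs) - empirical_cf(y, freqs)
    return np.mean(np.abs(diff) ** 2)

rng = np.random.default_rng(0)
teacher = rng.normal(0.0, 1.0, size=(4000, 2))  # stand-in for multi-step samples
student = rng.normal(0.0, 1.0, size=(4000, 2))  # stand-in for single-step samples
shifted = rng.normal(2.0, 1.0, size=(4000, 2))  # a deliberately mismatched distribution

freqs = rng.normal(size=(64, 2))
print(cf_matching_loss(teacher, student, freqs))  # small: same distribution
print(cf_matching_loss(teacher, shifted, freqs))  # larger: mismatched distribution
```

Since the characteristic function uniquely determines a distribution, driving such a discrepancy to zero forces the student's samples toward the teacher's distribution, which is the intuition behind distilling a single-step generator this way.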