Jeongyeol Kwon

University of Wisconsin-Madison
Verified email at wisc.edu
Cited by 478

A fully first-order method for stochastic bilevel optimization

J Kwon, D Kwon, S Wright… - … Conference on Machine …, 2023 - proceedings.mlr.press
We consider stochastic unconstrained bilevel optimization problems when only the first-order
gradient oracles are available. While numerous optimization methods have been proposed …
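
The snippet above only states the problem setting, so here is a rough picture of what "fully first-order" can mean in practice: maintain two lower-level iterates and combine their gradients into a penalty-style surrogate hypergradient, with no Hessian-vector products. The quadratic objectives, step size eta, and penalty weight lam below are made up for illustration; this is a minimal sketch of the general idea, not the paper's algorithm or guarantees.

    import numpy as np

    # Toy smooth objectives (made up for illustration):
    #   upper level  f(x, y) = 0.5 * ||x - y||^2
    #   lower level  g(x, y) = 0.5 * ||y - A @ x||^2
    A = np.array([[2.0, 0.0], [0.0, 0.5]])

    def grad_f(x, y):
        return x - y, y - x          # (d/dx, d/dy)

    def grad_g(x, y):
        r = y - A @ x
        return -A.T @ r, r           # (d/dx, d/dy)

    def first_order_bilevel(steps=2000, lam=10.0, eta=0.01):
        """Penalty-style bilevel updates using only first-order gradients.

        y tracks an approximate minimizer of lam * g(x, .) + f(x, .),
        z tracks an approximate minimizer of g(x, .), and x descends the
        surrogate gradient  d/dx f(x, y) + lam * (d/dx g(x, y) - d/dx g(x, z)).
        """
        x, y, z = np.ones(2), np.zeros(2), np.zeros(2)
        for _ in range(steps):
            # Inner updates: plain gradient steps, no second-order information.
            y -= eta * (lam * grad_g(x, y)[1] + grad_f(x, y)[1])
            z -= eta * grad_g(x, z)[1]
            # Outer update with the first-order surrogate hypergradient.
            hx = grad_f(x, y)[0] + lam * (grad_g(x, y)[0] - grad_g(x, z)[0])
            x -= eta * hx
        return x, y

    if __name__ == "__main__":
        x, y = first_order_bilevel()
        print("x:", x, "y:", y)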

Feed two birds with one scone: Exploiting wild data for both out-of-distribution generalization and detection

H Bai, G Canal, X Du, J Kwon… - … on Machine Learning, 2023 - proceedings.mlr.press
Modern machine learning models deployed in the wild can encounter both covariate and
semantic shifts, giving rise to the problems of out-of-distribution (OOD) generalization and OOD …

RL for latent MDPs: Regret guarantees and a lower bound

J Kwon, Y Efroni, C Caramanis… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this work, we consider the regret minimization problem for reinforcement learning in latent
Markov Decision Processes (LMDP). In an LMDP, an MDP is randomly drawn from a set of …
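
To make the setting concrete, the toy simulation below follows the LMDP interaction protocol the abstract describes: at the start of each episode a latent MDP is drawn from a fixed mixture and the agent acts for H steps without ever observing which one was drawn. The sizes M, S, A, H and the random transition and reward tables are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up sizes for illustration: M latent MDPs over S states, A actions, horizon H.
    M, S, A, H = 2, 4, 2, 5
    mixing_weights = np.array([0.5, 0.5])

    # Random transition kernels P[m, s, a] and reward tables R[m, s, a] per latent MDP.
    P = rng.dirichlet(np.ones(S), size=(M, S, A))   # shape (M, S, A, S)
    R = rng.uniform(size=(M, S, A))

    def run_episode(policy):
        """One LMDP episode: the latent index m is drawn but never revealed to the agent."""
        m = rng.choice(M, p=mixing_weights)         # hidden context for this episode
        s, total_reward, trajectory = 0, 0.0, []
        for h in range(H):
            a = policy(s, h)                        # agent sees only (s, h), not m
            r = R[m, s, a]
            s_next = rng.choice(S, p=P[m, s, a])
            trajectory.append((s, a, r))
            total_reward += r
            s = s_next
        return trajectory, total_reward

    if __name__ == "__main__":
        uniform_policy = lambda s, h: rng.integers(A)
        _, ret = run_episode(uniform_policy)
        print("episode return:", ret)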

On penalty methods for nonconvex bilevel optimization and first-order stochastic approximation

J Kwon, D Kwon, S Wright, R Nowak - arXiv preprint arXiv:2309.01753, 2023 - arxiv.org
In this work, we study first-order algorithms for solving Bilevel Optimization (BO) where the
objective functions are smooth but possibly nonconvex in both levels and the variables are …
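
For context on the penalty viewpoint the title refers to, the display below is the generic value-function penalty reformulation of a bilevel problem with penalty parameter \sigma; the notation is illustrative and this is not a statement of the paper's specific algorithm or results.

    % Bilevel problem (upper objective f, lower objective g) and its
    % value-function penalty reformulation with penalty parameter \sigma.
    \begin{align*}
      &\min_{x}\; f\bigl(x, y^*(x)\bigr)
        \quad \text{s.t.} \quad y^*(x) \in \arg\min_{y} g(x, y) \\
      &\quad\Longrightarrow\quad
      \min_{x,\,y}\; f(x, y) + \sigma \bigl( g(x, y) - g^*(x) \bigr),
      \qquad g^*(x) := \min_{y} g(x, y).
    \end{align*}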

Global convergence of the EM algorithm for mixtures of two component linear regression

J Kwon, W Qian, C Caramanis… - … on Learning Theory, 2019 - proceedings.mlr.press
The Expectation-Maximization algorithm is perhaps the most broadly used algorithm for
inference of latent variable problems. A theoretical understanding of its performance, however, …
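
The snippet above does not spell out the EM iteration itself, so the sketch below shows a standard sample-EM update for the symmetric two-component mixed linear regression model y_i = r_i <beta, x_i> + noise with r_i uniform on {+1, -1}: the E-step reduces to a tanh weight and the M-step to a single weighted least-squares solve. The synthetic data, dimensions, and noise level are made up for illustration; this is a generic sketch, not the paper's code.

    import numpy as np

    rng = np.random.default_rng(1)

    def em_two_component_mlr(X, y, noise_var=1.0, iters=50):
        """EM for the symmetric two-component mixed linear regression model
        y_i = r_i * <beta, x_i> + noise,  r_i uniform on {+1, -1}.

        E-step: the posterior weight of the +1 label enters the update only
        through tanh(y_i * <beta, x_i> / noise_var).
        M-step: weighted least squares, which reduces to one linear solve.
        """
        n, d = X.shape
        beta = rng.normal(size=d)                     # random initialization
        gram = X.T @ X
        for _ in range(iters):
            w = np.tanh(y * (X @ beta) / noise_var)   # = 2 * P(r_i = +1 | data) - 1
            beta = np.linalg.solve(gram, X.T @ (w * y))
        return beta

    if __name__ == "__main__":
        # Synthetic data from the model (parameters made up for illustration).
        n, d, sigma = 2000, 5, 0.5
        beta_star = rng.normal(size=d)
        X = rng.normal(size=(n, d))
        labels = rng.choice([-1.0, 1.0], size=n)
        y = labels * (X @ beta_star) + sigma * rng.normal(size=n)
        beta_hat = em_two_component_mlr(X, y, noise_var=sigma**2)
        err = min(np.linalg.norm(beta_hat - beta_star),
                  np.linalg.norm(beta_hat + beta_star))
        print("estimation error (up to sign):", err)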

On the minimax optimality of the EM algorithm for learning two-component mixed linear regression

J Kwon, N Ho, C Caramanis - International Conference on …, 2021 - proceedings.mlr.press
We study the convergence rates of the EM algorithm for learning two-component mixed
linear regression under all regimes of signal-to-noise ratio (SNR). We resolve a long-standing …
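
For reference, the symmetric two-component model and the signal-to-noise ratio that this line of work typically uses can be written as below; the notation is illustrative and may differ from the paper's.

    % Symmetric two-component mixed linear regression and its signal-to-noise ratio.
    y_i = r_i \,\langle \beta^*, x_i \rangle + \varepsilon_i,
    \qquad r_i \sim \mathrm{Unif}\{-1, +1\},\quad
    x_i \sim \mathcal{N}(0, I_d),\quad
    \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
    \qquad \mathrm{SNR} := \|\beta^*\| / \sigma.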

On the computational and statistical complexity of over-parameterized matrix sensing

J Zhuo, J Kwon, N Ho, C Caramanis - Journal of Machine Learning …, 2024 - jmlr.org
We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent
(FGD) method when the specified rank is larger than the true rank. We refer to this as over…
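
As a concrete instance of the method named in the snippet, the sketch below runs factorized gradient descent on synthetic Gaussian measurements with a factor rank k larger than the true rank. The problem sizes, step size, and initialization scale are made up for illustration; it is a plain-vanilla FGD sketch, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(2)

    def sense(A, M):
        """Linear measurements <A_i, M> for a stack of sensing matrices A."""
        return np.einsum('nij,ij->n', A, M)

    def factorized_gd(A, y, d, k, step=0.01, iters=1000):
        """Factorized Gradient Descent for matrix sensing: parameterize M = U U^T
        with U of shape (d, k); k may exceed the true rank (over-parameterization)."""
        U = 0.1 * rng.normal(size=(d, k))            # small random initialization
        n = len(y)
        for _ in range(iters):
            residual = sense(A, U @ U.T) - y         # shape (n,)
            # Gradient of (1/2n) * sum_i (<A_i, U U^T> - y_i)^2 with respect to U.
            grad = np.einsum('n,nij->ij', residual, A + A.transpose(0, 2, 1)) @ U / n
            U -= step * grad
        return U

    if __name__ == "__main__":
        # Synthetic instance (sizes and true rank made up for illustration).
        d, true_rank, k, n = 10, 2, 5, 400
        V = rng.normal(size=(d, true_rank)) / np.sqrt(d)
        M_star = V @ V.T                              # ground-truth PSD matrix, rank 2
        A = rng.normal(size=(n, d, d))
        y = sense(A, M_star)
        U_hat = factorized_gd(A, y, d, k)
        rel_err = np.linalg.norm(U_hat @ U_hat.T - M_star) / np.linalg.norm(M_star)
        print("relative error:", rel_err)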

EM converges for a mixture of many linear regressions

J Kwon, C Caramanis - International Conference on Artificial …, 2020 - proceedings.mlr.press
We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of
linear regressions with an arbitrary number $k$ of components. We show that as long as …

Reinforcement learning in reward-mixing MDPs

J Kwon, Y Efroni, C Caramanis… - Advances in Neural …, 2021 - proceedings.neurips.cc
Learning a near optimal policy in a partially observable system remains an elusive challenge
in contemporary reinforcement learning. In this work, we consider episodic reinforcement …

The EM algorithm gives sample-optimality for learning mixtures of well-separated Gaussians

J Kwon, C Caramanis - Conference on Learning Theory, 2020 - proceedings.mlr.press
We consider the problem of spherical Gaussian mixture models with $k \geq 3$ components
when the components are well separated. A fundamental previous result established that …
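
Since the snippet is cut off before describing the algorithm, the sketch below is a generic EM iteration for a spherical Gaussian mixture with a known shared variance, run on a well-separated synthetic mixture. All sizes, the separation, and the known-variance assumption are made up for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    def em_spherical_gmm(X, k, sigma=1.0, iters=100):
        """EM for a spherical Gaussian mixture with k components and known,
        shared covariance sigma^2 * I: alternate soft assignments (E-step)
        with weighted mean and weight updates (M-step)."""
        n, d = X.shape
        mu = X[rng.choice(n, size=k, replace=False)]       # initialize at random data points
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibilities r[i, j] = P(component j | x_i).
            sq_dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
            logits = np.log(pi)[None, :] - sq_dist / (2 * sigma**2)
            logits -= logits.max(axis=1, keepdims=True)    # for numerical stability
            r = np.exp(logits)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update mixing weights and component means.
            nk = r.sum(axis=0)
            pi = nk / n
            mu = (r.T @ X) / nk[:, None]
        return mu, pi

    if __name__ == "__main__":
        # Well-separated synthetic mixture (sizes and separation made up for illustration).
        k, d, n_per = 3, 2, 500
        true_means = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0]])
        X = np.vstack([m + rng.normal(size=(n_per, d)) for m in true_means])
        mu_hat, pi_hat = em_spherical_gmm(X, k)
        print("estimated means:\n", mu_hat)
        print("estimated weights:", pi_hat)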