Volume 99: Conference on Learning Theory, 25-28 June 2019, Phoenix, USA

Editors: Alina Beygelzimer, Daniel Hsu

Contents:

Preface

Conference on Learning Theory 2019: Preface

Alina Beygelzimer, Daniel Hsu; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1-2

Contributed Papers

Inference under Information Constraints: Lower Bounds from Chi-Square Contraction

Jayadev Acharya, Clément L Canonne, Himanshu Tyagi; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3-17

Learning in Non-convex Games with an Optimization Oracle

Naman Agarwal, Alon Gonen, Elad Hazan; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:18-29

Learning to Prune: Speeding up Repeated Computations

Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:30-33

Towards Testing Monotonicity of Distributions Over General Posets

Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:34-82

Testing Mixtures of Discrete Distributions

Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:83-114

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:115-137

Adaptively Tracking the Best Bandit Arm with an Unknown Number of Distribution Changes

Peter Auer, Pratik Gajane, Ronald Ortner; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:138-158

Achieving Optimal Dynamic Regret for Non-stationary Bandits without Prior Information

Peter Auer, Yifang Chen, Pratik Gajane, Chung-Wei Lee, Haipeng Luo, Ronald Ortner, Chen-Yu Wei; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:159-163

A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise

Francis Bach, Kfir Y Levy; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:164-194

Learning Two Layer Rectified Neural Networks in Polynomial Time

Ainesh Bakshi, Rajesh Jayaram, David P Woodruff; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:195-268

Private Center Points and Learning of Halfspaces

Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:269-282

Lower bounds for testing graphical models: colorings and antiferromagnetic Ising models

Ivona Bezáková, Antonio Blanca, Zongchen Chen, Daniel Štefankovič, Eric Vigoda; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:283-298

Approximate Guarantees for Dictionary Learning

Aditya Bhaskara, Wai Ming Tai; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:299-317

The Optimal Approximation Factor in Density Estimation

Olivier Bousquet, Daniel Kane, Shay Moran; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:318-341

Sorted Top-k in Rounds

Mark Braverman, Jieming Mao, Yuval Peres; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:342-382

Multi-armed Bandit Problems with Strategic Arms

Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:383-416

Universality of Computational Lower Bounds for Submatrix Detection

Matthew Brennan, Guy Bresler, Wasim Huleihel; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:417-468

Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness

Matthew Brennan, Guy Bresler; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:469-470

Learning rates for Gaussian mixtures under group action

Victor-Emmanuel Brunel; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:471-491

Near-optimal method for highly smooth convex optimization

Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:492-507

Improved Path-length Regret Bounds for Bandits

Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:508-528

Optimal Learning of Mallows Block Model

Robert Busa-Fekete, Dimitris Fotakis, Balázs Szörényi, Manolis Zampetakis; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:529-532

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:533-557

Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm

Tongyi Cao, Akshay Krishnamurthy; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:558-588

A Rank-1 Sketch for Matrix Multiplicative Weights

Yair Carmon, John C Duchi, Aaron Sidford, Kevin Tian; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:589-623

On the Computational Power of Online Gradient Descent

Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:624-662

Active Regression via Linear-Sample Sparsification

Xue Chen, Eric Price; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:663-695

A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal and Parameter-free

Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:696-726

Faster Algorithms for High-Dimensional Robust Covariance Estimation

Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:727-757

Testing Symmetric Markov Chains Without Hitting

Yeshwanth Cherapanamjeri, Peter L. Bartlett; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:758-785

Fast Mean Estimation with Sub-Gaussian Rates

Yeshwanth Cherapanamjeri, Nicolas Flammarion, Peter L. Bartlett; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:786-806

Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games

Yun Kuen Cheung, Georgios Piliouras; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:807-834

Pure entropic regularization for metrical task systems

Christian Coester, James R. Lee; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:835-848

A near-optimal algorithm for approximating the John Ellipsoid

Michael B. Cohen, Ben Cousins, Yin Tat Lee, Xin Yang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:849-873

Artificial Constraints and Hints for Unbounded Online Learning

Ashok Cutkosky; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:874-894

Combining Online Learning Guarantees

Ashok Cutkosky; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:895-913

Learning from Weakly Dependent Data under Dobrushin’s Condition

Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:914-928

Space lower bounds for linear prediction in the streaming model

Yuval Dagan, Gil Kur, Ohad Shamir; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:929-954

Computationally and Statistically Efficient Truncated Regression

Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:955-960

Reconstructing Trees from Traces

Sami Davies, Miklos Z. Racz, Cyrus Rashtchian; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:961-978

Is your function low dimensional?

Anindya De, Elchanan Mossel, Joe Neeman; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:979-993

Computational Limitations in Robust Classification and Win-Win Results

Akshay Degwekar, Preetum Nakkiran, Vinod Vaikuntanathan; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:994-1028

Fast determinantal point processes via distortion-free intermediate sampling

Michał Dereziński; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1029-1049

Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression

Michał Dereziński, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1050-1069

Communication and Memory Efficient Testing of Discrete Distributions

Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1070-1106

Testing Identity of Multidimensional Histograms

Ilias Diakonikolas, Daniel M. Kane, John Peebles; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1107-1131

Lower Bounds for Parallel and Randomized Convex Optimization

Jelena Diakonikolas, Cristóbal Guzmán; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1132-1157

On the Performance of Thompson Sampling on Logistic Bandits

Shi Dong, Tengyu Ma, Benjamin Van Roy; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1158-1160

Lower Bounds for Locally Private Estimation via Communication Complexity

John Duchi, Ryan Rogers; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1161-1191

Sharp Analysis for Nonconvex SGD Escaping from Saddle Points

Cong Fang, Zhouchen Lin, Tong Zhang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1192-1234

Achieving the Bayes Error Rate in Stochastic Block Model by SDP, Robustly

Yingjie Fei, Yudong Chen; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1235-1269

High probability generalization bounds for uniformly stable algorithms with nearly optimal rate

Vitaly Feldman, Jan Vondrak; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1270-1279

Sum-of-squares meets square loss: Fast rates for agnostic tensor completion

Dylan J. Foster, Andrej Risteski; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1280-1318

The Complexity of Making the Gradient Small in Stochastic Convex Optimization

Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1319-1345

Statistical Learning with a Nuisance Component

Dylan J. Foster, Vasilis Syrgkanis; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1346-1348

On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA

Dan Garber; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1349-1373

Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization

Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1374-1391

Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives

Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1392-1393

Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization

Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1394-1448

Learning Ising Models with Independent Failures

Surbhi Goel, Daniel M. Kane, Adam R. Klivans; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1449-1469

Learning Neural Networks with Two Nonlinear Layers in Polynomial Time

Surbhi Goel, Adam R. Klivans; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1470-1499

When can unlabeled data improve the learning rate?

Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1500-1518

Sampling and Optimization on Convex Sets in Riemannian Manifolds of Non-Negative Curvature

Navin Goyal, Abhishek Shetty; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1519-1561

Better Algorithms for Stochastic Bandits with Adversarial Corruptions

Anupam Gupta, Tomer Koren, Kunal Talwar; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1562-1578

Tight analyses for non-smooth stochastic gradient descent

Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1579-1613

Reasoning in Bayesian Opinion Exchange Networks Is PSPACE-Hard

Jan Hązła, Ali Jadbabaie, Elchanan Mossel, M. Amin Rahimian; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1614-1648

How Hard is Robust Mean Estimation?

Samuel B. Hopkins, Jerry Li; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1649-1682

A Robust Spectral Algorithm for Overcomplete Tensor Decomposition

Samuel B. Hopkins, Tselil Schramm, Jonathan Shi; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1683-1722

Sample-Optimal Low-Rank Approximation of Distance Matrices

Piotr Indyk, Ali Vakilian, Tal Wagner, David P Woodruff; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1723-1751

Making the Last Iterate of SGD Information Theoretically Optimal

Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1752-1755

Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation

Vishesh Jain, Frederic Koehler, Jingbo Liu, Elchanan Mossel; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1756-1771

The implicit bias of gradient descent on nonseparable data

Ziwei Ji, Matus Telgarsky; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1772-1798

An Optimal High-Order Tensor Method for Convex Optimization

Bo Jiang, Haoyue Wang, Shuzhong Zhang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1799-1801

Parameter-Free Online Convex Optimization with Sub-Exponential Noise

Kwang-Sung Jun, Francesco Orabona; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1802-1823

Sample complexity of partition identification using multi-armed bandits

Sandeep Juneja, Subhashini Krishnasamy; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1824-1852

Privately Learning High-Dimensional Distributions

Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1853-1902

On Communication Complexity of Classification Problems

Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1903-1943

Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1944-1974

Discrepancy, Coresets, and Sketches in Machine Learning

Zohar Karnin, Edo Liberty; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1975-1993

Bandit Principal Component Analysis

Wojciech Kotłowski, Gergely Neu; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1994-2024

Contextual bandits with continuous actions: Smoothing, zooming, and adapting

Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2025-2027

Distribution-Dependent Analysis of Gibbs-ERM Principle

Ilja Kuzborskij, Nicolò Cesa-Bianchi, Csaba Szepesvári; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2028-2054

Global Convergence of the EM Algorithm for Mixtures of Two Component Linear Regression

Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2055-2110

An Information-Theoretic Approach to Minimax Regret in Partial Monitoring

Tor Lattimore, Csaba Szepesvári; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2111-2139

Solving Empirical Risk Minimization in the Current Matrix Multiplication Time

Yin Tat Lee, Zhao Song, Qiuyi Zhang; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2140-2157

On Mean Estimation for General Norms with Statistical Queries

Jerry Li, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2158-2172

Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits

Yingkai Li, Yining Wang, Yuan Zhou; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2173-2174

Sharp Theoretical Analysis for Nonparametric Testing under Random Projection

Meimei Liu, Zuofeng Shang, Guang Cheng; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2175-2209

Combinatorial Algorithms for Optimal Design

Vivek Madan, Mohit Singh, Uthaipon Tantipongpipat, Weijun Xie; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2210-2258

Nonconvex sampling with the Metropolis-adjusted Langevin algorithm

Oren Mangoubi, Nisheeth K Vishnoi; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2259-2293

Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance

Ulysse Marteau-Ferey, Dmitrii Ostrovskii, Francis Bach, Alessandro Rudi; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2294-2340

Planting trees in graphs, and finding them back

Laurent Massoulié, Ludovic Stephan, Don Towsley; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2341-2371

Uniform concentration and symmetrization for weak interactions

Andreas Maurer, Massimiliano Pontil; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2372-2387

Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit

Song Mei, Theodor Misiakiewicz, Andrea Montanari; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2388-2464

Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem

Nadav Merlis, Shie Mannor; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2465-2489

Lipschitz Adaptivity with Multiple Learning Rates in Online Learning

Zakaria Mhammedi, Wouter M Koolen, Tim Van Erven; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2490-2511

VC Classes are Adversarially Robustly Learnable, but Only Improperly

Omar Montasser, Steve Hanneke, Nathan Srebro; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2512-2530

Affine Invariant Covariance Estimation for Heavy-Tailed Distributions

Dmitrii M. Ostrovskii, Alessandro Rudi; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2531-2550

Stochastic Gradient Descent Learns State Equations with Nonlinear Activations

Samet Oymak; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2551-2579

A Theory of Selective Prediction

Mingda Qiao, Gregory Valiant; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2580-2594

Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon

Alexander Rakhlin, Xiyu Zhai; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2595-2623

Classification with unknown class-conditional label noise on non-compact feature spaces

Henry Reeve, Ata Kabán; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2624-2651

The All-or-Nothing Phenomenon in Sparse Linear Regression

Galen Reeves, Jiaming Xu, Ilias Zadik; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2652-2663

Depth Separations in Neural Networks: What is Actually Being Separated?

Itay Safran, Ronen Eldan, Ohad Shamir; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2664-2666

How do infinite width bounded norm networks look in function space?

Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2667-2690

Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks

Ohad Shamir; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2691-2713

Learning Linear Dynamical Systems with Semi-Parametric Least Squares

Max Simchowitz, Ross Boczar, Benjamin Recht; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2714-2802

Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning

R. Srikant, Lei Ying; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2803-2830

Robustness of Spectral Methods for Community Detection

Ludovic Stephan, Laurent Massoulié; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2831-2860

Maximum Entropy Distributions: Bit Complexity and Stability

Damian Straszak, Nisheeth K. Vishnoi; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2861-2891

Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression

Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2892-2897

Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches

Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2898-2933

Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions

Adrien Taylor, Francis Bach; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2934-2992

The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling

Christopher Tosh, Sanjoy Dasgupta; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2993-3035

The Gap Between Model-Based and Model-Free Methods on the Linear Quadratic Regulator: An Asymptotic Viewpoint

Stephen Tu, Benjamin Recht; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3036-3083

Theoretical guarantees for sampling and inference in generative models with latent diffusions

Belinda Tzen, Maxim Raginsky; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3084-3114

Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds

Santosh Vempala, John Wilmes; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3115-3117

Estimation of smooth densities in Wasserstein distance

Jonathan Weed, Quentin Berthet; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3118-3119

Estimating the Mixing Time of Ergodic Markov Chains

Geoffrey Wolfer, Aryeh Kontorovich; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3120-3159

Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate

Lijun Zhang, Zhi-Hua Zhou; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3160-3179

Open Problems

Open Problem: Is Margin Sufficient for Non-Interactive Private Distributed Learning?

Amit Daniely, Vitaly Feldman; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3180-3184

Open Problem: How fast can a multiclass test set be overfit?

Vitaly Feldman, Roy Frostig, Moritz Hardt; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3185-3189

Open Problem: Do Good Algorithms Necessarily Query Bad Points?

Rong Ge, Prateek Jain, Sham M. Kakade, Rahul Kidambi, Dheeraj M. Nagaraj, Praneeth Netrapalli; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3190-3193

Open Problem: Risk of Ruin in Multiarmed Bandits

Filipo S. Perotto, Mathieu Bourgais, Bruno C. Silva, Laurent Vercouter; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3194-3197

Open Problem: Monotonicity of Learning

Tom Viering, Alexander Mey, Marco Loog; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3198-3201

Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory

Blake Woodworth, Nathan Srebro; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3202-3210