JMLR Volume 22

On the Optimality of Kernel-Embedding Based Goodness-of-Fit Tests
Krishnakumar Balasubramanian, Tong Li, Ming Yuan; (1):1−45, 2021.
[abs][pdf][bib]

Domain Generalization by Marginal Transfer Learning
Gilles Blanchard, Aniket Anand Deshmukh, Urun Dogan, Gyemin Lee, Clayton Scott; (2):1−55, 2021.
[abs][pdf][bib]      [code]

Regulating Greed Over Time in Multi-Armed Bandits
Stefano Tracà, Cynthia Rudin, Weiyu Yan; (3):1−99, 2021.
[abs][pdf][bib]      [code]

An Empirical Study of Bayesian Optimization: Acquisition Versus Partition
Erich Merrill, Alan Fern, Xiaoli Fern, Nima Dolatnia; (4):1−25, 2021.
[abs][pdf][bib]      [code]

The Decoupled Extended Kalman Filter for Dynamic Exponential-Family Factorization Models
Carlos A. Gomez-Uribe, Brian Karrer; (5):1−25, 2021.
[abs][pdf][bib]

Consistent estimation of small masses in feature sampling
Fadhel Ayed, Marco Battiston, Federico Camerlenghi, Stefano Favaro; (6):1−28, 2021.
[abs][pdf][bib]

Preference-based Online Learning with Dueling Bandits: A Survey
Viktor Bengs, Róbert Busa-Fekete, Adil El Mesaoudi-Paul, Eyke Hüllermeier; (7):1−108, 2021.
[abs][pdf][bib]

A Unified Framework for Random Forest Prediction Error Estimation
Benjamin Lu, Johanna Hardin; (8):1−41, 2021.
[abs][pdf][bib]

Convex Clustering: Model, Theoretical Guarantee and Efficient Algorithm
Defeng Sun, Kim-Chuan Toh, Yancheng Yuan; (9):1−32, 2021.
[abs][pdf][bib]

Mixing Time of Metropolis-Hastings for Bayesian Community Detection
Bumeng Zhuo, Chao Gao; (10):1−89, 2021.
[abs][pdf][bib]

Unfolding-Model-Based Visualization: Theory, Method and Applications
Yunxiao Chen, Zhiliang Ying, Haoran Zhang; (11):1−51, 2021.
[abs][pdf][bib]      [code]

Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit
Shenglong Zhou, Naihua Xiu, Hou-Duo Qi; (12):1−45, 2021.
[abs][pdf][bib]

Homogeneity Structure Learning in Large-scale Panel Data with Heavy-tailed Errors
Di Xiao, Yuan Ke, Runze Li; (13):1−42, 2021.
[abs][pdf][bib]

On Multi-Armed Bandit Designs for Dose-Finding Trials
Maryam Aziz, Emilie Kaufmann, Marie-Karelle Riviere; (14):1−38, 2021.
[abs][pdf][bib]

Simple and Fast Algorithms for Interactive Machine Learning with Random Counter-examples
Jagdeep Singh Bhatia; (15):1−30, 2021.
[abs][pdf][bib]

Pykg2vec: A Python Library for Knowledge Graph Embedding
Shih-Yuan Yu, Sujit Rokka Chhetri, Arquimedes Canedo, Palash Goyal, Mohammad Abdullah Al Faruque; (16):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Continuous Time Analysis of Momentum Methods
Nikola B. Kovachki, Andrew M. Stuart; (17):1−40, 2021.
[abs][pdf][bib]      [supplementary]

A Unified Sample Selection Framework for Output Noise Filtering: An Error-Bound Perspective
Gaoxia Jiang, Wenjian Wang, Yuhua Qian, Jiye Liang; (18):1−66, 2021.
[abs][pdf][bib]

Ranking and synchronization from pairwise measurements via SVD
Alexandre d'Aspremont, Mihai Cucuringu, Hemant Tyagi; (19):1−63, 2021.
[abs][pdf][bib]

Aggregated Hold-Out
Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle; (20):1−55, 2021.
[abs][pdf][bib]

A Fast Globally Linearly Convergent Algorithm for the Computation of Wasserstein Barycenters
Lei Yang, Jia Li, Defeng Sun, Kim-Chuan Toh; (21):1−37, 2021.
[abs][pdf][bib]

When random initializations help: a study of variational inference for community detection
Purnamrita Sarkar, Y. X. Rachel Wang, Soumendu S. Mukherjee; (22):1−46, 2021.
[abs][pdf][bib]

A Two-Level Decomposition Framework Exploiting First and Second Order Information for SVM Training Problems
Giulio Galvan, Matteo Lapucci, Chih-Jen Lin, Marco Sciandrone; (23):1−38, 2021.
[abs][pdf][bib]

Entangled Kernels - Beyond Separability
Riikka Huusari, Hachem Kadri; (24):1−40, 2021.
[abs][pdf][bib]      [code]

Generalization Performance of Multi-pass Stochastic Gradient Descent with Convex Loss Functions
Yunwen Lei, Ting Hu, Ke Tang; (25):1−41, 2021.
[abs][pdf][bib]

Finite Time LTI System Identification
Tuhin Sarkar, Alexander Rakhlin, Munther A. Dahleh; (26):1−61, 2021.
[abs][pdf][bib]

Inference In High-dimensional Single-Index Models Under Symmetric Designs
Hamid Eftekhari, Moulinath Banerjee, Ya'acov Ritov; (27):1−63, 2021.
[abs][pdf][bib]      [code]

Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits
Julian Zimmert, Yevgeny Seldin; (28):1−49, 2021.
[abs][pdf][bib]

Single and Multiple Change-Point Detection with Differential Privacy
Wanrong Zhang, Sara Krehbiel, Rui Tuo, Yajun Mei, Rachel Cummings; (29):1−36, 2021.
[abs][pdf][bib]

A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms
Oliver Kroemer, Scott Niekum, George Konidaris; (30):1−82, 2021.
[abs][pdf][bib]

FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference
Tianyu Wang, Marco Morucci, M. Usaid Awan, Yameng Liu, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky; (31):1−41, 2021.
[abs][pdf][bib]      [website]

Learning interaction kernels in heterogeneous systems of agents from multiple trajectories
Fei Lu, Mauro Maggioni, Sui Tang; (32):1−67, 2021.
[abs][pdf][bib]      [code]

Asynchronous Online Testing of Multiple Hypotheses
Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan; (33):1−39, 2021.
[abs][pdf][bib]

Neighborhood Structure Assisted Non-negative Matrix Factorization and Its Application in Unsupervised Point-wise Anomaly Detection
Imtiaz Ahmed, Xia Ben Hu, Mithun P. Acharya, Yu Ding; (34):1−32, 2021.
[abs][pdf][bib]      [code]

Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation
Melkior Ornik, Ufuk Topcu; (35):1−40, 2021.
[abs][pdf][bib]

Multi-class Gaussian Process Classification with Noisy Inputs
Carlos Villacampa-Calvo, Bryan Zaldívar, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato; (36):1−52, 2021.
[abs][pdf][bib]      [code]

A Bayesian Contiguous Partitioning Method for Learning Clustered Latent Variables
Zhao Tang Luo, Huiyan Sang, Bani Mallick; (37):1−52, 2021.
[abs][pdf][bib]

Risk-Averse Learning by Temporal Difference Methods with Markov Risk Measures
Umit Köse, Andrzej Ruszczyński; (38):1−34, 2021.
[abs][pdf][bib]

giotto-tda: A Topological Data Analysis Toolkit for Machine Learning and Data Exploration
Guillaume Tauzin, Umberto Lupo, Lewis Tunstall, Julian Burella Pérez, Matteo Caorsi, Anibal M. Medina-Mardones, Alberto Dassatti, Kathryn Hess; (39):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Residual Energy-Based Models for Text
Anton Bakhtin, Yuntian Deng, Sam Gross, Myle Ott, Marc'Aurelio Ranzato, Arthur Szlam; (40):1−41, 2021.
[abs][pdf][bib]

From Fourier to Koopman: Spectral Methods for Long-term Time Series Prediction
Henning Lange, Steven L. Brunton, J. Nathan Kutz; (41):1−38, 2021.
[abs][pdf][bib]      [code]

High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm
Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan; (42):1−41, 2021.
[abs][pdf][bib]

Banach Space Representer Theorems for Neural Networks and Ridge Splines
Rahul Parhi, Robert D. Nowak; (43):1−40, 2021.
[abs][pdf][bib]

Wasserstein barycenters can be computed in polynomial time in fixed dimension
Jason M Altschuler, Enric Boix-Adsera; (44):1−19, 2021.
[abs][pdf][bib]      [code]

RaSE: Random Subspace Ensemble Classification
Ye Tian, Yang Feng; (45):1−93, 2021.
[abs][pdf][bib]      [code]

Optimal Structured Principal Subspace Estimation: Metric Entropy and Minimax Rates
Tony Cai, Hongzhe Li, Rong Ma; (46):1−45, 2021.
[abs][pdf][bib]

Understanding Recurrent Neural Networks Using Nonequilibrium Response Theory
Soon Hoe Lim; (47):1−48, 2021.
[abs][pdf][bib]

Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression
Behzad Azmi, Dante Kalise, Karl Kunisch; (48):1−32, 2021.
[abs][pdf][bib]

From Low Probability to High Confidence in Stochastic Convex Optimization
Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang; (49):1−38, 2021.
[abs][pdf][bib]

Structure Learning of Undirected Graphical Models for Count Data
Nguyen Thi Kim Hue, Monica Chiogna; (50):1−53, 2021.
[abs][pdf][bib]

Projection-free Decentralized Online Learning for Submodular Maximization over Time-Varying Networks
Junlong Zhu, Qingtao Wu, Mingchuan Zhang, Ruijuan Zheng, Keqin Li; (51):1−42, 2021.
[abs][pdf][bib]

Sparse and Smooth Signal Estimation: Convexification of L0-Formulations
Alper Atamturk, Andres Gomez, Shaoning Han; (52):1−43, 2021.
[abs][pdf][bib]

Subspace Clustering through Sub-Clusters
Weiwei Li, Jan Hannig, Sayan Mukherjee; (53):1−37, 2021.
[abs][pdf][bib]

GemBag: Group Estimation of Multiple Bayesian Graphical Models
Xinming Yang, Lingrui Gan, Naveen N. Narisetty, Feng Liang; (54):1−48, 2021.
[abs][pdf][bib]

Integrative Generalized Convex Clustering Optimization and Feature Selection for Mixed Multi-View Data
Minjie Wang, Genevera I. Allen; (55):1−73, 2021.
[abs][pdf][bib]

Incorporating Unlabeled Data into Distributionally Robust Learning
Charlie Frogner, Sebastian Claici, Edward Chien, Justin Solomon; (56):1−46, 2021.
[abs][pdf][bib]

Normalizing Flows for Probabilistic Modeling and Inference
George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan; (57):1−64, 2021.
[abs][pdf][bib]

Estimation and Inference for High Dimensional Generalized Linear Models: A Splitting and Smoothing Approach
Zhe Fei, Yi Li; (58):1−32, 2021.
[abs][pdf][bib]      [code]

Predictive Learning on Hidden Tree-Structured Ising Models
Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate; (59):1−82, 2021.
[abs][pdf][bib]      [code]

A Distributed Method for Fitting Laplacian Regularized Stratified Models
Jonathan Tuck, Shane Barratt, Stephen Boyd; (60):1−37, 2021.
[abs][pdf][bib]      [code]

Stochastic Proximal AUC Maximization
Yunwen Lei, Yiming Ying; (61):1−45, 2021.
[abs][pdf][bib]

How to Gain on Power: Novel Conditional Independence Tests Based on Short Expansion of Conditional Mutual Information
Mariusz Kubkowski, Jan Mielniczuk, Paweł Teisseyre; (62):1−57, 2021.
[abs][pdf][bib]

Geometric structure of graph Laplacian embeddings
Nicolás García Trillos, Franca Hoffmann, Bamdad Hosseini; (63):1−55, 2021.
[abs][pdf][bib]

Sparse Tensor Additive Regression
Botao Hao, Boxiang Wang, Pengyuan Wang, Jingfei Zhang, Jian Yang, Will Wei Sun; (64):1−43, 2021.
[abs][pdf][bib]

Dynamic Tensor Recommender Systems
Yanqing Zhang, Xuan Bi, Niansheng Tang, Annie Qu; (65):1−35, 2021.
[abs][pdf][bib]

Approximate Newton Methods
Haishan Ye, Luo Luo, Zhihua Zhang; (66):1−41, 2021.
[abs][pdf][bib]

A General Framework for Empirical Bayes Estimation in Discrete Linear Exponential Family
Trambak Banerjee, Qiang Liu, Gourab Mukherjee, Wenguang Sun; (67):1−46, 2021.
[abs][pdf][bib]

Path Length Bounds for Gradient Descent and Flow
Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas; (68):1−63, 2021.
[abs][pdf][bib]      [blog]

Determining the Number of Communities in Degree-corrected Stochastic Block Models
Shujie Ma, Liangjun Su, Yichong Zhang; (69):1−63, 2021.
[abs][pdf][bib]

Testing Conditional Independence via Quantile Regression Based Partial Copulas
Lasse Petersen, Niels Richard Hansen; (70):1−47, 2021.
[abs][pdf][bib]      [code]

Phase Diagram for Two-layer ReLU Neural Networks at Infinite-width Limit
Tao Luo, Zhi-Qin John Xu, Zheng Ma, Yaoyu Zhang; (71):1−47, 2021.
[abs][pdf][bib]      [code]

Prediction against a limited adversary
Erhan Bayraktar, Ibrahim Ekren, Xin Zhang; (72):1−33, 2021.
[abs][pdf][bib]

Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives
Michael Muehlebach, Michael I. Jordan; (73):1−50, 2021.
[abs][pdf][bib]

Kernel Operations on the GPU, with Autodiff, without Memory Overflows
Benjamin Charlier, Jean Feydy, Joan Alexis Glaunès, François-David Collin, Ghislain Durif; (74):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Attention is Turing-Complete
Jorge Pérez, Pablo Barceló, Javier Marinkovic; (75):1−35, 2021.
[abs][pdf][bib]

Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
Alain Celisse, Martin Wahl; (76):1−59, 2021.
[abs][pdf][bib]

ChainerRL: A Deep Reinforcement Learning Library
Yasuhiro Fujita, Prabhat Nagarajan, Toshiki Kataoka, Takahiro Ishikawa; (77):1−14, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

POT: Python Optimal Transport
Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, Titouan Vayer; (78):1−8, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Is SGD a Bayesian sampler? Well, almost
Chris Mingard, Guillermo Valle-Pérez, Joar Skalse, Ard A. Louis; (79):1−64, 2021.
[abs][pdf][bib]

Communication-Efficient Distributed Covariance Sketch, with Application to Distributed PCA
Zengfeng Huang, Xuemin Lin, Wenjie Zhang, Ying Zhang; (80):1−38, 2021.
[abs][pdf][bib]

Knowing what You Know: valid and validated confidence sets in multiclass and multilabel prediction
Maxime Cauchois, Suyash Gupta, John C. Duchi; (81):1−42, 2021.
[abs][pdf][bib]

PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings
Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Sahand Sharifzadeh, Volker Tresp, Jens Lehmann; (82):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]

Statistical Query Lower Bounds for Tensor PCA
Rishabh Dudeja, Daniel Hsu; (83):1−51, 2021.
[abs][pdf][bib]

Variance Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference
Jiyuan Tu, Weidong Liu, Xiaojun Mao, Xi Chen; (84):1−67, 2021.
[abs][pdf][bib]

Gradient Methods Never Overfit On Separable Data
Ohad Shamir; (85):1−20, 2021.
[abs][pdf][bib]

Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis
Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek; (86):1−51, 2021.
[abs][pdf][bib]

On Solving Probabilistic Linear Diophantine Equations
Patrick Kreitzberg, Oliver Serang; (87):1−24, 2021.
[abs][pdf][bib]      [code]

Edge Sampling Using Local Network Information
Can M. Le; (88):1−29, 2021.
[abs][pdf][bib]

Bayesian Text Classification and Summarization via A Class-Specified Topic Model
Feifei Wang, Junni L. Zhang, Yichao Li, Ke Deng, Jun S. Liu; (89):1−48, 2021.
[abs][pdf][bib]

Risk Bounds for Unsupervised Cross-Domain Mapping with IPMs
Tomer Galanti, Sagie Benaim, Lior Wolf; (90):1−42, 2021.
[abs][pdf][bib]      [code]

Analysis of high-dimensional Continuous Time Markov Chains using the Local Bouncy Particle Sampler
Tingting Zhao, Alexandre Bouchard-Côté; (91):1−41, 2021.
[abs][pdf][bib]      [code]

NEU: A Meta-Algorithm for Universal UAP-Invariant Feature Representation
Anastasis Kratsios, Cody Hyndman; (92):1−51, 2021.
[abs][pdf][bib]      [code]

Flexible Signal Denoising via Flexible Empirical Bayes Shrinkage
Zhengrong Xing, Peter Carbonetto, Matthew Stephens; (93):1−28, 2021.
[abs][pdf][bib]      [code]

Consistent Semi-Supervised Graph Regularization for High Dimensional Data
Xiaoyi Mai, Romain Couillet; (94):1−48, 2021.
[abs][pdf][bib]

Histogram Transform Ensembles for Large-scale Regression
Hanyuan Hang, Zhouchen Lin, Xiaoyu Liu, Hongwei Wen; (95):1−87, 2021.
[abs][pdf][bib]

Guided Visual Exploration of Relations in Data Sets
Kai Puolamäki, Emilia Oikarinen, Andreas Henelius; (96):1−32, 2021.
[abs][pdf][bib]      [code]

Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach
Alberto Maria Metelli, Matteo Pirotta, Daniele Calandriello, Marcello Restelli; (97):1−83, 2021.
[abs][pdf][bib]

On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift
Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan; (98):1−76, 2021.
[abs][pdf][bib]

Adaptive estimation of nonparametric functionals
Lin Liu, Rajarshi Mukherjee, James M. Robins, Eric Tchetgen Tchetgen; (99):1−66, 2021.
[abs][pdf][bib]

OpenML-Python: an extensible Python API for OpenML
Matthias Feurer, Jan N. van Rijn, Arlind Kadra, Pieter Gijsbers, Neeratyoy Mallik, Sahithya Ravi, Andreas Müller, Joaquin Vanschoren, Frank Hutter; (100):1−5, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

LocalGAN: Modeling Local Distributions for Adversarial Response Generation
Baoxun Wang, Zhen Xu, Huan Zhang, Kexin Qiu, Deyuan Zhang, Chengjie Sun; (101):1−29, 2021.
[abs][pdf][bib]      [code]

Learning a High-dimensional Linear Structural Equation Model via l1-Regularized Regression
Gunwoong Park, Sang Jun Moon, Sion Park, Jong-June Jeon; (102):1−41, 2021.
[abs][pdf][bib]

A Unified Analysis of First-Order Methods for Smooth Games via Integral Quadratic Constraints
Guodong Zhang, Xuchan Bao, Laurent Lessard, Roger Grosse; (103):1−39, 2021.
[abs][pdf][bib]      [code]

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee; (104):1−54, 2021.
[abs][pdf][bib]      [code]

Pathwise Conditioning of Gaussian Processes
James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth; (105):1−47, 2021.
[abs][pdf][bib]      [code]

Online stochastic gradient descent on non-convex losses from high-dimensional inference
Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath; (106):1−51, 2021.
[abs][pdf][bib]

Beyond English-Centric Multilingual Machine Translation
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Michael Auli, Armand Joulin; (107):1−48, 2021.
[abs][pdf][bib]      [code]

Towards a Unified Analysis of Random Fourier Features
Zhu Li, Jean-Francois Ton, Dino Oglic, Dino Sejdinovic; (108):1−51, 2021.
[abs][pdf][bib]

mvlearn: Multiview Machine Learning in Python
Ronan Perry, Gavin Mischler, Richard Guo, Theodore Lee, Alexander Chang, Arman Koul, Cameron Franz, Hugo Richard, Iain Carmichael, Pierre Ablin, Alexandre Gramfort, Joshua T. Vogelstein; (109):1−7, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

River: machine learning for streaming data in Python
Jacob Montiel, Max Halford, Saulo Martiello Mastelini, Geoffrey Bolmier, Raphael Sourty, Robin Vaysse, Adil Zouitine, Heitor Murilo Gomes, Jesse Read, Talel Abdessalem, Albert Bifet; (110):1−8, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Non-parametric Quantile Regression via the K-NN Fused Lasso
Steven Siwei Ye, Oscar Hernan Madrid Padilla; (111):1−38, 2021.
[abs][pdf][bib]      [code]

L-SVRG and L-Katyusha with Arbitrary Sampling
Xun Qian, Zheng Qu, Peter Richtárik; (112):1−47, 2021.
[abs][pdf][bib]

A Lyapunov Analysis of Accelerated Methods in Optimization
Ashia C. Wilson, Ben Recht, Michael I. Jordan; (113):1−34, 2021.
[abs][pdf][bib]

NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy; (114):1−43, 2021.
[abs][pdf][bib]

Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization
Michael R. Metel, Akiko Takeda; (115):1−36, 2021.
[abs][pdf][bib]

An Importance Weighted Feature Selection Stability Measure
Victor Hamer, Pierre Dupont; (116):1−57, 2021.
[abs][pdf][bib]

Strong Consistency, Graph Laplacians, and the Stochastic Block Model
Shaofeng Deng, Shuyang Ling, Thomas Strohmer; (117):1−44, 2021.
[abs][pdf][bib]

A General Framework for Adversarial Label Learning
Chidubem Arachie, Bert Huang; (118):1−33, 2021.
[abs][pdf][bib]      [code]

Some Theoretical Insights into Wasserstein GANs
Gérard Biau, Maxime Sangnier, Ugo Tanielian; (119):1−45, 2021.
[abs][pdf][bib]

Empirical Bayes Matrix Factorization
Wei Wang, Matthew Stephens; (120):1−40, 2021.
[abs][pdf][bib]      [code]

Langevin Dynamics for Adaptive Inverse Reinforcement Learning of Stochastic Gradient Algorithms
Vikram Krishnamurthy, George Yin; (121):1−49, 2021.
[abs][pdf][bib]

Sparse Convex Optimization via Adaptively Regularized Hard Thresholding
Kyriakos Axiotis, Maxim Sviridenko; (122):1−47, 2021.
[abs][pdf][bib]

Convergence Guarantees for Gaussian Process Means With Misspecified Likelihoods and Smoothness
George Wynne, François-Xavier Briol, Mark Girolami; (123):1−40, 2021.
[abs][pdf][bib]

A flexible model-free prediction-based framework for feature ranking
Jingyi Jessica Li, Yiling Elaine Chen, Xin Tong; (124):1−54, 2021.
[abs][pdf][bib]      [code]

Bandit Convex Optimization in Non-stationary Environments
Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou; (125):1−45, 2021.
[abs][pdf][bib]

Integrative High Dimensional Multiple Testing with Heterogeneity under Data Sharing Constraints
Molei Liu, Yin Xia, Kelly Cho, Tianxi Cai; (126):1−26, 2021.
[abs][pdf][bib]

LassoNet: A Neural Network with Feature Sparsity
Ismael Lemhadri, Feng Ruan, Louis Abraham, Robert Tibshirani; (127):1−29, 2021.
[abs][pdf][bib]      [code]

Optimal Bounds between f-Divergences and Integral Probability Metrics
Rohit Agrawal, Thibaut Horel; (128):1−59, 2021.
[abs][pdf][bib]

Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
Niladri S. Chatterji, Philip M. Long; (129):1−30, 2021.
[abs][pdf][bib]

Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes
Steve Hanneke; (130):1−116, 2021.
[abs][pdf][bib]

MushroomRL: Simplifying Reinforcement Learning Research
Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters; (131):1−5, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Locally Differentially-Private Randomized Response for Discrete Distribution Learning
Adriano Pastore, Michael Gastpar; (132):1−56, 2021.
[abs][pdf][bib]

A Contextual Bandit Bake-off
Alberto Bietti, Alekh Agarwal, John Langford; (133):1−49, 2021.
[abs][pdf][bib]      [code]

An Inertial Newton Algorithm for Deep Learning
Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels; (134):1−31, 2021.
[abs][pdf][bib]      [code]

Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives
Antoine Dedieu, Hussein Hazimeh, Rahul Mazumder; (135):1−47, 2021.
[abs][pdf][bib]      [code]

Implicit Langevin Algorithms for Sampling From Log-concave Densities
Liam Hodgkinson, Robert Salomone, Fred Roosta; (136):1−30, 2021.
[abs][pdf][bib]

Hybrid Predictive Models: When an Interpretable Model Collaborates with a Black-box Model
Tong Wang, Qihang Lin; (137):1−38, 2021.
[abs][pdf][bib]      [code]

An algorithmic view of L2 regularization and some path-following algorithms
Yunzhang Zhu, Renxiong Liu; (138):1−62, 2021.
[abs][pdf][bib]

Hoeffding's Inequality for General Markov Chains and Its Applications to Statistical Learning
Jianqing Fan, Bai Jiang, Qiang Sun; (139):1−35, 2021.
[abs][pdf][bib]

Generalization Properties of hyper-RKHS and its Applications
Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A.K. Suykens; (140):1−38, 2021.
[abs][pdf][bib]

Pseudo-Marginal Hamiltonian Monte Carlo
Johan Alenlöv, Arnoud Doucet, Fredrik Lindsten; (141):1−45, 2021.
[abs][pdf][bib]

Inference for Multiple Heterogeneous Networks with a Common Invariant Subspace
Jesús Arroyo, Avanti Athreya, Joshua Cape, Guodong Chen, Carey E. Priebe, Joshua T. Vogelstein; (142):1−49, 2021.
[abs][pdf][bib]      [code]

Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
Henning Petzka, Cristian Sminchisescu; (143):1−34, 2021.
[abs][pdf][bib]

Individual Fairness in Hindsight
Swati Gupta, Vijay Kamble; (144):1−35, 2021.
[abs][pdf][bib]

On efficient multilevel Clustering via Wasserstein distances
Viet Huynh, Nhat Ho, Nhan Dam, XuanLong Nguyen, Mikhail Yurochkin, Hung Bui, Dinh Phung; (145):1−43, 2021.
[abs][pdf][bib]

Nonparametric Modeling of Higher-Order Interactions via Hypergraphons
Krishnakumar Balasubramanian; (146):1−35, 2021.
[abs][pdf][bib]

Optimal Minimax Variable Selection for Large-Scale Matrix Linear Regression Model
Meiling Hao, Lianqiang Qu, Dehan Kong, Liuquan Sun, Hongtu Zhu; (147):1−39, 2021.
[abs][pdf][bib]

Statistical guarantees for local graph clustering
Wooseok Ha, Kimon Fountoulakis, Michael W. Mahoney; (148):1−54, 2021.
[abs][pdf][bib]

Hyperparameter Optimization via Sequential Uniform Designs
Zebin Yang, Aijun Zhang; (149):1−47, 2021.
[abs][pdf][bib]      [code]

Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent
Tian Tong, Cong Ma, Yuejie Chi; (150):1−63, 2021.
[abs][pdf][bib]      [code]

Universal consistency and rates of convergence of multiclass prototype algorithms in metric spaces
László Györfi, Roi Weiss; (151):1−25, 2021.
[abs][pdf][bib]

Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models
Antonio Blanca, Zongchen Chen, Daniel Štefankovič, Eric Vigoda; (152):1−56, 2021.
[abs][pdf][bib]

Factorization Machines with Regularization for Sparse Feature Interactions
Kyohei Atarashi, Satoshi Oyama, Masahito Kurihara; (153):1−50, 2021.
[abs][pdf][bib]

Kernel Smoothing, Mean Shift, and Their Learning Theory with Directional Data
Yikun Zhang, Yen-Chi Chen; (154):1−92, 2021.
[abs][pdf][bib]      [code]

What Causes the Test Error? Going Beyond Bias-Variance via ANOVA
Licong Lin, Edgar Dobriban; (155):1−82, 2021.
[abs][pdf][bib]

A Greedy Algorithm for Quantizing Neural Networks
Eric Lybrand, Rayan Saab; (156):1−38, 2021.
[abs][pdf][bib]      [code]

The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks
Takuo Matsubara, Chris J. Oates, François-Xavier Briol; (157):1−57, 2021.
[abs][pdf][bib]

Information criteria for non-normalized models
Takeru Matsuda, Masatoshi Uehara, Aapo Hyvarinen; (158):1−33, 2021.
[abs][pdf][bib]

When Does Gradient Descent with Logistic Loss Find Interpolating Two-Layer Networks?
Niladri S. Chatterji, Philip M. Long, Peter L. Bartlett; (159):1−48, 2021.
[abs][pdf][bib]

Are We Forgetting about Compositional Optimisers in Bayesian Optimisation?
Antoine Grosnit, Alexander I. Cowen-Rivers, Rasul Tutunov, Ryan-Rhys Griffiths, Jun Wang, Haitham Bou-Ammar; (160):1−78, 2021.
[abs][pdf][bib]      [code]

MetaGrad: Adaptation using Multiple Learning Rates in Online Learning
Tim van Erven, Wouter M. Koolen, Dirk van der Hoeven; (161):1−61, 2021.
[abs][pdf][bib]      [code]

Counterfactual Mean Embeddings
Krikamol Muandet, Motonobu Kanagawa, Sorawit Saengkyongam, Sanparith Marukatat; (162):1−71, 2021.
[abs][pdf][bib]      [code]

PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review
Ivan Stelmakh, Nihar Shah, Aarti Singh; (163):1−66, 2021.
[abs][pdf][bib]

Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program)
Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Lariviere, Alina Beygelzimer, Florence d'Alche-Buc, Emily Fox, Hugo Larochelle; (164):1−20, 2021.
[abs][pdf][bib]

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin, Michael W. Mahoney; (165):1−73, 2021.
[abs][pdf][bib]

The ensmallen library for flexible numerical optimization
Ryan R. Curtin, Marcus Edel, Rahul Ganesh Prabhu, Suryoday Basak, Zhihao Lou, Conrad Sanderson; (166):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Estimation and Optimization of Composite Outcomes
Daniel J. Luckett, Eric B. Laber, Siyeon Kim, Michael R. Kosorok; (167):1−40, 2021.
[abs][pdf][bib]

Asymptotic Normality, Concentration, and Coverage of Generalized Posteriors
Jeffrey W. Miller; (168):1−53, 2021.
[abs][pdf][bib]

First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems
Mingrui Liu, Hassan Rafique, Qihang Lin, Tianbao Yang; (169):1−34, 2021.
[abs][pdf][bib]

Black-Box Reductions for Zeroth-Order Gradient Algorithms to Achieve Lower Query Complexity
Bin Gu, Xiyuan Wei, Shangqian Gao, Ziran Xiong, Cheng Deng, Heng Huang; (170):1−47, 2021.
[abs][pdf][bib]

Optimal Rates of Distributed Regression with Imperfect Kernels
Hongwei Sun, Qiang Wu; (171):1−34, 2021.
[abs][pdf][bib]

Unlinked Monotone Regression
Fadoua Balabdaoui, Charles R. Doss, Cécile Durot; (172):1−60, 2021.
[abs][pdf][bib]

Replica Exchange for Non-Convex Optimization
Jing Dong, Xin T. Tong; (173):1−59, 2021.
[abs][pdf][bib]

Achieving Fairness in the Stochastic Multi-Armed Bandit Problem
Vishakha Patil, Ganesh Ghalme, Vineet Nair, Y. Narahari; (174):1−31, 2021.
[abs][pdf][bib]

Doubly infinite residual neural networks: a diffusion process approach
Stefano Peluchetti, Stefano Favaro; (175):1−48, 2021.
[abs][pdf][bib]

Locally Private k-Means Clustering
Uri Stemmer; (176):1−30, 2021.
[abs][pdf][bib]

Prediction Under Latent Factor Regression: Adaptive PCR, Interpolating Predictors and Beyond
Xin Bing, Florentina Bunea, Seth Strimas-Mackey, Marten Wegkamp; (177):1−50, 2021.
[abs][pdf][bib]

Conditional independences and causal relations implied by sets of equations
Tineke Blom, Mirthe M. van Diepen, Joris M. Mooij; (178):1−62, 2021.
[abs][pdf][bib]

A Sharp Blockwise Tensor Perturbation Bound for Orthogonal Iteration
Yuetian Luo, Garvesh Raskutti, Ming Yuan, Anru R. Zhang; (179):1−48, 2021.
[abs][pdf][bib]

Improved Shrinkage Prediction under a Spiked Covariance Structure
Trambak Banerjee, Gourab Mukherjee, Debashis Paul; (180):1−40, 2021.
[abs][pdf][bib]

Alibi Explain: Algorithms for Explaining Machine Learning Models
Janis Klaise, Arnaud Van Looveren, Giovanni Vacanti, Alexandru Coca; (181):1−7, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning
Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen; (182):1−52, 2021.
[abs][pdf][bib]      [code]

Benchmarking Unsupervised Object Representations for Video Sequences
Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker; (183):1−61, 2021.
[abs][pdf][bib]      [code]

mlr3pipelines - Flexible Machine Learning Pipelines in R
Martin Binder, Florian Pfisterer, Michel Lang, Lennart Schneider, Lars Kotthoff, Bernd Bischl; (184):1−7, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Mode-wise Tensor Decompositions: Multi-dimensional Generalizations of CUR Decompositions
HanQin Cai, Keaton Hamm, Longxiu Huang, Deanna Needell; (185):1−36, 2021.
[abs][pdf][bib]      [code]

As You Like It: Localization via Paired Comparisons
Andrew K. Massimino, Mark A. Davenport; (186):1−39, 2021.
[abs][pdf][bib]

Matrix Product States for Inference in Discrete Probabilistic Models
Rasmus Bonnevie, Mikkel N. Schmidt; (187):1−48, 2021.
[abs][pdf][bib]

Differentially Private Regression and Classification with Sparse Gaussian Processes
Michael Thomas Smith, Mauricio A. Alvarez, Neil D. Lawrence; (188):1−41, 2021.
[abs][pdf][bib]      [code]

One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them
Saber Salehkaleybar, Arsalan Sharifnassab, S. Jamaloddin Golestani; (189):1−47, 2021.
[abs][pdf][bib]      [code]

Collusion Detection and Ground Truth Inference in Crowdsourcing for Labeling Tasks
Changyue Song, Kaibo Liu, Xi Zhang; (190):1−45, 2021.
[abs][pdf][bib]

On the Estimation of Network Complexity: Dimension of Graphons
Yann Issartel; (191):1−62, 2021.
[abs][pdf][bib]

Method of Contraction-Expansion (MOCE) for Simultaneous Inference in Linear Models
Fei Wang, Ling Zhou, Lu Tang, Peter X.K. Song; (192):1−32, 2021.
[abs][pdf][bib]

Sparse Popularity Adjusted Stochastic Block Model
Majid Noroozi, Marianna Pensky, Ramchandra Rimal; (193):1−36, 2021.
[abs][pdf][bib]

Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings
Keith D. Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe; (194):1−59, 2021.
[abs][pdf][bib]

Learning Laplacian Matrix from Graph Signals with Sparse Spectral Representation
Pierre Humbert, Batiste Le Bars, Laurent Oudre, Argyris Kalogeratos, Nicolas Vayatis; (195):1−47, 2021.
[abs][pdf][bib]      [code]

COKE: Communication-Censored Decentralized Kernel Learning
Ping Xu, Yue Wang, Xiang Chen, Zhi Tian; (196):1−35, 2021.
[abs][pdf][bib]

Particle-Gibbs Sampling for Bayesian Feature Allocation Models
Alexandre Bouchard-Côté, Andrew Roth; (197):1−105, 2021.
[abs][pdf][bib]      [code]

Integrated Principal Components Analysis
Tiffany M. Tang, Genevera I. Allen; (198):1−71, 2021.
[abs][pdf][bib]      [code]

On ADMM in Deep Learning: Convergence and Saturation-Avoidance
Jinshan Zeng, Shao-Bo Lin, Yuan Yao, Ding-Xuan Zhou; (199):1−67, 2021.
[abs][pdf][bib]

Refined approachability algorithms and application to regret minimization with global costs
Joon Kwon; (200):1−38, 2021.
[abs][pdf][bib]

Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMap, and PaCMAP for Data Visualization
Yingfan Wang, Haiyang Huang, Cynthia Rudin, Yaron Shaposhnik; (201):1−73, 2021.
[abs][pdf][bib]      [code]

Interpretable Deep Generative Recommendation Models
Huafeng Liu, Liping Jing, Jingxuan Wen, Pengyu Xu, Jiaqi Wang, Jian Yu, Michael K. Ng; (202):1−54, 2021.
[abs][pdf][bib]

Learning partial correlation graphs and graphical models by covariance queries
Gábor Lugosi, Jakub Truszkowski, Vasiliki Velona, Piotr Zwiernik; (203):1−41, 2021.
[abs][pdf][bib]

Failures of Model-dependent Generalization Bounds for Least-norm Interpolation
Peter L. Bartlett, Philip M. Long; (204):1−15, 2021.
[abs][pdf][bib]

Langevin Monte Carlo: random coordinate descent and variance reduction
Zhiyan Ding, Qin Li; (205):1−51, 2021.
[abs][pdf][bib]

Hamilton-Jacobi Deep Q-Learning for Deterministic Continuous-Time Systems with Lipschitz Continuous Controls
Jeongho Kim, Jaeuk Shin, Insoon Yang; (206):1−34, 2021.
[abs][pdf][bib]      [code]

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk; (207):1−44, 2021.
[abs][pdf][bib]

Oblivious Data for Fairness with Kernels
Steffen Grünewälder, Azadeh Khaleghi; (208):1−36, 2021.
[abs][pdf][bib]      [code]

Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert, Scott Lundberg, Su-In Lee; (209):1−90, 2021.
[abs][pdf][bib]      [code]

Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla; (210):1−45, 2021.
[abs][pdf][bib]      [code]

Bandit Learning in Decentralized Matching Markets
Lydia T. Liu, Feng Ruan, Horia Mania, Michael I. Jordan; (211):1−34, 2021.
[abs][pdf][bib]

Convex Geometry and Duality of Over-parameterized Neural Networks
Tolga Ergen, Mert Pilanci; (212):1−63, 2021.
[abs][pdf][bib]

Cooperative SGD: A Unified Framework for the Design and Analysis of Local-Update SGD Algorithms
Jianyu Wang, Gauri Joshi; (213):1−50, 2021.
[abs][pdf][bib]

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
Hubert Baniecki, Wojciech Kretowicz, Piotr Piątyszek, Jakub Wiśniewski, Przemysław Biecek; (214):1−7, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

TensorHive: Management of Exclusive GPU Access for Distributed Machine Learning Workloads
Paweł Rościszewski, Michał Martyniak, Filip Schodowski; (215):1−5, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Context-dependent Networks in Multivariate Time Series: Models, Methods, and Risk Bounds in High Dimensions
Lili Zheng, Garvesh Raskutti, Rebecca Willett, Benjamin Mark; (216):1−88, 2021.
[abs][pdf][bib]

A Unified Framework for Spectral Clustering in Sparse Graphs
Lorenzo Dall'Amico, Romain Couillet, Nicolas Tremblay; (217):1−56, 2021.
[abs][pdf][bib]

Thompson Sampling Algorithms for Cascading Bandits
Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan; (218):1−66, 2021.
[abs][pdf][bib]

Soft Tensor Regression
Georgia Papadogeorgou, Zhengwu Zhang, David B. Dunson; (219):1−53, 2021.
[abs][pdf][bib]

Shape-Enforcing Operators for Generic Point and Interval Estimators of Functions
Xi Chen, Victor Chernozhukov, Ivan Fernandez-Val, Scott Kostyshak, Ye Luo; (220):1−42, 2021.
[abs][pdf][bib]

A Bayes-Optimal View on Adversarial Examples
Eitan Richardson, Yair Weiss; (221):1−28, 2021.
[abs][pdf][bib]      [code]

Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, Anant Sahai; (222):1−69, 2021.
[abs][pdf][bib]

Stochastic Online Optimization using Kalman Recursion
Joseph de Vilmarest, Olivier Wintenberger; (223):1−55, 2021.
[abs][pdf][bib]

Bayesian Distance Clustering
Leo L. Duan, David B. Dunson; (224):1−27, 2021.
[abs][pdf][bib]      [code]

Representer Theorems in Banach Spaces: Minimum Norm Interpolation, Regularized Learning and Semi-Discrete Inverse Problems
Rui Wang, Yuesheng Xu; (225):1−65, 2021.
[abs][pdf][bib]

FATE: An Industrial Grade Platform for Collaborative Learning With Data Protection
Yang Liu, Tao Fan, Tianjian Chen, Qian Xu, Qiang Yang; (226):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Tighter Risk Certificates for Neural Networks
María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, Csaba Szepesvári; (227):1−40, 2021.
[abs][pdf][bib]      [code]

How Well Generative Adversarial Networks Learn Distributions
Tengyuan Liang; (228):1−41, 2021.
[abs][pdf][bib]

Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be
Valerio Biscione, Jeffrey S. Bowers; (229):1−28, 2021.
[abs][pdf][bib]

Learning with semi-definite programming: statistical bounds based on fixed point analysis and excess risk curvature
Stéphane Chrétien, Mihai Cucuringu, Guillaume Lecué, Lucie Neirac; (230):1−64, 2021.
[abs][pdf][bib]

sklvq: Scikit Learning Vector Quantization
Rick van Veen, Michael Biehl, Gert-Jan de Vries; (231):1−6, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Probabilistic Iterative Methods for Linear Systems
Jon Cockayne, Ilse C.F. Ipsen, Chris J. Oates, Tim W. Reid; (232):1−34, 2021.
[abs][pdf][bib]      [code]

A Generalised Linear Model Framework for β-Variational Autoencoders based on Exponential Dispersion Families
Robert Sicks, Ralf Korn, Stefanie Schwaar; (233):1−41, 2021.
[abs][pdf][bib]

A general linear-time inference method for Gaussian Processes on one dimension
Jackson Loper, David Blei, John P. Cunningham, Liam Paninski; (234):1−36, 2021.
[abs][pdf][bib]

GIBBON: General-purpose Information-Based Bayesian Optimisation
Henry B. Moss, David S. Leslie, Javier Gonzalez, Paul Rayson; (235):1−49, 2021.
[abs][pdf][bib]

Expanding Boundaries of Gap Safe Screening
Cassio F. Dantas, Emmanuel Soubies, Cédric Févotte; (236):1−57, 2021.
[abs][pdf][bib]      [code]

Consensus-Based Optimization on the Sphere: Convergence to Global Minimizers and Machine Learning
Massimo Fornasier, Lorenzo Pareschi, Hui Huang, Philippe Sünnen; (237):1−55, 2021.
[abs][pdf][bib]      [code]

DeEPCA: Decentralized Exact PCA with Linear Convergence Rate
Haishan Ye, Tong Zhang; (238):1−27, 2021.
[abs][pdf][bib]

Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo
Mert Gürbüzbalaban, Xuefeng Gao, Yuanhan Hu, Lingjiong Zhu; (239):1−69, 2021.
[abs][pdf][bib]

DIG: A Turnkey Library for Diving into Graph Deep Learning Research
Meng Liu, Youzhi Luo, Limei Wang, Yaochen Xie, Hao Yuan, Shurui Gui, Haiyang Yu, Zhao Xu, Jingtun Zhang, Yi Liu, Keqiang Yan, Haoran Liu, Cong Fu, Bora M Oztekin, Xuan Zhang, Shuiwang Ji; (240):1−9, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste; (241):1−124, 2021.
[abs][pdf][bib]      [code]

Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations
Jesus Maria Sanz-Serna, Konstantinos C. Zygalakis; (242):1−37, 2021.
[abs][pdf][bib]

Quasi-Monte Carlo Quasi-Newton in Variational Bayes
Sifan Liu, Art B. Owen; (243):1−23, 2021.
[abs][pdf][bib]

Consistency of Gaussian Process Regression in Metric Spaces
Peter Koepernik, Florian Pfaff; (244):1−27, 2021.
[abs][pdf][bib]

On lp-hyperparameter Learning via Bilevel Nonsmooth Optimization
Takayuki Okuno, Akiko Takeda, Akihiro Kawana, Motokazu Watanabe; (245):1−47, 2021.
[abs][pdf][bib]

Mixture Martingales Revisited with Applications to Sequential Tests and Confidence Intervals
Emilie Kaufmann, Wouter M. Koolen; (246):1−44, 2021.
[abs][pdf][bib]

Statistical Guarantees for Local Spectral Clustering on Random Neighborhood Graphs
Alden Green, Sivaraman Balakrishnan, Ryan J. Tibshirani; (247):1−71, 2021.
[abs][pdf][bib]

Statistically and Computationally Efficient Change Point Localization in Regression Settings
Daren Wang, Zifeng Zhao, Kevin Z. Lin, Rebecca Willett; (248):1−46, 2021.
[abs][pdf][bib]

On the Riemannian Search for Eigenvector Computation
Zhiqiang Xu, Ping Li; (249):1−46, 2021.
[abs][pdf][bib]

Bayesian time-aligned factor analysis of paired multivariate time series
Arkaprava Roy, Jana Schaich Borg, David B Dunson; (250):1−27, 2021.
[abs][pdf][bib]

Tractable Approximate Gaussian Inference for Bayesian Neural Networks
James-A. Goulet, Luong Ha Nguyen, Saeid Amiri; (251):1−23, 2021.
[abs][pdf][bib]      [code]

Batch greedy maximization of non-submodular functions: Guarantees and applications to experimental design
Jayanth Jagalur-Mohan, Youssef Marzouk; (252):1−62, 2021.
[abs][pdf][bib]

Bifurcation Spiking Neural Network
Shao-Qun Zhang, Zhao-Yu Zhang, Zhi-Hua Zhou; (253):1−21, 2021.
[abs][pdf][bib]

Inference for the Case Probability in High-dimensional Logistic Regression
Zijian Guo, Prabrisha Rakshit, Daniel S. Herman, Jinbo Chen; (254):1−54, 2021.
[abs][pdf][bib]

Adversarial Monte Carlo Meta-Learning of Optimal Prediction Procedures
Alex Luedtke, Incheoul Chung, Oleg Sofrygin; (255):1−67, 2021.
[abs][pdf][bib]      [code]

Model Linkage Selection for Cooperative Learning
Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh; (256):1−44, 2021.
[abs][pdf][bib]

Estimating Uncertainty Intervals from Collaborating Networks
Tianhui Zhou, Yitong Li, Yuan Wu, David Carlson; (257):1−47, 2021.
[abs][pdf][bib]

Optimized Score Transformation for Consistent Fair Classification
Dennis Wei, Karthikeyan Natesan Ramamurthy, Flavio P. Calmon; (258):1−78, 2021.
[abs][pdf][bib]

ROOTS: Object-Centric Representation and Rendering of 3D Scenes
Chang Chen, Fei Deng, Sungjin Ahn; (259):1−36, 2021.
[abs][pdf][bib]

Learning Strategies in Decentralized Matching Markets under Uncertain Preferences
Xiaowu Dai, Michael I. Jordan; (260):1−50, 2021.
[abs][pdf][bib]

Domain adaptation under structural causal models
Yuansi Chen, Peter Bühlmann; (261):1−80, 2021.
[abs][pdf][bib]      [code]

Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning
Chong Liu, Yuqing Zhu, Kamalika Chaudhuri, Yu-Xiang Wang; (262):1−44, 2021.
[abs][pdf][bib]

On the Stability Properties and the Optimization Landscape of Training Problems with Squared Loss for Neural Networks and General Nonlinear Conic Approximation Schemes
Constantin Christof; (263):1−77, 2021.
[abs][pdf][bib]

Regularized spectral methods for clustering signed networks
Mihai Cucuringu, Apoorv Vikram Singh, Déborah Sulem, Hemant Tyagi; (264):1−79, 2021.
[abs][pdf][bib]

Exact Asymptotics for Linear Quadratic Adaptive Control
Feicheng Wang, Lucas Janson; (265):1−112, 2021.
[abs][pdf][bib]      [code]

Learning Bayesian Networks from Ordinal Data
Xiang Ge Luo, Giusi Moffa, Jack Kuipers; (266):1−44, 2021.
[abs][pdf][bib]      [code]

Reproducing kernel Hilbert C*-module and kernel mean embeddings
Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, Yoshinobu Kawahara; (267):1−56, 2021.
[abs][pdf][bib]

Stable-Baselines3: Reliable Reinforcement Learning Implementations
Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, Noah Dormann; (268):1−8, 2021. (Machine Learning Open Source Software Paper)
[abs][pdf][bib]      [code]

CAT: Compression-Aware Training for bandwidth reduction
Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson; (269):1−20, 2021.
[abs][pdf][bib]      [code]

Further results on latent discourse models and word embeddings
Sammy Khalife, Douglas Gonçalves, Youssef Allouah, Leo Liberti; (270):1−36, 2021.
[abs][pdf][bib]

Nonparametric Continuous Sensor Registration
William Clark, Maani Ghaffari, Anthony Bloch; (271):1−50, 2021.
[abs][pdf][bib]      [code]

Transferability of Spectral Graph Convolutional Neural Networks
Ron Levie, Wei Huang, Lorenzo Bucci, Michael Bronstein, Gitta Kutyniok; (272):1−59, 2021.
[abs][pdf][bib]

On the Hardness of Robust Classification
Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell; (273):1−29, 2021.
[abs][pdf][bib]

Simultaneous Change Point Inference and Structure Recovery for High Dimensional Gaussian Graphical Models
Bin Liu, Xinsheng Zhang, Yufeng Liu; (274):1−62, 2021.
[abs][pdf][bib]

Partial Policy Iteration for L1-Robust Markov Decision Processes
Chin Pang Ho, Marek Petrik, Wolfram Wiesemann; (275):1−46, 2021.
[abs][pdf][bib]      [code]

Estimating the Lasso's Effective Noise
Johannes Lederer, Michael Vogt; (276):1−32, 2021.
[abs][pdf][bib]      [supplementary]

Gaussian Approximation for Bias Reduction in Q-Learning
Carlo D'Eramo, Andrea Cini, Alessandro Nuara, Matteo Pirotta, Cesare Alippi, Jan Peters, Marcello Restelli; (277):1−51, 2021.
[abs][pdf][bib]

Multilevel Monte Carlo Variational Inference
Masahiro Fujisawa, Issei Sato; (278):1−44, 2021.
[abs][pdf][bib]

Fast Learning for Renewal Optimization in Online Task Scheduling
Michael J. Neely; (279):1−44, 2021.
[abs][pdf][bib]

Graph Matching with Partially-Correct Seeds
Liren Yu, Jiaming Xu, Xiaojun Lin; (280):1−54, 2021.
[abs][pdf][bib]      [code]

Contrastive Estimation Reveals Topic Posterior Information to Linear Models
Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu; (281):1−31, 2021.
[abs][pdf][bib]

LDLE: Low Distortion Local Eigenmaps
Dhruv Kohli, Alexander Cloninger, Gal Mishne; (282):1−64, 2021.
[abs][pdf][bib]      [code]

Non-linear, Sparse Dimensionality Reduction via Path Lasso Penalized Autoencoders
Oskar Allerbo, Rebecka Jörnsten; (283):1−28, 2021.
[abs][pdf][bib]      [code]

Linear Bandits on Uniformly Convex Sets
Thomas Kerdreux, Christophe Roux, Alexandre d'Aspremont, Sebastian Pokutta; (284):1−23, 2021.
[abs][pdf][bib]

Double Generative Adversarial Networks for Conditional Independence Testing
Chengchun Shi, Tianlin Xu, Wicher Bergsma, Lexin Li; (285):1−32, 2021.
[abs][pdf][bib]

An Online Sequential Test for Qualitative Treatment Effects
Chengchun Shi, Shikai Luo, Hongtu Zhu, Rui Song; (286):1−51, 2021.
[abs][pdf][bib]

V-statistics and Variance Estimation
Zhengze Zhou, Lucas Mentch, Giles Hooker; (287):1−48, 2021.
[abs][pdf][bib]      [code]

A Theory of the Risk for Optimization with Relaxation and its Application to Support Vector Machines
Marco C. Campi, Simone Garatti; (288):1−38, 2021.
[abs][pdf][bib]

VariBAD: Variational Bayes-Adaptive Deep RL via Meta-Learning
Luisa Zintgraf, Sebastian Schulze, Cong Lu, Leo Feng, Maximilian Igl, Kyriacos Shiarlis, Yarin Gal, Katja Hofmann, Shimon Whiteson; (289):1−39, 2021.
[abs][pdf][bib]      [code]

On Universal Approximation and Error Bounds for Fourier Neural Operators
Nikola Kovachki, Samuel Lanthaler, Siddhartha Mishra; (290):1−76, 2021.
[abs][pdf][bib]
