Research

My current research interests center on the mathematical theory behind machine learning and on the application of machine learning techniques to the numerical solution of PDEs. Important tools in this line of work are approximation theory, specifically the approximation theory of neural networks, together with functional analysis, statistics, and information theory. Recent projects include determining optimal approximation rates for deep ReLU neural networks on Sobolev spaces, determining fundamental approximation-theoretic quantities, such as the metric entropy and n-widths, of function classes corresponding to shallow neural networks, and a theoretical analysis of greedy algorithms for statistical estimation and for numerically solving PDEs.

In addition, I have worked on, and continue to work on, a variety of other projects. These include convex optimization and optimization on manifolds; the application of compressed sensing to electronic structure calculations, signal processing, and neural network training; and the application of machine learning to materials science.

I worked as a postdoc with Professor Jinchao Xu from 2018 to 2022 and completed my PhD under Professor Russel Caflisch in 2018. Here is my CV.

Journal Articles

Compact Support of L1 Penalized Variational Problems
Communications in Mathematical Sciences (2017) (with Omer Tekin)

Accuracy, Efficiency and Optimization of Signal Fragmentation
Multiscale Modeling & Simulation (2020) (with Russel Caflisch and Edward Chou)

Approximation Rates for Neural Networks with General Activation Functions
Neural Networks (2020) (with Jinchao Xu)

Accelerated Optimization with Orthogonality Constraints
Journal of Computational Mathematics (2020)

High-Order Approximation Rates for Shallow Neural Networks with Cosine and ReLU^k Activation Functions
Applied and Computational Harmonic Analysis (2022) (with Jinchao Xu)

Optimal Convergence Rates for the Orthogonal Greedy Algorithm
IEEE Transactions on Information Theory (2022) (with Jinchao Xu)

Extensible Structure-Informed Prediction of Formation Energy with Improved Accuracy and Usability Employing Neural Networks
Computational Materials Science (2022) (with Adam Krajewski, Zi-Kui Liu, and Jinchao Xu)

Uniform Approximation Rates and Metric Entropy of Shallow Neural Networks
Research in the Mathematical Sciences (2022) (with Limin Ma and Jinchao Xu)

Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
Foundations of Computational Mathematics (2022) (with Jinchao Xu)

Extended Regularized Dual Averaging Methods for Stochastic Optimization
Journal of Computational Mathematics (2023) (with Jinchao Xu)

Characterization of the Variation Spaces Corresponding to Shallow Neural Networks
Constructive Approximation (2023) (with Jinchao Xu)

Greedy Training Algorithms for Neural Networks and Applications to PDEs
Journal of Computational Physics (2023) (with Qingguo Hong, Wenrui Hao, Xianlin Jin, and Jinchao Xu)

Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
Journal of Machine Learning Research (2023)

Preprints and Works in Progress

Accelerated First-Order Methods: Differential Equations and Lyapunov Functions

Training Sparse Neural Networks Using Compressed Sensing (with Jianhong Chen and Jinchao Xu)

Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data

Achieving Acceleration Despite Very Noisy Gradients (with Kanan Gupta and Stephan Wojtowytsch)

Entropy-Based Convergence Rates of Greedy Algorithms (with Yuwen Li)

Sharp Convergence Rates for Matching Pursuit (with Jason Klusowski)

Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks

Weighted Variation Spaces and Approximation by Shallow ReLU Networks (with Ronald DeVore, Robert Nowak, and Rahul Parhi)