Research

My research interests currently center on approximation theory and adjacent areas of mathematics and science, including deep learning, statistics, and numerical methods for solving PDEs. I am also interested in applications of machine learning to materials science.

I worked as a postdoc with Professor Jinchao Xu from 2018 to 2022 and completed my PhD under Professor Russel Caflisch in 2018. Here is my CV. A list of my publications can also be found on my Google Scholar profile.

Journal Articles

Compact Support of L1 Penalized Variational Problems
Communications in Mathematical Sciences (2017) (with Omer Tekin)

Accuracy, Efficiency and Optimization of Signal Fragmentation
Multiscale Modeling & Simulation (2020) (with Russel Caflisch and Edward Chou)

Approximation Rates for Neural Networks with General Activation Functions
Neural Networks (2020) (with Jinchao Xu)

Accelerated Optimization with Orthogonality Constraints
Journal of Computational Mathematics (2020)

High-Order Approximation Rates for Shallow Neural Networks with Cosine and ReLU^k Activation Functions
Applied and Computational Harmonic Analysis (2022) (with Jinchao Xu)

Optimal Convergence Rates for the Orthogonal Greedy Algorithm
IEEE Transactions on Information Theory (2022) (with Jinchao Xu)

Extensible Structure-Informed Prediction of Formation Energy with Improved Accuracy and Usability Employing Neural Networks
Computational Materials Science (2022) (with Adam Krajewski, Zi-Kui Liu, and Jinchao Xu)

Uniform approximation rates and metric entropy of shallow neural networks
Research in the Mathematical Sciences (2022) (with Limin Ma and Jinchao Xu)

Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
Foundations of Computational Mathematics (2022) (with Jinchao Xu)

Extended Regularized Dual Averaging Methods for Stochastic Optimization
Journal of Computational Mathematics (2023) (with Jinchao Xu)

Characterization of the Variation Spaces Corresponding to Shallow Neural Networks
Constructive Approximation (2023) (with Jinchao Xu)

Greedy Training Algorithms for Neural Networks and Applications to PDEs
Journal of Computational Physics (2023) (with Qingguo Hong, Wenrui Hao, Xianlin Jin, and Jinchao Xu)

Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
Journal of Machine Learning Research (2023)

Entropy-based Convergence Rates of Greedy Algorithms
Mathematical Models and Methods in Applied Sciences (2024) (with Yuwen Li)

Sharp Lower Bounds on the Manifold Widths of Sobolev and Besov Spaces
Journal of Complexity (2024)

Conference Papers

Equivariant Frames and the Impossibility of Continuous Canonicalization
International Conference on Machine Learning (2024) (with Nadav Dym and Hannah Lawrence)

Preprints and Works in Progress

Accelerated First-Order Methods: Differential Equations and Lyapunov Functions

Training Sparse Neural Networks using Compressed Sensing (with Jianhong Chen and Jinchao Xu)

Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data

Achieving acceleration despite very noisy gradients (with Kanan Gupta and Stephan Wojtowytsch)

Sharp Convergence Rates for Matching Pursuit (with Jason Klusowski)

Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks

Weighted variation spaces and approximation by shallow ReLU networks (with Ronald DeVore, Robert Nowak, and Rahul Parhi)

Efficient Structure-Informed Featurization and Property Prediction of Ordered, Dilute, and Random Atomic Structures (with Adam Krajewski and Zi-Kui Liu)

Convergence and error control of consistent PINNs for elliptic PDEs (with Andrea Bonito, Ronald DeVore, and Guergana Petrova)