Research

My research interests are currently in approximation theory, the mathematical theory of neural networks, statistics, and numerical methods for solving PDEs, with particular emphasis on the theoretical development and mathematical analysis of machine learning methods for scientific computing. On the applications side, I work on machine learning methods for materials science. My work is supported by the National Science Foundation (DMS-2424305 and CCF-2205004) and the Office of Naval Research (MURI grant N00014-20-1-2787).

I worked as a postdoc with Professor Jinchao Xu from 2018 to 2022, and I completed my PhD in mathematics at UCLA under the direction of Professor Russel Caflisch in 2018.

Here is my CV. A full list of my publications can also be found on my Google Scholar profile.

Journal Articles

Sharp Lower Bounds on the Manifold Widths of Sobolev and Besov Spaces
Journal of Complexity (2024)

Entropy-based Convergence Rates of Greedy Algorithms
Mathematical Models and Methods in Applied Sciences (2024) (with Yuwen Li)

Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
Journal of Machine Learning Research (2023)

Greedy Training Algorithms for Neural Networks and Applications to PDEs
Journal of Computational Physics (2023) (with Qingguo Hong, Wenrui Hao, Xianlin Jin and Jinchao Xu)

Characterization of the Variation Spaces Corresponding to Shallow Neural Networks
Constructive Approximation (2023) (with Jinchao Xu)

Extended Regularized Dual Averaging Methods for Stochastic Optimization
Journal of Computational Mathematics (2023) (with Jinchao Xu)

Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
Foundations of Computational Mathematics (2022) (with Jinchao Xu)

Uniform approximation rates and metric entropy of shallow neural networks
Research in the Mathematical Sciences (2022) (with Limin Ma and Jinchao Xu)

Extensible Structure-Informed Prediction of Formation Energy with Improved Accuracy and Usability employing Neural Networks
Computational Materials Science (2022) (with Adam Krajewski, Zi-Kui Liu, and Jinchao Xu)

Optimal Convergence Rates for the Orthogonal Greedy Algorithm
IEEE Transactions on Information Theory (2022) (with Jinchao Xu)

High-Order Approximation Rates for Shallow Neural Networks with Cosine and ReLU^k Activation Functions
Applied and Computational Harmonic Analysis (2022) (with Jinchao Xu)

Accelerated Optimization with Orthogonality Constraints
Journal of Computational Mathematics (2020)

Approximation Rates for Neural Networks with General Activation Functions
Neural Networks (2020) (with Jinchao Xu)

Accuracy, Efficiency and Optimization of Signal Fragmentation
Multiscale Simulation and Modelling (2020) (with Russel Caflisch and Edward Chou)

Compact Support of L^1 Penalized Variational Problems
Communications in Mathematical Sciences (2017) (with Omer Tekin)

Conference Papers

Equivariant Frames and the Impossibility of Continuous Canonicalization
International Conference on Machine Learning (2024) (with Nadav Dym and Hannah Lawrence)

Preprints and Works in Progress

On the expressiveness and spectral bias of KANs (with Yixuan Wang, Ziming Liu, and Thomas Y. Hou)

Approximation Rates for Shallow ReLU^k Neural Networks on Sobolev Spaces via the Radon Transform (with Tong Mao and Jinchao Xu)

Convergence and error control of consistent PINNs for elliptic PDEs (with Andrea Bonito, Ronald DeVore, and Guergana Petrova)

Efficient Structure-Informed Featurization and Property Prediction of Ordered, Dilute, and Random Atomic Structures (with Adam Krajewski and Zi-Kui Liu)

Weighted variation spaces and approximation by shallow ReLU networks (with Ronald DeVore, Robert Nowak, and Rahul Parhi)

Sharp Convergence Rates for Matching Pursuit (with Jason Klusowski)

Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks

Achieving acceleration despite very noisy gradients (with Kanan Gupta and Stephan Wojtowytsch)

Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data

On the Activation Function Dependence of the Spectral Bias of Neural Networks (with Qinyang Tan, Qingguo Hong, and Jinchao Xu)

Training Sparse Neural Networks using Compressed Sensing (with Jianhong Chen and Jinchao Xu)

Accelerated First-Order Methods: Differential Equations and Lyapunov Functions