Publications
(2016). Mining Large Graphs.
Handbook of Big Data. 191-220.
(2016). Structural properties underlying high-quality Randomized Numerical Linear Algebra algorithms.
Handbook of Big Data. 137-154.
(2018). Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist.
Proceedings of the 24th Annual SIGKDD. 293-301.
(2021). Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification.
International Conference on Learning Representations.
(2019). ANODEV2: A Coupled Neural ODE Evolution Framework.
Proceedings of the 2019 NeurIPS Conference.
(2016). A discriminative and compact audio representation for event detection.
Proceedings of the 2016 ACM Conference on Multimedia (MM '16). 57-61.
(2019). Distributed estimation of the inverse Hessian by determinantal averaging.
Proceedings of the 2019 NeurIPS Conference.
(2018). Error Estimation for Randomized Least-Squares Algorithms via the Bootstrap.
Proceedings of the 35th ICML Conference. 3223-3232.
(2016). Feature-distributed sparse regression: a screen-and-clean approach.
Proceedings of the 2016 NIPS Conference.
(2019). GPU Accelerated Sub-Sampled Newton's Method.
Proceedings of the 2019 SDM Conference. 702-710.
(2019). HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision.
Proceedings of ICCV 2019.
(2020). Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks.
Proceedings of the 2020 SDM Conference.
(2018). Hessian-based Analysis of Large Batch Training and Robustness to Adversaries.
Proceedings of the 2018 NeurIPS Conference. 4954-4964.
(2020). Inefficiency of K-FAC for Large Batch Size Training.
Proceedings of the AAAI-20 Conference.
(2021). Lipschitz recurrent neural networks.
International Conference on Learning Representations.
(2019). Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression.
Proceedings of the 2019 COLT Conference.
(2016). A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark.
Proceedings of the 5th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics.