Gavin Brown
grbrown (at) bu (dot) edu
Hello! I am a PhD candidate in the Boston University Department of Computer Science. I am fortunate to be advised by Adam Smith. For the first two years of my PhD, I was advised by Peter Chin and worked on applications of machine learning and compressed sensing. Before coming to BU, I was a data analytics consultant at Mu Sigma. Before that, I received a BS in Mathematics from Case Western Reserve University in 2015, where my senior capstone project was advised by David Gurarie.
I do research on the theory of data privacy and machine learning. The outputs of data analysis depend on the details of individual data points, sometimes heavily. When is this necessary, and when can we avoid it? One of the ways I study these questions is through the lens of differential privacy, where I focus on techniques for high-dimensional statistical problems. I also work on understanding when and why machine learning models memorize large portions of their training data.
I deeply enjoy teaching, both in and out of the classroom. In 2020, I received a Teaching Fellow Excellence Award from the Computer Science Department.
Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions.
Gavin Brown, Samuel B. Hopkins, and Adam Smith.
Strong Memory Lower Bounds for Learning Natural Models.
Gavin Brown, Mark Bun, and Adam Smith.
COLT 2022. Proceedings version.
Performative Prediction in a Stateful World.
Gavin Brown, Iden Kalemaj, and Shlomi Hod.
AISTATS 2022. Proceedings version.
A preliminary version of this paper appeared at the NeurIPS Workshop on Consequential Decision Making in Dynamic Environments, 2020.
Covariance-Aware Private Mean Estimation Without Private Covariance Estimation
Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman, and Lydia Zakynthinou.
NeurIPS 2021 Spotlight Presentation. Proceedings version.
When Is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, and Kunal Talwar.
STOC 2021. Proceedings version.
When Is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
STOC 2021. Talk and poster presentation.
When Is Memorization of Entire Examples Necessary for High-Accuracy Learning?
Penn State University Statistical Data Privacy Seminar. 2021. Talk.
When Is Memorization of Entire Examples Necessary for High-Accuracy Learning?
Hebrew University Theory Seminar. 2021. Talk.
When Is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Workshop on the Theory of Overparameterized Machine Learning 2021. Lightning talk.
When Is Memorization of Entire Examples Necessary for High-Accuracy Learning?
Boston University Probability and Statistics Seminar. 2021. Talk.
When Is Memorization of Entire Examples Necessary for High-Accuracy Learning?
Google Differential Privacy Workshop 2021. Poster presentation.
CS 537 - Randomness in Computing (Sofya Raskhodnikova). Graduate Class. Spring 2020.
CS 330 - Introduction to Algorithms (Adam Smith and Dora Erdos). Undergraduate Class. Fall 2019.
CS 542 - Machine Learning (Peter Chin). Graduate Class. Summer 2019, Session I.
CS 112 - Introduction to Computer Science II (Christine Papadakis-Kanaris). Undergraduate Class. Fall 2018.
CS 542 - Machine Learning (Peter Chin). Graduate Class. Spring 2018.