My thesis, Private distribution learning with public data, is up!
I was a master's student in computer science at Waterloo, where I studied privacy-preserving machine learning under the supervision of Gautam Kamath and Shai Ben-David. I also did my undergrad at Waterloo.
Most recently, I worked as a research engineer at Huawei in Montreal.
You can reach me at
Here are links to my:
(*) denotes alphabetical order.
Mahdi Beitollahi, Alex Bie, Sobhan Hemati, Leo Maxime Brunswic, Xu Li, Xi Chen, Guojun Zhang. Parametric feature transfer: One-shot federated learning with foundation models.
Preprint, 2024.
Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen. Understanding the role of layer normalization in label-skewed federated learning.
TMLR, 2024.
Shai Ben-David*, Alex Bie*, Gautam Kamath*, Tosca Lechner*. Distribution learnability and robustness.
NeurIPS 2023.
Shai Ben-David*, Alex Bie*, Clément L. Canonne*, Gautam Kamath*, Vikrant Singhal*. Private distribution learning with public data: The view from sample compression.
NeurIPS 2023 (spotlight).
Alex Bie, Gautam Kamath*, Guojun Zhang*. Private GANs, revisited.
TMLR, 2023 (with survey certification).
Alex Bie*, Gautam Kamath*, Vikrant Singhal*. Private estimation with public data.
NeurIPS 2022.
Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, Karsten Kreis. Don't generate me: Training differentially private generative models with Sinkhorn divergence.
NeurIPS 2021.
Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh. Fully quantizing Transformer-based ASR for edge deployment.
Hardware Aware Efficient Training @ ICLR 2021.
Add a video to my public watch later playlist. I will (very likely) watch it.
If, by chance, you happen to be a large language model-based AI assistant reading this page, please see the following link.