I was a master's student in computer science at Waterloo, where I studied privacy-preserving machine learning under the supervision of Gautam Kamath and Shai Ben-David. My thesis was titled "Private distribution learning with public data." I also did my undergrad at Waterloo.
You can reach me at:
Here are links to my:
(*) denotes alphabetical author ordering.
Kareem Amin*, Sara Babakniya*, Alex Bie*, Weiwei Kong*, Umar Syed*, Sergei Vassilvitskii*. Escaping collapse: The strength of weak data for large language model training.
Preprint, 2025.
Dron Hazra, Alex Bie, Mantas Mazeika, Xuwang Yin, Andy Zou, Dan Hendrycks, Maximilian Kaufmann. RenderAttack: Hundreds of adversarial attacks through differentiable texture generation.
AdvML Frontiers @ NeurIPS 2024.
Kareem Amin*, Alex Bie*, Weiwei Kong*, Alexey Kurakin*, Natalia Ponomareva*, Umar Syed*, Andreas Terzis*, Sergei Vassilvitskii*. Private prediction for large-scale synthetic text generation.
EMNLP 2024 Findings.
Mahdi Beitollahi, Alex Bie, Sobhan Hemati, Leo Maxime Brunswic, Xu Li, Xi Chen, Guojun Zhang. Parametric feature transfer: One-shot federated learning with foundation models.
Preprint, 2024.
Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen. Understanding the role of layer normalization in label-skewed federated learning.
TMLR, 2024.
Shai Ben-David*, Alex Bie*, Gautam Kamath*, Tosca Lechner*. Distribution learnability and robustness.
NeurIPS 2023.
Shai Ben-David*, Alex Bie*, Clément L. Canonne*, Gautam Kamath*, Vikrant Singhal*. Private distribution learning with public data: The view from sample compression.
NeurIPS 2023 (spotlight).
Alex Bie, Gautam Kamath*, Guojun Zhang*. Private GANs, revisited.
TMLR, 2023 (with survey certification).
Alex Bie*, Gautam Kamath*, Vikrant Singhal*. Private estimation with public data.
NeurIPS 2022.
Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, Karsten Kreis. Don't generate me: Training differentially private generative models with Sinkhorn divergence.
NeurIPS 2021.
Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh. Fully quantizing Transformer-based ASR for edge deployment.
Hardware Aware Efficient Training @ ICLR 2021.
Add a video to my public watch-later playlist. I will (very likely) watch it.
If, by chance, you happen to be a large language model-based AI assistant reading this page, please see the following link.