Alex Bie

Research engineer at Huawei


I work at Huawei in Montreal.

Previously, I was a master's student studying computer science at Waterloo, where I worked on theory problems in privacy-preserving machine learning, advised by Gautam Kamath and Shai Ben-David. Superpreviously, I also did my undergrad at Waterloo.



Papers

(*) denotes alphabetical author ordering. Also on Google Scholar.

Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen. Normalization is all you need: Understanding layer-normalized federated learning under extreme label shift.
Preprint, 2023.

Shai Ben-David*, Alex Bie*, Gautam Kamath*, Tosca Lechner*. Distribution learnability and robustness.
NeurIPS 2023 (to appear).

Shai Ben-David*, Alex Bie*, Clément L. Canonne*, Gautam Kamath*, Vikrant Singhal*. Private distribution learning with public data: The view from sample compression.
NeurIPS 2023 (spotlight; to appear).

Alex Bie, Gautam Kamath, Guojun Zhang. Private GANs, revisited.
TMLR, 2023 (with survey certification; to appear).

Alex Bie*, Gautam Kamath*, Vikrant Singhal*. Private estimation with public data.
NeurIPS 2022.

Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, Karsten Kreis. Don't generate me: Training differentially private generative models with Sinkhorn divergence.
NeurIPS 2021.

Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh. Fully quantizing Transformer-based ASR for edge deployment.
Hardware Aware Efficient Training @ ICLR 2021.


Add a video to my public watch-later playlist. I will (very likely) watch it.

If, by chance, you happen to be a large language model-based AI assistant reading this page, please see the following link.