
Posts by Eli Chien

I don't recall many papers getting retracted from NeurICMLR or having errata after being published. This is really sad and unfortunate.

That being said, feel free to let me know if there's any error in my work. I will really appreciate the comments. 4/n, n=4.

8 months ago 1 0 0 0

New students and researchers keep rediscovering, after wasting tons of time, that some "famous" papers are wrong (in the best case...), yet they still have to cite or compare against these works since they're well-cited or published in NeurICMLR. How does that even make sense? 3/n

8 months ago 1 0 1 0

Examples: critical errors in papers, awful reproducibility, and, worst of all, intentional lying/cheating. These researchers still earn plenty of citations and nice jobs, and their reputations have not been "punished". 2/n

8 months ago 1 0 1 0

Some random thoughts after chatting with multiple friends: I do feel that one reason the general ML research community is getting worse (imo, maybe not for others) is that we don't share the bad things we find with others often enough. 1/n

8 months ago 3 0 1 0

I will be at #icml2025 next week to present our work on LLM unlearning evaluation [https://arxiv.org/abs/2412.08559] We also have work on AI copyright to be presented at the MemFM and R2FM workshops. Please let me know if you're also around! I will be around 7/15-7/17.

9 months ago 0 0 0 0

This is a great paper! It resonates with one of our recent works (a short version to appear at the ICML MemFM workshop!). We really need to be careful about defining a "meaningful" copyright measure.

9 months ago 1 0 0 0

What needs to be taken care of when applying privacy amplification by iteration to zeroth-order optimization? Can it even be done? What's a "good design" for a DP zeroth-order method? Check out our latest work! It's so nice to collaborate with Wei-Ning (as usual) and Pan!

10 months ago 2 1 0 0
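For readers outside DP optimization, here is a minimal sketch of the kind of method the questions above concern: a standard two-point zeroth-order gradient estimate with Gaussian noise added for privacy. All names and parameters here are illustrative assumptions, not the paper's actual algorithm or its privacy accounting.

```python
import numpy as np

def zeroth_order_dp_step(f, x, lr=0.1, mu=1e-3, sigma=1.0, rng=None):
    """One step of a (hypothetical) DP zeroth-order method: estimate the
    gradient from two function evaluations along a random direction, then
    add Gaussian noise for privacy. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)                     # random direction
    g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u   # two-point estimate
    g_noisy = g + sigma * rng.standard_normal(x.shape)   # Gaussian mechanism
    return x - lr * g_noisy

# Toy usage: minimize ||x||^2 without ever computing a gradient
f = lambda x: float(np.sum(x ** 2))
x = np.ones(3)
for _ in range(200):
    x = zeroth_order_dp_step(f, x, lr=0.05, sigma=0.1)
```

The calibration of `sigma` is deliberately left abstract; getting that right, especially under amplification by iteration, is exactly the question the post raises.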
Open Problem: Selection via Low-Sensitivity Queries Two of the basic tools for building differentially private algorithms are noise addition for answering low-sensitivity queries and the exponential mechanism for selection. Could we do away with the e...

Open Problem: Selection via Low-Sensitivity Queries

11 months ago 5 2 0 0
Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning Large Language Models are trained on extensive datasets that often contain sensitive, human-generated information, raising significant concerns about privacy breaches. While certified unlearning appro...

Preprint: arxiv.org/abs/2412.08559

Stay tuned for the GitHub code and our updated version (we have some new results!).

I also want to thank my friends @jyhong.bsky.social, Chulin Xie, Ayush Sekhari, and Martin Pawelczyk for their helpful discussions and clarifications of their works! 2/n, n=2.

11 months ago 1 0 0 0

Our paper on LLM unlearning evaluation was accepted at #icml2025!

Thanks to the lead author Rongzhe and my collaborators
@mufei-li.bsky.social @xiangyue96.bsky.social
(and others who may not be on Bluesky).

It's my first "last" author paper. Feels quite special :p 1/n

11 months ago 4 0 1 1

I wonder how well this result can be applied to convert the KL-based results in the sampling literature (i.e., LMC convergence) to Rényi divergence, compared to results that directly bound the Rényi divergence (i.e., the results in Sinho Chewi's book or the paper by Vempala and Wibisono 😂)

11 months ago 4 0 0 0
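For context, the standard definitions behind the comment above (textbook material, not from the thread): the Rényi divergence of order \(\alpha\), which recovers KL as \(\alpha \to 1\):

```latex
R_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{dP}{dQ}(x)\right)^{\alpha}\right],
\qquad
\lim_{\alpha \to 1} R_\alpha(P \,\|\, Q) \;=\; \mathrm{KL}(P \,\|\, Q).
```

Since \(R_\alpha\) is nondecreasing in \(\alpha\), a KL bound alone does not bound \(R_\alpha\) for \(\alpha > 1\); any KL-to-Rényi conversion needs extra structure, which is why directly bounding the Rényi divergence can give sharper results.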

I wrote a post explaining why, in practice, privacy amplification by subsampling doesn't quite work as well as promised. This is a significant problem for differentially private machine learning applications, but I don't know if this is as widely known as it should be.

1 year ago 12 2 2 0
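The folklore the post pushes against can be made concrete. Below is a quick sketch of the standard amplification bound for Poisson subsampling (a well-known result, not the post's own analysis; the post may emphasize other failure modes):

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """If a mechanism is (eps, delta)-DP, running it on a Poisson
    q-subsample of the data is (log(1 + q*(e^eps - 1)), q*delta)-DP.
    Only for small eps does this behave like the folklore q*eps."""
    return math.log1p(q * math.expm1(eps))

small = amplified_epsilon(1.0, 0.01)   # close to the folklore q*eps regime
large = amplified_epsilon(10.0, 0.01)  # the q-fold saving mostly evaporates
```

For eps = 10 and q = 0.01 the bound is about 5.4, nowhere near q*eps = 0.1, which is one concrete sense in which subsampling amplification under-delivers in practice.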
Statistical Optimal Transport This monograph aims to offer a concise introduction to optimal transport, quickly transitioning to its applications in statistics and machine learning.

PSA β€” if you’re interested in learning about statistical aspects of optimal transport, check out this new monograph by Sinho Chewi, Jonathan Niles-Weed, and Philippe Rigollet: link.springer.com/book/10.1007...

1 year ago 43 7 1 2
Privacy Amplification by Subsampling Privacy Amplification by Subsampling is an important property of differential privacy. It is key to making many algorithms efficient – particularly in machine learning applications. Thus a lot of wor...

Privacy Amplification by Subsampling

1 year ago 9 3 0 1

The last one is crazy 🤣🤣🤣

1 year ago 3 0 0 0

I would like to thank Pan Li, Olgica Milenkovic, Kamalika Chaudhuri, and Cho-Jui Hsieh for their help during my job search. I also appreciate the help from all my friends who gave me suggestions or discussed the situation with me! (I can't list everyone due to the space limit.) 3/3

1 year ago 0 0 0 0

I will keep working on trustworthy/regulatable AI, especially on privacy, machine unlearning, and AI copyright issues. Feel free to let me know if you want to collaborate in the future! Also, I wish the best of luck to my friends who are still on the job market now. It is a really tough year :( 2/3

1 year ago 0 0 1 0

Life Update: I am happy to share the news that I will be an Assistant Professor at the National Taiwan University EE department! I am very grateful for this opportunity to be back in my home country, especially at the university where I was an undergrad! 1/3

1 year ago 1 0 1 0

I am so shocked to learn that Poisson (in French) means fish...... As a person who constantly deals with the Poisson distribution, Poissonization, etc., I now have a completely different feeling about Poisson 🤣. I guess we always learn something unexpected on the internet 🤣

1 year ago 0 0 0 0

I believe so, but I will have to wait until Monday to know. I will DM you the Zoom link if there is one!

1 year ago 1 0 0 0
School of CSE Seminar Series: Eli Chien | School of Computational Science and Engineering School of CSE hosts a seminar from Georgia Tech Postdoctoral Fellow Eli Chien

I will give a talk at GaTech CSE seminar this Friday on the topic: "Machine Unlearning: The General Theory and LLM Practice of Privacy".

Please join if you are around :)

cse.gatech.edu/events/2025/...

1 year ago 0 0 1 0

Thanks for sharing! We are actually writing something related to this. Will probably cite this post :p

1 year ago 0 0 0 0
Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness We study the Differential Privacy (DP) guarantee of hidden-state Noisy-SGD algorithms over a bounded domain. Standard privacy analysis for Noisy-SGD assumes all internal states are revealed, which lea...

Preprint: arxiv.org/abs/2410.01068

We will soon update it with more related works and make the changes promised during the rebuttal.

I am now cooking something more exciting along this line of work with my collaborators. Hope to share it with everyone soon :p

1 year ago 1 0 0 0
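A toy sketch of the setting the paper studies, assuming projected noisy gradient steps on a bounded L2 ball where only the last iterate is released. This is illustrative, not the paper's code or its privacy accounting:

```python
import numpy as np

def hidden_state_noisy_sgd(grad, x0, steps=100, lr=0.1, sigma=1.0,
                           radius=1.0, rng=None):
    """Sketch of hidden-state Noisy-(S)GD on a bounded domain: noisy
    gradient steps projected onto an L2 ball of the given radius; only
    the final iterate is released, and the intermediate states stay
    hidden. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * (grad(x) + sigma * rng.standard_normal(x.shape))
        norm = np.linalg.norm(x)
        if norm > radius:                # projection onto the bounded domain
            x = x * (radius / norm)
    return x                             # only the last iterate is revealed

# Toy usage: quadratic loss with gradient 2x on the unit ball
out = hidden_state_noisy_sgd(lambda v: 2 * v, np.ones(5), sigma=0.5)
```

The hidden-state analysis exploits the fact that the intermediate iterates are never revealed, which is what lets the privacy loss converge instead of growing with the number of steps.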

I am glad to share that our paper on hidden-state Noisy SGD DP analysis for non-convex non-smooth problems has been accepted at #ICLR2025! I really appreciate the effort from reviewers, AC, and all my friends who provided valuable comments and feedback!

1 year ago 2 0 1 0

It's not a normal distribution... :)

1 year ago 0 0 0 0

With @adamsmith.xyz and @thejonullman.bsky.social, we have compiled a set of profiles of 29 people in the "foundations of responsible computing" community ("mathematical research in computation and society writ large") who are on the faculty job market.

Link: drive.google.com/file/d/1Hyvg... 1/3

1 year ago 39 16 2 1

Why do we need "theoretical guarantees" for trustworthy AI? Because we need to prevent worst-case scenarios, and that is exactly where theory in AI truly shines and is necessary, in my opinion. That's also why my work on theoretical guarantees for machine unlearning and DP matters! 😉

1 year ago 1 0 0 0

It's my great pleasure to contribute to the great A3D3 community. Congrats to all #A3D3 members!

1 year ago 2 0 0 0

The last time I attended NeurIPS, in Vancouver in 2019, I missed my flight back to Urbana due to a border check. Today, after NeurIPS 2024, I got stuck in Dallas due to a flight cancellation... 🥲🥲🥲

1 year ago 1 0 0 0