#ICML2024
🚀 Advanced Conformal Prediction: Practical Uncertainty Quantification for Real-World ML 🚀
Preorder / Early Access – chapters released progressively. The full book is priced at over $80 USD.

Are you ready to take your machine learning models to the next level? Move beyond basic predictions and master advanced techniques for quantifying and managing uncertainty with confidence.

In "Advanced Conformal Prediction," you'll dive deep into sophisticated methods designed to enhance reliability and decision-making in real-world AI applications. This comprehensive guide equips you with practical skills and cutting-edge tools, empowering you to confidently deploy machine learning solutions in high-stakes environments.

Whether you're a data scientist, engineer, researcher, or practitioner, this book will become your essential resource for ensuring the trustworthiness of your AI models.

📖 What's inside:
• Advanced methods for uncertainty quantification
• Practical insights for real-world AI deployment
• Techniques for improving model reliability

🌟 Perfect for:
• Data scientists and ML engineers
• AI researchers
• Professionals deploying ML in critical industries

Secure your copy now and revolutionize how you manage uncertainty in machine learning!

valeman.gumroad.com/...

📄 Genentech paper: arxiv.org/abs/2405.0...

#MachineLearning #ConformalPrediction #UncertaintyQuantification #ICML2024 #Genentech #STEM #LearnLikeAPro

5/ 📝 Paper was at #ICML 2024 - ML4LMS workshop!
Poster: openreview.net/attachment?i...
Code: github.com/ddofer/Prote...

#phdlife #ICML #ICML2024 #research #huji

Protein language models expose viral mimicry and immune escape Viruses elude the immune system through molecular mimicry, adopting their hosts' biophysical characteristics. We adapt protein language models (PLMs) to differentiate between human and viral...

1. Our paper "Protein Language Models Expose Viral Mimicry and Immune Escape" was at #ICML2024. We delve into adversarial examples in biology and how machine learning can understand viruses! 🦠
openreview.net/forum?id=gGn...

#ICML #ML4LMS #science #bioinformatics #ML #virus #LLM #ai

Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling by Bairu Hou et al. #ICML2024
tl;dr: generate multiple clarifications of input txt w/ external LLM then forward:
>disagreement btw outputs -> data uncertainty
>avg uncertainty in each output -> model uncertainty
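The decomposition above can be sketched in a few lines. A hedged, illustrative sketch only: `clarify` and `predict_dist` are hypothetical stand-ins for the external clarification LLM and the forward model, not the authors' API.

```python
import math

def entropy(dist):
    # Shannon entropy of a {label: probability} distribution (in nats).
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def decompose_uncertainty(x, clarify, predict_dist, n=5):
    """Split total predictive uncertainty into data vs. model uncertainty
    by ensembling over input clarifications (illustrative sketch)."""
    dists = [predict_dist(c) for c in clarify(x, n)]
    # Model uncertainty: average entropy within each clarified prediction.
    model_u = sum(entropy(d) for d in dists) / len(dists)
    # Total uncertainty: entropy of the ensemble-averaged distribution.
    labels = {k for d in dists for k in d}
    avg = {k: sum(d.get(k, 0.0) for d in dists) / len(dists) for k in labels}
    total_u = entropy(avg)
    # Data uncertainty: disagreement between clarified outputs
    # (total minus average entropy, i.e. the mutual-information term).
    data_u = total_u - model_u
    return data_u, model_u
```

If every clarification yields the same distribution, the disagreement term vanishes and all remaining uncertainty is attributed to the model.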

Excited that our paper quantifying #LLM usage in paper reviews was selected as an #ICML2024 oral (top 1.5% of submissions)! 🚀

Main results👇
proceedings.mlr.press/v235/liang24...

Media Coverage: The New York Times
nyti.ms/3vwQhdi

Find out all the papers from the position paper track at #ICML2024 here: icml.cc/virtual/2024...

The position paper "Bayesian Deep Learning (BDL) is Needed in the Age of Large-Scale AI" is my favorite in this #ICML2024 track.
It gives an excellent apologia for BDL, a pragmatic summary of the challenges, and plenty of directions to explore
arxiv.org/abs/2402.00809

[1/2] Position paper at #ICML2024: “An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience”

Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution – Paper Explained (YouTube video by AI Coffee Break with Letitia)

Text diffusion can finally generate good text!📃
We've combed through the dense math of the “Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution” paper to bring you the key insights and takeaways.👇
📺 youtu.be/K_9wQ6LZNpI
The paper won the #ICML2024 best paper award.

#icml2024 in Vienna

At #ICML2024 in Vienna, our PI @orvieto_antonio is co-organizing the workshop "Next Generation of Sequence Modeling Architectures". The workshop will bring together various researchers to chart the course for the next generation of sequence models. The focus is on better…

What can AI/ML researchers learn from 🙋survey methodology to make data collection 🎯 less biased and more 😀human centric? @stephnie @barbara_plank and @fraukolos are presenting their position paper in hall C #2007! Go and see it! #ICML2024

Almost there ...... today (In 90 minutes) ⏲️5:40 pm, catch me in talk on 📢human disagreement and 📏model calibration at 🧑‍⚖️ Legal Tech Social 🪩in 🎵Lehar1-4🎶 #icml2024 #legaltech #NLProc (Joint work with @TYSSSantosh2 , Oana, @matgrabmair, @barbara_plank )

#icml2024 paper:
how are LLMs used in reviews?
10% of ICLR sentences are auto-generated.
More LLM usage when submitting later
Less when referring to at least one other paper
arxiv.org/abs/2403.07183
🤖
#ML #machinelearning #NLP
#NLProc #LLM #LLMs #data #DataScience

Accelerating Heterogeneous Federated Learning with Closed-form Classifiers Federated Learning (FL) methods often struggle in highly statistically heterogeneous settings. Indeed, non-IID data distributions cause client drift and bias...

📜Visit our #ICML2024 poster tomorrow introducing Fed3R, a robust and efficient Federated Learning method for heterogeneous settings leveraging closed-form classifiers.

🚀Led by @ErosFani, with @bcaputo_iit @mciccone_AI

🗺️Thu, 11.30AM-1PM, Hall C #2507.

proceedings.mlr.press/v235/fani-24a.…
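For intuition, here is a minimal sketch of the kind of closed-form (ridge-regression) classifier such methods build on. Illustrative only, not Fed3R's exact formulation: the point is that the sufficient statistics are sums over examples, so per-client versions can simply be added up by a server without iterative training.

```python
import numpy as np

def fit_closed_form(features, labels, n_classes, lam=1e-3):
    # Ridge-regression classifier solved in closed form: no SGD, no client drift.
    # The statistics features.T @ features and features.T @ Y are sums over
    # examples, so federated clients can compute them locally and a server
    # can aggregate them by simple addition before solving once.
    d = features.shape[1]
    G = features.T @ features + lam * np.eye(d)   # Gram matrix + ridge term
    Y = np.eye(n_classes)[labels]                 # one-hot targets
    return np.linalg.solve(G, features.T @ Y)     # one linear solve, done

def predict(W, features):
    return (features @ W).argmax(axis=1)
```

Because the solution is deterministic given the aggregated statistics, it is unaffected by how the data is split across clients, which is the appeal in non-IID settings.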

Pruning gets worse with overparametrization.
Testing their combinatorial method, Zhang & Papayan (@stats285) find that when you add (unneeded) parameters, you end up with a larger (absolute) number of parameters for the same performance.
#ICML2024

ICML FOMO? I'll share papers from here.
At #ICML2024? Talk to me about, e.g.:
Tinybenchmarks🐭
LoRA's weight characteristics (asymmetry)☯️
Model merging♻️
open human feedback🗣️
BabyLM👼
Details (or highlights of recent research):🤖

🚀 Heading to 🇦🇹 #ICML2024! I'll give a talk about pluralistic human values and alignment at the Machine Learning Scientists in Legal Tech session tomorrow (Wed.) afternoon. If you're interested in human label variation and NLP for LegalTech, please reach out! 👋

And consider following the authors Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Clement Bazan, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan and Thomas Möllenhoff.

See you this week in Vienna! 🧵 (9/9)
#ICML2024 #NLProc

Variational Learning is Effective for Large Deep Networks We give extensive empirical evidence against the common belief that variational learning is ineffective for large neural networks. We show that an optimizer called Improved Variational Online...

Want to know more? Be sure to check the paper and code!

📄 Paper: arxiv.org/abs/2402.17641
💻 Code: github.com/team-approx-...
📺 Video: youtu.be/TRNYnRRJBRg?...

(8/🧵) #ICML2024 #NLProc

We use the variance estimate from training to calculate the leave-one-out cross-validation loss. This is a measure of generalization performance.
✅ IVON’s estimate follows the true test loss far better than AdamW

(7/🧵) #NLProc #ICML2024

IVON is cheaper than other Hessian-based model-merging methods, but performs just as well ✅
• We directly use the Hessian obtained during training
• No second pass through the dataset like prior methods

(6/🧵) #NLProc #ICML2024

An earlier version of IVON won the first place at the NeurIPS 2021 competition on Approximate Inference in Bayesian Deep Learning 🏆

(5/🧵) #NLProc #ICML2024 #DeepLearning

Training GPT-2 models from scratch with IVON gets better perplexity scores than AdamW! 🤯

For image classification with ResNet, IVON is better than AdamW in terms of accuracy and uncertainty!

(4/🧵) #NLProc #ICML2024

Training with IVON has many benefits 🚀

↗️ predictive uncertainty compared to MC-dropout and SWAG
↘️ model-merging costs
↗️ prediction of generalization error for diagnostics and early stopping
↗️ understanding of model sensitivity to data

(3/🧵) #NLProc #ICML2024

Handling the Positive-Definite Constraint in the Bayesian Learning Rule The Bayesian learning rule is a natural-gradient variational inference method, which not only contains many existing learning algorithms as special cases but also enables the design of new algorith...

IVON is built on Lin et al. (2020) (proceedings.mlr.press/v119/lin20d....) with practical hacks for performance at scale! 🐱‍💻

🚀 Similar cost to Adam
🚀 Searching for the best hyperparameters is easy
🚀 IVON is easy to use for multi-GPU training

(2/🧵) #NLProc #ICML2024

Variational Learning is Effective for Large Deep Networks [ICML 2024 Spotlight] – Spotlight at the 41st International Conference on Machine Learning (ICML), Vienna, July 21-27, 2024.

🤔 Variational learning is often thought to be impractical
🔥 Plot twist: it actually works better than Adam!

Meet IVON, a new optimizer that brings the best out of variational learning – 🧵 (1/9) #NLProc #ICML2024

📰 arxiv.org/abs/2402.17641

youtu.be/TRNYnRRJBRg

Anyone here at #ICML2024 in Vienna this week?
Would love to meet up and chat about the interface of climate data/science and ML, @pangeo_data , open source/open science in general!

And consider following the authors Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Clement Bazan, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan & Thomas Möllenhoff (MCML, UKP Lab, Hessian.ai, Tokyo Tech & RIKEN AIP).

See you in Vienna!

#ICML2024 #NLProc


Want to know more? Be sure to check the paper and code!

📄 Paper: arxiv.org/abs/2402.17641
💻 Code: github.com/team-approx-...

(3/🧵) #ICML2024 #NLProc
