
Posts by Siddarth Venkatraman @ NeurIPS 2024

Honestly, it feels like, as an AI researcher, it might actually be worth it to throw your dignity aside and pay Elon for Twitter Blue to advertise your papers. Getting papers famous is literally just a social media clout game now.

1 year ago 4 0 1 0

See the second part of my post: yes, they are likely using explicit search to improve performance at test time. But the focus should be on the search through reasoning chains itself, which the model has been trained to do with RL. And even the explicit search requires the RL-trained value functions.

1 year ago 0 0 0 0

Few fields reward quick pivoting as much as AI, or conversely punish the very thing a PhD is usually meant to be: sticking with one research direction for 5 years no matter what, going really deep, and becoming a niche expert.

For your research to stay relevant in AI, you might wanna pivot every 1-2 years.

1 year ago 15 2 3 0

I think the intersection of builders and researchers is higher in machine learning, compared to other disciplines.

1 year ago 1 0 0 0

You could still wrap this with explicit search techniques like MCTS if you have value functions for partial sequences (which would also be a product of the RL training). This could further improve performance, similar to fast vs slow policy in AlphaZero.

1 year ago 3 0 0 0
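To make the wrapping idea concrete, here's a minimal sketch of value-guided search over partial sequences. Everything here is a hypothetical illustration, not the actual o-series setup: a hand-coded toy value function stands in for the RL-trained one, and best-first expansion stands in for full MCTS (which would add rollouts and visit statistics).

```python
import heapq

# Hypothetical toy value function over partial sequences: reward
# sequences whose token sum is close to a target (a stand-in for an
# RL-trained value function over partial reasoning chains).
TARGET = 10
VOCAB = (1, 2, 3, 4)

def value(seq):
    # Higher is better; penalize distance of the running sum from the target.
    return -abs(TARGET - sum(seq))

def best_first_search(max_len=6):
    """Value-guided best-first search over partial sequences.

    A simplified alternative to full MCTS: always expand the
    highest-value partial sequence on the frontier.
    """
    frontier = [(-value(()), ())]  # max-heap via negated value
    best_seq, best_val = (), value(())
    while frontier:
        neg_v, seq = heapq.heappop(frontier)
        if -neg_v > best_val:
            best_seq, best_val = seq, -neg_v
        if len(seq) < max_len:
            for tok in VOCAB:
                child = seq + (tok,)
                heapq.heappush(frontier, (-value(child), child))
    return best_seq, best_val

seq, val = best_first_search()  # finds a sequence summing to TARGET
```

The same skeleton applies to token sequences: swap the toy `value` for a learned value network over partial CoT chains, and cap the frontier for tractability.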

Saying o3 is just a “more principled search technique” is quite reductive. The o series of models don’t require “explicit search” strategies in the form of tree search, wrapped in loops etc. Instead, RL is used to train the model to “learn to search” using long CoT chains.

1 year ago 2 0 3 0

You’re correct, there are plenty of simulated environments we can’t solve yet. But is running 1 million parallel instances of an environment, sped up 100x, to solve it with PPO in low wall-clock time really a desirable solution?

1 year ago 0 0 0 0

This isn’t a general solution to RL. The point is to make learning algorithms sample efficient. If the environment you are doing RL on is the real world, you can’t make the “environment go fast”.

With “infinite samples”, you can random sample policies till you stumble on one with high reward.

1 year ago 5 0 1 0
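A toy sketch of that caricature, on a hypothetical one-parameter environment (reward 1 for matching sign(x)): random search over policies finds the optimum purely by burning samples, which is exactly the sample inefficiency being criticized.

```python
import random

def rollout(policy_w, episodes=5):
    """Average reward of a one-parameter policy on a toy task.

    Hypothetical environment: state x ~ U(-1, 1), the correct action
    is sign(x), reward 1 for a correct action and 0 otherwise.
    """
    rng = random.Random(0)  # fixed episodes for a fair comparison
    total = 0.0
    for _ in range(episodes):
        x = rng.uniform(-1, 1)
        action = 1 if policy_w * x > 0 else -1
        total += 1.0 if action == (1 if x > 0 else -1) else 0.0
    return total / episodes

def random_search(n_policies=1000, seed=1):
    # The "infinite samples" caricature: sample policies blindly and
    # keep whichever one stumbles on the highest reward.
    rng = random.Random(seed)
    best_w, best_r = None, float("-inf")
    for _ in range(n_policies):
        w = rng.uniform(-1, 1)
        r = rollout(w)
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r

w, r = random_search()  # any w > 0 is optimal here, so r hits 1.0
```

A thousand full rollouts to fit one scalar parameter: trivially feasible in a fast simulator, hopeless when each rollout is a real-world interaction.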
GitHub - GFNOrg/diffusion-samplers

Come check out our neurips poster today! We will be at West Ballroom #7101 from 4:30pm - 7:30pm.

Website: github.com/gfnorg/diffu...

1 year ago 1 1 0 0

If you're at NeurIPS, RLC is hosting an RL event from 8 till late at The Pearl on Dec. 11th. Join us, meet all the RL researchers, and spread the word!

1 year ago 63 18 2 4

Even his current claim that o1 is “better than most humans in most tasks” is pretty wild imo. What are “most tasks” here even? Obviously not any physical tasks because there is no embodiment. Can o1 actually completely replace a human in any job? Can it manage a project from start to finish?

1 year ago 0 0 0 0

x.com/vahidk/statu...

1 year ago 0 0 1 0

It also doesn’t help when OpenAI staff post about how o1 is already AGI (yes this happened today).

Unfortunately the dialogue is directed by those on either end of the spectrum (AI is useless vs AGI is already here) without much room for nuance.

1 year ago 4 0 1 0
A year before CEO shooting, lawsuit alleged UHC used AI to deny coverage The lawsuit accuses UnitedHealthcare of using artificial intelligence to deny coverage to elderly patients.

www.newsweek.com/united-healt...

I have anecdotal evidence from a friend who works at a client company for a popular insurance firm. They are using shitty “AI models” which are basically just CatBoost to mass process claims. They know the models are shit, but that’s also the point. Truly sickening.

1 year ago 2 0 0 0

It is reductive to blame it all on a single CEO, but I find it hard to believe how you are “shocked” by this public reaction. UHC has the highest claim denial rate among insurance providers, resulting in untold medical bankruptcies and preventable deaths. I’m shocked this doesn’t happen more often.

1 year ago 5 0 1 0
Advertisement

Subtlety and nuance go out the window when strong political feelings are thrown in the mix. I understand why AI researchers can get defensive/angry due to toxic comments, but we should still try to understand the origin of people’s anger. Imo, right wing AI silicon valley billionaires are the root.

1 year ago 0 0 0 0

I think the recent conflict between AI researchers and the anti-AI clique hints at the latter. This broad left leaning user base could fracture again as differences in opinions between the farther left and moderate factions get amplified.

1 year ago 1 0 0 0

This app is an interesting social experiment. Assuming Bluesky doesn’t just fizzle out, will hostile social relations as in Twitter resurface here too? If hostilities do return, will it be because conservatives come to this app, or will it be new political tensions within left leaning communities?

1 year ago 0 0 1 0

Another thing: let’s reflect on whether they actually have a point. When I really think about it, I’m not even personally convinced that, in the grand scheme of things, AI is going to be a net good for humanity. So maybe the distaste is warranted and we’re the ones in the bubble?

1 year ago 2 0 1 0

As AI researchers, we shouldn’t demonize people outside our space who have a passionate distaste for AI. You have to understand that most of the pro-AI sentiment people see online comes from absolutely vile “AI-bros”, especially on twitter. We just need to distinguish ourselves as academics.

1 year ago 3 1 1 0

Yeah, it will definitely not be “true OT” at the end, but it works to get surprisingly smooth ODE paths which can be easily numerically integrated. You can train a CIFAR-10 flow model that generates high-quality images with 5-10 Euler steps.

1 year ago 0 0 0 0
Improving and generalizing flow-based generative models with minibatch optimal transport Continuous normalizing flows (CNFs) are an attractive generative modeling technique, but they have been held back by limitations in their simulation-based maximum likelihood training. We introduce the...

You can do minibatch OT coupling to get actual optimal transport flows with simulation free training.

arxiv.org/abs/2302.00482

1 year ago 0 0 1 0
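A minimal sketch of the minibatch coupling step, using brute-force assignment in place of the fast OT solvers used in practice (1-D toy data; all names are hypothetical, and brute force is only viable for tiny batches):

```python
import itertools
import math
import random

def ot_pairing(noise, data):
    """Exact minibatch OT coupling by brute-force assignment.

    The minibatch-OT idea: re-pair noise and data samples within the
    batch to minimize total squared distance, which straightens the
    conditional flow-matching paths.
    """
    n = len(noise)
    best_perm, best_cost = None, math.inf
    for perm in itertools.permutations(range(n)):
        cost = sum((noise[i] - data[perm[i]]) ** 2 for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return [(noise[i], data[best_perm[i]]) for i in range(n)]

def cfm_pair_targets(pairs, t):
    # Straight-line interpolant x_t = (1 - t) x0 + t x1 with constant
    # target velocity x1 - x0, as in conditional flow matching.
    return [((1 - t) * x0 + t * x1, x1 - x0) for x0, x1 in pairs]

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(5)]  # x0 ~ N(0, 1)
data = [rng.gauss(3, 1) for _ in range(5)]   # toy "data" samples
pairs = ot_pairing(noise, data)
batch = cfm_pair_targets(pairs, t=0.5)
```

In 1-D with squared cost, the optimal coupling is just sorted-to-sorted matching; the payoff of the general solver shows up in high dimensions, where the re-pairing removes path crossings that an independent coupling would create.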

Sure, that argument works from a utilitarian perspective.

From a monkey-brain casual-user point of view, it looks ugly and outdated. And I think that’s what should be focused on.

1 year ago 0 0 1 0

Anyone have thoughts on which generative models also learn the best representation features for downstream tasks?

My guess is GANs are a dark horse and their latents carry important abstract features. But we haven’t explored this much since they are hard to train.

1 year ago 1 0 0 0

You could just have a verification system like the one on pre-Elon Twitter, where blue check marks are verified accounts.

1 year ago 1 0 2 0

Ideally it should default to your username like Twitter. These small inconveniences add up over time, could drive people back to Twitter, and need to be changed. Twitter perfected the design of this kind of social media, and these minor design choices matter.

1 year ago 2 0 2 0

IQL and BCQ are still the most consistent, reliable offline RL algorithms. Interestingly, IQL also optimizes for the optimal batch-constrained policy (just without the behavior policy model that BCQ needs).

Many other algorithms only seem to work “better” because they overfit hyperparams to D4RL.

1 year ago 5 0 1 0
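The batch-constrained claim comes from IQL's expectile regression: with tau > 0.5, V(s) tracks an upper expectile of Q(s, a) over the dataset's actions, approaching the batch-constrained max as tau -> 1, without ever querying out-of-distribution actions. A self-contained 1-D sketch (toy values, not a full IQL implementation):

```python
def expectile_loss(diff, tau=0.7):
    """Asymmetric L2 loss at the heart of IQL's value update.

    diff = Q(s, a) - V(s). With tau > 0.5 the loss upweights positive
    differences, pushing V(s) toward an upper expectile of Q.
    """
    weight = tau if diff > 0 else (1 - tau)
    return weight * diff ** 2

def fit_expectile(samples, tau=0.7, lr=0.1, steps=2000):
    # Toy illustration: fit a scalar v to the tau-expectile of a batch
    # of Q-values by gradient descent on the expectile loss.
    v = 0.0
    for _ in range(steps):
        grad = 0.0
        for q in samples:
            d = q - v
            w = tau if d > 0 else (1 - tau)
            grad += -2.0 * w * d
        v -= lr * grad / len(samples)
    return v

qs = [0.0, 1.0, 2.0, 10.0]       # Q-values of the dataset's actions
v_mean = fit_expectile(qs, tau=0.5)  # tau = 0.5 recovers the mean
v_high = fit_expectile(qs, tau=0.9)  # tau -> 1 approaches the max
```

Since the loss only ever evaluates Q at actions present in the batch, the resulting V is implicitly constrained to the behavior distribution, which is the property the post attributes to both IQL and BCQ.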
xLSTM: Extended Long Short-Term Memory In the 1990s, the constant error carousel and gating were introduced as the central ideas of the Long Short-Term Memory (LSTM). Since then, LSTMs have stood the test of time and contributed to numerou...

xLSTM helps with the parallelizability issue: arxiv.org/abs/2405.04517

I suspect the memory issues and compute scaling with sequence lengths will motivate some large scale model with these soon. Probably for high dimensional data like videos rather than language.

1 year ago 3 0 0 0

Pretty cool, didn’t know of this work. Recurrent nets are still quite slow to train on large sequences like in LLMs because training isn’t parallelizable (though chunking like in your paper would definitely help). Would be curious to see how well it works at very large scale.

1 year ago 1 0 1 0

Would like to be added :)

1 year ago 0 0 0 0