
Posts by Jonathan Balloch

I think a lot of this over-indexes on the current composition of the administration. I would be surprised if this all represents a broadly held belief about AI models.

1 month ago 0 0 0 0

Nice, thanks for clarifying!

5 months ago 0 0 0 0

very cool!

5 months ago 0 0 0 0

Not to rain on the parade, but this is the same size as the OpenDV dataset, right? Is the novel part the data? Or perhaps that it is in Europe?

5 months ago 1 0 2 0

Ooo, Peak Design is legit

7 months ago 1 0 0 0

For the record, this is why LLMs have been more widely successful and applicable than, say, vision-language-action models, and why VLAs are catching up: this is a recipe that can be applied very broadly, but it only works at a production level if the data domain is VERY thoroughly covered.

1 year ago 0 0 0 0

The more data you have, the better an embedding space you have, and the more likely your interpolation is to be correct. So you are right in that something like the answer is probably in the training data, but you are wrong that the exact answer is in the training data or is searched for.

1 year ago 2 0 1 0

Like many social media discussions, what is missing here is nuance. LLMs, like all generative no-prior ML models, are, effectively, interpolating. But in the case of LLMs, they are interpolating in the space of "next-token embeddings."
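
A minimal sketch of what interpolating in that space can look like, using a toy, made-up embedding table (the tokens, vectors, and distances are all illustrative, not from any real model):

```python
import numpy as np

# Toy vocabulary with hand-picked 2-D embeddings (purely illustrative).
vocab = ["cold", "cool", "warm", "hot"]
emb = np.array([[0.0, 1.0], [0.3, 0.8], [0.7, 0.3], [1.0, 0.0]])

def interpolate(a, b, t):
    """Linearly interpolate between the embeddings of tokens a and b."""
    va, vb = emb[vocab.index(a)], emb[vocab.index(b)]
    return (1 - t) * va + t * vb

def nearest_token(v):
    """Decode a point in embedding space back to the closest token."""
    return vocab[int(np.argmin(np.linalg.norm(emb - v, axis=1)))]

# Halfway between "cold" and "hot" decodes to a sensible in-between token,
# not an exact string retrieved verbatim from training data.
print(nearest_token(interpolate("cold", "hot", 0.5)))  # -> "warm"
```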

1 year ago 0 0 1 0

Fundamentally you can *have* both, but functionally, when you optimize for multiple objectives, usually only one ends up as the primary. Guzdial's article is suggesting that the prior push being so attached to undergrad outcomes is a bad primary objective for K-12 students, which is reasonable...

1 year ago 0 0 1 0

Le Chat underrated

1 year ago 1 0 0 0

Well, that's not great

1 year ago 1 0 0 0

I think a deeper difficulty in ML is the economy of attention. The hundreds of papers released on arXiv in ML each day mean that a reader needs to resort to heuristics to keep up: stuff like trusting a recommender system, only reading famous authors, or scanning for buzzwords.

1 year ago 6 1 2 0

Sarah Paine is incredible

1 year ago 0 0 0 0

Given what's going on in the world, I think it's time to reread Brave New World

1 year ago 1 0 0 0
Post image

Example: pre-train (reward-free) to map temporal distances into distances in latent space, and then fine-tune: map these through a dot product with a latent task description to get a reward function.
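
A rough sketch of that recipe (the encoder `phi`, the toy weights, and `z_task` are my own placeholders, not taken from the papers below):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # toy linear state encoder (illustrative)

def phi(s):
    """Encode a state vector into the latent space."""
    return W @ s

def pretrain_loss(s_i, s_j, dt):
    """Reward-free pre-training: latent distance between two states should
    match the temporal distance dt (number of steps) between them."""
    return (np.linalg.norm(phi(s_i) - phi(s_j)) - dt) ** 2

def reward(s, z_task):
    """Fine-tuning: map the latent to a scalar reward via a dot product
    with a latent task description z_task."""
    return float(phi(s) @ z_task)

s0, s5 = rng.normal(size=4), rng.normal(size=4)
print(pretrain_loss(s0, s5, dt=5))            # driven toward 0 in pre-training
print(reward(s5, z_task=rng.normal(size=8)))  # task-conditioned reward
```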

A couple of refs:

openreview.net/forum?id=YGh...
arxiv.org/abs/2110.02719
arxiv.org/abs/2110.15191

1 year ago 4 2 1 0

I know exactly what you mean. Especially for us academic-related folks, our recommendation bubble gets ultra tight. My recommendation is to look at some of the "highly followed" topics, which will give a more norm-y feed. But truly, BlueSky needs "Trending"

1 year ago 4 0 0 0

Depending on precision, that is a crazy price for two high-quality 6-DoF robot arms, to say nothing of them being attached as one torso. If the price holds when people start building it, you can be sure I'll build one. The Rethink Baxter is a lesson here: cumulative error from backlash will be the important thing.

1 year ago 3 0 1 0

agreed

1 year ago 0 0 0 0

very exciting!

1 year ago 2 0 0 0
Video

$14k open-source humanoid robot upper torso. Writing with a pen on a notebook that you're holding is an impressively challenging task! Also comes with an open, modular Python software stack for robot control and planning.

openpyro-a1.github.io

1 year ago 58 14 3 3

Hiring researchers and engineers for a stealth applied-research company with a focus on RL x foundation models. Folks already on the team are leading RL / learning researchers. If you think you'd be good at the research needed to get things working in practice, email me.

1 year ago 65 11 2 2

Raises the question: at what point is multi-task training implicit meta-learning? @chelseafinn.bsky.social

1 year ago 0 0 1 0
Post image
1 year ago 1 0 0 0
Preview
AI pioneers who channeled 'hedonistic' machines win computer science's top prize
Teaching machines in the way that animal trainers mold the behavior of dogs or horses has been an important method for developing artificial intelligence, and one that was recognized Wednesday with the...

Congrats Andrew and Rich, well deserved!! apnews.com/article/turi...

1 year ago 6 3 0 0

One reason to be intolerant of misleading hype in tech and science is that tolerating the small lies and deception is how you get tolerance of big lies

1 year ago 185 27 4 0

super excited to try this out

1 year ago 1 0 0 0
Preview
An unexpected RL Renaissance

Trying to tell the story behind this explosion of research we are in. An unexpected RL Renaissance.
New talk! Forecasting the Alpaca moment for reasoning models and why the new style of RL training is a far bigger deal than the emergence of RLHF.
YouTube: https://buff.ly/41bVRPp

1 year ago 64 11 3 2

Easier installation, faster PPO script, new tutorials. The team has put in so much work and I'm excited for y'all to try it.
github.com/Emerge-Lab/g...

1 year ago 29 2 1 0
Preview
AI Search: The Bitter-er Lesson | Notion
What if we could start automating AI research today? What if we didn't have to wait for a 2030 supercluster to cure cancer? What if ASI was in the room with us already?

Incredibly cool article. Why, in spite of all the hype about the scale of learning, we shouldn't forget the second half of Sutton's Bitter Lesson: search scales too, and often better.
yellow-apartment-148.notion.site/AI-Search-Th...
(h/t klowrey)
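
As a toy illustration of that second half (everything here is a made-up stand-in, not the article's method): with a fixed proposer and a verifier, simply scaling the search budget keeps improving the best candidate found.

```python
import random

def propose():
    # Stand-in for sampling one candidate from a fixed model.
    return random.gauss(0.0, 1.0)

def score(x):
    # Stand-in for a verifier / reward model; the optimum is at x = 2.0.
    return -abs(x - 2.0)

def best_of_n(n):
    # Minimal test-time search: sample n candidates, keep the best scorer.
    return max((propose() for _ in range(n)), key=score)

for n in (1, 10, 100, 1000):  # more search budget -> better best candidate
    print(f"budget={n:5d}  best score={score(best_of_n(n)):.3f}")
```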

1 year ago 2 0 0 0

"peter thiel backed" ๐Ÿ˜‚๐Ÿ˜‚๐Ÿ˜‚ ded

1 year ago 4 0 0 0