Posts by Gabe Grand

Paper + code + interactive demos: gabegrand.github.io/battleship ⚓️🎯

Special shoutout to @valeriopepe.bsky.social (co-first author), who is super talented and currently on the PhD job market!

Thanks to Valerio Pepe, Josh Tenenbaum, and Jacob Andreas for long-horizon collaboration and planning: this line of Battleship work has been *2 years* in the making!

Bottom line: The future of AI-driven discovery isn't just bigger models—it's smarter inference. By combining LMs with rational planning strategies, we can build agents that ask better questions, make better decisions, and collaborate effectively with humans.

Why does this matter? Discovery-driven AI (scientific experiments, theorem proving, drug discovery) requires hitting needles in combinatorially vast haystacks. If we want agents that explore rationally, we need to go beyond prompting.

Key takeaway: Current LMs aren’t rational information seekers: they struggle to ground answers in context, generate informative queries, and balance exploration vs. exploitation. But Bayesian inference at test time can dramatically close these gaps—efficiently.

Does this generalize? YES. We replicated on "Guess Who?" from TextArena and saw similar gains: GPT-4o (61.7% → 90.0%), Llama-4-Scout (30.0% → 72.4%). The framework works across information-seeking domains with combinatorial hypothesis spaces.

Deciding when to explore vs. act is also key. Skilled players (humans + GPT-5) spread their questions out over the course of the game. Weak LMs spam all 15 questions upfront. The key isn't asking MORE—it's asking BETTER questions at the RIGHT time. Quality > quantity.

Here's the kicker: asking high-EIG questions alone doesn't guarantee wins. Weaker models struggle to convert information into good moves. Bayes-M—which explicitly marginalizes over beliefs—is crucial for translating questions into action.
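
A rough illustration of what that marginalization buys: estimate the probability of a hit at each tile by averaging over posterior board samples, then shoot the argmax. The sampler and array conventions below are placeholders of mine, not the paper's API.

```python
import numpy as np

def bayes_m_move(posterior_boards, revealed_mask):
    """Shoot the unrevealed tile with the highest marginal hit probability.

    posterior_boards: (n, H, W) array of 0/1 board samples consistent with
    all shots and answers so far (1 = ship cell, 0 = water).
    revealed_mask: (H, W) boolean array marking tiles already shot.
    """
    p_hit = posterior_boards.mean(axis=0)          # marginal P(ship) per tile
    p_hit = np.where(revealed_mask, -1.0, p_hit)   # never re-shoot a tile
    return np.unravel_index(np.argmax(p_hit), p_hit.shape)
```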

Our approach leverages inference scaling to enable models to ask more informative questions. Bayes-Q boosts EIG by up to 0.227 bits (94.2% of the theoretical ceiling) and virtually eliminates redundant questions (18.5% → 0.2% for Llama-4-Scout).
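
For intuition, the EIG of a yes/no question can be estimated by Monte Carlo over posterior board samples. A minimal sketch, assuming noiseless answers and representing the question as an executable predicate (both assumptions are mine):

```python
from math import log2

def eig_yes_no(question_fn, posterior_boards):
    """EIG of a yes/no question under the current board posterior.

    question_fn: board -> bool, the question as an executable predicate.
    posterior_boards: board hypotheses sampled consistent with all evidence.
    With noiseless answers, EIG reduces to the entropy of the predicted
    answer distribution.
    """
    p_yes = sum(map(question_fn, posterior_boards)) / len(posterior_boards)
    if p_yes in (0.0, 1.0):
        return 0.0  # answer already certain => question is redundant
    return -(p_yes * log2(p_yes) + (1 - p_yes) * log2(1 - p_yes))
```

Under this scoring, a redundant question (one whose answer the posterior already determines) is worth exactly 0 bits, and a single yes/no answer is worth at most 1 bit.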

In head-to-head comparisons, both GPT-4o and Llama-4-Scout now beat GPT-5 while costing 2.8x and 99.7x less, respectively.

With all three Bayesian components (+Bayes-QMD), Llama-4-Scout jumps from near-random guessing (0.367 F1) to super-human level (0.764 F1). GPT-4o sees similar gains (0.450 → 0.782 F1). The deltas are really striking.

We developed three Bayesian strategies inspired by Bayesian Experimental Design (BED):
❓ Question (Bayes-Q): Optimizes expected info gain (EIG)
🎯 Move (Bayes-M): Maximizes hit probability
⚖️ Decision (Bayes-D): Decides when to ask vs. shoot using one-step lookahead (toy sketch below)
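
Bayes-Q and Bayes-M are sketched above; here's a toy version of the Bayes-D choice. Treating a shot as an experiment in its own right, a one-step lookahead can compare the best question's EIG against the information a hit/miss observation would yield. This thresholding is my simplification, not necessarily the paper's exact rule:

```python
from math import log2

def entropy_bits(p):
    """Entropy (in bits) of a Bernoulli outcome with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def bayes_d_decide(best_question_eig, best_tile_hit_prob, questions_left):
    """One-step lookahead: ask only if it beats the best shot's info value."""
    if questions_left == 0:
        return "shoot"
    shot_eig = entropy_bits(best_tile_hit_prob)  # a shot is informative too
    return "ask" if best_question_eig > shot_eig else "shoot"
```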

In our second set of experiments, we turned to the challenge of building rational question-asking agents to play the Captain role.

We find that having models write Python functions to answer questions boosts accuracy by +14.7 percentage points (absolute), and complements CoT reasoning.

One useful trick to improve answering accuracy is to use code generation. Code grounds reasoning in executable logic, not just vibes.
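
Concretely, the trick is to have the model emit an executable predicate rather than a raw yes/no. A minimal sketch, where `lm_generate` and the board encoding are illustrative placeholders rather than the paper's interface:

```python
def answer_via_code(lm_generate, question, true_board):
    """Spotter answers by writing and executing a Python predicate.

    lm_generate: callable prompt -> str, assumed to return Python source.
    true_board: 2D list; 0 = water, positive ints = ship ids (my encoding).
    """
    prompt = (
        "Write a Python function answer(board) -> bool.\n"
        "board is a 2D list where 0 is water and positive ints are ships.\n"
        f"Question: {question}"
    )
    namespace = {}
    exec(lm_generate(prompt), namespace)  # NB: sandbox untrusted code in practice
    return bool(namespace["answer"](true_board))
```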

Many LMs really struggle with questions that require grounding answers in the board and dialogue context. GPT-4o drops from 72.8% → 60.4% accuracy on context-dependent questions. Llama-4-Scout: 68.0% → 54.0%. Humans? Basically flat (92.8% vs 91.9%).

Overall, humans are really reliable at answering questions on BattleshipQA (92.5% accuracy). In contrast, LM accuracy ranges widely—from near-random (52.5%, GPT-4o-mini) to human-level (92.8%, o3-mini). But there's a catch…

In our first experiment, we looked at QA accuracy in the Spotter role – this is an important sanity check of how well players (humans & agents) can understand and reason about the game state.

To understand how people strategize & collaborate, we ran a two-player synchronous human study (N=42) and collected full action trajectories and chat dialogues. Our “BattleshipQA” dataset provides a rich, multimodal benchmark for comparing human and agent behavior.

We created “Collaborative Battleship”—a two-player game where a Captain (who only sees a partial board) must balance asking questions vs. taking shots, while a Spotter (who sees everything) can only answer Yes/No. It's deceptively simple but cognitively demanding.
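
For readers who think in code, here's a bare-bones skeleton of the game state as described (the field names and method signatures are my own reading of the rules, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class CollabBattleship:
    """Minimal state for the two-player game described above."""
    true_board: list         # full grid; only the Spotter sees this
    revealed: set = field(default_factory=set)    # (row, col) tiles shot so far
    dialogue: list = field(default_factory=list)  # (question, yes/no) history
    questions_left: int = 15                      # question budget, per the thread

    def ask(self, question, spotter_answer):
        """Captain spends a question; the Spotter may only reply yes/no."""
        self.questions_left -= 1
        self.dialogue.append((question, bool(spotter_answer)))

    def shoot(self, row, col):
        """Captain fires; the tile is revealed as a hit (ship) or miss (water)."""
        self.revealed.add((row, col))
        return self.true_board[row][col] != 0
```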

But LMs are trained to *answer* queries, not *ask* them. Can they learn to explore intelligently?

Many high-stakes AI applications require asking data-driven questions—think scientific discovery, medical diagnosis, or drug development.

Do AI agents ask good questions? We built “Collaborative Battleship” to find out—and discovered that weaker LMs + Bayesian inference can beat GPT-5 at 1% of the cost.

Paper, code & demos: gabegrand.github.io/battleship

Here's what we learned about building rational information-seeking agents... 🧵🔽

Hello! Late to the party, but still excited to join this brave blue world 🌎🦋
