The trick is using the AI to write you smaller versions of the experiments that you can iterate on faster
Posts by Jon Barron
I would just ask them about their methodology and their interpretation in the normal human way that we currently do. It's very easy to tell the difference between a PhD student who is using an LLM to do science and someone who is just along for the ride as the LLM does science for them.
They probably could if I was on their committee, which happens sometimes.
yeah I guess definitionally if someone makes new knowledge, you could say that they are learning the knowledge that they make by virtue of it being in their head? But that's not usually how people talk about new discoveries or findings being made.
ah I guess my program fast-tracked me
My understanding was that the goal of doing a PhD was to make new knowledge
yeah that was an odd exchange, I didn't expect that concern. I guess I don't really think of a PhD as being primarily about the student learning stuff, but instead about the student accomplishing stuff (or learning how to accomplish stuff).
Radiance Meshes for Volumetric Reconstruction
Alexander Mai, Trevor Hedstrom, @grgkopanas.bsky.social, Janne Kontkanen, Falko Kuester, @jonbarron.bsky.social
tl;dr: Delaunay tetrahedralization->constant density and linear color radiance->radiance mesh->radiance field
arxiv.org/abs/2512.04076
This, combined with most fields outside of computer science being overly concerned with maintaining cultural and social solidarity (especially against encroaching technology brothers) seems like the most likely explanation.
Is basic image understanding solved in today's SOTA VLMs? Not quite.
We present VisualOverload, a VQA benchmark testing simple vision skills (like counting & OCR) in dense scenes. Even the best model (o3) only scores 19.8% on our hardest split.
Here's what I've been working on for the past year. This is SkyTour, a 3D exterior tour utilizing Gaussian Splats. The UX is in the modeling of the "flight path." I led the prototyping team that built the first POC. I was the sole designer and researcher on the project, one of the 1st inventors.
Ah cool, then why is that last bit true?
I don't see how the last sentence follows logically from the two prior sentences.
Be sure to do a dedication where you thank a ton of people; it's kind, plus it feels good.
Besides that I'd just do a staple job of your papers. Doing new stuff in a thesis is usually a mistake, unless you later submit it as a paper or post it online somewhere. Nobody reads past the dedication.
This thread rules
Announcing our $13M funding round to build the next generation of AI: Spatial Foundation Models that can generate entire 3D environments anchored in space & time.
Interested? Join our world-class team:
spaitial.ai
youtu.be/FiGX82RUz8U
Now available: Watch the recording of Aaron Hertzmann's talk, "Can Computers Create Art?" www.youtube.com/watch?v=40CB...
@uoftartsci.bsky.social
Here's a recording of my 3DV keynote from a couple weeks ago. If you're already familiar with my research, I recommend skipping to ~22 minutes in where I get to the fun stuff (whether or not 3D has been bitter-lesson'ed by video generation models)
www.youtube.com/watch?v=hFlF...
www.instagram.com/mrtoledano/ for anyone else who wanted to see more of this artist's work, really cool stuff!
yeah those Fisher kernel models were surprisingly gnarly towards the end of their run.
yep absolutely. Super hard to do, but absolutely the best approach if it works.
If you want you can see the models that AlexNet beat in the 2012 ImageNet competition, they were quite huge, here's one: www.image-net.org/static_files.... But I think the better thought experiment is to imagine how large a shallow model would have to be to match AlexNet's capacity (very very huge)
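A rough back-of-envelope sketch of that thought experiment (all numbers here are my own illustrative assumptions, not from the post): even one densely connected hidden layer on an ImageNet-sized input dwarfs AlexNet's ~60M parameters, because a shallow dense model can't reuse weights across image locations the way convolutions do.

```python
# Hypothetical shallow baseline: a single dense hidden layer on a
# 224x224x3 ImageNet image, with an arbitrarily chosen hidden width.
input_dim = 224 * 224 * 3   # ~150k input values
hidden = 4096               # one wide hidden layer (assumed width)
classes = 1000              # ImageNet class count

# Weight count for input->hidden plus hidden->classes (biases ignored).
shallow_params = input_dim * hidden + hidden * classes
print(f"{shallow_params / 1e6:.0f}M parameters")  # ~621M, vs ~60M for AlexNet
```

And that single layer still has far less effective capacity than a deep convolutional net of the same size, which is the point of the thought experiment.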
One pattern I like (used in DreamFusion and CAT3D) is to "go slow to go fast" --- generate something small and slow to harness all that AI goodness, and then bake that 3D generation into something that renders fast. Moving along this speed/size continuum is a powerful tool.
It makes sense that radiance fields trended towards speed --- real-time performance is paramount in 3D graphics. But what we've seen in AI suggests that magical things can happen if you forgo speed and embrace compression. What else is in that lower left corner of this graph?
And this gets a bit hand-wavy, but NLP also started with shallow+fast+big n-gram models, then moved to parse trees etc, and then on to transformers. And yes, I know, transformers aren't actually small, but they are insanely compressed! "Compression is intelligence", as they say.
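To make the "shallow+fast+big" point concrete, here's a minimal toy bigram model (my own illustrative sketch, not anything from the thread): all the "knowledge" lives in a lookup table of counts that grows with every distinct context seen, while the per-entry computation stays trivially shallow.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count next-word frequencies per context word.

    The table grows with the data --- big and fast, but zero compression.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    """Return the most frequent continuation of `prev`, or None."""
    followers = counts.get(prev)
    return max(followers, key=followers.get) if followers else None

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram(corpus)
print(predict(model, "the"))  # "cat" (seen twice after "the", vs "dog" once)
```

A transformer trained on the same data would instead squeeze those co-occurrence statistics into shared weights, which is the compression being gestured at.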
In fact, it's the *opposite* of what we saw in object recognition. There we started with shallow+fast+big models like mixtures of Gaussians on color, then moved to more compact and hierarchical models using trees and features, and finally to highly compressed CNNs and ViTs.
Let's plot the trajectory of these three generations, with speed on the x-axis and model size on the y-axis. Over time, we've been steadily moving to bigger and faster models, up and to the right. This is sensible, but it's not the trend that other AI fields have been on...
Generation three swapped out those voxel grids for a bag of particles, with 3DGS getting the most adoption (shout out to 2021's pulsar though). These models are larger than grids, and can be tricky to optimize, but the upside for rendering speed is so huge that it's worth it.
The second generation was all about swapping out MLPs for a giant voxel grid of some kind, usually with some hierarchy/aliasing (NGP) or low-rank (TensoRF) trick for dealing with OOMs. These grids are much bigger than MLPs, but they're easy to train and fast to render.