"...it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking." Hannah Arendt, 'The Human Condition', 1958.
Posts by Tzu-Mao Li
@wjakob.bsky.social used to do this for his papers by releasing "extended tech reports", e.g.,
rgl.epfl.ch/publications...
rgl.epfl.ch/publications...
rgl.epfl.ch/publications...
As a nerd on these topics, I find reading these detailed tech reports a lot more satisfying!
Yes! I still sometimes fall into this today. It's hard especially when you care a lot about what you did.
I have procrastinated on writing so much that I wrote an entire document on writing tips: cseweb.ucsd.edu/~tzli/writin...
Probably not much is new but I find I still need to repeat the same things to my students regularly. Will update this document over time hopefully.
"AI has transformed programming forever", but what does that even mean? Sublime Text also transformed the way I and many others program forever when it came out (at much lower cost), but I don't see the same amount of excitement around it.
Sorry, have to rant, back to writing proposals. : (
cseweb.ucsd.edu/~tzli/novelt...
I gave an internal talk at UCSD last year regarding "novelty" in computer science research. In it I "debunked" some of the myths people seem to have about what counts as good computer science research these days. People seemed to like it, so I thought I should share.
The idea is to use line segments to sample the Dirac delta at the decision boundaries (via the Crofton formula), along with a piecewise program transformation to detect whether the line intersects a boundary. For low-dimensional integrals, this marks the first time we can robustly sample general discontinuities.
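The core trick can be sketched in 1D. This toy is my own illustration, not the paper's algorithm: the function `grad_estimate`, the step-function integrand, and the segment length are all made up here. The derivative of ∫₀¹ H(x − θ) dx with respect to θ is a Dirac delta that point samples miss with probability 1, but a sampled line segment straddles it with probability proportional to its length:

```python
import random

def grad_estimate(theta, seg_len=0.05, n=200_000, seed=0):
    # d/dtheta of  integral_0^1 H(x - theta) dx  is exactly -1,
    # but the integrand's derivative is a Dirac delta at x = theta:
    # point samples never hit it. Instead, sample a segment
    # [a, a + seg_len]; it crosses the boundary with probability
    # seg_len / (1 - seg_len), and the boundary location is found
    # analytically (here: x = theta), in the spirit of a piecewise
    # program transformation.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = rng.uniform(0.0, 1.0 - seg_len)   # segment start
        if a <= theta <= a + seg_len:         # segment crosses boundary
            total += -(1.0 - seg_len) / seg_len  # delta weight / density
    return total / n
```

For θ well inside the domain, this converges to the exact derivative −1, whereas pointwise autodiff of the step function would report 0 everywhere.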
Check out our recent SIGGRAPH Asia paper on differentiating programs that integrate over discontinuous functions (yes, again), which won the best paper award! yashbelhe.github.io/asd/index.html
The first author Yash is looking for a job yashbelhe.github.io so talk to us if you're interested.
I was surprised most of my students have not heard of Rahimi's very influential talk (it feels like everything just vaporizes in a few years these days), so I thought I should share it:
www.youtube.com/watch?v=x7ps...
Depends on what you mean by "true layering", but Unity docs.unity3d.com/Packages/com... has probably implemented Belcour's layered material model belcour.github.io/blog/researc...
"It also takes into account light interactions between two vertically stacked physical layers"
I love everything from Michael Gharbi! mgharbi.com
The papers are:
groups.csail.mit.edu/graphics/xfo...
groups.csail.mit.edu/graphics/hdr...
likesum.github.io/bpn/
tamarott.github.io/ASAPNet_web/
The ones I didn't list here are good too. ; )
Thanks a lot for the warm comments!
I shared my thoughts on what roles "classical graphics" should play in the future, advertised some of our research projects, and discussed my views on the field a bit.
Take a look if you are interested, or if you are having an existential crisis about your graphics-related research or job!
I gave a talk at Pacific Graphics 2025 on the topic of "Classical Computer Graphics in the Age of Generative AI". I've uploaded the recording to Youtube today.
www.youtube.com/watch?v=Vyci...
It's amazing how simple the basic jackknife estimator is (Eq 4) and how the cosine comes out as a jump scare. XD
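For readers who haven't seen it, here is a generic basic jackknife, not the paper's Eq. 4 specifically; the function names and the variance example below are my own illustration. It cancels the O(1/n) bias of a plug-in statistic using leave-one-out recomputations:

```python
def jackknife(samples, stat):
    # Combine the full-sample statistic with the mean of the
    # leave-one-out statistics to cancel the O(1/n) bias term.
    n = len(samples)
    full = stat(samples)
    loo_mean = sum(stat(samples[:i] + samples[i + 1:]) for i in range(n)) / n
    return n * full - (n - 1) * loo_mean

def plugin_var(xs):
    # Biased plug-in variance (divides by n instead of n - 1).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

Applied to the plug-in variance, the jackknife recovers the unbiased (divide-by-(n−1)) sample variance exactly.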
Yes! www.youtube.com/watch?v=48tv... (sorry I was slow : ( )
🎄 Introducing our paper A Generalizable Light Transport 3D Embedding for Global Illumination lnkd.in/gQUMSAyV .
🙈 Just as Transformers learn long-range relationships between words or pixels, our new paper shows they can also learn how light interacts and bounces around a 3D scene.
Today we're unveiling our newest project: OpenQMC! Developed & contributed to the Foundation by @framestore.bsky.social, OpenQMC aims to improve the fidelity & efficiency of rendering photoreal moving images for film, TV, gaming & advertising. Read more on our blog:
www.aswf.io/blog/openqmc...
Amazing paper. Can't believe I haven't read it. Thanks a lot for sharing! (And yes I agree that the Nyquist limit is likely too loose and we can do so much better!)
(This was inspired by the debate over whether the Pixel camera's 100x zoom is hallucination or not, but it seems to apply to everything in the "AI" world right now.)
My thoughts got stuck at the point above, so I decided to make this a bluesky post. ; )
To move forward, either we move back to the "old ways" (I actually prefer this), or we should have a better visualization to indicate what things have higher uncertainty and make it clear to the audience. Probably a lot of people are working on this, but uncertainty quantification is a hard problem.
We used to have a clear relation between sampling rates and reconstruction error. Now that has gone away and anything goes. In some sense, we have traded away predictability for lower reconstruction error (perhaps because predictability is harder to benchmark). It almost feels like a form of no-free-lunch.
Anything outside the Nyquist-Shannon limit is "hallucination". It used to have cooler names: aliasing and noise. I think the key difference between the two is that humans are good at catching aliasing/noise (and even at anti-aliasing), but not good at noticing hallucination. So "hallucination" feels like cheating.
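The "clear relation" here is the standard frequency-folding rule: a tone above the Nyquist limit reappears as a *predictable* alias. A toy sketch (the function name is mine; the folding formula is textbook signal processing, not from the post):

```python
def alias_freq(signal_freq, sample_rate):
    # A real-valued tone sampled at sample_rate shows up at the
    # frequency obtained by folding signal_freq into the Nyquist
    # band [0, sample_rate / 2].
    f = signal_freq % sample_rate
    return f if f <= sample_rate / 2.0 else sample_rate - f
```

So a 7 Hz tone sampled at 10 Hz is indistinguishable from a 3 Hz tone: the error is known in advance, unlike a learned model's hallucination.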
My lab will be recruiting at all levels. PhD students, postdocs, and a research engineering position (worldwide for PhD/postdoc, EU candidates only for the engineering position). If you're at SIGGRAPH, I'd love to talk to you if you are interested in any of these.
I've started to ask these questions in talks just so I can collect answers I can use myself in the future. ; )
"Hallucinations on the future of real-time rendering", High Performance Graphics 2025 keynote: c0de517e.com/023_hpg.htm
Most interesting thread I've read recently! I assume you can use this to build a BSP-tree-like data structure to render a lot of quadratic Bézier strokes?