In between Breakthrough rounds in Battlefield 6 (which is excellent!), I managed to write about procrastination. It's not that important (leveling up the M39 is), but perhaps you want to check it out? Or not---after all, I'm sure you have less important things to do. 😜
medium.com/@niklaselmqv...
Posts by Niklas Elmqvist
In which I quote Kelly Kapoor from The Office as a way to illustrate reflections on my 20 years of the CHI conference and obsequiously convert them into advice for new researchers looking to make their mark on the HCI community. Plant your tree today! niklaselmqvist.medium.com/the-second-b...
Random Friday night thought: the way we prompt LLMs to give the illusion of a conversation by feeding it the entire chat history every time is essentially the plot of the movie The Notebook (2004; Ryan Gosling, Rachel McAdams; originally a book by Nicholas Sparks). 🤔
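For the curious: the "Notebook" effect can be sketched in a few lines. This is a hypothetical stand-in (`fake_llm` is not a real API); the point is only that the model is stateless, so every turn re-sends the entire history.

```python
# Minimal sketch of a "conversation" with a stateless LLM:
# the model has no memory, so each turn includes the full transcript
# (like re-reading the whole notebook at every visit).

def fake_llm(messages):
    # A real chat-completions call would go here; the model only
    # "remembers" whatever is in `messages` right now.
    return f"(reply based on {len(messages)} messages of context)"

history = []

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)  # full history, every single turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hi!")
chat("What did I just say?")  # answerable only because turn 1 was re-sent
```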
New post: "When Everyone Is Super" on what coding agents mean for CS research.
Everyone got the same superpower. The hard parts of research (questions, methods, rigor) just became relatively more important. The easy part (code) just became relatively less so.
medium.com/@niklaselmqv...
Help us figure out the impact of generative AI on the visualization community! Is it the same old, an existential threat to the field, or just another nifty tool in a long line of tools? We need your input whether you're an academic or a practitioner. Do you love it? Do you hate it? Let us know!
Friendly PSA: If you are an associate editor for a journal and you decide to make a decision on a paper you have enlisted me to review *before* my review deadline, please think again. I know how hard it is to find reviewers, but you really aren't helping matters by wasting my time.
New blog post on what happens when a paper revision requires modifying code written by a graduated student. Spoiler: prototyping new visualization techniques for teaching, ideation, or even publication is now trivial. "When Code Is Not The Issue": niklaselmqvist.medium.com/when-code-is...
Reviewers want perfectly controlled experiments AND real-world ecological validity. But you can't have both, at least not in one study.
New post on the validity tradeoff in HCI and what to do about it: niklaselmqvist.medium.com/you-cant-hav...
(Spoiler: run two studies that complement each other.)
The big LLMs are now promising ‘Ph.D.-level’ performance. If you are an academic and using them in lieu of your own writing, thinking, and reviewing, you are implicitly agreeing with this claim. Sure, you can use LLMs to augment your own abilities, but make sure you are not supplanting them.
CHI papers are getting longer and longer and the presentations are becoming shorter and shorter. Where does it stop?
Mental note: the month of November is basically entirely spent on CHI revisions (if you have the fortune of having them). #chi2026
(This may sound bitter, but it really isn’t. It’s just a dispassionate observation that a CHI project is now basically a full-year enterprise.)
It is frankly ridiculous how much time is spent manually marking up LaTeX to show how a paper was revised in the CHI R&R process. Sure, some of this is about clear communication with reviewers. On the other hand, we know how to create automatic diffs of PDFs: why not just use this? #chi2026
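The idea above can be sketched with nothing but the standard library. This is not latexdiff (which is the established tool for marked-up LaTeX diffs); it is just a toy example of automatic diffing on two versions of (hypothetical) paper text:

```python
# Sketch of automatic revision diffing: given text from two versions
# of a paper, produce a unified diff instead of marking changes by hand.
import difflib

original = [
    "We evaluate the technique in a controlled study.",
    "Participants completed ten trials each.",
]
revised = [
    "We evaluate the technique in a controlled study.",
    "Participants completed twelve trials each,",
    "as requested by reviewer 2.",
]

diff = list(difflib.unified_diff(
    original, revised,
    fromfile="submission", tofile="revision", lineterm=""))
print("\n".join(diff))
```

For actual LaTeX sources, `latexdiff old.tex new.tex > diff.tex` followed by a normal compile yields a PDF with changes marked up automatically.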
VIS 2025 wrapped! Our OPC team introduced two experiments: student reviewers (20% uptake, successful mentorship model) and public peer reviews (16 papers, 52 reviews on OSF). Both continue next year. Thanks to our IPC and the incredible Petra Specht. #ieeevis 🎢 ieeevis.org/year/2025/bl...
Ever wondered why your advisor keeps suggesting the same ideas? I'm a deterministic state machine with no memory—and I suspect many professors are too!
I just published a simple suggestion for Ph.D. students to manage their advisor's memory 🧠
niklaselmqvist.medium.com/the-state-ma...
Pete Butcher wearing a Quest 3 headset and holding two controllers to interact with data in DashSpace while his live view is shown on the slide in the background. The live view shows a stream graph, visualization components, and photos in 3D.
This picture was too good to not post. @pwsbutcher.bsky.social white-knuckled a live demo during the Q&A at the end of the talk. Legend! 🦄
Marcel in a blue hoodie and Pete in a striped sweater getting ready to present DashSpace at IEEE VIS 2025 in Vienna.
Marcel in a blue hoodie at the podium presenting DashSpace: the slide shows the paper title: “DashSpace: A Live Collaborative Platform for Immersive and Ubiquitous Analytics”.
We’re very pleased to present DashSpace, a web-first live and collaborative platform for immersive analytics using WebXR: github.com/Webstrates/D...
Here is @maski89.bsky.social ready to give the talk at #ieeevis 2025 and @pwsbutcher.bsky.social on deck for a demo.
PDF: pure.au.dk/ws/portalfil...
Johannes at the podium getting ready to give his talk with a title slide showing “Eye of the Beholder: Towards Measuring Visualization Complexity” and the authors: Johannes Ellemose and Niklas Elmqvist.
Here we have Johannes Ellemose giving his #ieeevis 2025 talk on “Eye of the Beholder 🧿: Towards Measuring Visualization Complexity”. Is there something intrinsic about a visualization that can be used to characterize its complexity? Johannes connected several studies to investigate this.
Sungbok Shin in a blue hoodie standing at the podium giving his talk at IEEE VIS 2025 in Vienna, Austria. The slide in the background shows the title of his talk: “Visualizationary: Automating Design Feedback for Visualization Designers using LLMs”. Also lists co-authors Sanghyun Hong and Niklas Elmqvist.
Sungbok Shin is now presenting Visualizationary (needs some practice to say; think “revolutionary”) as an automated LLM feedback tool for visualization design. The tool gives feedback for novices to improve their visualizations over iterative design processes. #ieeevis PDF: arxiv.org/abs/2409.13109
Title slide showing Arnold Schwarzenegger as the Terminator (T-800) holding his hand out to the audience and with a speech bubble saying “Immersive analytics, come with me if you want to live.” The slide also has a picture of Niklas Elmqvist.
I just enlisted the help of Arnold from Austria 🇦🇹 to bring home my controversial message that immersive analytics has a Unity problem at the “DataVis in XR” panel. Our CG&A paper: pure.bangor.ac.uk/ws/portalfil...
Thx @pwsbutcher.bsky.social, @ritsosp.bsky.social, and @maski89.bsky.social for help!
Sungbok Shin wearing a blue hoodie at the speaker podium presenting his talk; the slide shows the name of his paper (“Drillboards…”).
Wow—Sungbok Shin points out that our work on Drillboards was started in 2020 (deepest COVID). Drillboards is a dynamic dashboard with authoring and reader modes where you can create chart hierarchies for varying detail. #ieeevis
ArXiv: arxiv.org/abs/2410.12744
Demo (laptop): drillboards.pages.dev
Dylan and Areen at the podium in front of a slide with a participant quote that reads “I've never seen a graph, I've never seen anything visually. I may be frank—graphics mean nothing to me. I have no context, so I would dismiss them out of hand if they aren't built for me... If they're built from a visual place, at least for me, they mean nothing, they have no resonance with me whatsoever."
And the authors are reading my mind: one of the participants who was born blind did not see the need to base the tactile representation on a visualization. arxiv.org/abs/2508.14289
Dylan Cashman and Areen Khalaila at the speaker podium at IEEE VIS 2025 presenting their best paper on tactile representations. The slide says “Should a tactile representation replicate its visual counterpart to be effective?”
Important question in accessible visualization. Our interviews (doi.ieeecomputersociety.org/10.1109/TVCG...) tell us that basing an accessible representation on the visualization is good for (1) working with sighted people, and (2) people who were not born blind or low-vision (BLV). Dylan Cashman and Areen Khalaila #ieeevis
Selfie of the three IEEE VIS 2025 overall papers chairs (from left: Niklas Elmqvist, Melanie Tory, and Holger Theisel) at the speaker podium with the entire ballroom audience in the background while opening the conference.
It was our great honor to serve as #ieeevis 2025 overall papers chairs. The conference is now open and we can’t wait for you all to enjoy the program. Thanks to Petra Specht for her tireless work as assistant to the OPCs! (And good luck to Melanie, Alex Endert, and @tisenberg.bsky.social for 2026!)
Johanna Schmidt at the podium with a screen showing “Welcome to Vienna!” and 1,114 (the number of participants).
Johanna Schmidt from VRVis and TU Wien opening #ieeevis 2025 in Vienna! More than 1,100 attendees on location.
Leo Liu at the speaker podium presenting during the VISxGenAI workshop at IEEE VIS 2025 in Vienna, Austria.
Here’s @zcliu.bsky.social presenting “contextual dynamic explanations” (CoDEx) at the #VISxGenAI workshop at #ieeevis 2025. Go @hcil-umd.bsky.social and UMD! visxgenai.github.io
Anton at the speaker podium gesturing while presenting his multi-agent system.
Slide showing the report generation approach of nested reports, stories, and visualizations as well as an example on visualization publication data.
Brand new Ph.D. student Anton Wolter from my group at @csaudk.bsky.social presenting his team’s work on multi-agent data visualization and narrative generation at the #VISxGenAI workshop at #ieeevis; a runner-up in the VisAgent min-challenge at the workshop. Congratulations Anton and team!
Pete Butcher (right) and Panos Ritsos (left) wearing Quest Pro and Quest 3 headsets while standing in front of a projected image showing both of their augmented reality views using DashSpace.
@pwsbutcher.bsky.social and @ritsosp.bsky.social tag-teaming immersive data analysis at the DashSpace tutorial during the first day of #ieeevis 2025!
Danish/Viennese pastries in Vienna with the sign “small Danish pastry”
Even in Vienna it seems nobody is willing to take credit for what the Danes call “wienerbrød”. Circular reference ftw. #ieeevis
The #ieeevis 2025 OPCs, working under the direction of the VIS Steering Committee, have just released 52 anonymized peer reviews for 16 accepted papers to be published at VIS 2025. We hope each year will add to this repository. OSF link here: osf.io/s9j5b/ (download the spreadsheet directly)
New favorite use of LLMs: asking it to explain my own slides from last year back to me because I was too stupid/lazy to write proper speaker notes at the time.