I thought it was the other way around: time runs slower when traveling at high velocity, so a server in orbit should run fewer cycles than a server on Earth. Am I seeing this backwards?
Posts by Max Liani
It’s multi-modal :)
When you count that, the amount of non-verbal sensorial input is huge.
Thank you. I’ll write a blog post about the method.
Yes, it’s 12 bytes of indices and flags per line that store the context of the expansion loop: which node, which parent, which siblings, whether the tree node is expanded… stuff like this.
It’s a good point, I don’t have an implementation for multi-selection reparent and it doesn’t change selection
With that many nodes, it takes a noticeable fraction of a second to invalidate and rebuild the cache when the scene structure changes or a node is expanded/collapsed, changing the flattened list.
I had to create a flat cache for the lines drawn, which lets me jump straight to the visible band of TreeNode lines I need. I still rely on the nested structure, so it isn't a flat list... but a few hops in the cache reconstruct the nesting context to initialize ImGui::TreePush/TreePop.
Alright, I made some further improvements to the tree view, which now draws in constant time (with respect to the scene complexity). Here is a test with 16 million transform nodes viewable in the scroll area. Without screen recording, the whole app renders at 8kHz.
It’s probably the sensible thing to do. Thank you. I do support left-click on the first selection line, followed by shift-left-click to complete the selection over a contiguous range of lines.
Maybe shift-left-click-drag would be the natural thing for marquee.
I recall that in Maya left-click-drag is always select; for drag&drop one has to use middle-click-drag. I never liked that much either, it's not very laptop-friendly.
The typical implementation requires one to click-drag on empty space to start a selection marquee. Somehow that feels ok for a file explorer window, but I find it annoying and limiting in a scene hierarchy tree view.
I played a bit with the UX of interacting with the tree view. The common operations are selection and re-parent. I want to control both with the same gesture. So, if the drag&drop is mostly vertical -> reparent. If it widens to the side, it becomes marquee selection.
Yes, no?
dear imgui offers a convenient ImGuiListClipper to draw only a range of a long list. In this case I am not relying on that because my data structure is not a flat list. For widgets out of view, I call a cut down version of TreeNode I call TreeNodeSkip, and that feels good enough for now.
That's a test with 1 million lines in the tree view, which is likely far more than one can reasonably expect to interact with. I am happy my take on multi-selection works and feels zippy.
I am not sure why I am doing this, but I further optimized the scene tree view, as it was getting a bit slow and clunky to operate when I had many nodes expanded. It's not cheap; it's still the most expensive part of the program to draw.
I have had this in the background today. A mostly silent stream with the occasional com, making me jump from time to time 😆
It should also work with rectangular textures, as long as the pixel area of each mip is a power of 2, or with non-power-of-two sizes, as long as the data is padded to align to power-of-two boundaries.
Take your number & mask if your mip 0 is the 1x1 resolution, or number & ~mask if mip 0 is the high resolution. Multiply that by the pixel size in bytes.
If my mental calculations are correct you should get your offset.
I haven’t tried this, but if the texture is a power of 2, you can pregenerate a number which is the sum of the mip areas in pixels: 1+4+16+64… Then, from the resolution of the current mip level (say 8x8 = 64, which should be a single bit), subtract 1 to flip on all the bits below. Use that as the mask.
Here is how I sample this light:
I read your paper when you posted it a while ago; it’s great stuff!
A difference perhaps is that this is not a spherical rectangle; it’s closer to an orthogonal spherical crop. Those derivations on a different starting function are beyond my abilities.
It's not a uniform distribution. Well, it is relatively uniform if at least one of the angles is small. It creates 4 singularities when both angles approach π. But that is outside the intended use because there are easier ways to cover the entire hemisphere :D
Here is the result in render.
A long time ago in a galaxy far away (circa 2011) I came up with a distant rect light for my lighting crew. I can't remember how I did it, but I am trying to recreate it.
Here is a test for the light sample distribution in the hemispherical domain.
Sometimes you want the character to feel like it's moving through the environment; practical lights and indirect illumination cover that. Sometimes you need to highlight a feature and keep it stable -> distant lights.
What are distant lights good for?
Not for the sun; you get much better results with an HDR map. They are good for fill, rim, and modeling lights, especially when the characters move around in the sequence and you need to maintain shot-to-shot consistency.
After how many years? I added code for distant area lights. Some call it "directional". The control "scale compensation" is meant to preserve the light power (radiant flux), so as the emissive area increases, the intensity decreases.
It looked fun for a minute, but it turned sour quickly.
I don’t know what to do with those people. It’s not worth showing them, it only empowers them. Answering with patience is pointless (I tried, he didn’t reply).
I wish they’d find a better way to channel their eagerness.
Likely I could preallocate those resources. It would complicate some code, like the fallback if initialization fails. Some other denoisers won’t be able to tell how much memory they need until initialized.
Everything is better when workflow is simple and repeatable.
Thanks. A big GPU helps :)