TL;DR that you can easily share around:
If you have an NVIDIA GPU, use Vulkan.
If you have an AMD or Intel GPU, use OpenGL.
Here are some benchmarks of the new Vulkan backend introduced in #Minecraft 26.2-snapshot-1 and how it compares to OpenGL.
The results may be surprising to some of you 😁
Full article:
nemez.net/posts/202604...
Some Core 9 273PQE "BartlettLake-S" benchmarks.
CB R23: 30323 (default PL1=125W, PL2=253W - R23 finishes within PL2 duration)
CB 2026: 5550 (default PLs)
CB 2026: 6034 (PL1=PL2=253W)
Geekbench 6: browser.geekbench.com/v6/cpu/17286... (default PLs)
All with DDR4-3200, not DDR5 - CB 2026 does care!
Laptops with dGPUs are very hit or miss sadly… more often a miss tbh, exactly the stuff you're describing, though poor battery life is the most common issue.
Only dGPU laptop I ever had which was “well behaved” was my previous work laptop which was a high end Dell workstation.
(since my main social media is still twitter this probably means this account will be very inactive moving forward as I usually don't cross-post personal ramblings and such)
Well, here goes nothing, sometimes to move forward you have to leave some things behind.
From now on I will no longer be making chip annotations and will only be contributing to media in a limited capacity. Minecraft testing TBD, but on hold for now too.
Thank you all! 😉
And a Minecraft server test - 1.21.5 default settings, flying around in spectator mode at maximum speed - it's keeping up nicely.
Flying through new terrain: 65-90W (~70W typical)
Flying through already generated terrain: 43W
Some more unscientific ArrowLake tests:
TrueNAS 25.04.1 QuickSync in Jellyfin works out of the box. NPU is missing firmware (hopefully soon?).
Added 3 old (power hungry) SSDs for a test pool.
Idle: 38W at wall
Transcoding (HEVC to AV1): 65-70W
Serving the transcoded video: 40W
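If you want to poke at the same HEVC→AV1 QuickSync path outside Jellyfin, here's a rough sketch of the kind of ffmpeg invocation Jellyfin drives under the hood - assuming an ffmpeg build with QSV/oneVPL support; the filenames and bitrate are placeholders, not the exact Jellyfin settings.

```python
# Rough sketch: HEVC -> AV1 transcode on the Intel iGPU media engine via
# QSV. Assumes an ffmpeg build with oneVPL/QSV support; filenames and
# target bitrate are placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "qsv",                # decode on the iGPU
    "-hwaccel_output_format", "qsv",  # keep frames in GPU memory end to end
    "-c:v", "hevc_qsv",
    "-i", "input_hevc.mkv",
    "-c:v", "av1_qsv",                # AV1 encode on the media engine
    "-b:v", "6M",                     # placeholder bitrate
    "-c:a", "copy",
    "output_av1.mkv",
], check=True)
```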
Some quick power scaling data I collected.
Think I'll settle on a 100W power limit for the NAS.
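For reference, a package power limit like that can be set from Linux through the intel-rapl powercap interface; a minimal sketch, assuming the usual package-0 domain path (it can differ per system, and this needs root):

```python
# Minimal sketch: cap the CPU package long-term power limit (PL1) to 100W
# via the Linux powercap (intel-rapl) interface. Needs root; the domain
# path is an assumption and can differ per system.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain

def set_pl1(watts: int) -> None:
    # powercap takes microwatts; constraint_0 is the long-term limit
    (RAPL / "constraint_0_power_limit_uw").write_text(str(watts * 1_000_000))

def read_pl(constraint: int) -> float:
    return int((RAPL / f"constraint_{constraint}_power_limit_uw").read_text()) / 1e6

set_pl1(100)
print(f"PL1={read_pl(0)}W PL2={read_pl(1)}W")
```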
Finally received the motherboard for the 265K yesterday.
After all the fixes since launch this is even better than I expected/remembered from last year.
20W from the wall without PCIe cards at idle-ish (vs like 50W+ for AM5). More MT performance than a 9700X at about half the wall power (50W PL1/2), neat.
At the end of the journey, scissors will join your side and assure your victory over the terrible army of boxes. Stay strong!
Nah, those are just blocks of memory for whatever the MC is doing internally. Can't really go any deeper than loosely saying top and bottom are the two MCs without some highly NDA'd NVIDIA info.
The SRAM would be for batching data bursts and general DRAM management.
Hey, thanks!
I don't have a blog on this, mainly because IMO I don't have much to say other than "learn to recognize patterns well" and "learn some low level chip design stuff", the latter to get an idea of how stuff works and might be laid out.
And yep, you can kinda see the dual MC, but no clear cut.
Wouldn't have taken me a year and a new SSD to debug if I could just see something was constantly writing to the drive.
The new ReFS-as-C:\ that's out in canary builds will be interesting to observe knowing ReFS does this...
Observing TBW more closely on the new 9100 PRO, it is quite obvious ReFS is doing some insane write amplification, which results in those lockups after heavy I/O. Really wish Microsoft didn't hide this traffic from Task Manager and other utilization reporting subsystems.
Turns out the SSD lockups/slowness after heavy writes and TRIM freezing randomly were actually caused by ReFS, my old SSD is fine, oops.
Disabling ReFS dedup/compression didn't really change anything, so after 4 years it is back to NTFS.
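If you want to catch this kind of hidden write traffic yourself, polling the drive's own NVMe SMART counters works regardless of what the OS reports; a rough sketch using smartctl's JSON output (the device path is an assumption):

```python
# Rough sketch: measure host writes over an interval by polling the NVMe
# "Data Units Written" SMART counter (1 unit = 512,000 bytes per the NVMe
# spec). The drive-side counter sees all writes, including traffic the OS
# hides from Task Manager. Device path is an assumption.
import json, subprocess, time

DEV = "/dev/nvme0"

def data_units_written() -> int:
    out = subprocess.run(["smartctl", "-j", "-A", DEV],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)["nvme_smart_health_information_log"]["data_units_written"]

before = data_units_written()
time.sleep(60)
written_gb = (data_units_written() - before) * 512_000 / 1e9
print(f"~{written_gb:.2f} GB written in the last 60s")
```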
I guess if you're on Intel Arc... use 28 chunk render distance? ❓❔❓❔
(repeatable, I reran 24/28/32 chunks 4 times each because it didn't make sense, and it still doesn't make sense)
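For context on why it doesn't make sense: loaded chunk columns grow roughly quadratically with render distance, so cost should rise monotonically from 24 through 28 to 32. A back-of-envelope calc, using the rough (2r+1)² approximation:

```python
# Back-of-envelope: chunk columns loaded at render distance r is roughly
# (2r + 1)^2, so 28 chunks sitting out of line with 24 and 32 is odd.
for r in (24, 28, 32):
    print(f"render distance {r}: ~{(2 * r + 1) ** 2} chunk columns")
# render distance 24: ~2401 chunk columns
# render distance 28: ~3249 chunk columns
# render distance 32: ~4225 chunk columns
```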
No RTX 4070 because I already sold it, oops. But the GTX 1660 works well enough, it's all CPU bound anyway.
You can also see the difference between the professional Quadro driver and the GeForce driver with game optimizations - neat!
Back when I tested the RX 6600 vs GTX 1660 in Minecraft 2y ago, AMD was firmly ahead, but it seems like something in MC changed a bit and AMD possibly reintroduced some CPU overhead in the driver.
NVIDIA is now ahead again at "normal" and especially "high" render distance.
Minecraft Patrix 256x resourcepack + OptiFine runs on the RX 9070 XT. AMD raised the 16K hardware texture size limit.
This combo just crashes on startup on any previous AMD GPU. (It still needs 25GB+ of VRAM so the 9070 XT can't really "run" it, but hey, progress!)
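If you want to see what texture ceiling your own driver reports, the relevant value is GL_MAX_TEXTURE_SIZE; a quick sketch using the moderngl package (the package choice is just for convenience here):

```python
# Quick query of the driver-reported maximum 2D texture size, the limit a
# 256x resourcepack atlas runs into. Sketch assumes the moderngl package.
import moderngl

ctx = moderngl.create_standalone_context()
size = ctx.info["GL_MAX_TEXTURE_SIZE"]
print(f"{ctx.info['GL_RENDERER']}: max 2D texture = {size}x{size}")
# For scale: a single 32768x32768 RGBA8 atlas is 32768*32768*4 bytes
# = 4 GiB before mipmaps, so the VRAM requirement explodes fast.
```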
This is a complete 180 from what AMD recording/streaming used to be - amazing work there.
I'd say, short of a few software support woes still, quality-wise you're not giving up anything for streaming or casual recording with AMD anymore, guess my 4070 is a testbench card now.
sample1.mkv is RX 9070 XT H.264, averaging about 11 Mbps
sample2.mkv is RTX 4070 AV1, averaging about 12 Mbps
Can't compare AMD AV1 yet as the OBS support hasn't been released (no B-frames, and it ignores CBR/VBR/CQP limits lol); should be soon hopefully, as parts got merged on Friday.
Also did blind A/B tests of some OBS recordings, not identical scenes but decently similar ones so you can also compare. Everyone said they look almost the same.
Clips are in this google drive link alongside the not-twitter-compressed screenshare screenshots: drive.google.com/file/d/1x-Kk...
Some quick non-scientific RX 9070 XT video encoder tests.
Two pictures from the receiving end of a Discord screenshare from the 9070 XT. 1440p60 at about 9-10 Mbps since I have Nitro, both during movement.
Looks perfectly usable for a stream, full res in gdrive link in reply 🧵
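To put those bitrates in perspective, bits per pixel is a handy density metric; a tiny calc, assuming 1440p60 for all three captures (only the screenshare resolution is stated above):

```python
# Bits-per-pixel at the bitrates above, assuming 1440p60 throughout.
# ~0.05 bpp is lean, which is why encoder efficiency matters this much.
def bpp(mbps: float, w: int = 2560, h: int = 1440, fps: int = 60) -> float:
    return mbps * 1e6 / (w * h * fps)

for name, rate in (("9070 XT H.264", 11), ("4070 AV1", 12), ("Discord share", 9.5)):
    print(f"{name}: {bpp(rate):.3f} bpp")
```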
AMD StrixHalo annotation!
I quite like the memory/MALL layout, but it was really annoying to annotate. Also dual media engines!
Photos provided to me by x.com/Kurnalsalts
Full res available at nemez.net/die
VFIO on the Asrock Z790 Livemixer:
✅All slots separate IOMMU groups properly
✅PCIe card SR-IOV works
✅iGPU SR-IOV works (with out of tree i915)
⁉️Not possible to set the iGPU as the primary output if there is a dGPU
As expected of a modern Intel chipset, this makes for a nice VFIO workstation.
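For anyone wanting to run the same IOMMU grouping check on their own board, the standard approach is walking /sys/kernel/iommu_groups; a small sketch:

```python
# List every IOMMU group and its member devices, the usual sanity check
# before committing a board to VFIO passthrough. A device sharing a group
# with unrelated hardware can't be passed through cleanly on its own.
from pathlib import Path

for group in sorted(Path("/sys/kernel/iommu_groups").iterdir(),
                    key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```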