
Posts by Clamchowder


Intel's Arrow Lake is impressively efficient when running throughput-bound stuff on 16 underclocked E-Cores. I'm running all Geekbench 6 workloads in parallel through Intel SDE (so emulated, to get exact instruction counts), and this chip is getting over 3.7G instructions/watt.

1 week ago
Clam's Chip Commentary

clamtech.org?dest=gpuwrite
Here's a look at GPU cache/memory write bandwidth across a variety of hardware

1 month ago
Clam's Chip Commentary

Time for a little site with some multi-page support! I plan to write random thoughts on hardware there. To start, here's some commentary on drilling down GPU cache latency using very funny OpenCL kernels: clamtech.org?dest=gpudire...

1 month ago

Looking good on Intel too, improving measured latency by ~6.8 ns, or 19-20 cycles

5 months ago

So far I've used a simple a=a[a] pattern to test GPU memory latency, but that indexed addressing penalty always bothered me. I finally got around to making the compiler spit out a chain of dependent loads and nothing else.

Good start on AMD: I save ~4 ns on scalar accesses and ~12 ns on vector accesses

5 months ago

Yeah. I was more surprised ARL didn't dynamically adjust the SNCU/D2D clock, even with XMP disabled. They clearly had that capability in MTL and the uncores are very similar.

Still, ARL idle power is just fine for a desktop platform, so maybe they didn't bother.

6 months ago

Intel's desktop Arrow Lake always keeps the SNCU (the die-to-die interface and some other parts of the uncore) at 2.6 GHz. On Meteor Lake, it goes up to 2.4 GHz but varies a lot, probably to save power.

6 months ago

Intel's newer Emerald Rapids improves L3 latency compared to Sapphire Rapids, at least when one core is able to allocate a similar amount of L3 capacity. It's still high at ~105 core cycles, but better than ~125 cycles from the last generation.

9 months ago

Yep, that was a fun one. Loved the quirky Northbridge with its separate paths for CPU and GPU memory accesses. I should boot that system back up sometime and check the NB power states too.

9 months ago

That's for Cortex A78; I haven't tried it on anything else. Among officially documented events, 0x26 (iTLB access) can be used to infer op cache misses at 32B fetch window granularity, because A78's op cache is virtually addressed and doesn't require a TLB lookup on a hit.

10 months ago

Arm never documented PMU events for their op cache. From brute-force searching, there's a pair of possibly related events: event 0x177 may be op cache hits, and 0x178 may be op cache misses. Both events appear to count instructions (not micro-ops or cachelines).

10 months ago

AMD had a separate Shader Array subdivision within Shader Engines even in the original GCN architecture. Interesting that it never mattered until RDNA added an L1 cache to the Shader Arrays and put multiple SAs in each SE.

1 year ago

Output from DispatchRays calls in CP2077's path tracing mode, with exposure adjusted manually and no denoising done

There's just not enough computing power available to get a good sample count while maintaining real-time performance. It's like setting ISO 102400 on a DSLR

1 year ago

Messing around with microbenchmarking Arc B580
12.2 TB/s of L1 bandwidth, or ~214 bytes per Xe Core cycle
Theoretical is probably 256B/cycle. But close enough for now

1 year ago

Cinebench 2024 on the Ryzen 7 4800H (Zen 2)
Stock: 600 pts
Op cache disabled: 525 pts

1 year ago

In games with higher VRAM usage (DCS), GPU-Z incorrectly shows 16.8 GB of dedicated memory used (out of 12 GB total lol). Task manager correctly shows shared memory allocated

1 year ago

Frame drop in Baldur's Gate 3, as captured by GPUView. The game has to move ~35 MB to the GPU, which means reserving space to hold the data, getting the data contiguous in physical memory, and of course doing the transfer. That's fast in absolute terms, taking just 12.6 ms, but it's enough to miss a 60 Hz vsync interval.

1 year ago

Zero-copy should be more natural on an iGPU versus a discrete one, but not all iGPUs can do zero-copy.

Here I'm testing OpenCL Shared Virtual Memory with a 256 MB buffer and only modifying one 32-bit value in it. Anything in the millisecond range implies the driver had to copy the entire buffer.

1 year ago
Disabling Zen 4's Op Cache (YouTube video by lamchester1)

www.youtube.com/watch?v=SwlK...
Discussing turning off Zen 4's op cache and its performance consequences, in video format :)

1 year ago

Ooh, thanks for the link! I looked on Intel's PDFs and sites, and didn't find anything on LNL/ARL. I will check this out

1 year ago

Skymont perfmon events (specifically unit masks for evt 0xD1, retired mem loads by source) appear to act differently on Arrow Lake and Lunar Lake. Expected, given their different cache setups.

But I wish Intel would hurry up and get LNL/ARL documentation written up :/

1 year ago

YouTube AV1 decoding can be heavy on old CPUs, even at 1080p. IPC, though, is surprisingly good for AMD's very outdated Family 12h architecture.

1 year ago

Remember that after Bulldozer, it took AMD five years to change direction. And even with Zen, that's only a foundation. They built on that for several more years before really threatening Intel

1 year ago

Exactly. It's like getting one move in a turn-based game and having the board call whether you won or lost based on the results of that single turn.

I also think an engineer should lead the company because they can appreciate the technical challenges/sniff out BS, and Pat Gelsinger is an engineer.

1 year ago

x.com/lamchester/s... I figured some of it out for Zen 2. Events 7 and 0x47 correspond to traffic on the two DDR4 channels. Never got around to doing that for newer Zen generations though

1 year ago

:D That was a fun article to write, though I spent way too much free time poking around and gathering data

1 year ago

(bottom is from the Zen 4 PPR)
Zen 2 used eight bits, letting you select any combination of logical SMT threads within a CCX for L3 performance monitoring. More flexible, but would take too many bits with Zen 3's larger CCXes.

1 year ago

In Zen 5's Processor Programming Reference, the L3 performance event select registers now take core IDs from 0-15. That would let the register handle 16-core CCXes.

Of course this doesn't mean a 16-core CCX will show up, but it's interesting that AMD's laying the groundwork for it.

1 year ago

Zen 4 has a funny erratum where an on-die 1.8 V voltage regulator might be configured incorrectly, which then kills the CPU by feeding something too much voltage.

1 year ago

Zen 4 had a 144 entry loop buffer. However, it's disabled in the latest BIOS for my ASRock B650 PG Lightning. Maybe AMD found a bug that no one else ran into (or realized they were hitting).

Likely doesn't affect performance, as the op cache has more than enough bandwidth to feed downstream stages.

1 year ago