The term really refers to the technique of having a fatter, “Lego brick” structure at the core of a system, and thus relaxing static type checking (within that system). In exchange, you increase the bang-for-buck of codepaths, increase dynamic runtime capabilities, and so on.
Posts by Ryan Fleury
“Fat struct” does not imply a “type” or “kind” field. Fat structs can contain them, but the key point is that, unlike discriminated unions, those fields have nothing to do with the data layout.
Alas, extragate behavior
Oh no… direct confrontation?! He’s coming to explain his point?! I was just planning on disparaging his reputation from a distance… That’s what social media is for, after all…
I just call it “fat struct”, “mega struct”, or “lego brick”. An important part is that the flags don’t signify the presence of data, they just control which codepaths apply.
I wrote a post about this in the context of UI programming: www.rfleury.com/p/ui-part-3-...
bsky.app/profile/rfle...
Yes, and furthermore, the degree to which variants are different is often exaggerated unnecessarily by the programmer. You have some degree of control over the overlap. But if they are that different, it’s possible they shouldn’t even be in the same type at all.
This makes the serialization code easier to table-drive and generate, and it just makes it smaller & more static; using the type for a new case doesn’t require all sum-type usage sites to update accordingly.
Imagine, for example, serializing a type, so I can send it over the network. If it’s a fairly complex sum type, that sum type structure needs to mirror itself in the serialization code. If it’s a simpler product type package, the serialization code is flat.
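A minimal C sketch of the “flat serialization” point, under my own assumptions (the `Message` type and field names are hypothetical): a flat product type can be serialized by walking a static field table, with no per-variant branching mirrored into the encoder.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Hypothetical flat product type — one static layout, no variants.
typedef struct Message Message;
struct Message
{
  uint32_t flags;
  uint32_t id;
  float    x, y;
};

// Table-driven encoder: each entry is (offset, size). Adding a field is
// one new table row; no usage sites elsewhere need to change.
typedef struct FieldSpec FieldSpec;
struct FieldSpec
{
  size_t off;
  size_t size;
};
static FieldSpec message_fields[] =
{
  { offsetof(Message, flags), sizeof(uint32_t) },
  { offsetof(Message, id),    sizeof(uint32_t) },
  { offsetof(Message, x),     sizeof(float) },
  { offsetof(Message, y),     sizeof(float) },
};

// Flat loop — the encoder's structure does not mirror any sum-type shape.
static size_t message_serialize(Message *m, uint8_t *out)
{
  size_t n = 0;
  for(size_t i = 0; i < sizeof(message_fields)/sizeof(message_fields[0]); i += 1)
  {
    memcpy(out + n, (uint8_t *)m + message_fields[i].off, message_fields[i].size);
    n += message_fields[i].size;
  }
  return n;
}
```

With a sum type, this loop would instead be a switch whose arms mirror every variant; here, the table is the whole story.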
Not always. You *can* have a tag, but in many cases, the more useful construct is flags, which signal an instance’s applicability to a given codepath. You can also store a tag, and have a tag -> flags table. But many codepaths simply don’t want to check.
Anyways, it would’ve been nice if you’d left it at “I don’t know what he means”, rather than jumping to explaining my point for me, with the most caricatured & laziest interpretation, for a quick dunk.
This was interleaved with dunking on my “pointers” point, and the idea there was basically you don’t need to assume that pointers can be null at any time, if you are working in a sane architecture, where e.g. memory is preallocated, nil structs are used to make all reads safe, and so on.
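The nil-struct part of that could be sketched like so (a minimal illustration under my own assumed names, not the exact code from any real codebase): lookups return a pointer to a static zeroed instance instead of `NULL`, so every read is safe without a null check at each usage site.

```c
#include <assert.h>

// Hypothetical entity type; a zeroed instance is a valid "nil" value.
typedef struct Entity Entity;
struct Entity
{
  int   alive;
  float hp;
};

// Static nil instance — reads through it are always safe.
static Entity nil_entity = {0};

// Lookup never returns NULL; out-of-range IDs map to the nil struct.
static Entity *entity_from_id(Entity *table, int count, int id)
{
  if(id < 0 || id >= count)
  {
    return &nil_entity;
  }
  return &table[id];
}
```

Callers can then read `entity_from_id(...)->hp` unconditionally; a nil entity simply reads as zeroes and falls out of any codepath gated on `alive`.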
This doesn’t sacrifice type safety, it just doesn’t apply it to the Nth degree. Sometimes you want to relax typechecking in favor of dynamic flexibility & composability. At the boundary of that space, it is obviously still typechecked.
Another way of saying this is that the structure of code is a function of the structure of types. Do you want your code to be mostly flat, work in large batches, with as-simple-as-possible data transform requirements, derived from what the computer needs to do? Okay, design the types to match.
No, I am including tagged unions in that definition, actually; nobody here is actually investigating my point with any degree of depth.
For anyone wanting to treat the subject in good faith: bsky.app/profile/rfle...
This is not always the right decision, but often can be, because it deduplicates codepaths & allows them to apply to a larger set of data, thus increasing their utility.
This often results in designs which collapse multiple variants of a sum type into a single product type. This eliminates typechecking between those variants, effectively forming a “dynamically typed space” for them.
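A before/after sketch of that collapse, with hypothetical shape types of my own invention: a tagged union forces a kind-switch at every usage site, while the collapsed product type lets each codepath just use the fields it needs.

```c
#include <assert.h>

// Before: a tagged union — every usage site must switch on `kind`.
typedef struct ShapeSum ShapeSum;
struct ShapeSum
{
  enum { ShapeKind_Circle, ShapeKind_Rect } kind;
  union
  {
    struct { float radius; } circle;
    struct { float w, h; }   rect;
  };
};

// After: variants collapsed into one product type. Shared data (position,
// color, etc.) would live here once; typechecking *between* the old
// variants is gone, forming a small dynamically-typed space.
typedef struct Shape Shape;
struct Shape
{
  float radius; // used by circle codepaths
  float w, h;   // used by rect codepaths
};

// A rect codepath reads rect fields directly — no switch required.
static float shape_rect_area(Shape *s)
{
  return s->w * s->h;
}
```

(The union version relies on C11 anonymous unions; the point stands with named members too.)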
In my experience it has been much better to think carefully about data transform requirements of my program, and design batch workloads that operate on a set of simple homogeneous (product) types, and have everything flow that way.
No, that isn’t what I’m arguing. I’m saying heterogeneous types require heterogeneous codepaths. Sum types require interleaving heterogeneous work at all usage sites of the type, which is often a bad decision for both simplicity & performance.