because this is getting some decent traction, I’d like to point out that you can donate / become a lifetime member with some pretty cool perks for only $36:
(they also have like, neat VPSes + website hosting stuff!)
sdf.org?join#donate
Posts by LaurieWired
More Vintage Computing museums should rent out cloud access to their rare hardware.
SDF (Super Dimension Fortress) does it, and it’s freaking awesome.
I’m literally logged into a Sun SPARCstation…anyone can do this for free, right now. Just SSH in.
Of course, I tried my hardest to prove this right, but no, this one turned out to be completely false.
Following specific steps (initial neglect and then suddenly perfect care) on the standard Gen1 (P1) Tamagotchi, your pet would eventually evolve into Bill Clinton and live a very long time.
The popular rumor when I was growing up was repeatedly feeding your pet would eventually make them explode.
Back in the 90s and early 2000s, Tamagotchis had crazy lore. Most were completely false (but some were true!)
These rumors ranged from saying that they were aliens escaping a drunk planet to being able to unlock Bill Clinton on a Tamagotchi.
This rumor is actually...completely true!
(sadly, the maximum speedup just escaping Earth's gravity well is something like 1 × 10^(-10), so yeah, the black hole thing is kinda necessary)
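That order of magnitude checks out with the weak-field approximation for gravitational time dilation, where the fractional clock-rate difference between Earth's surface and deep space is GM/(Rc²). A quick sanity check (velocity effects ignored):

```python
# Fractional clock speedup for escaping Earth's gravity well,
# weak-field approximation: delta = G*M / (R * c^2)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m
c = 2.998e8          # speed of light, m/s

fractional_speedup = G * M / (R * c**2)
print(fractional_speedup)  # ~7e-10 — clocks far from Earth tick faster by this fraction
```

So the speedup is a few parts in 10^10, which is why you'd need a black hole's gravity well to make the effect computationally useful.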
There’s some (fun?) papers that allow you to solve the halting problem by placing yourself dangerously close to a black hole…while your computer safely computes for ~infinite-ish amounts of time.
One of the better papers:
"Relativistic computers and the Turing barrier" (Németi & Dávid 2006)
Time Dilation kind of makes the whole “datacenters in space” idea more fun.
Technically… a GPS Block III CPU runs an extra ~7,000 clock cycles per day compared to the same machine on Earth.
Extend this to the extreme, and you get a whole subfield of CS + physics called relativistic hypercomputation.
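The ~7,000-cycles figure can be sanity-checked from the well-known net relativistic offset for GPS orbits, roughly +38.5 microseconds per day (gravitational speedup minus velocity slowdown). The 180 MHz clock below is my assumption for illustration, not a documented Block III spec:

```python
# Extra clock cycles per day for a CPU in GPS orbit vs. the same chip on Earth.
GRAV_SPEEDUP_S  = 45.7e-6   # gravitational time dilation, s/day (clock runs fast)
VEL_SLOWDOWN_S  = 7.2e-6    # special-relativistic slowdown, s/day
CLOCK_HZ        = 180e6     # assumed clock rate for illustration

net_offset = GRAV_SPEEDUP_S - VEL_SLOWDOWN_S   # ~38.5e-6 s/day
extra_cycles = net_offset * CLOCK_HZ
print(round(extra_cycles))  # ~6,900 extra cycles per day
```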
If L1 got corrupted, the kernel would invalidate the whole cacheline and force a refill from slower pools.
It worked…but had a massive performance hit.
As much as 30%!
It’s a particularly nasty issue.
Unlike DRAM w/ ECC, there wasn't a transparent way to correct it.
The fix was brutal. IBM recommended scientists reprogram critical workloads to write-through mode.
Basically, every store in L1 would also travel down the cache hierarchy immediately.
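The difference between the two policies can be sketched with a toy cache model (illustration only; class and variable names are invented, not IBM's actual mechanism):

```python
# Toy model of write-through stores: every store is immediately propagated
# down the hierarchy, so a corrupted (parity-only) L1 line can always be
# refilled from a clean lower-level copy — at the cost of extra traffic.

class Cache:
    def __init__(self, next_level, write_through=False):
        self.lines = {}              # addr -> value
        self.next = next_level       # another Cache, or a dict acting as memory
        self.write_through = write_through

    def store(self, addr, value):
        self.lines[addr] = value
        if self.write_through:
            # Forward the store down the hierarchy immediately.
            if isinstance(self.next, Cache):
                self.next.store(addr, value)
            else:
                self.next[addr] = value

memory = {}
l2 = Cache(memory, write_through=True)
l1 = Cache(l2, write_through=True)
l1.store(0x40, 123)
print(memory[0x40])  # 123 — the store reached main memory right away
```

In write-back mode, by contrast, that store would sit dirty in L1 until eviction, and a bit flip there would be unrecoverable.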
How do you kill a Supercomputer?
(Accidentally) using radioactive solder is a good way.
IBM’s Blue Gene/L frequently crashed when running simulations at LLNL.
Turned out that alpha particles from the lead solder in the board carrier were slamming the L1 cache with bit flips.
I think perhaps the weirdest thing about SuperH was its concept of “upwards compatibility”.
The ISA itself is a microcode-less design: all future instructions were trapped and emulated by older chips.
It’d be slow…but you could run future code on very old chips!
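The trap-and-emulate idea looks roughly like this toy sketch (illustrative only; these are not real SuperH opcodes or Hitachi's actual mechanism):

```python
# An "old" core that traps on instructions it doesn't implement and
# falls back to a software emulation handler.

IMPLEMENTED = {"MOV", "ADD"}          # what this old core decodes natively

def emulate(op, regs):
    # Slow software fallback for "future" instructions.
    if op == "MAC":                   # hypothetical newer multiply-accumulate
        regs["r0"] += regs["r1"] * regs["r2"]
    else:
        raise RuntimeError(f"illegal instruction: {op}")

def execute(op, regs):
    if op in IMPLEMENTED:
        if op == "ADD":
            regs["r0"] = regs["r1"] + regs["r2"]
        elif op == "MOV":
            regs["r0"] = regs["r1"]
    else:
        emulate(op, regs)             # the trap path: slow, but the code runs

regs = {"r0": 0, "r1": 3, "r2": 4}
execute("MAC", regs)                  # new instruction on the old core
print(regs["r0"])  # 12
```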
...cache line, the CPU pipeline stalls way, way less.
This was *really* important for embedded devices, which were often extremely bandwidth constrained in the era.
Sega famously used the processors for the Dreamcast, and ARM actually ended up licensing their patents for Thumb mode!
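The density win is simple arithmetic (the 32-byte cache line size here is an assumption for illustration):

```python
# How many instructions fit in one cache line fetch?
LINE_BYTES = 32
sh_per_line  = LINE_BYTES // 2   # fixed 16-bit SuperH instructions
w32_per_line = LINE_BYTES // 4   # fixed 32-bit instructions
print(sh_per_line, w32_per_line)  # 16 vs 8 instructions per line
```

Twice the instructions per fetch means half the instruction-fetch bandwidth for the same work, which is exactly what a bandwidth-starved embedded part wants.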
In the 90s, Hitachi came up with a bizarre way to conserve memory bandwidth.
Their SuperH architecture, intended to compete with ARM, was a 32-bit architecture that used…16 bit instructions.
The benefit was really high code density. If you can fit twice as many instructions into every...
do you have any links for this? sounds interesting, never heard of it
Interesting, I didn't really think about the feel of the car itself.
I would assume there's a large gap then between racing simulators and the real thing.
Some fields, like drone racing, are closer; sim practice actually translates to real-life gains there, probably because body perception isn't involved.
For a good example of this, look up the Bannister effect!
Anyway, here's the paper, it's a fun read:
lab.plopes.org/published/20...
By sort of “proving” the movement is possible (giving up autonomy!) the concept suddenly clicks, and you’ll “just get it”.
I feel like there’s probably a lot of interesting biological barriers that could be overcome if you trained yourself to go past limits by electrical stimulation first.
Let me give an example.
As a dancer myself, I can tell you aerials have a difficult initial mental barrier early on.
The common way to learn is to essentially let your teacher control your muscle movements, repeating the overall motions, over and over again.
Their mental load was “reduced” by having a computer electrically stimulate their arm instead.
Bodily autonomy wise, it might feel a bit freaky, because you have the proprioception of your arm moving, but without the mental load of you moving it.
I wish more research was poured in this area.
Would you let a computer hijack your muscle movements if it increased your performance 35%?
I totally would.
Came across a really interesting ACM paper today (SplitBody), where subjects were given difficult multitasking challenges.
Full Video:
www.youtube.com/watch?v=KKbg...
By implementing a hedged read strategy that takes advantage of (undocumented!) channel scrambling offsets, I've gotten as much as a 15x reduction in tail latency.
Works across Intel, AMD, Graviton, DDR4, DDR5, x86, ARM, you name it.
Check out the C++ lib I wrote, watch the video, and try it yourself!
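For anyone unfamiliar with the general pattern: a "hedged read" fires a backup request when the first one blows past a deadline, then takes whichever answers first. This sketch is the generic software version of the idea, not the Tailslayer library itself (which works at the DRAM-channel level); all names here are invented:

```python
# Generic hedged-read pattern: duplicate a request that's running slow,
# race the original against the backup, keep the first result.
import concurrent.futures
import random
import time

def read_replica(i):
    # Simulated read with an occasional slow tail.
    time.sleep(random.choice([0.001, 0.05]))
    return f"data-from-{i}"

def hedged_read(hedge_after=0.005):
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(read_replica, 0)
        try:
            return first.result(timeout=hedge_after)
        except concurrent.futures.TimeoutError:
            backup = pool.submit(read_replica, 1)   # hedge: fire a duplicate
            done, _ = concurrent.futures.wait(
                [first, backup],
                return_when=concurrent.futures.FIRST_COMPLETED)
            return done.pop().result()

print(hedged_read())
```

The cost is a little duplicated work on slow requests; the payoff is that p99+ latency collapses toward the fast path.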
Modern DRAM is based on a brilliant design from IBM.
But we're still paying a latency penalty that's existed since the 60s!
In this video, I'm introducing my research project (Tailslayer) that immensely reduces p99.99 latency on traditional RAM!
(yes this is a clip from my next video. you can do very neat things when you take advantage of multiple reorder buffers!)
I love that it’s not 2004 anymore
using multicore processors is too FUN.
haha I'm not quite that mean :)