
Posts by Patrick Dubroy

The machines are fine. I'm worried about us. On AI agents, grunt work, and the part of science that isn't replaceable.

"You don't know which afternoon of debugging was the one that taught you something fundamental about your data until three years later, when you're working on a completely different problem and the insight surfaces"

ergosphere.blog/posts/the-ma...

1 week ago 2 0 1 0
Python signal handling
I've written signal handlers in Python many times before, but never looked deeply into the exact mechanics until today.

Here's a simple example of a SIGINT handler:

import signal


def handler(signum, frame):
    print('Signal handler called with signal', signum)


signal.signal(signal.SIGINT, handler)
Some questions I had, along with the answers that I discovered:

Q: When does handler run, and how is it interleaved with whatever's happening on the main thread?

Answer:

A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the virtual machine to execute the corresponding Python signal handler at a later point (for example, at the next bytecode instruction).

and

Python signal handlers are always executed in the main Python thread of the main interpreter, even if the signal was received in another thread.

(From the doc for the signal module)
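
A small POSIX-only experiment (my own sketch, not from the post) makes both points observable: the Python-level handler runs on the main thread even when the signal is raised from a worker thread, and only once the interpreter reaches a bytecode boundary:

```python
import os
import signal
import threading

handled_in = None

def handler(signum, frame):
    global handled_in
    handled_in = threading.current_thread()

signal.signal(signal.SIGUSR1, handler)

# Raise the signal from a background thread...
t = threading.Thread(target=lambda: os.kill(os.getpid(), signal.SIGUSR1))
t.start()
t.join()

# ...then spin until the interpreter dispatches the Python-level handler.
# It always runs here, on the main thread, at a bytecode boundary.
while handled_in is None:
    pass

assert handled_in is threading.main_thread()
```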

Q: Isn't this kind of scary? So the main thread can be preempted at an arbitrary point to run the signal handler?

Answer:

Unfortunately, yes! This also means:

Warning: Synchronization primitives such as threading.Lock should not be used within signal handlers. Doing so can lead to unexpected deadlocks.

So the restrictions are pretty similar to C — see the signal-safety(7) man page.


Q: Ok, so what's the recommended pattern?

A: One option is to just assign to a global variable in the handler (shutdown_requested = True), and do the actual handling in your main loop or whatever. But in many cases you may need some way to park/wake the code in the main loop. In C you can use the self-pipe trick; Python has built-in support for this with signal.set_wakeup_fd().
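
Here's a minimal sketch of the flag pattern (the names and the self-raised SIGINT are mine, for illustration):

```python
import signal

shutdown_requested = False

def handler(signum, frame):
    # Only set a flag; defer the real work to the main loop.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGINT, handler)

iterations = 0
while not shutdown_requested:
    iterations += 1
    # Simulate the user pressing Ctrl-C on the third iteration.
    if iterations == 3:
        signal.raise_signal(signal.SIGINT)

print("shut down after", iterations, "iterations")
```

If the loop blocks (e.g. waiting on a socket) rather than polling, that's where signal.set_wakeup_fd() comes in, to wake it up.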

How deadlock can occur
It's interesting to look at exactly how deadlock can occur. Here's the implementation of Event from cpython/Lib/threading.py:

class Event:
    def __init__(self):
        self._cond = Condition(Lock())
        self._flag = False
    
    # ...
    
    def wait(self, timeout=None):
        with self._cond:
            signaled = self._flag
            if not signaled:
                signaled = self._cond.wait(timeout)
            return signaled
So the main thread could get preempted after acquiring the lock, but before the wait. Then suppose the signal handler tried to use the set() method on Event. Here's how it's defined:

    def set(self):
        with self._cond:
            self._flag = True
            self._cond.notify_all()
…so you'd have a self-deadlock when the main thread tried to acquire a lock again in the handler.
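
The hazard boils down to threading.Lock not being reentrant: a second acquire on the same thread blocks forever. A sketch of the mechanism (with a timeout so the example terminates instead of hanging):

```python
import threading

lock = threading.Lock()
lock.acquire()  # the main thread holds the lock, as in Event.wait()

# If a signal handler now called Event.set() on this same thread, it
# would try to take the same (non-reentrant) lock again. That acquire
# can never succeed: self-deadlock. A timeout keeps this demo finite:
acquired_again = lock.acquire(timeout=0.1)
print(acquired_again)  # → False

lock.release()
```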


TIL: Python signal handling
→ github.com/pdubroy/til/...

1 week ago 10 0 0 0
Preview
WebAssembly is not as hard as it seems | The Web Dev Podcast Series 1 It's common to only hear Wasm talked about in contexts that seem… advanced. How can mere mortals like us hope to build with it? Patrick Dubroy talks about flattening the learning curve and putting…

Did you miss @dubroy.com teaching us about WebAssembly?

No worries! Check out our conversation on the Web Dev Podcast, then code along with us on Learn with Jason.

WDP: codetv.dev/series/web-d...
LWJ: codetv.dev/series/learn...

2 weeks ago 4 1 0 1
Bar chart comparing performance improvements across three technologies: JSON shows 15.6ms (v17) reduced to 0.35ms (v18), a 44.6x speed increase; LiquidHTML shows 2,250ms (JS) reduced to 36ms (Wasm), a 62.8x speed increase; and ES5 shows 3,308ms (JS) reduced to 55ms (Wasm), a 60.3x speed increase. Blue bars represent 'Before' measurements and green bars represent 'After' measurements


New post on the @ohmjs.org blog —

Inside Ohm's PEG-to-Wasm compiler
→ ohmjs.org/blog/2026/03...

v18 is now more than 50x faster for real-world grammars while using about 10% of the memory 🔥

…this post goes into the details of how it's built.

2 weeks ago 8 4 0 0
Preview
GitHub - rxi/microui: A tiny immediate-mode UI library A tiny immediate-mode UI library. Contribute to rxi/microui development by creating an account on GitHub.

Love this —

microui: A tiny, portable, immediate-mode UI library written in ANSI C
→ github.com/rxi/microui

Only around 1100 SLoC (!)

2 weeks ago 9 1 0 0

In 30 minutes!

3 weeks ago 2 1 0 0

Tomorrow at 17:20 CET / 12:20 EDT / 9:20 PDT, I'll be livestreaming at twitch.tv/jlengstorf, teaching @jason.energy about WebAssembly!!

I'll do my best to condense the best parts of our Wasm book (@wasmgroundup.com) into ~60 minutes. 😄

3 weeks ago 4 4 0 0
Michael Vollmer : Programming Languages, Computer Science

Interesting theme that came up in a few talks (incl. mine) was flat/packed tree representations.

Michael Vollmer (Univ. of Kent) has done tons of interesting work in this area: recurial.com

3 weeks ago 3 1 0 0

Had a great time hanging out at MoreVMs today, thanks @stefan-marr.de for the invitation and for organizing!

I believe my talk was recorded, so hopefully I can share soon.

3 weeks ago 2 1 1 0

Ah, that was with @avibryant.com and @kevinlynagh.com and it was the interpreter for Fidget bytecode, which I believe you and I discussed a bit on Discord? I really need to write that up and publish it. 😬

4 weeks ago 2 0 0 0
MoreVMs 2026 - MoreVMs'26 - ‹Programming› 2026 The 10th MoreVMs workshop aims to bring together industrial and academic programmers to discuss the design, implementation, and usage of modern languages and runtimes. This includes aspects such as re...

Looking forward to a couple of @ohmjs.org- and @wasmgroundup.com-related things next week:

1️⃣ Talking about PEG-to-#Wasm compilation in Ohm v18 at MoreVMs: 2026.programming-conference.org/home/MoreVMs...

2️⃣ Live-streaming with @jason.energy on Thursday at 17:30 CET, teaching him about WebAssembly.

4 weeks ago 4 1 1 0

One more time — I'm looking for new consulting clients.

Some ways I can help:

∙ Fractional tech leadership (tackling "leadership debt" in small eng orgs)
∙ Full-stack, 0 to 1 projects
∙ Language design & impl (e.g. with @ohmjs.org)
∙ JavaScript/TypeScript perf

(🔁 appreciated)

4 weeks ago 11 16 0 0
Consulting
I do technical advising and freelance development for companies big and small.

Some recent examples:

∙ Part-time advising for a small startup, doing regular 1-on-1s with the CTO and selected ICs. Advised on technical architecture and team issues, and helped them hire a Head of Engineering and their first Staff Engineer.
∙ Worked with HCI pioneer Michel Beaudouin-Lafon and his research group to build a fully incremental processing pipeline for Asciidoc with Ohm.
∙ For a research group investigating parametric CAD/CAE systems, implemented a GPU-based interpreter (in Rust and WGSL) for rendering implicit surfaces.
For engineering projects, I’m especially interested in work where I can combine my deep systems expertise with frontend development and UX work.


Also, I finally put together a consulting page, for anyone who's interested in working with me.

dubroy.com/consulting/

Did I mention I still have availability this year? 😇

1 month ago 6 0 2 1
Spring has sprung over here; I’m enjoying the sunshine.

I’m teaching another Scratch course at my kids’ Montessori school. It’s 90 minutes once a week for 5 weeks, and this time, the theme is “programming artificial life”:

Learn programming and create your own interactive digital creature. First, you’ll design your character (on paper or on the iPad). Then you learn how to give it behaviour in Scratch. Let it walk across the screen, search for food, decide when it needs to sleep. Turn your iPad into a virtual world!

The bee-like thing above is my creature, of course. It flaps its wings, gets happy when you feed it ants, and loves when you “pet” it with your cursor.

I still have availability for new consulting projects in 2026. I also finally put together a consulting page in case you’re curious how I could help you. I’m always happy to chat about potential projects, so feel free to get in touch.

Over the past few weeks, I’ve been finishing up the last little bits of WebAssembly support in Ohm, and officially announced the v18 beta. I’ve continued to make performance improvements, and am pretty psyched that it’s now about 50x faster on real-world grammars 🔥.

Speaking of Ohm, I’ve been invited to do a talk at the MoreVMs Workshop next week. So I’ve been spending time preparing that.

And I’ll be doing live pair programming with Jason Lengstorf on Learn with Jason on Thursday, March 19. You should tune in!


What I'm up to now
→ dubroy.com/now/

∙ Teaching another Scratch course
∙ Still have availability in 2026 for consulting
∙ Ohm v18 is now ~50x faster than v17 🔥
∙ Invited talk at MoreVMs next week
∙ Live pair programming with @jason.energy next Thursday at ~17:30 CET!

1 month ago 10 0 1 0
Just like in JavaScript, you can do shared memory multithreading in WebAssembly! I've long known this was possible, but until the other day, had never actually played with it myself, so I decided to put together a small, self-contained example.

(This is for Node, but it's pretty much the same in the browser.)


Details
Structured cloning of WebAssembly.Module
Normally you'd instantiate a Wasm module with WebAssembly.instantiate, which gives you a module instance. Here, we use WebAssembly.compile, which gives us a WebAssembly.Module. This is a stateless object that is structured-cloneable, which allows it to be safely shared across realm boundaries.

Serialization (an implicit part of structured cloning) of WebAssembly modules is defined in §3 of the WebAssembly Web API, which says:

Engines should attempt to share/reuse internal compiled code when performing a structured serialization, although in corner cases like CPU upgrade or browser update, this might not be possible and full recompilation may be necessary.

Shared memory
WebAssembly.Memory also supports structured cloning. When we pass shared: true, the buffer property is a SharedArrayBuffer:

The structured clone algorithm accepts SharedArrayBuffer objects and typed arrays mapped onto SharedArrayBuffer objects. In both cases, the SharedArrayBuffer object is transmitted to the receiver resulting in a new, private SharedArrayBuffer object in the receiving agent (just as for ArrayBuffer). However, the shared data block referenced by the two SharedArrayBuffer objects is the same data block, and a side effect to the block in one agent will eventually become visible in the other agent.

Atomic add
The last piece of the puzzle is the i32.atomic.rmw.add instruction used in the addId function:

  (func (export "add") (result i32)
    ;; mem[0] += workerId
    i32.const 0
    global.get 0
    i32.atomic.rmw.add))
This instruction is defined in the threads proposal (a Stage 4 proposal, so not finalized yet), which defines "a new shared linear memory type and some new operations for atomic memory access".

i32.atomic.rmw.add is equivalent to LOCK XADD on x86. As described in the threads proposal:


TIL: Multithreaded WebAssembly
→ github.com/pdubroy/til/...

(corrected)

1 month ago 10 1 0 0
Post image

The playground is awesome! Btw you might want to change the CSS for the shortcuts… it's resolving to Fira Code for me, which has ligatures for many of these things, which makes it confusing.

Adding `font-variant-ligatures: none` seems to fix it.

1 month ago 1 0 1 0
L14: Natural Deduction for IfArith (YouTube video by Kristopher Micinski)

And @krismicinski.bsky.social's "Natural Deduction for IfArith" lecture is also great: www.youtube.com/watch?v=neCr...

1 month ago 1 0 1 0
Crash Course on Notation in Programming Language Theory This blog post is meant to help my friends get started in reading my other blog posts, that is, this post is a crash course on the notation ...

A while back, someone in the @wasmgroundup.com Discord asked about resources for learning the formal notation used in the WebAssembly spec.

One I like is Jeremy Siek's "Crash Course on Notation in Programming Language Theory": siek.blogspot.com/2012/07/cras...

1 month ago 8 1 2 0
It comes up rarely, but on a few projects I've wanted a dead simple hash table implementation. Most recently, it was for an experiment in the Ohm WebAssembly compiler. When I'm compiling a grammar, I assign each rule name a unique ID, but I wanted a fixed-size cache (e.g. 8 or 32 items) keyed by rule ID.

I discovered Fibonacci hashing, aka "Knuth's multiplicative method":

So here's the idea: Let's say our hash table is 1024 slots large, and we want to map an arbitrarily large hash value into that range. The first thing we do is we map it using the above trick into the full 64 bit range of numbers. So we multiply the incoming hash value with 2^64/φ ≈ 11400714819323198485. (the number 11400714819323198486 is closer but we don't want multiples of two because that would throw away one bit) Multiplying with that number will overflow, but just as we wrapped around the circle in the flower example above, this will wrap around the whole 64 bit range in a nice pattern, giving us an even distribution across the whole range from 0 to 2^64. To illustrate, let's just look at the upper three bits. So we'll do this:

size_t fibonacci_hash_3_bits(size_t hash)
{
    return (hash * 11400714819323198485llu) >> 61;
}
All we have to do to get an arbitrary power of two range is to change the shift amount. So if my hash table is size 1024, then instead of just looking at the top 3 bits I want to look at the top 10 bits. So I shift by 54 instead of 61. Easy enough.

It turns out the Linux kernel has used this for ~6 years; here's the comment from include/linux/hash.h:

/*
 * This hash multiplies the input by a large odd number and takes the
 * high bits.  Since multiplication propagates changes to the most
 * significant end only, it is essential that the high bits of the
 * product be used for the hash value.
 *
 * Chuck Lever verified the effectiveness of this technique:
 * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
 *
 * Although a random odd number will do, it turns out that the golden
 * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice
 * properties.  (See Knuth vol 3, section 6.4, exercise 9.)
 *
 * These are the negative, (1 - phi) = phi**2 = (3 - sqrt(5))/2,
 * which is very slightly easier to multiply by and makes no
 * difference to the hash distribution.
 */
#define GOLDEN_RATIO_32 0x61C88647
#define GOLDEN_RATIO_64 0x61C8864680B583EBull
Why would you use this?
(I may get some details of this explanation wrong, because hashing and hash table sizing are a surprisingly complex subject!)

If I understand correctly, it makes sense to use this if (a) you don't have access to a good hash function, and (b) you want power-of-two (not prime) table sizes; and/or (c) you want the bucket calculation operation to be as fast as possible. (A multiplication plus a shift is significantly faster than modulo/division.)
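
The same trick translates to Python if you emulate 64-bit wraparound with a mask (a sketch of my own, using the constant from the quote above):

```python
GOLDEN = 11400714819323198485  # ~ 2**64 / phi, and odd
MASK64 = (1 << 64) - 1

def fib_hash(h: int, bits: int) -> int:
    """Map an arbitrary 64-bit hash into 2**bits buckets."""
    return ((h * GOLDEN) & MASK64) >> (64 - bits)

# Sequential keys -- a worst case for naive masking -- spread out nicely
# across a 1024-slot table:
buckets = [fib_hash(i, 10) for i in range(8)]
assert all(0 <= b < 1024 for b in buckets)
assert len(set(buckets)) == 8  # no collisions among these 8 keys
```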


TIL: Fibonacci hashing
https://github.com/pdubr

1 month ago 3 0 0 0

I should also mention that these are just big, long, mega-notes in chronological order. So pretty easy to find things by visually scanning, and (rarely) searching.

1 month ago 2 0 0 0

It's just a single note for all cooking/baking stuff. So I can do Cmd+F or just visually scan. I don't try that many new things (maybe a few times a month) so it's pretty easy to find.

1 month ago 1 0 0 0
Raycast Notes search interface, with three notes highlighted with blue rectangles: 'Useful stuff’ (preview shows “Replacement parts for Roborock…”), 'Cool stuff’, and 'Cooked’ (preview shows “2026-02-12: Made gyoza again…”).


My 80/20, grug-brained personal productivity system:

- Cool stuff: URLs, books, movies, etc. I want to remember.
- Useful stuff: how/where/etc for things I do a few times a year.
- Cooked: for cooking/baking: what recipe (URL or book), any adjustments I made, how it turned out.

1 month ago 10 0 3 0

Oh, thanks, didn't know about that! Would have been eligible already on GitHub stars

1 month ago 0 0 0 0

Heh. Thanks! Hope this is a good thing :-)

1 month ago 1 0 0 0

TIL that everyone who installs the Vercel CLI now gets a copy of @ohmjs.org

1 month ago 19 1 3 0
Ahoy!

Hope your February has been swell. Here in southern Germany, it's been a relatively snowy winter…but it's finally starting to feel like spring.

First and foremost, we wanted to let you know that we published a new blog post last week, A WebAssembly interpreter (Part 2). In Part 1, we created a simple Wasm interpreter from scratch, but it was only able to evaluate expressions consisting of literals. In the latest post, we add support for local and global variables. Give it a look!

And here are your Wasm tidbits for February:
 • "WebCC is a lightweight, zero-dependency C++ toolchain and framework for building WebAssembly applications. It provides a direct, high-performance bridge between C++ and HTML5 APIs." And then there's Coi, "a modern, component-based language for building reactive web apps", which is built on WebCC. 
 • Marimo is an open-source reactive Python notebook; like Jupyter, but better in many ways (no hidden state, stored as pure Python files, …). And it also supports WebAssembly notebooks, powered by Pyodide; in other words, Wasm notebooks execute entirely in the browser, without a backend executing Python. 
 • Along the same lines: Pandoc for the People is a fully-featured GUI interface for Pandoc (probably the most-used Haskell program ever). It lets you run any kind of conversion that pandoc supports, without the documents ever leaving your computer. It's based on the recent Pandoc 3.9 release, which supports Wasm via the GHC wasm backend.


Is it the end of February already??

Yes, yes it is. Which means we just sent out our #Wasm tidbits.

Sign up here to get it in your inbox once a month (ish): sendfox.com/wasmgroundup

1 month ago 4 2 0 0
Video

Here's my first creature.

1 month ago 5 0 0 0
Programmierung künstlichen Lebens in Scratch

Lerne Programmieren und erstelle dein eigenes, interaktives, digitales Wesen. Zuerst entwirfst du deine Figur (auf Papier oder auf dem iPad). Dann lernst du, wie du ihr in Scratch Verhalten gibst. Lass sie über den Bildschirm laufen, nach Futter suchen, entscheiden, wann sie schlafen muss. Dein iPad wird zu einer virtuellen Welt!

Programming Artificial Life in Scratch
Learn programming and create your own interactive digital creature. First, you design your character (on paper or on the iPad). Then you learn how to give it behaviour in Scratch. Let it walk across the screen, search for food, decide when it needs to sleep. Your iPad becomes a virtual world!


Starting another Scratch course at my kids' (Montessori) school today.

A bit different this time — the theme is "artificial life". Taking some inspiration from @shiffman.lol's natureofcode.com

1 month ago 13 0 1 0
 * This library intercepts time at multiple levels to slow down (or speed up)
 * all animations on a web page.
 *
 * ## How it works:
 *
 * 1. **requestAnimationFrame patching**: We replace window.requestAnimationFrame
 *    with a wrapper that passes modified timestamps to callbacks. Time-based
 *    animations that use the timestamp parameter will automatically slow down.
 *
 * 2. **performance.now() patching**: We replace performance.now() to return
 *    virtual time. Libraries that use this for timing will be affected.
 *
 * 3. **Date.now() patching**: We replace Date.now() to return virtual epoch
 *    milliseconds. Libraries like Motion/Framer Motion use this for timing.
 *
 * 4. **setTimeout/setInterval patching**: We scale delays by inverse of speed
 *    so timed callbacks fire at the expected virtual time.
 *
 * 5. **Web Animations API**: We poll document.getAnimations() and modify the
 *    playbackRate of all Animation objects. This affects CSS animations,
 *    CSS transitions, and element.animate() calls.
 *
 * 6. **Media elements**: We set playbackRate on video/audio elements.
 *
 * ## Limitations:
 *
 * - Frame-based animations (that increment by a fixed amount per frame without
 *   using timestamps) cannot be smoothly slowed down.
 *
 * - Animations created by libraries that cache their own time references
 *   before we patch may not be affected. The Chrome extension runs at
 *   document_start to minimize this issue.


slowmo.dev by @seflless.bsky.social is pretty damn cool — "Slow down, pause, or speed up time of any web content."

Here's how it works.

1 month ago 7 0 0 0

You can see it here: github.com/wasmgroundup...

Ended up using mostly branded types, as that seemed to provide the best ergonomics.

1 month ago 1 1 0 0