A maze that exists only as a data structure in memory is a bit useless. We need some way to make it legible to human beings. So I’ve written an article that addresses how we do that.
jrsinclair.com/articles/2025/rendering-...
Posts by James Sinclair
I felt like writing about something fun. So I wrote an article about creating mazes with JavaScript. Things got out of hand and it grew to two articles. Second one will be published soon.
jrsinclair.com/articles/202...
Thinking about this some more, I do wonder if explore-expand-extract tracks as a specific application of Dave Snowden’s Cynefin framework.
I can't believe I haven't come across this talk by @kentbeck.com before. It has so much explanatory power.
www.youtube.com/watch?v=Wazq...
@charity.wtf talks a lot of sense, as usual:
> [10x Engineers exist] So what? It doesn’t matter. […] What matters is how fast the team can collectively write, test, review, ship, maintain, refactor, extend, architect, and revise the software that they own.
charity.wtf/2025/06/19/i...
I wrote a thing about all the ways you can summon a function in JavaScript. It even includes a flow chart to help you check whether you're picking a suitable incantation.
jrsinclair.com/articles/202...
What do you think? Does this theory make sense, or am I simply defending my biases?
As a side effect of this structured thinking process, you also generate automated tests that provide immediate feedback on your progress. And if you're following _all_ the steps (red, green, refactor), the code improves with every iteration.
It’s because that initial investment of self-discipline pays off. And the return on investment is huge. Problems are easier to solve if you are clear and specific about what the issue actually is.
And that’s not all.
Instead, TDD forces you to be very specific upfront about _what_ you want to achieve. And that feels like less fun and more effort than diving in and coding a solution. It takes mental effort.
Why, then, do some people love TDD so much?
I have a theory. People struggle with TDD because it feels like hard work. This is because many of us code to think. We're assigned a task, and we start hacking away in the IDE. The solution languidly emerges as we learn more by writing code.
But TDD won’t let you do that.
What would you use these for? Well, if you have large arrays, they can (sometimes) be a more efficient alternative to `.filter()` if you know the data you want is at the start (or the end) of the array. They also come in handy when writing parsers.
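For instance, here's a rough sketch (hypothetical code, with an isDigit predicate I've made up) of how these helpers might pull a token off a character stream when parsing:

```javascript
// takeWhile() and dropWhile(), with a guard for the case
// where every element matches the predicate.
const takeWhile = (pred, arr) => {
  const idx = arr.findIndex((x) => !pred(x));
  return idx === -1 ? [...arr] : arr.slice(0, idx);
};

const dropWhile = (pred, arr) => {
  const idx = arr.findIndex((x) => !pred(x));
  return idx === -1 ? [] : arr.slice(idx);
};

// Hypothetical parsing example: pull a leading integer
// token off a character stream.
const isDigit = (c) => c >= '0' && c <= '9';
const chars = [...'123+456'];

const token = takeWhile(isDigit, chars).join('');
const rest = dropWhile(isDigit, chars);

console.log(token); // 🪵 '123'
console.log(rest);  // 🪵 ['+', '4', '5', '6']
```

The nice part is that takeWhile() gives you the token and dropWhile() gives you the remaining input to keep parsing.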
Screenshot of some JavaScript code. The code is as follows:

const dropWhile = (pred, arr) => {
  const idx = arr.findIndex((x) => !pred(x));
  return idx === -1 ? [] : arr.slice(idx);
};

console.log(dropWhile(isVowel, ['a', 'e', 'i', 'o', 'u', 'b']));
// 🪵 ['b']
The second, dropWhile(), does the opposite. It traverses your array and ignores items until a predicate returns false. Then it will give you the rest of the array.
Screenshot of JavaScript code. The code is as follows:

const isVowel = (c) => ['a', 'e', 'i', 'o', 'u'].includes(c);

const takeWhile = (pred, arr) => {
  const idx = arr.findIndex((x) => !pred(x));
  return idx === -1 ? [...arr] : arr.slice(0, idx);
};

console.log(takeWhile(isVowel, ['a', 'e', 'i', 'o', 'u', 'b']));
// 🪵 ['a', 'e', 'i', 'o', 'u']
The first, takeWhile(), traverses your array and keeps adding items to a new array until a predicate returns false.
A couple of array utilities you won't find in Array.prototype 🧵
A quote by John Ousterhout, from the book 'A Philosophy of Software Design'. “Most modules have more users than developers, so it is better for the developers to suffer than the users. As a module developer, you should strive to make life as easy as possible for the users of your module, even if that means extra work for you. Another way of expressing this idea is that it is more important for a module to have a simple interface than a simple implementation.”
In some ways, this is so obvious it shouldn't need stating. But it does. And the tension is real. We want our code to be elegant, simple, concise. Yet, we rarely trade that off against the complexity that elegance creates in the interfaces (user interfaces and APIs) we develop.
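To make the trade-off concrete, here's a small hypothetical example of taking on implementation complexity so the interface stays simple:

```javascript
// Hypothetical illustration of Ousterhout's point: the
// implementation absorbs the messy cases so every caller
// gets a dead-simple interface.
const getLines = (text) => {
  // Extra work here: normalise Windows line endings and
  // drop a trailing newline, so callers never have to
  // think about either case.
  const normalised = text.replace(/\r\n/g, '\n');
  const trimmed = normalised.endsWith('\n')
    ? normalised.slice(0, -1)
    : normalised;
  return trimmed === '' ? [] : trimmed.split('\n');
};

console.log(getLines('a\r\nb\n')); // 🪵 ['a', 'b']
console.log(getLines(''));         // 🪵 []
```

The implementation is fussier than a bare `.split('\n')`, but every call site gets simpler.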
From the archives: jrsinclair.com/articles/201...
Functional programmers are obsessed with purity. “Pure functions let you reason about your code”. “They give you referential transparency!” And they have a point. Purity is good. But what do you do with the impure bits of your code?
// With our Map-based approach, we essentially reimplemented
// the Set data structure. We might as well use the Set
// constructor directly.
const uniq = (xs) => [...new Set(xs)];

const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits));
// 🪵 ['apple', 'banana', 'orange']
All we've really done with the third option, though, is re-implement the Set structure from scratch. So, we might as well use that. The most efficient option also happens to be the most concise.
// To avoid traversing the array more than we need to,
// we can use a Map to keep track of the elements we've
// seen so far. This way we only traverse the array once.
const uniq = (xs) => Array.from(
  xs.reduce((m, x) => m.set(x, true), new Map()).keys()
);

const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits));
// 🪵 ['apple', 'banana', 'orange']
To avoid traversing the array more than we need to, we can use a Map to keep track of the elements we've seen so far. This way we only traverse the array once. Once we've been through the list, we return the keys of the Map as an array.
// A more efficient approach is to use .indexOf() to
// check if the current element is the first occurrence
// of that element in the array.
const uniq = (xs) => xs.filter(
  (x, i) => xs.indexOf(x) === i
);

const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits));
// 🪵 ['apple', 'banana', 'orange']
A (slightly) more efficient approach is to use .indexOf() to check if the current element is the first occurrence of that element in the array. But it still traverses the array many more times than it needs to.
// Naïve solution. This is inefficient because it
// traverses the array multiple times, and creates
// lots of new intermediate arrays. It works, though.
const uniq = (xs) => xs.filter(
  (x, i) => !xs.slice(0, i).includes(x)
);

const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits));
// 🪵 ['apple', 'banana', 'orange']
Our first approach uses a filter and `.includes()`. This is inefficient because it traverses the array multiple times, and creates lots of new intermediate arrays. It works, though.
Four ways to remove duplicates from an array in JavaScript. 🧵
P.S. Use the last one.
Screenshot of a code snippet showing the use of an array tuple as a Maybe construct. Code reads as follows:

// Suppose we want to retrieve the weather conditions
// for a given location. Perhaps we've retrieved some
// data from a weather API and we need to process it.
const sensors = {
  ['Dartmoor']: { value: 10, unit: 'C' },
  ['Baker Street']: { value: 18, unit: 'C' },
  ['Scotland Yard']: { value: 68, unit: 'F' },
};

const conditions = {
  ['Dartmoor']: 'Foggy',
  ['Baker Street']: 'Sunny',
  ['Scotland Yard']: 'Rainy',
};

const toCelsius = (fahrenheit) => (fahrenheit - 32) * (5 / 9);

// Here we define a function that wraps our value in an
// array if it's not nullish. We can then use .map(),
// .flatMap(), .reduce() etc. to safely handle the
// nullish case.
const maybe = (value) => value == null ? [] : [value];

const getWeather = (location) => maybe(sensors[location])
  .map(({ unit, value }) => unit === 'F' ? toCelsius(value) : value)
  .flatMap(
    (temperature) => maybe(conditions[location])
      .map((condition) => ({ temperature, condition }))
  )
  .map(({ temperature, condition }) =>
    `${location}: ${temperature}°C, ${condition}`)
  .reduce((_, x) => x, 'Weather conditions not available');

console.log(getWeather('Dartmoor'));
// 🪵 'Dartmoor: 10°C, Foggy'
console.log(getWeather('Scotland Yard'));
// 🪵 'Scotland Yard: 20°C, Rainy'
console.log(getWeather('Diogenes Club'));
// 🪵 'Weather conditions not available'
Did you know that you can use an array tuple in place of a Maybe structure? It elegantly handles empty values using familiar `.map()`, `.flatMap()`, and `.reduce()` methods. I think it's rather neat. Credit goes to @jmsfbs@pixelfed.social for introducing me to the idea.
Ousterhout argues that general purpose code tends to be simpler. It doesn't need to handle lots of special cases with copious if-statements and control structures. This kind of code handles those cases "ya ain't gonna need" _without any modification_.
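As a rough sketch of that idea (hypothetical code, loosely inspired by Ousterhout's text-editor example), one general-purpose delete operation covers cases that would otherwise each need their own special-cased method:

```javascript
// A general-purpose delete: remove the characters between
// two positions. No special cases inside.
const del = (text, start, end) =>
  text.slice(0, start) + text.slice(end);

// The "special" cases fall out as one-liners, with no
// modification to del() itself.
const backspace = (text, cursor) => del(text, cursor - 1, cursor);
const deleteKey = (text, cursor) => del(text, cursor, cursor + 1);

console.log(backspace('maze', 4)); // 🪵 'maz'
console.log(deleteKey('maze', 0)); // 🪵 'aze'
```

Cut, select-and-delete, and so on would compose out of the same primitive.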
In case you haven't come across it, YAGNI stands for "ya ain't gonna need it." The idea is that we want to avoid over-complicating things to solve problems we may never encounter. Applied carelessly, though, YAGNI can leave you with an over-specialized design.
I love this quote from John Ousterhout:
> I have found over and over that specialization leads to complexity; I now think that over-specialization may be the single greatest cause of complexity in software.
On the surface, it appears to contradict YAGNI, but not necessarily.
Screenshot of some JavaScript code showing two variables being swapped using array destructuring. The code reads as follows:

let a = 'left';
let b = 'right';
console.log([a, b]); // 🪵 ["left", "right"]

// Swap two variables with array destructuring.
[b, a] = [a, b];

console.log([a, b]); // 🪵 ["right", "left"]
It's old news now, but swapping variables with destructuring still blows my mind every time I see it.
You’re right, it’s sad that you can’t generally assume that people know that ?? is available when they use || for setting defaults. Knowledge about new language features takes some time to disperse.
There’s nothing wrong with using || if it suits your use case. As you suggest, if you’re dealing with user-supplied data, that’s a slightly different use case to one where a developer omits an optional config parameter.
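To sketch that distinction (with hypothetical displayName and retryCount functions): for user-supplied text, an empty string often should fall back, so || is the right tool; for an optional config value, only null/undefined should:

```javascript
// Hypothetical example. User-supplied data: an empty string
// should fall back, so || suits this use case.
const displayName = (userInput) => userInput || 'Anonymous';

// Optional config: only null/undefined should fall back,
// so ?? is the better choice here.
const retryCount = (config) => config.retries ?? 3;

console.log(displayName(''));            // 🪵 'Anonymous'
console.log(retryCount({}));             // 🪵 3
console.log(retryCount({ retries: 0 })); // 🪵 0 (|| would wrongly give 3)
```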
Screenshot of JavaScript code illustrating use of the nullish coalescing operator. The code reads as follows:

function doSomething(configParam) {
  const config = configParam || DEFAULT_VAL; // ❌
  // Rest of code
}

// The code above breaks if configParam is a falsy
// primitive (rather than an object). For example:
'' || 'Unexpected'    // ➡︎ 'Unexpected'
0 || 'Unexpected'     // ➡︎ 'Unexpected'
false || 'Unexpected' // ➡︎ 'Unexpected'

// Instead, use the ?? operator
function doSomething(configParam) {
  const config = configParam ?? DEFAULT_VAL; // ✅
  // Rest of code
}
I still see lots of people setting default values in JS using the || operator. This is fine if the values you're dealing with are always objects. But if you deal with numbers, booleans or strings, this can be problematic. In most cases, the ?? operator is a better choice.