Charlie Marsh charliemarsh 🔥 popular repo java-ml
ah yes, @charliermarsh best known for his java-ml repo
Bun v1.3.13 ships tomorrow.
Many memory usage & reliability improvements.
It’s not precisely accurate that we stream tarballs to disk. They are extracted in-memory to a temp folder; no tarball is saved.
Previously, Bun didn’t start extracting until the entire tarball finished downloading, so compressed and uncompressed tarballs stayed in memory too long.
[Chart: bun install peak memory, streaming tarball extraction. Before: 3.44 GB, after: 0.2 GB (17× less memory). Repo with large packages, macOS arm64]
In the next version of Bun
bun install streams tarballs to disk
In a large repo, this reduced memory by 17x
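The memory win can be illustrated with a toy model (not Bun’s actual code): buffering the whole download before extracting keeps everything resident at once, while processing each chunk as it arrives keeps only one chunk in memory. Here `decode` and `writeToDisk` are hypothetical stand-ins for gunzip + tar extraction and file writes.

```javascript
// Toy model of buffered vs streaming extraction (illustrative only).
function bufferThenExtract(chunks, decode, writeToDisk) {
  const whole = Buffer.concat(chunks); // entire download resident at once
  writeToDisk(decode(whole));
  return whole.length; // peak buffered bytes
}

function streamExtract(chunks, decode, writeToDisk) {
  let peak = 0;
  for (const chunk of chunks) {
    peak = Math.max(peak, chunk.length); // only one chunk resident at a time
    writeToDisk(decode(chunk));
  }
  return peak;
}
```

With fixed-size network chunks, peak buffered bytes drops from the full artifact size to a single chunk, which is the general shape of the improvement.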
So this is effectively a lazy in-memory bytecode cache
The vast majority of what makes test isolation slow is time spent parsing modules. Not evaluation. Parsing. That’s why this works so well.
JavaScriptCore has both an input source code cache and an in-memory bytecode cache. Since JavaScript is lazily parsed, both are needed, and we need to make sure the cache is hit every time.
To do that, we add another cache that lets us skip the transpiler & disk reads
The trick here is adding one more level on top of JavaScriptCore’s multi-level code caching, and serializing ESM exports/imports.
The main tradeoff: after compression, it’s about 20% larger than the compressed VLQ-encoded equivalent. Fortunately, compressing sourcemaps is unnecessary for server-side JavaScript.
Decoding now costs close to 0. Lookups cost about 6% more. Encoding gets faster.
_tsc.js (563k mappings), resident after first .stack:
  Mapping.List (main)         ~11.3 MB  (20 B/mapping)
  LEB128 stream (commit 1)     2.92 MB  (5.4 B/mapping)
  bit-packed windows (this)    1.29 MB  (2.41 B/mapping)
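For context on the “LEB128 stream” row: LEB128 is a variable-length integer encoding where each byte carries 7 payload bits plus a continuation bit, so small mapping deltas take one byte instead of a fixed-width field. A minimal unsigned version (illustrative; Bun’s bit-packed format is different and tighter):

```javascript
// Encode an unsigned integer as LEB128 bytes (7 bits per byte, high bit = "more follows").
function leb128Encode(value, out = []) {
  do {
    let byte = value & 0x7f;
    value >>>= 7;
    if (value !== 0) byte |= 0x80; // continuation bit
    out.push(byte);
  } while (value !== 0);
  return out;
}

// Decode one LEB128 value starting at pos; returns [value, nextPos].
function leb128Decode(bytes, pos = 0) {
  let result = 0, shift = 0, byte;
  do {
    byte = bytes[pos++];
    result |= (byte & 0x7f) << shift;
    shift += 7;
  } while (byte & 0x80);
  return [result >>> 0, pos];
}
```

Values up to 127 fit in one byte, which is roughly the effect behind the drop from 20 B/mapping to 5.4 B/mapping in the table above.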
In the next version of Bun
Source maps use up to 8x less memory
Quote Tweet: https://twitter.com/i/status/2044686990051045540
People have been asking for this for years.
It wasn't clear how to make it performant enough to be shippable until earlier today
Quote Tweet: https://twitter.com/i/status/2020373504189952508
Mar 6: 167 lbs ( -12% )
Apr 14: 156 lbs ( -19% )
close to goal weight
if you’re choosing > 250 attributes for one span
please explain & give an example
me asking can the bits be packed to reduce memory usage or must it be slow & huge
what’s the max length of individual attribute names and values you’ve used in otel?
what’s the max number of attributes you have personally used at work in OpenTelemetry spans?
Bun v1.3.12 ships tonight
her: wait how do you track the calories
me: believe it or not, also claude. I send it a pic of the menu
I’m such a shill (2/2)
me at nopa on a first date
her: how’d you pick this restaurant?
me: I asked claude
[waitress compliments her pants, clothes come up]
her: where do you shop?
me: lost a bunch of weight, had to rebuy everything. sent claude a pic of me and it picked
[few min later, looking at menu] (1/2)
love seeing @elysiaJS double weekly npm downloads since December
Quote Tweet: https://twitter.com/i/status/2040257344404410431
claude, add a markdown pretty printer for terminals to Bun & don't make any mistakes
Bun v1.3.12 ships on Monday
In the next version of Bun
Threadpool & JIT threads now respect cgroup CPU limits instead of physical cores. This improves resource utilization in Docker & k8s
https://github.com/oven-sh/bun/pull/28801
earthquake
PR & benchmarks
https://github.com/oven-sh/bun/pull/28767
┌────────────────────────────────────────┬─────────┬─────────┬─────────┬──────┬────────┐
│ Benchmark                              │ v1.3.10 │ v1.3.12 │ npm     │ Δ    │ vs npm │
├────────────────────────────────────────┼─────────┼─────────┼─────────┼──────┼────────┤
│ stringWidth: UTF-16 hyperlink (440 KB) │ 2.00 ms │ 180 µs  │ 743 µs  │ 11×  │ 4×     │
├────────────────────────────────────────┼─────────┼─────────┼─────────┼──────┼────────┤
│ stripANSI: 1 KB plain text             │ 65 ns   │ 17 ns   │ 32 ns   │ 3.9× │ 1.9×   │
├────────────────────────────────────────┼─────────┼─────────┼─────────┼──────┼────────┤
│ stripANSI: OSC 8 hyperlink             │ 60 ns   │ 45 ns   │ 66 ns   │ 1.3× │ 1.5×   │
├────────────────────────────────────────┼─────────┼─────────┼─────────┼──────┼────────┤
│ stringWidth: ANSI hyperlink (445 KB)   │ 135 µs  │ 120 µs  │ 1.38 ms │ 1.1× │ 11×    │
├────────────────────────────────────────┼─────────┼─────────┼─────────┼──────┼────────┤
│ stripANSI: bash out
In the next version of Bun
`Bun.stringWidth` gets up to 11x faster
`Bun.stripANSI` gets up to 3.9x faster
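For the rough semantics without Bun: stripping means deleting SGR color/style sequences and OSC 8 hyperlink wrappers so only printable text remains. A simplified regex sketch (my own, not how Bun implements it, and it doesn’t cover every escape class):

```javascript
// Matches SGR sequences (ESC [ ... m) and OSC 8 hyperlinks
// (ESC ] 8 ;; ... terminated by BEL or ESC \). Simplified on purpose.
const ANSI_RE = /\x1b\[[0-9;]*m|\x1b\]8;;[^\x07\x1b]*(?:\x07|\x1b\\)/g;

function stripANSI(text) {
  return text.replace(ANSI_RE, "");
}
```

The OSC 8 benchmark rows above measure exactly this kind of input: hyperlink escapes wrapping visible link text.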
guess what’s coming in bun v1.4
Previously, async stacktraces were only supported for JS functions. Now they work for both natively implemented functions and JS functions.
❯ bun-new a.js  # New
1 | import fs from "node:fs";
2 |
3 | const p = "fake";
4 |
5 | export async function main() {
6 |   await fs.promises.lstat(p);
                        ^
ENOENT: no such file or directory, lstat 'fake'
 path: "fake",
 syscall: "lstat",
 errno: -2,
 code: "ENOENT"

      at async main (/Users/jarred/a.js:6:21)

Bun v1.3.11-debug+5b14b04a8 (macOS arm64)

~
❯ bun a.js  # Previous
ENOENT: no such file or directory, lstat 'fake'
 path: "fake",
 syscall: "lstat",
 errno: -2,
 code: "ENOENT"

Bun v1.3.11-canary.1+2e610b140 (macOS arm64)
In the next version of Bun
Async stacktraces are supported on native APIs like node:fs, Bun.write, node:http, node:dns & more.
This makes debugging easier
The curmudgeons hating on it as nothing new are proven wrong by all the cool stuff people are building.
If it had been easy before, typography on the web wouldn’t be so uniform, and nobody would’ve cared enough to try Cheng’s library.