That pushes people to tools like Cloudflare (who have the infrastructure to endure the storm). Which in turn makes Cloudflare a common point of failure (and why sites like Spotify, LinkedIn, Zoom, YouTube, Canva, etc have recently had outages at the same time).
Posts by Jordan Mele
Depends on the scale. DDoS attacks are tricky, as the flood of requests can be coming from thousands (or even millions) of machines.
Yes to both. Been at Canva 5 years.
Being 2 letters off bazelisk (github.com/bazelbuild/b...) and pronounced similarly will get pretty confusing fast. 😅
I'll let internal folks know. Be interesting to see how it fares at scale.
In-repo tooling is:
- Ruff (linting and formatting)
- mypy (type checks, deprecated)
- pyrefly (type checks)
- uv (dependencies)
In terms of IDE (VSCode), the language server extension was previously Pylance, now BasedPyright (preinstalled in devboxes). They both work and that's enough for me.
Dependencies are "global" in a sense, so virtual environments are common. That complicates IDE integrations.
For me personally (inexperienced as I am with the python ecosystem) this is a particular challenge.
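Part of why venvs complicate IDE integration is that "which environment am I in" has to be resolved per project. A minimal sketch of the standard detection trick (nothing project-specific assumed, just CPython 3.3+):

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the interpreter it was created
# from. IDEs and language servers resolve this per-project to find the
# right set of installed dependencies.
def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix
```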
Language servers are pretty hit and miss, especially since type annotations are ignored at runtime (static only, so tools like mypy, pyright, pyrefly, or ty are needed).
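A tiny illustration of the static-only point (hypothetical function, not from any particular codebase):

```python
def double(x: int) -> int:
    return x * 2

# Annotations aren't enforced at runtime: this call runs without complaint,
# even though mypy/pyright/pyrefly/ty would all flag the argument type.
result = double("oops")
print(result)  # "oopsoops" - str * int just repeats the string
```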
Projects like uv and Ruff are helping. uv in particular really improved dependency management.
I wouldn't describe myself as a python dev, but I do work with it. Mainly in a "build Infra" capacity.
That said, a well optimised site can get pretty low already. Be interesting to hear about the challenges that motivate adding this kind of complexity to the web platform.
As for out of order streaming of HTML... It seems like a good (and perhaps even overdue) primitive. One not reliant on JS that should drive down TTFB and (subject to the implementation) TTI.
Being able to apply non-layout styling to arbitrary ranges (without JS) would be handy.
Caching maybe?
One way to check would be building after a change, then reverting that change and building again. Comparing times as you go (assuming the margin for error isn't larger than the difference).
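A rough sketch of that measurement loop (the build command is a placeholder; swap in your real tool, e.g. bazel or make):

```python
import statistics
import subprocess
import time

BUILD_CMD = ["make", "build"]  # placeholder: use your actual build invocation

def timed_build(cmd=BUILD_CMD, runs=3):
    """Median wall time over a few runs, to dampen one-off noise."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def meaningful(delta: float, noise: float) -> bool:
    """Only trust a delta larger than the run-to-run spread."""
    return abs(delta) > noise
```

Time a build with the change applied, revert (e.g. `git stash`), time again, then compare the two medians against the spread you observed between runs.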
Go (a sanctioned language) was in place before Rust and my team gave it a try. It didn’t get much traction, and the error handling was a common source of mistakes.
Jump to 2025 and there are no sanctioned languages. Just need to make a good case for inclusion.
Pretty much.
It first showed up in 2019, used by a single team, and went mostly unnoticed.
In 2023 support was added to Bazel.
In 2024 my team picked it up (mainly for build and devx things). It took off from there and is now finding usage across our stack.
I’ve been in Atlanta GA this week to speak at #bazelcon (also first time speaking on stage 😓).
On-demand recording is at youtu.be/ZBYWI4vdeco
Stockout in us-east-1.
Taking a jab at AWS there but same-same for GCP’s us-east1.
There is definitely an art to getting anything useful out of it. Same can be said for most places reviews can be left.
No arguments that most obsolescence nowadays is by design. Not much is gained when new-gen hardware just accommodates regressions in software efficiency (ignoring use cases like local AI and gaming); without those regressions it feels like you’d at least get something decent out of it. Like a noticeably faster system.
Not surprised to hear ASUS did that. I understand their warranty process had been (and maybe still is) acting in bad faith. Reputation wise, it fits.
That pending diagnosis turned out to be a MB failure (chip on the board burned out). Lasted 10 years, so not too bad. Thankfully all other parts are fine, so I can turn what remains into an AM4 build for relatively little vs. going all new.
GPU? Market has been weird ever since crypto mining pushed everything up. Whatever says it supports your display count and resolutions should meet your needs. NVIDIA GeForce, Intel Arc, AMD Radeon, pick your poison, though AMD has historically had driver issues (currently stable for me).
Not much I can say for storage beyond: take a look at ReFS if you haven’t already (with backups). It’s supposed to have several tricks around IO; could help, but benchmarking would be needed to confirm.
Continuing CPU, for any custom data processors you’ve got (not SQL Server) it’s worth seeing if they can take advantage of SIMD extensions like AVX512. For relevant workloads they can slash wall time.
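As a hedged illustration of what SIMD buys you (assuming NumPy, whose compiled reductions dispatch to SIMD instructions such as AVX-512 where the CPU supports them), the same reduction looped vs. vectorised:

```python
import numpy as np

def total_loop(values):
    # One element per iteration; the interpreter can't vectorise this.
    acc = 0.0
    for v in values:
        acc += v
    return acc

def total_vectorised(values):
    # NumPy's sum runs a compiled reduction that uses SIMD when available,
    # often several times faster on large inputs.
    return float(np.asarray(values, dtype=np.float64).sum())
```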
For CPU, more cores (threadripper) is probably more important than L3 cache (x3d). Provided SQL Server is actually using them and not bottlenecked on bandwidth. Behaviour of your current rig may provide some insight there.
For memory, I’m pretty sure G.Skill is selling the fastest sticks (if you can find them). Dual vs quad channel MB configuration is a factor in bandwidth, but not one I’m familiar with.
Damn. TPM 2.0 was fine on all mine for W11, CPU generation cutoff is what caught them all. Real shame since they are all more than capable.
😔 There is gonna be so much eWaste over the next few months/years with 10 reaching end of support.
Brand is probably a factor, though I suspect environment is too (cat fur; I’ve had a lot fewer issues since moving).
My luck with motherboards is worse. 1 RMA (BIOS crash-loop), 2 post-warranty (same issue, but with workaround), and possibly one more pending diagnosis. All GIGABYTE brand.