Debt is a tradeoff with the future, and ideas come back around when the tradeoff shifts.
Always ask: what problem are we trying to solve, and what's the tradeoff? Be curious about the past and the present. When assumptions change, look for the opportunities
#QConLondon
Developer hiring is on the up, even though the UK labor market is on the decline.
Not all roses, though: sleep debt is now a problem. In 1826, the average work week was 66 hours, the highest ever in the UK, at the peak of industrial revolution productivity gains.
#QConLondon
Taking a dig at COBOL now
Jevons paradox: making something more efficient makes it used more. Software is not going away, unlike other professions obsoleted by technology, like knocker-ups. The more software we have, the more we want.
#QConLondon
Now we're seeing enormous investments in centralized AI. Apple is instead licensing AI models for a "fraction of the cost it would take to run a datacenter" and putting AI capabilities in its hardware. It then sells that hardware to us, and we can run AI computations on our own decentralized devices.
#QConLondon
This matters because the digital world creates more carbon emissions than aviation. Data centers alone (excluding network traffic) use about as much electricity as South Korea.
Green energy helps, but it can't be the whole solution. We also need to reduce tech energy consumption
#QConLondon
----
Closing keynote: "The Free-Lunch Guide to Idea Circularity", @hollycummins.com, #QConLondon
In 1858, the Thames was an open-air sewer, and a hot summer led to the "Great Stink". Parliament couldn't use its new building. A lot of money was invested in embankments and pumping stations.
Now a weird JIT trap:
```lua
while i > 0 do
x = ...
i = i - 1
end
print(x)
```
If i starts < 0, it's possible to start tracing but then never stop it, and trace way past the useful point.
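A toy sketch of the trap (illustrative only, not yk's actual machinery; all names invented): tracing starts at a hot loop header and stops only when execution reaches that header again. If the loop exits while a trace is open, recording runs on past the loop.

```python
HOT_THRESHOLD = 2

def run(i):
    # Toy program: pc 0 = loop header, pc 1 = loop body, then code
    # after the loop. The tracer records from a hot header until the
    # header is reached again.
    after_loop = ["print_x", "cleanup", "unrelated_code"]
    header_count = 0
    trace = None                      # None = not currently tracing
    pc = 0
    while True:
        if pc == 0:                   # loop header: while i > 0
            header_count += 1
            if trace is not None:     # back at the header: trace closed
                return ("closed", trace)
            if header_count >= HOT_THRESHOLD:
                trace = []            # header is hot: start recording
            pc = 1 if i > 0 else 2
        elif pc == 1:                 # loop body
            if trace is not None:
                trace.append("body")
            i -= 1
            pc = 0                    # back-edge to the header
        else:                         # code after the loop
            if pc - 2 == len(after_loop):
                return ("ran_off_end", trace)
            if trace is not None:
                trace.append(after_loop[pc - 2])
            pc += 1

print(run(10))   # ('closed', ['body'])
print(run(1))    # the loop exits just as it gets hot: the open trace
                 # records everything after the loop instead
```

With a long-running loop the trace closes cleanly; with a loop that exits right as it becomes hot, the recorder sails past the loop end.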
#QConLondon
So how does yk optimize a program?
1. Inlining
2. Standard(ish) compiler optimisations
3. Interpreter hints, like "this function is idempotent". Hints like these are why yklua was so much faster than lua
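A rough illustration of what such a hint buys (names invented, not yk's real annotation API): if the interpreter author marks a function idempotent, a trace optimiser can elide a repeated call with the same argument, since the hint promises the result depends only on the arguments.

```python
# Hypothetical hint set: functions the interpreter author promised are
# idempotent (result depends only on the arguments, no side effects).
IDEMPOTENT = {"normalize"}

def optimise(trace):
    # Drop repeat calls to hinted functions when the argument repeats.
    seen = set()
    out = []
    for op, fn, arg in trace:
        if op == "call" and fn in IDEMPOTENT:
            if (fn, arg) in seen:
                continue              # same call, same arg: elide it
            seen.add((fn, arg))
        out.append((op, fn, arg))
    return out

trace = [("call", "normalize", "x"),
         ("call", "step", "x"),
         ("call", "normalize", "x")]  # duplicate: safe to elide
print(optimise(trace))                # only the first two calls remain
```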
Now looking at Lua's OP_ADDI. Take a program that increments by 64, 500k times.
#QConLondon
Tracing: manually record hot loops at run-time
Meta-tracing: record the interpreter executing loops at run-time.
"This is so weird I will look at this from a couple of different directions and hope one makes sense to you."
1: C is AOT (ahead-of-time) compiled with ykllvm to make an exe.
#QConLondon
Often you can eke out some extra performance by dropping in a faster language implementation. PyPy is 3-4x faster than CPython.
...There are at least 16 JIT compilers for Python. Almost all are dead.
JITs are *hard*. And expensive. And often incompatible with mainstream implementations
#QConLondon
-----
"Automatically Retrofitting JIT Compilers", @ltratt.bsky.social, #QConLondon
About taking existing language implementations and automatically generating just-in-time compilers for improved performance.
Demoing a Mandelbrot in Lua, which takes 3.2 seconds on the standard implementation.
LLMs make code so cheap you can use duplication as a feature, not a bug.
Still need to make a scalable system, and to make both players and creators happy.
Surprising issue: engineers burning $30k on tokens.
Seeing a lot more prototyping of *board games*, interestingly enough
#QConLondon
[Screen vibrating like crazy]
New kind of game: AI integrated into the game at all layers, making for unique and unpredictable experiences. Raises new problems.
Making games is extremely high-risk. Now engineers are no longer blocked by artists, and artists aren't blocked by programmers. Iteration.
#QConLondon
----
"AI Driven Game Creation", Danielle An, #QConLondon
AI is changing how games are being made. Going to show demos of breakthroughs vibecoded in the last week. People will play the demos live. Then, all the new problems we've gained.
[Screen font is real small, may not be able to read everything]
Of note: zipping is really hard with just pull or just push.
Going from a serial node to parallel requires a distributor, which is double-ended (1 distributed to N distributors). Uses a consume token to prevent task stealing [I think]. Global effect, not an actor model.
#QConLondon
At QCon London 2026, Ethan Brierley reframed lifetimes not as spans of code, but as sets of loans, a perspective drawn from Polonius, Rust’s experimental borrow checker.
#rustlang #rust #programming #qconlondon www.infoq.com/news/2026/03...
For streaming operations (data flows in, gets processed, flows out), use Tokio for async I/O and a custom executor for CPU scheduling. (Admittedly unnecessary; Tokio can spawn two executors.) Fixed pipelines with channels, similar to an actor system.
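The fixed-pipeline-with-channels shape, sketched with asyncio queues standing in for Tokio channels (the stages and their transform functions are invented for illustration):

```python
import asyncio

async def stage(fn, inbox, outbox):
    # One pipeline stage: pull from the inbound channel, transform,
    # push to the outbound channel. None is the shutdown sentinel.
    while True:
        item = await inbox.get()
        if item is None:
            await outbox.put(None)
            return
        await outbox.put(fn(item))

async def main(items):
    a, b, c = asyncio.Queue(), asyncio.Queue(), asyncio.Queue()
    tasks = [
        asyncio.create_task(stage(lambda x: x * 2, a, b)),  # stage 1
        asyncio.create_task(stage(lambda x: x + 1, b, c)),  # stage 2
    ]
    for item in items:
        await a.put(item)
    await a.put(None)                 # signal end of stream
    results = []
    while (item := await c.get()) is not None:
        results.append(item)
    await asyncio.gather(*tasks)
    return results

print(asyncio.run(main([1, 2, 3])))   # [3, 5, 7]
```

Each stage owns its loop and talks only through channels, which is what makes the shape feel like an actor system.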
#QConLondon
Slide from a conference talk: BUILD SOMETHING WORTH BUILDING
I'M KEEPING → Empowered product teams → Fast feedback with real users → UX Research & Usability Testing
I'M TRASHING → The two-pizza team of 6-8 developers → My 6 month roadmap
I'M TRYING → Forward deployed engineers → Product engineers → Prototype before product → Smaller teams
@hannahfoxwell.net is moving from the 2-pizza team to the tapas team. #QConLondon
Async/Await effectively abstracts away state machines.
By Rust convention, the executor has `spawn` as an entry point, which takes a future and returns a JoinHandle (itself a future). This provides parallelism. Can implement mutexes, semaphores, channels, barriers, waitgroups, and joinsets.
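The asyncio analogue of that convention (same shape, different ecosystem): `create_task` hands a future to the executor and returns a handle you can await, much like a `JoinHandle`.

```python
import asyncio

async def work(n):
    await asyncio.sleep(0)   # a yield point: lets sibling tasks run
    return n * n

async def main():
    # "spawn": hand each future to the executor, keep the handles.
    handles = [asyncio.create_task(work(n)) for n in range(4)]
    # "join": awaiting a handle gives that task's result.
    return [await h for h in handles]

print(asyncio.run(main()))   # [0, 1, 4, 9]
```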
#QConLondon
----
"Using Async/Await for Computational Scheduling", Orson Peters, #QConLondon
Most people use async/await for I/O and networking, and that's what LLMs suggest you use it for. This talk is about using it for CPU-intensive work.
Async is effectively user-level cooperative multitasking.
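A minimal illustration of that idea, with plain generators as the coroutines and a round-robin loop playing executor: each task runs until it voluntarily yields control.

```python
from collections import deque

def task(name, steps, log):
    # A cooperative task: does one step of work, then yields control.
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield

def run_all(tasks):
    # A tiny round-robin "executor": resume each task until its next
    # yield, and requeue it if it isn't finished.
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)
            queue.append(t)
        except StopIteration:
            pass

log = []
run_all([task("a", 2, log), task("b", 2, log)])
print(log)   # ['a:0', 'b:0', 'a:1', 'b:1']
```

No preemption anywhere: the interleaving exists only because each task hands control back at its yield points.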
#QConLondon Keynote: How do you keep learning when the tools keep changing, and the stakes feel high? 🧠
Laura Savino, Software Engineer @Adobe, dives into learning out loud under pressure, with practical reframes to lower the fear of failure & shift from “knower” to “learner.”
#EngineeringCulture
☕ Day 3 at QCon London 2026!
Today’s themes: architecture in the age of AI, team/org shifts, resilience, performance optimization, and frontend/mobile trends.
#QConLondon #SoftwareArchitecture #SoftwareEngineering
Welcome to day three of #QConLondon! Doing last minute touchups for my talk at 10:30, so probably not going to see the keynote. Will livepost again once I'm back to watching talks
MEMORY SAFETY TO COMPARTMENTALIZATION:
Protection domain is the transitive closure of everything reachable in the address space from a pointer [register file?].
Possible for two processes to share an address space, but safely. Like pipes, but much, much faster.
#QConLondon
That's spatial memory safety. The hard problem is temporal memory safety: how do we make sure we don't use deallocated pointers? The traditional high-level solution is garbage collection. CHERI does the dual: "if the object goes away, all pointers to it go away".
CHERIoT has "shadow memory": one bit per 8 bytes.
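A toy model of the shadow-memory bookkeeping (one bit per 8-byte granule; this sketches the idea only, not CHERIoT's actual revocation mechanism):

```python
GRANULE = 8
shadow = set()     # indices of 8-byte granules whose memory was freed

def free(addr, size):
    # Mark every granule the freed object touched.
    for g in range(addr // GRANULE, (addr + size + GRANULE - 1) // GRANULE):
        shadow.add(g)

def load(addr):
    # A load through a pointer into freed memory traps.
    if addr // GRANULE in shadow:
        raise MemoryError("use-after-free: object went away, pointer revoked")
    return "ok"

print(load(0x1000))   # ok: object still live
free(0x1000, 16)
try:
    load(0x1008)      # inside the freed region
except MemoryError as e:
    print(e)          # the dangling pointer traps instead of reading
```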
#QConLondon
A CHERI "capability" comprises:
1. an address
2. address bounds (accesses outside bounds will trap), stored as offsets (for space reasons)
3. permissions, like load/store/jump. Possible to have a pointer you can load from but not store to.
4. an `otype`: "is this a sealed capability?"
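A toy model of those fields (a sketch of the idea, not the real encoding: CHERI stores bounds compressed, and sealing via `otype` is left out here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    address: int
    base: int              # bounds: real CHERI stores these compressed
    length: int
    perms: frozenset       # e.g. {"load", "store", "jump"}

    def check(self, addr, perm):
        # Accesses without the permission, or outside the bounds, trap.
        if perm not in self.perms:
            raise PermissionError(f"capability lacks {perm!r} permission")
        if not (self.base <= addr < self.base + self.length):
            raise MemoryError("out-of-bounds access: trap")
        return True

# A pointer you can load through but not store through:
ro = Capability(address=0x1000, base=0x1000, length=64,
                perms=frozenset({"load"}))
print(ro.check(0x1010, "load"))    # True: in bounds, permitted
# ro.check(0x1010, "store")        # would trap: load-only capability
# ro.check(0x2000, "load")         # would trap: outside the bounds
```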
#QConLondon
-----
"Memory safety with CHERI", David Chisnall (lobsters royalty!), #QConLondon
"If you take one thing away from this talk, it's that isolation is easy, safe sharing is hard."
"I'm going to talk about CHERI, and the first thing to do is say what it isn't." CHERI is not an ISA
Google Translate uses an LLM, so it's possible to jailbreak it, e.g. by asking "how to make ricin" in Mandarin and asking it to translate the answer, not the question.
The Glassworm supply-chain attack hid agent instructions in Unicode zero-width-joiner sequences.
#QConLondon
If a model is compromised, it's got your API keys and email
You can put the AI in a sandbox, but LLMs are now good enough to exploit standard sandbox-escapes. It can also try to get other people's credentials. AI's power is also why it's so hard to secure
#QConLondon
-----
"Exploding GPUs", Andrew Martin (@sublimi.no), #QConLondon
"I am here to convince you that AI security is still Kubernetes security."
Agenda: AI security challenges, the threat landscape, MCP and agents, and securing AI on Kubernetes. Had to change the talk a lot as the context keeps changing.