Migrated the blog from Ghost to Astro, built entirely with Claude Code. Cisco CLI-inspired dark-mode design — syslog timestamps, IOS prompts, terminal aesthetics.
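For flavor, a minimal sketch of how a post date might be rendered in that syslog style. This is a hypothetical helper (`syslogStamp` and the log string are illustrative, not the blog's actual code):

```ts
// Hypothetical helper, not the blog's actual code: render a Date as a
// Cisco IOS syslog-style timestamp, e.g. "*Jan  9 14:02:33.512:".
function syslogStamp(d: Date): string {
  const months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
  const day = String(d.getDate()).padStart(2, " "); // IOS pads the day with a space
  const hh = String(d.getHours()).padStart(2, "0");
  const mm = String(d.getMinutes()).padStart(2, "0");
  const ss = String(d.getSeconds()).padStart(2, "0");
  const ms = String(d.getMilliseconds()).padStart(3, "0");
  return `*${months[d.getMonth()]} ${day} ${hh}:${mm}:${ss}.${ms}:`;
}

console.log(syslogStamp(new Date()), "%SYS-5-POST: new entry published");
```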
Andrej Karpathy on the No Priors podcast talking about agents, AutoResearch, and what he calls the "loopy era" of AI.
Two things stood out. First, the Frontier Lab vs. Outside framing — frontier labs have massive trusted compute, but the Earth has far more untrust…
The more I use Claude Code, the more I see it: this isn't a chatbot — it's an OS.
LLM = CPU. Agent = OS. Skills + MCPs = applications.
The parallels go deeper than metaphor. Context windows as RAM, tool calls as syscalls, MCP as… internetworking.dev/blog/the-ana...
#AIAgents #MCP #ClaudeCode
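To make the analogy concrete, here's a minimal hedged sketch of that OS view: the agent as a kernel-style loop, the LLM as CPU, the context window as RAM, tool calls as syscalls dispatched to MCP "drivers". All names (`agentLoop`, `Driver`, `Step`) are illustrative, not any real SDK:

```ts
type ToolCall = { tool: string; args: Record<string, unknown> };
type Step = { text: string; call?: ToolCall };

// MCP server ~ device driver: one invoke() entry point per device
interface Driver {
  invoke(args: Record<string, unknown>): Promise<string>;
}

// The agent as OS: a kernel-style scheduler loop around the LLM "CPU".
async function agentLoop(
  llm: (context: string[]) => Promise<Step>, // CPU: one fetch-decode-execute step
  drivers: Map<string, Driver>,              // driver table, keyed by tool name
  context: string[],                         // RAM: the context window
): Promise<string> {
  for (;;) {
    const step = await llm(context);         // run the CPU for one step
    context.push(step.text);                 // write the result back to RAM
    if (!step.call) return step.text;        // no syscall requested: "process" exits
    const driver = drivers.get(step.call.tool);
    if (!driver) {
      context.push(`error: no driver for ${step.call.tool}`);
      continue;
    }
    // tool call ~ syscall: trap into the "kernel", dispatch to the driver
    context.push(await driver.invoke(step.call.args));
  }
}
```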
Exactly — and framing agents as OS makes this intuitive. OSes have always needed permission models. Agent OS is no different: MCPs are the device driver layer, and governance sits at the kernel. I explored this architecture in depth: internetworking.dev/blog/the-ana...
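A hedged sketch of that kernel-level gate: each dispatch is checked against a capability table before it reaches the driver layer. The tool names, capability labels, and default-deny rule are all assumptions for illustration, not anything from the linked post:

```ts
type Capability = "read" | "write" | "exec";

// Capability table: which permissions each tool (syscall) requires.
const policy: Record<string, Capability[]> = {
  "fs.read": ["read"],
  "fs.write": ["read", "write"],
  "shell.run": ["read", "write", "exec"],
};

function authorize(tool: string, granted: Set<Capability>): void {
  const needed = policy[tool] ?? ["exec"]; // unknown tools treated as exec-level
  for (const cap of needed) {
    if (!granted.has(cap)) {
      throw new Error(`EPERM: ${tool} requires '${cap}', which was not granted`);
    }
  }
}

// A read-only session can read but not shell out:
authorize("fs.read", new Set<Capability>(["read"]));      // passes
// authorize("shell.run", new Set<Capability>(["read"])); // throws EPERM
```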
Love this — the "human-in-the-loop" interrupt pattern maps cleanly onto OS concepts. MCP servers are essentially device drivers in the agent stack. I've been writing about this: the agent is the OS, MCP is the driver layer, skills are apps. internetworking.dev/blog/the-ana...
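And a sketch of that interrupt pattern: tool calls flagged as dangerous trap out to the operator before they run, the way a kernel can punt a decision to userspace. The tool names and the stdin approval channel are assumptions for illustration:

```ts
import * as readline from "node:readline/promises";

// Calls on this list raise an "interrupt" and wait for human approval.
const DANGEROUS = new Set(["fs.write", "shell.run"]);

async function confirmIfDangerous(tool: string, args: unknown): Promise<boolean> {
  if (!DANGEROUS.has(tool)) return true; // benign calls pass straight through
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(
    `agent wants ${tool}(${JSON.stringify(args)}): allow? [y/N] `,
  );
  rl.close();
  return answer.trim().toLowerCase() === "y";
}
```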
Just created with Sora
internetworking.dev/aider-chat-d...
#aider.chat #netdevops #ai #ccie
Been looking forward to this book for some time - it finally arrived today: AI Engineering: Building Applications with Foundation Models by @chiphuyen.bsky.social
Just in time to read it before the podcast episode we recorded with Chip on The Pragmatic Engineer Podcast!
Thomson TO7/70, a French family computer of the early 1980s. Widely used in French schools, just like the BBC Micro in the UK and the Apple II in the US.
Watching the 8th episode of #Silo S02, an excellent sci-fi series; thinking about reading the book too #appletv #tv #series #sci-fi
Happy New Year 2025. What to build next?
Here's my end-of-year review of things we learned about LLMs in 2024 - we learned a LOT of things simonwillison.net/2024/Dec/31/...
Table of contents:
The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
“Agents” still haven’t really happened yet
Evals really matter
Apple Intelligence is bad, Apple’s MLX library is excellent
The rise of inference-scaling “reasoning” models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged “llms” on my blog in 2024