Posts by Ariadne Conill
instructions unclear: posting inuyasha memes on linkedin to describe various business strategies
this is a great proposal and one that i think is worth studying. sound transit is already over-committed to light rail for the "spine" service, but building ballard/west seattle rail as an automated light *metro* makes an incredible amount of sense. our neighbours to the north have proved it.
gitlab.alpinelinux.org/alpine/aports
contribute it to aports? we would love to have it...
QNX is an oldschool microkernel-ish unix clone. it is actually really neat. unfortunately, they open-sourced it, and then blackberry rugpulled the open source version.
debating writing a personal activitypub server that also functions as an atproto PDS. that way i can talk to my friends on either platform easily.
clearly, it is time to go back to linkedin, the premier social network for professional posters
Bluesky is a fun toy but for serious posters like myself the uptime is frankly unacceptable. I require enterprise-level availability and durability SLAs for any website where I'm going to post. I'm going to have my SRE team start looking into other options for my posts.
banned from the delivery tracking website for pressing F5 too many times
oh? what sort of unholy things? i am trying to improve the user experience of pkgconf on windows right now.
it does not matter if they are lying or not, people need to be building their systems as if it were true anyway
the reason i bring this up is because it highlights my earlier thesis: complex systems are *exploitable* systems.
as an aside, a random fun fact: most CVEs in sudo are not related to memory safety at all, but rather to the way that sudo processes its access control rules.
and in a world where anyone can rent APT-level capability for $200/month…
you can't afford to rely on "probably safe."
you need to build systems which are *correct by construction*.
but capability systems only work if "starting from nothing" is real.
if there's hidden ambient authority, the whole model collapses.
in capability systems, security is not obtained through enforcement, but rather through construction.
the alternative to chipping away at default ambient authority is to build a capability system.
in a capability system, your program receives all of the authority it needs to run, and nothing more, when the program starts.
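a minimal sketch of what that looks like, in pure python. this is illustrative object-capability style, not any particular framework, and all the names are made up: the untrusted routine is handed exactly one read capability at startup and has no ambient way to reach anything else.

```python
# illustrative object-capability sketch: the untrusted routine receives
# exactly the authority it needs (one read-only handle) and nothing more.

class ReadCap:
    """capability granting read access to a single named resource."""
    def __init__(self, name, data):
        self._name = name
        self._data = data

    def read(self):
        return self._data

def untrusted_routine(cap):
    # this code has no ambient authority: it cannot open files or mint
    # new capabilities; it can only use what it was explicitly handed.
    return cap.read().upper()

# the trusted entry point constructs the capability graph up front,
# then passes it down. nothing outside the graph is reachable.
config = ReadCap("config", "listen_port=8080")
print(untrusted_routine(config))
```

the security property falls out of construction: if the routine was never handed a write capability, there is nothing to enforce at runtime, because the unwanted operation is simply unreachable.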
seccomp, however, is much worse. seccomp is fragile: i have had to downgrade musl in alpine on a few occasions because upgrading it broke everyone, since the seccomp policy shipped with containerd had not been updated for newer syscalls.
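a toy model of why this keeps happening: the policy is a frozen allowlist, while the libc underneath keeps evolving. the syscall lists here are illustrative, though openat2 really is one of the newer syscalls that has tripped up stale seccomp policies.

```python
# illustrative sketch of why frozen syscall allowlists are fragile:
# a policy written against an old libc breaks when a newer libc starts
# using a syscall the policy has never heard of.

OLD_POLICY = {"open", "read", "write", "exit"}  # frozen allowlist

def run_under_policy(syscalls, policy):
    """simulate executing a syscall trace under a seccomp-style allowlist."""
    for sc in syscalls:
        if sc not in policy:
            return f"killed: {sc} not in allowlist"
    return "ok"

old_libc = ["open", "read", "write", "exit"]
new_libc = ["openat2", "read", "write", "exit"]  # newer syscall, same intent

print(run_under_policy(old_libc, OLD_POLICY))  # -> ok
print(run_under_policy(new_libc, OLD_POLICY))  # -> killed: openat2 not in allowlist
```

note that the program's *behavior* never changed, only the syscall it used to express that behavior. the allowlist cannot tell the difference.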
the problem with subtractive sandboxes is that they are imperfect.
while approaches like landlock and pledge reduce authority more strictly, the runtime environment and programs must voluntarily facilitate installation of a landlock or pledge policy.
a subtractive sandbox starts from a position of ambient authority and voluntarily reduces that authority before executing code in the sandbox.
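the subtractive pattern, sketched in pure python. this is pseudostructure, not a real kernel mechanism: seccomp, pledge and landlock do the analogous thing at the syscall level, but the shape is the same, start from everything and voluntarily shrink before running sandboxed code.

```python
# illustrative sketch of subtractive sandboxing: start from ambient
# authority, then voluntarily drop down to an allowlist before
# executing code inside the sandbox.

ALL_OPS = {"read", "write", "open", "connect", "exec"}

class Sandbox:
    def __init__(self):
        self.allowed = set(ALL_OPS)  # ambient authority by default

    def drop_to(self, allowlist):
        # subtraction is one-way: the set can only ever shrink.
        self.allowed &= set(allowlist)

    def perform(self, op):
        if op not in self.allowed:
            raise PermissionError(op)
        return f"{op}: ok"

sb = Sandbox()
sb.drop_to({"read", "write"})  # roughly analogous to pledge("stdio")
print(sb.perform("read"))      # -> read: ok
```

the weakness is visible in the first line of `__init__`: everything not explicitly subtracted remains granted, so any authority the policy author forgot about leaks through by default.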
subtractive sandboxes are built with things like seccomp, openbsd's pledge and landlock.
so how do we actually reduce ambient authority?
there are two broad approaches: subtractive sandboxes and capability systems.
but even when you get isolation right… it's still not enough.
because while isolation reduces ambient authority, it doesn't eliminate it.
and ambient authority is where things get dangerous.
and that has turned out to be a lot harder than expected, because you're constantly balancing strong guarantees vs. real-world usability.
but it is necessary work because if you get that balance wrong, the system doesn't fail securely… it just gets disabled instead.
this problem is deeper than it looks.
"just make isolation stronger" isn't enough. it has to feel usable.
we've spent the past two years at @edera.dev working on exactly this:
making isolation not just strong, but *ergonomic enough that people keep it on*.
this is part of why things like kata and firecracker havenât fully taken off.
too hard, too limiting, and people turn it off.
modern workloads make this worse.
people want GPUs.
they want CUDA.
they want direct access to hardware that was never designed for this model.
so now your isolation layer has to preserve guarantees and expose extremely privileged, messy interfaces safely.
that's a very hard problem.
we've seen this before: "just turn off SELinux"
every enterprise software 10 years ago had this in their install instructions.
and today enterprise software says "turn off seccomp" or "use privileged mode".
we started with isolation because itâs *the* foundation.
almost every other security property assumes you already have it.
but isolation is a paradox:
it has to be strong enough to withstand attacks…
while still being flexible enough that users don't disable it.
because they *will* disable it.
resilient systems are *intentional*.
they do more with less.
they are built from primitives that are easy to reason about.
enabling people to easily build resilient systems is why i started @edera.dev.
my take: complexity is the enemy.
because AI doesnât get tired.
it will happily explore every edge case, every weird state, every undefined behavior in your system.
so the more complex your system is, the more opportunities you're giving an attacker's AI to find a way in.