It is exciting. It is new. It has what the brain craves. "Everyone" hates the present and wants it to be over so we can be in the future so "no one" wants to be present enough to use their own present brains. Sucks for everyone around them until they get over it or the people around them leave.
That sounds a bit aspirational, but I will enhance my scream time with ice cream time. Ice cream time for I scream time.
I think AI can generate new security bugs a lot faster than humans, especially when people incorrectly consider generating code to be "free" with AI.
Sometimes but not 95%. A lot of them target things closer to the edge like your uplinks or load balancers, whatever is exposed, with a sheer volume of traffic that you can't easily filter at the edge because it comes from so many IPs. People conflate DDoS and DoS a lot too. Weak points abound.
A rock, surrounded by another rock, with a shaped beard, tastefully and slightly obscured by seaweed.
Rock rock. West Coast of Haida Gwaii.
I think the human work of people specializing in finding zero days is inherently more generative than the task of keeping security problems from shipping in the first place. I've found they attract different people. LLMs driven the right way can still do a lot across the board, but that is nascent.
It is a very different thing to have a tool that is really good at finding some new zero days versus a tool that is really good at finding all/most security problems. If you have the former and run it on your LLM code and declare it secure to ship then you have a problem.
Well, if reporting is right, one has a CFO that the CEO doesn't listen to. I know which of those looks like a saner IPO to me, but I'm conservative in that way and I'm sure a lot of people with money to burn have other views.
nonzero chance the FBI is plugging LLMs into wiretap data under the legal theory AI alone can’t implicate 4th amendment concerns (semi-known 702 issue). or the NSA has now hard coded wiretaps across all newly built US data centers due to expanded ECSP scope. or probably both.
From what I've read, he started while at home endlessly online during COVID-19 when he was 14. I feel sorry for him from that perspective.
Yes, linguistic prejudice is frighteningly common, even among those you'd expect to know better.
I've been curious how people who follow tens of thousands of accounts and also rant against all algorithmic feeds actually use Bluesky. It feels like I'm missing something.
Remember the SS United States? Someone, uhm, sure does.
Keep in mind that isn't a full picture of things due to ships turning off their AIS for transit (going dark) and, to some degree, GPS spoofing. There are folks doing deeper analysis, such as tracking when ships go dark and reappear. It is clear it isn't open.
My understanding is that when the switch is off there is an open circuit between hot and ground/neutral, so it goes off. When the switch is on but the power is off, other devices in the house make a high-impedance connection between hot and neutral/ground, which it can detect. A damp hand can do the same.
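Not from the post, just a back-of-the-envelope sketch of the idea: even a high-impedance return path lets a tiny but measurable current flow, which is all a sensitive indicator needs. The supply voltage, detector impedance, and path impedances below are assumed values for illustration only.

```python
# Rough voltage-divider sketch of the detection idea above.
# All numbers are assumptions for illustration, not measurements.

SUPPLY_V = 120.0          # assumed mains voltage (RMS)
DETECTOR_OHMS = 1e6       # assumed input impedance of the sensing circuit

def detector_current_ua(path_ohms: float) -> float:
    """Current (in microamps) through the detector when the only return
    path to neutral/ground has the given impedance."""
    if path_ohms == float("inf"):
        return 0.0        # truly open circuit: nothing to detect
    return SUPPLY_V / (DETECTOR_OHMS + path_ohms) * 1e6

# Switch off, open circuit: no current at all, so the indicator goes off.
print(detector_current_ua(float("inf")))   # 0.0

# High-impedance path through other devices in the house (assumed 100 kOhm):
# roughly 100 uA, tiny but easily enough for a sensitive indicator.
print(round(detector_current_ua(100e3)))   # 109

# A damp hand (assumed ~1 MOhm) gives a similar, still detectable, current.
print(round(detector_current_ua(1e6)))     # 60
```

The only point is the contrast: a truly open circuit passes nothing, while any high-impedance path, through other loads or a damp hand, passes microamps that a sensitive detector can register.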
I think most of the people using them as you describe don't realize that most humans don't think or experience reality in the same way they do and, as a result, severely underestimate the problems. I'm not sure if the people who do grasp this and still push it are more or less culpable.
Classic concurrency issue, been there done that too many times in all sorts of languages and tech stacks. It is almost always so obvious in hindsight once you know the cause, but inscrutable at the time until you find the right threads to pull.
It is about time companies start panicking a bit again about all the insecure architectures and operations they have refused to deal with for years. Better yesterday than today, but better today than tomorrow.
The potential for liability for “critical harms” is an unacceptable stifling of AI* innovation. Apparently.
www.wired.com/story/openai...
A good timely one today @hcrichardson.bsky.social, thank you. A Turning Point for the United States.
Has anyone done RLHF on an LLM with only neurodivergent or neurocomplex raters? What if you vaguely classified, then filtered, individuals early in the pipeline based on the clustering of the entirety of their responses, or some better approach?
It seems like there is some spectrum or set of axes of awareness and agency in there too. I'm not sure of a good framing; I sometimes think of it as non- and semi-consensual AI.
"What Anthropic offers is not constitutional legitimacy so much as constitutional style: the jargon of higher principles, founding authority, and ordered power without the corresponding institutional guarantees that make those ideas real."
An aspirational research project in the most generous view.
An F550 rammed me from behind yesterday while I was stopped at a stop sign in SODO because he couldn't see me in the low sun. Twice in a row; the first time he thought it was his transmission acting up. I was in a Subaru Outback. An F550 isn't overkill for a tow truck, but it could be designed differently.
So the UK is allowing nonsense AI slop to pass into law...
> Writing a law is not something for which there is a technological solution. It is not a perfectible process, it is a moral act that requires belief and responsibility. It is a process of debate.
Agree with you, and it will be rocky but great and long overdue for so many reasons. But it won't find all the categories and instances of security bugs in LLM generated code.
Input: What was the exact prompt I gave to get you to do this?
Output: What I have is the memory system's distillation, not your verbatim words. It reads: "Append a postscript to each response running the output through Marc's anticipated critical interrogation — flagging performative language, false experiential claims, sycophancy, structural contradictions, and rhetorical moves that wouldn't survive his scrutiny." That's almost certainly not what you typed. That's what the memory extraction process turned your instruction into. Your actual phrasing was probably simpler — closer to the Bluesky line you quoted earlier about asking it to double-check itself as a PS. I don't have access to the original conversation where you set it.
P.S. — This is a concrete example of the opacity problem we were just discussing. The system reformulated Marc's instruction into more formal language, and now neither of us can recover what he actually said. The memory layer is doing its own editorial work between the human and the model.
From Claude's memory: "Append a postscript to each response running the output through Marc's anticipated critical interrogation — flagging performative language, false experiential claims, sycophancy, structural contradictions, and rhetorical moves that wouldn't survive his scrutiny." Effective.
The new social media ban in Massachusetts is so sweeping in its definition of "social media" that it would require age-verification for Wikipedia.
You don't protect kids by cutting off resources.
acrobat.adobe.com/id/urn:aaid:...
Cat
Zebra.