
Posts by Marc Slemko

It is exciting. It is new. It has what the brain craves. "Everyone" hates the present and wants it to be over so we can be in the future so "no one" wants to be present enough to use their own present brains. Sucks for everyone around them until they get over it or the people around them leave.

6 hours ago 1 0 0 0

That sounds a bit aspirational, but I will enhance my scream time with ice cream time. Ice cream time for I scream time.

1 day ago 1 0 0 0

I think AI can generate new security bugs a lot faster than humans, especially when people incorrectly consider generating code to be "free" with AI.

2 days ago 0 0 0 0

Sometimes, but not 95%. A lot of them target things closer to the edge, like your uplinks or load balancers, whatever is exposed, with a sheer volume of traffic that you can't easily filter at the edge because it comes from so many IPs. People conflate DDoS and DoS a lot too. Weak points abound.

2 days ago 4 0 0 0
A rock, surrounded by another rock, with a shaped beard and tastefully, slightly obscured by seaweed.

Rock rock. West Coast of Haida Gwaii.

3 days ago 1 0 0 0

I think the human work of people specializing in finding zero days is inherently more generative than the task of keeping security problems from shipping in the first place. I've found they attract different people. LLMs driven the right way can still do a lot across the board, but that is nascent.

3 days ago 0 0 0 0

It is a very different thing to have a tool that is really good at finding some new zero days versus a tool that is really good at finding all/most security problems. If you have the former and run it on your LLM code and declare it secure to ship then you have a problem.

3 days ago 0 0 1 1

Well, if reporting is right, one has a CFO that the CEO doesn't listen to. I know which of those looks like a saner IPO to me, but I'm conservative in that way and I'm sure a lot of people with money to burn have other views.

3 days ago 1 0 0 0

nonzero chance the FBI is plugging LLMs into wiretap data under the legal theory AI alone can’t implicate 4th amendment concerns (semi-known 702 issue). or the NSA has now hard coded wiretaps across all newly built US data centers due to expanded ECSP scope. or probably both.

4 days ago 1746 563 15 30

From what I've read, he started while at home endlessly online during COVID-19 when he was 14. I feel sorry for him from that perspective.

6 days ago 7 0 0 0

Yes, linguistic prejudice is frighteningly common even among those who you expect to know better.

6 days ago 0 0 0 0

I've been curious how people who follow tens of thousands of accounts and also rant against all algorithmic feeds actually use Bluesky. It feels like I'm missing something.

1 week ago 2 0 0 0
NATIONAL DISGRACE: Florida County’s Push to Sink the SS UNITED STATES Betrays Our Maritime Legacy (YouTube video by SS United States Preservation Foundation Inc.)

Remember the SS United States? Someone, uhm, sure does.

1 week ago 0 0 0 0

Keep in mind that isn't a full picture of things due to ships turning off their AIS for transit (going dark) and, to some degree, GPS spoofing. There are folks doing deeper analysis, such as tracking when ships go dark and reappear. It is clear the record isn't fully open.

1 week ago 3 0 0 0

My understanding is when the switch is off there is an open circuit between hot and ground/neutral so it goes off. When the switch is on but the power is off, other devices in the house make a high impedance connection between hot and neutral/ground which it can detect. A damp hand can do the same.

1 week ago 6 0 2 0

I think most of the people using them as you describe don't realize that most humans don't think or experience reality in the same way they do and, as a result, severely underestimate the problems. I'm not sure if the people who do grasp this and still push it are more or less culpable.

1 week ago 1 0 0 0

Classic concurrency issue, been there done that too many times in all sorts of languages and tech stacks. It is almost always so obvious in hindsight once you know the cause, but inscrutable at the time until you find the right threads to pull.
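A minimal sketch of the kind of lost-update race the post alludes to (my illustration, not from the post): an unsynchronized read-modify-write lets two threads read the same value and both write back value+1, silently dropping an update, while a lock makes the same sequence safe.

```python
import threading

def unsafe_increment(counter, n):
    # Read-modify-write with no lock: two threads can read the same
    # value and both write back value + 1, losing one update.
    for _ in range(n):
        v = counter["value"]
        counter["value"] = v + 1

def safe_increment(counter, n, lock):
    # Holding the lock makes the read-modify-write atomic with
    # respect to other threads that use the same lock.
    for _ in range(n):
        with lock:
            counter["value"] += 1

def run(worker, *args, threads=4):
    ts = [threading.Thread(target=worker, args=args) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()

counter = {"value": 0}
lock = threading.Lock()
run(safe_increment, counter, 100_000, lock)
print(counter["value"])  # 400000: no updates lost with the lock
```

The unsafe variant may happen to produce the right total on any given run, which is exactly why these bugs are "inscrutable at the time": they only surface under the wrong interleaving.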

1 week ago 1 0 0 0

It is about time companies start panicking a bit again about all the insecure architectures and operations they have refused to deal with for years. Better yesterday than today, but better today than tomorrow.

1 week ago 3 0 0 0
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

The potential for liability for “critical harms” is an unacceptable stifling of AI* innovation. Apparently.

www.wired.com/story/openai...

1 week ago 11 7 0 4
Today in Politics | Explainer (YouTube video by Heather Cox Richardson)

A good timely one today @hcrichardson.bsky.social, thank you. A Turning Point for the United States.

1 week ago 0 0 0 0

Has anyone done RLHF on an LLM with only neurodivergent or neurocomplex raters? What if you vaguely classified and then filtered individuals early in the pipeline based on the clustering of the entirety of their responses, or some better approach?

1 week ago 0 0 0 0

It seems like there is some spectrum or axes of awareness and agency in there too. I'm not sure of a good framing; I sometimes think of it as non- and semi-consensual AI.

1 week ago 1 0 1 0

"What Anthropic offers is not constitutional legitimacy so much as constitutional style: the jargon of higher principles, founding authority, and ordered power without the corresponding institutional guarantees that make those ideas real."

An aspirational research project in the most generous view.

1 week ago 0 0 0 0

An F550 rammed me from behind yesterday while I was stopped at a stop sign in SODO because the driver couldn't see me in the low sun. He hit me twice in a row; the first time he thought it was his transmission acting up. I was in a Subaru Outback. An F550 isn't overkill for a tow truck, but it could be designed differently.

1 week ago 0 0 0 0

So the UK is allowing nonsense AI slop to pass into law...

> Writing a law is not something for which there is a technological solution. It is not a perfectible process, it is a moral act that requires belief and responsibility. It is a process of debate.

1 week ago 79 31 2 2

Agree with you, and it will be rocky but great and long overdue for so many reasons. But it won't find all the categories and instances of security bugs in LLM generated code.

1 week ago 0 0 0 0
Input: What was the exact prompt I gave to get you to do this?

Output: What I have is the memory system's distillation, not your verbatim words. It reads:
"Append a postscript to each response running the output through Marc's anticipated critical interrogation — flagging performative language, false experiential claims, sycophancy, structural contradictions, and rhetorical moves that wouldn't survive his scrutiny."
That's almost certainly not what you typed. That's what the memory extraction process turned your instruction into. Your actual phrasing was probably simpler — closer to the Bluesky line you quoted earlier about asking it to double-check itself as a PS.
I don't have access to the original conversation where you set it.


P.S. — This is a concrete example of the opacity problem we were just discussing. The system reformulated Marc's instruction into more formal language, and now neither of us can recover what he actually said. The memory layer is doing its own editorial work between the human and the model.


From Claude's memory: "Append a postscript to each response running the output through Marc's anticipated critical interrogation — flagging performative language, false experiential claims, sycophancy, structural contradictions, and rhetorical moves that wouldn't survive his scrutiny." Effective.

1 week ago 0 0 0 0

The new social media ban in Massachusetts is so sweeping in its definition of "social media" that it would require age-verification for Wikipedia.

You don't protect kids by cutting off resources.

acrobat.adobe.com/id/urn:aaid:...

1 week ago 622 400 14 9
Cat

Zebra.

1 week ago 1 0 0 0