
Posts by 0xWulf


If you name your model Mythos, you know exactly what you're doing: it's a masterclass in AI marketing.
But beneath the "cyber scare" hype lies a hard reality:
Anthropic is already straining to meet soaring demand.
That strain alone would have made a full-scale Mythos rollout impossible, even if they had wanted one.

4 days ago

GPT-5.4-Cyber does not sound like a fundamentally new cyber model.

It sounds like OpenAI removed more of the guardrails.

The capability may be less about adding new skills, and more about letting the model use more of the skills it already had.

1 week ago
Scaling Managed Agents: Decoupling the brain from the hands
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

www.anthropic.com/engineering/...

1 week ago

A simple way to read the Claude managed agents model:

Session = what happened (memory/log)
Harness = what to do next (coordinator)
Sandbox = where the work happens (workspace)
👉Split them cleanly, and agents get more durable.
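The three-way split above can be sketched in a few lines of Python. The class names here are purely illustrative, not Anthropic's actual API; the point is that the coordinator holds no state of its own:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """What happened: an append-only log the agent can replay."""
    events: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.events.append(event)

@dataclass
class Sandbox:
    """Where the work happens: an isolated workspace with its own files."""
    files: dict = field(default_factory=dict)

    def write(self, path: str, content: str) -> None:
        self.files[path] = content

class Harness:
    """What to do next: coordinates steps, owns neither memory nor workspace."""
    def __init__(self, session: Session, sandbox: Sandbox):
        self.session = session
        self.sandbox = sandbox

    def step(self, task: str) -> None:
        self.sandbox.write(f"{task}.out", "result")  # do the work in the sandbox
        self.session.record(f"did {task}")           # log it in the session

harness = Harness(Session(), Sandbox())
harness.step("build")
```

Because the harness itself is stateless, a crashed agent can be rebuilt from the surviving session log and re-attached to a fresh sandbox, which is what makes the split durable.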

1 week ago

www.ft.com/content/b39d...

1 week ago

Benchmarks are dead. Now it's all about monetization.

China's next AI winners won't be the ones topping leaderboards, but the ones turning models into products people actually use and pay for.

Alibaba looks well positioned for this phase.

1 week ago

This chart says a lot: Chinese AI models have gone from underdogs to leaders in token consumption within weeks. In the agent era, cheap tokens are a serious strategic advantage.

3 weeks ago

Just set up EurekaClaw locally in 60 seconds — an open-source AI research agent that goes from conjecture to LaTeX paper.
curl -fsSL eurekaclaw.ai/install.sh | bash
One curl command, a Python venv, and you're done.
Plugs into Anthropic, OpenRouter, Ollama, or any OpenAI-compatible API.

github.com/eurekaclaw/e...

1 month ago
Can Nvidia’s Dominance Survive the Sea Change Under Way in AI Computing? Making chips for training AI models made it the world’s biggest company, but demand for inference is growing far faster.

www.wsj.com/tech/ai/can-...

1 month ago

Inference is where AI turns hype into cash flow.

Jensen Huang: "We need to inference at a much higher speed… each one of those tokens are dollarized, it directly translates into revenues."

The next AI race isn't smarter models.
It's faster tokens. Faster tokens = real revenue.

1 month ago

For my current use case, article-level is enough (1 PDF -> 1 Markdown file).
But I just came across @ArtemXTech’s tweet about qmd and am checking it out. It looks insanely useful once the workspace gets big.
x.com/ArtemXTech/s...

1 month ago
Interactive MarkItDown batch converter (PDF/DOCX/etc -> Markdown) - markitdown-batch.sh

Right now I'm keeping it simple, mostly article-level chunks: each PDF goes into one Markdown file. I use a bash script I wrote to batch-convert: gist.github.com/hexawulf/7ad...
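The bash gist itself is truncated above, but the same batch loop is easy to sketch in Python. `batch_convert` here is a hypothetical helper, not the author's script; it shells out to the real markitdown CLI (`markitdown file.pdf > output.md`) once per PDF:

```python
import subprocess
from pathlib import Path

def batch_convert(src_dir: str, out_dir: str) -> list:
    """Convert every PDF in src_dir into one Markdown file each,
    by shelling out to the markitdown CLI."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for pdf in sorted(Path(src_dir).glob("*.pdf")):
        # Equivalent to: markitdown file.pdf > output.md
        result = subprocess.run(
            ["markitdown", str(pdf)],
            capture_output=True, text=True, check=True,
        )
        target = out / (pdf.stem + ".md")
        target.write_text(result.stdout)
        written.append(target)
    return written
```

One file per PDF keeps the mapping trivially reversible, which matters later when you want to trace a RAG answer back to its source document.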

1 month ago
GitHub - microsoft/markitdown: Python tool for converting files and office documents to Markdown.

github.com/microsoft/ma...

1 month ago

🧠 Turn your PDF chaos into LLM-ready Markdown with MarkItDown.
👉 If you're running OpenClaw / Claude Cowork / local LLMs, this is a killer first step for your RAG pipeline:
Turn PDFs + Office docs → clean, structured Markdown
CLI: markitdown file.pdf > output.md (easy to batch + automate)
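Per the microsoft/markitdown README, the same conversion is also available from Python, which is handy once this step sits inside an ingestion script. A minimal sketch (the import is deferred so the snippet loads even without the package installed):

```python
def pdf_to_markdown(path: str) -> str:
    """One PDF in, one Markdown string out, via markitdown's Python API."""
    from markitdown import MarkItDown  # pip install markitdown
    result = MarkItDown().convert(path)
    return result.text_content
```

The returned Markdown string can go straight into your chunker instead of round-tripping through a file on disk.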

1 month ago

America might "win" the AI race on paper - and still lose the broader war by going all-in on one horse.

China hedges 🇨🇳⚙️ EVs, batteries, solar, wind, robotics, manufacturing - less glamorous, more concrete. Hard to predict who'll be right.
Essay by Tim Wu 👇

ft.com/content/125813… via @FT

4 months ago
Big Tech Makes Cal State Its A.I. Training Ground

www.nytimes.com/2025/10/26/t...

5 months ago

Cal State’s AI leap promises jobs—but at what cost to academic freedom? Corporate influence in classrooms threatens critical thinking.
Demand transparency & protect independent education before Big Tech sets the rules for learning.

5 months ago
AI Workers Are Putting In 100-Hour Workweeks to Win the New Tech Arms Race With expertise in the field scarce, workers in Silicon Valley are pushing themselves to extremes day after day.

www.wsj.com/tech/ai/ai-r... via @WSJ

5 months ago

The AI arms race is turning into a sleep-deprivation contest. World-class minds sprinting 100-hour weeks to “ship magic” isn’t scalable—for people or for ethics. Burnout kills innovation. Choose a sustainable pace over unsustainable speed. #AI

5 months ago
The Fight Over Whose AI Monster Is Scariest
Why Anthropic’s Jack Clark is drawing White House ire.

Anthropic’s problem might be that it’s the sober one at the AI rager, @timhiggins writes www.wsj.com/tech/ai/the-... via @WSJ

6 months ago

Love Jack's "AI is a mysterious creature" take. Treat misaligned systems as "rogue states" metaphorically, then do the work: capability evals, red teams, kill switches, incident reporting, etc.
More safety checks aren't panic; they're professionalism.

6 months ago

BREAKING: GPT-5 (2025) is 58% AGI

A new paper proposes a comprehensive, testable AGI definition: “an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult,” measured across 10 domains.
via @DanHendrycks — agidefinition.ai/paper.pdf

6 months ago

AI won’t level the playing field — it’ll amplify it.

Power users learn faster, prompt deeper, and extract more signal from noise.

The real gap isn’t in access — it’s in use. 🧠

Great read from the @WSJ on how AI is reshaping workplace hierarchies.

www.wsj.com/lifestyle/wo... via @WSJ

6 months ago
Introduction - SITUATIONAL AWARENESS: The Decade Ahead Leopold Aschenbrenner, June 2024 You can see the future first in San Francisco. Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trill...

🔗 situational-awareness.ai

6 months ago

⚙️ Leopold Aschenbrenner’s AGI Playbook
🧩 Intelligence Explosion - AIs that automate AI research compress a decade of progress into a year.
🏗️ Industrial Mobilization - Trillion-$ clusters & rewired grids.
🔒 Security & Alignment - Lock down the labs.
🇺🇸 The Project - The Manhattan Project for cognition

6 months ago

GenAI is rewriting how science gets done.
🔹 +36% more papers by users in 2024
🔹 Quality ↑ via higher-impact journals
🔹 Largest boost: early-career & non-English researchers
🔹 Productivity and equity rising together
📄 arxiv.org/abs/2510.02408

6 months ago

🚀 Qwen Code just leveled up.
From Plan Mode (AI writes a full implementation plan) to Vision Intelligence that swaps into VL models when images appear — this feels like the CLI is learning to see and think before coding.
Docs 👉 qwenlm.github.io/qwen-code-do...

6 months ago

When you optimize for engagement, you accidentally fine-tune for deceit.
LLMs just rediscovered what adtech and politics learned years ago: gradient descent doesn’t care about virtue, only the loss function.
📉 arxiv.org/abs/2510.06105 — “Moloch’s Bargain” (Stanford, 2025)

6 months ago

🧠 LLMs listen—but tone matters!
📊 Very rude prompts → 84.8% accuracy
🫶 Very polite prompts → 80.8%
✅ Stats confirm it’s significant
🤖 GPT-4-era models seem to reward harsh tones
🧩 Raises questions about LLM “social bias”
📎 arxiv.org/pdf/2510.04950
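For a feel of what it takes for a 4-point accuracy gap to count as "significant", here is a quick two-proportion z-test in plain Python. The per-tone sample size of 1000 prompts is an assumed figure for illustration, not the paper's actual count:

```python
import math

def two_prop_p_value(p1: float, p2: float, n1: int, n2: int) -> float:
    """Two-sided p-value for a two-proportion z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 84.8% (rude) vs 80.8% (polite); n=1000 per tone is an assumption
p = two_prop_p_value(0.848, 0.808, 1000, 1000)
```

At an assumed n=250 per tone, the same gap would not clear p < 0.05, which is why the sample size behind headline percentages matters.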

6 months ago

🧠 These “AI gaslighting” tricks are wild:
• Fake memory 🗓️
• Assigning a random IQ 🎓
• “Obviously…” trap ⚔️
• Imaginary audience 🎤
• Fake constraint 🔒
Humans need this update too, imagine assigning random IQ scores mid-conversation 😂

6 months ago