
Posts by John Herrman

with the correct link this time

3 hours ago
"Been thinking a lot about whether it's possible to stop humanity from developing AI," wrote Sam Altman to Elon Musk in 2015, shortly after Google had acquired DeepMind. Given that it appeared it would happen anyway, he wrote, "it seems like it would be good for someone other than Google to do it first." Musk, who had told Altman that DeepMind was causing him "extreme mental stress" and that, should Google "win," it would be "really bad news with their one mind to rule the world philosophy," was receptive, having recently failed to lure Hassabis to his own constellation of companies. Soon, they became co-founders of OpenAI. By 2018, a bitter power struggle had led to Musk cutting ties with OpenAI, resulting in years of court battles, some still ongoing. Now the men tweet openly about their contempt for each other. (Altman on Musk: "I don't think he's, like, a happy person. I do feel for him." Musk on Altman: "Scam Altman lies as easily as he breathes.") Anthropic's founding was the result of a core group of researchers and employees leaving OpenAI over concerns about its approach to safety, but also over concerns about Altman's character specifically. (Amodei on Altman in 2021: "The problem with OpenAI is Sam himself." In 2026, after OpenAI seized on Anthropic's conflict with the Pentagon, Amodei described Altman as telling "straight up lies" and "gaslighting.") In 2023, Musk, now in possession of Twitter and a clearer public political identity, finally founded his own firm, xAI, to build a "maximum truth-seeking AI that tries to understand the nature of the universe," but also because Sam Altman was making ChatGPT "woke," which he said could be "deadly." (Elaborating on the theme, and making sure not to miss anyone, Musk posted at Amodei earlier this year: "Your AI hates Whites & Asians, especially Chinese, heterosexuals and men. This is misanthropic and evil.")


It can be deflating to reimagine the AI boom as a more pedestrian business story with particularly colorful executives expressing contempt for their rivals and making things personal on the way to, say, packaged-beverage dominance. But the maximally dysfunctional dynamics of the pre-takeoff AI industry can also be read as an early, bad sign of how things might play out for everyone else: like they always do, but maybe worse. Here is a visible, prepared, and substantively aligned "small group of elites," including a few of the richest people in the entire world, suggesting that it's time to collectively "rethink the social contract" and warning that we're about to be "tested as a species" as they're in the process of succumbing completely to a crude, winner-take-all market logic, utterly failing to coordinate among themselves, fighting regulation with lobbyists, getting pissed as hell in public, and opening up a bunch of fronts in a total industrial war for scarce resources — power, compute, water — with immediate and unmitigated externalities. (Granted, comprehensive high-level coordination among them might look like something else people don't particularly love: a cabal.) Individually, to receptive audiences, they can explain how all this happened and rationalize their own roles. To much of the rest of the world, though, they just look like a group of people who worried about building the thing and then couldn't figure out how not to, who cautioned against getting trapped in an arms race and then started one anyway. They see people warning about the speed of change as they step over one another to make it accelerate. They see people urging humility and accusing one another of having God complexes while engaging in a naked struggle for power.


What are we supposed to make of the AI elite's open, seething hatred for one another? nymag.com/intelligence...

3 hours ago

oops!

3 hours ago

Nice piece by @jwherrman.bsky.social. I especially appreciated “A lot of what we’re hearing about us is really about them.” There is a long history of this in Silicon Valley. Almost by default, they posit OUR interests as necessarily being aligned with theirs, & rely on lack of interrogation.

1/__

2 days ago
Whatever the merits of their defenses — and, as Mike Masnick at TechDirt argues, there are more than some online-safety advocates are comfortable admitting — they have lately been, to be blunt, losing the public debate. The legal and rhetorical framework within which Meta and others long built their businesses — the laws protecting platforms from liability aren't perfect, but without them, the internet as we know it couldn't exist — suddenly tells us less about the possible futures of social media than a single unguarded quote from a Los Angeles juror, who told reporters after the verdict, "We wanted them to feel it. We wanted them to realize this was unacceptable." No need to be too specific about what "this" is — everyone gets the idea. Here, for people like Altman, is a glimpse of the future: Nobody wants to hear from social-media companies, while everyone wants something to be done to them. This punishing dynamic will consume their next decade in the form of rolling public-relations crises, lawsuits, regulations, and laws, which they will have to deal with in the manner of other entrenched and unpopular industries, with lobbyists and lawyers, rather than as privileged stewards of the economy's most exceptional story. Today, AI leaders still demand immense amounts of attention, and their predictions — alongside leaks from their companies, cryptic posts from researchers, and viral X essays galore — are largely driving the story. But what about tomorrow? In the weeks before social media's "tobacco moment," there were already signs of angst surfacing in the AI industry about its loss of narrative control. Anthropic had publicly clashed with a right-wing Department of War that wanted control over its technology and for the company's founder to sit down and shut up. Meanwhile, the most popular left-wing politicians in the country were suddenly advocating for an extreme AI "pause" that would halt development in its tracks, citing AI leaders' own warnings about its risks.


OpenAI's "Industrial policy for the Intelligence Age," a set of proposals mingling plans to coordinate against AI security threats with outlines of redistributive government programs, is a sensible (if vague) document made incredibly strange by its source: Maybe we see evidence here of a different sort of AI bubble, one in which Sam Altman seems to believe that America is waiting on people like him to start a conversation about renegotiating the social contract, rather than a country spring-loaded for a tech backlash to end all tech backlashes — the same bubble where, last year, as few will recall, Zuckerberg, trying out a new identity as an AI CEO, wrote a manifesto about how "superintelligence is now in sight." But in OpenAI's substantial and much quieter lobbying efforts around, among many other things, "child safety" and data-center deregulation — and perhaps in its nascent new media strategy — there's evidence of a more paranoid and ruthless approach to managing the story. AI won't just have a "big tobacco" moment, a point at which years of misleading marketing, personal angst, and visible harms gradually build to a moment of reckoning. It's likely to have something more intense, and soon, arriving ahead of, not after, its diffusion through society and the economy. The last story the AI firms will have been in control of will be the one where they said they were about to change everything. It helped them raise a lot of money to build a lot of data centers and train better models. What happens next might not be up to them at all.


The AI industry is about to lose control of its story, and on some level it knows this nymag.com/intelligence...

2 weeks ago
Mark Zuckerberg Finally Submits His AI Manifesto He’s trying to wax poetic about the future like other AI CEOs. The result is both derivative and weird.

right, but arguably unique to him. that manifesto came right after a big pivot to AI, borrowed AI-guy aesthetics, and came after years of Zuck fatigue. He didn't lose AI influence, he was grasping for it nymag.com/intelligence...

2 weeks ago

Social media backlash was gradual, cumulative, and lost coherence as it grew in strength. AI backlash is different in that it's arriving before diffusion. For now, it's a preemptive response to an alarming story as much as it is a response to material conditions and experience.

2 weeks ago
Inside Facebook’s (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political-Media Machine (Published 2016)

yeah, I remember being struck by the outsourcing here www.nytimes.com/2016/08/28/m... (the whole stack was already going international)

1 month ago
On one side, you've got the superpowers that initiated the war, states with near-infinite resources to marshal for their military campaigns and accompanying messaging strategies. On the other, you've got, well, Iran. Besides the contrast in subject — upbeat, chest-thumping aggression versus a call to avenge the deaths of schoolchildren — the use of cheap, fast, and widely available AI video generation has a powerful flattening effect. If you've played with these AI tools, you'll easily be able to imagine fragments of the sorts of prompts that were fed into software like Kling, Veo, Runway, Grok, or Seedance: the aftermath of an attack on a school in Iran in the style of LEGO animation; LEGO man Donald Trump reading a notebook of the Epstein files, Satan is there, realistic, cinematic; militant bowling pins approach holding sign reading "We won't stop making nuclear weapons," Dreamworks style, smirking faces, angry eyes; bowling-alley style animation of a "strike" using military jets. Nothing here took much effort, and the outputs end up inhabiting the same narrow aesthetic zone, easily and cheaply obtainable, associated with widely available AI tools that have rapidly commoditized such videos. For Iran, AI video propaganda might represent a slight increase in capability, a way to produce higher volumes of passable, legible, and persuasive material for a wider range of audiences. For the United States, this sort of automation provides speed and flexibility, sure, but also draws its communications strategy down into a far more symmetric fight than the war itself, where everyone has the same weapons (i.e., AI video-generation tools that any curious teenager could use at this level) and the battlefield — where the possible audiences for such videos are to be found — is the entire internet, where such videos overlap, stylistically, with spam.


Into the void of information coming from the Middle East flowed an endless supply of videos suggesting unlikely, extreme, and enticing outcomes: swarms of missiles obliterating Dubai and Tel Aviv; societal breakdown in the United States; American warships exploding and jets falling out of the sky. (The AI slop war is producing blowback, too: Last week, Benjamin Netanyahu was compelled to make a proof-of-life video after social media users spread claims that, during a recent address, the Prime Minister had too many fingers — a telltale sign, social media users said, that it had been generated by AI.) If you're a professional slop merchant — the Trump Hormuz video above originated from a meme account allegedly based in Switzerland — you know that rendering strange and unbelievable scenarios is a good way to get views. If you've started working around the war, you've probably also noticed that audiences seem more interested in stories representing U.S. and Israeli failure and overreach than in content that suggests their campaign is going well. That may be true not just abroad but also here in the U.S., where the young war is unpopular to an unprecedented degree. (One notable exception is the popularity of content posted in support of Reza Pahlavi, the son of the deposed shah of Iran, whose followers in the diaspora seem receptive to fantastical AI content teasing his glorious return.) To paraphrase Field Marshal Montgomery, General MacArthur, and Vizzini, never get involved in a slop war on social media.


This dynamic ends up creating an uncomfortable alignment for the mostly American social-media companies, stuck between the incentives created by their platforms and the military aims of the country against which their home country has declared war. To take a purely sociopathic, market-centric view of what's happening — one that I describe here mainly because it's pretty close to how social-media CEOs talk about their platforms and users in the abstract — AI is helping to meet existing demand for war-related content online. Perhaps it's even inducing more demand, as fake, misleading, or trivializing video makes the already dehumanizing treatment of wars on social media — as a fleeting content category — that much more compelling, and/or worse. (Congrats to AI!)


Two reasons the U.S. is losing the slopaganda war:

1) it makes the most formidable cultural exporter in the world look like a cheap spam operation

2) the most popular AI videos seem to tap into fantasies of American failure, and slop merchants follow the market nymag.com/intelligence...

1 month ago 15 6 6 0

I don't mean to be dismissive of anyone I'm describing but the closest previous experience I've had is when I worked at Popular Mechanics, which was widely distributed in prisons. I'd get a lot of handwritten plans for perpetual motion machines, grand unified theories, surveillance scenarios, etc

1 month ago 13 1 1 0
Communication between characters as constitutionally different as Amodei and Hegseth was always going to be strained, but the breakdown was about more than "vibes and personalities." It was about an AI company and the Pentagon trying to account for disparate and extreme visions of tomorrow in clauses and provisions written for today. It was a collision between two movements that believe, in different and inherently incompatible ways, that they might be on the cusp of achieving absolute power, either for themselves or for the systems they're helping to build, and that current concessions will extrapolate into total failure.

Of course, Hegseth isn't really thinking about the singularity or the ways in which today's Claude is still unreliable (if AI supremacy is a core part of your national-security philosophy, you don't hobble your country's leading lab over a procurement dispute). He's thinking about consolidating power, and his conception of the potential of AI is subordinate to that goal, and to him, rather than to a future superintelligence. This is why the situation escalated the way it did, with the government not just walking away but attempting to punish the company for asserting itself at all. If not for this retaliation, the government's narrow defense — that if a private company wants to contract with the military, it shouldn't expect to be able to micromanage how its tools are deployed and should reasonably expect to be implicated, directly or by reputation, if, for example, the military were to then bomb a school — would make sense on its own terms. But the Trump administration's belief in its right to total power, expressed by and in the belligerent figure of Hegseth, is central both to its "vision," such as it is, and to understanding the way it conducts itself here and elsewhere.

Arguments about AI aren't just about the future — they're trapped in it nymag.com/intelligence...

1 month ago 36 8 2 0
In a way that's distinct from and in many ways opposed to the leaders of Anthropic, Trump administration officials are living and thinking inside of a speculative future of their own. In theirs, they have more power than was previously conceivable to most people and can accomplish things beyond the political imaginations of their predecessors; "you can just build things," which emerged as an AI-adjacent self-help mantra in Silicon Valley, has become a rallying cry for the MAGA movement, too. Like the AI labs, the new American right has made surprising progress toward its goals since 2024 and thinks it might be able to take things a lot further — or, in any case, that it absolutely has to try. While AI figures like Amodei worry that AI capabilities could extend beyond their control, the Trump administration seems to understand AI only in terms of how it might extend its own capacity for control. Before AI labs even have a chance to lose control of their technology to themselves, they risk losing it to political actors who don't trouble themselves with worries like that at all. In contrast with anxious figures like Amodei, the Pete Hegseths of the world are unambivalent and accordingly see, from their current positions of power, every escalation as working in their favor. As Dean Ball, a former adviser on AI to the Trump administration, wrote in response to the Anthropic news, whether or not the government's attempt to cripple the company stands up to legal scrutiny, its ambitions were clear: "The message sent to every investor and corporation in America: do business on our terms, or we will end your business." It's a message sent from MAGA's speculative future — the one where it's truly won.

It's true that talking constantly about an utterly transformed future probably makes Dario Amodei a strange person to negotiate with. But you can say the same thing about Pete Hegseth, who also thinks he might be in the middle of the last argument ever

1 month ago 6 2 1 0

wrong: they're measuring quote tweets

1 month ago 5 0 0 0

a great @joshdzieza.bsky.social story, full of the kind of testimony that I expect will have radicalizing effects well ahead of AI deployments

1 month ago 12 3 0 0

reading the WaPo coverage in particular gave me deep dread about over-extension into wildly inappropriate but superficially plausible uses, particularly given when the systems would have been built

1 month ago 4 0 0 0

funny memory about this story: Balaji reached out privately to say how much the boys at a16z loved it, after which they all spent the next few years letting Twitter drive them completely insane www.nytimes.com/2018/08/15/m...

1 month ago 73 10 2 0

a sharper way to make this argument would have been to simply point out: Anthropic is building in a world where Pete Hegseth — a genuine and obviously incompetent maniac — has meaningful power over it

1 month ago 18 5 1 0

q: hey why is it ok you scraped everyone's content to build your product
a: welllllll
q: hey, why is it different when companies scrape the proprietary data you distilled from ours
a: national security, next question

1 month ago 7 0 0 0
Anthropic is a ripe target here as far as jokes about hypocrisy are concerned: It's pitched as the conscientious AI lab, but it also settled last year, agreeing to pay out $1.5 billion to authors whose pirated books it used for training. These posts represent a fair critique of all of the big players, which have ingested enormous quantities of material created by others, often without permission, to build proprietary models over which they now claim something like authorship. The scrapers have become the scraped, their own powerful distillations of the world's information sampled, reconstituted, and distilled once more.

The backlash here isn't just about that irony. Anthropic is, at the moment, the AI lab to beat and the company whose products are most responsible for recent speculation about how AI might blow up the economy. As a result, mockery wasn't coming just from people whose content had been scraped by Anthropic or who generally object to the way LLMs are trained. It was coming from AI insiders who see big firms as pulling the ladder up or trying to fortify their early dominance with the help of regulators, copyright law, and government funding. Within the story of an international arms race, model distillation can be cast as a threat to national security and American economic competitiveness. Within some of the other stories about AI, it might look more like fear of competition in general: of cheaper models; of free, open-source models; and of the rapid commoditization of capabilities that, just a few months prior, were unique and prohibitively expensive to develop. The AI firms called out by Anthropic — DeepSeek, Moonshot, and MiniMax — make models that are open to use not just in China but in the U.S. and elsewhere and that are already competing for some of the same customers. Moonshot's latest Kimi models seem to perform, for many functions, about as well as the best American models did in the middle of last year. DeepSeek, the arrival of which briefly sent the AI industry and the stock market into chaos, is expected to release a major model update imminently, which may help explain why the big labs are all speaking up at the same time.

it was interesting to see a bunch of AI people suddenly start making Bluesky jokes about model scraping at Anthropic's expense — it's almost as if the LLM theft critique is grounded in something real and significant! nymag.com/intelligence...

1 month ago 40 7 1 1

This was, if I say so myself, a banger.

1 month ago 51 14 2 3

On this, really interesting piece on X’s “ideological ratchet” by @jwherrman.bsky.social: the platform is pulling users to the right, even elites who thought they were immune to radicalisation
nymag.com/intelligence...

1 month ago 68 27 1 3

hawking this link again. elite social media radicalization is vastly underemphasized compared to mass "misinformation," etc nymag.com/intelligence...

2 months ago 14 4 0 1

the way the answer does in fact equivocate is funny, but that's not really the point — it's been tuned to respond to a slightly different question with the answer, "anyway, doesn't matter, might makes right"

2 months ago 58 4 1 0

the character it's playing is certainly closer to "tutor" or "librarian" (vs "contemptuous teenage debater")

2 months ago 3 0 0 0

from what I've heard about xAI they basically red-team for "wokeness" but in a haphazard way that frequently involves forwarded tweets from the boss

2 months ago 35 2 2 0
In that sense, Grokipedia — like X and Grok — is also a warning. Sure, it's part of an excruciatingly public example of one man's gradual isolation from the world inside a conglomerate-scale system of affirming, adulatory, and ideologically safe feeds, chatbots, and synthetic media, a situation that would be funny if not for Musk's desire and power to impose his vision on the world. (To calibrate this a bit, imagine predicting the "Wikipedia rewritten to be more conservative by Elon Musk's anti-PC chatbot" scenario in the run-up to, say, his purchase of Twitter. It would have sounded insane, and you would have too.) But what Musk can build for himself now is something that consumer AI tools, including his, will soon allow regular people to build for themselves, or which will be constructed for them by default: a world mediated not just by publications or social networks but by omnipurpose AI products that assure us they're "maximally truth-seeking" or "objective" as they simply tell us what we want to hear.

more on this whole project from last year nymag.com/intelligence...

2 months ago 93 9 3 1

Great question! 🦾 Let me look into that for you.

It appears that:

• Comedy is legal again 🚀

2 months ago 50 0 0 0