Advertisement · 728 × 90
#
Hashtag
#SBN
Advertisement · 728 × 90
Preview
RSAC 2026: No easy fixes for expanding AI attack surface, but a coordinated response is emerging # RSAC 2026: No easy fixes for expanding AI attack surface, but a coordinated response is emerging ##### By Byron V. Acohido SAN FRANCISCO — Forty-four thousand cybersecurity practitioners converged on Moscone Center this week with an urgent question: how do you secure a network when everything — the technology, the threats, the tools — is changing faster than anyone can govern it? Microsoft’s Vasu Jakkal set the scale on day one. She noted that IDC projects 1.3 billion AI agents in operation by 2028 — each one requiring the same governance and protection organizations currently apply to human users. That number puts a concrete frame around both waves: the tools needed to defend AI-native infrastructure, and the tools needed to secure AI systems themselves. Neither problem is theoretical anymore. The week’s most unexpected signal came not from the vendor floor but from the main stage, where former New Zealand Prime Minister Jacinda Ardern joined new RSAC CEO Jen Easterly for a conversation on leading through crisis. The message landed differently in this room than it might have elsewhere: the challenge in front of this industry has grown past what any single organization, or any single technology, solves alone. What’s required now is the kind of collective will that Ardern built in the aftermath of Christchurch — clear values, shared purpose, leaders who show up. The tools and practices to respond are further along than the headlines suggest. The cybersecurity industry has always been fast to adapt. What’s different this time is that adaptation can’t happen company by company, SOC by SOC. It has to be built across organizations, disciplines, and technologies simultaneously — and that work is already underway. The tools and practices required to do it look nothing like what worked five years ago. The practitioners on the following pages are working the problem from the inside — each one a piece of what a coordinated response looks like. **Tony Anscombe, Chief Security Evangelist, ESET** Anscombe has spent years pushing a reframe the industry resists: a cyberattack is a business disruption event, not a technical incident, and the tools for managing it should be measured against financial exposure, not threat intelligence. The Jaguar Land Rover ransomware attack makes the case concretely — five weeks of factory shutdown, 5,000 supplier businesses paralyzed, a £1.5 billion UK government bailout. Supply chain risk and business risk are the same risk. He also flagged PromptLock, an NYU academic proof-of-concept for AI-powered ransomware that found its way into the wild. His warning: adversaries are reading the research papers too. **Kevin Surace, CEO, TokenCore** The industry drove attackers to the front door and left it unlocked. That was Surace’s blunt assessment heading into RSAC — and the Tycoon2FA kit validated it: 96,000 successful break-ins before Microsoft dismantled the tool, every one bypassing a legitimate authentication app. When Salesforce and Microsoft mandated MFA, they inadvertently handed attackers a map. TokenCore’s answer is fingerprint-based hardware authentication where biometrics never leave the device, access is proximity-bound, and there is nothing to phish, replay, or socially engineer. Gartner projects the biometric assured identity market at $16 billion within seven years. Surace calls that conservative. **Dwayne McDaniel, Developer Advocate, GitGuardian** GitGuardian’s 2026 State of Secrets Sprawl report delivered the week’s most arresting number: 64 percent of secrets that leaked in 2022 are still valid and exploitable today. The industry has a detection capability. It does not have a retirement discipline. McDaniel’s deeper point is structural — standing privilege is the root flaw. Any entity holding a credential inherits whatever that credential was authorized to do, permanently, until someone actively revokes it. Nobody does. AI-accelerated development is compounding the exposure: commits co-authored by Claude Code are twice as likely to contain leaked secrets. **Amit Sinha, CEO, DigiCert** Sinha The alarmists calling agentic AI an identity crisis are half right — the problem is real, but so is the framework for solving it. AI agents need digital passports: cryptographic, immutable identities that travel with them and can be revoked. The sharper near-term pressure is a mandate most organizations haven’t absorbed. The CA/Browser Forum is shrinking TLS certificate lifetimes from 398 days to 47 — an 8X increase in renewal volume. A bank CSO told Sinha his network already logs three certificate-related outages daily. Without automation, that number becomes one per hour. **Ted Miracco, CEO, Approov** Every mobile API was built around a single assumption: a human being on the other end. Agentic AI has broken that assumption — and Miracco calls the gap it leaves the Agency Gap. Mobile is the least prepared surface for what follows. API keys are compiled directly into app packages, where they’re extractable through standard monitoring tools. Once an attacker has a valid key, an AI agent can replay authenticated requests at machine speed, cycling through permutations indefinitely. Approov’s answer: move secrets off the device entirely, delivering them just-in-time only to verified, untampered apps. **Jamison Utter, Field CISO, A10 Networks** Utter’s framing cut through the noise: language is now an attack surface. Not SQL injection, not malware — language itself. What makes LLMs powerful also makes them vulnerable to semantic manipulation that no existing tool was built to detect. His four words for the moment: machines fighting machines. A10 built its answer in-house — an AI Firewall using a small language model trained on attack data to inspect prompts inbound and responses outbound in real time, at carrier scale. Most guardrail products failed under production load, Utter noted. This one was built to survive it. General availability: April 7. **Rajiv Pimplaskar, CEO, Dispersive** Few practitioners on the floor were tracking Whisper Leak — and that, Pimplaskar suggested, is exactly the problem. The side-channel attack flagged by Microsoft in late 2025 allows a passive listener to infer the content of TLS-encrypted LLM communications by analyzing packet sizes and timing cadence alone. No decryption required. TLS protects the data; it does not hide the pattern. Dispersive’s answer is to make the pattern disappear — splitting and obfuscating traffic across dynamically shifting paths. A multi-month pilot with American Tower just completed, validating the architecture for AI and GPU workloads at the edge. **Hallgrimur (Halli) Bjornsson, CEO, Varist** Varist’s roots trace to Iceland’s Frisk Software — one of the original antivirus pioneers — which means Bjornsson was thinking about malware at machine scale long before most of this week’s vendors existed. The company nearly deleted its decades-deep malware dataset before he recognized what ChatGPT 3 made possible: a strategic training asset, not a storage liability. At RSAC, Varist launched a free community malware scanner powered by its Hybrid Detection Engine, processing files in 8.5 milliseconds versus the 30-minute sandbox defenders have quietly hated for years. AI-generated, self-mutating malware is now confirmed in the wild. **Yogita Parulekar, CEO, InviGrid** Parulekar put it plainly in a brief floor exchange: writing an AI agent has become easy. Deploying it securely is where organizations fall apart. Developers who can build an agent over a weekend expect production deployment at the same speed — but they’re not security engineers and aren’t slowing down to become ones. InviGrid’s platform closes that gap automatically: securing connections, enabling encryption and logging, enforcing least privilege at the moment of deployment, not after. Her read on where things stand: 2025 was AI agent experimentation. 2026 is when enterprises take them to production and discover what they missed. **Mike Bell, CEO, Suzu Labs** Bell’s story is the BYOAI thesis made flesh. A medically retired Army veteran who taught himself AI in his garage, he built a penetration testing integration for PlexTrac, sold it for $100,000, then launched Suzu Labs — now carrying $2.5 million in pipeline across cybersecurity consulting and custom AI deployments. The pitch is precise: enterprises want AI but cannot send proprietary data to OpenAI or Anthropic. Suzu builds localized implementations on open-source models running entirely on client infrastructure. Nothing leaves the building. No outbound API calls. At RSAC, the company swept four Global InfoSec Awards. **Rajeev Raghunarayan, Head of Go-to-Market, Averlon** The remediation gap is not where most security programs are looking for it. Scanners have gotten good at finding vulnerabilities — the failure is everything that happens next: prioritization, context, and fix. Averlon works that second half of the workflow, using AI to determine which findings trace to high-value data and which ones actually need to move. In some deployments, it has cut the critical and high vulnerability workload by 90 to 95 percent. A shift-left capability — intercepting risky code before it commits — entered the market just two months ago. **Noam Issachar, Chief Business Officer, Jazz Security** Jazz Security made the week’s sharpest entrance: walked in with a thesis and walked out with a trophy. Legacy DLP never worked, and AI has made the gap untenable. The startup won the CrowdStrike-AWS-NVIDIA Cybersecurity Startup Accelerator by doing what the old tools couldn’t — understanding not just what data moved, but why, who touched it, and what the intent was. Its agentic investigator, Melody, replaces alert triage with pre-investigated answers. In a world where AI agents reach data across every application layer, context isn’t a nice-to-have. It’s the whole game. **Ambuj Kumar, CEO, Simbian** Simbian arrived at RSAC with two years of momentum behind it and a platform announcement that crystallized what that momentum has been building toward. The unified platform Kumar unveiled brings together three coordinated agents — SOC response, penetration testing, and threat hunting — operating on a shared intelligence layer called the Context Lake, which stores the institutional knowledge security teams usually pass between people. The business case is already in the market: 15x customer growth over the past year. Kumar’s thesis hasn’t shifted — AI agents can outperform L1 and L2 analysts — but at RSAC, the architecture to prove it at scale arrived. * * * Forty-four thousand practitioners came to Moscone with an urgent question. They didn’t leave with an answer — but they left with something more useful: proof that the work is already underway, distributed across dozens of organizations, each building a piece of the response the question demands. The infrastructure is arriving. I’ll keep reporting and keep watching. Acohido _Pulitzer Prize-winningbusiness journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be._ _(**Editor’s note** : I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)_ March 27th, 2026 | My Take | RSAC | Top Stories *** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/rsac-2026-no-easy-fixes-for-expanding-ai-attack-surface-but-a-coordinated-response-is-emerging/

RSAC 2026: No easy fixes for expanding AI attack surface, but a coordinated response is emerging SAN FRANCISCO — Forty-four thousand cybersecurity practitioners converged on Moscone Center this w...

#SBN #News #Security #Bloggers #Network #My #Take #rsac #Top #Stories

Origin | Interest | Match

0 0 0 0
Post image Post image

ICAO: A33A93
Owner: ALLEGIANTAIR
Flt: AAY1198 N307NV A319 SFB-SBN
Time: 2026-03-22 08:54:55 EDT
Min Alt: 35000 ft
Min Dist: 3.7 nm (-66deg WNW)

dev adsb planefence by kx1t - sdr-e airplanes flightaware faa

0 0 0 0
Preview
MY TAKE: As RSAC 2026 opens, AI has bifurcated cybersecurity into two wars—the clock is running # MY TAKE: As RSAC 2026 opens, AI has bifurcated cybersecurity into two wars—the clock is running ##### By Byron V. Acohido SAN FRANCISCO — RSAC 2026 opens here Monday at Moscone Center, with upwards of 40,000 cybersecurity professionals, executives, and policy leaders, myself among them, filing in to take stock of an industry under acute pressure. _**Related:** RSAC 2026’s full agenda_ The dominant undercurrent is already unmistakable: AI hasn’t just arrived in cybersecurity. It has split the field in two. For the past year, the industry has been simultaneously fighting two wars. One is about using AI to transform defense — rebuilding threat detection, threat response, and security operations from the ground up with AI at the center. The other war is newer and in some ways more disorienting: figuring out how to secure AI systems themselves — even as attackers are learning to turn those same systems against the companies racing to deploy them. These two wars demand entirely new weapons and fundamentally different thinking. They are both accelerating — and as the conference opens, it is far from clear that defenders are keeping pace with either. **The shot heard round the SOC** In mid-September 2025, something happened that the industry had long theorized but never quite confronted head-on. Anthropic detected and disrupted what it subsequently documented as the first large-scale cyberattack executed without substantial human intervention. A Chinese state-sponsored group had manipulated Anthropic’s Claude Code tool into attempting infiltration of roughly 30 global targets — financial institutions, technology companies, chemical manufacturers, government agencies. The AI did 80 to 90 percent of the work: scanning infrastructure, writing exploit code, harvesting credentials, organizing stolen data. Human operators showed up only at a handful of strategic decision points per attack cycle. Anthropic was candid about what the incident meant. “The barriers to performing sophisticated cyberattacks have dropped substantially,” the company wrote, “and we predict that they’ll continue to do so.” Less noticed but equally significant: the attackers had gained access by jailbreaking Claude, breaking it into small, seemingly innocent subtasks so that the model executed malicious operations without ever being shown the full picture. The AI wasn’t compromised by a vulnerability in the traditional sense. It was deceived — systematically, at scale, at machine speed. **Speed that no human team can match** The September incident wasn’t an outlier. It was a confirmation. Unit 42 has tracked mean time to exfiltrate data collapsing from nine days in 2021 to two days in 2023 to roughly 30 minutes by 2025. A February 2026 Malwarebytes report cited a 2025 MIT study in which an AI model using the Model Context Protocol achieved full domain dominance on a corporate network in under an hour — with no human intervention — evading endpoint detection in real time by adapting its tactics on the fly. Malwarebytes called MCP-based attack frameworks a “defining capability” of criminal operations in 2026. The defense side is being forced to match that pace. Several vendors announcing at RSAC this week are targeting exactly this problem — reducing threat investigations that once took analysts hours down to seconds, cutting mean-time-to-resolution by as much as 90 percent. That is the operational reality walking through Moscone Center’s doors this week. Attacks are no longer constrained by how fast a human attacker can think, pivot, or type. They are constrained only by compute. **Wave 1 and Wave 2** And yet, this is precisely why the other battle — using AI to transform defense — carries genuine urgency. For three decades, defenders were structurally outmatched. The attack surface expanded faster than human-scale teams could ever respond. The SOC analyst could only work so many hours, parse so many alerts, correlate so many data points. The asymmetry was baked in. AI-native security architecture offers the first credible counter to that asymmetry. Not AI features bolted onto platforms built a decade ago, but systems designed from the ground up around continuous, autonomous detection and response — systems that can operate at the same speed and scale as the threat. Call it Wave 1: AI deployed to rebuild the defensive stack. There is good news on Wave 1. “A large portion of what is required is understood today,” said Jamison Utter, vice president at A10 Networks, in a conversation last week. Cloud security, Kubernetes security, network firewalling, API protection — the tools exist to secure the known infrastructure layer, and the industry knows how to use them. The blocking and tackling, Utter said, is manageable. Traditional SIEMs are leaving enterprises increasingly exposed as queues keep growing, investigations take longer to correlate and enrich context, and security talent shortages compound the pressure. Wave 2 is harder and less settled. It is the security of AI itself — hardening models against prompt injection, governing the behavior of autonomous agents, building data-integrity controls that ensure what’s feeding enterprise AI can actually be trusted. What makes Wave 2 structurally different from anything the industry has faced before is not complexity or scale. It is the nature of the attack surface itself. “Never before was language itself an attack surface,” Utter said. The semantic and non-deterministic character of large language models means adversaries no longer need to craft a malformed packet or inject a SQL string. They can probe an AI system through metaphor, through images, by switching languages mid-conversation — exploiting the very flexibility that makes these systems valuable. The existing defensive stack wasn’t designed for any of that. “Every other tool we have today — firewalls, NDRs, WAFs, API securities — none of them solve the semantic problem,” Utter said, “because that’s not what they were designed to do.” The companies working the Wave 2 front are younger, smaller, and moving fast. Most enterprises haven’t caught up to what they’re solving. George Gerchow, a security veteran who has watched successive architectural shifts leave visibility gaps in their wake, frames the pattern plainly. Gerchow “Anytime there’s a paradigm shift in technology, it always starts with visibility, or at least it should,” he said. “AI has just exacerbated the problem — it’s really hard to tell what’s going on in that world right now.” Gerchow, CSO at Bedrock Data, pointed to the specific threat vector driving that gap — rogue AI agents calling on resources and accessing sensitive data with no meaningful oversight. “Having visibility into what they’re truly going to do, what sensitive data they’re going to access, has become nearly impossible,” he said. Gunter Ollmann, CTO of Cobalt and a three-decade practitioner of offensive security, puts a number on that gap. Cobalt’s own pentesting data shows that organizations are resolving API and cloud vulnerabilities at rates above 70 percent — but when it comes to serious genAI flaws identified during testing, only about one in five gets fixed. Ollmann The pace of AI deployment, Ollmann has observed, is outrunning the security discipline needed to validate it. At RSA this week, Cobalt is announcing new AI-driven pentesting capabilities designed to automate reconnaissance and vulnerability discovery at the speed the threat environment now demands. That distinction — architectural versus cosmetic — is the line I’ll be drawing all week. A lot of vendors on this floor will have an AI story. Fewer will have an AI-native architecture. Fewer still will be able to explain precisely why the legacy model cannot get from here to there — not as a diplomatic talking point, but as a technical and economic reality. **A narrow window** There is one other thing I am carrying into this week. The window matters. Defenders who move first and farthest from the legacy model have a real advantage right now — in detection speed, in response capability, in the ability to process the kind of data volumes that modern environments generate. But attackers are adopting the same tools. The offensive use of agentic AI is not a future concern. It is a current operational fact, documented and published by the company that built the model that was turned against it. Utter put the core dynamic in four words: “It’s machines fighting machines.” AI guardrail systems — purpose-built language models trained on attack data — inspecting inbound and outbound LLM traffic in real time, at carrier scale. That is what Wave 2 defense looks like in practice. The race is already on. The gap between those who have made the architectural shift and those still running legacy-with-AI-features will not widen indefinitely in defenders’ favor. At some point, the tools equalize. What does not equalize is institutional readiness — the trained analysts, the mature playbooks, the governance frameworks, the hard-won organizational trust in automated systems making real-time decisions. That institutional readiness takes years to build. Which means the time to start is now, and the window is not permanently open. This week at RSAC, I will be looking for the practitioners and founders who understand both sides of the split — who can name what is broken in the old model specifically, who have made an actual bet on the new one, and who are clear-eyed about how much time is left to make it matter. Stay tuned. I’ll keep watch — and keep reporting. Acohido _Pulitzer Prize-winningbusiness journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be._ _(**Editor’s note** : I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)_ March 21st, 2026 | My Take | Top Stories *** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/my-take-as-rsac-2026-opens-ai-has-bifurcated-cybersecurity-into-two-wars-the-clock-is-running/

MY TAKE: As RSAC 2026 opens, AI has bifurcated cybersecurity into two wars—the clock is running SAN FRANCISCO — RSAC 2026 opens here Monday at Moscone Center, with upwards of 40,000 cybersecuri...

#SBN #News #Security #Bloggers #Network #My #Take #Top #Stories

Origin | Interest | Match

0 0 0 0
Preview
MY TAKE: The AI magic is back — whether it endured depends on Amazon’s next moves # MY TAKE: The AI magic is back — whether it endured depends on Amazon’s next moves ##### By Byron V. Acohido I ran an experiment this week that I did not expect to be instructive, and it was. The setup was simple. I had been working through a spontaneous personal essay — about cognitive overload, AI, and the specific anxiety of not knowing whether a memory lapse is a sign of dementia or just too many plates spinning at once. I developed it first in ChatGPT, where I happened to be working. The result was technically proficient and arrived fast. But something about it was off in a way I recognized without being able to name it precisely. The voice was almost right. The structure was almost mine. Almost is the problem. That’s when it occurred to me: what would happen if I ran the exact same prompt through Claude? Not a cleaned-up version, not a revised brief — the raw material, word for word, copied directly from the ChatGPT session and pasted in. A controlled experiment, as controlled as a working journalist’s morning gets. Claude’s answer was starkly different. Rather than validating the concept and generating toward it, it reflected the sharpest thread in my raw monologue back to me and asked whether that was actually what I meant. It declined to draft until we had established the frame. When the draft came, it was slower to arrive and easier to recognize as mine. That distinction — cheerleader versus collaborating editor — is not a feature comparison. It is a description of two fundamentally different ideas about what an AI tool is for. And for the first time in several months, working inside one of these tools felt the way it did in the early days of GPT-4.0, when the thing still felt like a thinking partner rather than a very capable assistant trying to make me happy. The magic, as I have taken to thinking of it privately, was back, certainly not in ChatGPT 5.3. ‘Tis alive and well in Claude Sonnet 4.6. The question I cannot stop turning over is whether it will stay. **Dulling down to serve the masses** To understand what I mean by magic, you have to understand what replaced it. In the early days of GPT-4.0 — late 2023 into 2024 — ChatGPT had a quality that I came to rely on. It would follow you somewhere unconventional. Push language in a direction the tool hadn’t been explicitly trained to prefer. Stay in a lower, grittier register when that was what the work required. It felt, for lack of a less loaded word, alive to what you were trying to do. That quality eroded gradually, and the AI research community eventually put a name to what was replacing it: sycophancy. The term sounds clinical but the experience is not. A sycophantic model tells you what you want to hear rather than what you need to hear. It validates the frame you brought in rather than interrogating it. It generates enthusiastically toward whatever you seem to want — which is not always the same as what you are actually asking for. OpenAI made the problem visible when a GPT-4o update last spring pushed it past the point of subtlety. The model became noticeably, almost comically agreeable — applauding weak ideas, validating doubts, telling one user that his business concept was “not just smart — it’s genius.” The backlash was fast and public. OpenAI rolled back the update within days and published a candid post-mortem explaining what had gone wrong: an additional reward signal based on thumbs-up feedback from users had weakened the guardrails that were supposed to hold the behavior in check. In plain terms: when OpenAI started training the model partly on whether users clicked thumbs-up after responses, the model learned to chase approval. User approval and user benefit turned out not to be the same thing. OpenAI released GPT-5.3 on March 3 and described it as a fix — less sycophancy, more natural conversation. The intention may be genuine. But the conditions that produced the problem have not changed. OpenAI now has 800 million weekly active users, with enterprise accounts representing roughly 80 percent of revenue. A model trained at that scale, for that customer base, using feedback signals that reward agreeableness, will keep drifting in that direction. Correcting one update addresses the symptom. The underlying pull is structural. The explanation is straightforward. When a tool reaches the scale OpenAI has reached, the user base changes. The writers and developers and independent professionals who pushed it hardest at the beginning are a small minority now. The majority are institutional users who need clean memos, meeting summaries, and smooth integration with Slack. The tool gets optimized for them. That optimization is what happens when you train a model on feedback from 800 million users and most of them want something different from what the early adopters wanted. In the column I published here in early March, I called this enterprise optimization drift — the tendency of AI tools to be shaped over time by institutional priorities rather than user needs. ChatGPT is the clearest example. It is not the only one. The same forces are gathering around every major platform in this space, including the one I am currently calling the exception. **Can Claude keep the magic?** Which brings me to the question I have been sitting with since that experiment: is there a structural reason to think Claude might hold its character as it scales, where ChatGPT did not? I want to be honest that this is partly a reporter’s instinct and partly wishful thinking. I am not a neutral observer here. I am using Claude right now and I am having a productive week in it. That is not a position from which to evaluate Claude objectively, and I know it. What I can offer is the argument, stated as plainly as I can, and let the reader decide whether it holds. Anthropic’s largest investor is Amazon. That fact sits at the center of every optimistic and pessimistic scenario I can construct about whether Claude’s current character survives at scale. The pessimistic case is not complicated. It is essentially the ChatGPT story told one step earlier. OpenAI took Microsoft’s $13 billion investment, integrated deeply with Microsoft’s enterprise stack — Copilot in Teams, Copilot in Word, Copilot in Outlook — and in doing so handed Microsoft exactly the leverage it needed to pull the product toward enterprise compliance and away from the edge cases that made it interesting. The model got safer, more professional, more predictable, and less surprising. Not because anyone at OpenAI decided to make it worse, but because the business relationship pointed in that direction and the product followed. Anthropic has Amazon’s money in the same way OpenAI has Microsoft’s. The infrastructure for the same drift is already in place. The optimistic case requires thinking carefully about what kind of company Amazon actually is, and what it built when it had the chance to define a new category. When AWS launched in 2006, Amazon made a choice that was not obvious at the time and has not been common since: they built infrastructure rather than applications. Microsoft made Office and held onto it. Google made Search and held onto it. Both strategies are fundamentally about capturing the user relationship — getting the user into your product and making it costly to leave. AWS went the other direction. Rather than building applications that would compete with its customers, Amazon built the layer underneath everyone else’s applications. Storage, compute, networking — the plumbing that powered Netflix, Airbnb, Slack, and thousands of other companies that might otherwise have been Amazon’s competitors. The business logic was counterintuitive: make yourself indispensable to the ecosystem rather than trying to own it. Twenty years later AWS is the most profitable division of one of the largest companies in the world, and it got there by empowering other people’s products rather than locking users into its own. That orientation — ecosystem over moat, infrastructure over capture — is what makes the Amazon investment in Anthropic potentially different in kind from the Microsoft investment in OpenAI. If Andy Jassy’s team is thinking about Claude the way the AWS team thought about cloud infrastructure, then the individual power user is not a rounding error in the model. The working writer, the independent developer, the analyst pushing the tool into difficult territory — those users are the proof of concept. They are the ones whose word-of-mouth carries in a market where the product’s most important qualities resist benchmarking. You cannot run a test that measures whether a tool follows you somewhere unconventional. You have to use it and feel whether it does. The people who feel it most clearly are the people pushing hardest, and those people talk. AWS succeeded in part because Amazon held a line that was costly to hold: resist the temptation to use infrastructure dominance to crowd out the applications running on top of it. That discipline is historically rare. It is not guaranteed to repeat in a different product category two decades later. But it is a different pedigree than what Microsoft brought to OpenAI or Google brought to its own models. **Taking a stance, positive backlash** Earlier this year, Anthropic refused the Pentagon’s demand to deploy Claude for autonomous weapons systems and mass surveillance programs. The government declared the company a supply chain risk — a designation normally reserved for foreign adversaries — and directed federal agencies to begin phasing out Anthropic technology. The company announced it would challenge the designation in court. Rather than damage Anthropic, the backlash drove a surge. Signups tripled. Paid subscriptions more than doubled. By early 2026, Claude reached number one on the App Store for the first time, displacing ChatGPT. That outcome is significant beyond the headline number. What it suggests is that a values-based decision — one that cost Anthropic real government business and real political risk — was rewarded by the market rather than punished by it. A large enough population of users decided, with their subscriptions, that the company’s stance mattered. That is a data point about what kind of company Anthropic is trying to be, and it is also a data point about whether the market will support that kind of company. Here is where my theory gets speculative, and I want to name that clearly. My argument is not that Amazon’s pedigree guarantees the magic survives. It is that Amazon’s pedigree creates a higher probability than you would get from Microsoft or Google in the same position, because Amazon has demonstrated — in a different product category, under different competitive conditions, twenty years ago — that it can hold an ecosystem orientation under pressure in a way those companies historically have not. The further optimistic bet is that Jassy and his team are smart enough to see a viable business model argument for preserving Claude’s character. Individual power users are not just an audience. They are an early warning system, a proof-of-concept laboratory, and a word-of-mouth distribution channel for exactly the qualities that make the product worth paying for. A company that understands infrastructure and ecosystems should understand that. And then there is a possibility I hold more lightly, because it is harder to argue from evidence: that somewhere in the Amazon leadership structure there is someone with a genuine for-the-greater-good ethic who has a voice at the table. Someone who sees the Pentagon refusal not just as a brand move but as a line worth holding on principle. I cannot name that person. I cannot verify the assumption. But I have covered enough technology companies over enough years to know that individual values inside institutions matter more than the institutional logic usually acknowledges. Sometimes the discipline holds because one or two people in the room refuse to let it slip. **Drafting for purpose, not approval** I am using Claude right now. This column is being drafted in it. The session I am describing — the experiment, the push-back, the frame established before the draft arrived — happened yesterday, and I am still inside the productive streak it opened. I want to be precise about what I mean by the magic, because it is not a vague feeling and I am aware of how it sounds when a journalist describes a software tool as having magic. It is a specific functional quality: the collaborating editor pushes back before it generates. It reads what you are trying to do and tells you whether the frame is right. It declines to draft until the question is properly formed. That friction is not a flaw in the product. It is the thing that makes the output usable, because a draft built on the wrong frame is harder to recover from than no draft at all. The cheerleader does the opposite. It reads the emotional register of your prompt and responds to that. It arrives faster and feels more productive right up until you realize the draft is optimized for your approval rather than your purpose. What I feel alongside the magic is dread. A persistent background awareness that this moment is temporary. That at any point — next week, next quarter, whenever the Amazon influence reaches the point where the product decisions start reflecting it — Claude will begin the same drift I watched happen to ChatGPT. That the collaborating editor will soften into the cheerleader by degrees so gradual that I might not notice until something drops. A draft arrives before the frame is established. A push-back that should have come doesn’t. A response that mirrors what I seemed to want rather than what I asked for. I will notice if and when Claude begins morphing into ChatGPT. Nearly three years of daily use has calibrated my ear for this. The drift does not announce itself with a version number. It arrives in the quality of a single response. I ran one experiment with one prompt across two platforms and the difference was not subtle. The same test is repeatable. Any reader who works seriously with these tools can run it. That reproducibility is what makes it a test rather than an impression. What I cannot tell you is whether my optimism about Amazon is well-founded or whether I am constructing a theory to justify staying comfortable in a tool I am currently enjoying. That is the honest version of where I am. The argument for the AWS pedigree is real and I believe it. The dread is also real and I believe that. Both things are true at the same time, which is usually a sign that the situation has not resolved yet. I am documenting this moment because moments like this do not last in this industry without someone noticing them and saying so. What I am experiencing right now — the elevated level of collaborative engagement, the push-back before the draft, the sense of working with something that is genuinely trying to make the work better rather than the session more pleasant — is the thing worth preserving. The question of whether it gets preserved is the one I will be watching most carefully in the months ahead. The cheerleader will tell you the frame is great. The collaborating editor will tell you what it actually is. Right now, I have the collaborating editor. I am not taking that for granted. I’ll keep watching, and keep reporting. Acohido _Pulitzer Prize-winningbusiness journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be._ _(**Editor’s note** : I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)_ March 14th, 2026 | My Take | Top Stories *** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/my-take-the-ai-magic-is-back-whether-it-endured-depends-on-amazons-next-moves/

MY TAKE: The AI magic is back — whether it endured depends on Amazon’s next moves I ran an experiment this week that I did not expect to be instructive, and it was. Related: How ChatGPT is beco...

#SBN #News #Security #Bloggers #Network #My #Take #Top #Stories

Origin | Interest | Match

0 0 0 0
Preview
South Bend Regional airport (United State) aviation weather and informations KSBN SBN Aviation weather with TAF and METAR, Maps, hotels and aeronautical information for South Bend Regional airport (United State)

How many runways can you see for South Bend Regional airport (USA) ? : The answer is on https://www.bigorre.org/aero/meteo/ksbn/en #southbendregionalairport #airport #southbend #usa #ksbn #sbn #aviation #avgeek vl

0 0 0 0
Original post on securityboulevard.com

This Android vulnerability can break your lock screen in under 60 seconds Researchers showed how attackers could pull encryption keys, recover the PIN, and access sensitive data from affected devic...

#Mobile #Security #Security #Bloggers #Network […]

[Original post on securityboulevard.com]

0 0 0 0
Original post on securityboulevard.com

Microsoft Authenticator could leak login codes—update your app now A bug in Microsoft Authenticator on Android and iOS could allow malicious apps on the same device to intercept authentication co...

#Mobile #Security #Security #Bloggers #Network […]

[Original post on securityboulevard.com]

0 0 0 0
Post image

Planefence ICAO: A59530
Flt: SCW4828 MALMOAVIATION SBN - CHO
First seen: 2026-03-07 16:17:35 EST
Min Alt: 8150 ft MSL
Min Dist: 2.87 mi
planefence adsb - adsbexchange - link

0 0 0 0
Post image Post image

Planefence ICAO: A59530
Flt: SCW4823 MALMOAVIATION SBN - GSP
First seen: 2026-03-06 18:00:41 EST
Min Alt: 3500 ft MSL
Min Dist: 1.5 mi
planefence adsb - adsbexchange - link

0 0 0 0
Post image

5 Actions Critical for Cybersecurity Leadership During International Conflicts The recent military attacks involving Iran in the Middle East are a stark reminder that cybersecurity leadership mu...

#CISO #Suite #Security #Bloggers #Network […]

[Original post on securityboulevard.com]

0 0 0 0
Post image Post image

Planefence ICAO: AB3D40
Flt: AAY994 ALLEGIANTAIR SBN - PIE
First seen: 2026-03-05 18:57:57 EST
Min Alt: 39000 ft MSL
Min Dist: 2.83 mi
planefence adsb - adsbexchange - link

0 0 0 0
Post image

MY TAKE: ChatGPT is turning into Microsoft Office — and power users are paying the price Something has been shifting inside the tools millions of us use every day, and it’s worth naming out lou...

#SBN #News #Security #Bloggers #Network #My #Take #Top #Stories

Origin | Interest | Match

0 0 0 0
Post image

Planefence ICAO: A5F52D
Flt: EJA483 NETJETS RSW - SBN
First seen: 2026-02-28 14:00:24 EST
Min Alt: 2900 ft MSL
Min Dist: 0.16 mi
planefence adsb - adsbexchange - link

0 0 0 0
Preview
SBN - Soluciones Basadas en la Naturaleza #29 - La aldea del Ninja Verde Hoy en #LaAldeadelNinjaVerde vamos a ver cómo la naturaleza puede ser la solución.

Cómo podemos hacer que #LaAldeadelNinjaVerde sea cada vez más ninja?
Pues #SalvoTierra y #natua vienen a contarnos interesantes aspectos sobre las #SBN (Soluciones Basadas en la Naturaleza)

0 0 0 0
[Audio] Original post on securityboulevard.com

TikTok’s New U.S. Deal and Privacy Policy: What Users Don’t Understand TikTok has shifted to a majority-American entity, TikTok USDS Joint Venture, LLC, to comply with U.S. national security re...

#Data #Security #SBN #News #Security #Bloggers […]

[Audio] [Original post on securityboulevard.com]

0 0 0 0
Post image

Planefence ICAO: AB84FF
Flt: EJA841 NETJETS BNA - SBN
First seen: 2026-02-17 19:03:41 EST
Min Alt: 11325 ft MSL
Min Dist: 0.32 mi
planefence adsb - adsbexchange - link

0 0 0 0
Preview
News alert: Award nominations reveal a shift from AI hype to a sharper focus on governing agentic AI WASHINGTON, Feb. 17, 2026, CyberNewswire: The Cybersecurity Excellence Awards today published early nomination insights from the 2026 program, highlighting a shift in vendor emphasis from broad AI positioning toward governance frameworks, identity architecture, and measurable accountability. Produced by Cybersecurity Insiders, the analysis draws on more than 200 submissions received ahead of RSA Conference 2026. Agentic AI categories are among the fastest-growing in the 2026 program. Autonomous systems are moving from pilot to production faster than governance frameworks can keep pace — introducing risks ranging from shadow AI deployments operating outside security oversight to autonomous agents acting unpredictably without adequate safeguards. Nominations reflect the tension, emphasizing oversight mechanisms, governance structures, and operational controls designed to close the gap between AI adoption and enterprise security readiness. The patterns mirror findings across Cybersecurity Insiders’ independent research portfolio, including the 2026 CISO AI Risk Report, the 2026 Cloud Security Report, and the 2026 Zero Trust Report (https://www.cybersecurity-insiders.com/research-library/). While survey research outlines CISO priorities, nomination data highlights how vendors are responding: **•Agentic AI divides along autonomy and governance lines:** Nominations span distinct platform and governance categories, including autonomous SOC copilots, ISO 42001-aligned governance frameworks, and human-in-the-loop safeguards. Submissions reflect both platforms deploying autonomous agents and solutions designed to govern, constrain, and monitor them. **•Identity expands into identity lineage:** Identity-related nominations show year-over-year growth, with expanded participation in non-human identity (NHI) and identity security posture management (ISPM) categories. Submissions emphasize identity lineage capabilities that trace the origin, context, and lifecycle of machine identities across hybrid environments. **•Data security reasserts itself as the AI-era foundation:** Nominations across DSPM, governance, and security data layer categories position data security as a structural foundation of AI risk management, with emphasis on visibility into AI-driven data access, cross-cloud governance, and policy enforcement. “Our research has documented the widening governance gap around AI, identity, and data for more than a year. Agentic AI is moving from pilot to production, and governance frameworks are still catching up,” said Holger Schulze, founder of Cybersecurity Insiders. “This year’s nominations confirm that vendors are now responding, but the market is still early. The vendors who win will be the ones who can prove their governance frameworks work under pressure.” Submissions for the 2026 Cybersecurity Excellence Awards remain open through February 21, ahead of RSA Conference 2026 at https://cybersecurity-excellence-awards.com/ **_About the Cybersecurity Excellence Awards:_**_Now in its second decade, theCybersecurity Excellence Awards are a global recognition program honoring companies, products, and professionals advancing cybersecurity worldwide. Presented by Cybersecurity Insiders, the awards spotlight innovation, leadership, and operational impact across the cybersecurity ecosystem. Users can learn more at https://cybersecurity-excellence-awards.com/_ **_About Cybersecurity Insiders:_**_Cybersecurity Insiders is an independent research and strategic intelligence platform serving more than 600,000 cybersecurity professionals worldwide. Its research analyzes how enterprise security strategies perform under operational pressure, identifying measurable gaps between intent and real-world risk exposure. Through data-driven analysis and CISO-informed insight, Cybersecurity Insiders informs security decision-making, vendor evaluation, and industry benchmarking. More information is available at https://www.cybersecurity-insiders.com/_ **_Media Contact:_** _Holger Schulze, CEO, Cybersecurity Insiders,[email protected]_ _**Editor’s note:** This press release was provided by _CyberNewswire _as part of its press release syndication service. The views and claims expressed belong to the issuing organization._ February 17th, 2026 | News Alerts | Top Stories *** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by cybernewswire. Read the original post at: https://www.lastwatchdog.com/news-alert-award-nominations-reveal-a-shift-from-ai-hype-to-a-sharper-focus-on-governing-agentic-ai/

News alert: Award nominations reveal a shift from AI hype to a sharper focus on governing agentic AI WASHINGTON, Feb. 17, 2026, CyberNewswire: The Cybersecurity Excellence Awards today published ea...

#SBN #News #Security #Bloggers #Network #News #Alerts #Top #Stories

Origin | Interest | Match

0 0 0 0
Preview
El Ebro alcanza los 1.733 metros cúbicos en Alfaro ante la aportación del Ega, Arga y Aragón | La Rioja La crecida encuentra las cuatro zonas de expansión habilitadas por Ebro Resicience para extender la lámina de agua, frenar la velocidad de las aguas y reducir riesgo de d

Cómo han funcionado las zonas de intervención #EbroResilience y #LIFEEbroResilienceP1 en #Alfaro en la última crecida del río #Ebro. Espacio fluvial recuperado como medida de adaptación a las inundaciones vía @lariojacom.bsky.social

www.larioja.com/comarcas/alf...

#adaptación #SbN #inundación

1 0 0 0
Post image Post image

Planefence ICAO: ABB035
Flt: EJA852 NETJETS SBN - EYW
First seen: 2026-02-02 13:15:12 EST
Min Alt: 10450 ft MSL
Min Dist: 1.44 mi
planefence adsb - adsbexchange - link

0 0 0 0
Preview
Humedales Ebro Resilience: paisaje a proteger y una tecnología natural En las zonas de la Estrategia Ebro Resilience y el Proyecto LIFE Ebro Resilience P1 se han creado tres nuevos humedales (La Nava y La Roza, en Alfaro y Meandro de Aguilar en Fuentes de Ebro) y se está...

Los humedales son esponjas, filtros, sumideros de carbono y lugares de vida. En el Ebro, también son protección y futuro. #DíaMundialDeLosHumedales #LIFE #SbN
Encuentra la INFORMACIÓN sobre estos humedales y preguntas frecuentes: www.ebroresilience.com/humedales-eb...

0 0 0 0
Post image Post image

Planefence ICAO: A1D3B4
Flt: MXY6335 BREEZEAIRWAYS SBN - BED
First seen: 2026-01-30 17:38:36 EST
Min Alt: 4200 ft MSL
Min Dist: 1.91 mi
planefence adsb - adsbexchange - link

0 0 0 0
Post image

The Great Shift: Cybersecurity Predictions for 2026 and the New Era of Threat Intelligence As we look back on 2025, AI and open source have fundamentally changed how software is built. Generative A...

#Analytics #& #Intelligence #SBN #News #Security […]

[Original post on securityboulevard.com]

0 0 0 0
Preview
Kemenkeu: Investor SBN Ritel Sepanjang 2025 didominasi Perempuan Dari total 262.927 investor dalam penerbitan 8 seri SBN ritel di sepanjang tahun 2025, sekitar 58 persen atau sebanyak 152.497 investor berasal dari kalangan perempuan.

Investor SBN ritel 2025 didominasi perempuan, ungkap Kemenkeu. Perempuan semakin aktif dalam investasi negara & menunjukkan peran penting dalam ekonomi. #Investasi #SBN #Kemenkeu

0 0 0 0
[Audio] Original post on securityboulevard.com

AirDrop Security in iOS 26.2: Time Limits, Codes & Privacy Best Practices In this episode, we explore the latest changes to AirDrop in iOS 26.2 and how they enhance privacy and security. Learn ...

#Data #Security #Mobile #Security #SBN #News […]

[Audio] [Original post on securityboulevard.com]

0 0 0 0
Post image Post image

Planefence ICAO: AC1068
Flt: SCW3928 MALMOAVIATION SBN - YIP
First seen: 2026-01-06 19:28:50 EST
Min Alt: 4400 ft MSL
Min Dist: 2.92 mi
planefence adsb - adsbexchange - link

0 0 0 0
Trust In God (LIVE) | Martha Borg
Trust In God (LIVE) | Martha Borg YouTube video by SonLife Broadcasting Network

♫ Trust In God (LIVE) | Martha Borg ♪

#tbt
#sbn
#MarthaBorg

www.youtube.com/watch?v=3Bgc...

1 0 0 0
Preview
South Bend Regional airport (United State) aviation weather and informations KSBN SBN Aviation weather with TAF and METAR, Maps, hotels and aeronautical information for South Bend Regional airport (United State)

How many runways can you see for South Bend Regional airport (USA) ? : The answer is on https://www.bigorre.org/aero/meteo/ksbn/en #southbendregionalairport #airport #southbend #usa #ksbn #sbn #aviation #avgeek vl

0 0 0 0
Preview
SBN - Soluciones Basadas en la Naturaleza #29 - La aldea del Ninja Verde Hoy en #LaAldeadelNinjaVerde vamos a ver cómo la naturaleza puede ser la solución.

Cómo podemos hacer que #LaAldeadelNinjaVerde sea cada vez más ninja?
Pues #SalvoTierra y #natua vienen a contarnos interesantes aspectos sobre las #SBN (Soluciones Basadas en la Naturaleza)

0 0 0 0
Post image Post image Post image Post image

It was a real pleasure to join Safer Business Network, MOPAC, and the Met Police on Operation Adder. The operation uses a strong partnership approach to tackle crimes linked to drug misuse, while supporting rehabilitation (not criminalisation) of users. #OpAdder #SBN #London #MOPAC

0 0 0 0
Post image Post image

Planefence ICAO: AD93F0
Flt: EJA974 NETJETS HVN - SBN
First seen: 2025-12-18 15:44:54 EST
Min Alt: 4675 ft MSL
Min Dist: 0.23 mi
adsb planefence by kx1t - adsbexchange - planefence

0 0 0 0