What’s Causing Mike’s Indigestion Now? “For Entertainment Purposes Only” (April 9, 2026)
FOMO vs. FAFO
Back from New Orleans (the food delivered, the music delivered, and my doctor is going to have some notes about the beignets) and then straight down to Nashville for a customer event. Good to be home. The news cycle, as usual, waited for nobody.
This week, in light of Artemis, I keep coming back to something Gene Kranz said, or rather, something Ed Harris playing Gene Kranz said in Apollo 13: “Let’s work the problem, people. Let’s not make things worse by guessing.” The security industry right now is doing a lot of guessing about AI: guessing it’s reliable, guessing the supply chain is clean, guessing the vendor’s product does what the marketing says. This week handed us some pretty sharp reminders of where guessing gets us.
The AI Trust Gap Is Getting Harder to Ignore
Let me weave a few threads together this week, because I think they’re all telling the same story.
Tens of thousands of people downloaded what they thought was the leaked Claude Code source, and some of those downloads came with a side of credential-stealing malware. (The Register) The attack was elegant in its cynicism: a GitHub repository dressed up as something buzzy and desirable, delivering Vidar infostealer and GhostSocks proxy malware to developers who let FOMO override their better judgment. The malicious archive included a Rust-based dropper that got to work stealing credentials while turning infected machines into proxy infrastructure for further criminal activity.
The kicker? It showed up near the top of Google results for searches like “leaked Claude Code,” and at least two of the trojanized repositories remained live on GitHub with hundreds of stars and forks. Seriously, go check and make sure that wasn’t you! And in March, a similar campaign used OpenClaw as the lure and delivered the exact same two payloads. Criminals are running a repeatable playbook: wait for an AI brand to generate buzz, spin up a convincing fake repository, let developer FOMO do the rest. This is going to keep happening. Probably every few weeks now.
Then, in a completely different revelation, Microsoft went and turned the AI reliability conversation inside out. Buried in Copilot’s terms of service, updated in October 2025 and surfacing like a bloated whale this week, is a clause warning that Copilot is “for entertainment purposes only,” that it can make mistakes, and that users should not rely on it for important advice. (TechCrunch) Microsoft called it “legacy language” that would be updated, tracing the phrase back to when Copilot originally launched as a Bing search companion. (Office Watch) But the timing is uncomfortable when you consider that fewer than one in thirty eligible users is actually paying for the tool, and 44% of lapsed users cited distrust of its answers as the primary reason they stopped. But can they dance like this!
Here’s the thing: I’m not piling on Microsoft. Every AI vendor has disclaimers like this somewhere. But the combination of aggressive enterprise marketing and legal fine print that one observer said reads like “the same disclaimer a psychic uses to avoid getting sued” (The Next Web) creates a real problem for security practitioners. We are being asked to integrate AI into critical workflows while the vendors’ own lawyers are hedging in the opposite direction.
And here’s why that matters: the surface area for AI-enabled risk just got a lot bigger, and our governance frameworks for AI trustworthiness are nowhere close to catching up.
The Marketing and the Liability Are Now Pointing in Opposite Directions
Think about how drug companies are required to operate. Every ad for a drug lists every possible side effect in the same breath as the benefit claim. Annoying, yes, but it both warns the consumers asking for the drugs and provides legal protection to the drug company. AI vendors are currently doing the opposite: maximum marketing confidence, maximum legal weaseling. Defenders are the ones who end up living in that gap.
Anthropic’s Project Glasswing announcement this week, a coalition including AWS, Apple, Cisco (yeah, boy!!!), Google, Microsoft, CrowdStrike and others built around a new frontier model called Claude Mythos Preview (Anthropic), is actually a candid acknowledgment of exactly this problem. The initiative was formed because Mythos Preview demonstrated the ability to find zero-day vulnerabilities autonomously across every major operating system and browser, including vulnerabilities that had survived decades of human review and millions of automated tests. The gist is essentially: AI-enabled offensive capability is here whether we like it or not, so let’s make sure defenders get it first. Reasonable bet. But the trojanized GitHub repos and the Copilot ToS situation are a stark reminder that the attack surface for AI runs in both directions: attackers weaponizing the hype, and organizations over-trusting outputs from systems whose own vendors disclaim responsibility for them. And when the results and efficacy are all over the map, we’re going to be working overtime trying to sort this all out.
What Can Defenders Actually Do Right Now?
Treat AI tool downloads and integrations like any other supply chain risk. If someone on your team is excited about a new AI tool that promises enterprise features for free, slow down. Review what you’re actually installing. Check the repository history, publisher account age, and run it through your standard software vetting process. The trojanized Claude Code attack specifically targeted developers who were moving fast and trusting their instincts.
Build an AI governance policy before you build AI dependencies. What does it mean for your organization to “trust” an AI output? Who reviews it, and under what conditions? That question needs a written answer before a Copilot summary ends up informing a security decision. Want to get nerdy? Check out Anthropic’s Project Glasswing and deploy something like DefenseClaw.
Things I’m Keeping an Eye On
Operation Masquerade: the FBI kicked the Russians out of your router. On April 7, the FBI and DOJ announced a court-authorized takedown of APT28’s router botnet, codenamed Operation Masquerade, disrupting a network that at its peak had compromised 18,000 consumer-grade devices across 120 countries. The FBI remotely pushed DNS resets to affected devices, forcing them back to legitimate resolvers (Bleeping Computer), without touching the router’s normal functionality. The joint advisory was signed by NSA and partners across 15 countries including Canada, Germany, Poland, Norway, Ukraine, and more (FBI) — which tells you something about how seriously they’re taking this. The good news is the takedown happened. The sobering part: the campaign had been running since at least 2024, silently redirecting traffic and harvesting Microsoft 365 credentials and OAuth tokens without the router owners ever knowing. (State of Surveillance) The network device you forgot about is someone else’s intelligence-gathering platform. Yippee! Want to get nerdy? NCSC Advisory + IC3 PSA
Axios npm supply chain compromise: TeamPCP is running a playbook. Attackers compromised the official Axios npm package (over 100 million downloads a week) and published two poisoned versions containing a hidden dependency that installed a Remote Access Trojan and stole cloud credentials. Cisco Talos assessed that TeamPCP likely has access to a stockpile of compromised maintainer credentials and may be operating as an Initial Access Broker, meaning this pattern is probably not done. (Talos Intelligence) Check your lockfiles. Want to get nerdy? SANS ISC
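“Check your lockfiles” can be partly automated. Here is a sketch that walks an npm `package-lock.json` (lockfileVersion 2/3 format, where installed packages live under the top-level `packages` key) and flags any axios entry on a deny-list. The version strings below are placeholders, not the actual poisoned releases, so substitute the versions named in the Talos advisory:

```python
import json

# Placeholder versions for illustration; replace with the versions
# named in the vendor advisory.
BAD_AXIOS_VERSIONS = {"9.9.98", "9.9.99"}

def find_bad_axios(lockfile_text: str) -> list[tuple[str, str]]:
    """Return (path, version) pairs for flagged axios installs.

    Assumes npm lockfileVersion 2 or 3, where each installed package
    appears under the top-level "packages" object keyed by its path,
    e.g. "node_modules/axios" or a nested
    "node_modules/foo/node_modules/axios".
    """
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/axios"):
            if meta.get("version") in BAD_AXIOS_VERSIONS:
                hits.append((path, meta["version"]))
    return hits
```

Run it across every repo’s lockfile in CI and you have a crude but fast tripwire for the next poisoned release.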
CISA’s budget getting cut by $707M. The White House’s FY2027 budget would slash CISA’s funding by roughly 30%, framing it as refocusing the agency on its core mission while citing CISA’s counter-misinformation work as justification. The agency has already lost a third of its workforce. I’ll be honest: the timing, given what we’re watching from nation-state actors and supply chain threats this week, is not great. It almost seems like the decisions being made here are designed to benefit our adversaries. I don’t think the people making this call are looking at the same threat reports we are. Want to get nerdy? Cybersecurity Dive
What I’m Learning This Week
Taking the GCTI exam next week, and I have to say, the FOR578 course material from Robert Lee and Rebekah Brown is straight-up fire. No cap. It has me itching to get through the test and start digging into DefenseClaw, and the first thing on my list is to work through the fantastic free lab my colleague Barry Yuan put together on Cisco DevNet. If you are at all interested in AI-native security operations, this is worth your time: Why OpenClaw Needs DefenseClaw.
That’s a Wrap
Gene Kranz did not beat the problem by pretending it wasn’t there. He beat it by making sure the people with the right expertise were working the right pieces of it, with clear eyes and no guessing.
The AI trust problem is not going to get solved by any single vendor announcement or advisory. It’s going to require practitioners like us to hold the line on governance, vetting, and healthy skepticism while the industry figures out how to bring its marketing in line with its fine print. That’s not pessimism. That’s the job.
The beignets were great, the music was better, and it’s good to be home working the problem.
Stay vigilant, folks!