Iron Man's Suit Is Nothing Without Tony Stark
And you can't replace him.
There’s a scene in the first Iron Man film where Tony Stark, in a cave and totally improvising, builds something extraordinary out of scraps that his captors couldn’t figure out how to use. The bad guys - the Ten Rings - had access to a crate full of Stark Industries weapons. They had the hardware. What they didn’t have was the person who understood it well enough to make it do something that mattered.
I think about that scene a lot lately.
The Suit Doesn’t Fly Itself
The Iron Man suit (across every iteration) is one of the most capable pieces of technology ever imagined. It can go supersonic, punch through armored vehicles, interface with satellite networks, and make tactical decisions in milliseconds. It is, by every measure, transformative technology. I mean, I want The Force (specifically the Force Choke), but I would settle for a suit.
Without Tony Stark inside it, the suit is a very expensive paperweight.
There’s a moment in Iron Man 3 where the Mark 42 is being remote-piloted, doing its best without him. It’s impressive. It’s useful. And it is absolutely, categorically not the same as when Tony is in the suit. RDJ’s Tony is reading the room, making the call no algorithm predicted, improvising against an adversary that didn’t read the threat model. I mean, he’s pretty cool!
I’m not making the tired argument that “AI can’t replace humans.” That’s a no-brainer, at least for most humans (sit down, Elon). What I’m arguing is something more specific, and I think more important:
The suit makes Tony better. Without Tony, there is no multiplicative gain. There is only the suit, doing what suits do, in the absence of judgment and improvisation.
I’ve Tried to Wear the Suit
I’ve been working closely with AI (specifically Claude and Claude Code, though GitHub Copilot, Gemma, and Llama have had some time too) for the better part of the last year. Early on, a lot of the most interesting work was on my own time, driven by a passion for threat intelligence that goes well beyond my job description (and coding skills that don’t). Lately, I have been working to apply it to the gig.
Take MITRE ATT&CK gap analysis. If you’ve ever tried to build a comprehensive coverage map for a customer, you know it’s the kind of work that takes days: mapping techniques against detections, detections against data sources, data sources against what you actually have deployed and tuned. I (we, sorry Claude) built a Jupyter Notebook pipeline integrating DeTTECT, ATT&CK Navigator, and a custom parser that turns adversary campaign reports into heat maps of technique coverage. Me (us)! The coding imbecile! The AI helped me write the code, structure the data model, and debug the parts that broke in interesting ways. But knowing which techniques matter for a specific threat actor profile - and knowing why a particular detection gap is a three-alarm fire versus a tolerable risk - that came from years of tackling this for customers as a consultant, not from the model.
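The core join at the heart of that kind of coverage map can be sketched in a few lines. This is a hedged, minimal illustration, not the actual pipeline: the technique IDs are real ATT&CK identifiers, but the detection names, data-source labels, and the three-state report are invented for the example.

```python
# Illustrative sketch of an ATT&CK coverage join: techniques relevant to a
# threat profile, detections that claim to cover them, and the data sources
# those detections need. Names and data are placeholders, not real output.

RELEVANT_TECHNIQUES = {
    "T1059.001": "PowerShell",
    "T1003.001": "LSASS Memory",
    "T1071.001": "Web Protocols",
}

DETECTIONS = {
    "det-psh-encoded": {"technique": "T1059.001", "data_source": "process_creation"},
    "det-lsass-access": {"technique": "T1003.001", "data_source": "process_access"},
}

# What is actually deployed and tuned -- the column that usually hurts.
DEPLOYED_DATA_SOURCES = {"process_creation"}

def coverage_report(techniques, detections, deployed):
    report = {}
    for tid in techniques:
        dets = [d for d, m in detections.items() if m["technique"] == tid]
        live = [d for d in dets if detections[d]["data_source"] in deployed]
        if not dets:
            report[tid] = "no detection"  # nothing even claims coverage
        elif not live:
            # a detection exists on paper, but its telemetry isn't deployed
            report[tid] = "detection without telemetry"
        else:
            report[tid] = "covered"
    return report

print(coverage_report(RELEVANT_TECHNIQUES, DETECTIONS, DEPLOYED_DATA_SOURCES))
```

The interesting output is the middle state: a detection that exists on paper but has no deployed data source behind it. Deciding whether that gap is a three-alarm fire or a tolerable risk is exactly the judgment call the model can’t make for you.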
Then there’s AdversaryMapper — an agentic threat intelligence tool that generates org-chart-style visualizations of APT groups: actors, malware families, infrastructure clusters, campaign relationships, all drawn from open-source reporting and structured into a graph database. It’s just become a seven-agent pipeline. The AI wrote a substantial portion of the code. I specified what the tool needed to answer, because I suffered in the problem space long enough to appreciate what questions a threat intel practitioner actually asks during an investigation. And because it seemed like it would never get done if I didn’t have the help.
On the lab side, I’ve used AI to accelerate building out my SOC testbed: OpenCTI, TheHive, MISP, Zeek, Cortex, Arkime, Snort, all running on a Proxmox stack with proper reverse proxy and certificate infrastructure. The kind of environment that lets you test detection logic against real adversary tooling instead of dreaming about it. AI got me to a working state in days instead of weeks. And it had to educate me a ton along the way. But my understanding of how those platforms integrate, what data flows matter, and how to validate that detections are actually firing correctly - that’s not something you prompt your way into.
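Validating that a stack like that is even reachable is the boring-but-necessary first step, and it’s the kind of scut work AI happily scaffolds. A hedged sketch of a reachability check follows; the hostnames and ports are placeholders for whatever your own lab uses, not the author’s real topology.

```python
# Hedged sketch: a tiny TCP reachability check for lab services.
# Hostnames and ports below are placeholders -- substitute your own.
import socket

SERVICES = {
    "OpenCTI": ("opencti.lab.local", 8080),
    "TheHive": ("thehive.lab.local", 9000),
    "MISP":    ("misp.lab.local", 443),
}

def check(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

for name, (host, port) in SERVICES.items():
    status = "up" if check(host, port) else "unreachable"
    print(f"{name:<8} {host}:{port} -> {status}")
```

A port answering is not a detection firing, of course. Knowing which data flows to validate after the sockets open is the part that doesn’t come from a prompt.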
Here’s the thing: I built all of that as a hobbyist. As someone doing this because I find it genuinely fascinating, not because there was a ticket in a queue with my name on it. Now imagine what happens when you hand that same capability to the people whose job it is, with organizational context, customer data, production systems, and a roadmap full of work that never seems to get done.
None of that happens without me. None of it happens as fast, or as well, without the AI. We sort of need each other.
That’s the point.
The Inertia Problem (And Why I’m Not Exempt)
I haven’t used AI as much as I should. Inertia is real. The habit of doing things the way you’ve always done them (something about an old dog and new tricks) makes it tough to change. Especially when those methods have worked well enough to build a career on.
There are nights I’ve stared at a blank page or slide knowing I could have a pretty slick draft in ten minutes if I just started talking to the AI, and instead I’ve written it longhand (or hunt-and-peck, as is my style) out of habit, or momentum, or the endearing stubbornness that comes from being good at something and not wanting to feel like you need a crutch.
I’m learning that old habits AND FEAR are holding me back. Tony Stark never treated the suit as a crutch. The suit is an extension of what he’s capable of. The difference is that he earned the right to use it well. The suit responds to him because he understands physics, engineering, materials science, and combat at a level that lets him make split-second decisions the suit’s AI alone wouldn’t make correctly. I only have mastery of 3 of those disciplines ;)
The learning curve for AI isn’t “how do I use the tools.” It’s “do I have enough domain knowledge to ask the right questions of it and know when the output is wrong.”
And that last part is everything.
What Gets Lost When You Remove the Stark
We are in a moment where organizations - large, sophisticated, well-resourced organizations that know better - are looking at AI ROI predictions and making a brutal calculation. That formula seems to say: if AI can do 70% of what this person does, and we can get that 70% for a fraction of the cost, voilà, the math works!
The math isn’t math-ing, folks.
It doesn’t work because the 30% that gets cut isn’t overhead. It’s judgment. It’s the person who reads an AI-generated diagram and says “this looks sweet, but it is wrong.” It’s the engineer who looks at AI-generated code and senses an opportunity to zig when the model wants to zag. It’s the solutions engineer who knows that the customer asking for X actually needs Y, and the AI will happily help them implement X straight into a wall.
You’re not cutting cost. You’re cutting the part of the system that knows when the system is wrong.
We’ve all watched AI confidently produce incorrect results. I’ve seen hysterically funny MITRE ATT&CK technique mappings. I’ve seen it hallucinate CVE details that don’t exist. I’ve had it generate code that compiled clean and had a logic flaw that would have caused a silent failure in production. Thank goodness nobody trusts me with PROD, but when I am the one catching the model? Yikes. In every case, I caught it, because I know the domain well enough, or have the gall to ask enough questions, to know what right looks like.
A junior person with six months of experience, handed that same AI output, probably doesn’t catch it. Not because they’re not capable, but because expertise is what you use to evaluate outputs, and you can’t shortcut the accumulation of expertise.
The Better Calculation
There’s a different math that I’d argue produces better outcomes for everyone involved — shareholders included, if we’re being honest about it.
Take the engineers, researchers, architects, and specialists you were considering replacing. Give them the super-suit. Invest in training them to use it well. Then watch what happens.
For software developers: That bug backlog quietly accumulating technical debt for two years? A developer who knows the codebase, who knows which shortcuts are load-bearing walls and which refactors will ripple into three other services, with AI handling the pattern-matching and boilerplate, starts moving through it at a pace that would have seemed implausible twelve months ago. Security vulnerabilities that lived in the “we’ll get to it” column get closed. Features that got deprioritized because nobody had cycles get shipped (yay, SaaS-delivered NAC!). The developer isn’t replaced. They become measurably more effective at the thing they were hired to do, and the product gets safer and more capable in the process.
For security researchers: The detection engineering work sitting in the backlog because nobody had cycles gets done, by the person who understands the adversary tradecraft, knows where the telemetry gaps are, and can tell the difference between a detection that fires on real events versus one that fires on contrived training data. AI handles the scut work. The researcher tackles the judgment. The output becomes real defense, not a checkbox for a compliance audit.
For pre-sales solutions architects: This is the one I know most personally, and I think it’s the most under-appreciated opportunity in the room. A pre-sales architect’s job is to understand what a customer needs, understand what the product can do, and bridge the gap between those two things. They’re often under time pressure, often in ways the product team never anticipated (let’s see the PM bench-press an Account Manager all year!). With AI as a force multiplier, that architect can now show instead of tell. Pilots that used to require a professional services engagement can be prototyped in days. Custom integrations that were previously out of scope can be roughed out well enough to prove feasibility and handed off to engineering with a working blueprint. Solutions that weren’t quite fitting the customer’s environment can be adapted, instrumented, and demonstrated. All of this with the architect’s deep knowledge of both the product and the customer context steering every decision.
That’s not just a better sales motion. That’s a trust-building motion. It creates customers who actually succeed with the product rather than just buying it. That difference shows up in renewals, in references, and in the roadmap feedback that makes the next version better. You want to know what a product’s real gaps are? Ask the pre-sales architect who just spent three days trying to make it fit a customer environment it wasn’t designed for. Now imagine they had the time and tooling to not just identify that gap, but prototype a solution for it.
In every case, the human’s expertise is what makes the AI output trustworthy. The AI’s throughput is what makes the human’s expertise scalable.
That’s not headcount optimization. That’s force multiplication. And the difference in output quality between those two approaches is not marginal.
The Part I Can’t Quite Let Go
There are people I respect - people who are genuinely good at what they do, who have spent years building real expertise - who are, right now, numbers in a spreadsheet. But they aren’t numbers in a spreadsheet. They’re people.
And what keeps me up about it isn’t just the human cost, though that’s real. It’s that the expertise being lost is exactly the expertise that makes the AI work correctly for you and the shareholders. You can’t efficiently replace senior experts with AI, because AI-generated work that isn’t reviewed or guided by an experienced innovator is a liability masquerading as a solution.
The Ten Rings had the weapons. They didn’t have Tony Stark. They gave him a cave and a box of scraps. Tony built a suit.
We’ve all seen how this story goes. You bet on the innovator.
A Note on My Own Scorecard
I started this piece talking about what I’ve built with AI help. I want to end it by being clear that I’m not writing from a place of having figured this out completely. I’ve left a ton of weekend/weeknight productivity on the table. I’ve let my own stubborn inertia win more times than I’m proud of. I’ve been slower to adopt workflows that I can see clearly would make me better at my job. I am getting there!
But I’m also aware enough to understand what I bring to the collaboration. The AI already makes me faster. It makes my drafts structurally stronger. It helps me build things I couldn’t build alone in any reasonable timeframe.
It has never once told me something I didn’t already know enough to evaluate.
That’s not a limitation of the AI. That’s a feature of the collaboration. The suit needs a Tony. Don’t take away the Tonys.