Becoming AI Durable: How to Future-Proof Yourself in the Age of Automation
There’s a lot of noise right now about AI taking over jobs. Some of it’s hype, some of it’s fear, and a little of it’s reality. I get it — I’ve seen the same reactions inside cybersecurity teams, engineering orgs, and risk functions. When automation or AI tools start showing up in workflows, people naturally wonder what that means for their future.

Here’s the truth: yes, some jobs will change or even disappear. But there’s also a massive opportunity for the folks who learn how to work with AI — not against it. The future isn’t about humans being replaced; it’s about humans being augmented. The people who understand that early will have the advantage.

That’s what I mean when I say AI durable. It’s not about surviving AI — it’s about staying relevant because you’ve adapted, stayed curious, and found where the human still matters most.

Step 1: Start…
The “Agent Rule of Two” — Designing Safer AI Agents Through Limitation
When Meta released its Agent Rule of Two framework, it clicked for me immediately. Not because it was revolutionary in concept — but because it gave language to something most of us have already been trying to do in practice: limit the blast radius of automation.

If you’ve ever built bots, workflows, or automation jobs that can read data, act on it, and then go tell the world about it… you’ve probably felt that quiet sense of “hmm, maybe we gave this thing a little too much power.” That’s exactly what the Rule of Two addresses. It’s a straightforward principle for building or reviewing agents in a way that keeps you out of the “one bad input away from a data breach” category.

The Core Idea

The Agent Rule of Two says that an AI agent (or automation process) should never have all three of these abilities in a single session:…
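To make the principle concrete, here’s a minimal sketch of how a review gate for it might look in code. The capability names below are my own assumptions, loosely mapped from the “read data, act on it, and tell the world about it” description above — they are illustrative, not Meta’s official terminology or API.

```python
# Hypothetical "Rule of Two" gate for agent sessions.
# Capability names are assumptions for illustration, mapped from the
# article's description: read (possibly untrusted) input, access
# sensitive data/systems, and change state or communicate externally.

RISKY_CAPABILITIES = {
    "processes_untrusted_input",
    "accesses_sensitive_data",
    "changes_state_or_communicates",
}

def violates_rule_of_two(session_capabilities: set[str]) -> bool:
    """Return True if a single session holds all three risky abilities."""
    return RISKY_CAPABILITIES <= set(session_capabilities)

# An agent with any two of the three passes; all three should be
# flagged for redesign (split the workflow, or add a human approval step).
```

The point of a check like this isn’t enforcement so much as forcing the design conversation: if a session needs all three abilities, something — usually the external communication or the state change — should be carved out into a separate, reviewed step.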


