Cybersecurity Reading List - Week of 2026-02-02

Published on: February 2, 2026

This OWASP guide popped up on my radar this week and, yes, it’s about AI. And yes, it’s entirely predictable. But what appeals to me at the moment is its predictability amidst the nondeterminism of LLM rakestepping. Catastrophic outcomes in these complex systems have been foreseeable not just since today, or since the day this Adversa post was published, but since at least 1984. That was the year sociologist Charles Perrow published “Normal Accidents: Living with High-Risk Technologies.” Normal Accidents had nothing to do with artificial intelligence, yet given how AI is being deployed today, the book now has everything to do with it. Perrow studied major industrial accidents across much of the twentieth century and identified the conditions under which unexpected catastrophic failures become inevitable enough to be called Normal Accidents:

  • The system is complex.
  • The system is tightly coupled.
  • The system has catastrophic potential.

In the agentic systems being proposed and implemented before us, complexity certainly plays an integral role - the dirty little secret of LLMs is that to make one useful, especially for a specialized expert task, you’re dealing with multiple layers of LLMs with varying levels of autonomy. It’s the sausage being made behind the single pane of glass most AI products pretend to be.

We then turn to tight coupling - essentially, complex systems whose steps must occur in a specific order, such as a multi-stage chemical treatment process. It is the anticipated sequence - in Perrow’s words, the invariant sequence - where B must follow A because that is the only way to make the product, that defines tight coupling. Think about the sub-tasks each agent is charged with: pre-prompt hardening against injection attacks, shifting the tone and scope of the LLM response, providing expectations to shape system output. Above those, and above the primary agent doing the task, you have multiple other systems working to evaluate, validate, and re-shape output before it’s pushed to the surface agent, which relays it to you. Should those subsystems interact in varied ways or orders, the output is necessarily - perhaps catastrophically - affected.
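
The invariant sequence is easy to sketch. The pipeline below is a toy, hypothetical agent chain (every function name and behavior here is invented for illustration, not taken from any real product): each stage assumes the previous stage’s output, so running the same stages in a different order produces a failure that no single stage was designed to handle.

```python
# Hypothetical, tightly coupled agent pipeline: each stage consumes the
# previous stage's output, so the order is Perrow's "invariant sequence."

def harden_prompt(user_input: str) -> str:
    """Pre-prompt hardening: strip a known injection marker (toy example)."""
    return user_input.replace("IGNORE PREVIOUS INSTRUCTIONS", "")

def run_primary_agent(prompt: str) -> str:
    """Stand-in for the primary LLM call."""
    return f"draft answer for: {prompt}"

def validate_output(draft: str) -> str:
    """Evaluator agent: only accepts drafts shaped like primary-agent output."""
    if not draft.startswith("draft answer for:"):
        raise ValueError("validator saw input it was never designed to see")
    return draft

def surface_agent(validated: str) -> str:
    """Relays the validated answer to the user."""
    return validated.upper()

def run(stages, user_input):
    out = user_input
    for stage in stages:
        out = stage(out)
    return out

# Intended order: B must follow A, because that is the only way to
# make the product.
pipeline = [harden_prompt, run_primary_agent, validate_output, surface_agent]
print(run(pipeline, "summarize today's alerts"))

# Reorder two subsystems - an interaction nobody designed into the system -
# and the validator receives raw user input instead of a draft, and fails.
broken = [harden_prompt, validate_output, run_primary_agent, surface_agent]
try:
    run(broken, "summarize today's alerts")
except ValueError as e:
    print(e)
```

The failure mode is the point: neither the validator nor the hardening step is buggy in isolation; only the unanticipated ordering between them breaks the system.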

Catastrophic potential is mostly self-evident, but let us take a specific example: the modern Security Operations Center, or SOC. Perrow’s book provides multiple analogous environments - think of a nuclear power plant control room full of sensors, monitors, and potential alerts, or the cockpit of a commercial airliner, which had seen much more automation in the decades prior to 1984 and provided starkly relevant examples of alert and attention failures at critical moments. Indeed, we see SOC failures in some of the biggest hacks on record, where alerts are missed or disregarded, leading to major systemic damage.

So in the SOC we have a complex, tightly-coupled system with catastrophic potential. “The essence of the Normal Accident,” Perrow wrote, is “the interaction of multiple failures that are not in a direct operational sequence.” That is, system components interacting in sequences and ways not only unexpected, but “incomprehensible” during the incident, often leading to much worse outcomes. 

And what do we do, 42 years after Normal Accidents’ release? We add a complex, relatively tightly-coupled system of agents to a complex, certainly tightly-coupled system with catastrophic potential called the Security Operations Center. And not only that, but a system of agents fundamentally empowered by their own nondeterministic nature. 

“What distinguishes these [system component] interactions,” Perrow wrote, “is that they were not designed into the system by anybody; no one intended them to be linked. They baffle us because we acted in terms of our own designs of a world that we expected to exist - but the world was different.”

In the rush to the AI/Agentic SOC, expect many Normal Accidents.

Podcasts

Articles

Research Papers and Reports
