Building Feedback Loops Through Incident Transparency

Security experts advocate for a shift in how the industry handles incident reporting, arguing that detailed, transparent feedback loops are essential for collective defense. By analyzing the mechanics of past failures, organizations can move beyond compliance checklists to implement evidence-based risk reduction.

Triage Security Media Team
4 min read

Security resilience relies on the ability to learn from the past. Experts are advocating for a fundamental shift in how organizations handle security incidents, suggesting that specific, transparent disclosure of how and why compromises occur is necessary to effectively reduce risk across the digital ecosystem.

At the RSAC Conference on Monday, March 23, threat research experts Adam Shostack and Adrian Sanabria will present the argument for structured feedback loops in a session titled "A Failure Is a Terrible Thing to Waste: The Case for Breach Transparency."

Shostack, founder of Shostack + Associates, and Sanabria, founder of The Defender's Initiative, suggest that the cybersecurity field lacks the formal diagnostic processes found in other safety-critical industries. They draw comparisons to aviation, medicine, and public health, where adverse events such as plane crashes or medical errors trigger rigorous investigation and information sharing to prevent recurrence.

In an interview with Dark Reading, the researchers note that while these industries scrutinize failures to improve safety standards, the cybersecurity sector often treats investigations primarily as legal matters to be contained. Consequently, the technical lessons that could protect other professionals are often lost.

Engineering Safety Cultures

This reluctance to share technical details prevents the community from extracting insights from security incidents, Shostack explains. The guiding principle for incident response should be a clear admission of the error followed by a detailed explanation of the mechanics, rather than obscuring the sequence of events or deflecting responsibility.

While organizations often fear being viewed as negligent, Sanabria points out that successful unauthorized access rarely results from total incompetence. Instead, research indicates that incidents typically stem from chains of small, specific gaps, such as missing patches, configuration drift, weak monitoring, or inadequate testing coverage.

"It's rarely one thing," Sanabria says. "There are dozens of controls that should have stopped the [threat actor] and didn't."

Disclosing these granular details allows the industry to identify and reinforce specific control failures, moving beyond fear of blame toward a model of collective prevention.

Balancing Liability and Engineering

The approach to incident disclosure in the US varies significantly by state and organization. While publicly traded companies are required to disclose material security incidents in SEC filings, these disclosures often lack the technical depth required for engineering analysis.

Shostack attributes the lack of diligent post-mortem sharing to the friction between legal and engineering mandates. Legal counsel is ethically bound to minimize client liability, often advising executives to limit communication to avoid potential litigation. This stands in contrast to engineering ethics, which prioritize public safety and system integrity.

"This isn't what we do when a bridge falls down, when an airplane falls out of the sky," Shostack notes. "When we have any other technologically mediated system failure, we talk about what happened and we learn from it."

Without formal governance prioritizing technical transparency, the legal perspective often dominates, keeping the operational details of security failures inaccessible to the broader engineering community.

Regulatory support for this type of transparency remains limited. The Cyber Safety Review Board (CSRB) was established to function much like the National Transportation Safety Board, investigating major cyber incidents and publishing findings the industry could act on. In January, however, the incoming administration dismissed the board's members, halting its work mid-investigation into the compromise of US telecommunications companies by the China-backed group Salt Typhoon. The board technically still exists, but without active personnel it cannot perform its intended function.

Extracting Lessons from Available Data

Despite these challenges, actionable data exists for those willing to look. Sanabria has reviewed public documentation, including congressional reports, regulatory filings, and lawsuits, and notes that these sources contain a "pile of gold" regarding failure modes.

The difficulty is that technical narratives are often oversimplified in the immediate aftermath of an incident. Sanabria cites the 2017 Equifax breach, where initial headlines focused almost exclusively on an unpatched Apache Struts vulnerability. Later congressional records revealed that the root causes included deeper failures in internal communication, process management, and testing protocols.

"What everyone remembers is Day One," Sanabria says. "The deeper lessons often come 18 months later." By that time, the industry's attention has often shifted, and few professionals read through the lengthy reports that detail the systemic breakdowns.

Some organizations are proactively modeling transparency. Following a ransomware incident in 2023, the British Library published a comprehensive after-action report outlining the mistakes made and lessons learned. In Canada, federal privacy commissioners released findings regarding the PowerSchool education technology compromise, providing insight into the systemic issues affecting educational institutions.

While the US Federal Trade Commission also publishes detailed complaints, Sanabria argues that even these public resources often lack the "how" of the compromise—the specific narrative of technical failure that helps security engineers adjust their defenses.

Evidence-Based Risk Reduction

Without empirical evidence to inform defense strategies, the industry risks investing in what Sanabria terms "busywork generators": compliance activities and tools that consume resources without meaningfully lowering real-world risk.

"Every other industry that cares about safety builds feedback loops so they can get better," Sanabria says. "Reducing risk is more of a gamble without data."

Establishing formalized transparency and governance is necessary to determine if security investments are genuinely improving incident response and prevention capabilities.