# Stop Comparing Safety and Cybersecurity, They Have Very Little in Common
Ross Haleliuk opens with a haunting analogy from Dug Song, the founder of Duo Security. In aviation, a plane crashes the same way only once. Black boxes reveal the failure, designs change, and that crash never happens again. In cybersecurity, organizations get breached the same way repeatedly. The same company, the same reason. Song called it Groundhog Day in the worst possible sense—a hamster wheel where we relive the same incidents without actually improving.
Haleliuk agrees with the pain but rejects the comparison. Safety and security, he argues, are fundamentally different problems. The seatbelt analogy that security leaders love to cite simply doesn't hold.
## Safety Addresses a Stable Problem. Security Doesn't.
Seatbelts solve a fixed equation. Gravity doesn't learn. Physics doesn't adapt. Engineers account for speed, impact angle, passenger position, vehicle height—a finite set of variables. Once tested, the solution works across millions of identical scenarios.
Cybersecurity faces something entirely different. Attackers study your controls, adapt their tactics, and hunt for gaps. Ross Haleliuk writes, "Basically, wearing a seatbelt won't trigger a new kind of crash, but a security control often does trigger a new attack path."
His friend Luigi Lenguito, founder of BforeAI, captures the distinction sharply. Safety trends toward zero accidents as root causes are fixed. Security cannot trend toward zero because adversaries are motivated to increase harm. Security operates in an uncontrolled environment.
"It doesn't make sense to draw parallels between safety and security, and it's time to put the seatbelts analogy to rest."
## What's at Stake: Life Versus Nuisance
Plane crashes devastate families. Car accidents kill people. Safety measures protect what society agrees is our most precious gift.
Most cybersecurity breaches, Haleliuk notes, are financially motivated. The average person experiences them as inconvenience: spam emails, adware pop-ups, fraudulent charges that banks reverse. Even massive breaches like those at Uber or Equifax feel distant—breach notification emails and free identity monitoring that most people don't understand.
Ross Haleliuk writes, "Basically, to most individuals cybersecurity breaches are a nuisance, and to most businesses they are, well… the cost of doing business."
He acknowledges exceptions. Ransomware attacks on hospitals have harmed patients. Families have lost decades of savings. Leaked private details can cause irreparable harm. But these remain anomalies in public perception, not the expected consequence of a breach.
Critics might note this framing underestimates how cybersecurity failures cascade into physical harm. Power grids, water systems, medical devices—all increasingly connected. The distinction between digital and physical risk may narrow faster than Haleliuk suggests.
## Standardization Works for Safety. It Fails for Security.
A seatbelt is a seatbelt. No configuration. No context. It works the same way every time. Airbags, guardrails, child safety locks—all passive, predictable, effective without ongoing mental effort.
Security controls are deeply contextual. Every deployment depends on customer environment, configuration, business processes, data flows. Controls interact unpredictably. A single gap causes cascading failures. Ross Haleliuk writes, "Implementing a security tool the wrong way can (and does) itself lead to a breach, making it easier for attackers to get in (imagine if wearing a seatbelt increased the likelihood of a car crash)."
Standardization itself becomes a vulnerability. When thousands of companies standardize on the same firewall or VPN tool, a single exploit breaches them all simultaneously. Haleliuk points to Cisco and other network device vendors—thousands of organizations compromised at once because they chose the same protection.
As Ivano Bongiovanni noted in a LinkedIn thread Haleliuk references, the core distinction is human intent. Safety: no malicious intent. Security: malicious intent. A system can be safe but not secure, and vice versa.
## The Magic Answer Doesn't Exist
Haleliuk closes with sober realism. We cannot repurpose the safety model for cybersecurity. Borrowing paradigms from slow-moving industrial manufacturing won't work for fast-paced technology defense.
Ross Haleliuk writes, "I think it's just about time to accept that the best we can do is to continue maturing our defenses so that we can consistently make it harder and harder for the adversaries to achieve their goals. There is no other magic answer."
Critics might argue this accepts defeat prematurely. Safety achieved dramatic improvements through regulation, standardization, and cultural shift. Why assume security cannot do the same? The Secure by Design pledge Haleliuk mentions—CISA's comparison to automotive and aviation initiatives—suggests some leaders still believe borrowed paradigms can work.
## Bottom Line
Haleliuk's argument cuts through security's favorite analogies with uncomfortable clarity. Seatbelts work because physics doesn't learn. Security fails because attackers do. The comparison isn't just wrong—it's dangerous, luring leaders into solutions that cannot work. The verdict: stop borrowing from safety. Build defenses that assume adaptation, context, and relentless opposition. There is no seatbelt for the internet.