Security people tend to earn a reputation as dour and overly negative thanks to our habit of approaching every subject by asking “what’s the worst that could happen if we do this?” It’s not that we can’t (or don’t want to!) see the potential benefits of a new technology, process, or policy. Rather, searching for the worst possible outcome and engineering ways to lessen that outcome’s impact, its probability of manifesting, or both is one of our essential functions. We exist to protect our organisation and its people.
To that end, we need to understand what might occur should things go totally off the rails. This necessity gives us our gloomy reputation.
That said, someone must ask the uncomfortable questions and champion effective mitigating solutions. Denial isn’t a viable strategy, especially when you’re discussing potential outcomes involving irreversible tragedy … be that the end of the business, or the end of a worker’s life.
This came to mind on Saturday, 12th November, while I was waiting outside baggage claim at Dallas Fort Worth International Airport for one of my mates to arrive. As my guest secured his luggage, I browsed my news feed. The first reports of a mid-air collision at a small airport had just been posted, along with video captured by attendees at the air show where it happened.
According to the Dallas Morning News, “The crash happened about 1:20 p.m. Saturday during the Wings Over Dallas show at Dallas Executive Airport, according to the Federal Aviation Administration, and involved two historic warcraft – a Boeing B-17 Flying Fortress and a Bell P-63 Kingcobra.
“Video posted on social media shows the P-63 banking, and colliding directly with the B-17, which was flying straight. The impact immediately disintegrated the P-63 and split the B-17 in half, with the front half of the fuselage exploding in flames as it impacted the ground.”
I watched videos of the incident over and over despite the autumn chill outside baggage claim. The P-63 came into frame aft and left of the bomber, moving faster than its partner. It looked – to my untrained eye – like the fighter turned 90° with its left wing pointing up, then deliberately curved. The P-63 went from following the bomber to flying straight into it just aft of the B-17’s left wing root. I’m not a pilot; I had to look up the manoeuvre.
I think the fighter pilot might have been trying to perform a climbing turn (a “chandelle”) [1] and failed to gain the altitude needed to pass over the bomber. Instead, the two aircraft collided and disintegrated.
Hopefully the experts will be able to explain what happened, why it happened, and how to prevent it in the future.
I’ve been watching a video documentary series on YouTube lately called “Disaster Breakdown” that does exactly that: the creator shares the lessons learned from various historical disasters and tries to explain what went wrong and what – if anything – could’ve been done to prevent the tragic outcome. In almost every case they’ve studied, some element of human behaviour was causal – and if that behaviour had been interrupted or corrected, the disaster might not have happened at all, or might at least have been recoverable.
I’ve been fascinated by the effect of human behaviour on disasters ever since our Wing Safety NCOIC briefed us on the lessons learned from the C-17 crash at Elmendorf AFB in July 2010. The sergeant had more than just video to work from; he played us the audio from the aircraft’s cockpit voice recorder [2] and explained how the crew’s cavalier violation of regulations led to their irreversible loss of control of their aircraft and its subsequent fatal crash.
A few years later, our Air War College class studied the June 1994 Fairchild AFB B-52 crash using the same analytical approach. We studied how lax organisational culture, repeated failures by key leaders to enforce regulations and basic performance standards, and a laissez-faire approach to risk management led a dangerously overconfident pilot to crash his plane and kill his crew in front of their families watching from the ground.
One lesson that I’ve internalised from these historical disasters is that there’s no such thing as a “completely safe” flight. The best designed and best maintained airplane, a well-trained crew, and perfect weather can’t guarantee anything. All those factors greatly reduce the likelihood that something will go wrong and surely improve the operator’s odds of recovering when something does, but no one can say with absolute certainty that the probability of disaster is zero. It’s never zero. More to the point, poor human judgement can undermine the effectiveness of every other safety control.
As if to illustrate that point, I nearly got into a wicked auto accident on the way out of the airport Saturday afternoon. If you’re not familiar with the airport, a single multilane motorway bisects it from north to south. As you leave DFW by either exit, the four outbound lanes open up into a wide fan, allowing drivers to choose from 16 toll lanes. Cars naturally slow down and spread out. What they don’t do is suddenly veer 90° to the flow of traffic and attempt to cut across the fan perpendicular to everyone else. At least, they’re not supposed to.
That’s what a bone-headed jack-wagon did right as I was lining up to enter our nearest toll lane. I heard a screech of brakes from the big SUV driving parallel to me on my left, saw a flash of white sedan shooting across my bow from left to right, and slammed on my brakes. My car’s collision detection sensors were a second faster than I was (thankfully). We went from 65 km/h to zero right as the big SUV T-boned the insane white sedan … but didn’t stop it. The sedan had enough momentum to pass mere centimetres from my bumper before turning back in the correct direction and slowing to a stop.
That near-miss reminded me that the old safety sergeant’s axiom about there being no such thing as a “completely safe” flight holds true for ground travel, too. The best designed and best maintained car, a well-trained driver, and perfect weather can’t guarantee anything when humans are involved. Fortunately, the engineers who designed my sedan knew a lot about detecting and preventing crashes, which is why my emergency braking engaged a second faster than my brain could react. Great engineering, to be sure … but if I hadn’t already been slowing to a safe approach speed and staying a half car-length behind the SUV on my left, I’d have taken that white sedan’s hood ornament right through my seat. I was holding back on approach because I didn’t trust the decisions being made by the other drivers around me.
I attribute those pre-emptive defensive actions as much to my security training as I do to my driving instructors. Understanding how other people think, feel, perceive, and decide under stress has been tremendously helpful in avoiding collisions.
Thinking about what might go wrong before each trip – factoring in the weather, illumination, visibility, traffic, and congestion – helps me plan safer routes. Factoring in my own health, fatigue, and distractions helps me judge how much extra distance I should keep from other vehicles to account for my potentially reduced reaction time and other drivers’ irrational behaviour. Sure, my car has awesome built-in safety controls; however, I never assume that those controls are guaranteed to work. I plan for things to go seriously wrong and drive accordingly. I strive to give myself as much extra room to manoeuvre or react as I practically can.
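To put a rough number on that extra distance, here’s a back-of-the-envelope sketch of my own (not anything from an incident report): even at modest speed, a single second of reaction time eats up a surprising amount of road before the brakes do anything at all. The speed, reaction time, and deceleration figures below are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope sketch (illustrative assumptions only): how far a car
# travels while the driver is still reacting, versus the braking distance itself.

def stopping_distances(speed_kmh: float, reaction_s: float, decel_ms2: float):
    """Return (reaction distance, braking distance) in metres."""
    v = speed_kmh / 3.6                   # convert km/h to m/s
    reaction = v * reaction_s             # ground covered before the brakes engage
    braking = v ** 2 / (2 * decel_ms2)    # ground covered while decelerating to zero
    return reaction, braking

# Assumed figures: 65 km/h approach speed, ~1 s human reaction time,
# ~7 m/s^2 deceleration on dry pavement.
reaction, braking = stopping_distances(65, 1.0, 7.0)
print(f"Reaction distance: {reaction:.1f} m, braking distance: {braking:.1f} m")
```

At those assumed figures, the car covers roughly 18 metres before braking even begins, on top of about 23 metres of braking distance, which is why an automated system that reacts a second sooner, or a driver who simply leaves more room, makes such a difference.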
If that seems excessive, yeah … it probably is. That said, it’s the same approach that security professionals tend to take toward project planning: sure, we might have fantastic supporting technologies (like off-site backups) to help protect us from accidents and mistakes, but we don’t count on those technologies to work perfectly, every time.
More importantly, we look beyond a solution’s engineering to consider how human behaviour and outside dependencies might undermine the effectiveness of our design, then try to give ourselves as much extra room to manoeuvre – so to speak – as we practically can. Our goal isn’t to create a perfectly fail-proof solution, since that’s not a realistic option. Our goal is to detect, interrupt, and lessen the potential impact of bizarre user behaviour. To paraphrase Murphy’s Law, “anything that can go wrong will go wrong, especially if people are somehow involved.”
I don’t see this mindset as “dour” or “overly negative” … I see it as positively pragmatic. We’re looking for ways to protect our organisation, our systems, our processes, and our people from the myriad strange decisions people might make and the actions they might take.
That’s why I’m fascinated by the recent air show tragedy in equal measure to my horror at the senseless loss of life. In the same vein, I’d really like to know why that jack-wagon in the white sedan thought he could ram his way across traffic like some sort of homage to the mid-air collision I’d just watched. I want to learn what happened, and why, in both incidents. Hopefully the experts will figure them out and have lessons to share that we can all implement to prevent such disasters from occurring again.
Understanding the physics, engineering, and environmental data will surely be straightforward compared to the mystery of “what were the operators thinking?” Their reasoning surely made sense to each of them in the moment. What, then, convinced them to do something so inadvisably dangerous? Why accept the unnecessary risk when the potential cost was so high? I hope we find out.
[1] DM me if you have a better idea of what the pilot might have been attempting. I’d like to know.
[2] Which was horrifying.