Robert Bea: Master of
Forensic Engineering


If Robert Bea shows up on your project, it’s not a good sign. Either you’re in the middle of a major disaster or someone is worried enough to send out the nation’s foremost forensic engineer to take a look. Men’s Journal calls him the “Master of Disaster.” Bea is professor emeritus at the Department of Civil and Environmental Engineering, University of California-Berkeley, and co-founder of the Center for Catastrophic Risk Management, a nonprofit organization. He also runs his own consulting company called Risk Assessment and Management Services.

Bea has studied some of the worst engineering disasters in U.S. history, including the Exxon Valdez, space shuttle Columbia, and Deepwater Horizon. He was eager to share his insights and caution ASME members about complacency, fearlessness, repeating mistakes, and doing it for the money, all of which can result in catastrophic failures that haunt engineers for the rest of their lives.

Bob, you analyzed 600 major engineering failures that occurred from 1988 to 2005. Are there any new trends since then?

The only trend is “bigger and badder.” There have been more catastrophic system failures: BP Deepwater Horizon, PG&E San Bruno, Hurricane Sandy. This trend should be expected because our infrastructure systems generally are in very poor condition and are more interconnected. The failure of one causes the failure of another. We also have more severe “tests” from nature as we work in more severe environments and face global climate changes.

What is the most common denominator you see in engineering failures?

Organizations that lose their way by developing gross imbalances between production and protection. One of the big drivers for increasing production is decreasing costs (decreasing protection). The balance progressively shifts until there is a major system failure—a monetarily driven spiral to disaster.


What is the one thing mechanical engineers can do to minimize the risk of failure?

Design “people-tolerant” systems that are forgiving of the mistakes people will make. Design systems to an explicitly defined “acceptable reliability,” and develop the design so that it equals or exceeds that requirement. Design systems that can be inspected and maintained, so that the acceptable reliability is preserved throughout the life of the system. The best way to do this is to develop, implement, and sustain the 5Cs:

  • Cognizance: Awareness of the hazards and consequences of failures.
  • Commitment: Top-down and bottom-up commitment to developing systems that provide adequate protection for the production.
  • Capabilities: The skills to address the performance of complex systems that are dominated by “human and organizational” factors.
  • Culture: One that fosters systems with acceptable performance and reliability characteristics and maintains an acceptable balance between production and protection.
  • Counting: Effective, validated, quantitative ways to measure safety, reliability, production, and protection characteristics of systems; you cannot manage what you cannot measure.


Why do most mechanical engineers make poor forensic engineers?

Many engineers have special talents that qualify them for engineering: an aptitude for science, logic, physics, and configuring “things” to make other things that are useful.

So, when it comes time to develop an understanding of the root causes of failures and accidents, they focus on the things they understand, not why those things were used in the first place. The most influential root causes are the “whys”: the human and organizational factors that explain why things are what they are.

To avoid potential problems, do you put every project through a team analysis before launching into it?

Absolutely! You must have the right stuff to get the right results. People have to be selected so that their talents and aptitudes match the jobs that have to be performed. Once the selection is made, training needs to further develop those talents so that the right results are achieved, even under unbelievable conditions.

What sort of team training do you recommend?

Intense and continuing training in three forms: normal activities (for example, landing an airplane), abnormal activities (landing an airplane in fog), and unbelievable activities (landing an airplane that has lost power in both engines on the Hudson River).

Sully Sullenberger is a good friend of mine, and was before he became our “Hero of the Hudson.” Sully contacted me in the early 2000s to better understand the reliability characteristics of commercial aviation. He wanted to learn more about crisis management and why US Air had five fatal accidents in five years.

What Sully did was not an accident. It was fully rehearsed and prepared for. Sully and his colleagues prepared for the worst. The airplane designers prepared for the worst. That’s why the airplane did not sink rapidly: it had backflow valves in the fuselage air intakes. The Airbus had been designed for a water landing, even though it was never supposed to land on water, because the engineers understood that could happen in an emergency.

How do you know when a project is as safe as it can possibly be?

Theoretically it is possible to develop a system that has a likelihood of failure of very near zero. But a zero likelihood of failure is not practical given the different uncertainties that must be confronted during the life of a system. Therefore, we must design systems to have a non-zero likelihood of failure. However, the likelihood of failure needs to be small and also acceptable to those who are exposed if the system fails.

This raises the question, “How safe is safe enough?” The answer should be developed from a social process that engages inputs from the exposed public, the exposed environment, the government, and industry. Only when that critical question has been answered in quantitative terms should engineers develop a system to achieve that “acceptable safety” during its lifespan.

Mark Crawford is an independent writer.



by Mark Crawford, March 2013