Building a culture of objection

Summary: accepting objections is one of the most valuable skills a manager can learn, yet the role models we are given rarely highlight it

Like many other geeks, I have always been fascinated by aviation and its history. One thing that kept intriguing me was a simple question: what caused the dramatic decline in the number of plane crashes attributable to pilot error?

I recently found an answer to that question in How We Decide by Jonah Lehrer, which credits two different factors for the increased safety.

The first is technical progress: the introduction of commercial flight simulators allowed pilots to practice handling difficult situations before experiencing them in real life.

However, the other cause is more interesting as it depends exclusively on human factors:

There was one other crucial factor in the dramatic decline of pilot error: the development of a decision-making strategy known as Cockpit Resource Management (CRM). The impetus for CRM came from a large NASA study in the 1970s of pilot error; it concluded that many cockpit mistakes were attributable, at least in part, to the “God-like certainty” of the pilot in command. If other crew members had been consulted, or if the pilot had considered other alternatives, then some of the bad decisions might have been avoided. As a result, the goal of CRM was to create an environment in which a diversity of viewpoints was freely shared.
― Jonah Lehrer, How We Decide

Similar strategies have since been adopted by other professions in which decisions play a crucial role, such as firefighters, naval officers and surgeons, to the point that Cockpit Resource Management was later renamed Crew Resource Management to reflect its applications outside the airline industry.

The CRM program is based around a mantra: “See it, say it, fix it”, which implies that:

  • anyone is able to identify problems, issues or warning signs, not just the person in charge;
  • whenever some warning sign is spotted, the person making the discovery has the duty to report it;
  • the reporter of a problem should suggest (or start looking for) a possible solution immediately.

What is even more interesting is that, having been born in a context where hierarchy is extremely important, CRM training puts significant stress on the communication between supervisors and subordinates. It is so easy for supervisors to mistake objections for insubordination that they are trained to accept them, and specific training is given on how to question authority without sounding threatening. For example, firefighters are advised to follow these five steps when advocating their position:

  • An opening statement using the addressed person’s name (“Dave,” “Captain,” “Chief”)
  • Stating your concern as an owned emotion (“I think we are heading for a problem…” )
  • Stating the problem as you see it (“It looks like that building is getting ready to flash”)
  • Offering a solution (“I think we should evacuate the interior crews right now”)
  • Obtaining agreement (“Do you agree?”)

― International Association of Fire Chiefs, Crew Resource Management, A positive change for the fire service

Software engineering is a different context, as no lives are at risk during the development of a software project. However, I believe we could benefit greatly from this kind of attitude.
It is often the case that bad decisions taken early in the life of a project carry significant consequences, consequences that could have been avoided had team members voiced their opinions more effectively.

We are trained to care for the members of our teams and to respect their opinions, but accepting objections is rarely something we are taught. If we are leading other people, we should do whatever we can to make them feel at ease expressing their dissent with what we say, even when it hurts our ego.
If we are part of the team, we have the responsibility to object when we see an incoming disaster, but our objections should never sound like attacks.

Most of the role models we venerate highlight uncompromising leadership, vision and strength. But it is better to be proven wrong earlier than to be judged in hindsight after a catastrophe.


Published by

Alessandro Bahgat

Master geek, developer, avid reader

4 thoughts on “Building a culture of objection”

  1. That’s an interesting post, and it reminded me of some further reading.
    Nevertheless, it is not obvious (or feasible at all) to adapt what is proven effective in failure/trouble management (i.e., making problems explicit, sharing awareness of their early emergence, asking for help, etc.) to cooperative work, e.g., software development. I am afraid that objecting and defending one’s own ideas, although it could be useful and sometimes even better than just concealing potential frictions and divergences, is just a completely different thing from “distributed cognition” (cf. Hutchins’s research on cockpits and similar settings) in resource management. JMHO (sort of objecting, maybe?)

  2. Thanks for the book pointer, I will certainly check it out.

    Yours is a good point indeed: the line between making healthy objections and escalating wars of ideology is often very thin.
    In my (limited by definition) experience, what you pointed out as defending one’s own ideas has frequently become the root cause of many conflicts.
    The groups that I have seen work best frequently featured a culture where the ownership of an idea was seldom relevant. Once a position was expressed, anyone was considered free to point out flaws and mistakes. In this kind of context, objections are never personal and the atmosphere is generally relaxed.

    Again, I agree that distributed cognition is a different concept, but I believe it is still relevant when building software with a team of more than one person. It is very difficult to maintain an in-depth knowledge of every single component that makes up a system, isn’t it?

    Consider this example: during a design review, the lead architect in charge of a project proposes a refactor of a core component. Although it may initially sound like a good idea, one of the new hires has recently been working on that component and has noticed some corner cases that may harm the stability of the product in the medium term. But she is not sure, and decides to keep her concerns to herself. The architect knows better, after all.

    What would be a good way to address this kind of scenario?

  3. Great post! One point I would disagree with though is:

    > Software engineering is a different context, as no lives are at risk during the development of a software project.

    I think this is the wrong attitude for our industry, because many problems DO occur when a software system goes down. Sure, most of the systems we work on may not put anyone’s life at risk. However, the failure of our systems can directly result in economic issues for companies. These issues may not affect the world, but they can directly affect the lives of the people in one’s company.

    Richard Cook really nailed this point home in an interview from Velocity in 2012: (in section “A View From Not Too Far Away”).

    1. Thanks John,
      I agree with you: while there might be no immediate risk in writing software, the programs we author can be deployed to control airplanes, medical equipment or nuclear reactors: mistakes made in those contexts can be very costly.
      The moment we start considering the effort of maintaining a complex system in operation, risks become very real.

      Thanks for sharing your comment, I also appreciated the article you linked.
