Bimodal IT and other Snakeoil

22 April 2016 · Business Blog, DevOps

I presented my critique of Bimodal IT at DevOps Days this week. It was a fun talk to give, with some great questions from the audience; all in all a very well run event.  This post is a summary of that talk.

tl;dr: Bimodal IT has an initial allure due to its simplicity, but it fails to deliver on its two main aims: lowering risk for critical systems and securing innovation for new digital services.  As if that wasn’t bad enough, it’s cultural poison.

For those not familiar with the main thrust of Gartner’s “Bimodal IT”:

“Bimodal IT is the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility.  Mode 1 is traditional and sequential, emphasizing safety and accuracy.  Mode 2 is exploratory and non-linear, emphasizing agility and speed.”

So your Mode 1 services have been around for a while. They are mission critical, and they contain the important data that is of central importance to your organisation (your customer data, your trading platforms, your systems of record).  Meanwhile your Mode 2 services are new, they can break (it’s a beta!), but they are redefining how you engage with your customers and you need to get them out there fast, before your competition does.

Neat huh?

The Problem with “Protected” Innovation

Let’s start with innovation.  Bimodal IT is supposed to give your agile teams the free hand they need to create the new products and services that your business needs to deliver if it’s even going to have the luxury of managing the legacy software of tomorrow.

But in our experience this type of ‘protected’ innovation lasts about as long as a vase of flowers: you get an initial blooming, and then death.  Why? Because your innovation isn’t taking place in an ecosystem that allows it to take root.

What do I mean by ecosystem here? Well, how many services within any large organisation operate in complete isolation? I would hazard, none.  They all have dependencies or in some way interact with other services as part of their normal operation.  And where do all these related services reside?  They are much more likely to be in our “Mode 1” environment, simply because, if your organisation is anything other than a startup, you’ll have *way* more services that display Mode 1 characteristics than you’ll have new, “Mode 2” services.  Furthermore, age makes it even more likely that these existing “Mode 1” services will contain the critical data you need in order to give your new “Mode 2” service a chance of becoming a real innovation in the marketplace, and not just an app that puts hats on cats*

To be fair to Gartner, their Bimodal approach does allow for some limited data exchange between the two modes.  But if we’ve decided that we aren’t going to tackle fully modernising these “Mode 1” services (because we’re focused on stability), yet we are going to couple them to our new digital services, then the evolution of everything along the value chain will slow down to the pace that can be supported by the slowest component: our “Mode 1” service.  The assumption here is that everything along the chain needs to move at the same pace.  I’ve had many an argument where I’ve been told there will be no problem with the “Mode 1” dependency because we’ll ‘stick an API in front of it’.  But the point is that we’re operating in an environment of high uncertainty, so we have no idea yet what we’ll even want from an API to a dependent service; we need to iterate the API as well.
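To make the “iterate the API as well” point concrete, here is a minimal sketch of what that API-in-front usually looks like in code: a thin adapter the new team owns, translating the legacy system’s shape into the shape the new service wants.  All of the names and record shapes below are hypothetical, invented purely for illustration.

```python
# Sketch (all names hypothetical): an adapter owned by the "Mode 2" team
# that wraps a slow-moving "Mode 1" system of record. The team can iterate
# the adapter's contract freely -- but it can only ever expose what the
# legacy system already provides, which is the coupling problem above.
from dataclasses import dataclass


@dataclass
class Customer:
    # The shape the *new* service wants -- expect this to change often.
    customer_id: str
    display_name: str


class LegacyCustomerSystem:
    """Stand-in for a Mode 1 system of record (mainframe, SOAP, etc.)."""

    def fetch_record(self, cust_no: str) -> dict:
        # Hard-coded here; in reality a call into the legacy estate.
        return {"CUST_NO": cust_no, "FORENAME": "Ada", "SURNAME": "Lovelace"}


class CustomerAdapter:
    """The 'API in front of it', versioned and owned by the Mode 2 team."""

    def __init__(self, legacy: LegacyCustomerSystem):
        self.legacy = legacy

    def get_customer(self, customer_id: str) -> Customer:
        raw = self.legacy.fetch_record(customer_id)
        # Each iteration of the new service's needs means revisiting this
        # mapping -- cheap if the team owns it, glacial if it's Mode 1.
        return Customer(
            customer_id=raw["CUST_NO"],
            display_name=f'{raw["FORENAME"]} {raw["SURNAME"]}',
        )
```

The adapter buys you room to iterate the contract, but notice it cannot conjure data or behaviour the legacy system doesn’t have, which is exactly where the supertanker drags on the supercar.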

So we just hitched up our supercar to a supertanker.  We just killed innovation.  

Risk Vs. Change

Oh well, at least we did our utmost to safeguard our existing critical services by not compromising the processes that protect them, right?  Not really, sorry.

If we fail to keep an open mind and explore some, admittedly sometimes counter-intuitive, ideas around risk vs. change, we are missing out on opportunities to better protect our critical services.

The central (false) assumption around risk with Bimodal IT is that more change means more risk, but there is strong evidence that more frequent change can, and does, lower risk.

How?  Well, regular change of an IT system generally means that the content of any one change will be smaller, and a small change will tend to have less risk associated with it than a large one.  Another way of looking at it is that a small change will tend to have a smaller ‘blast radius’ in the event that the change goes bad, as well as a lower mean time to recovery (MTTR), a common measure of how quickly you can roll back or otherwise fix a failure.

A smaller blast radius and a faster MTTR, combined with the fact that a team which performs changes regularly will get better at it (further reducing the chance of error), mean that smaller, frequent changes actually LOWER risk, not increase it.
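A back-of-the-envelope calculation shows the blast-radius effect.  The numbers below are illustrative assumptions, not measured data: each individual change has a 5% chance of causing a failure, and a failed release’s impact scales with how many changes it bundles (a failed 10-change release is roughly 10× harder to diagnose and roll back than a failed 1-change release).

```python
# Illustrative only: compare one big release of 10 changes with ten small
# releases of 1 change each. Assumed: each change independently has a 5%
# chance of being bad, and the impact of a failed release is proportional
# to the number of changes bundled into it.
p_fail_per_change = 0.05

# One big release fails if *any* of its 10 changes is bad, and the impact
# of that failure is proportional to the whole batch (impact = 10).
p_big_fails = 1 - (1 - p_fail_per_change) ** 10
expected_impact_big = p_big_fails * 10

# Ten small releases: each fails with probability 0.05, but a failure only
# has an impact of 1 -- easy to spot, quick to roll back.
expected_impact_small = 10 * p_fail_per_change * 1

print(f"big batch:     {expected_impact_big:.2f} units of expected impact")
print(f"small batches: {expected_impact_small:.2f} units of expected impact")
```

Under these (deliberately simple) assumptions the single big release carries roughly eight times the expected impact of the ten small ones, before you even account for the practice effect on the team.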

Jez Humble has done a lot of great thinking and research on this and I’d highly recommend any of his writing.

So, I hope that’s enough to get you thinking that Bimodal IT may not hold as much promise as a strategy as you might initially think.  In my next post I’ll try to put the final nail in the Bimodal coffin by relaying some of our experiences of the cultural impacts of Bimodal IT.

As a technical strategy, it’s snakeoil, but as a way to organise your teams, it really is poison...

* there are some great apps that do this though 🙂

 

Want to see the slides I used at DevOps Days? Here you go…


 

 


2 Comments

  • Stu says:

    I’m all for tri-modal or multi-modal IT, i.e. having a middle group of small service-team “settlers” take innovative products/solutions and begin the process of making them more industrialised inside the company, per Wardley’s view on bimodal.

    That said: “So we just hitched up our supercar to a supertanker. We just killed innovation.”

    I think this is far too pessimistic, and not reflective of the “small service delivery teams” alternative in your deck. Ultimately we need to be aware of the fallacy that “the new system will be better designed than the old one”. It’s not always a wise or economic decision to rewrite a legacy system. Sometimes “stick an API on it and iterate” is exactly the right thing to do (though it certainly shouldn’t be done in a bimodal fashion, there we agree).

    The key is to move away from the all-seeing, all-knowing “enterprise service bus” towards decentralised API microservices that your service teams can iterate on in a “mode 2” style… and there need to be architectural constraints or facilities (aka “anti-corruption layers”) in place to protect the enterprise from performance or data quality problems. Eric Evans wrote Domain-Driven Design, which is a foundational text for the microservices movement… he has a great talk on “four strategies for dealing with legacy systems” that highlights many such approaches to “strategic design”.

  • […] in Part One of my blog post I discussed the challenges as I see them posed by a BiModal IT approach to driving […]
