A Misconception of Workload in Complex Systems

By Alex Yakyma | November 30, 2017

Most organizational leaders are concerned with how work is done in their organization. This is indeed a very important concern, but despite all the attention it receives, it is most often addressed incorrectly. Today we will dig a bit deeper into this topic.

The Army of Walking Capacity Buckets

Organizations usually rely on the concept of “capacity” to plan and execute work. The idea is very simple at its core, which in part explains its widespread adoption. If you have 120 people, for example, you get 120*X capacity within a time box of size X. You can now plan new initiatives by matching their size against this number.

Here’s a big problem with this logic. Assume we have two teams, A and B, with 5 people in each. Team A generally gets backlog items that create a fairly even load distribution across the team members. Team B, by contrast, has a person (who we will name Olga) with a unique skill set. In this time box most of the backlog items involve Olga, and she is quickly becoming a bottleneck. So, in the case of Team A, throughput is high and can be predicted reasonably well by the overall team capacity. In the case of Team B, however, the team’s capacity is absolutely useless as a predictor. Throughput basically depends on Olga’s individual capacity, leaving the rest of the team idle for the most part. In fact, it’s usually a lot worse than that: instead of going idle, they keep creating more “stuff” that piles up for Olga. So, in one case you have five people who move at the speed of five, and in the other, five people who move at the speed of one. If by any chance you think that Team B’s case is some sort of exotic mishap, I would encourage you to think again.

This logic scales further up if we consider teams instead of individuals, and so forth. The main takeaway is this: capacity is a very poor predictor of throughput. It is not just inaccurate, it is dangerous, because it produces unreasonably optimistic estimates of delivery capability and, as a result, substantially overloads people. It’s counterproductive to view an organizational unit (a team, a team-of-teams, etc.) as a collection of “capacity buckets” that can be uniformly applied to deliver value. Instead, the structure of dependencies, knowledge pathways and skill sets is what primarily determines the outcomes.
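
To make this concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers (team size, hours, item sizes) are invented for illustration, not taken from any real team; the point is only to contrast the “capacity bucket” estimate with what a single-specialist bottleneck, like Olga in Team B, actually allows:

    # Rough sketch with invented numbers: a "capacity bucket" estimate vs. the
    # throughput of a team where every item must pass through one specialist.

    TEAM_SIZE = 5
    HOURS_PER_PERSON = 80          # hours each person has in the time box (assumed)
    ITEM_HOURS = 8                 # total effort per backlog item (assumed)
    SPECIALIST_HOURS_PER_ITEM = 4  # portion of each item only the specialist can do (assumed)

    # Capacity-bucket logic: pool everyone's hours and divide by item size.
    naive_estimate = TEAM_SIZE * HOURS_PER_PERSON // ITEM_HOURS         # 50 items

    # Team B's reality: the specialist's hours cap throughput, no matter how
    # much spare capacity the other four people have.
    specialist_limit = HOURS_PER_PERSON // SPECIALIST_HOURS_PER_ITEM    # 20 items
    rest_limit = ((TEAM_SIZE - 1) * HOURS_PER_PERSON
                  // (ITEM_HOURS - SPECIALIST_HOURS_PER_ITEM))          # 80 items
    actual_throughput = min(specialist_limit, rest_limit)               # 20 items

    print(f"Capacity says {naive_estimate} items; the bottleneck allows {actual_throughput}.")

With these purely illustrative numbers the capacity-based estimate is off by a factor of 2.5, and the gap only widens as the specialist’s share of each item grows.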

But Wait, There’s More: Variability

Business demand doesn’t stand still over time. At some point the new backlog items begin to require a different skill ratio and induce a different dependency structure within the group. Team A in our example above, despite its prior stable record in terms of individual workload, may suddenly start to experience quite a bit of turbulence with the new scope. Suddenly, there are significant bottlenecks and constraints that could not have been predicted from prior periods. The problem, therefore, is further complicated: the internal structure of the workload (which primarily determines the outcomes) turns out to be a moving target.

Wow… Just when we thought we were approaching a solution…

You Have To Stop Fighting the Physics… It’s Not Helpful

Workload, by its nature, is heterogeneous and variable. This makes it practically impossible to effectively plan and execute work based on a “wholesale” capacity approach. It’s not that you will be “a little off”: your calculations, depending on circumstances, will be way off target. This is the way things are, and it’s up to us whether we accept that reality and exploit the underlying forces to produce better economic results, or keep fighting it and continue to pretend that workloads are homogeneous and predictably stable. The latter means that the organization is losing an opportunity to improve its performance. But what’s even worse, the organization fails to arrive at the right mental model of reality and continues to “optimize”, driving itself into even more trouble.

We have to think carefully about which we like better: the illusion of a predictable, uniform workload, or the actual business outcomes, because these are completely different things.

The Solution Is Not an Improved Mousetrap

This problem has to be solved at the fundamental level. First and foremost, the obsession with capacity and utilization has very clear roots: the flawed assumption that once you’ve defined a boatload of scope, the success of the initiative equals properly implementing that scope. This is not how product development works. This is a misapplication of manufacturing principles. We take the ideas that guide the process of creating repetitive value and apply them to product development, where we continually produce unique kinds of output. That “manufacturing” mentality has to go first, otherwise no method will ever produce good results.

Ok, so if that’s the wrong mentality, what is the right one? It’s actually very simple: in product development, learning and adjusting are the primary success factors. It’s not about the plan, it’s about your ability to quickly understand and properly interpret new facts and then adjust the course of action based on them. It is very easy to tell whether an organization understands the nature of product development or follows the illusion of predictability. Simply look at the consequences of deviating from the plan or initiating change. If the organization welcomes change as a vital component of success, if the policies and rules (as well as the actual leadership attitude) make it easy to adjust scope and effort allocation, then the organization obviously knows what it’s doing. If, on the contrary, changes are frowned upon, approvals for them are very hard to obtain, and people would rather head in a knowingly sub-optimal direction than deal with the hassle of implementing change, then you are dealing with an organization that treats product development as manufacturing.

So, This Is How It Works

Learning and adjustment. L-e-a-r-n-i-n-g and a-d-j-u-s-t-m-e-n-t…

This doesn’t preclude longer-term planning, but it definitely implies a fundamentally different treatment of planning and execution:

  1. The emphasis is on outcomes rather than outputs. This means more rapid, cross-cutting feedback loops and metrics oriented toward “value” rather than “scope”.
  2. Planning and forecasting are informed by empirical evidence of delivery capability rather than capacity thinking, i.e. they should reflect prior knowledge of constraints and bottlenecks within the system (see the forecasting sketch after this list).
  3. It’s okay to plan, but plans should not over-constrain the outcomes. Your organization wants to know approximately how much (money, time, etc.) it is going to take to build this or that? It’s totally fine to want to know that, as long as the following rule applies: the organization treats the plan as a collection of assumptions, some of which will play out as expected and others not at all. This means that the organization will welcome new information and will adjust the course of action accordingly. Re-scoping and highly incremental implementation of large initiatives become the norm: scope is an important variable, and by changing it we can optimize the value delivered.
  4. To be able to quickly learn and adjust, self-organization is vital. Indeed, given the heterogeneity and variability of workload, there is no chance of applying a top-down approach to most of the tactical decisions. This shifts the role of leadership quite a bit and gears leaders up to address the impediments to self-organization, collaboration and fast learning.
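
To illustrate point 2 above, here is one possible sketch (my own, in Python, with invented sample data; the article does not prescribe a specific method): instead of summing capacity, resample the throughput actually observed in recent time boxes and report a range of outcomes.

    # Minimal Monte Carlo forecast from observed throughput.
    # The historical numbers below are invented for illustration only.
    import random

    observed_throughput = [7, 3, 9, 2, 6, 4, 8, 5]  # items finished in past time boxes
    remaining_items = 60                             # backlog still to deliver

    def forecast(history, backlog, trials=10_000):
        """Resample past throughput to estimate how many time boxes the backlog needs."""
        outcomes = []
        for _ in range(trials):
            done, boxes = 0, 0
            while done < backlog:
                done += random.choice(history)  # draw one plausible future time box
                boxes += 1
            outcomes.append(boxes)
        outcomes.sort()
        # Report percentiles, not a single "the plan says" number.
        return outcomes[len(outcomes) // 2], outcomes[int(len(outcomes) * 0.85)]

    p50, p85 = forecast(observed_throughput, remaining_items)
    print(f"~50% chance within {p50} time boxes, ~85% chance within {p85}")

Because the forecast is rebuilt from recent history, shifting bottlenecks and skill ratios show up in the numbers instead of being averaged away by a capacity figure, and reporting a range rather than a single date makes it easier to treat the plan as a set of assumptions.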

Wait a Second, But We Are Special…

Who isn’t? I can tell you though that there are two types of “special”:

  1. Certain planning and workload management requirements are externally imposed. That’s the case, for example, when you are a contractor whose contract agreement uses scope as the key “currency”. Let me be clear about something here. For starters, just because you are operating under such constraints doesn’t mean that those constraints are helpful to either of the parties. This topic deserves its own big article (or articles, rather). I hope I’m not creating the impression that we are dealing with some ideal scenario here. I happen to have a very good understanding of what contract work is like, as that’s the environment where I started my career and worked in multiple capacities, on both sides of the business: as a contractor and as a customer, at different times. So I don’t underestimate this case. At the same time, while it’s the customer who originally imposes the constraints, it’s the contractor who fails to make the leap toward a better collaboration model: demonstrating an alternative approach in action and gradually taking the customer on a journey that will benefit both parties.
  2. The constraints are self-imposed. Well, no excuse then. You are doing it at your own expense. Time to pivot and start treating workload in product development the way it should be treated.

Lastly, the usually superficial adoption of Agile and Lean practices unfortunately does not prevent capacity thinking from remaining the primary workload management tool. You need to address the root causes of the problem, some of which we discussed above, for Agile and Lean to actually work in your environment.

So… how is workload managed in your organization?


By Alex Yakyma, Org Mindset.