When I came up with this title, I realised it stands in complete opposition to the title of a great post by Paweł Brodziński published almost a year ago. While I agree that having the right behaviours in place in a team is the goal, it’s important to understand the pitfalls of having too much in progress. The topic has been discussed a bazillion times already, yet I’d like to add to it by letting you experience it in a simulator. I’ve learned that this is more helpful than just talking about Little’s Law.
But before we discuss Little’s Law, let’s take a look at other aspects of having too much work in progress that are also very important.
Multitasking is quite a popular topic, and it has been studied by many scientists. Studies differ in their conclusions: some state that you can lose 10 IQ points due to distractions like notifications, some say that context switching makes you produce lower-quality products, others focus on the impact on learning ability, and others point out the stress caused by this phenomenon. All these studies have one thing in common: multitasking is counterproductive. If you’re interested in more detail, check out Wikipedia or this article.
To see that in action, you can try a little exercise (originally published by Dave Crenshaw):
- Using a timer, measure the time needed to write the text “Switchtasking is a thief” on a piece of paper, and then the numbers from 1 to 21.
- Then measure writing the same text and numbers in separate rows, but alternating one letter and one number at a time (so start with S, then 1, then w, then 2, and so on).
- See the difference in time, quality and possibly stress for yourself.
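To get a rough numeric feel for the same effect, here is a toy model of the exercise (my own illustration, not part of the original): it treats every switch between the two tasks as a fixed time penalty. The per-unit and per-switch times are arbitrary assumptions, but the shape of the result matches what you experience on paper.

```python
# Toy model of the exercise: two "tasks" of 21 units each
# (the 21 letters vs. the numbers 1 to 21).
# Every switch between tasks costs a fixed overhead.

UNITS_PER_TASK = 21
WORK_PER_UNIT = 1.0   # assumed seconds of focused work per letter/number
SWITCH_COST = 0.5     # assumed overhead per context switch, in seconds

def batched_time():
    # Finish the text completely, then the numbers: one switch in total.
    return 2 * UNITS_PER_TASK * WORK_PER_UNIT + 1 * SWITCH_COST

def interleaved_time():
    # Alternate letter and number: a switch after every unit but the last.
    units = 2 * UNITS_PER_TASK
    return units * WORK_PER_UNIT + (units - 1) * SWITCH_COST

print(f"batched:     {batched_time():.1f} s")
print(f"interleaved: {interleaved_time():.1f} s")
```

The focused work is identical in both cases; all the extra time in the interleaved run is pure switching overhead.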
Little’s Law (which actually isn’t a law, but rather a tautology) deals with queueing systems and states that the average number of items in a system equals the average arrival rate times the average total time an item spends in the system. This form is known as:

L = λW

(L = avg. no. of items; λ = avg. arrival rate; W = avg. time spent in system by an item. When you look at the formula again, it does look like a law, doesn’t it? You know, law, LAW, L=AW, L=λW…)
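A quick way to convince yourself that the identity holds is to simulate a trivial queueing system and compute all three averages independently. The sketch below uses a deterministic single-server queue; the interarrival and service times are arbitrary assumptions (chosen so the system is stable and never builds a queue):

```python
# Deterministic single-server queue: one task arrives every `interarrival`
# time units and takes `service` units to process. We compute L, λ and W
# independently and check that L = λ * W.

interarrival = 2.0   # assumed time between arrivals
service = 1.5        # assumed processing time per task (< interarrival)
n_tasks = 1000

arrivals = [i * interarrival for i in range(n_tasks)]
departures = [a + service for a in arrivals]   # no queueing: server is free

total_time = departures[-1]
lam = n_tasks / total_time                                   # avg. arrival rate
W = sum(d - a for a, d in zip(arrivals, departures)) / n_tasks
# Time-average number in system = total task-time in system / total time
L = sum(d - a for a, d in zip(arrivals, departures)) / total_time

print(f"L = {L:.4f}, λ*W = {lam * W:.4f}")
```

With randomised arrivals and service times the two sides still converge over a long enough run, which is exactly why the law is so robust.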
The only requirement is that the system needs to be in a steady state, that is, the variables defining the behaviour of the system don’t change over time. Those variables may be the average time between arrivals, task size distribution with its parameters and so on. This rules out the initial or final periods of the system, when it’s empty and is getting filled with tasks or the other way around.
There is one exception to this requirement. If we can “catch” our system empty (i.e. with L=0) twice, then the law holds true between those two moments in time. For further reading, click here.
Little’s Law in Our World
In the context of software development, you can often see a different representation of Little’s Law that is very similar to the original:
WIP = Throughput * Lead Time
(WIP = Work In Progress. Of course, all quantities are averages.)
As you probably noticed, what really changed is λ: the arrival rate has become Throughput. Interestingly, the change was made mainly because of the focus on manufacturing operations, where output is the most important measure of any manufacturing system. Sometimes it’s also easier to measure the output. This change has some implications, though:
- The average throughput should equal the average arrival rate. In other words, throughput should equal demand.
- Every job that enters the system should eventually exit it.
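This form is handy because in practice you usually know two of the three quantities and want the third. Rearranging gives, for example, Lead Time = WIP / Throughput. A minimal sketch, with made-up numbers purely for illustration:

```python
def lead_time(wip, throughput):
    """Average lead time (days) implied by Little's Law."""
    return wip / throughput

def wip(throughput, lead_time):
    """Average work in progress (tasks) implied by Little's Law."""
    return throughput * lead_time

# A team finishes 8 tasks/day on average and carries 24 tasks in progress:
print(lead_time(24, 8))   # -> 3.0 days on average per task
# Cutting WIP to 8 tasks at the same throughput:
print(lead_time(8, 8))    # -> 1.0 day
```

Note what this says: if throughput is already pinned by your capacity, the only lever left for shortening lead time is carrying less WIP.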
Now, having all that in mind, let’s see this in action!
Before you test Little’s Law in your team, you can play with it in the Kanban Flow Simulator. You can very easily set up a board there that shows how the law behaves.
Below you can find a screenshot of an example situation. The simulation is configured with default values, except for:
- “Task Creation Strategy”, which is set to “Constant Push” with demand of 8 tasks per day.
- The number of days used in the calculation of moving averages for diagrams is set to 100.
- Team members are not allowed to multitask or pair.
More information on configuration and defaults can be found here. You can review the full configuration of simulations by clicking on values in rows below.
So I ran it multiple times with different WIP limits; here are the results. All numbers are averages from the last 100 days of the simulation. The simulation was set up as described above; only the WIP limits changed.
| All limits set to | WIP (tasks) | Lead Time (days) | Throughput (tasks/day) | WIP / Lead Time (tasks/day) |
| --- | --- | --- | --- | --- |
The table above shows the raw data gathered while running the simulations. The last column is essentially Little’s Law calculated from the data: the numbers in this column should equal Throughput, and they do!
The diagram below shows the data in a more human-friendly way.
As you can see, WIP grows with the limits: the higher the limits, the higher the WIP, which is pretty obvious. Throughput also grows, reaching its highest value with limits set to 5. From that point on, Throughput is constant. This is due to our bottleneck (Development, in this case), which cannot process more tasks per day. You can also observe that Lead Time, which was stable for limits 1–5, grows for limits greater than 5. So with every further limit increase, Throughput stays the same while WIP and Lead Time grow.
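The plateau is exactly what Little’s Law predicts once a bottleneck caps throughput: with Throughput fixed, Lead Time = WIP / Throughput grows linearly with WIP. Here is a small sketch of that dynamic; the capacity numbers are simplifying assumptions of my own, not the simulator’s internals:

```python
# Once the bottleneck saturates, throughput is capped, and every extra
# task in progress just waits in a queue, inflating lead time.

BOTTLENECK = 2.5   # assumed max tasks/day the bottleneck can process
PER_SLOT = 0.5     # assumed throughput contributed by each WIP slot

def steady_state(wip_limit):
    # Below saturation each WIP slot adds throughput; above it, the cap wins.
    throughput = min(wip_limit * PER_SLOT, BOTTLENECK)
    lead_time = wip_limit / throughput   # Little's Law: W = WIP / Throughput
    return throughput, lead_time

for limit in (1, 3, 5, 8, 12):
    x, w = steady_state(limit)
    print(f"limit={limit:2d}  throughput={x:.2f}/day  lead time={w:.2f} days")
```

In this toy model the plateau starts at a limit of 5, and from there on lead time grows while throughput stays flat, mirroring the simulation results above.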
Without changing anything other than the limits (so, in particular, without adding or removing headcount), you may optimise for the shortest Lead Time and/or the highest Throughput. Using higher limits is pointless: they only increase Lead Time without any benefit in Throughput.
All of this can also be observed with more variation in the input. Random task sizes, random task arrival times, or moving people between columns all add variance, but Little’s Law still holds.
What about a challenge for you now? I’ve manipulated the system by setting the same limit for every top-level column, but you can set limits for subcolumns as well. Can you find the best configuration? Try changing just the WIP limits and put your findings in the comments section below! Click here to try. There are better configurations than those used above!
So, Limit WIP!
Limiting WIP is very powerful. It can help improve your performance without any cost: just put some limits in place and watch your metrics. But be careful, as there is no simple answer to the question of where those limits should be set. The simulator is only an approximation of reality, a simplified model; the real world is much more complex and scary. So experimentation is needed.
Importantly, you can set limits either too low or too high. The optimum value is somewhere in the middle, and you can choose what you want to optimise for.
Why not start collecting some data about your flow today, and then try changing the parameters? I think it’s worth the trouble.