Organize Around Value with Agile Long Term Planning


In my previous post I discussed the various agile practices that can be used to scale from self-organizing teams to ecosystems of self-forming teams. In this post I will focus on how members of teams in an Agile Ecosystem can collaborate to perform the Long Term Planning required to understand if they are organized for value.

To recap, an Agile Ecosystem is a cohesive collection of agile teams and supporting structure (Travellers, Enablers, and Service Providers) focused on a common set of products, problems, or outcomes.

Let’s start by scaling Team Sprint Planning to Agile Long Term Planning. Agile Long Term Planning is the act of applying team-level sprint planning concepts at a larger scale, typically an ecosystem.

When we think of team level planning we often think of teams that:

  • conduct release planning, where Epics are placed into a product backlog
  • conduct sprint planning, where Epics are decomposed into stories and placed onto a sprint backlog based on team velocity

The Graduated Backlog

Scaling agile planning to an ecosystem includes expanding our concept of a backlog to what I call a Graduated Backlog. A Graduated Backlog encompasses both a larger Time Horizon, from days and weeks to months and perhaps quarters, and a larger scale of Value Increments, from individual stories to larger increments: think Epics (Sagas?), Features, Business Value Increments, Minimum Viable Products, etc.

It is important to note that a graduated backlog has orders of magnitude less precision the farther away the work is from being started. When we don’t expect to start work for a long period of time, we expect it to be poorly understood. As the team completes higher priority items, this work will “move left” as its start date draws closer to today’s date.

The team is expected to decompose work as it gets closer to starting, iteratively breaking Strategies into Outcomes, Outcomes into Business Increments, Business Increments into Thin Slices, and Thin Slices into Tests (some teams break stories into tasks). A more agile-friendly taxonomy would be Sagas → Epics → Features → Stories → Tasks. Let teams use whatever words they want to describe how they break things up; some alignment at the Ecosystem level is helpful but not mandatory.

Use Throughput to Determine Where to Place Demand on the Graduated Backlog

But how can we anticipate when the team will be able to start a particular piece of work? We can do this by estimating the team’s throughput. Agile teams deliver value by breaking larger requests into smaller increments, often known as Stories. The more teams deliver, the better they can anticipate their likely story throughput over time, for example average stories per month or average stories per week. This number can be used to place increments of value in a Graduated Backlog, positioned according to their likely starting week, month, or quarter (remember: the farther out we go from today, the less precisely we want to define the start period).
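This placement can be sketched in a few lines of code. This is a hypothetical illustration, not a tool from the post: the item names and story-count sizes are made up, and it assumes the backlog is already in priority order.

```python
def place_on_graduated_backlog(items, stories_per_week):
    """items: prioritized list of (name, estimated_story_count).
    Returns (name, likely_start_week) pairs, where week 1 is now."""
    placements = []
    stories_ahead = 0  # stories queued before this item can start
    for name, size in items:
        start_week = stories_ahead // stories_per_week + 1
        placements.append((name, start_week))
        stories_ahead += size
    return placements

# Illustrative backlog: a team delivering ~10 stories/week
backlog = [("Feature A", 12), ("Feature B", 25), ("MVP C", 40)]
print(place_on_graduated_backlog(backlog, stories_per_week=10))
# [('Feature A', 1), ('Feature B', 2), ('MVP C', 4)]
```

Note that items farther down the list land in later, coarser start periods, which matches the graduated-precision idea above.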

Caveat: the longer the time horizon of your backlog, the more likely the items in it will be thrown out! This is a hard-learned lesson for many teams and the organizations they belong to. Try to continually reduce the time horizon of your backlog!

Not all teams have to agree on using stories to capture throughput, although doing so makes measuring across teams a little easier. As long as a team is able to decompose work into relatively small increments that can be tested / verified for correctness, it doesn’t matter what the team calls the unit of work used to capture throughput.

What About Velocity?

Many of you will ask: why not use team velocity, a commonly accepted metric for communicating how much agile teams deliver? The short answer is that teams passionate about using velocity should be allowed to do so, but where we can use throughput, we should. Why?

Velocity is based on the idea that teams take individual work items, often called stories, and perform relative estimation on them, expressed as a count of story points. These story points have no unit of effort in and of themselves. Rather, they are just a means to relatively rank each story in terms of complexity and effort. The team then estimates its typical velocity as the number of story points delivered end to end within a short time period, often a couple of weeks, frequently called a sprint.

So what is wrong with estimating points and tracking velocity? Well for starters, if we accept that stories are already small, and that the team will deliver quite a few of these in a short period of time, then points serve as a needless extra layer of abstraction. We can make the exact same observations by simply counting the number of stories that flow through our system of work.

The biggest value of estimating stories in points is to ensure that we only work on stories that are small. Once we have accomplished this, the difference in complexity between one story and another will average out over time. Stories, being small, have a small amount of inherent variation. What we really want teams to do is watch out for items in the backlog that are not small.

In other words, we need to be careful that our stories are actually stories and are not masking a much larger piece of work. When that happens, we need to break the item up into stories before we start working on it!

Throughput Allows Us to Track End-to-End Capability

Throughput also allows us to look at capability outside a very narrow time horizon such as a sprint, and across a larger part of the value creation process, i.e., more than just engineering. This lets us track the throughput of items that can take longer than a single sprint to deliver.

Let’s take an example: using flow metrics, we have determined the capacity of a team; in this case they have delivered 90 stories over the last 9 weeks. This gives us a team throughput of approximately 10 stories per week, or 40 stories per month.
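The arithmetic is deliberately simple. A minimal sketch, using the 90-stories-over-9-weeks figures from the example (the 4-weeks-per-month approximation is an assumption):

```python
# Throughput from flow metrics: stories delivered over an observation window
stories_delivered = 90
weeks_observed = 9

weekly_throughput = stories_delivered / weeks_observed  # 10 stories/week
monthly_throughput = weekly_throughput * 4              # ~40 stories/month, assuming ~4 weeks/month

print(f"{weekly_throughput:.0f} stories/week, ~{monthly_throughput:.0f} stories/month")
```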

Fill the Graduated Backlog Through Relative Sizing

When a team is good at decomposing its work into small, incremental, testable units, we can use the team’s story throughput as a measure of its ability to deliver, and then estimate items in the backlog in terms of the number of stories in them. This is where relative sizing can really shine! We can use it to quickly size new work by asking: how big is this piece of demand compared to something that was previously delivered? Is it bigger or smaller? By a lot or a little?

I like to do relative sizing visually, by placing the work on a vertical axis and encouraging the team to place previous initiatives along a complexity scale. Estimating is then simply an act of placing new items in the right spot according to relative complexity.
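The comparison itself can be reduced to one multiplication. A hypothetical sketch: the reference items and their story counts below are illustrative stand-ins for work the team has actually delivered, not examples from this post.

```python
# Story counts from previously delivered initiatives (illustrative)
reference = {
    "Reporting revamp": 8,
    "Payments integration": 25,
    "Search rewrite": 50,
}

def relative_size(comparison_item, factor):
    """Size new work as 'about `factor` times' a previously delivered item."""
    return round(reference[comparison_item] * factor)

# "This looks a bit bigger than the reporting revamp, maybe 1.5x"
print(relative_size("Reporting revamp", 1.5))  # 12 stories
```

The point is that the anchor is real delivered work, so the conversation stays grounded in experience rather than abstract point scales.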

Go Deeper Where You Need To, But No Deeper

Initially this is just a swag! We want experience to inform the conversation! Where there are too many unknowns, the team will need to spend more time exploring the work using techniques like impact mapping or story mapping. It’s important, however, not to facilitate these exercises with an eye towards doing a complete analysis. The team needs to take the vantage point of an explorer: go wide for the most part, and go deep only to uncover assumptions that are getting in the way of a reasonable, low-precision sizing effort.

This relative sizing can be done in successive waves of refinement based on how soon the delivery start date is: starting at the initiative level, then doing deeper dives to get a better understanding of smaller increments of value, such as features, thin slices, and eventually stories.

Again, taking our example team: if they have been demonstrating a throughput of 40 stories a month, then the team could relatively size future work in terms of stories and get at least a reasonable understanding of when they could start a particular piece of work, assuming the work was reasonably prioritized.

Again, the point here is not a highly precise model. We are using reasonable approximation to make imperfect information visible. We are providing a space where multiple teams and their stakeholders can collaborate and align on what the future could hold, with an eye towards revising as we uncover new information.

With a little bit of visualization, we can connect our backlog to multiple teams, say all the teams in an Agile Ecosystem. This allows us to see who is working on what outcome. We can clearly see if one team is overloaded, or if there is less demand for another team. In this way, Long Term Planning allows team members to understand what outcomes will be worked on, and allows them to move between teams based on that understanding.
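The overload check is just queued demand divided by throughput. A hypothetical sketch, where team names, numbers, and the two-months-of-demand threshold are all illustrative assumptions:

```python
def load_report(teams, overload_threshold_months=2):
    """teams: dict of team name -> (monthly story throughput, queued stories).
    Returns dict of team name -> (months of queued demand, status)."""
    report = {}
    for name, (throughput, queued) in teams.items():
        months = queued / throughput
        status = "overloaded" if months > overload_threshold_months else "ok"
        report[name] = (round(months, 1), status)
    return report

ecosystem = {
    "Payments": (40, 120),  # 3 months of demand queued
    "Search":   (30, 25),   # under one month
    "Platform": (35, 70),   # exactly 2 months
}
print(load_report(ecosystem))
```

A physical card wall or a column chart per team communicates the same thing; the value is making the imbalance visible so people can choose to move.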

This type of view can be facilitated using physical card walls or electronic ones. What is important is that all members of the Ecosystem spend some time looking at it holistically and asking questions such as:

  • are we organized for value?
  • should I move to another team?
  • should our team be working on a different outcome?
  • how can I provide better support to another team?

There are infinite variations on how we can get people in an ecosystem to look at this work; I’ll cover that in the section on running Cadences aimed at Multiple Levels of Feedback.

Using Long Term Planning with a Graduated, Visible Backlog is just the beginning of how we can scale agile practices to help an Ecosystem of teams continuously organize. As the complexity of our ecosystem increases, we can look to Work Types and Visual Flow Management with Kanban to enrich team members’ understanding of how to organize around value, a topic I’ll cover next.

Read more in Agile Organizational Design the book.
