Archive for the ‘Software Development’ Category

What’s Next for the Agile Manifesto

Posted on February 13th, 2011 by Dennis Stevens  |  11 Comments

This weekend, about 33 people got together at the Snowbird Cliff Lodge to celebrate the 10th anniversary of the Agile Manifesto. This group was invited and hosted by Alistair Cockburn. The goals were to have a celebration, talk about the successes achieved and the problems facing the community, and hopefully contribute something back around the problems that we can sensibly address.

This is not a replacement for, or an extension of, the Agile Manifesto. It is more of a focusing statement relevant to our understanding of today’s problems and needs. There was a lack of alignment at some levels – although the expected disconnects, Kanban vs Scrum vs XP or whatever, didn’t arise. The biggest lack of alignment I saw was between those who feel we need to address Agile across the business and those who believe Agile is only about software development, asking “Who are we to be describing how the organization should be designed?” I believe this gap has at its roots the different perspectives of the people who attended. Some work within software development teams. Some help organizations adopt Agile. And some help organizations exploit the Agile ability to rapidly deliver working increments of value to update business models and deliver on new value propositions.

But there was a ton of alignment on some issues. There was great energy and flow in the room. There was some negativity and cynicism that came from our focus on what problems exist. But that was the exercise – identify problems that we can sensibly address. I found the entire weekend to be valuable and enjoyable. And I believe we came up with something of value as well.

Process

The facilitators had identified seven categories of questions and issues through pre-session interviews with some of the attendees.

  • The Future
  • Training
  • Technical
  • Culture
  • Enterprise / Other Communities
  • Value
  • Agile Backlash

After an initial group warm-up, we broke into seven groups, each assigned to an area. We worked to identify the gaps or issues that need to be addressed in the industry today in our assigned area. We then rotated around the room in our teams to review the other areas, adding any additional issues we identified and moving some issues from one category to another. We then went back to our original categories and identified underlying themes from the issues. These became the big problems that needed to be addressed. This generated great conversation within our groups, and the underlying themes were pretty clear and consistent. We then did a read-out of our themes to the bigger group and had some additional discussion.

Then we took a five-hour break. Some people stayed and did more work, some napped, some went skiing. I went up on top of the mountain with Alan Shalloway, Joshua Kerievsky and Ahmed Sidky. The view was awesome and I really enjoyed the company and the conversation.

When we reconvened, we worked to consolidate the big problems under the following headings.

  1. What problems in software or product development have we solved?
  2. What problems are fundamentally unsolvable?
  3. What problems can we sensibly address — problems that we can mitigate either with money, effort or innovation?
  4. What are the problems we don’t want to try and solve?

We then grouped the problems under “What problems can we sensibly address” into themes and dot-voted to identify the biggest issues. Finally, we worked to craft a sentence to address each of the four top themes. This became a challenging process, as there were 30 strong-willed people with different perspectives all trying to influence the sentences. As we narrowed this down through our consensus process there was a lot of discussion and debate. At the end of the allotted time we had the following.

We, the undersigned, believe the Agile community must:

  1. Demand Technical Excellence
  2. Promote Individual Change and Lead Organizational Change
  3. Organize Knowledge and Promote Education
  4. Maximize Value Creation Across the Entire Process

Demand Technical Excellence

At the end of the day, you can’t deliver value through technology if you are not delivering quality. This category brings in aspects of architecture, engineering, and design. This is still a pressing issue and must be addressed in the community to deliver on the promise of the Agile Manifesto.

Promote Individual Change and Lead Organizational Change

Here is an example of a sentence that we had a broad range of perspectives on. Without adoption by individuals and alignment of organizational governance and management models, Agile won’t deliver on its value proposition.

Organize Knowledge and Promote Education

This isn’t just about the practitioners – it includes the broader business context as well. The community needs to build on the broad body of knowledge that exists within and outside the community – we have to avoid reinventing everything. Diversity of thought is important to the ongoing growth of the community – but we don’t actually do a very good job of intentionally building on the body of knowledge.

Maximize Value Creation Across the Entire Process

Software development is not an end unto itself. Too many organizations moving toward Agile are focused on just the software development team. This is only valuable to the point that the software development team is the constraint in the organization. We need to learn how to do a better job of defining value, aligning cadence across the organization, and improving the flow of value from concept to delivery.

Closing Thoughts

This was a dynamic crowd with a lot of experience. In this group, there was very little contention between flavors of Agile. Everyone was open and working to address the needs of the industry and the broader needs of the communities we live in. There are lots of problems – I am sure there will be a lot of talk about “The Elephants” – problems that didn’t explicitly make the list. There will be some dissenters. And I think there may be some work to refine the sentences. Hopefully without losing the meaning of the points.

I believe that Alistair’s goals were achieved. We had a nice celebration – we came to consensus (although not unanimous agreement) on the big issues in front of us. And we shared a lot of energy and community. I got to meet and develop relationships with a number of amazing people. And we ate and drank a lot both nights. I don’t know what comes out of this effort in the bigger community. Now, let’s see how the Agile community responds to the outcome.  I hope we rally around the big issues and continue to improve where we work and the value we deliver.

Standing By My Agile Claims

Posted on November 25th, 2010 by Dennis Stevens  |  2 Comments

I recently posted a presentation, Agile Foundations and Agile Myths, at www.slideshare.net/dennisstevens. My goal in the presentation is to juxtapose destructive behaviors arising from a predictive approach and destructive behaviors from an agile approach. This is a deliberately provocative presentation – finding flaws with the common interpretation of both sides of the discussion. Glen Alleman over at Herding Cats responded to my presentation in his post More Agile Claims. He took offense to my “strawman” arguments against the predictive approach.

Predictive Approach to Improving Software Development

What I am calling the Predictive Approach addresses the “mental scaffolding” or underlying assumptions I see every day in companies practicing what they believe they are guided to do in the PMBOK®, OPM3®, CMMI, and DoD literature. The Predictive Approach describes the practices that arise from the overzealous and misguided belief that the following approach is the best way to improve software delivery.

  • Standardize processes
  • Optimize resource utilization
  • Perform rigorous up-front design
  • Produce comprehensive documentation
  • Get up front commitment to a definitive Scope, Cost and Schedule
  • Enforce strict adherence to the detailed plan

This is where Glen takes offense. He points out that nowhere in the PMBOK®, OPM3®, CMMI or DoD literature are we instructed to apply these in the value destroying way that I discuss. In fact, he charges that I am just “making it up.” But, Glen misses the point. I am not arguing about the literature – I agree with Glen’s interpretation. And I am not making it up. Every day I go into companies that are applying this approach in value destroying ways. In the last year I have been in multiple companies whose documented methodologies prescribe:

  • linear-document driven "over the wall" process thinking
  • assigning resources to multiple projects
  • comprehensive upfront design with detailed documentation
  • commitment to definitive scope, cost, and schedule, and
  • strict adherence to precise start and stop dates for individual tasks

I am not debating the literature – it is an issue of practice. Glen suggests in our discussion in the comments to his post that even the government sometimes exhibits this value destroying behavior.  My points aren’t straw man arguments against the literature – they are attacking a common set of misguided underlying beliefs that the value destroying practices arise from.

In fact, the Agile Manifesto itself arose as a response to common practice – not as a set of new practices. By the time the Agile Manifesto was written, its authors (and many others, including Glen Alleman and myself) had been practicing software development in an “agile” way for a decade or longer. Read the Agile Manifesto as a response to value destroying practice – one that tries to bring attention back to what is necessary to successfully deliver software projects.

The Agile Manifesto

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Agile Approach to Improving Software Development

And now, … the rest of the story.

The Agile literature rails against too much planning, non-value-added documentation, misguided change control, de-humanizing linear process, and layers of bureaucracy. There is clear recognition in this faction that these are value destroying. But the overzealous and misguided application of these beliefs leads to the other common set of value destroying practices I frequently run into.

  • No Planning
  • No Documentation
  • No Estimating or Commitments
  • No Process
  • No PM, BA or QA

Again, I don’t see the core agile literature advising these practices either. Alistair Cockburn, Jim Highsmith, Mike Cohn, Jeff Sutherland and Kent Beck all prescribe effective practices that support appropriate and progressively elaborated planning, documentation, commitment, and process. They all support the roles of project management, business analysis, and quality assurance when these aid in managing complex products. So the second part of my presentation addresses these misguided practices and presents a responsible, situation-specific approach to delivering software that focuses on responding to change, enhancing flow, and delivering value to the customer.

The Core Discussion

Glen’s response highlights an interesting challenge. My suggestion that current project management practices are value destroying brought out a strong response against agile. I have had the same response from the agile community when I suggest that agile beliefs result in value destroying behavior. Businesses need to be able to make informed business decisions – so we need predictability and planning (check out Glen’s immutable principles). At the same time we need to manage the work in a way that protects and optimizes value. What is missing in many organizations is a way to have the responsible discussion exploring underlying assumptions. Companies need to find ways to make the situation-specific changes necessary to balance predictive planning and agile execution.

We are Doing QA All Wrong

Posted on August 23rd, 2010 by Dennis Stevens  |  17 Comments

Over the last month I have presented at or participated in several user groups and conferences. There was Southern Fried Agile in Charlotte, Agile 2010 in Orlando, a PMI Atlanta professional growth event, meetings of Agile Atlanta and the Atlanta Scrum Meet-up, and Product Camp Atlanta (not surprisingly, in Atlanta). These conferences give me an opportunity to meet with people doing, or attempting to do, agile in the enterprise. There are a lot of common patterns that I am seeing. One of the most common is that we are doing QA all wrong in most companies.

Imagine if someone approached you up front and said, “I have an idea. Let’s design our product development system like this. First, development is going to take in less than precise requirements. Then development is going to build something from the requirements. Then we are going to deliver this to a separate group that will then figure out how to test it for accuracy. Then that team will test it. When the product doesn’t pass the test we just defined, we will return it to development for them to fix.” Is this the way that your test organization operates with your development organization? Does the relationship between QA and development seem reasonable? Why are challenges between QA and development so prevalent in our organizations?

We Don’t Understand Quality Assurance

Quality Assurance and Quality Control are different things. We have Quality Assurance organizations – but most often they are involved in Quality Control activities.

Quality Control: Find defects. These activities are focused on validating the deliverable. They are performed after the product (or some aspect of the product) has been developed. Quality Control also enforces fit to specification. This protects production and our customers from receiving defective deliverables that do not meet the specification. But when done on a completed product it doesn’t enable flow, and defects found downstream of the deliverable result in rework in development.

Quality Assurance: Prevent defects. These activities are focused on the process used to create the deliverable. They are performed as the product is being developed. So, Quality Assurance should be helping us improve our processes so we don’t create defects in the first place. A key point to this is that creating a common understanding of the specification and how we will validate it up front is really important to eliminating defects.

If your Quality Assurance activities are primarily focused on finding defects and ensuring fit to requirements after the fact, then your QA group is primarily doing Quality Control. Getting QA involved in improving the specification communication process is a good place to start. A specific technique might be communicating acceptance criteria to development with the same examples QA will use to perform acceptance tests, as in the sketch below.
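To make that concrete, here is a minimal sketch of acceptance criteria expressed as executable examples shared by QA and development. The discount rule, function name, and numbers are all invented for illustration:

```python
# Hypothetical acceptance examples for a discount rule, agreed up front
# and handed to development as the contract QA will later test against.
import pytest

ACCEPTANCE_EXAMPLES = [
    # (order_total, customer_tier, expected_discount)
    (100.00, "standard", 0.00),
    (100.00, "gold", 10.00),   # gold customers get 10%
    (500.00, "gold", 75.00),   # gold orders over 250 get 15%
]

def calculate_discount(order_total, customer_tier):
    """Development implements against the shared examples."""
    if customer_tier == "gold":
        rate = 0.15 if order_total > 250 else 0.10
        return round(order_total * rate, 2)
    return 0.00

@pytest.mark.parametrize("total,tier,expected", ACCEPTANCE_EXAMPLES)
def test_acceptance_examples(total, tier, expected):
    assert calculate_discount(total, tier) == expected
```

Because development builds against the very examples QA will later run, far fewer defects are discovered after the fact.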

We Don’t Understand the Cost of Rework

Finding defects after the product has been delivered ensures that you will generate rework. Rework from failed QA tests is extremely costly to flow. Sending work back into development interrupts work in process. This has a dramatic effect on cycle time across the system – which results in increased cost for every item in the system.

After the fact testing is also typically done in batches when a release or iteration is delivered to QA from development. Defects uncovered in these batches create bottlenecks at development.  Just like when traffic backs up on the highway, bottlenecks in product development result in a long term cost to the overall system.

We can dramatically reduce the cost and improve the adaptability of the system by reducing this rework. One way to address this defect bottleneck is to perform quality control frequently – not in batches. Another way is to prevent defects through effective quality assurance in specification understanding and in development. This will reduce the number of defects that escape to Quality Control.

We Have Designed the Product Development System Wrong

We used to teach that there needed to be a separation of testing and development. Having them both report to the same person was like having the person who writes the checks in accounting also reconcile the checkbook. It was too easy to cheat. We created QA teams that were completely separate from development – and that reported up through different management structures. I was talking to the director of development at a company where the QA and development chains of command meet at the CEO. When you design the organization so that QA and development sit in different silos, you make it very difficult to have effective Quality Assurance involvement. You will ensure Quality Control behavior.

This separation made sense at one point (I think). It makes less sense today – especially as the resulting silos lead to slower delivery times and increased costs. What has changed in the last 10 years is:

  • the cost of setting up testing environments is much lower
  • our ability to rapidly and frequently produce testable deliverables for these environments is much greater, and
  • our ability to perform acceptance, integration, regression, performance, load, and stress testing frequently is enhanced.

Quality Assurance needs to be part of the development team and process. We need to change our thinking around this.

So we need to integrate Quality Assurance very early in the process. We need to ensure early arrival of acceptance test examples to development. But we still need independent test teams when we are doing large or complex projects that require integration of many teams’ outputs. Independent test teams can run parallel testing or test immediately after delivery. When they are getting high-quality components, they can focus on user experience, performance, load, and stress testing. Move everything you can into the development teams (enhanced with quality assurance) so you can produce low defect escape rates. Then eliminate the defect bottleneck, and the increases in cycle time, by treating most of the defects found by the independent test team as new specifications to development.

We have the Wrong Expectations

We Expect Perfection – Not Improved Value

Our QA organizations tend to be focused on defect identification and perfection in delivery. The focus of QA, and of everyone in the organization, needs to be on producing value-add and delighting the customer – not on defect-free code. This doesn’t mean that defects are okay – but every application has some defect or limitation. Unless you are building aircraft navigation systems or other critical systems, you are going to deliver some defects. Make sure the defects don’t block business value – but there is a point where it is more important to the customers and the business to deliver value.

We Expect Local Optimization – Not Flow

Rather than perfecting Quality Control, QA can add a lot of value helping the organization enhance the flow of value through the entire product development system. Delivering a unit of value requires analysis, development and testing. There is no value delivered internally. The local optimization problem of improving one area at the expense of another is not useful. Crossing the boundaries between analysis, development, and testing is needed to optimize flow of value. The various teams need to figure out how to do it together – not just improve their area.

We Expect Our Current Constraints to be Permanent

Organizations labor under the belief that their constraints are fixed. Find a way to break down the barriers that lead to your constraints. Don’t accept the current constraints as permanent. Increase thinking at the system level to find ways to improve the product development process.

We Don’t Use the Tools Available to Us

There is an impressive amount of automation and related techniques available at very low cost to help build a QA environment that allows for fast feedback, building on progressively tested levels, and improving opportunities for frequent testing. Developers should be using tools that support automated unit testing and only checking in code that passes all their unit tests. This provides a basis for validation and refactoring. Test-driven development, or test-just-after development, should be ubiquitous – but it is not. Continuous Integration environments that ensure each check-in results in a valid and testable platform help teams perform integration and build validation. This helps everyone work on a stable build and ensures that no one gets way off track. There are automated tools for acceptance testing and some levels of UI testing. When the tools are combined you can build a very powerful regression testing environment that can also rapidly perform performance, load, and stress testing.

The lack of general adoption of these tools probably has multiple roots. Sometimes it can be difficult to leverage all of them with legacy systems. They require new skill sets and generate new artifacts that have to be maintained. On the surface, they automate tests that are currently performed by QA groups, and so there is fear and organizational threat. But some subset of these tools should be broadly adopted by each team. QA teams can then focus their efforts on value-added activities like Exploratory Usability Testing and Release Validation.
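As a minimal sketch of the “only check in code that passes all the unit tests” gate described above – assuming a pytest-based suite; a Continuous Integration server would run the equivalent step on every check-in:

```python
# Run the automated unit test suite and refuse to proceed on failure.
# A developer runs this before check-in; a CI server runs the same
# gate on every check-in to keep the build valid and testable.
import subprocess
import sys

def run_gate() -> None:
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    if result.returncode != 0:
        print("Unit tests failed - do not check in; the build is not valid.")
        sys.exit(result.returncode)
    print("All tests passed - check-in can proceed.")

if __name__ == "__main__":
    run_gate()
```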

Summary

Most organizations are doing QA all wrong. It is time to start doing it right – the cost of low quality in dollars, time, customer satisfaction, and the engagement of employees depends on it. QA organizations need to:

  • Switch their primary focus to Quality Assurance
  • Help the organization reduce re-work
  • Integrate QA and QC with analysis and the development organization and work to eliminate defects – not just identify them
  • Enable value, flow, and ongoing improvement
  • Leverage proven tools and techniques across the Product Development Organization to increase the value delivered

The status quo is not sufficient to meet the needs of most organizations. While not all of these approaches will be right for every situation, the rate of adoption is too low to be justified. There is far too much latent value tied up in organizations because we are doing QA all wrong. What obstacles do you see in your organizations?

I Signed the Oath of Non-Allegiance

Posted on July 18th, 2010 by Dennis Stevens  |  2 Comments


Alistair Cockburn, in the Oath of Non-Allegiance, has issued a call to action for the software development community to stop bickering and calling out contending methodologies. He has called for “discussion about whether an idea (agile plan-driven impure whatever) works well in the conditions of the moment.” As someone who has earned his PMP and CSM, received certificates from the Lean Enterprise Institute, and completed David Anderson’s Kanban Coaching workshop, I have had the opportunity to see this bickering up close. I have occasionally even been the target of it. Here is Alistair’s call to action:

I promise not to exclude from consideration any idea based on its source, but to consider ideas across schools and heritages in order to find the ones that best suit the current situation.

This is becoming an increasingly familiar theme. We see it in Laura Brandenburg’s Business Analyst Manifesto. It is expressed by Liz Keogh, Jean Tabaka and Eric Willeke in “A Community of Thinkers”. I am involved in a PMI Agile team where we are trying to make sense of the benefits of bringing together the PMI Bodies of Knowledge (Portfolio, Program, and Project) in the context of Agile. I am working with the IIBA’s Agile Business Analyst core team to express the Business Analysis Body of Knowledge in an Agile fashion. I also participate in various Yahoo and Google groups where we are working at having these kinds of conversations involving Kanban and Lean.

These teams and discussion groups bring together practitioners and thought leaders from various communities working to understand and to share. Sometimes the discussions get heated. You have a lot of successful, intelligent people sharing their experiences and trying to make sense of their successes while simultaneously trying to expand their world view. The awesome thing is that there is a lot of learning and connecting going on in these communities.

You may want to protect your turf – but this is the future. The tools and resources that support software development have improved by orders of magnitude in the last two decades. It’s crazy to believe we don’t have a long way to go to figure out the best ways to deliver software – especially at the enterprise level. Tom Peters quotes General Eric Shinseki, Chief of Staff, U.S. Army: “If you don’t like change, you’re going to like irrelevance even less.” I choose to join the community that is looking for ways to honor the fundamental premise of the Agile Manifesto – we are uncovering better ways of developing software by doing it and helping others do it.

Shorten and Reduce Variability in Lead Times using Kanban

Posted on June 30th, 2010 by Dennis Stevens  |  1 Comment

When I start talking about speeding up work and reducing variability in lead time to a group of software developers, the initial reaction is often, “We can’t do that. This is knowledge work. You can’t reduce variability in the work itself. The hard stuff just takes longer and the easy stuff is shorter.” This is often followed by, “You don’t want to take the creativity out of our work,” or “You can’t treat us like machines.” So, it is important to point out up front that I agree with all of these statements. Why, then, would you want to shorten and reduce variability in lead times, and how can you do so without inhibiting the creativity required to do the work?

Why Reduce Variability and Lead Times?

Lead time is the average time it takes for one work item to go through the entire process – from start to finish – including time waiting and time consumed by rework. In most cases, unless you are intentionally managing your lead time, you will have lead times that are highly variable and probably excessively long. Why would you want to reduce lead time duration or variability?

Increase Predictability

Reducing variability in lead time will allow you to consistently know, and commit to, when something can be delivered. Delivering consistently will help to increase the trust within the system. One of the very beneficial outcomes of increasing trust in the system is that it leads to a dramatic reduction in expediting. Once the system becomes predictable, the business may be able to make new offers to customers based on the high level of predictability.

Faster Feedback

Reducing lead time duration results in faster feedback. Faster feedback can result in increased quality. There are a number of reasons for this. Less work is done based on work items that require rework. Shorter cycles result in better fit, since feedback can be gathered and applied frequently. Also, faster feedback means that the team can minimize the work required to meet the objectives.

Flexibility and Responsiveness

Shorter lead times, and the trust that comes from predictability, increase options for flexibility. You can delay some decisions until very late – deciding on the details of the solution just before you pull the work into the system. Expediting now means putting an item into the queue as the next item. Also, you can make new promises to customers based on this increased level of flexibility and responsiveness.

Wimbledon

Okay, so there are some benefits to reducing variability and duration in lead time. But how can you reduce lead time duration and variability without inhibiting the creativity required to do the work?

Wimbledon provides some insight into this. At Wimbledon, the games take as long as they take. The number of games played is determined up front – they have to play all the games. There are Men’s and Women’s Singles, Men’s and Women’s Doubles, and Mixed Doubles, and many players participate in multiple events. Games can’t be played in darkness (except on Center Court). Games can’t be played in the rain. Players can’t overlap doubles with mixed doubles or with singles play.

Wimbledon has been played 142 times and the finals have been delivered late twice. That’s pretty amazing given the wide level of variation in the length of games and the other constraints that must be addressed.

How does Wimbledon accomplish this? They have policies that impact the timing of the games – for the most part without impacting the way the game is played.  For example:

  • A tie-breaker in the first four sets. The tie-breaker is open-ended, as it requires a player to win by two points – but only the final set requires winning by two games.
  • Games can start earlier on a day if games are behind.
  • The gap between games can be shortened to get in additional play each day.
  • The tournament director may have players warm up on other courts to bring games closer together.
  • Additional courts can be opened for play as long as it doesn’t create a conflict across events.
  • They minimize the impact of rain by covering courts during rain delays.
  • They have added lights and a roof over Center Court to allow games to run longer and during rain.

Combined, these policies allow the games to take as long as they take – while allowing the tournament to deliver a fixed number of games in a fixed time.

Reducing Variability and Lead Time in Software Development

Just as software development isn’t manufacturing, it isn’t tennis either. What the example shows is that lead time duration and variability can be reduced without restricting the way the game is played and with minimal impact on the rules of the game. This concept can be translated to software development.

Reduce Waiting

How much time is a work item actually actively being worked on? If you pay careful attention to the flow of work through your system, you will likely find that a typical work item spends more time waiting to be worked on than being worked on. It is not unusual to find 5-10x wait time to work time. With wait time being a large portion of lead time, reducing wait time will have a significant impact on reducing lead time. Limiting WIP and pulling work are key techniques for reducing waiting.

Rework: Or Failure Load

Another big cost on lead time – and typically a huge contributor to variability – is rework. Rework is the result of a defect that unintentionally escapes from one work queue and is identified in another. The result is that work moves backwards through the system – increasing the lead time not just of the current work item, but of other work items. Leveraging techniques that minimize or eliminate rework is important to reducing variability and duration of lead time. Test-driven development, automated test frameworks, continuous integration, and coding standards are methods of reducing or eliminating rework. Investing in reducing rework reduces lead time duration and variability.

Making Work Ready

One cause of variability and extended lead times is when work is pulled that isn’t ready to be worked on. This can happen when dependent work items aren’t prepared, required external resources aren’t standing by, or the outcome (not the how) is not well understood. Making work ready requires understanding and aligning dependent work items. Minimizing dependencies during design helps reduce their negative impact. Using scheduling methods like Kanban to schedule external (not instantly available) performers helps coordinate them so work can continue to flow. Feature injection, where outcomes are defined during analysis and presented as testable examples, is an excellent method of understanding and clearly communicating the expected outcome. Extra effort put into making work ready often results in reduced lead time.

Relatively Small and Similar Size Work

Large work items – or high variability in the size and complexity of work items – will result in higher variability and longer durations. Breaking work into relatively small and similarly sized pieces is a good method for reducing variability and duration of lead time. Breaking solutions down into small work items can also result in improved design, higher testability, and more flexibility in the solution. This doesn’t mean that work should be broken down arbitrarily. Work should be broken down to the smallest level that is reasonable and no smaller.

Swarming

Swarming is when team members work together on work items to move them forward faster. Sometimes this means increasing the number of developers doing development – often it involves having generalists work in areas outside of their specialization. You will want to have performers swarm on work items that are at risk of being late against their SLA.

Tracking The Data

To track lead time, track the entry day and exit day from the Kanban board. Do this by having the performer who pulls the card into the first queue write the date it was pulled. When the card is moved to the last queue, write that date on the card. Lead time is the difference between the two dates.

To track waiting time, put a blue dot on the card for each day it sits waiting to be pulled into any queue. Count the dots when you pull the card off the board to see how many of the lead time days were waiting days.

To track defect days, create a defect card when a defect in a piece of work is identified downstream from its source. Move the defect card back to the source location and move it forward through the system until it catches up with the original card. Put the start date on the defect card when it is created and the end date on it when it catches up with the original card. Defect days is the difference between the start date and the end date of the defect card. Track the cumulative defect days on the work item card.

Track blocked days when the card is blocked. This can be done by putting a red dot on the card for each day it is blocked.
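Here is a minimal sketch of this card bookkeeping in code. The field names are hypothetical, and the blue and red dots are modeled as simple per-day counts:

```python
# Model a Kanban card with the dates and dots described above.
from dataclasses import dataclass
from datetime import date

@dataclass
class Card:
    start: date            # written when pulled into the first queue
    finish: date           # written when moved to the last queue
    waiting_days: int = 0  # one blue dot per day spent waiting
    blocked_days: int = 0  # one red dot per day spent blocked
    defect_days: int = 0   # cumulative days from any defect cards

    @property
    def lead_time(self) -> int:
        return (self.finish - self.start).days

card = Card(start=date(2010, 6, 1), finish=date(2010, 6, 14),
            waiting_days=7, defect_days=3)
print(card.lead_time)  # 13 days of lead time, 7 of which were spent waiting
```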

Reducing Lead Time Duration and Variability in Kanban

So, by tracking data related to Lead Time, waiting time, defect time, and blocked time, the team can identify where to act to reduce the lead time duration and variability. Then, identify and leverage strategies like reducing waiting, reducing rework, making work ready, defining small size work, and swarming, to improve lead time. Tracking causes of defects and blockages can help make decisions to focus these strategies appropriately. Reducing lead time duration and variability will result in increased predictability, faster feedback, improved flexibility and responsiveness.

Kanban: What are Classes of Service and Why Should You Care?

Posted on June 14th, 2010 by Dennis Stevens  |  6 Comments

My last post, Kanban and When Will This Be Done? was about using Cycle Time to make reliable commitments to the customers of your organization. I talked about prioritizing and swarming as methods of helping meet your Cycle Time commitments. In most organizations, it turns out that not everything has the same level of value or time pressure. Some things need to move through faster than others either because they are needed to meet customer promises or because earlier delivery means more income for the business. Some have to be completed before a specific date due to market or compliance or integration reasons. Still others can be completed in the normal course of the flow of work.

So, we can’t treat everything in a homogeneous fashion. But handling the planning and scheduling to meet the differing levels of time pressure can consume a lot of time and energy. The key is to create a simple and transparent approach for the team to self-organize around the work to meet the needs of the business. This approach should address the inherent variability in the work. The way that we address these differing levels of time pressure is through Class of Service. Class of Service is a mechanism for typing work items and is used to inform team decisions about prioritizing and swarming.

Class of Service Policies

Each Class of Service will have its own policy to reflect prioritization based on the different risks and business value. Classes of Service and knowledge of the policies associated with them empower the team to self-organize the flow of work to optimize business value. Used effectively, this will result in improved customer satisfaction, elevated employee engagement, and increased trust. A Class of Service policy will include the following for the class of service:

  • how to make the class of service visible,
  • the impact this class of service has on WIP limits,
  • prioritization and swarming agreement for the class of service,
  • estimation requirements for the class of service.

You can use card color or a row to visualize the class of service. The row is particularly useful when you are reserving some performer capacity for the class of service. The WIP limit policy may allow a work item to exceed the column WIP limit, may cap the number of items of a particular Class of Service, or may require a minimum of a Class of Service. Prioritization can include FIFO, Pull Next, or Interrupt Current WIP. Estimation can range from no estimation to a detailed estimate.

Sample Classes of Service

These Class of Service examples are drawn from David Anderson’s book, Kanban: Successful Evolutionary Change for Your Technology Business, and from personal experience.

Expedite: The Expedite Class of Service can be very disruptive to flow. So I use a lane to protect some capacity against these disruptions and then limit the number of expedites we can have on the board at a time. This allows us to expedite while minimizing the impact on the other Classes of Service. In his Kanban book, David suggests swarming on blockers only. I believe you can swarm on delayed items and expedites, especially if you have left some performer capacity available when you set your WIP limits. This helps maintain the flow through your system.

Many organizations experience a large amount of expediting. This is a sign of low trust in the organization’s ability to deliver reliably. So the emotional response is to expedite everything. In my experience, demonstrating an ability to make and keep commitments for the Standard Class of Service will establish trust that overcomes this emotional response. The pressure to expedite decreases dramatically when you are able to make and keep commitments to your organization.

Sample Expedite Class of Service policy:

  • Expedite requests will be shown with white colored cards.
  • Expedite requests have their own row on the Kanban board. The Expedite row has a WIP limit of one.  A limited amount of performer capacity is reserved for Expedites.
  • The column WIP limit can be exceeded to accommodate the Expedite card.
  • Expedite requests will be pulled immediately by a qualified resource. Other work will be put on-hold to process the expedite request.
  • Expedite requests will be T-Shirt sized to set Due Date expectation.

Fixed Delivery Date: The Fixed Delivery Date Class of Service is used when something has to be delivered on or before a specific date. To accomplish this, you will pull the card from the first queue sufficiently in advance of when it needs to be delivered. In the previous post, we suggested you set the policy at the Mean + 1 Standard Deviation. This will result in an 80% likelihood of success, which is likely to be an unsatisfactory success rate. So you will want to do two things. First, increase the policy for Fixed Delivery Date Class of Service items from the Mean + 1 Standard Deviation to the Mean + 3 Standard Deviations. This will increase the likelihood to above 99%. Secondly, you should do some more detailed analysis when making this work ready, to mitigate the risk of delivering late. If the item is large, it may be broken up into smaller items, and each smaller item should be assessed independently to see whether it qualifies as a fixed delivery date item.
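As a sketch of the pull-by arithmetic, reusing the illustrative cycle time numbers from my last post (a mean of 8.6 days and a standard deviation of 2 days – the dates here are invented):

```python
# Pull a Fixed Delivery Date card at least mean + 3 standard deviations
# of cycle time ahead of its due date.
from datetime import date, timedelta

mean_days, std_days = 8.6, 2.0
pull_buffer = mean_days + 3 * std_days   # ~99% of items finish within this

due_date = date(2010, 7, 30)             # the external due date constraint
pull_by = due_date - timedelta(days=round(pull_buffer))
print(pull_by)  # latest date to pull the card and still expect on-time delivery
```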

There is a significant opportunity to misuse this Class of Service when dealing with an organization that is comfortable with a traditional plan driven approach to managing a project. There is a tendency to want to use the Fixed Delivery Date in the same fashion as the tasks on a Gantt Chart. The problem with this is that it will spawn the same destructive behavior that we discussed in my last post (losses accumulate, gains are lost). So only use the Fixed Delivery Date when you have an external due date constraint. Remember the concept of Regression to the Mean and use the Standard Class (discussed next) and flow to deliver to a plan.

Sample Fixed Delivery Date Class of Service Policy:

  • Fixed delivery date items will use purple colored cards.
  • The required delivery date will be displayed on the bottom right hand corner of the card.
  • Fixed delivery date items will be given priority of selection for the input queue based on the historical mean + 3 standard deviations of the cycle time for tasks of the same size.
  • Fixed delivery date items will be pulled in preference to other less risky items. In this example, they will be pulled in preference to everything except expedite requests.
  • Unlike expedite requests, fixed delivery date items will not involve putting other work aside or exceeding the kanban limit.
  • If a fixed delivery date item gets behind, and release on the desired date becomes less likely, it may be promoted in class of service to an expedite request.

Standard: The Standard Class of Service is used for the typical work that goes through the system. The mechanisms of flow and focus will move the work through the system as fast as it can be done without disrupting the system. Large Standard Class of Service items may be broken down into smaller items, with each item queued and flowed separately. Standard Class of Service work should be pulled into each queue on a First In, First Out basis. Estimating Standard Class of Service items beyond T-shirt sizing doesn’t add any value (although it could be argued that even T-shirt sizing adds no value) – if it is work you are going to do, then you should go ahead and do it.

Sample Standard Class of Service Policy:

  • Standard class items will use yellow colored cards.
  • Standard class items will be prioritized into the input queue based on their cost of delay or business value.
  • Standard class items will use a First In, First Out (FIFO) queuing approach to prioritize their pull through the system.
  • No estimation will be performed to determine a level of effort.

Intangible: The Intangible Class of Service is used for work that goes through the system that doesn’t directly deliver value to the customer. These are items like production bug fixes, retiring technical debt, usability improvements, or branding and design changes. This is work that needs to be done – but for which it is hard to show an ROI. It is a good idea to have some Intangible work going through the system. It is better to set this work aside when an Expedite comes through than to work with an associated due date or cycle time expectation. I like to set a policy that at least one Intangible item is moving through the system. Again, Intangible Class of Service items may be broken down into smaller items with each item queued and flowed separately.

Sample Intangible Class of Service Policy:

  • Intangible class items will use green colored cards.
  • Intangible class items will be prioritized into the input queue based on their intangible business value.
  • Intangible class items will be pulled through the system regardless of their entry date, so long as a higher class item is not available as a preference.
  • No estimation will be performed to determine a level of effort or flow time.

Why Do We Care About Classes of Service?

Not all work has the same value as it moves through the system. Additionally, there are varying levels of time pressure on the items moving through the system. Planning and coordinating all the possible impacts would be very difficult. Classes of Service create a simple and transparent approach for the team to self-organize around the work to meet the needs of the business. As each performer looks to pull their next item into their queue, they scan the candidate items. In the sample above, they would first pull an Expedite if one existed. If not, and there was a Fixed Delivery Date item that needed to be started soon to meet the fixed date, they would pull that. Otherwise, they could pull a Standard Class of Service item or, if there aren’t any Intangible Class of Service items active on the board, pull an Intangible onto the board. A minimal sketch of this pull decision follows.
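Here is that pull decision in code. The class names, the card representation, and the due_soon flag are simplifications invented for illustration:

```python
# Pick the next card to pull, per the sample Class of Service policies.
CLASS_PRIORITY = ["expedite", "fixed_date", "standard", "intangible"]

def next_card(candidates, active_intangibles):
    """candidates is in FIFO order; active_intangibles counts Intangible
    items already moving through the system."""
    for cls in CLASS_PRIORITY:
        for card in candidates:
            if card["class"] != cls:
                continue
            # Fixed date items are pulled only once their pull-by date is near.
            if cls == "fixed_date" and not card.get("due_soon", False):
                continue
            # Pull an Intangible only if none is already on the board.
            if cls == "intangible" and active_intangibles > 0:
                continue
            return card
    return None

queue = [{"class": "standard", "name": "S1"},
         {"class": "fixed_date", "name": "F1", "due_soon": True}]
print(next_card(queue, active_intangibles=1))  # F1 wins over the standard item
```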

Using Classes of Service to prioritize pulling and swarming results in an easy-to-sustain system that balances risk and optimizes value. It inspires flow behavior, empowers the team, increases employee engagement and optimizes the value produced by the system. Classes of Service also create transparency into the promises made to customers and increase the reliability of promises being kept, resulting in higher trust across the organization. So, we care about Classes of Service because, for a minimal investment, they increase value, balance risk, improve reliability, increase employee engagement and increase trust within the organization.

Kanban and When Will This Be Done?

Posted on June 7th, 2010 by Dennis Stevens  |  14 Comments

A question that comes up a lot in Kanban is, “How can I know when this feature will be done?” First off, it is important to understand what makes estimating unpredictable – and then what steps Kanban presents to address this. Estimates are unreliable because the inherent variability of the work is compounded by the lack of flow in most teams and by the destructive behavior caused by fixed point estimates. Kanban, through a few simple techniques, a little bit of math, and the decoupling of planning and scheduling, will give you the ability to establish accurate estimates with minimal effort.

Cycle Time

The key metric we will use for estimating is Cycle Time. Cycle Time, in most Kanban and Lean contexts, is the time between when the team starts working on an item and when the item is delivered. Cycle Time is what is interesting for managing the team (whether managed by management or self-managed). This metric becomes a key management metric to reduce variability in the system, make reliable commitments, and provide insight into whether improvement efforts are delivering expected results. We are going to talk about using Cycle Time to make reliable Due Date Performance commitments.

Cycle Time is easy to capture. When a card is pulled into the first work queue – enter the date on the card next to Start. When the card is moved to the last queue, enter the date on the card next to Finish. Cycle time is Finish Date – Start Date. At the end of the week, when you pull cards off the board, capture the cycle time for each card into a spreadsheet or whatever tool you are using for tracking.

Distribution Curves

What you will most likely find after you have captured a little data is that cycle times fall in a distribution that is skewed to the right. This shows the variability in the work that you perform – there is some absolute limit on the shortest time the work can take, but there are a lot of ways it can take longer. Take the Cycle Time data and find the Mean and the Standard Deviation. Then set the expectation to the Mean + 1 Standard Deviation.

In this chart, the work items are taking from 5 to 15 days. The Mean is 8.6 days and the Standard Deviation is 2 days. You can set a Due Date expectation of 11 days and be sure that you will hit this date about 80% of the time without changing anything you do. So that’s the policy: we will deliver work within 11 days with an 80% due date performance expectation. Over time, we hope to be able to improve both the 80% and the 11 days.
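The arithmetic is trivial to automate. Here is a minimal sketch using invented sample data – on a right-skewed distribution like the one described above, roughly 80% of items finish within the Mean + 1 Standard Deviation commitment:

```python
# Compute the due date commitment from observed cycle times.
import statistics

cycle_times = [6, 7, 7, 8, 8, 9, 9, 10, 11, 13]  # days, read off the cards
mean = statistics.mean(cycle_times)
std = statistics.stdev(cycle_times)               # sample standard deviation
commitment = mean + std

print(f"mean={mean:.1f}d, std={std:.1f}d -> commit to {commitment:.0f} days")
# With this sample: mean=8.8d, std=2.1d -> commit to 11 days
```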

There are a few nice things about this Cycle Time number. It takes a lot of complexity into account. Your Cycle Time calculation will have already incorporated the impact of rework while in the system, the typical impact of disruption from expediting, and the impact of waiting in the ready/done queues.

What about the 20% that will be late? There are a couple of things you can do. First, you will begin to be able to identify early which stories are going to be late. These will be the ones that are more complex or end up with lots of rework. You can address these with capacity, prioritizing, and/or communication to the customer. Addressing it with capacity means swarming on the task to bring it in closer to the due date. Addressing it with prioritizing might mean moving the card higher in the stack than one that is likely to be done early. The cost of swarming and prioritizing is that another card might be delayed – but if you are thoughtful about prioritizing and swarming, you won’t cause delay outside of the Due Date promise. For some cards, you will communicate to the customer that the card will exceed the due date performance expectation – while not optimal, this is within your delivery commitment as long as it doesn’t happen more than 20% of the time.

T-Shirt Sizing

Many teams are dealing with work of various sizes. If you do a frequency analysis of your data, you will probably see clusters. In the picture below we have three clusters. Rather than spending significant time in complex assessments to support estimating, the team can rapidly assign tasks to these clusters. We call this T-Shirt size estimating. Cluster like work into XS, S, M, L, and XL. You may decide that you don’t accept XL tasks without decomposition, or require breaking them down. Then determine estimates for each size based on the Mean + 1 Standard Deviation for each of the clusters. You can define some simple process for assigning work to one of the T-Shirt sizes.

This won’t give you perfect estimates. Perfect estimates are almost impossible to determine. Additionally, Cycle Time variability is probably impacted more by variation in the system, like waiting and rework, than by anything else. It is impossible to determine the impact of the system by more deeply analyzing the task. With T-Shirt sizing and Cycle Time, you can get results in minutes as accurate as you could get in months. While not perfectly accurate, it’s as meaningful as anything you can do.
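A minimal sketch of per-size commitments from clustered history – the sizes and cycle times here are invented for illustration:

```python
# Bucket completed items by T-Shirt size, then set each size's
# expectation at Mean + 1 Standard Deviation of its cluster.
import statistics
from collections import defaultdict

history = [("S", 3), ("S", 4), ("S", 4), ("S", 5),
           ("M", 8), ("M", 9), ("M", 11),
           ("L", 15), ("L", 18), ("L", 22)]  # (size, cycle time in days)

by_size = defaultdict(list)
for size, days in history:
    by_size[size].append(days)

for size, days in by_size.items():
    expectation = statistics.mean(days) + statistics.stdev(days)
    print(f"{size}: commit to {expectation:.0f} days")
```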

Schedule a Release

One of the interesting concepts in statistics is Regression Toward the Mean. What this means is that when you bring a lot of estimates together, you are likely to have a final result where the average over time is close to the historical average. One example of this is driving to work. If you were asked to plan exactly how long it would take to get to every stoplight on the way to work, this would be impossible. But you have a pretty good idea what time you need to leave home each day to arrive at work at the right time.

Because of regression toward the mean and the actions we can take based on the visibility Kanban provides us, using T-Shirt sizes and cycle time to schedule a release will result in a pretty accurate estimate – and the cumulative estimate will be more precise than any individual estimate.

Decouple Planning and Scheduling

From a human nature standpoint, data can be a dangerous thing. The way we often couple planning with scheduling of resources is an example of this. With the way we have historically run projects, we put all the estimates together and then hold the team accountable to each individual estimate. This is unreasonable because we know that the estimate represents a range of possible outcomes. Five problems arise when we tightly couple estimating and scheduling of the work.

1. Bludgeoning: If we produce point estimates when outcomes are probabilistic – and then hold people accountable to hitting each individual estimate – we are going to get value destroying behavior. Bludgeoning people when they don’t hit 80% estimates means we are going to get 95% estimates. In most cases, the 95% estimate is about twice the duration of the likely outcome. This puts a lot of padding into the system.

2. Sandbagging: Another destructive management behavior when we couple planning and scheduling is accusing the team of sandbagging. This happens when the team delivers something in half the time of the estimate. Management will often want to go back and cut the “excessive padding” out of the rest of the estimates. First off, management drove the sandbagging behavior. Secondly, we know statistically that most work should be delivered earlier than an 80% or 95% estimate. The team learns not to deliver early because management charges them with sandbagging. So teams don’t deliver early, and the project never accrues the early-delivery gains that, on average, it needs.

3. Parkinson’s Law: Typical management techniques result in the team delaying delivery until the last moment. One outcome is that work expands so as to fill the time available for its completion. This is problematic because it leads to gold-plating of work that increases downstream efforts. It also ensures that a task that could be delivered early doesn’t get delivered early.

4. Schoolboy Syndrome: The other behavior spawned by this is that the team doesn’t start as soon as possible, because they wait until they think they need to start in order to deliver. This often means the team will start in time to deliver on the likely outcome – but deliver late when the task turns out to be more complicated or runs into a delay.

5. Murphy’s Law: If something can go wrong it will. We know that some work will take much longer than other work. This is Murphy’s Law. The behavior that arises from coupling planning to scheduling combined with Murphy’s Law ensures that projects will be delivered late – even against a 95% estimate. It is said that the losses accumulate and the gains are lost.

Kanban Helps Reliably Determine When Work Will Be Done

We can’t precisely predict how long work will take. Estimates represent a wide range of possible outcomes. In traditional project management, we ask for unreasonable estimates and then manage the work in such a way that we guarantee the project will be delivered late. Additionally, late delivery of work is as often driven by behavior that inhibits flow as by the uncertainty around the work effort.

In Kanban we use Cycle Time data derived from the performance of the system to plan when work will be done. Then we decouple this planning effort from work scheduling – using the Kanban board to focus the team on flow. Kanban provides the team opportunities to make capacity and prioritization decisions that help address the innate variability of the work being performed, so they can deliver within the policy. Cycle Time estimates, combined with limited WIP and a focus on flow, help Kanban deliver predictable outcomes.

Deming and the System of Profound Knowledge

Posted on May 10th, 2010 by Dennis Stevens  |  5 Comments

Knowledge work, such as software development, is becoming a critical element of the success of every business. It follows that getting better at knowledge work is important to improving the performance of those businesses. There is a powerful model that has been broadly applied to achieve success in knowledge work. I believe it gets overlooked because of its association with manufacturing and TQM.

A Little History

In 1960, the Prime Minister of Japan awarded Dr. Deming Japan’s Order of the Sacred Treasure, Second Class citing Deming’s contributions to Japan’s industrial rebirth and its worldwide success. Deming is widely credited with improving production in the United States during the Cold War, although he is best known for his work in Japan.

Deming did not teach statistical process control. Upon arriving in Japan to present to the Japanese Union of Scientists and Engineers (JUSE) – his first talk was July 10, 1950 – Deming introduced an eight-day course with these words: “The Control Chart is No Substitute for the Brain”. He had assistants teach the statistical control of quality portion of the course. Deming taught the theory of the system and cooperation (The World of W. Edwards Deming by Cecelia Kilian).

He taught 29 seminars between 1950 and 1951, reaching essentially every top manager in Japan. He taught his four-day course to over 200,000 American executives between 1980 and 1991. Again, he didn’t teach statistics – he taught how to derive knowledge about the system from data and how management should act on the organization as a system. He didn’t teach manufacturing at all – his goal was to transform the management of organizations. He believed mismanagement was solely responsible for the dysfunction in organizations.

TQM is not The System of Profound Knowledge

Deming is often associated with TQM. And while TQM claims Deming, Deming did not claim TQM. From Deming’s biography:

It is very common, and sadly, very wrong, to hear comments on Deming’s work that sound like “It’s SPC,” or “It’s about the 14 points.” Others think about it as team and teamwork. Some think of it as some sort of humanitarian stuff. The one that upset Deming the most was “It’s about TQM,” referring to Total Quality Management. He did not want his name to be associated with TQM, as aware as he was of the risk of “guilt by association.”

The Japanese applied his teaching to manufacturing. He taught the same material in the US in the 1980s. A critique of the US adoption of these techniques is that our objective was to replicate Japanese manufacturing success, so all we heard was statistics and process control. We missed the points about how understanding creates knowledge and how to manage human nature – and therefore missed the transformative nature of his message. TQM embodies these shortcomings. TQM fails because it is time consuming, generates a level of detail that is neither stable nor valid, is not organizationally sustainable, and is very expensive to implement. The focus of TQM is data; Deming’s focus was effective management.

Deming’s System of Profound Knowledge

Deming believed that generating a shared understanding of the system, taking actions that optimize economic outcomes, and aligning the beliefs of the people within the system were keys to a sustainable ongoing improvement effort. Deming taught this as the System of Profound Knowledge (SPK). There are four points to SPK:

1. Appreciation of the system: For Deming, the organization is a system – a network of interdependent components that work together to try to accomplish the aim of the system. The system includes management, staff, suppliers, and customers. Without an aim there is no system. Understanding the system and the aim of the system helps us rise above complexity.

2. Understand how variation impacts the system: Understand the system’s intrinsic capacity to supply the required output in the face of variation. Deming understood that variation is a phenomenon common to all human activity – particularly knowledge work. Understanding variation gives us the conceptual basis for correctly managing performance and improving capacity (a minimal sketch of this idea follows the list).

3. Theory of Knowledge: A theory of knowledge explains how a combination of methods, people, and environment produces a foreseeable change. Knowledge isn’t data. Knowledge is the ability to interpret and act on the system – by management, by the staff within the system, by suppliers, and by the customers. Knowledge of the system arises from within the system itself. Deming taught management how to extract knowledge – not how to perform statistical process control.

4. Psychology: Psychology helps to understand people and their behavior and to appreciate their natural inclination toward success, learning and innovation. People respond based on the norms and rewards in their environment. When we make organizations hierarchical people act to defend their space in the hierarchy. When we create a system wide understanding people act to improve the system’s performance.
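As one minimal sketch of point 2 – separating common-cause from special-cause variation with a simple process behavior chart. The lead-time data is an illustrative assumption, and the chart is a common tool for this thinking, not something Deming’s teaching here prescribes:

    # Lead times, in days, for a series of work items (illustrative data).
    lead_times = [5, 7, 6, 8, 5, 6, 21, 7, 6, 5, 8, 7]

    mean = sum(lead_times) / len(lead_times)
    moving_ranges = [abs(b - a) for a, b in zip(lead_times, lead_times[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)

    # XmR-style limits: mean +/- 2.66 * average moving range.
    upper = mean + 2.66 * avg_mr
    lower = max(0.0, mean - 2.66 * avg_mr)

    for i, x in enumerate(lead_times):
        if x > upper or x < lower:
            # Special cause: investigate the event behind this point.
            print(f"item {i}: {x} days is outside ({lower:.1f}, {upper:.1f})")
    # Points inside the limits are common-cause variation: to improve them,
    # management must improve the system itself, not blame the worker.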

Impacts of SPK Thinking

Deming felt that managing to cut costs almost always decreases performance and increases costs – while managing the system to allow quality to emerge will reduce costs and increase performance. This is because a focus on cutting costs locally creates a toxic and competitive environment. Deming believed that workers have nearly unlimited potential if placed in an environment that adequately supports, educates, and nurtures their sense of pride and responsibility. He stated that the majority of a worker’s effectiveness is determined by his environment and that the malfunctioning of an organization is almost entirely due to the misunderstanding of the system by management. The System of Profound Knowledge dramatically shapes the way we think about work and workers.

Is Agile More than Iterative and Incremental Development?

Posted on May 6th, 2010 by Dennis Stevens  |  No Comments »

Incremental and Iterative Development

In my last few posts (here and here), I have been talking about Iterative and Incremental Development (IID). This is where the solution under development is built in small chunks or increments. These chunks are integrated and reviewed for feedback on fit with the customer need and quality of the product. This feedback informs future development as we iterate through the solution. Development teams have been applying IID for over 50 years. IID calls for progressive elaboration in support of iterative and incremental delivery in the context of delivering value. It helps reduce product risk by facilitating change associated with learning throughout the project. In this way IID delivers on two values of the Agile Manifesto – working software and responding to change.

What Makes IID Agile

The Agile Manifesto also identifies two more values – individuals and interactions and customer collaboration. These are social aspects of software development. And while good teams have probably been doing this as long as IID has been around, Agile explicitly calls these out. Now the team is continually getting feedback on the viability of the product and capabilities of the development team. So in addition to IID, Agile drives transparency in the project status, viability of the product, and performance of the team. It also compels collaboration and learning about both the product and the delivery capabilities of the team.

But Wait, There’s More

In addition to addressing the four values of the Agile Manifesto, the team has to be developing with effective techniques. This means using automated testing, continuous integration, and effective design and refactoring. In my CSM class, Jeff Sutherland stated that he had seen no hyper-productive development teams that were not applying these Agile engineering practices. These practices enable the team to reliably deliver on the four values from the Agile Manifesto.
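As a minimal sketch of the automated-testing practice – the function under test is a hypothetical example, not from any of the teams discussed:

    import unittest

    def apply_discount(price, pct):
        # Hypothetical function under test.
        if not 0 <= pct <= 100:
            raise ValueError("pct must be between 0 and 100")
        return round(price * (1 - pct / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 15), 85.0)

        def test_rejects_invalid_percentage(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()  # fast, repeatable checks make refactoring and CI safe

Tests like these are what make the other practices work: refactoring is only safe, and continuous integration only meaningful, when every change is verified automatically.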

Agile is more than Iterative and Incremental Development

So, Agile is more than IID. In addition to the ongoing product feedback from IID, Agile calls for ongoing feedback on the capabilities of the team, the viability of the product, and the knowledge developed by team members. And that is what is “new” in the last 15 years. All five areas of focus are explicitly called out. Teams that are moving to Agile or struggling with Agile adoption should look at each of these areas of focus:

  • Progressive elaboration in support of iterative and incremental delivery in the context of delivering value
  • Reducing risk by facilitating the change that comes with learning throughout the project
  • Driving transparency into project status, product viability, and team performance
  • Compelling collaboration and learning about both the product and the delivery capabilities of the team
  • Effective engineering practices that enable reliable delivery

If these areas of focus are not in place, or they don’t work together to continually improve the performance of the team, the team will probably not achieve the benefits of Agile.

Iterations versus Continuous Flow

Posted on May 3rd, 2010 by Dennis Stevens  |  6 Comments »

Last week I wrote a post on Iterative and Incremental development. In that post I pointed out that you can do iterative and incremental development without having time-boxed iterations. Mike Cottmeyer asked in a comment to clarify when time-boxed iterations made the most sense and when continuous flow made the most sense. This post answers that question.

When Time-boxed Iterations Make the Most Sense

Time-boxed iterations serve several purposes that are not well served by continuous flow. They create an opportunity for the team to stop and get feedback from the customer on the product. They also help with chunking up work to support the planning process and validation of the solution’s design. There are times when this is really important.

So, time-boxed iterations work best:

1. When there is a lot of opportunity for feedback on current deliverables to significantly update subsequent work

2. When the planning horizon is stable enough that a subset of the backlog can be prioritized for at least the duration of the iteration

3. When the backlog items can fit in the iteration time-box

4. When dependent resources (for example, planning and testing) cannot be available to support continuous flow

For example, time-boxed iterations work best at the outset with a new product, a new technology, or a substantially new set of features.

When Continuous Flow Makes the Most Sense

Continuous flow has some advantages over time-boxed iterations. Over time, the level of resources available to do work is higher, and the backlog is much more flexible and adaptive to variation in the arrival of work. It fits well once the product architecture is stable and well defined and work arrives in bursts with varying levels of urgency.

So, continuous flow works best:

1. When the work is arriving continuously

2. When the planning window can be (or must be) very small

3. When the outcome is pretty clear and feedback is not as likely to impact subsequent work

4. When there are multiple work streams operating at different cadences

For example, continuous flow works well for defects and small enhancements in a stable product, or for technical support issues.

Factors that don’t favor either approach

Effective planning. Data can be gathered from both iterations (velocity) and continuous flow (lead time) to determine when a product can be delivered and to establish visibility into the current status of the project (see the sketch after these notes).

Retrospectives. Iterations create a natural break for retrospectives. There is no reason why retrospectives must be tied to iteration releases. Continuous flow teams can have retrospectives on either an event driven or calendar driven basis.

Motivation to get to done. Iterations create pressure to complete work in a specific time-box. Continuous flow creates pressure to meet lead-time commitments (policies).
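As a minimal sketch of the planning point above – forecasting delivery from either data source, with all numbers being illustrative assumptions:

    import math

    # Iterations: forecast from velocity (points completed per iteration).
    velocities = [18, 22, 19, 21, 20]            # last five iterations
    remaining_points = 130
    avg_velocity = sum(velocities) / len(velocities)
    print(f"~{math.ceil(remaining_points / avg_velocity)} iterations remaining")

    # Continuous flow: forecast from throughput (items completed per day).
    items_done_in_30_days = 24
    remaining_items = 40
    throughput = items_done_in_30_days / 30
    print(f"~{math.ceil(remaining_items / throughput)} days remaining")

Either way, the forecast is derived from observed delivery data rather than up-front estimates, so both models can give equally effective visibility.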

How they can work together

We are making a standard practice of combining Kanban at the strategic planning level, to support continuous flow of strategic initiatives across the organization, with Scrum (or Kanban) at the development team level. It doesn’t make sense to us to create time-boxed deliverables at the strategy-execution level; the time commitments on the tasks on the Kanban board generate plenty of pressure to move forward. At the development level – where there is a lot of opportunity for feedback to update subsequent work – we may use iterations to keep the team focused. For defects and minor enhancements we continue to use a continuous flow model, sometimes within the same team.

Iterations versus Continuous Flow?

I believe this is a hammer-versus-saw discussion. Each approach suits different scheduling and feedback needs. In many cases time-boxed iterations make a lot of sense; in other cases they would produce a less than optimal result. I believe that early in a product’s life, when there is a lot to learn and a lot of risk to manage, iterations probably make the most sense. Once the product has switched primarily to support, continuous flow makes the most sense. I hope this answers Mike’s question from my previous post.