
Are you ready to ship?

Everyone knows you need to test software. No question. A good thing. But it often doesn’t happen. Or teams start with the best of intentions, which fall by the wayside in the face of pressure to deliver above all else. Because software is a fairly intangible thing, it’s hard to spot the cracks starting to appear. So let’s consider the importance of software testing from an Agile/Lean perspective. Maybe something here will resonate with you, your developers, your stakeholders.

Good software testing is a vital part of an Agile/Lean product development process. Without it, it becomes harder and harder to adhere to some of the core principles of these methodologies, causing the development process to become less efficient and, perhaps more importantly, less innovative over time.

Let’s begin with the Lean principle that work in progress (WIP) should be minimised. Rooted in manufacturing, the principle is equally applicable to software development, but less well understood. In manufacturing, WIP is visible. It’s a pile of components at one station in the manufacturing process, awaiting processing by that station. Big piles = Lots of WIP = Loss of efficiency in the system as a whole. Inventory is lying dormant, and that’s not a good thing in manufacturing: inventory lying around isn’t making money. In fact, it’s costing money.

But what is WIP in software development? It’s the features and bug fixes that a development team are delivering for a software product. The key word here is delivering: the work has been implemented but has not yet been delivered to end users. It has cost money to implement the features and fix the bugs, but they won’t deliver a return to the business until they are in the hands of end users. That’s not a very tangible thing, even to the developers implementing those features and writing those lines of code each day. Agile attempts to make WIP more visible.

The sprint backlog makes WIP visible by giving the team a visual representation of the work they have committed to complete in the next sprint (normally a couple of weeks). At the end of a sprint, there should be a potentially shippable product. These two elements together are key to minimising WIP. Teams generally manage to adopt the sprint backlog. It’s the potentially shippable product element where things often fall apart.

What does having a potentially shippable product at the end of a sprint mean in relation to the underlying Lean principle of reducing WIP? Let’s take a common example. A development team has committed to delivering a number of new features in their software product AmazingApp. They implement the features over a couple of weeks and deliver them into quality assurance (QA) and user acceptance testing (UAT) environments.

Does the team have a potentially shippable product at the end of the two weeks? Unlikely. Let’s consider why, by asking a series of questions.

  • Have QA testers and/or users confirmed that the new features meet their acceptance criteria?
  • Does the system still meet all existing non-functional requirements (generally performance and capacity related)?
  • Do all of the existing features still work the way they used to?

Unless the answer to all of these questions is a confident “Yes!”, the team does not have a potentially shippable product. There might still be WIP:

  • one of the new features doesn’t meet its acceptance criteria
  • the performance of the product is no longer acceptable
  • a couple of existing features no longer meet their acceptance criteria

So what does good software testing look like? What processes are required to deliver a potentially shippable product at the end of each sprint? I’ve summarised what I think are the key requirements below:

  1. Automate Deployment. Reduce the friction of promoting a change into production. Perhaps you have one or more non-production environments and a production environment. The faster you can promote a change, the sooner it can be made available to QA and UA testers (see the sketch after this list).
  2. Automate Regression Testing (Functional and Non-Functional). Promoting a change quickly to QA and UA testers is not enough. They (and the development team) need to be confident they’re testing a product that hasn’t been broken in existing areas, whether they be functional or non-functional.
  3. Create an Agile QA/UAT Process. QA and UA testing should happen iteratively in parallel with the development process, and in close collaboration with the development team. Implementing new features during a sprint and then handing them over to testers at the end means there is no potentially shippable product.
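
As an illustration of point 1, here’s a minimal sketch of a promotion script in Python. The environment names and the deploy.sh script it calls are hypothetical placeholders; in practice this logic usually lives in your CI/CD tool of choice rather than a hand-rolled script.

```python
#!/usr/bin/env python3
"""Minimal sketch of automated promotion between environments.

Assumes a hypothetical deploy.sh that knows how to deploy a build;
substitute your own deployment tooling.
"""
import subprocess
import sys

# Ordered list of environments a build passes through (assumed names).
ENVIRONMENTS = ["qa", "uat", "production"]


def promote(build_id: str, environment: str) -> None:
    """Deploy a specific, already-built artefact to the given environment."""
    if environment not in ENVIRONMENTS:
        raise ValueError(f"Unknown environment: {environment}")
    # Hypothetical deployment command; check=True makes a failed deploy fail loudly.
    subprocess.run(
        ["./deploy.sh", "--build", build_id, "--env", environment],
        check=True,
    )
    print(f"Build {build_id} promoted to {environment}")


if __name__ == "__main__":
    # Usage: promote.py <build-id> <environment>
    promote(sys.argv[1], sys.argv[2])
```

The point isn’t the script itself; it’s that promotion becomes a single, repeatable command rather than a manual checklist.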

Lack of automated regression testing reduces efficiency and innovation. The effort required by QA and UA testers to test new features is fairly constant through the life of a software product. But over time, the number of features and complexity of a product grows. If regression testing is a manual process, the effort required to ensure a growing list of features still work grows with every increment of the product.
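
By contrast, an automated regression check costs roughly the same to run on every increment. As a minimal sketch, here’s what a pytest-style regression test might look like for a hypothetical AmazingApp feature (calculate_order_total is stubbed in so the example is self-contained):

```python
# Minimal pytest sketch of automated regression tests.
# calculate_order_total is a hypothetical AmazingApp feature, stubbed
# here so the example is self-contained; in reality it would be imported.


def calculate_order_total(prices, discount=0.0):
    """Existing feature: sum item prices and apply a percentage discount."""
    return round(sum(prices) * (1 - discount), 2)


def test_total_without_discount():
    assert calculate_order_total([10.00, 5.50]) == 15.50


def test_total_with_discount():
    assert calculate_order_total([100.00], discount=0.1) == 90.00


def test_empty_order_costs_nothing():
    assert calculate_order_total([]) == 0.00
```

Run the whole suite with pytest on every change, and the growing list of existing features is re-verified automatically rather than by hand.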

Developers become fearful of making a change because there is no quick feedback to indicate whether their change broke something. Innovation suffers as a result. Developers focus more on not breaking the product than on delivering ground-breaking and value-adding new features. And the features that do get implemented take longer and longer to deliver into production as they have to go through an ever increasing regression test cycle.

Too much invisible work in progress is killing efficiency and innovation in software development, and one of the main causes is bad software testing.

Image source: By Jef Poskanzer (originally posted to Flickr as smash) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons