8 Steps to improve quality and reduce delivery time

I am amazed at just how many remedial projects are around.
I thought the quality issues that plagued software delivery a decade ago were history and that we had cracked the problem. So where do we still go wrong?

1: If it's not important enough to fix, don't waste time on it

I recently helped a financial organisation with quality issues. On day 1 the test manager proudly described his fully automated regression suite. It had 100% coverage and ran every Wednesday night. A great place to start. On Thursday I asked for the results. "They will be ready by the weekend once we've analysed them." What does that mean? Surely a test either passed or failed? How many failures are there? "Well, about 400, but it's not that simple."

"We (by which he meant the entire test team) will spend the next two days cross-referencing these against previous failures. The business has agreed we don't need to fix those. There are 380 acceptable failures; we have to assess the other 20 to establish whether they need to be fixed." I looked at the first failure, which had been around for 18 months. "Tell me about this one," I invited.

"Well - we have verified that the software behaves correctly."
"So why do we test for it? Can't we just delete the test?"
"The spec was incorrect. I have to test it to report the difference between what was specified and what was delivered." Investigation showed that nobody else could even find the spec - and certainly nobody had any interest in comparing it to the completed product. In their minds the product had been successfully delivered months ago.

Ironically, the test team complained that it was critically short-staffed and needed more resource. We got rid of the bogus failures, which freed the team to do what it was actually there for.

2: Kill the defect tracking tool

Whenever I ask an organisation why they use such a tool, I am greeted by astonishment. How could I not understand such a basic necessity? Everyone knows these tools give us:

  • Quality metrics so we can manage quality - you can't manage what you can't measure
  • Accurate metrics of what our quality really is

Aha, the good old illusion of control. I'm afraid I beg to differ. Here is what they really give us:

  • Hidden and unmanaged queues. We report on progress and forecast completion based on the state of the product backlog. But it's a lie, because there is another queue of stuff that we need to come back to. Organisations try to absorb this queue, or wait until it is so big that they have to stop work in favour of a bug-fixing sprint.
  • Confirmation that quality is negotiable - We prioritise our defects and fix them based on that priority. Have you ever spotted a "Fix if time" category? If it's not important enough to fix, don't waste your time managing it.
  • We eliminate the need to speak to each other. A defect can bounce around the organisation for weeks without anyone actually doing anything about it. Meanwhile the daily report confirms that it's in progress (and under control).
  • We bypass the customer. Surely it is the customer who decides what quality means?
  • We game the results to improve the metrics. We downgrade priorities. We re-assign defects as deployment requests so they don't show up on the metrics - our software is fine, it's an environmental issue.

Guess what? The customer doesn't care about the metrics. He has a product or a feature that either meets his needs or doesn't - there is nothing in between.

3: Test the whole thing

Nonsense that I keep hearing:

  • TDD is about unit testing; we still need integration, acceptance and regression tests
  • We can't write tests for the user interface until it has been delivered
  • End to end testing can only happen after a release or milestone
  • We will ramp up our testing resource after release X
  • We will start performance testing after release X

Well, how about this? We would like to develop a web application. Before a single line of code is written, we could write the following test:
Given I have typed [the application url] in my web browser
When I navigate to the requested page
Then I am served a valid HTML page and the HTML does not contain the text "error" (ignoring case)

Now we have a test that proves that our web server is correctly configured and accessible (and that is all it takes to make the test pass). We also have our first integration test: when the application is hooked up to a database, the test will fail if the database server is misconfigured or unavailable. This is also our first deployment test. At a later stage we will extend this test to verify that we actually hit the correct landing page.
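To make this concrete, here is a minimal sketch of how that first test might be automated, assuming a Python suite run with pytest and the third-party requests library; the URL is a placeholder for [the application url], not a real address.

    # test_landing_page.py - minimal sketch of the test above (assumed stack: pytest + requests)
    import requests

    APP_URL = "http://localhost:8080/"  # placeholder for [the application url]

    def test_server_returns_valid_html_with_no_error_text():
        response = requests.get(APP_URL)
        # A correctly configured, reachable web server answers with an HTML page...
        assert response.status_code == 200
        assert "text/html" in response.headers.get("Content-Type", "")
        # ...and the page contains no error text (case-insensitive check)
        assert "error" not in response.text.lower()

Run it on every build and it goes red the moment the server is down, the deployment is broken or, later on, the application falls over because its database is unavailable.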

Do we need integration tests? If we are testing end to end, the answer is usually no. If you are using a web service along the way, your end-to-end test will fail if your application can't find the web service or if it doesn't behave as expected. The answer is only usually no because it still makes sense to write integration tests for third-party components where you have no control over the code.
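By way of illustration, an integration test for such a component might look like the sketch below; the endpoint and the "rates" field are hypothetical stand-ins for whatever external service you actually depend on.

    # test_third_party_service.py - hedged sketch of an integration test for an external component
    import requests

    THIRD_PARTY_URL = "https://api.example.com/rates/latest"  # hypothetical endpoint

    def test_service_is_reachable_and_honours_its_contract():
        response = requests.get(THIRD_PARTY_URL, timeout=5)
        assert response.status_code == 200
        # Assert only the part of the contract we rely on, not the values themselves
        assert "rates" in response.json()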
Does this mean we don't need unit tests? Absolutely not. Unit tests are of no relevance to your customers or users, but they do give an indication of code integrity and maintainability. If you are struggling to write unit tests (or have a tool generate them for you), it means that your code is not easily supportable or maintainable - fix it!
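As a rough illustration, when the code under test is a small, decoupled unit the test almost writes itself; calculate_discount below is a made-up example, not taken from any real product.

    # A decoupled unit is trivial to test - no mocks, no elaborate setup
    def calculate_discount(order_total, customer_is_vip):
        # Illustrative business rule: VIPs get 10% off orders over 100
        if customer_is_vip and order_total > 100:
            return round(order_total * 0.10, 2)
        return 0.0

    def test_vip_discount_applies_over_threshold():
        assert calculate_discount(200, customer_is_vip=True) == 20.0

    def test_no_discount_for_regular_customers():
        assert calculate_discount(200, customer_is_vip=False) == 0.0

If a test like this needs pages of setup and mocking in your codebase, that is the code telling you it is too tightly coupled.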

Testing from end to end means you need far fewer tests.

4: Never, ever walk past a failing test

Tests fail for one of three reasons:

  • The product doesn't behave as it should - Fix the product
  • The test is invalid - Delete the test
  • The test is incorrect - Fix the test

There is no point in TDD if you do not respond to failures. To avoid wasting time and resource, every failure must be responded to immediately. Nothing less than a 100% pass rate is ever acceptable. If you don't do this from day 1, you will soon become swamped. The financial company I mentioned earlier had no chance of dealing with every defect when it was receiving 400 false positives. Get rid of the noise!
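One way to make this stick - a simple sketch, assuming a pytest-based suite - is to gate every build on a completely clean run, so a single failure stops the pipeline until the product is fixed, the test is fixed or the test is deleted.

    # ci_gate.py - sketch of a build gate: anything short of a 100% pass rate blocks the pipeline
    import subprocess
    import sys

    result = subprocess.run(["pytest", "-q"])
    if result.returncode != 0:
        print("A test failed: fix the product, fix the test, or delete the test.")
        sys.exit(1)
    print("100% pass rate - the build may proceed.")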

Continue to Part 2