Tuesday, September 2, 2008

Tests after story closure?

As a customer I tend to hang out in the VersionOne Google Groups user forums. Sometimes I find myself providing agile coaching feedback unrelated to the tool. Today was one of those days.

A peer was asking questions about running tests in the tool. But their team regularly closed stories in one sprint and had the testing team test that work in the next sprint. I had to share the following feedback:

"Flags are raised in my head whenever someone says that their testing team works on the prior iteration's work.

This is problematic for the following reasons:
- the development team has moved on to new work
- if the testing team finds something, they must "interrupt" the development team to go back and fix the bug
- this throws off velocity (or it is gamed and false)
...and this whole concept feels like a mini-waterfall, since it ignores that the product should be "potentially shippable" at the end of an iteration.

What we did on my last team was have a CI (continuous integration) build that ran every night, separate from the check-in test/build process. If everything passed, the test environment was automatically updated in the middle of the night. Thus, our test team was part of the sprint team and they tested stories as they were built. Stories were not closed unless Dev and Test agreed AND the customer accepted them. This removes the pitch-over and splash-back feeling.

It can also remove a lot of interruptions and debates that pit QA against Dev.

Having said all of that, you may not technically be able to do this today. Or your culture may not be ready to truly be agile. As a pragmatist, I say whatever you are doing is better than nothing. I just wanted to throw this out there as a potential goal to strive for."

I'm glad I said the last part... my peer responded with appreciation but noted that they were working through the technical hurdles to creating an automated build.


  1. Hi Kevin.
I share your opinion about the necessity of closely involving the QA team within the sprint. As we know, agile is a lot about feedback, and getting feedback about the actual quality of what you have delivered only in the next sprint is just much too late (and I agree with all the reasons you give). I sadly never worked in a team where development and QA worked as closely together as I think they should. But I had a project that was the "perfect negative" example. The QA team tested the version from the previous sprint (we had one-month sprints) and things went from bad to worse over time. When I left the project (partly out of sheer frustration and lack of hope the situation would ever improve), the QA team was hardly able to even integrate the results of the previous sprint within the following sprint, let alone have the errors they found corrected. This was a terrible experience, and seeing how management was unable/incapable/unwilling to react to the situation made it really depressing.

  2. I'm totally with you on this. IMHO the QA team always wants to do a good job and get only the best stuff out the door, but it's notoriously hard for them to keep up with an agile process. Usually this is because of reliance on manual testing processes.

    CI works when your tests are automated, but until you reach that point a lot of teams have to rely on manual testing (which is error-prone and costly).