Monday, June 8, 2009

Automated Acceptance Tests...

There's an interesting article on InfoQ about automating acceptance tests. It suggests that this practice hasn't had the successful following that some of the other XP practices have had. The article appropriately ends with this conclusion:

Now, consider if the tests written by the QA department are written before the development begins. The information provided by these scenarios now occurs at the beginning of the iteration in a predictable fashion. Therefore the uncertainties are reduced, velocity becomes more stable (fewer random interruptions), and that means more predictability.

So, are automated acceptance tests just something the elite (or lucky) have been able to make work? Is there some unseen internal flaw that has caused its less-than-stellar adoption? Or is it just a difficult practice with proven benefits, one that every software development team should aspire to adopt?

Having attempted to do automated acceptance testing with my last team in a healthcare setting using FitNesse, I left the following comment:
... The value of automation is the repeated run.

Automated acceptance tests can have high value for heavy data exchange (as opposed to screen manipulation). For example, testing a signup or registration form for all of the required fields, field-level logic, etc. Run as regression tests, they ensure data integrity. They are also a good fit for testing REST interfaces or other publicly accessible APIs.
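To make the registration-form idea concrete, here is a minimal sketch (plain Python, not FitNesse itself) of data-level acceptance tests: a hypothetical `validate_registration` function exercised against a table of input rows and expected outcomes. The field names and error codes are invented for illustration.

```python
def validate_registration(form):
    """Return a list of error codes for a registration form dict."""
    errors = []
    # Required-field checks
    for field in ("username", "email", "password"):
        if not form.get(field):
            errors.append(f"{field}:required")
    # Field-level logic
    email = form.get("email", "")
    if email and "@" not in email:
        errors.append("email:invalid")
    if form.get("password") and len(form["password"]) < 8:
        errors.append("password:too_short")
    return errors

# Acceptance cases expressed as data rows, so the same table can be
# re-run on every build -- the "repeated run" that gives automation its value.
CASES = [
    ({"username": "ann", "email": "ann@example.com", "password": "s3cretpw"}, []),
    ({"username": "", "email": "ann@example.com", "password": "s3cretpw"},
     ["username:required"]),
    ({"username": "ann", "email": "not-an-email", "password": "s3cretpw"},
     ["email:invalid"]),
    ({"username": "ann", "email": "ann@example.com", "password": "short"},
     ["password:too_short"]),
]

for form, expected in CASES:
    assert validate_registration(form) == expected, (form, expected)
print("all registration acceptance cases passed")
```

A table-driven layout like this mirrors how tools such as FitNesse present tests: the rows are readable by QA and business people, while the fixture code stays small.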

Don't use them for drag-and-drop UI testing or color/stylesheet testing.
Do any of you have experiences in this area?
