Monday, 28 March 2016

Test Automation Snake Oil


This is from 1999 and yet is still so relevant to the challenges we face with Test Automation. Go read it. It's six pages and doesn't need a review, so instead here are some thoughts I had while reading it. Surprisingly little of it has dated, apart from advances in tooling allowing more tasks to be automated. That is still evolving today, and hopefully I can get a post up about the field of cognitive or visual automated testing soon.

I have a little experience in a QA role. It was the start of my career and it was a mixture of manual and automated testing. I even ended up developing an internal automation framework when I had just graduated, so I do have an interest in Software Testing, but I do not in any way claim to be an expert in the field. I think quality is the responsibility of everyone on the team, especially when you have a setup, like we do at work, that places the QA Engineer/Software Engineer in Test within a small feature team. In a Software Management role, I feel it's a massive part of the job to remain a strong ambassador for quality even if you aren't affecting the code on a daily basis.

I think the drive to automate all the things comes from a really good place, but I do worry that when we charge head first into battle it's really easy to end up with a mass of tests that makes it far more difficult to evolve our application and release features than it should be. That's before you get into the challenges of maintaining all the lines of code you've just written to create those tests, or the terrible fear of over-testing, because sometimes the line between what you can test and what you should test gets blurry.

So when you read, from 1999, "In fact, in my experience, they fail more often, mainly because most organizations don't apply the same care and professionalism to their testware as they do to their shipping products," shouldn't it hurt and surprise us that we're still sometimes making those mistakes? When you read that Borland in the 90s (shout out to the Silk Test massive) tracked whether bugs were found by automated or manual testing, it makes you wonder if we are even doing that as an industry now. Can we point at something and state that we know how well our automated tests are performing?

In a world of flaky tests, or tests that need to be changed with every single code check-in (including check-ins that don't touch external interfaces), shouldn't we be looking critically at the ROI of each automation endeavour, the same way we should with every user story we write? Does having a suite of tests stop us from doing the appropriate manual runs in that area? Does it give us a false sense of security?

The most important thing test automation is meant to do is free your people from mundane, repeated work so they can use their brains and insight to find the unexpected. The big concern should be: if we set up our test engineers to spend all their time writing automated tests, are we really setting them up for success? I think it's only fitting to leave with one more quote from the article: "That's why I recommend treating test automation as one part of a multifaceted pursuit of an excellent test strategy, rather than an activity that dominates the process, or stands on its own." That's something I think is even more important today than the day it was written.