Monday, 28 March 2016

Because if we don’t learn from History Channel… we are doomed to repeat History Channel.

Test Automation Snake Oil


This is from 1999 and yet it's still so relevant to the challenges we face with Test Automation today. Go read it. It's 6 pages and doesn't need a review, so instead below are some thoughts I had from reading it. Surprisingly little of it has dated, aside from advances in tooling that allow more tasks to be automated. That is still evolving today and hopefully I can get a post up about the field of cognitive or visual automated testing soon.

I have a little experience in a QA role. It was the start of my career and it was a mixture of manual and automated testing. I even ended up developing an internal automation framework when I had just graduated, so I do have an interest in Software Testing but do not in any way claim to be an expert in the field. I think Quality is the responsibility of everyone on the team, especially when you have a setup, like we do at work, that places the QA Engineer/Software Engineer in Test within a small feature team. I feel that in a Software Management role a massive part of the job is still being a big ambassador for Quality, even if you aren't affecting the code on a daily basis.

I think the drive to automate all the things comes from a really good place, but I do worry sometimes that we charge head first into battle. It's really easy to end up with a mass of tests which make it a lot more difficult to evolve our application and release features than it should be. That's before you get into the challenges of maintaining all the LOC you've just generated to create those tests, or the terrible fear of over-testing, because sometimes the line between what you can test and what you should test gets confusing.

So when you read, from 1999, "In fact, in my experience, they fail more often, mainly because most organizations don't apply the same care and professionalism to their testware as they do to their shipping products.", shouldn't it hurt and surprise us that we're sometimes still making those mistakes? When you read that Borland in the 90s (shout out to the Silk Test massive) tracked whether bugs were found by automated or manual testing, it makes you wonder if we are even doing that as an industry now. Can we point at something and state that we know how well the automated tests are performing?

In a world of flaky tests, or tests which need to be changed with every single code check-in (including changes that don't impact external interfaces), shouldn't we be looking critically again at the ROI of each automation endeavour, in the same way we should with every user story we write? Does having a suite of tests even stop us from doing the appropriate manual runs in that area? Does a false sense of security exist?

The most important thing that test automation is meant to do is free the resources you have from mundane repeated work, so they can use their brains and insight to find the unexpected. The big concern should be: if we set up our test engineers to spend all their time writing automated tests, are we really setting them up for success? I think it's only fitting to leave with one more quote from the article: "That's why I recommend treating test automation as one part of a multifaceted pursuit of an excellent test strategy, rather than an activity that dominates the process, or stands on its own." That's something I think is even more important today than the day it was written.

Sunday, 27 March 2016

Your Code As A Crime Scene

Just finished reading this by Adam Tornhill and wanted to share a quick review of it.

Really interesting book. Part theory, part hands-on experience, and as such it's a different style to a lot of programming books, which I think keeps you engaged after the enticing introduction telling you that the same techniques used to track serial killers can help you find problems in your software. It's a premise like that which gets you interested.

Sometimes it feels like a bit of a gimmick. Often the forensic techniques don't feel like anything more than a touch point that helps you understand the premise of some pretty nifty statistical analysis of your code and particularly your source code history. We see the map that indicates where Jack the Ripper may have lived, but then our own code map seems a bit more straightforward. It still acts as a nice narrative and vehicle for getting the concepts across, but I don't feel I'm any closer to starting my private detective business unfortunately.

Section 1 starts with the theory to help you understand where the bottlenecks may be in your code, and in particular how to tell, from the way code evolves and grows, how it may have suffered growing pains. Essentially V1.0 of your code is normally written by a smaller group with a singular purpose, but as time goes on more people come into the project, and that can very easily lead to sprawl of the code base and differing standards and implementation styles. It's great if you can identify those areas, because they may need a proper ground-up redesign.

This is done through a couple of different mechanisms, one of which is mining the richness of information in your git history. The tooling provided will allow you to find code which is complex or could be home to many errors. Another way of doing that, suggested by colleagues, was just to keep an eye out for anywhere I had made commits....dickheads.
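To give a flavour of that kind of mining, here's a minimal sketch of my own (not the book's tooling, and the function name is just mine) that counts how many commits have touched each file. Change frequency like this is one of the raw ingredients for the hotspot analysis the book describes:

```python
import subprocess
from collections import Counter

def change_frequencies(repo_path="."):
    """Count how many commits have touched each file in a git repo.

    Files that change most often are candidates for a closer look.
    """
    log = subprocess.run(
        ["git", "log", "--format=format:", "--name-only"],
        cwd=repo_path, capture_output=True, text=True, check=True
    ).stdout
    # Every non-blank line in this output is a file path from some commit
    files = [line for line in log.splitlines() if line.strip()]
    return Counter(files)

if __name__ == "__main__":
    for path, commits in change_frequencies().most_common(10):
        print(f"{commits:5d}  {path}")
```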

So while you may not be pounding the beat looking for clues as to where the issues are, what you get instead is some great heuristics and insights. A lot of them I have no experience of, like indentation-based complexity, which suggests you can judge the complexity of code visually from how it's indented. All of these techniques are based on academic findings, and Adam acts almost as a conduit bringing them to industry to say: hey, this could be useful to you today, don't wait for it to become a fancy start-up with a .io address.
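My rough reading of the indentation idea, as a sketch rather than the book's exact metric (the tab-size assumption and function name are mine), is that you can score a file by how deeply its lines are indented:

```python
def indentation_complexity(source, tab_size=4):
    """Whitespace-based complexity proxy: total and deepest indentation.

    Deeply nested, heavily indented code tends to be complex,
    regardless of the language it's written in.
    """
    total = 0.0
    deepest = 0.0
    for line in source.splitlines():
        stripped = line.lstrip(" \t")
        if not stripped:          # blank lines carry no signal
            continue
        leading = line[: len(line) - len(stripped)]
        depth = leading.count("\t") + leading.count(" ") / tab_size
        total += depth
        deepest = max(deepest, depth)
    return {"total": total, "max_depth": deepest}

if __name__ == "__main__":
    with open(__file__) as f:
        print(indentation_complexity(f.read()))
```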

We also get worked examples, both on the code base that's used to analyse code bases (how meta ;) ) and on NHibernate and others. It's brilliant to get this level of hands-on work in a book which could easily disappear into theory and intellectual grandstanding. It makes the examples feel like something you can actually aspire to, and all in all it made me interested in checking out Microsoft R (pretty sure there is a free EdX course on it) and the world of data analysis.

Section 2 talks at length about automated testing, and also about the shortcomings its implementation can have: when you have an 'extensive' set of tests that are too close to the code, they can counter-intuitively slow down the progress you can make.

I found the chapter on this to be probably the most interesting and thought-provoking of the book. It's not often a developer will talk about writing tests, and even less often want to draw attention to the tests they've written, but this is a frank discussion about how the best intentions around testing can end up weighing you down, because the same discipline and thought processes we bring to the application code often aren't present in the test code.

Section 3 is on social psychology, and so it's very different to what has come before. It's less actionable (to start with, though it does come back around) and I guess more of intellectual value. That's not to say there aren't touch points with the real world, but initially it's removed from conscious thought etc. Interesting points include a discussion of why Brooks' law on increasing team size doesn't fit for open source projects.

This feels like a manual for sitting down and working out how to mine the information-rich source that is your source code history. As an industry with such a steely focus on big data etc., we often seem wary of turning the microscope around to look at ourselves and see what can be learned about how we work and what makes for successful software delivery. Though the techniques may not stay the same, I truly believe this book could make a big difference to how software gets delivered. Actionable metrics for software development are such a bloody minefield that it's hard not to be really floored when someone can not only show you actionable and scientifically backed ones but also make it an interesting read.

I'm going to make a conscious effort to use the tools listed in the book and see what information we can bring out. We are in a pretty distributed project set-up (which I guess is becoming the norm via micro-services), so I think some of our temporal dependencies will be lost because they span multiple git repos, but I still think a lot of what is discussed and reasoned about here will be useful. When you have a book like this which isn't explicitly about a technology or a class of problem, I think there's no higher praise than to say it's thought provoking and has driven me to want to implement what's talked about as soon as I can.

Well worth a read!