Interviewer: You’re giving a session at the Next Generation Testing conference in Chicago on the 18th of September that describes some “intelligent” mistakes in test automation. This sounds like an oxymoron to me!
Dorothy Graham: Yes it is a bit self-contradictory. Another title might be “It seemed a good idea at the time”.
A mistake is an action resulting from defective judgment, carelessness or a misunderstanding.
“Intelligent” means exercising good judgment. But if judgment is based on a misconception, then a mistake is still made, but for the best of reasons.
Int: Looking at ads for testers these days, most organizations want testers who can write code, but you have spoken out against this – why, and what does this have to do with automation?
DG: What I object to is the assumption that it has to be the tester who becomes a test automator, who works directly with the tool. Because the tools use scripting languages (which are programming languages), any tester who wants to use the tool directly must therefore become a developer of some kind.
Of course there are some testers who will be very happy to become developers (including automation script developers working with the tool’s scripting language) – I have no objection to that, and it is very useful particularly in an agile team.
And it is very useful for testers to understand code and to have some technical knowledge.
Someone needs to have sufficient technical expertise and programming skill to build a good automation framework too – but not all testers. All testers should be able to write and run automated tests, whether or not they are developers – this is what a good automation framework will provide.
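One common way a framework provides this separation is the keyword-driven style: testers describe tests as plain rows of keywords and arguments, while a smaller technical team implements the keywords in code. The following is a minimal sketch of that idea, not any particular tool; all names (the keywords, the account domain, the functions) are illustrative.

```python
# Minimal keyword-driven sketch: testers write tests as plain
# (keyword, arguments) rows; the framework maps each keyword to code.
# All names here are illustrative, not taken from any real tool.

def open_account(state, name):
    # Create an account with a zero balance.
    state["accounts"][name] = 0

def deposit(state, name, amount):
    # Arguments arrive as text, as they would from a test table.
    state["accounts"][name] += int(amount)

def check_balance(state, name, expected):
    actual = state["accounts"][name]
    assert actual == int(expected), f"{name}: expected {expected}, got {actual}"

# The framework team maintains this mapping; testers never touch it.
KEYWORDS = {
    "open account": open_account,
    "deposit": deposit,
    "check balance": check_balance,
}

def run_test(rows):
    """Interpret a table of keyword rows against fresh test state."""
    state = {"accounts": {}}
    for keyword, *args in rows:
        KEYWORDS[keyword](state, *args)
    return state

# A tester writes only this table, no programming required:
test = [
    ("open account", "alice"),
    ("deposit", "alice", "100"),
    ("deposit", "alice", "50"),
    ("check balance", "alice", "150"),
]

result = run_test(test)
print(result["accounts"]["alice"])  # 150
```

The point of the design is the division of labor: the tables stay readable to business-focused testers, while the keyword implementations concentrate the programming skill in one place.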
The danger is the implication that the only good tester is one who can code. I believe that this denigrates our own discipline and is alienating some excellent testers. Not all testers want to become programmers, and not all testers would be very good at it. If you are a tester who has come from a business background, is very happy dealing with tests, and is very effective at finding defects, why should you have to give up what you love in order to do something you don’t like and won’t enjoy?
Forcing all testers to become developers is damaging to those testers and therefore to testing, and it doesn’t help the automation either to have demoralized people doing work they don’t like, and doing it poorly. As Hans Buwalda says, “you may lose a good tester and gain a poor programmer”.
Int: You’ve mentioned that “many organizations never achieve the significant benefits that are promised from automated test execution.” Who promises these benefits, and why do so many organizations fail to achieve them?
DG: I’m afraid that it is tool vendors who often get carried away and promise things that cannot be achieved, or fail to mention the effort needed to achieve good benefits. I was at a conference not long ago where one of the vendors (who will obviously remain nameless!) was promising that using their testing tool could achieve zero defects in the application being tested!
I was so annoyed I actually went and had a frank discussion with the representative who was there, and I believe they have now modified their web site, as I don’t see this claim there any more.
The people who make the decisions about what tool to get and how much effort and time would be needed in test automation are often at very senior levels in the organization and not aware of what is actually needed to achieve real and lasting success. If they believe the over-hyped promises, they will not see any need to invest effort and time to build good automation, as they think it will just come fully-formed “out of the box.” This makes it very difficult for testers, test managers and test automators to be able to achieve what would be possible if only they had realistic investment.
Int: When is automation not the answer, and what benefits are there for manual testing?
DG: Nice question – I like it! Automated testing never replaces all of manual testing. There are some things that are better and/or easier to do manually, for example seeing if the layout looks nice or the colors are pleasing. There are some things that would take a long time to automate; if these tests are not run very often, the effort to automate them is not worthwhile. Usability assessment needs human beings to judge the human interface: you cannot automate a real person!
Manual testing has many benefits, probably the biggest being its flexibility and bug-finding ability. Exploratory testing is the most effective approach since the human brain is engaged! For example, if something just a little strange happens during a test, the human tester might think “that’s odd”, follow a new line of investigation, and find a major bug. An automated test only does as it is told and never thinks “that’s odd”; it just compares what’s in its comparison file. In fact, the tool doesn’t think at all: it is the least intelligent tester you will ever have.
The best approach is to use people to do what people do best, and use the computer to do what it does best. Test automation gives the best benefits when it removes tedious and repetitive testing (e.g. repeated regression tests), freeing the testers to design more tests and do better manual testing.
Int: You have two published books on automation – are you working on another one?
DG: Actually, I am now working on a wiki containing test automation patterns, along with Seretta Gamba, whose idea it was (when she read the other chapters in the Experiences book). This provides useful “bite-sized” advice about aspects of system level test automation. See TestAutomationPatterns.org. There may be a book coming to support this at some point!
With more than thirty years in software testing, Dorothy Graham is coauthor of four books: Software Inspection, Software Test Automation, Foundations of Software Testing, and Experiences of Test Automation: Case Studies of Software Test Automation. Dot was a founding member of the ISEB Software Testing Board, a member of the working party that developed the first ISTQB Foundation Syllabus, and served on the boards of conferences and publications in software testing. Dot holds the European Excellence Award in Software Testing and the ISTQB Excellence Award. Learn more about Dot at DorothyGraham.co.uk.
This interview has been adapted from an interview by Noel Wurst of SQE in January 2013. The original can be found here: