Firstly, a big thank you to Iain and his post for getting these thoughts rolling. I’m enjoying Iain’s writing, and I think anyone who is interested in testing (context-driven or not) should follow his work.
A snapshot of my reply, and the area that I want to write about in this post…
“We had multiple business users come in and help us test. They were not there for UAT (or similar), they were there as numbers. Numbers to help us do all the testing we needed to. So, the most common argument for the steps, was that we needed them so the business testers could do it without too many questions. Now, I for one love questions (just not the same one over and over again). But there was just ‘no time’ for questions.”
Now, when I say that they were not there for UAT… it’s kind of a grey statement. Officially they weren’t, but due to the nature of our ‘process’ at the time, it sort of morphed into a mixture of system testing and UAT. Actually, probably more UT, as there really was no acceptance involved. So, according to ‘those who shall remain nameless’, we needed step-by-step test scripts. How else would these users help us get the tests to 100% complete?
So therein lies our first problem… the need to be 100% complete. Never happened, by the way. Coverage was the order of the day (not uncoverage, as Iain also highlights). If we get to 100% complete, we’re covered. Phew, it’s so simple (NOT).
These teams had a mixture of permanent testers and the aforementioned business users. Our permanent testers would spend weeks developing our detailed test scripts, which they and the business users would then execute in order to check the product was as specified. I would say that 9 times out of 10, these were positive. Their aim was to prove the specification was correct. We needed these business users to get through all the work (which we created by producing these awesome scripts).
What did this mean? Amongst other things… they were boring. Not much fun just sitting there following a script. It didn’t take long for our business users to get over their initial “wow, this is different”, and for it to become a “wow, when will this be over?” What does this drive? A lack of focus. When you’re bored, you’re not motivated. When you’re not motivated, you generally don’t care all that much. Bad way for a tester to be.
So, in hindsight (after reading Iain’s post) I started thinking about how I should have approached the situation. What would my arguments be against the suggestion that we needed these scripts?
Iain’s questions (of which I’m sure he has a thousand more) from his reply to my reply, each followed by my elaboration, are probably a great start:
Would you have needed extra bodies if you didn’t have the overhead of scripts?
Short answer, no. Well, nowhere near as many, that’s for sure. This would have allowed us to be a little more ‘choosy’ when trawling through the list of names that were available to us. Maybe we could have chosen those with more suitable domain knowledge, or those who had come in previously and shown us some great skills, motivation, etc.
The sheer volume of work required either more permanent testers or some business users. I have no doubt that by cutting out the scripting we could have positioned ourselves much better. With more time available (by not writing said scripts), the testers could have identified more quality-related information earlier on.
Would your testing have been better without scripts?
Yes. Given the right people testing, definitely yes. I’m a big fan of test ideas and using these as test cases.
Now, when I say “the right people” I mean the right people. I think this is really important to the success of any venture, especially testing. Even if the tester doesn’t have a certain skill or level of knowledge that you’d prefer… if they have the drive and motivation to deliver a quality product, then give me them! If they sit there and ask questions all day, so be it. They may ask 100 questions, and only 1 of them may be crucial to highlighting a gap, but that’s cool in my book. :0)
Hard to say whether this would have been the case in the above situation. Generally speaking, I think so. I think many of the permanent testers may have had the motivation sucked out of them by idiot scripts. Given the opportunity to approach testing differently, many would have excelled.
Made to walk without crutches, might the users have tested better without scripts?
Yes, with the inclusion of a mission of course. I think we could have used these business users in a more effective way.
When they used to say (for example) “this is awful, I wouldn’t use this”, the general response was one of “well, that’s what the specification says, so it must be right”, and I’m sure we’ve all heard this before.
What if we had asked “why?” Why did they think it was awful? Why wouldn’t they use it? The power of questions is unquestionable. It could have been a 5-minute ‘fix’ to get them using it again, but we’ll never know.
Might not asking questions have revealed some interesting and useful answers?
Yes. See above. ;0)
What was the mission, coverage or uncoverage?
To ‘those who shall remain nameless’ it was coverage. To me, it was also coverage for a long time. That’s what I was taught was ‘best practice’. I was young and naïve once. If only I had questioned things a little more. Oh well, better late than never, I guess.
So towards the end, for me it was uncoverage. Iain explains this well. As this was not seen to be the ‘way’, it was hard. One of the reasons I moved on (one of many, that is).
So what other questions/arguments are there for needing idiot scripts for our business users? Throw them at me.
Thanks again Iain.