Working in New York

I recently published a post covering a bit of our personal journey in NYC over the past 18 months or so. Here I wanted to write more about it from a working perspective.

I’ve known Paul Holland and Keith Klain for a while now. I met Paul in person for the first time back in 2014 when we both attended and spoke at StarCanada in Toronto. About a month later we saw each other again in Sweden for Let’s Test. Later that year I met Keith in person for the first time when he came out to Australia to keynote for Let’s Test Oz. Of course, I had known both of them previous to these encounters through the online testing community.

At Let’s Test in Sweden I recall getting up early one morning to partake in a game of disc golf (yep, I had no idea what it was either… but it’s fun, so check it out!) which Paul was running. He’s bloody good at it, but don’t tell him I said that. While throwing discs and trampling the Swedish countryside (and, as it turns out, collecting a deer tick) we got talking about Paul’s recent move to NYC to work with Keith. What they were doing, and attempting to do long term, was inspiring. It wasn’t just good testing work, it was also making an impact socially. I know our memories can deceive us, but I’m confident in quoting Paul as saying… “Do you want to come work with me in New York?” I laughed it off. I thought to myself, it’s too big a move. I can’t take my entire family to the other side of the planet. I can’t even comprehend what NYC would be like after living my whole life in little old Adelaide, Australia. Well, apparently I can. About a year and a few visa hurdles later we landed in NYC!

I’ll be honest, the move was tough. It would be one of, if not the, hardest things we have ever done. NYC for a holiday… fun and exciting. NYC to live… fucking scary. You can read more about this stuff in the other post.

June 1 was my first day on the job, and what do I walk into? A public RST class being led by Michael Bolton, with Paul co-presenting! I had taken RST back in 2012, but that was with James Bach… so I was looking forward to the course from a different perspective. Attending an RST class is a pretty big deal for an Aussie; we don’t get the opportunity very often down under, so I was stoked!

After the course I spent more time learning about the business, helping out with some pre-sales stuff, and also getting the office organised (building book shelves, etc.) as it was only very recently completed. Travelling out to the South Bronx each day was an eye opener, but as with most things… you get used to it.

What the company was doing, and what they wanted to do, for the community was outstanding. You could not only take pride in the operational work you did, but also the positive impact you were having on people’s lives. It made getting out of bed each day that much easier.

My first and only client (as I stayed with them throughout my time in NYC) was a global financial services organisation. You can read more about my time with them here. It was a huge learning experience for me, and I’m very grateful for the opportunity.

The client work kept me pretty busy, so unfortunately I couldn’t get involved as much as I would have liked in the consulting side of the business. However, I did learn a lot from Paul and Keith. It’s unfortunate that it broke down the way it did. Early in 2016 Keith left, and Paul also departed later the same year. For those that want to know more about it… don’t ask me – I’m not interested in talking about it or potentially fuelling any bullshit that’s out there. Those that need to know, know… and those that are true friends won’t judge without finding out the truth from the source.

The client work, along with my personal life (trying to get as much out of the city as possible while there), also meant I didn’t end up doing much with the awesome NYC Testers meetup group. I managed to get along to quite a few of the meetups, and they were always well organised and useful. I didn’t get the chance to present, despite their best efforts, and for that, guys, I’m truly sorry.

To my team in the ‘Chicken Room’ – You guys kick arse! One of the best teams I have ever worked with. An often frustrating and complex project, but you held it together really well. I’ll miss you all very much.

Tristan and Cochrane – Thanks for being my homies when I most needed it. Peace and love.

NYC Testers – You’re a shining example of what a meetup can become, the power it can bring, and the friendships it can create. Kate, Anna, Tony (Tiggr – yes, I left the ‘e’ out on purpose), and Perze… keep on doing what you do! I hope to see you all again some day.

Paul – There are few people that can make me laugh as much as you can… all while teaching me important things about testing. Thanks for your ongoing support. It’s a shame things didn’t work out differently towards the end there mate. However, I do look forward to working with you again in the future… I’ll make it happen somehow!

Keith – There are few people who can curse as much as you can… and get away with it! I’ve said it before, and I’ll say it again… best boss ever! Thank you for everything. You went above and beyond the call of duty, and I’ll be forever grateful. I also look forward to working with you again in the future mate!

Peace…

Removing a Tester…

So, Katrina Clokie (KC) tweeted a tweet last week…

Still thinking about this question that someone asked me a week ago: “If we removed a tester from the team, would quality go down?”

To which my initial reply was…

you never remove a tester though, you remove a person and that person is unique. So very hard to answer.

Followed by…

Yes, thank you for articulating so clearly one of the reasons that I’m finding it difficult to find a generalised response.

I feel like there should be a “these are the 5 things to consider” type of response, but I can’t even nail that. Yet.

So I started tweeting my thoughts on some of them (which I’ll articulate further below), and that was followed by a few more tweets from KC, including this one…

I feel like we’re thinking about the question entirely differently, and I’d like to read your blog on this topic🙂

…and here we are.🙂

So hopefully KC will reply to this with why she thought that, because I’m still a little confused by it. Anyway, there were quite a few replies to her initial tweet and many ‘cautions’ about defining what quality means… which I completely agree with. However, I’m stuck on the ‘remove tester’ part of it, so I’ll stick with that for now.

My first thought was in relation to ‘tester’ versus ‘testing’. I know KC reasonably well, so I was confident I didn’t need to clarify and that she in fact was talking about the role of the tester rather than the activity of testing. With that I moved immediately to thinking about the person who was in the role of the tester because, let’s face it, a role description is only ever as good (or bad) as the person performing the role. With that, there was no way I could answer the question without getting specific about a particular context, which wasn’t going to happen on Twitter because I hate conversation on Twitter.😉

So I tweeted my initial response.

After KC acknowledged and mentioned the ‘5 things’ I tweeted the following…

1. Mindset, 2. Skills, 3. EQ/personality, 4 and 5. Hmmmmmmmmm

4. Current team fit. How they work together. Are they a motivator/demotivator? Do they hold it together?

communication is too broad and sits in skills anyway… help me out with 4 and 5 KC.

4. Support (from other people/teams & artifacts/automation/monitoring) 5. Quality (depends what it means to your org?)

Notice how nothing we’ve mentioned is ultimately role specific? It’s all very much related to the individual who may be removed, rather than the role. So when reading the following consider it in relation to a specific person in your team, the one that you call tester and performs the role of tester (whatever that may mean to you).

  1. Mindset – I’ve often said the following… developers are positive, testers are negative. Now before you get angry with me, let me explain. Developers build stuff. Sure, they may break things down in order to build, but ultimately they build, create, bring life to new code. Testers seek to break things down. Sure, they may build things in order to break other things down, but ultimately they find breaks and disprove. If you remove a tester who likes to break things down, what impact will that have on your team and its output?
  2. Skills – Humans are unique. Each brings different skills to the team. You may have two testers who are both brilliant with SQL, but does that mean they look at things the same way? No. Skills go much deeper than how they use a keyboard, or what they may have listed on their CV. So when you remove a tester, what actual skills are you taking away from the team?
  3. EQ/personality – Sure, you could put this under skills and call them ‘soft skills’, but I won’t. I’m calling them out separately because of how important I think they are. I’ve worked in many teams over the years and it’s always been fascinating for me to watch how people interact with each other and find their place within the team. Introverts, extroverts, people who have empathy, people who don’t… oh, and there’s usually at least one arsehole. Taking one person out of the mix can have a huge impact, which can be very positive, very negative, or somewhere in-between. So, go ahead… remove that tester, but think about it first.
  4. Current team fit – In hindsight, this is likely an amalgamation of all 3 points above. Everything adds up to the ‘team fit’. I guess I was in a hurry to produce a number 4!
  5. Communication – Yeah well, like I said in the tweet… too broad and likely sits within skills also. However, it’s still very much worth considering when thinking about how your team communicates, and more specifically how the tester you’re going to remove communicates. They may very well be a conduit to the wider team. In my experience testers seem to fit well in that spot as they tend to have a broader perspective of the project… focusing on the initial business need right through to the developed solution’s use.

I’ll let KC explain her 4 and 5 (hopefully – please KC?).

So the key message from me is exactly what my initial tweet said – you don’t remove a tester, you remove a person performing a role. Getting another person to perform that same role, or the activities that are intended to be performed by it, will always give you different results, always.

So who has removed testers and what happened?

Example: Atlassian QA Model (there’s way more than one link BTW) – OK, so they didn’t really ‘remove’ testers. They changed their approach, and the role seems to have changed with it (although the number of staff appears to have significantly decreased in the ‘testing’ space). What happened? Well, from what I’ve researched the outcome has been more positive than negative. I don’t work there, so I don’t know this first hand, but it appears to be working for them.

I’ve used (and still do use) Atlassian products, and my experience has always been quite good. So if I’m a measure… well done Atlassian.

Unfortunately (or perhaps fortunately) my first hand experience has always been introducing testers rather than removing them… so I can’t add any personal experience. However, the way the industry is moving, I think I’ll soon be experiencing the removal. Things move quickly these days. The time available to explore a product is constantly shrinking as we push things to production faster and faster. We’re pushing automation to developers (which I think is good), we’re doing more automation (which I also think is good if it’s well thought out), so the need for large numbers of testers is reducing. I don’t believe there will ever be a point where we won’t need a human interacting with and exploring a product (not in my lifetime anyway). However, does that human need to be called a tester?

Ah, and now we return to ‘tester’ versus ‘testing’… so when removing a tester don’t just think about the role description. Think about the person you’re removing and the activities they perform. Will the other members of the team be able to do what they did? Think… and I mean critically.

I liked this from Aaron Hodder…

A team is a complex system. I don’t think you can predict the outcome. (plus let alone diving into “what does qual mean”)

A complex system indeed, and why is that? Because they’re built with humans!

Also, again, that caution on what quality means… an extremely important part of the equation which many books could be written about (oh, they have!).

Peace…

Experimenting with Process Change – You’re Allowed to Fail

Over the past 12-18 months I’ve been working a lot with various process change. Transition to this, transition to that, and all the associated process change required as you learn and adapt along the way. If I had to call out the biggest challenge, or even blocker to success, it would be people’s unwillingness to fail.

*Fail: to fall short of success or achievement in something expected, attempted, desired, or approved:

The experiment failed because of poor planning.

*Dictionary.com (I underlined the word experiment)

From my experience, people’s understanding of the word fail is too negative. It can often prompt quitting… “I failed at that, so I’m going to quit”.

I’ve been thinking about how to change that, or perhaps even a new word to use…

  • Flounder
  • Fall
  • Fizzle
  • Flop

They just don’t work.😉

The reason I underlined the word experiment in the example taken from Dictionary.com is that it aligns so nicely with where my thinking is currently at – treating process change as an experiment, or a series of experiments to be more precise. Attempting to soften the blow of the words fail and failure.

Taking from science, what happens when an experiment fails? Sure, we could simply quit, concluding that our hypothesis is false. However, the science community teaches us to push on: revisit our variables and control them more tightly if required, take the information gathered from the failure and revise the hypothesis, then experiment again. Rinse and repeat.

Remember, a failed experiment can yield just as much valuable information as a successful one, if not more!

[Diagram: the process change experiment cycle]

Identify Potential Process Problem/Desired Change – At this point you’re thinking about the why, the goal of the change. Transitioning to a new way of working? Identified a timeliness problem via value stream mapping? This is the starting point where you identify a potential process problem, decide a change is required to drive efficiency, or perhaps you’re told to change by a higher power. Whatever the reason, you need/want to change.

Build Process Change Hypothesis – You’re now thinking about the how. Big bang? Small increments? What can you attempt in order to see success with the change and meet the desired goal?

Plan Experiment/Control Variables – The size of the plan and the amount of effort put into it will of course depend on the experiment and the number of variables you’re potentially dealing with. When experimenting you need to be able to control and understand the variables, and with process change this can be extremely difficult due to the often large scale of human involvement. You may even need to allow for the mood of the people involved, and who knows what that could be from one minute to the next. This is where understanding, more than control, is important. If you can at least understand a variable (you cannot control a person’s mood, for example) you can allow for it during the experiment and when drawing your conclusions.

Run Experiment – Execute your plan and run your experiment! Take notes, collect data/information, monitor your variables and continue to understand them if you cannot control them.

Draw Conclusions – This is the important part (well, it’s all important but this is where you can influence people’s view on failure). Did what you implement meet your goal? No. That’s fine, we’re simply experimenting remember. Take what you’ve learned from the experiment and circle back to the top!
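The cycle above can be sketched in code. Here’s a minimal, purely illustrative Python sketch — every name in it (`Experiment`, `goal_met`, `revise`) is my own invention for this post, not anything from a real framework:

```python
# A minimal sketch of the experiment cycle: identify -> hypothesise ->
# plan -> run -> draw conclusions -> (on failure) revise and repeat.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str
    lessons: list = field(default_factory=list)  # what failed attempts taught us

def experiment_cycle(hypothesis, goal_met, revise, max_attempts=5):
    """Run the change experiment, revising the hypothesis after each failure."""
    exp = Experiment(hypothesis)
    for attempt in range(1, max_attempts + 1):
        if goal_met(exp.hypothesis):          # Run Experiment + Draw Conclusions
            return exp.hypothesis, attempt
        # A failed experiment still yields information: record the lesson,
        # revise the hypothesis, and go again (rinse and repeat).
        exp.lessons.append(exp.hypothesis)
        exp.hypothesis = revise(exp.hypothesis)
    return exp.hypothesis, max_attempts
```

A real process change is of course nowhere near this tidy; the point is only the shape — failure feeds the next hypothesis rather than ending the effort.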

This all seems very easy and logical in theory. As with most things, in practice it’s far more difficult. Some cautions:

  • Don’t bite off more than you can chew. The size of the experiment can make a huge difference to the success of this change process and people’s willingness to fail. If you build an experiment that will take months to run, then people’s acceptance and understanding of a failure will be much harder to come by. Not only that, with a larger process change experiment come more variables. You need to limit the variables as much as possible, especially those you cannot control.
  • Use the language of experiment often. The ‘general’ understanding of the word experiment (and the process of experimenting) brings a softer meaning to the words fail and failure. People understand that experiments fail, and that it doesn’t mean we should immediately give up.
  • In line with the first caution above, think about the level of risk associated with your experiment. If your experiment does fail, what will the impact of that failure be? Can you recover quickly? If you assess the impact of the failure as too great, it could be a sign you need to break down the experiment into smaller ones, or remove potentially high-risk variables. You can’t sell a process of experimenting if your experiments keep killing the business!

The key with all of this is getting people to understand and accept that a certain level of failure is OK, as long as you’re learning from it.

The Invisible Gorilla…


I finally finished it… after more than a year! While I don’t entirely blame the book for taking so long (moving to NYC played a part), it was a difficult read. Not that it contained overly complex ideas, but I found it pretty tedious throughout. I would normally just stop reading a book like this, but I was very interested in the content, so I pushed myself.

Pretty sure most people reading this post will have heard of the short video titled ‘The Invisible Gorilla’. Count the passes of the basketball, get to the end, did you see the gorilla (sorry, spoiler)? It’s a sound experiment, and it holds a great wealth of insight for EVERYBODY, not just software testers. It was that video that led me to purchase the book. The video captures one illusion, the illusion of attention, and the book builds further on that, covering other common illusions: memory, confidence, knowledge, cause, and potential.

Each chapter focuses on a different illusion, in the above order. It’s a mix of example via real-life story and example via real-life experiment. I found the stories quite fascinating and likened them to those told within Freakonomics. It was eye-opening to read about criminal witnesses mistaking what they saw or what they remembered, or how confidence can be the deciding factor in so many situations, and so on. As with many elements of psychology, the explanations used to describe human behaviours in the book were broad, and you can never really be 100% certain of their accuracy; however, it was enlightening for the most part.

When the examples moved into descriptions of experiments, it all slowed down a bit for me. This is no doubt a personal preference in how a book keeps my attention, so don’t let that be a deciding factor for you. Having the ‘data’ to back up the theories does help, albeit a little boring for me to read about. They weren’t all that way, but as you near the end of the book you can sense an element of repetition in the writing, which causes things to drag on a bit.

For me personally the illusions of attention, memory, and cause were the most powerful.

The illusion of attention has been a big player in my software testing career, and until I learned about inattentional blindness and James Bach’s focus/defocus heuristic (something that also helped me a lot with the Dice Game), I’m pretty sure I fell victim to it a lot more often. I have also witnessed many other testers move directly past obvious bugs because their focus was on another part of the product, or on something else entirely (having a bad day, etc.). I probably should clarify that the bugs were obvious to me, but using the word obvious generally is not entirely fair. Due to the illusion of attention the bugs were not at all obvious to the tester sitting in front of me, and they cannot be blamed for that… they’re human. Pair testing helps a great deal to counter this illusion. Even if you both try to focus on exactly the same thing, you won’t be. Each of us sees things in a unique way, which helps us identify different bugs, even if only slightly. The trick is to be consciously aware that your attention is a limited resource. Don’t think that you’ll see it all, because you likely won’t (hell, it’s science baby!). Another thing that has helped me counter this is product tours with particular charters. If you spend a lot of time with the same product and you test for similar things, take a step back and chart a course for a usability tour, a security tour, or whatever type of tour you think may yield some important information that you may not have discovered otherwise.

Early failures with the Dice Game also alluded to the illusion of memory. I first played the Dice Game during my Rapid Software Testing training. I was lucky in that respect as we could work in groups attempting to find the pattern. During that game, and in some since, I very quickly forgot (well, actually misremembered) the previous patterns I had been working on. There were many occasions where I finally solved it thinking that I had already been down that path with no success. This continued until I began taking more significant notes throughout the game. With those notes I could constantly check on what theories I’d worked on previously and my problem of misremembering was gone. As a tester, how many times have you been asked why you didn’t identify a bug that was now present in production? How many of those times do you distinctly remember testing in that area of the product? How did you know? Did you just ‘remember’ testing it, or did you actually go back and look over your notes? The book does a fabulous job of showing the reader how human memory can deceive us, very easily. Reading about this illusion once again highlighted the importance of taking notes and gathering evidence while testing.

The illusion of cause has been an interest of mine since reading Freakonomics. Generally speaking the entire book is about this illusion and how people are quick to jump to a particular cause rather than seeing it merely as a correlation. Humans have evolved in a way that allows them to identify patterns, but not only this, they also seek them. Our understanding of time (always moving forward) drives many of the patterns we identify/seek. In a sequence of events it’s natural for humans to identify the first event as the cause of the remaining, when in reality they may not be related at all. In the software industry I think we can spend too much time seeking and then identifying the wrong cause for many of our failures. If the sequence of events lines up nicely we jump on the first event as the cause. This chapter was a reminder for me to look at events in a different way, and to question my own tendency to identify a particular pattern.

There are also many lessons to be found in relation to the illusions of confidence, knowledge (often a result of the former!), and potential. While it was a struggle at times, I would encourage people to read it.

The ultimate lesson I took away from this book? To live my life and do my work while being aware that these illusions exist. Sure I’ll forget from time to time (I’m only human after all), but hopefully I’ll be able to remind myself before I miss something really important.

Peace.

Seek, Identify, Gather… A model for talking about testing.

When I talk about testing I tend to talk about information. For me, testing is all about information. Information that helps decision makers make decisions, and information that informs risk. How we identify that information depends on the type of information we need, and to know that we can do several things… identify risk, ask questions, do user research, analyse specifications, find out what’s important to those who matter; the list goes on.

Only recently have I actually thought about this as a model. So I thought I’d spend some time visualising it (something that helps me think deeper about a subject).

What testing means to me at a high-level:

[Diagram: what testing means to me at a high level]

If testing wasn’t all about information, then it would be all about risk (generally speaking). So first up I like to talk about risk and risk identification. Testers (for the most part) are good at that. We tend to come with varied backgrounds; business, tech, users, and so-on. So we can view a product with many different lenses which provides greater risk identification coverage.

Then I move to the information required in order to understand and potentially mitigate the risk. The information we seek depends on the risk we’re trying to understand, as does how we seek and identify it.

Let’s visualise that part:

[Diagram: seeking and identifying information based on risk]

I then like to talk about gathering and reporting the information. Test reporting can be very painful, or it can be a breeze. Which one it ends up being depends not only on your stakeholders, but also on how you gather the information while testing. If you don’t record your testing (screen captures, notes, etc.) then how are you going to gather the information you need to report? Pass and fail? Whatever… realistically that means nothing. You need to work out what your stakeholders need (or in the real world, what they want) to see, and then gather the information in a way that translates as easily as possible to it. You may have a project sponsor who totally gets what you’re doing and completely trusts you, and so doesn’t need to see anything… but lurking in the background is the internal audit team who need to see EVERYTHING! So I like to include that when I talk about testing, as it can make or break your entire effort (rightly or wrongly).

Thoughts? Feedback is always welcome.

Peace…