Sunday 3 August 2014

Creating a common test approach across multiple teams

I was recently involved in documenting a test strategy for a technology department within a large organisation. The department runs an agile development process, though the larger organisation does not, and it has around 12 teams working across four different applications.

Their existing test strategy document was over five years old and no longer reflected the way that testers were operating. A further complication was that every team had moved away from the original strategy in a different direction and, as a result, there was a lack of consistent delivery from testers across the department.

To kick things off, the Test Manager created a test strategy working group with a testing representative from each application. I was asked to take a leadership role within this group as an external consultant, with the expectation that I would drive the delivery of a replacement strategy document. After an initial meeting of the group, we scheduled our first one-hour workshop session.

Before the workshop

I wanted to use the workshop for some form of Test Strategy Retrospective activity, but the one I had used before didn't quite suit. In the past I had been seeking feedback from people of different disciplines in a single team; this time the feedback would be coming from a single discipline across multiple teams.

As preparation for the workshop, each tester from the working group was asked to document the process that was being followed in their team. Though each documented process looked quite different, there were some commonalities. Upon reading through these, I decided that the core of the new test strategy was the high-level test process that every team across the department would follow, and that finding this would be the basis of our workshop.

I wanted to focus the conversation on the testing activities that made up this core process without the group feeling that other aspects of testing were being forgotten. I decided to approach this by starting the workshop with an exercise in broad thinking, then leading the group towards specific analysis.

When reflecting on my own observations of the department, and reading through the documented process from each application, I thought that test activities could be categorised into four groups.

  1. Every tester, in every team in the department, does this test activity, or should be doing it.
  2. Some testers do this activity, but not everyone.
  3. This test activity happens, but the testers don't do it. It may be done by other specialists within the team, other departments within the organisation, or a third party.
  4. This test activity never happens.

I wrote these categories up on four coloured pieces of paper.

At the workshop

To start the workshop I stuck these categories across a large wall from left to right.

I asked the group to reflect on what they did in their roles and write each activity on an appropriately coloured post-it note. For example, if I wrote automated checks for the middleware layer, but thought that not everyone would do so, then I would write this activity on a blue post-it note to match the colour of the second category.

After five minutes of thinking, everyone got up and stuck their post-it notes on the wall under the appropriate heading. We stayed on our feet through the remainder of the session.

The first task using this information was to identify activities that appeared under more than one heading. There were three or four instances of disagreement, and it was interesting to talk through the reasoning behind these choices and determine the final location of each activity.
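
To make that duplicate-spotting rule concrete, here is a minimal sketch in Python. The activity names and their placements are invented for illustration; they are not the department's actual list.

    from collections import defaultdict

    # The four categories from the workshop, left to right on the wall.
    CATEGORIES = ["everyone", "sometimes", "others", "never"]

    # Hypothetical post-it notes as (activity, category) pairs; these
    # activity names are made up, not taken from the real workshop.
    notes = [
        ("exploratory testing", "everyone"),
        ("automated middleware checks", "sometimes"),
        ("unit testing", "others"),
        ("performance testing", "never"),
        ("automated middleware checks", "everyone"),
    ]

    # Group the headings that each activity was placed under.
    placements = defaultdict(set)
    for activity, category in notes:
        assert category in CATEGORIES, f"unknown category: {category}"
        placements[activity].add(category)

    # An activity sitting under more than one heading is a disagreement
    # for the group to talk through.
    for activity, headings in sorted(placements.items()):
        if len(headings) > 1:
            print(f"Disagreement: {activity} appears under {sorted(headings)}")

On the day this was done by eye on the wall, of course; the sketch above just captures the rule we applied.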

Once every testing activity appeared in only one place we worked across the wall backwards, from right to left. I wanted to discuss and agree on the areas that I considered to be noise in the wider process so that we could concentrate on its heart.

NEVER
The never category made people quite uncomfortable. The test activities in this category were being consistently descoped, even though the group felt that they should be happening in some cases. There was a lot of discussion about moving these activities to the sometimes category. Ultimately we didn't; we wanted to reflect back to the business that these activities were consistently being treated as unimportant.

OTHERS
As we talked through what others were doing, we annotated each activity with the people responsible for it. The level of tester input was also discussed, as this category included tasks happening within the team. For example, unit testing was determined to be the developer's responsibility, but the tester would be expected to understand the coverage it provided.

SOMETIMES
When we spoke about what people might do, most activities ended up shifting to the left or the right. Either they were items that were sometimes completed by the tester when they should have been handled elsewhere, or they were activities that should always be happening.

EVERYONE
Finally we talked through what everyone was doing. We agreed on common terminology where people had referred to the same activities using different labels. We moved the activities into an agreed end-to-end test process. Then we talked through that process to assess whether anything had been forgotten.

After the workshop

At the end of the hour, the group had a clear picture of how their individual approaches to testing would fit together in a combined vision. The test activities that weren't common were still captured, and those activities outside the tester's area of responsibility were still articulated. This workshop created a strong common understanding within the working group, which made the process of formalising the discussion in a document relatively easy. I'd recommend this approach to others tasked with a similar brief.

3 comments:

  1. Really good approach, and a well-written article, Katrina.
    I'm going to give this a go :)

  2. What was the reason behind wanting a common test approach across all teams?

    Replies
    1. Hi Aaron,

      A common test approach across teams might be wanted to address quality issues arising from differing definitions of 'done' across teams, in an environment where multiple product (and component) teams are all committing code to the same mainline for continuous integration and deployment. It could also increase stakeholder confidence in feature sets that involve cross-team dependencies.

      Katrina shared some of her experiences and successes with this approach in her CAST 2014 track session, "Black and White: Software Testing for Scientists". As a result of her presentation, and of seeing this blog post, I am inclined to plant similar seeds to promote greater awareness of test strategies and practices across teams. Thanks, Katrina!

      Regards,
      Lanessa
