Monday 26 October 2015

How I write my blog

Over the past week, I've had a few conversations about writing a blog. As a result, I thought I'd record my approach to blogging, to try and encourage a wider group of people to write. Here's how I get from idea to tweet.

No backlog

I prefer to write about things that I care about right now rather than work from a backlog of ideas. I tried keeping a backlog once, but having a bucket of potential topics to write from didn't really work for me. I found the choice paralysing rather than enabling.

Write to a person

I like to think of a real person who would potentially get value from my thoughts, then write the post for them. This helps me to pitch the tone of my writing - not too condescending and not too complex. I also find writing to an individual easier than writing to a generic group, e.g. Bob vs. "testers who want to start blogging".

Refining loop

I write my posts one paragraph at a time. I'll type out my ideas, almost as a stream of consciousness, then go back through the words and refine them into something that reads nicely. Realising that my thoughts don't have to come out perfectly the first time has really helped me to write more freely.

Proofread in context

When I finish a post, I read through it in my blog editor. Then I also read through it in the preview version to see it in the layout that will appear on my blog. Even when I think I'm done, looking at the words in a different format will often prompt me to change phrasing and pick up spelling mistakes. It's a fresh perspective for my brain.

Practice

When I look back at my earlier blog posts, I find them pretty embarrassing. I imagine that when I look back on this post, and others of this era, I will find them embarrassing too. I feel that my writing is improving the more that I blog, so I try to keep practicing to maintain this evolution.

Set goals

I have a self-imposed target of three blog posts per month. I don't always hit that target, but I find that it motivates me to write. Without this, I am prone to getting stuck in writing ruts where my head won't settle on a topic and doubt creeps in about whether anyone will care about what I have to say.

Pleasing everyone

What I write doesn't have to be liked by everyone, or universally useful, or shared widely across the world. I figure, at a minimum, it's valuable to me to write my blog. I get better at writing, I work out how to articulate my ideas, I develop a voice. I find it easier to treat pleasing others as a bonus.

Sharing

I always tweet when I publish a blog post. It's difficult for people to get any value from what I'm writing if they don't know it's there. I also like getting feedback from the community. Because sharing is circular, I also try to promote writing from others by tweeting content that I enjoy and keeping a list of blogs I follow on my site.


If blogging is something that you'd like to start, or you'd like to do more of, it's likely that the only thing stopping you is yourself. I hope these tips encourage you to write. I look forward to reading what you have to say.

Thursday 22 October 2015

Feedback for Conference Speakers

I spoke at a number of conferences over the past month or so. After each talk I received a variety of feedback, from a variety of channels. The genesis of this post is two pieces of feedback I received for the same talk at the Canterbury Software Summit.

From the conference survey responses:
"Good coverage of the topic; however: agile teams/tribes should be self-sustained. Katrina's presentation though was explaining management activities. What I missed was how the agile approach really works for BNZ, how they constantly improve, what issue and challenges they faced and face etc. Missing enthusiasm. Average slide quality."

From a direct message on Twitter:
"One of the guys at work I talked to today, appreciated your talk at Canterbury software summit. We're thinking of now trying some of the ideas you talked about. Katrina please keep using your gift of inspiring the testing community, as it makes our jobs more enjoyable and fruitful."

As you can see, one person was utterly underwhelmed while the other felt inspired and motivated to make changes in his organisation.

As a new speaker I had no frame of reference for feedback, or any notion of what to expect back from the audience when I delivered a talk. Had I received that first piece of feedback for my first presentation, I would have been entirely disheartened.

Now that I've presented a few times, I'm starting to see patterns in when I receive feedback, what type of feedback it is, and how I can use it. To illustrate, here's the feedback I received after my 'Diversify' keynote talk at the recent WeTest Weekend Workshops.

Verbal

I find public speaking a taxing activity. At the end of the talk, my adrenaline is racing - I know it's all over and I am looking to get away from people for a few minutes to calm down. However, there is usually at least one person who comes to the front of the room to speak to me. 

I like that people do this. The things they wish to say are usually positive and it's good to get immediate validation that it all went okay. Unfortunately I usually don't remember the nice things that they've said, because my brain isn't working properly yet!

Occasionally I get immediate feedback of a different kind. At WeTest Weekend Workshops someone approached me to suggest how I could improve my use of the handheld microphone. Strangely, I always remember this sort of feedback, the things that aren't entirely positive, despite being in the same agitated mental state.

I consider the number of people who come to the front of the room after my talk to be a loose indicator of the emotional response of my audience. The more people, the more I feel like I spoke about something that really resonated with them.

Social Media

After my talk I like to find a quiet spot, take a few deep breaths, and then check the reaction on Twitter. I see three broad categories of feedback in my Twitter timeline.

Announcements

The first tweets come from people who simply say that they are attending my talk. Announcement tweets contain no judgement and no content. Often they include a photo from near the start of the presentation.

As I've started to gain a wider following on Twitter, I think the number of people who announce that they're at my talk has increased. When I was a new speaker, very few people got excited about merely attending my sessions! I consider announcement tweets a loose indicator of my reputation in the community behind the conference.

Ideas

The next tweets will be the ideas from my talk. These might be pieces of content that resonated with people, summaries of my main points, or tweets that let people who are not in the audience know that they've been mentioned.

I consider idea tweets a loose indicator of how engaged people are in the content. In some respects I prefer that there are fewer tweets of this type, as I believe that most people find it difficult to actively listen while also composing the perfect 140 characters on Twitter.

Reaction

Finally there are the tweets that come at the end of the talk. Reaction tweets are all about judgement, though on Twitter you're usually just going to get the happy vibes from people who loved it and felt inspired.

Reaction tweets are about the buzz. I consider these a replacement for coming to the front of the room after the presentation, and so treat them as the same loose indicator of the emotional response of my audience. The more reaction tweets I get, the better. Even if they're not all positive, at least I touched a nerve!

Event

If the event information has been published online, via Meet Up, Facebook, or some other platform, there is usually an opportunity to post feedback.

I find that the feedback I receive via social media comes from people who feel a connection to me as an individual, or who are confident about expressing their opinions to a wide audience. By contrast the feedback I receive via the event page comes from people who I do not know well, those who need longer to process their reaction to the presentation, or those who are not on Twitter!

There is also a shift in language. People have had time to reflect, so their reaction is less emotive and more analytical. On Twitter people "love" the presentation while on Meet Up it's "great".

Providing feedback via Meet Up requires effort beyond the time frame of the event itself. I consider this feedback a loose indicator of how I've improved my standing in the community behind the conference.

Survey

Many conferences send out a post-event survey to all the attendees to help them improve their format, content and structure for the following year.

Survey feedback is anonymous and, of all the forms of feedback, gives the widest spectrum. It seems that once there is no association between your feedback and your name, people become remarkably honest.

Here's a selection of survey comments about the speakers at the WeTest Weekend Workshops event to illustrate this:
  • Katrina's presentation was awesome. Very motivating 
  • Keynote was useful and aligned with the theme. 
  • Might have been even better if we had a more diverse speakers. 
  • Would be lovely to see more "activity" type events over the vanilla "here's a talk" type events 
  • The talks I attended were average from my perspective.
  • Did not find it as useful as I thought it would be.

Suddenly there's a much richer picture that includes those who had a less enjoyable experience. At conferences without a survey form, the only negative feedback you receive may be the absence of positive feedback.

I consider survey feedback a loose indicator of what I can improve in my presentations. I don't listen to everything, and where there are clearly other factors at play I take the criticism with a grain of salt, but overall I find it a valuable source of information to help me refine my content and delivery.

Blogs

Finally, there are people who want to share the talk with others. I take blogs, and other post-event activities of this nature, as a form of feedback. I treat these as a loose indicator of lasting impact.

After WeTest, the following resources appeared that referenced my keynote:

The Big Picture

The volume and type of feedback I get varies greatly between presentations. It's taken time to establish my own interpretations of an influx of information that might otherwise feel overwhelming. I use the types of feedback I've described to determine:
  • The emotional response from my audience
  • How engaged people are in my content
  • My existing reputation in the community behind the conference
  • Whether I've improved my reputation in the community behind the conference
  • What I can improve in my presentations
  • Whether I've had a lasting impact

Sunday 18 October 2015

Changing the conversation about change in testing

Over the past couple of weeks I've been challenged to rethink how I advocate for change in testing. Here are four ideas, from three different conferences, that I hope will improve my powers of persuasion.

Commercial viability

At the Canterbury Software Summit in Christchurch, Shaun Maloney talked about how technical excellence does not guarantee commercial success. You may have a beautifully implemented piece of software, but if you can't sell it then it may all be for naught.

Shaun shared his method of determining the commercial viability of an idea using three questions: Is it busted? Can we fix it? Should we fix it?

If the answer to all three questions is 'yes' then he believes that there is merit in pursuing the idea. If, at any stage, the answer is 'no' then the idea is abandoned.

Shaun visualised this in a very kiwi way, using stockyard gates.

This made me wonder: when I advocate for change, how often do I fail at this first gate? If the people I'm talking to just don't think that the testing we do now is busted, perhaps they mentally kill the conversation before it even begins?

Moments of doubt, desire and dissatisfaction

Also at the Canterbury Software Summit, Andy Lark spoke about reimagining business by looking to address moments of doubt, desire and dissatisfaction as experienced by customers. He gave an example of how Uber succeeded in the taxi market by solving a moment of doubt: showing users the location of their taxi. [ref]

I think testing is ripe for reinvention. If talking about how things are broken isn't working, perhaps we'd have better success in focusing on the areas where people experience moments of doubt, desire or dissatisfaction?

Strategic priorities

At the Agile Testing Alliance Global Gathering in Bangalore, Renu Rajani shared some information from the World Quality Report 2015-2016. This report is compiled by CapGemini, HP and Sogeti. This year they interviewed 1,560 CIOs and IT and testing leaders from 32 countries.

The results of one particular question struck me. When asked "What is the highest QA and testing strategic priority?" on a scale of 1 to 7, with 7 being the most important, they responded:

Source: World Quality Report 2015-2016

For this group, detecting software defects before go-live was the fifth highest priority.

When I reflect on how I frame change to management, I often talk about how:

  • we will find more defects,
  • the defects we discover will be more diverse and of a higher severity,
  • testing will be faster, and 
  • we will reduce the cost of testing through a more pragmatic approach.

I have never spoken about corporate image, how we increase quality awareness among all disciplines, or how we improve end-user satisfaction.

My arguments have always been based on assumptions about what I believe to be important to my audience. This data from the World Quality Report has made me question whether my assumptions have been incorrect.

Sound bites

At WeTest Weekend Workshops in Auckland, John Lockhart gave an experience report titled "Incorporating traditional and CDT in an agile environment". During his talk I was fascinated by the way he summarised the different aspects of his test approach. To paraphrase based on my notes:

"ISTQB gives you the background and history of testing, along with testing techniques like boundary analysis, state transition diagrams, etc. CDT gives you the critical thinking side. Agile gives you the wider development methodology."

Would "the critical thinking side" be something close to your single sentence statement about context-driven testing? What would you say instead? John's casual remark made me realise that I may be diminishing the value of my ideas when I summarise them to others.

Putting the pieces together

I see an opportunity to create a more compelling narrative about change in testing.

I'm planning to stop arguing that testing is broken. Instead I'm going to start thinking about the moments of doubt, desire and dissatisfaction that exist in our test approach.

I'm planning to stop talking about bugs, time and money. Instead I'm going to start framing my reasoning in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction.

I'm planning to stop using impromptu sentences to summarise. Instead I'm going to start thinking about a sound bite that doesn't diminish or oversell.

Will you do the same?