A Context-driven Approach to Testing

Ten years have passed since my first testing stint. Projects have come and gone, but at least two things have remained constant during all these years: software development projects are becoming more and more demanding—in terms of implementing new technologies and meeting the expectations of stakeholders—and testing is still a great resource to ensure that expectations are met.

Looking back at all the projects I’ve been a part of has made me realize that for the testing purposes of any system, project, or product, experience matters a lot, but perspective might matter even more. Our perspective is driven by the stake we have in a given project, which in turn defines our approach.

A decade on, I still find myself wondering how my perspective has changed. A friend suggested that I revisit reading material from my days as a new tester to see if the lessons I draw from it are different from those I now feel are most important. That seemed like a good starting point.

Reading Assignment

One of the books that made a big impact on me at that time was Lessons Learned in Software Testing: A Context-Driven Approach, by Kaner, Bach, and Pettichord. In 11 chapters, it lays out fundamental testing concepts in relatively simple lessons. It's an easy read that I have recommended several times over the years to people who need to understand the thought process that goes into a testing approach. Even if you are not a direct participant in the testing process, you still benefit from understanding how that part of your organization functions (I'm assuming that you, dear reader, play a part in the development process in some capacity).

I picked this book to start with and thoroughly enjoyed the trip down memory lane. I also understood why my friend had recommended re-reading as a way to set a baseline for perspective changes: it is painfully easy to pinpoint shifts in opinion and understanding when cumulative knowledge gives you a more critical outlook.

So, what had changed for me? Almost everything—and that's not just my “easy answer,” it's reality. Below, I'm going to pull out the major points from the first half of the book to show how its essential concepts have changed for me over these ten years. The book's last four chapters go into detail about how the testing team and process are managed (there's even a chapter dedicated to managing a testing career). I won't cover those in this blog post because those topics deserve their own entry.

The role of the tester

This is the title of the first chapter, and it may be the concept that has the most radically different meaning to me at this point. The role is not the same in every organization, because it depends greatly on whose expectations control it. Sure, we can (and do) assign generic definitions for this role, but it will always be adapted to fit a given set of expectations. During my first reading of this chapter, I didn't appreciate enough the importance of negotiating the role—but this is now one of the first things that come to mind when I’m initiating a new endeavor. If we define clear boundaries, expectations, and goals, every role in the process will be better defined. So, in a nutshell, what is the role of the tester? Dreaded ambiguous answer: it depends (*ducks for cover*). But really, the responsibility and scope of action will vary depending on the context and the players involved. Don't typecast it. Instead, focus on defining your mission.

The mission

The mission will be determined by how well the expectations for the testing organization (not for individuals) are defined and negotiated. Is the priority to find the most important bugs fast? Certify that the product is compliant with a regulatory standard? Assure quality by minimizing risk of failure and maintenance costs? All of the above? The mission will be a mix of these and several other priorities, and to determine these we need to understand our clients.

Understanding clients

I wish I had given the proper respect to this sooner in my career. Testing is a service, not a product: you are often providing a service to several clients within the same organization, and each might have different needs that do not always align perfectly with the others'. Out of all the players who will benefit from your work, first identify who matters in your project and in which order—this will help with setting priorities. Examples of clients for a testing organization are end users, project managers, developers, marketing, support departments, and business management.

If you get clarity early on about who your client is, you can define tests and approaches for your strategy based on whose hat you will wear when approaching the project, and you will know where to direct your questions when they arise.

Question everything...but not all the time...and not always out loud

You will have questions. Questions are good. They help point your testing compass in the right direction. Sometimes a bit of research will answer some of your questions, but more often than not, you will have to direct them to some of the players in the organization. And herein lies one of the issues that no one prepares you for: some of these questions will bring to light things that were overlooked, and depending on how you ask, some people might get defensive. A tester's job is not to put people on the spot; doing so can lead down another very dangerous path, where your team tries to become Process Improvement Central, a role the testing organization might not be suited for.

I have seen this happen more times than I can recall, and during my first years, I was also guilty of doing it to some degree. Keep in mind that some questions are better asked in private conversations rather than status meetings, and you should always discern which questions are the awkward ones before you pipe up.

Never be the gatekeeper

I remember reading this particular lesson and thinking, "Yeah, who wants the responsibility of deciding if the product ships or not?" and brushing it off before moving on to the next one. But then, I also remember being drunk with power at one particular job where I was almost always the deciding vote at the go/no-go meetings. The thing is, if the product shipped buggy, no one remembered that there were two other votes for "go": it was almost a given that testing would be blamed for letting the bugs through and for deciding that the quality was good enough for release.

The lesson here is not to avoid the responsibility of release control, but rather to insist that it be shared among several stakeholders. Testing results should provide enough information to facilitate that decision in terms of the testing mission, and in turn, the testing mission should reflect priorities set by the other players on the team—your clients. Communicating all this makes it clear to everyone involved that the opinion of the testing organization comes from a place that takes into consideration everyone's objectives rather than its own. The testing effort does not take place in a silo.

Thinking like a tester (and testing based on your thought process)

This is one chapter where I nod in agreement all the way through. My original takeaway from it was the importance of developing critical thinking and coming to conclusions based on evidence and inference. That much remains unchanged, but this time I've also gathered that we have to pay attention to how we approach that thinking in realistic terms—i.e., how we should think versus how we actually do think.

Breaking up requirements and system functionality into testable pieces based on the available information is always the start of the process, but this can easily turn into a mechanical chore. The key to avoiding that is understanding why we chose certain techniques when we developed our first battery of tests: Were they better suited for the type of system that we have? Did a lack of resources prevent us from using a different technique? Can we borrow elements of one approach to test specific parts of the system? No two systems are the same, so the testing approach should not be either. There are techniques that can be repeated to save time, but the notion that they will always fit into the current process without any sort of customization is a fallacy.
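To make the idea of matching a technique to a testable piece more concrete, here is a small boundary-value sketch in Python. The field, its accepted range, and the function name are all invented for the example; the point is that the technique (testing each edge of a range and its neighbors) is chosen because it fits this kind of input validation, not because it fits every system:

```python
# Hypothetical requirement: an "age" field must accept values 18-120 inclusive.
def is_valid_age(age: int) -> bool:
    """Return True when the age falls in the accepted range."""
    return 18 <= age <= 120

# Boundary-value analysis: probe each edge of the range plus its neighbors,
# instead of mechanically sampling the whole input space.
boundary_cases = {
    17: False,   # just below the lower bound
    18: True,    # lower bound
    19: True,    # just above the lower bound
    119: True,   # just below the upper bound
    120: True,   # upper bound
    121: False,  # just above the upper bound
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"failed at {value}"
print("all boundary cases pass")
```

A different piece of the same system (say, a state machine or a report generator) would call for a different technique, which is exactly why reusing yesterday's battery of tests without customization is a fallacy.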

Intuition is a guide(line), not a map

I'm stealing this one straight from the book’s lesson "Intuition is a fine beginning, but a lousy conclusion." Trusting your gut feeling can be good because that sense almost always comes from experience, but just trusting that gut feeling without confirming or exploring alternatives is dangerously close to living in a confirmation-bias state, as we tend to remember only the times that our intuition was correct and shrug off the times when it didn't pan out. So the recommendation is to use that intuition as a starting point for exploring and dissecting the task at hand.

We are all biased to some degree. Bias can't be avoided but it can be managed. Understand the types of bias that come into your core thinking and be aware of when they are happening. Remember the advice to question everything? That includes yourself.

Fresh eyes

Tester fatigue is a real thing. As we deal with complex stuff, we tend to dull our senses in order to focus on one particular aspect to better comprehend it. This is a defense mechanism that the brain uses to filter the stimuli bombardment. The same thing happens when we are dealing with exploratory tasks: when everything is new, we notice a lot of details as we are being introduced to them, but as time goes on the brain tends to ignore what it senses as familiar and focus on what it perceives as new. If you are in charge of a particular part of a product, it might be greatly beneficial to switch tasks with someone else every so often and then come back and see how you perceive things that you thought would remain unchanged.

Bug advocacy

Remember a few paragraphs back when I wrote about how testing results provide information that will help the team make better decisions about the product? The basis for those test results is the bug reports.

I have worked with a lot of people in my time as a tester, and I have encountered some who are great at finding bugs and faults in systems but not so great at putting those findings into a properly written report. The problem is that their findings are often valid, but because they can't articulate why the issues are important or how they affect the overall quality of the product, those findings tend to be ignored or relegated to the backlog without a second thought.

Bug reports are important because they provide information that benefits everyone in the organization: for developers, the report gives them what they need to reproduce and isolate an issue, making it easier to resolve; for the business side, it brings to their attention which requirements are being met and how; and for project managers, it helps identify risk areas in the current phase of the project.

At some point I was focused on the sheer number of bug reports rather than the content of what I was reporting. I'm certain that a lot of junior testers find themselves in that situation, and most of the time it's not their fault, which brings me to my next point: if you manage a testing team, do not use bug report counts as a metric for individual tester performance.

You are what you write, but bug reports are not a measure of a tester's performance

If you reward quantity over quality, that's exactly what you are going to get. In one of my first jobs in this industry, my team was tasked with filing a weekly report listing the number of bug reports we had written and their identification numbers in the system. At first, we didn't think much of it, but six months into the project—during mid-year reviews—we became aware that the contents of the report were being used as a metric to compare "how we were doing and contributing." This resulted in a 200% increase in bug reports for the second half of the year, as everyone tried to report as many things as they could. This might sound like a good thing, but when you go from a few reports with summaries like "unvalidated form field produces buffer overflow and crashes product X" to ten reports like "color on banner is slightly off from printed comps" or "image is skewed two pixels to the left," then the increase in the number of reports might not seem that beneficial.

Pick your battles

Not every bug is destined to be resolved right away. Some of them might be relegated to backlog limbo and that's OK. Testers, repeat after me: "Not all my bugs are going to be resolved, and that's OK." Repeat it again. And again.

We tend to think that what we find is groundbreaking, that if not for us and our heroic efforts, the project would fail miserably. It's good to have that confidence and high regard for the importance of your work when you are starting out, but don't start thinking that the priorities are set by the testing team—you might not be aware of things happening in other parts of the organization that will make what you found (and deemed critical) rather innocuous in a future iteration of the product.

Your job is to provide information that will help the team make better-informed decisions based on all available data. If you feel that something should be a higher priority on the backlog, there are ways to highlight the impact and risk potential for the people making the calls about how to allocate resources for the project without causing friction between departments.

Flies, honey, and vinegar

All other players on the team and in the organization are your clients. You provide a service for them, and at the risk of being redundant on this point, remember that you are working toward the same goal that they are. Your mission is specific to your testing organization, but I have never encountered a case where that mission differs from the project's main goal, which is to ship a quality product.

A mantra that has followed me around for some time now is "Developers are your friends. Be a friend to them." Sure, you can always encounter that exception, the developer who feels attacked when a bug is reported against a part that he or she worked on. It's natural because we tend to perceive what we do as part of us. It's easier to deal with these situations when there is a healthy line of communication between teams and the individuals on those teams. So, don't engage in "devs vs. testers" politics—that dynamic is one of the worst work environment killers ever.

What you can do is help the other teams by testing earlier and getting involved in defining tests as soon as there are requirements defined (yes, even before any code has been written). If you share these early testing scenario definitions with developers in particular, you can help them take a test-driven development, or TDD, approach to their coding. If you let them know what and how you are going to test, they will have that in mind when coding to ensure that those test scenarios pass your scrutiny. In addition to the development team, share the test scenarios with the business side so that they can assess whether you are omitting certain requirements from your testing or see that they overlooked putting something in the requirement definition (it happens).
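As a sketch of what sharing a test scenario before any code exists might look like, here is a requirement turned into executable checks first, with the implementation written afterward to satisfy them. The requirement, the names, and the numbers are all assumptions made up for this illustration:

```python
# Hypothetical requirement, shared with developers before implementation:
# "A promo code discounts the order total by 10%, and the total never
#  goes below zero."

def apply_discount(total: float) -> float:
    """Implementation written after the scenarios, to make them pass."""
    return max(total * 0.9, 0.0)

# Test scenarios derived straight from the requirement wording.
def test_ten_percent_off():
    assert abs(apply_discount(100.0) - 90.0) < 1e-9

def test_never_negative():
    assert apply_discount(0.0) >= 0.0

# Run the scenarios directly; in practice they would live in a shared
# test suite that developers can run before and during implementation.
test_ten_percent_off()
test_never_negative()
print("requirement scenarios pass")
```

Handing the two test functions over before `apply_discount` exists is what lets developers code with your scrutiny in mind, and showing the same scenarios to the business side surfaces requirements that were overlooked.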

Think of your job as being the liaison between the business and the technical side: if you do a good job of translating those business requirements into technical test requirements, you will help developers code better, accounting for both business needs and the technical specifications.

Did this second read help me in any way? Absolutely! I'd like to think that I'm wiser and not just older, and now I have a better understanding of what the authors intended in writing this book in the first place. While I do not agree with them 100 percent, I also know that is due to my own experience, which is different from what they had experienced when putting the lessons together. My context and my perspective have been molded by my experience. I still recommend this book to anyone seeking testing-specific reading material, and that will remain unchanged for some years to come.

Project Ricochet is a full-service digital agency specializing in Open Source.


A bit about Manuel:

Manuel has a Bachelor's degree in Information Systems and brings 8+ years of QA and Release Management expertise to Ricochet. He has worked for companies large and small, from IBM to startups with a penchant for quality. In his spare time, he enjoys personal fitness and helping with dog & cat adoption rallies for local nonprofits and animal shelters. He can also be found at various local meetups for Django, Python, Ruby, and JS.