Wednesday, December 16, 2009

The Seven Basic Principles of the Context-Driven School

While reading an interview with James Bach, I came across a useful link. It can be very helpful for folks who are still trying to understand the thought process of the context-driven school.
You can read it directly at http://www.context-driven-testing.com/
For easy reference, I have duplicated the whole write-up below.
----------------------------------------------------------------------

The Seven Basic Principles of the Context-Driven School

1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project's context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn't solved, the product doesn't work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Illustrations of the Principles in Action:
· Testing groups exist to provide testing-related services. They do not run the development project; they serve the project.
· Testing is done on behalf of stakeholders in the service of developing, qualifying, debugging, investigating, or selling a product. Entirely different testing strategies could be appropriate for these different objectives.
· It is entirely proper for different test groups to have different missions. A core practice in the service of one mission might be irrelevant or counter-productive in the service of another.
· Metrics that are not valid are dangerous.
· The essential value of any test case lies in its ability to provide information (i.e. to reduce uncertainty).
· All oracles are fallible. Even if the product appears to pass your test, it might well have failed it in ways that you (or the automated test program) were not monitoring. (A short code sketch of this appears after this list.)
· Automated testing is not automatic manual testing: it's nonsensical to talk about automated tests as if they were automated human testing.
· Different types of defects will be revealed by different types of tests--tests should become more challenging or should focus on different risks as the program becomes more stable.
· Test artifacts are worthwhile to the degree that they satisfy their stakeholders' relevant requirements.
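To make the oracle point concrete, here is a minimal Python sketch. The apply_discount function, its audit log, and the bug are all hypothetical, invented purely for illustration; the point is only that an automated check can pass while the product fails along a dimension the check never monitors.

```python
# Hypothetical example of a fallible oracle: the test below checks only
# the return value, so it passes even though the product misbehaves.

def apply_discount(price, percent, audit_log):
    """Return the discounted price and record the change for auditing."""
    discounted = round(price * (1 - percent / 100), 2)
    # Bug: the audit entry records the original price in both fields,
    # so the audit trail is silently wrong.
    audit_log.append({"before": price, "after": price})
    return discounted

def test_apply_discount():
    log = []
    result = apply_discount(100.0, 25, log)
    assert result == 75.0  # the only thing this oracle ever looks at

test_apply_discount()
print("Test passed, but no oracle inspected the audit log.")
```

The same sketch bears on the automation point above: the script repeats one human-designed check quickly and reliably, but it does not replicate a human tester's chance of noticing the wrong audit entry.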
An Example:
Consider two projects:
1. One is developing the control software for an airplane. What "correct behavior" means is a highly technical and mathematical subject. FAA regulations must be followed. Anything you do -- or don't do -- would be evidence in a lawsuit 20 years from now. The development staff share an engineering culture that values caution, precision, repeatability, and double-checking everyone's work.
2. Another project is developing a word processor that is to be used over the web. "Correct behavior" is whatever woos a vast and inarticulate audience of Microsoft Word users over to your software. There are no regulatory requirements that matter (other than those governing public stock offerings). Time to market matters -- 20 months from now, it will all be over, for good or ill. The development staff decidedly do not come from an engineering culture, and attempts to talk in a way normal for the first culture will cause them to refer to you as "damage to be routed around".
· Testing practices appropriate to the first project will fail in the second.
· Practices appropriate to the second project would be criminally negligent in the first.

Commentary
In the years since we first published the description, above, some people have found our definition too complex and have tried to simplify it, attempting to equate the approach with Agile development or Agile testing, or with the exploratory style of software testing. Here’s another crack at a definition:
Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
Contrasting context-driven with context-aware testing.
Many testers think of their approach as context-driven because they take contextual factors into account as they do their work. Here are a few examples that might illustrate the differences between context-driven and context-aware:
Context-driven testers reject the notion of best practices, because that notion presents certain practices as appropriate independent of context. Of course it is widely accepted that any “best practice” might be inapplicable under some circumstances. However, when someone looks to best practices first and to project-specific factors second, that may be context-aware, but not context-driven.
Similarly, some people create standards, like IEEE Standard 829 for test documentation, because they think that it is useful to have a standard to lay out what is generally the right thing to do. This is not unusual, nor disreputable, but it is not context-driven. Standard 829 starts with a vision of good documentation and encourages the tester to modify what is created based on the needs of the stakeholders. Context-driven testing starts with the requirements of the stakeholders and the practical constraints and opportunities of the project. To the context-driven tester, the standard provides implementation-level suggestions rather than prescriptions.
Contrasting context-driven with context-oblivious, context-specific, and context-imperial testing.
To say “context-driven” is to distinguish our approach to testing from context-oblivious, context-specific, or context-imperial approaches:
Context-oblivious testing is done without a thought for the match between testing practices and testing problems. This is common among testers who are just learning the craft, or are merely copying what they’ve seen other testers do.
Context-specific testing applies an approach that is optimized for a specific setting or problem, without room for adjustment in the event that the context changes. This is common in organizations with longstanding projects and teams, wherein the testers may not have worked in more than one organization. For example, one test group might develop expertise with military software, another group with games. In the specific situation, a context-specific tester and a context-driven tester might test their software in exactly the same way. However, the context-specific tester knows only how to work within her or his one development context (MilSpec or games) and is not aware of the degree to which skilled testing will be different across contexts.
Context-imperial testing insists on changing the project or the business in order to fit the testers’ own standardized concept of “best” or “professional” practice, instead of designing or adapting practices to fit the project. The context-imperial approach is common among consultants who know testing primarily from reading books, or whose practical experience was context-specific, or who are trying to appeal to a market that believes its approach to development is the one true way.
Contrasting context-driven with agile testing.
Agile development models advocate for a customer-responsive, waste-minimizing, humanistic approach to software development, and so does context-driven testing. However, context-driven testing is not inherently part of the Agile development movement.
For example, Agile development generally advocates for extensive use of unit tests. Context-driven testers will modify how they test if they know that unit testing was done well. Many (probably most) context-driven testers will recommend unit testing as a way to make later system testing much more efficient. However, if the development team doesn’t create reusable test suites, the context-driven tester will suggest testing approaches that don’t expect or rely on successful unit tests.
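As a concrete illustration of what a reusable unit test suite means here, the following minimal sketch uses Python's standard unittest module; parse_version and its tests are hypothetical, invented for this example.

```python
import unittest

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = text.split(".")
    return int(major), int(minor), int(patch)

class ParseVersionTests(unittest.TestCase):
    # Checked-in, rerunnable tests like these pin down low-level behavior.
    def test_parses_simple_version(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_rejects_malformed_input(self):
        # A malformed string fails to unpack into three parts.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
```

When a suite like this exists and is rerun continuously, a system tester can trust the low-level behavior and spend effort on risks the unit level cannot reach; when it does not, the context-driven tester plans around its absence rather than demanding it.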
Similarly, Agile developers often recommend an evolutionary or spiral life-cycle model with minimal documentation that is developed as needed. Many (perhaps most) context-driven testers would be particularly comfortable working within this life cycle, but it is no less context-driven to create extensively documented tests within a waterfall project that creates big documentation up front.
Ultimately, context-driven testing is about doing the best we can with what we get. There might not be such a thing as Agile Testing (in the sense used by the agile development community) in the absence of effective unit testing, but there can certainly be context-driven testing.
Contrasting context-driven with standards-driven testing.
Some testers advocate favored life-cycle models, favored organizational models, or favored artifacts. Consider, for example, the V-model, the mutually suspicious separation between programming and testing groups, and the demand that all code delivered to testers come with detailed specifications.
Context-driven testing has no room for this advocacy. Testers get what they get, and skilled context-driven testers must know how to cope with what comes their way. Of course, we can and should explain tradeoffs to people, make it clear what makes us more efficient and more effective, but ultimately, we see testing as a service to stakeholders who make the broader project management decisions.
Yes, of course, some demands are unreasonable and we should refuse them, such as demands that the tester falsify records, make false claims about the product or the testing, or work unreasonable hours. But this doesn't mean that every stakeholder request that we don't like is unreasonable.
And yes, of course, some demands are absurd because they call for the impossible, such as assessing conformance of a product with contractually-specified characteristics without access to the contract or its specifications. But this doesn’t mean that every stakeholder request that we don’t like is absurd, or impossible.
And yes, of course, if our task is to assess conformance of the product with its specification, we need a specification. But that doesn’t mean we always need specifications or that it is always appropriate (or even usually appropriate) for us to insist on receiving them.
There are always constraints. Some of them are practical, others ethical. But within those constraints, we start from the project’s needs, not from our process preferences.
Context-driven techniques?
Context-driven testing is an approach, not a technique. Our task is to do the best testing we can under the circumstances: the more techniques we know, the more options we have available when considering how to cope with a new situation.
The set of techniques (or, better put, the body of knowledge) that we need is not just a testing set. In this, we follow in Jerry Weinberg's footsteps: start to finish, we see a software development project as a creative, complex human activity. To know how to serve the project well, we have to understand the project, its stakeholders, and their interests. Many of our core skills come from psychology, economics, ethnography, and the other social sciences.
Closing notes
Reasonable people can advocate for standards-driven testing. Or for the idea that testing activities should be routinized to the extent that they can be delegated to less expensive and less skilled people who apply the routine directions. Or for the idea that the biggest return on investment today lies in improving those testing practices intimately tied to writing the code. These are all widely espoused views. However, even if their proponents emphasize the need to tailor these views to the specific situation, these views reflect fundamentally different starting points from context-driven testing.
Cem Kaner, J.D., Ph.D.
James Bach
