Wednesday, December 16, 2009

The Seven Basic Principles of the Context-Driven School

While reading an interview with James, I came across a quick link.
It can be very helpful for folks who are still trying to understand the thought process of the context-driven school.
You can read it directly at the source; for easy reference, I have duplicated the whole write-up below.

The Seven Basic Principles of the Context-Driven School

1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project's context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn't solved, the product doesn't work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Illustrations of the Principles in Action:
· Testing groups exist to provide testing-related services. They do not run the development project; they serve the project.
· Testing is done on behalf of stakeholders in the service of developing, qualifying, debugging, investigating, or selling a product. Entirely different testing strategies could be appropriate for these different objectives.
· It is entirely proper for different test groups to have different missions. A core practice in the service of one mission might be irrelevant or counter-productive in the service of another.
· Metrics that are not valid are dangerous.
· The essential value of any test case lies in its ability to provide information (i.e. to reduce uncertainty).
· All oracles are fallible. Even if the product appears to pass your test, it might well have failed it in ways that you (or the automated test program) were not monitoring.
· Automated testing is not automatic manual testing: it's nonsensical to talk about automated tests as if they were automated human testing.
· Different types of defects will be revealed by different types of tests--tests should become more challenging or should focus on different risks as the program becomes more stable.
· Test artifacts are worthwhile to the degree that they satisfy their stakeholders' relevant requirements.
An Example:
Consider two projects:
1. One is developing the control software for an airplane. What "correct behavior" means is a highly technical and mathematical subject. FAA regulations must be followed. Anything you do -- or don't do -- would be evidence in a lawsuit 20 years from now. The development staff share an engineering culture that values caution, precision, repeatability, and double-checking everyone's work.
2. Another project is developing a word processor that is to be used over the web. "Correct behavior" is whatever woos a vast and inarticulate audience of Microsoft Word users over to your software. There are no regulatory requirements that matter (other than those governing public stock offerings). Time to market matters -- 20 months from now, it will all be over, for good or ill. The development staff decidedly do not come from an engineering culture, and attempts to talk in a way normal for the first culture will cause them to refer to you as "damage to be routed around".
· Testing practices appropriate to the first project will fail in the second.
· Practices appropriate to the second project would be criminally negligent in the first.

In the years since we first published the description, above, some people have found our definition too complex and have tried to simplify it, attempting to equate the approach with Agile development or Agile testing, or with the exploratory style of software testing. Here’s another crack at a definition:
Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
Contrasting context-driven with context-aware testing.
Many testers think of their approach as context-driven because they take contextual factors into account as they do their work. Here are a few examples that might illustrate the differences between context-driven and context-aware:
Context-driven testers reject the notion of best practices, because they present certain practices as appropriate independent of context. Of course it is widely accepted that any “best practice” might be inapplicable under some circumstances. However, when someone looks to best practices first and to project-specific factors second, that may be context-aware, but not context-driven.
Similarly, some people create standards, like IEEE Standard 829 for test documentation, because they think that it is useful to have a standard to lay out what is generally the right thing to do. This is not unusual, nor disreputable, but it is not context-driven. Standard 829 starts with a vision of good documentation and encourages the tester to modify what is created based on the needs of the stakeholders. Context-driven testing starts with the requirements of the stakeholders and the practical constraints and opportunities of the project. To the context-driven tester, the standard provides implementation-level suggestions rather than prescriptions.
Contrasting context-driven with context-oblivious, context-specific, and context-imperial testing.
To say “context-driven” is to distinguish our approach to testing from context-oblivious, context-specific, or context-imperial approaches:
Context-oblivious testing is done without a thought for the match between testing practices and testing problems. This is common among testers who are just learning the craft, or are merely copying what they’ve seen other testers do.
Context-specific testing applies an approach that is optimized for a specific setting or problem, without room for adjustment in the event that the context changes. This is common in organizations with longstanding projects and teams, wherein the testers may not have worked in more than one organization. For example, one test group might develop expertise with military software, another group with games. In the specific situation, a context-specific tester and a context-driven tester might test their software in exactly the same way. However, the context-specific tester knows only how to work within her or his one development context (MilSpec, or games), and s/he is not aware of the degree to which skilled testing will be different across contexts.
Context-imperial testing insists on changing the project or the business in order to fit the testers’ own standardized concept of “best” or “professional” practice, instead of designing or adapting practices to fit the project. The context-imperial approach is common among consultants who know testing primarily from reading books, or whose practical experience was context-specific, or who are trying to appeal to a market that believes its approach to development is the one true way.
Contrasting context-driven with agile testing.
Agile development models advocate for a customer-responsive, waste-minimizing, humanistic approach to software development and so does context-driven testing. However, context-driven testing is not inherently part of the Agile development movement.
For example, Agile development generally advocates for extensive use of unit tests. Context-driven testers will modify how they test if they know that unit testing was done well. Many (probably most) context-driven testers will recommend unit testing as a way to make later system testing much more efficient. However, if the development team doesn’t create reusable test suites, the context-driven tester will suggest testing approaches that don’t expect or rely on successful unit tests.
Similarly, Agile developers often recommend an evolutionary or spiral life cycle model with minimal documentation that is developed as needed. Many (perhaps most) context-driven testers would be particularly comfortable working within this life cycle, but it is no less context-driven to create extensively-documented tests within a waterfall project that creates big documentation up front.
Ultimately, context-driven testing is about doing the best we can with what we get. There might not be such a thing as Agile Testing (in the sense used by the agile development community) in the absence of effective unit testing, but there can certainly be context-driven testing.
Contrasting context-driven with standards-driven testing.
Some testers advocate favored life-cycle models, favored organizational models, or favored artifacts. Consider, for example, the V-model, the mutually suspicious separation between programming and testing groups, and the demand that all code delivered to testers come with detailed specifications.
Context-driven testing has no room for this advocacy. Testers get what they get, and skilled context-driven testers must know how to cope with what comes their way. Of course, we can and should explain tradeoffs to people, make it clear what makes us more efficient and more effective, but ultimately, we see testing as a service to stakeholders who make the broader project management decisions.
Yes, of course, some demands are unreasonable and we should refuse them, such as demands that the tester falsify records, make false claims about the product or the testing, or work unreasonable hours. But this doesn’t mean that every stakeholder request is unreasonable, even some that we don’t like.
And yes, of course, some demands are absurd because they call for the impossible, such as assessing conformance of a product with contractually-specified characteristics without access to the contract or its specifications. But this doesn’t mean that every stakeholder request that we don’t like is absurd, or impossible.
And yes, of course, if our task is to assess conformance of the product with its specification, we need a specification. But that doesn’t mean we always need specifications or that it is always appropriate (or even usually appropriate) for us to insist on receiving them.
There are always constraints. Some of them are practical, others ethical. But within those constraints, we start from the project’s needs, not from our process preferences.
Context-driven techniques?
Context-driven testing is an approach, not a technique. Our task is to do the best testing we can under the circumstances–the more techniques we know, the more options we have available when considering how to cope with a new situation.
The set of techniques–or better put, the body of knowledge–that we need is not just a testing set. In this, we follow in Jerry Weinberg’s footsteps: Start to finish, we see a software development project as a creative, complex human activity. To know how to serve the project well, we have to understand the project, its stakeholders, and their interests. Many of our core skills come from psychology, economics, ethnography, and the other social sciences.
Closing notes
Reasonable people can advocate for standards-driven testing. Or for the idea that testing activities should be routinized to the extent that they can be delegated to less expensive and less skilled people who apply the routine directions. Or for the idea that the biggest return on investment today lies in improving those testing practices intimately tied to writing the code. These are all widely espoused views. However, even if their proponents emphasize the need to tailor these views to the specific situation, these views reflect fundamentally different starting points from context-driven testing.
Cem Kaner, J.D., Ph.D., and James Bach

Monday, December 14, 2009

Good Read on Scenario Testing

Scenario testing is one of the important activities most of us engage in during our day-to-day work as software testers. It is an aspect that both functional and non-functional testers should be considerate of. An important point emphasised by Cem: "Scenario testers provide an early warning system for requirements problems that would otherwise haunt the project later." He also says, "Scenario testing works best for complex transactions or events, for studying end-to-end delivery of the benefits of the program, for exploring how the program will work in the hands of an experienced user, and for developing more persuasive variations of bugs found using other approaches."

Some of the key factors highlighted are:

  • balance of manual test cases
  • importance of identifying a bug and fixing it
  • exploring use of program at various user levels
  • importance of signed off requirement doc
  • stakeholder thought perspective
  • identification of key factors of scenario testing

Read more details in the duplicated article below. The beautifully worded document explains it all simply, and it also suggests how these simple approaches can be seamlessly built into our daily testing work.


An Introduction to Scenario Testing
Cem Kaner, Florida Tech, June 2003

A slightly less complete version of this was published in Software Testing & Quality Engineering (STQE) magazine, October, 2003, with the unfortunate title, "Cem Kaner on Scenario Testing: The Power of 'What If…' and Nine Ways to Fuel Your Imagination."
This research was partially supported by NSF Grant EIA-0113539 ITR/SY+PE: "Improving the Education of Software Testers." Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).
Once upon a time, a software company developed a desktop publishing program for the consumer market. During development, the testers found a bug: in a small zone near the upper right corner, you couldn’t paste a graphic. They called this the “postage stamp bug.” The programmers decided this wasn’t very important. You could work around it by resizing the graphic or placing it a bit differently. The code was fragile, so they decided not to fix it.
The testers felt the postage stamp bug should be fixed. To strengthen their case, they found a mother who helped her children lay out their Girl Scout newsletter. She wanted to format the newsletter exactly like the one she mimeographed, but she could not, because the newsletter’s logo was positioned at the postage stamp. The company still wouldn’t fix the bug. The marketing manager said the customer only had to change the document slightly, and the programmers insisted the risk was too high.
Being a tenacious bunch, these testers didn’t give up. The marketing manager often bragged that his program could do anything PageMaker could do, so the testers dug through PageMaker marketing materials and found a brochure with a graphic you-know-where. This bug report said the postage stamp bug made it impossible to duplicate PageMaker’s advertisement. That got the marketer’s attention. A week later, the bug was fixed.
This story (loosely based on real events) is a classic illustration of a scenario test.
A scenario is a hypothetical story, used to help a person think through a complex problem or system. "Scenarios" gained popularity in military planning in the United States in the 1950's. Scenario-based planning gained wide commercial popularity after a spectacular success at Royal Dutch/Shell in the early 1970's. (For some of the details, read Scenarios: The Art of Strategic Conversation by Kees van der Heijden, Royal Dutch/Shell’s former head of scenario planning.)
A scenario test is a test based on a scenario.
I think the ideal scenario test has several characteristics:
§ The test is based on a story about how the program is used, including information about the motivations of the people involved.
§ The story is motivating. A stakeholder with influence would push to fix a program that failed this test. (Anyone affected by a program is a stakeholder. A person who can influence development decisions is a stakeholder with influence.)
§ The story is credible. It not only could happen in the real world; stakeholders would believe that something like it probably will happen.
§ The story involves a complex use of the program or a complex environment or a complex set of data.
§ The test results are easy to evaluate. This is valuable for all tests, but is especially important for scenarios because they are complex.
The first postage-stamp report came from a typical feature test. Everyone agreed there was a bug, but it didn’t capture the imagination of any influential stakeholders.
The second report told a credible story about a genuine member of the target market, but that customer’s inconvenience wasn’t motivating enough to convince the marketing manager to override the programmers’ concerns.
The third report told a different story that limited the marketing manager’s sales claims. That hit the marketing manager where it hurt. He insisted the bug be fixed.
Why Use Scenario Tests?
The postage stamp bug illustrated one application of scenario testing: Make a bug report more motivating.
There are several other applications, including these:
§ Learn the product
§ Connect testing to documented requirements
§ Expose failures to deliver desired benefits
§ Explore expert use of the program
§ Bring requirements-related issues to the surface, which might involve reopening old requirements discussions (with new data) or surfacing not-yet-identified requirements.
Early in testing, use scenarios to learn the product. I used to believe that an excellent way to teach testers about a product was to have them work through the manual keystroke by keystroke. For years, I did this myself and required my staff to do it. I was repeatedly confused and frustrated that I didn’t learn much this way and annoyed with staff who treated the task as low value. Colleagues (James Bach, for example) have also told me they’ve been surprised that testing the product against the manual hasn’t taught them much. John Carroll tackled this issue in his book, The Nurnberg Funnel: Designing Minimalist Instruction for Practical Computer Skill. People don’t learn well by following checklists or material that is organized for them. They learn by doing tasks that require them to investigate the product for themselves. (Another particularly useful way to teach testers the product while developing early scenarios is to pair a subject matter expert with an experienced tester and have them investigate together.)
Scenarios are also useful to connect to documented software requirements, especially requirements modeled with use cases. Within the Rational Unified Process, a scenario is an instantiation of a use case (take a specific path through the model, assigning specific values to each variable). More complex tests are built up by designing a test that runs through a series of use cases. Ross Collard described use case scenarios in “Developing test cases from use cases” (STQE, July, 1999).
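The idea of a scenario as an instantiation of a use case (a specific path through the model, with specific values assigned to each variable) can be sketched in code. The ATM use case, its paths, and the variable names below are hypothetical illustrations, not taken from Collard’s article:

```python
# A use case is a parameterized path through the system; a scenario test
# instantiates it by choosing one concrete path and concrete values.
# Hypothetical "withdraw cash" use case for an ATM, with two paths.

USE_CASE = {
    "main": ["insert_card", "enter_pin", "choose_amount", "dispense_cash"],
    "bad_pin": ["insert_card", "enter_pin", "reject_pin", "eject_card"],
}

def instantiate(path_name, **values):
    """Turn a use-case path plus concrete values into one scenario test."""
    steps = USE_CASE[path_name]
    return [(step, values) for step in steps]

# One scenario: the main path, with a specific value for every variable.
scenario = instantiate("main", pin="4321", amount=60)
for step, values in scenario:
    print(step, values)
```

Each distinct choice of path and values yields a different scenario test from the same use-case model.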
You can use scenarios to expose failures to deliver desired benefits whether or not your company creates use cases or other requirements documentation. The scenario is a story about someone trying to accomplish something with the product under test. In our example scenario, the user tried to create a newsletter that matched her mimeographed newsletter. The ability to create a newsletter that looks the way you want is a key benefit of a desktop publishing program. The ability to place a graphic on the page is a single feature you can combine with other features to obtain the benefit you want. A scenario test provides an end-to-end check on a benefit the program is supposed to deliver. Tests of individual features and mechanical combination tests of related features or their input variables (using such techniques as combinatorial testing or orthogonal arrays) are not designed to provide this kind of check.
Scenarios are also useful for exploring expert use of a program. As Larry Constantine and Lucy Lockwood discuss in their book, Software for Use, people use the program differently as they gain experience with it. Initial reactions to the program are important, but so is the stability of the program in the hands of the expert user. You may have months to test a moderately complex program. This time provides opportunity to develop expertise and simulations of expert use. During this period, one or more testers can develop full-blown applications of the software under test. For example, testers of a database manager might build a database or two. Over the months, they will add data, generate reports, fix problems, gaining expertise themselves and pushing the database to handle ever more sophisticated tasks. Along the way, especially if you staff this work in a way that combines subject matter expertise and testing skill, these testers will find credible, serious problems that would have been hard to find (hard to imagine the tests to search for them) any other reasonable way.
Scenarios are especially interesting for surfacing requirements-related controversies. Even if there is a signed-off requirements document, this reflects the agreements that project stakeholders have reached. But there are also ongoing disagreements. As Tom DeMarco and Tim Lister point out, ambiguities in requirements documents are often not accidental; they are a way of papering over disagreements (“Both Sides Always Lose: Litigation of Software-Intensive Contracts”, Cutter IT Journal, Volume XI, No. 4; April 1998).
A project’s requirements can also change dramatically for reasons that are difficult to control early in the project:
§ Key people on the project come and go. Newcomers bring new views.
§ Stakeholders’ levels of influence change over time.
§ Some stakeholders don't grasp the implications of a product until they use it, and they won’t (or can’t) use it until it’s far enough developed to be useful. This is not unreasonable—in a company that makes and sells products, relatively few employees are chosen for their ability as designers or abstract thinkers.
§ Some people whose opinion will become important aren’t even invited to early analysis and design meetings. For example, to protect trade secrets, some resellers or key customers might be kept in the dark until late in the project.
§ Finally, market conditions change, especially on a long project. Competitors bring out new products. So do makers of products that are to be interoperable with the product under development, and makers of products (I/O devices, operating system, etc.) that form the technical platform and environment for the product.
A tester who suspects that a particular stakeholder would be unhappy with some aspect of the program creates a scenario test and shows the results to that stakeholder. By creating detailed examples of how the program works, or doesn’t work, the scenario tester forces issue after issue. As a project manager, I’ve seen this done on my projects and been frustrated and annoyed by it. Issues that I thought were settled were reopened at inconvenient times, sometimes resulting in unexpected late design changes. I had to remind myself that the testers didn’t create these issues. Genuine disagreements will have their effects. In-house stakeholders (such as salespeople or help desk staff) might support the product unenthusiastically; customers might be less willing to pay for it; end users might be less willing to adopt it. Scenario testers provide an early warning system for requirements problems that would otherwise haunt the project later.
Characteristics of Good Scenarios
A scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.
These aren’t the only good characteristics a test can have. I describe several test techniques and their strengths in “What IS a Good Test Case?” Another important characteristic is power: One test is more powerful than another if it’s more likely to expose a bug. I’ll have more to say about power later. For now, let’s consider the criteria that I describe as the strengths of scenario tests.
Writing a scenario involves writing a story. That’s an art. I don’t know how to teach you to be a good storyteller. What I can do is suggest some things that might be useful to include in your stories and some ways to gather and develop the ideas and information that you’ll include.
A scenario test is motivating if a stakeholder with influence wants the program to pass the test. A dry recital of steps to replicate a problem doesn’t provide information that stirs emotions in people. To make the story more motivating, tell the reader why it is important, why the user is doing what she’s doing, what she wants, and what are the consequences of failure to her. This type of information is normally abstracted out of a use case (see Alistair Cockburn’s excellent book, Writing Effective Use Cases, p. 18 and John Carroll’s discussion of the human issues missing in use cases, in Making Use: Scenario-Based Design of Human-Computer Interaction, p. 236-37.) Along with impact on the user, a highly motivating bug report might consider the impact of failure on the user’s business or on your company (the software developer). For example, a bug that only modestly impacts the user but causes them to flood your company with phone calls would probably be considered serious. A scenario that brings out such effects would be influential.
A scenario is credible if a stakeholder with influence believes it will probably happen. Sometimes you can establish credibility simply by referring to a requirements specification. In many projects, though, you won’t have these specs or they won’t cover your situation. Each approach discussed below is useful for creating credible tests.
A complex story involves many features. You can create simplistic stories that involve only one feature, but why bother? Other techniques, such as domain testing, are easy to apply to single features and are more focused on developing power in these simple situations. The strength of the scenario is that it can help you discover problems in the relationships among the features.
This brings us to power. A technique (scenario testing) focused on developing credible, motivating tests is not as likely to bring quickly to mind the extreme cases that power-focused techniques (such as stress, risk-based, and domain testing) are so good for. They are the straightest lines to failures, but the failures they find are often dismissed as unrealistic, too extreme to be of interest. One way to increase a scenario’s power is to exaggerate slightly. When someone in your story does something that sets a variable’s value, make that value a bit more extreme. Make sequences of events more complicated; add a few more people or documents. Hans Buwalda is a master of this. He calls these types of scenario tests “soap operas” (see his article “Soap Opera Testing”).
The final characteristic that I describe for scenario tests is ease of evaluation—that is, it should be easy to tell whether the program passed or failed. Of course, every test result should be easy to evaluate. However, the more complex the test, the more likely that the tester will accept a plausible-looking result as correct. Glen Myers discussed this in his classic, The Art of Software Testing, and I’ve seen other expensive examples of bugs exposed by a test but not recognized by the tester.
Twelve Ways to Create Good Scenarios
Write life histories for objects in the system.
List possible users, analyze their interests and objectives.
Consider disfavored users: how do they want to abuse your system?
List “system events.” How does the system handle them?
List “special events.” What accommodations does the system make for these?
List benefits and create end-to-end tasks to check them.
Interview users about famous challenges and failures of the old system.
Work alongside users to see how they work and what they do.
Read about what systems like this are supposed to do.
Study complaints about the predecessor to this system or its competitors.
Create a mock business. Treat it as real and process its data.
Try converting real-life data from a competing or predecessor application.
Designing scenario tests is much like doing a requirements analysis, but it is not requirements analysis. The two rely on similar information but use it differently.
§ The requirements analyst tries to foster agreement about the system to be built. The tester exploits disagreements to predict problems with the system.
§ The tester doesn’t have to reach conclusions or make recommendations about how the product should work. Her task is to expose credible concerns to the stakeholders.
§ The tester doesn’t have to make the product design tradeoffs. She exposes the consequences of those tradeoffs, especially unanticipated or more serious consequences than expected.
§ The tester doesn’t have to respect prior agreements. (Caution: testers who belabor the wrong issues lose credibility.)
§ The scenario tester’s work need not be exhaustive, just useful.
Because she has a different perspective, the scenario tester will often do her own product and marketing research while she tests, on top of or independently of research done by Marketing. Here are some useful ways to guide your research. It might seem that you need to know a lot about the system to use these and, yes, the more you know, the more you can do. However, even if you’re new to the system, paying attention to a few of these as you learn the system can help you design interesting scenarios.
1. Write life histories for objects in the system.
Imagine a program that manages life insurance policies. Someone applies for a policy. Is he insurable? Is he applying for himself or a policy on his wife, child, friend, competitor? Who is he allowed to insure? Why? Suppose you issue the policy. In the future he might pay late, borrow against the policy, change the beneficiary, threaten to (but not actually) cancel it, appear to (but not) die—lots can happen. Eventually, the policy will terminate by paying out or expiring or being cancelled. You can write many stories to trace different start-to-finish histories of these policies. The system should be able to handle each story. (Thanks to Hans Schaefer for describing this approach to me.)
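One way to explore this technique in code is to enumerate start-to-finish histories from a small state model of the policy. The states, transitions, and step limit below are deliberately tiny, invented assumptions, not a real policy system:

```python
# Enumerate start-to-finish life histories of an insurance policy from a
# small, hypothetical state model. Each enumerated history is a candidate
# scenario the system should be able to handle.

TRANSITIONS = {
    "applied": ["issued", "declined"],
    "issued": ["paid_late", "borrowed_against", "terminated"],
    "paid_late": ["issued"],           # brought back into good standing
    "borrowed_against": ["issued"],
    "declined": [],
    "terminated": [],                  # paid out, expired, or cancelled
}

def life_histories(state="applied", path=None, limit=6):
    """Yield every history from `state`, cut off at `limit` steps."""
    path = (path or []) + [state]
    nexts = TRANSITIONS[state]
    if not nexts or len(path) >= limit:
        yield path
        return
    for nxt in nexts:
        yield from life_histories(nxt, path, limit)

for history in life_histories():
    print(" -> ".join(history))
```

Each printed history is one story to flesh out with people, motivations, and data; raising the step limit yields longer, more complex histories.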
2. List possible users, analyze their interests and objectives.
It’s easy to say, “List all the possible users” but not so easy to list them. Don Gause and Jerry Weinberg provide a useful brainstorming list in Exploring Requirements, page 72.
Once you identify a user, try to imagine some of her interests. For example, think of a retailer’s inventory control program. Users include warehouse staff, bookkeepers, store managers, salespeople, etc. Focus on the store manager. She wants to maximize store sales, minimize writedowns (explained below), and impress visiting executives by looking organized. These are examples of her interests. She will value the system if it furthers her interests.
Focus on one interest, such as minimizing writedowns. A store takes a writedown on an item when it reduces the item’s value in its records. From there, the store might sell the item for much less, perhaps below original cost, or even give it away. If the manager’s pay depends on store profits, writedowns shrink her pay. Some inventory systems can contrast sales patterns across the company’s stores. An item that sells well in one store might sell poorly in another store. Both store managers have an interest in transferring that stock from the low-sale store to the high-sale one, but if they don’t discover the trend soon enough, the sales season might be over (such as Xmas season for games) before they can make the transfer. A slow system would show them missed opportunities, frustrating them instead of facilitating profit-enhancing transfers.
In thinking about the interest (minimize writedowns), we identified an objective the manager has for the system, something it can do for her. Her objective is to quickly discover differences in sales patterns across stores. From here, you look for features that serve that objective. Build tests that set up sales patterns (over several weeks) in different items at different stores, decide how the system should respond to them and watch what it actually does. Note that under your analysis, it’s an issue if the system misses clear patterns, even if all programmed features work as specified.
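To make the writedown example concrete, here is a hypothetical sketch of the oracle side of such a test. The function `flag_transfer_candidates`, the threshold ratio, and the data shapes are all stand-ins I invented for illustration; in practice the feature under test would be the inventory system itself, and this kind of model tells you what it should report:

```python
# Hypothetical sketch of an oracle for the cross-store sales-pattern test.
# The threshold and data shapes are assumptions for illustration only.
def flag_transfer_candidates(weekly_sales, ratio=3.0):
    """Return (item, fast_store, slow_store) triples where one store sells
    an item at least `ratio` times faster than another."""
    flags = []
    for item, per_store in weekly_sales.items():
        stores = list(per_store.items())
        for fast_store, fast in stores:
            for slow_store, slow in stores:
                if fast_store != slow_store and fast >= ratio * max(slow, 1):
                    flags.append((item, fast_store, slow_store))
    return flags

# Test data: "board game" sells briskly downtown, sits on the shelf uptown.
sales = {"board game": {"downtown": 40, "uptown": 3},
         "stapler":    {"downtown": 5,  "uptown": 6}}

assert ("board game", "downtown", "uptown") in flag_transfer_candidates(sales)
```

If the real system misses the board-game pattern that this model flags, that is an issue, even if every programmed feature works as specified.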
3. Consider disfavored users: how do they want to abuse your system?
As Gause and Weinberg point out, some users are disfavored. For example, consider an accounting system and an embezzling employee. This user’s interest is to get more money. His objective is to use this system to steal the money. This is disfavored: the system should make this harder for the disfavored user rather than easier.
4. List “system events.” How does the system handle them?
An event is any occurrence that the system is designed to respond to. In Mastering the Requirements Process, Robertson and Robertson write about business events, events that have meaning to the business, such as placing an order for a book or applying for an insurance policy. As another example, in a real-time system, anything that generates an interrupt is an event. For any event, you’d like to understand its purpose, what the system is supposed to do with it, business rules associated with it, and so on. Robertson and Robertson make several suggestions for finding out this kind of information.
5. List “special events.” What accommodations does the system make for these?
Special events are predictable but unusual occurrences that require special handling. For example, a billing system might do special things at year-end. The inventory system might treat transfers differently (record quantities but not other data) when special goods are brought in for clearance sales.
6. List benefits and create end-to-end tasks to check them.
What benefits is the system supposed to provide? If the current project is an upgrade, what benefits will the upgrade bring? Don’t rely only on an official list of benefits. Ask stakeholders what they think the benefits of the system are supposed to be. Look for misunderstandings and conflicts among the stakeholders.
7. Interview users about famous challenges and failures of the old system.
Meet with users (and other stakeholders) individually and in groups. Ask them to describe the basic transactions they’re involved with. Get them to draw diagrams and explain how things work. As they warm up, encourage them to tell you the system’s funny stories, the crazy things people tried to do with the system. If you’re building a replacement system, learn what happened with the predecessor. Along with the funny stories, collect stories of annoying failures and strange things people tried that the system couldn’t handle gracefully. Later, you can sort out how “strange” or “crazy” these attempted uses of the system were. What you’re fishing for are special cases that had memorable results but were probably not considered credible enough to mention to the requirements analyst. Hans Buwalda talks about these types of interviews.
8. Work alongside users to see how they work and what they do.
While designing a telephone operator’s console (a specially designed phone), I traveled around the country watching operator/receptionists use their phones. Later, leading the phone company’s test group, I visited customer sites to sit with them through training, watch them install beta versions of hardware and software, and watch ongoing use of the system. This provided invaluable data. Any time you can spend working with users, learning how they do their work, will give you ideas for scenarios.
9. Read about what systems like this are supposed to do.
So you’re about to test an inventory management program and you’ve never used one before. Where should you look? I just checked Amazon and found 33 books with titles like
What To Look For In Warehouse Management System Software, and Quick Response: Managing the Supply Chain to Meet Consumer Demand. Google gave 26,100 hits for “inventory management system.” There’s a wealth of material for any type of business system, documenting user expectations, common and uncommon scenarios, competitive issues and so on.
If subject matter experts are unavailable, you can learn much on your own about the business processes, consumer products, medical diagnostic methods or whatever your software automates. You just have to spend the time.
10. Study complaints about the predecessor to this system or its competitors.
Software vendors usually create a database of customer complaints. Companies that write software for their own use often have an in-house help desk (user support) group that keeps records of user problems. Read the complaints. Take “user errors” seriously—they reflect ways that the users expected the system to work, or things they expected the system to do.
You might also find complaints about your product or similar ones online.
11. Create a mock business. Treat it as real and process its data.
Your goal in this style of testing is to simulate a real user of the product. For example, if you’re testing a word processor, write documents—real ones that you need in your work.
Try to find time to simulate a business that would use this software heavily. Make the simulation realistic. Build your database one transaction at a time. Run reports and check them against your data. Run the special events. Read the newspaper and create situations in your company’s workflow that happen to other companies of your kind. Be realistic, be demanding. Push the system as hard as you would push it if this really were your business. And complain loudly (write bug reports) if you can’t do what you believe you should be able to do.
Not everyone is suited to this approach, but I’ve seen it used with superb effect. In the hands of one skilled tester, this technique exposed database corruptors, report miscalculators, and many other compelling bugs that showed up under more complex conditions than we would have otherwise tested.
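The "treat it as real" discipline above can be sketched as a reconcile-after-every-transaction loop. `InventorySystem` below is a hypothetical stand-in I wrote for the product under test; the point is the independent ledger kept alongside it, as a real bookkeeper would keep one:

```python
# Hypothetical sketch of mock-business testing: post transactions one at a
# time and reconcile the system's report against an independent tally.
class InventorySystem:                        # stand-in for the real product
    def __init__(self):
        self.stock = {}
    def post(self, item, qty):
        self.stock[item] = self.stock.get(item, 0) + qty
    def report(self):
        return dict(self.stock)

system, ledger = InventorySystem(), {}
transactions = [("widget", +100), ("widget", -30), ("gizmo", +50), ("widget", -5)]

for item, qty in transactions:
    system.post(item, qty)                      # what the product records
    ledger[item] = ledger.get(item, 0) + qty    # what *we* believe is true
    # Reconcile after every transaction; complain loudly on any mismatch.
    assert system.report() == ledger, f"mismatch after {(item, qty)}"

print(system.report())   # {'widget': 65, 'gizmo': 50}
```

A real mock business would add reports, special events, and months of accumulated data on top of this skeleton.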
12. Try converting real-life data from a competing or predecessor application.
Running existing data (your data or data from customers) through your new system is a time-honored technique.
A benefit of this approach is that the data include special cases, allowances for exceptional events, and other oddities that develop over a few years of use and abuse of a system.
A big risk of this approach is that output can look plausible but be wrong. Unless you check the results very carefully, the test will expose bugs that you simply don’t notice. According to Glen Myers, The Art of Software Testing, 35% of the bugs that IBM found in the field had been exposed by tests but not recognized as bugs by the testers. Many of them came from this type of testing.
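Since plausible-looking output is exactly the trap, one guard is to reconcile the converted data record by record and field by field instead of eyeballing reports. A hypothetical sketch; the record shapes and field names are invented for illustration:

```python
# Hypothetical sketch: reconcile converted data against the source system
# record by record, rather than trusting plausible-looking output.
def diff_records(old_rows, new_rows, key="id"):
    """Return a list of mismatches between old-system and new-system rows."""
    old_by_key = {r[key]: r for r in old_rows}
    mismatches = []
    for new in new_rows:
        old = old_by_key.get(new[key])
        if old is None:
            mismatches.append((new[key], "missing in old system"))
            continue
        for field in old:
            if new.get(field) != old[field]:
                mismatches.append((new[key], field, old[field], new.get(field)))
    return mismatches

old = [{"id": 1, "balance": "100.00"}, {"id": 2, "balance": "0.00"}]
new = [{"id": 1, "balance": "100.00"}, {"id": 2, "balance": "0"}]  # plausible, wrong
assert diff_records(old, new) == [(2, "balance", "0.00", "0")]
```

The second record looks fine at a glance; only the field-by-field comparison catches the formatting drift, which is precisely the class of bug Myers's figure warns about.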
Risks of Scenario Testing
I’ve seen three serious problems with scenario tests:
§ Other approaches are better for testing early, unstable code. The scenario test is complex, involving many features. If the first feature is broken, the rest of the test can’t be run. Once that feature is fixed, the next broken feature blocks the test. In some companies, complex tests fail and fail all through the project, exposing one or two new bugs at a time. Discovery of some bugs has been delayed a long time until scenario-blocking bugs were cleared out of the way. Test each feature in isolation before testing scenarios, to efficiently expose problems as soon as they appear.
§ Scenario tests are not designed for coverage of the program. It takes exceptional care to cover all the features or requirements in a set of scenario tests. Covering all the program’s statements simply isn’t achieved this way.
§ Scenario tests are often heavily documented and used time and again. This seems efficient, given all the work it can take to create a good scenario. But scenario tests often expose design errors rather than coding errors. The second or third time around, you’ve learned what this test will teach you about the design. Scenarios are interesting tests for coding errors because they combine so many features and so much data. However, there are so many interesting combinations to test that I think it makes more sense to try different variations of the scenario instead of the same old test. You’re less likely to find new bugs with combinations the program has already shown it can handle. Do regression testing with single-feature tests or unit tests, not scenarios.
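The "try different variations" idea above can be sketched as generating a fresh sample of scenario parameters each test cycle instead of replaying the same documented script. The dimensions and values below are invented for illustration:

```python
# Hypothetical sketch: vary scenario parameters each run instead of
# replaying one documented scenario. Dimension names are illustrative.
import itertools
import random

SCENARIO_DIMENSIONS = {
    "payment":  ["on_time", "late", "partial"],
    "channel":  ["web", "phone", "branch"],
    "customer": ["new", "long_term", "reinstated"],
}

def scenario_variations(sample=5, seed=None):
    """Pick a fresh sample of parameter combinations for this test cycle."""
    combos = list(itertools.product(*SCENARIO_DIMENSIONS.values()))
    random.Random(seed).shuffle(combos)
    keys = list(SCENARIO_DIMENSIONS)
    return [dict(zip(keys, c)) for c in combos[:sample]]

for variation in scenario_variations(sample=3, seed=42):
    print(variation)   # run the scenario with these parameters
```

Recording the seed keeps each cycle reproducible while still exploring combinations the program has not already shown it can handle.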
In Sum
Scenario testing isn’t the only type of testing. For notes on other types of tests that you might use in combination with scenario testing, see my paper, What IS a Good Test Case.
Scenario testing works best for complex transactions or events, for studying end-to-end delivery of the benefits of the program, for exploring how the program will work in the hands of an experienced user, and for developing more persuasive variations of bugs found using other approaches.

Wednesday, November 25, 2009

Interesting title of Web seminar, Practical Testing and Evolution: An Enterprise-wide Automation Framework

Not sure how practical or objective the suggested approach will be. I have seen an earlier framework from them that was nice.

The Web seminar by ThoughtWorks test automation experts Jeff Rogers and Kristan Vingrys provides guiding principles for developing long-term automation strategies, including what to automate, setting reasonable automation strategies and goals, and how to evolve your test suites over time .............sounds interesting.
Practical Testing and Evolution: An Enterprise-wide Automation Framework.....Title
Test automation efforts that stumble and die are most often the result of misconceived perceptions of the effort and resources necessary to implement a successful, long-lasting automation framework. ......what's new ?
In this event, you will also learn: ....sounds interesting
  • Automation strategies by product, and organizational, enterprise-wide tactics to get your automation efforts to live beyond a single project
  • Driving automation early and having artifacts live on continuously–from genesis through development and beyond support and maintenance
  • Using automation to get quick feedback on the viability of the product

Tuesday, November 24, 2009

Comedy of errors !!

Sometime back Ajay and I had a chat ... and it is a classic example of how acronyms can confuse one !!

Ajay: Hi Meeta,

me: hi

Ajay: you prefer twitter or gmail chat or nothing (considering the time)
me: btw i hv been planning to sleep since 10 pm but been consistently on

Ajay: That's the power of ET discussions I would say

me: u bet

Ajay: why do you think managers need details of a process? They are afraid that they do not know what the tester is doing?

me: nah ! they do not know what they or their team is doing. They do not have competency in most of the cases to understand what is rightly done and what is not hence the approach

Ajay: "a document which has been passed on by seniors and told that this would work in most cases" something like that? why is there a gap b/w a good tester and a good manager?

me: kind of low self confidence is another reason, bad technical skills is yet another. There is never a gap between a good tester and a good manager

Ajay: is it true that testers with bad tech skills are promoted to management faster than good testers?

me: gap will be there only between a good tester and a bad manager

Ajay: If you can't be replaced, you can't be promoted stuff
me: nah ! promotion in product companies is due to your skills. Mostly, promotion in the service industry is due to your buttering skills

Ajay: good to know. Why is your smiley not animated? On Google Talk or Gmail?

Ajay: How come iPhone testing?

me: no idea, why it does not get animated ...test it out

Ajay: so I asked: are you on Gmail or Google talk?

me: iphone- generally has been interesting me for sometime

Ajay: which browser and which version? ok

me: IE7, gmail

Ajay: ok let me try to simulate it

Ajay: Check from IE 6

me: cant revert using office laptop

Ajay: no no don't

me: sab band ho jaayega (everything will get blocked) if i change any settings

Ajay: XP, Vista?

me: n i'll hv to spend whole of tomm with my ccd guy. XP

Ajay: you have chrome installed?

me: not allowed
Ajay: could you please try once with chrome

oh ok. ccd? why ccd? no office?

me: they'll immediately block my official mbox if i download n install any thing directly. ccd is our customer connection division. They take care of our hardware

Ajay: ROFL

me: whats that now ?

Ajay: Cafe Coffee Day

me: mujhe laga tha (that's what I thought)

Ajay: Rolling on Floor Laughing

me: tum yahi soch rahe hoge isiliye acronyms se door rehne ko kehte hain (I knew you would be thinking exactly that; this is why they say stay away from acronyms). what is AFD btw ?

Ajay: Away from Desk? once in my Engineering interview, the external guide asked me do you know ATM? I said Yes and was kind of shocked that he would give me his ATM card and password and ask me to withdraw money

me: ok, gives me a new topic for blog

Ajay: and then realized that he was talking about Adobe Type Manager

Monday, November 23, 2009

New Video on BBST

Found a new video on BBST by Cem posted on 7 Nov 09. I don't remember seeing it before. If you have not yet seen it, catch it on the link

It talks about: "To know how to serve the project well, we have to understand the project, its stakeholders, and their interests."

KT Scripting our thinking caps ?

Today Srini sent a tweet


@mheusser What is the deal about Skill Transfer (ST)? It sounds like a data transfer through an USB jump drive !!! In IT we call it as KT


It made me think about how seriously or casually we take information gathering ....... is the sole reason that we do it so often ? or that we are bored of doing it so often ?

I liked his term "data transfer". It so aptly fits the mechanical act of sharing information.

The whole building of the "testing castle" lies on this foundation of "KT".

"KT" not only helps us understand the objective to build into tests, but also the expectations for which tests have to be written ....... but indirectly, does it "script" our thought process ? ...... are we still ready to explore and experiment more exhaustively ? ....... are we thinking more than what is fed into us ? ......... are we asking sufficient questions beyond what is provided to us as information ? ..................??????

Worrying makes you cross the bridge before you come to it

Nice Article, recently read.
Testers, read the questions in between carefully. Can you relate them to someplace ?
Worrying makes you cross the bridge before you come to it !!
------------- By Harvey Mackay

Recently I saw a survey that says 40 percent of the things we worry about never happen, 30 percent are in the past and can't be helped, 12 percent concern the affairs of others that aren't our business, 10 percent are about sickness -- either real or imagined -- and 8 percent are worth worrying about. I would submit that even the 8 percent aren't really worth the energy of worry.

Did you know that the English word worry is derived from an Anglo-Saxon word that means to strangle or to choke? That's easy to believe. People do literally worry themselves to death. . . or heart disease, high blood pressure, ulcers, nervous disorders and all sorts of other nasty conditions. Is it worth it? Some folks seem to think this is a '90s phenomenon, but I've got news for you: advice about worry goes back as far as the Bible. We didn't invent it. We just need to find a way to keep it from ruling our lives.

I've been spending a lot of time in bookstores lately, in the middle of a 35-city book tour. From one coast to the other, north to south, some of the most popular self-help books concern worry, stress, and simplifying your life. I have a couple of favorite books to recommend.

First, an oldie: Dale Carnegie's "How To Stop Worrying and Start Living." It was first published in 1948, but the advice is just as fresh and valuable as it was then and is right-on for the new millennium. Being a chronic list maker, I found two sections that really knocked my socks off. Both were about business people trying to solve problems without the added burden of worrying. Carnegie credits Willis H. Carrier, whose name appears on most of our air conditioners, with these silver bullets:

1. Analyze the situation honestly and figure out what is the worst possible thing that could happen.
2. Prepare yourself mentally to accept the worst, if necessary.
3. Then calmly try to improve upon the worst, which you have already agreed mentally to accept.

Bingo! You can handle anything now.
You know what you have to do; it's just a matter of doing it. Without worrying. Another approach I like is a system put into practice at a large publishing company by an executive named Leon. He was sick and tired of boring and unproductive meetings marked by excessive hand-wringing. He enforced a rule that everyone who wished to present a problem to him first had to submit a memo answering these four questions:
What's the problem?
What's the cause of the problem?
What are all possible solutions to the problem?
Which solution do you suggest?
Leon rarely has to deal with problems anymore, and he doesn't worry about them. He's found that his associates have used the system to find workable solutions without tying up hours in useless meetings. He estimates that he has eliminated three-fourths of his meeting time and has improved his productivity, health and happiness. Is he just passing the buck? Of course not! He's paying those folks to do their jobs, and he's giving them great training at decision-making.

Another little gem that's made its way to a #1 New York Times bestseller is Richard Carlson's "Don't Sweat the Small Stuff, and it's all small stuff." Of course, being an aphorism junkie and slave to short snappy chapters, I've found this book can improve perspective in 100 small doses. I love the chapter titles: "Repeat to Yourself, 'Life Isn't an Emergency,'" "Practice Ignoring Negative Thoughts," and my favorite, "Let Go of the Idea that Gentle, Relaxed People Can't Be Superachievers."

The point is, you can't saw sawdust. A day of worry is more exhausting than a day of work. People get so busy worrying about yesterday or tomorrow, they forget about today. And today is what you have to work with.

I remember the story of the fighter who, after taking the full count in a late round of a brawl, finally came to in the dressing room. As his head cleared and he realized what had happened, he said to his manager: "Boy, did I have him worried. He thought he killed me." Now that's putting the worry where it belongs.

Sunday, November 22, 2009

Do You Spell Testing? A Mnemonic to Jump-Start Exploratory Testing, by James Bach

This is a very interesting post from James's blog. Replicating it here for an easy read.



In exploratory testing, we design and execute tests in real time. But how do we organize our minds so that we think of worthwhile tests? One way is through the use of heuristics and mnemonics. A heuristic is “a rule of thumb, simplification, or educated guess.” For example, the idea of looking under a welcome mat to find a key is a heuristic. A mnemonic, by contrast, is a “word, rhyme, or other memory aid used to associate a complex or lengthy set of information with something that is simple and easy to remember.” Heuristics and mnemonics go together very well to help us solve problems under pressure.

SFDPO Spells Testing

A mnemonic and heuristic I use a lot in testing is “San Francisco Depot,” or SFDPO. These letters stand for Structure, Function, Data, Platform, and Operations. Each word represents a different aspect of a software product. By thinking of the product from each of those points of view, I think of many interesting tests. So, when I’m asked to test something I haven’t seen before, I say “San Francisco Depot” to myself, recite each of the five product element categories and begin thinking of what I will test.

Structure (what the product is): What files does it have? Do I know anything about how it was built? Is it one program or many? What physical material comes with it? Can I test it module by module?

Function (what the product does): What are its functions? What kind of error handling does it do? What kind of user interface does it have? Does it do anything that is not visible to the user? How does it interface with the operating system?

Data (what it processes): What kinds of input does it process? What does its output look like? What kinds of modes or states can it be in? Does it come packaged with preset data? Is any of its input sensitive to timing or sequencing?

Platform (what it depends upon): What operating systems does it run on? Does the environment have to be configured in any special way? Does it depend on third-party components?

Operations (how it will be used): Who will use it? Where and how will they use it? What will they use it for? Are there certain things that users are more likely to do? Is there user data we could get to help make the tests more realistic?

Bringing Ideas to Light

I can get ideas about any product more quickly by using little tricks like SFDPO. But it isn’t just speed I like, it’s reliability. Before I discovered SFDPO, I could think of a lot of ideas for tests, but I felt those ideas were random and scattered. I had no way of assessing the completeness of my analysis. Now that I have memorized these heuristics and mnemonics, I know that I still may forget to test something, but at least I have systematically visited the major aspects of the product. I now have heuristics for everything from test techniques to quality criteria. Just because you know something doesn’t mean you’ll remember it when the need arises. SFDPO is not a template or a test plan, it’s just a way to bring important ideas into your conscious mind while you’re testing. It’s part of your intellectual toolkit. The key thing if you want to become an excellent and reliable exploratory tester is to begin collecting and creating an inventory of heuristics that work for you. Meanwhile, remember that there is no wisdom in heuristics. The wisdom is in you. Heuristics wake you up to ideas, like a sort of cognitive alarm clock, but can’t tell you for sure what the right course of action is here and now. That’s where skill and experience come in. Good testing is a subtle craft. You should have good tools for the job.
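SFDPO can even be carried around as a literal checklist. A small hypothetical sketch of my own; the prompts are abridged from the category descriptions in the article:

```python
# Hypothetical sketch: SFDPO as a checklist you can iterate while exploring.
# Prompts are abridged from the article's category descriptions.
SFDPO = {
    "Structure": "What files does it have? Can I test it module by module?",
    "Function":  "What are its functions? What error handling does it do?",
    "Data":      "What inputs does it process? What states can it be in?",
    "Platform":  "What OS does it run on? Any third-party components?",
    "Operations": "Who will use it? Where, how, and for what?",
}

def jump_start(product):
    """Print one prompt line per SFDPO aspect for the named product."""
    for aspect, prompts in SFDPO.items():
        print(f"[{product}] {aspect}: {prompts}")

jump_start("inventory manager")
```

The point, as Bach says, is not the tool but the habit: the mnemonic brings the five aspects into your conscious mind on demand.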

Ever Scrabbled ?

This has got to be one of the most innovative junk e-mails I've received in a while.
Someone out there either has too much spare time or is deadly at Scrabble. (Wait till you see the last one)!

Testers, what say ??

  1. DILIP VENGSARKAR When you rearrange the letters: SPARKLING DRIVE
  2. PRINCESS DIANA When you rearrange the letters: END IS A CAR SPIN
  3. MONICA LEWINSKY When you rearrange the letters: NICE SILKY WOMAN
  4. DORMITORY: When you rearrange the letters: DIRTY ROOM
  5. ASTRONOMER: When you rearrange the letters: MOON STARER
  6. DESPERATION: When you rearrange the letters: A ROPE ENDS IT
  7. THE EYES: When you rearrange the letters: THEY SEE
  8. A DECIMAL POINT: When you rearrange the letters: IM A DOT IN PLACE
  9. MOTHER-IN-LAW: When you rearrange the letters: WOMAN HITLER
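Testers, don't take the claims on faith; check them. A quick sketch of an anagram checker that ignores spaces, case, and punctuation:

```python
# Quick check: are the e-mail's claimed anagram pairs really anagrams?
# (Letters only; spaces, case, and punctuation are ignored.)
def is_anagram(a, b):
    norm = lambda s: sorted(ch for ch in s.upper() if ch.isalpha())
    return norm(a) == norm(b)

pairs = [
    ("DILIP VENGSARKAR", "SPARKLING DRIVE"),
    ("DORMITORY", "DIRTY ROOM"),
    ("ASTRONOMER", "MOON STARER"),
    ("THE EYES", "THEY SEE"),
    ("A DECIMAL POINT", "IM A DOT IN PLACE"),
    ("MOTHER-IN-LAW", "WOMAN HITLER"),
]
for a, b in pairs:
    print(f"{a} / {b}: {is_anagram(a, b)}")
```

Run it and one claim actually fails: DILIP VENGSARKAR has one more A than SPARKLING DRIVE, so it is not a true anagram. Even the junk e-mail needed testing.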

Easy Tips on Art of Writing Better

One of my new friends in testing space asked me to review a small piece of writing.
The gap in the work was in the symphony between words and sentences, and in the sync of all the words. I was missing the harmony of the orchestra.
Thought I would pen a few quick key tips, from my learnings in this space, for our new friends experimenting with writing.
Here you go ....
  • Chuck the thought of what others will think of your writing out of the window. People will think what they want to, and you will come to know of it only once they read it and give their comments. Give them a chance to speak, for you to improve. You need critics on the way to building success.

Learning - Don't Assume. Believe in what you see.

  • Ask the question "WHY" behind every word and sentence, and see if you are able to visualize it from what you have written. You need to "script" the reader's thought process. "Why, What and How" are most important.

Learning - your reader should not get lost trying to assume / interpret things while reading your work.

  • Don't contradict yourself through the work. It's a catch.

Learning - leaves a bad impression on the reader about you

  • Know your vocabulary well. Do not experiment with words if you do not understand them in depth. The simpler the words you choose, the more audience you get.

Learning - Let the reader understand what you want to say easily.

  • Don't assume and build perceptions around things whose basics and facts you do not know. If you have an idea, bounce it as an idea. If you have a learning, bounce it as a learning. If it is a new approach you have thought of, let it go as that.

Learning - Clarity solves many a problem and misconception.

  • Do not be influenced by others to such an extent that you lose the novelty in what you wanted to write. An easy way that I have found to handle it is: first write your ideas on a piece of paper, then start to read about others' ideas / opinions. Document what you feel is relevant on another piece of paper. Compare both notes. Now, with relevant credits, document what you feel is your novel ideation.

Learning - Research well. Give due credits. Be ethical.

Remember - Think about the WHYs the person who reads it will ask after reading what you want to write.

Welcome to the world of effective articulation !!

Experience of a job seeker !!

Been talking to a friend ..... he quit working with a service company giant in India and started out on his own recently.
While business was slow due to the recession, he thought he would appear for some interviews ....
He says, "if business does not help ... I used to go to interviews very often and tell people that I came out of 'XXX' to start a business ....... I never got offers. When I told them I got chucked out, they offered me ..." :)

Irony of this job space !!

Performance measurement analyst role and responsibilities

  • Recently read ....Performance measurement analyst role and responsibilities
    By Lior Arussy, President, Strativity Group

    What are the main job functions of a performance measurement analyst? How is the person in this role responsible for the customer experience?
    Customer experience success is highly dependent on measuring what matters the most to customers. From establishing the right measures to linking them to real results, a performance measurement analyst needs to be fully aware of what the customer experience is about and how to measure it and make the results actionable.
    The roles and responsibilities will include:
  • Identify key customer measurements for the whole organization
  • Identify customer experience measures per touch point
  • Determine frequency and scope of measurements
  • Link measurement results to actual customer spend to justify investment
  • Link customer measurements to operational measurements to enable change
  • Track changes and improvements

Gartner's top 10 strategic technologies for 2010

If you have not yet read ...

Plan your test strategies and competency upgrades.

Gartner's top 10 strategic technologies for 2010
By Anne McCrory, Editorial Director, 22 Oct 2009
ORLANDO, FLA. -- Gartner Inc. released its top 10 strategic technologies for 2010 this week, a list that paints a picture of an agile, mobile, secure enterprise where advanced analytics and social media identify early warning signs of failure and predict emerging business trends.
That vision was further extolled in numerous sessions at the annual Gartner Symposium/ ITxpo, where the research firm's executives described the past year as possibly the worst ever for IT. "Trust declined more dramatically in the past year than ever before," Gartner CEO Gene Hall said in his opening remarks.
Though IT budgets won't increase at many organizations, Gartner predicted a 3.3% growth rate for IT spending next year, plus a shift from capital to operating expenditures as "IT costs become scalable and elastic with the business," said Peter Sondergaard, senior vice president of research.
The top 10 strategic technologies list, proffered annually by David Cearley and Carl Claunch, wasn't the only such list offered up at the event. Sondergaard offered a list of nine focus areas based on an analysis of what people are searching for on Gartner's website. The top tier: Cost management, which will continue to be a top issue for 2010 but will encompass risk and growth as well; cloud computing, which will move from the discussion phase to small pilots; and process optimization around enterprise applications (ERP, customer relationship management, supply chain management) that will allow organizations to get more out of these investments.
His second tier included business intelligence; virtualization, as organizations create the foundation of a cloud infrastructure and move from owned to shared assets; and social media. The latter isn't just for so-called digital natives but also for "silver surfers," those over 60 who will become the most important segment in the next 10 years, he said.
The top 10 strategic technologies for 2010
Cearley and Claunch's list focuses on technologies that have the "potential for significant impact on the enterprise during the next three years." Some have fallen off the list from past years because companies should have already incorporated them into their plans (like service-oriented architecture or master data management), their adoption has slowed (unified communications) or there won't be market shifts warranting inclusion on the 2010 list (specialized systems and servers beyond blades). Others have come back in new forms: virtualization, which topped the 2009 list, is now embedded in several wider areas as well as standing on its own for a specific usage.
Here, then, is the list for 2010:
Gartner's 2009 list
The top 10 strategic technologies for 2009 were as follows:
1. Virtualization
2. Business intelligence
3. Cloud computing
4. Green IT
5. Unified communications
6. Social software and social networking
7. Web-oriented architecture
8. Enterprise mashups
9. Specialized systems
10. Servers -- beyond blades
1. Cloud computing. Organizations should think about how to approach the cloud in terms of using cloud services, developing cloud-based applications and implementing private cloud computing environments. "Everything will be available as a service," Cearley said. "That doesn't mean you use it all [or] move it all there."
2. Advanced analytics. Real-time data analysis will enable fraud detection on one hand and prediction and simulation on the other, as organizations use data to look ahead.
3. Client computing. Enterprises need to develop a five- to eight-year client computing roadmap before making near-term decisions such as whether or how to upgrade client hardware or move to Windows 7. The progression of desktop virtualization technology and the range of devices available make this an important analysis. "Build a strategic client computing roadmap bringing all issues and devices together, or you will be following vendor roadmaps," Cearley said.
4. IT for green. The "green" concept has moved beyond energy-efficient data centers to using IT to enable green throughout the enterprise. For example, an organization could use IT to analyze and optimize shipping of goods.
5. Reshaping the data center. A flexible "pod" model, where data center sections can be independently heated, cooled and powered, allows the organization to light up new sections only when needed.
6. Social computing. Organizations need to examine the use of social media by both internal and external constituents and figure out how to govern it. Social network analysis can be used both to detect fraud and to change business processes to boost internal efficiency.
7. Security -- activity monitoring. As targeted attacks rise and cloud computing adds complexity, organizations need to identify a longer-term plan for how all of their security technologies come together. Security incident and event management devices, for example, are one approach that is becoming mainstream.
8. Flash memory. This technology, made ubiquitous by popular USB sticks, is a faster, although more expensive, storage alternative. Price drops mean it will offer a "new layer of the storage hierarchy in servers and client computers," Gartner said.
9. Virtualization for availability. Live migration technology such as VMware Inc.'s VMotion will enable the use of virtualization for high performance, possibly displacing failover cluster software and even fault-tolerant hardware.
10. Mobile applications. Mobile is at a tipping point, given the proliferation of handheld devices and their power and storage.

Requirements and testing

The points below on "Requirements and testing" are drawn from Richard Bender's approach:

Writing Testable Requirements - Deliver requirements that are concise, accurate, modular, and highly testable.
Requirements-Based Testing - Identify important ambiguities in requirements specifications before coding starts.
Mastering the Requirements Process – Learn the complete process of eliciting and writing testable requirements.
Requirements Modeling – Understand how to find and verify requirements with models.
Essential Software Requirements – Use powerful techniques for identifying, documenting, and verifying requirements, including formal Plan-Driven and Agile requirements approaches.
Extending Requirements - Extend the foundations laid in the “Mastering the Requirements Process” course by learning how to choose the best set of requirements to give you a competitive edge and still get your product to market on time.

Saturday, November 21, 2009

What categories of tools should we be looking for while thinking the tester way?

What makes sense for a tester as extended support in terms of tools?

Questions to ask -

  • Do we really need them?
  • Will they help me perform better in this situation?
  • Are these tools scripting ideas and inputs?
  • Are they able to "think" cognitively like you and me?
  • Can they really unearth bugs?
  • Will they increase my productivity?

Specific to Automation tools

  • Are they able to let me explore the inputs?
  • Am I experimenting while doing testing?
  • Can I make them work the way I'd want them to work?
  • Are they actually doing what I wanted them to do while executing a script?

Checkers .........Testers ??

Think tanks .........where are you ??????
This was the question I asked myself while I sat through the Rapid Testing class with Michael Bolton as my instructor, on Nov 17 & 18, 2009.
It made me realize how mundane our daily activities have become. What are our customers primarily focused on? What are we primarily focused on?
Customers: test cases that pass or fail, with "pass" as the success criterion.
We: how many passing test cases go out from my desk.
Sadly, the focus is hardly ever exploratory most of the time :(

Metrics is the BUZZ word everywhere. How many times are we measuring and rewarding the maximum number of bugs identified?
Let's make the change and do it. Let exploratory minds think!!

Wake up time ......... !!

Amazing Energy Group !

Weekend Testers !!

Attended STC on 19 Nov 09, and you should have been there to hear Ajay B on the thought wave called "Weekend Testers". What a rendition of thoughts!! It was a full house and drew a standing ovation from many, including Michael Bolton!!

Great Going Folks !!

Let's be the leaders in making the change the world needs: "THINKING TESTERS"!!

Learnings from Michael's Rapid Testing class

...................................... yet to scribe

Thanks Parimala :)

While I am at this page, how can I not thank Parimala, who asked me a simple question: why have I stopped blogging?

And I will definitely thank her, because that question made me open this page and realize how quickly time passes by, and it ensured this did not become just another new year resolution ;)

Come back !! Wake up time !

I never realized that it has been more than a year since I published something on this page :)

Attended a Rapid Testing workshop with Michael Bolton in Bangalore on 17 & 18 Nov 09, and was I amazed at his class handling!

All that could be said was WOW!! This is the way to engage a class; a perfect one!!

And how can I NOT blog about this?

Thanks Michael !!
  • for the opportunity to learn from you
  • for teaching us the differentiator of good testers: "checkers and testers"
  • for the "food for thought" bouncers
  • and for waking me up and bringing me back to this world :)