Sunday, April 15, 2007

Golden rules to retain staff

I have recently changed jobs ....... yes, after sticking with the last one for ages :)

Saw the following two posts on the net and found them very interesting:

http://www.rediff.com/money/2007/apr/11spec.htm

http://conlak.blogspot.com/2007/03/blog-post.html

Friday, March 16, 2007

Elementary ........ starter

I have seen people come back and ask: what should they test ??

Yes, these are essentially people with less exposure and experience, but the fact is that it is exactly they who get confused !!

An experienced tester would immediately answer ....... EVERYTHING !!

So, coming back to the core question: what should you test ??

The Black Box perspective .....
Any software will have certain mandatory fields and certain non-mandatory fields on the user interface.

I am not taking any part of usability testing into consideration here .....
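To make the mandatory/non-mandatory idea concrete, here is a minimal sketch of the kind of black-box checks a tester would write. `validate_signup` is a hypothetical validator invented for illustration (it returns a list of error messages, empty meaning the form is accepted); the field names are assumptions, not part of any real application.

```python
# Hypothetical validator: name and email are mandatory, phone is optional.
# Returns a list of error messages; an empty list means the form is accepted.
def validate_signup(form):
    errors = []
    for field in ("name", "email"):
        if not form.get(field, "").strip():
            errors.append(f"{field} is required")
    return errors

# Mandatory-field tests: leaving a required field blank must be rejected.
assert validate_signup({"name": "", "email": "a@b.com"}) == ["name is required"]
assert validate_signup({"name": "Raj", "email": ""}) == ["email is required"]

# Non-mandatory field tests: omitting the optional field must still be accepted,
# and supplying it must not break anything.
assert validate_signup({"name": "Raj", "email": "a@b.com"}) == []
assert validate_signup({"name": "Raj", "email": "a@b.com", "phone": "12345"}) == []
```

The same pattern extends to the other classic black-box checks: boundary values, invalid formats, and combinations of filled/blank fields.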

Saturday, March 10, 2007

Confusion ....tester = QC ? or QA ? ...something else

Hmmmm.......

For some time now, I've been seeing confusion in people regarding who a tester is ..... Some people perceive a tester as a quality control person ........ some think testers are quality assurance persons ........ the list is a bit longer, but these two are the main confusing roles ......


Why can't a tester be perceived as a tester ? ..... Why can't he be perceived as someone who knows how to break the code ........ oh !! not another confusion ......... I am not talking about a hacker here, but a tester ........

You may agree to differ in opinion, but honestly that is what a tester does ....... his job is to build scenarios, create plans, design test cases, generate test ware ....... and test .........

White Box testing experts delve into the structural aspects of the code, while Black Box testing experts want to understand the acceptance levels of the software ....... from the client/customer perspective.

Oh !! how did I forget to add ...... the biggest misconception of all: that failed developers become testers .....

Does the argument finish here ......... NO... wait to hear more on this blog !!

Scenarios in the product and service industries

In a service company scenario you do not own the source code under any circumstances, nor the test artefacts (plans/scenarios/cases etc.). You do the work for a client who has outsourced these jobs to you. The ownership of all this eventually lies with the client, and you have to share it back with him. The details will depend on the statement of work agreed with the client.

In a product company scenario, the source code along with all test-related documentation is owned by the company itself. It may or may not be shared with the customers of the product; sharing will be highly dependent on business need.

Friday, February 23, 2007

Test coverage

My perception:
Test coverage is a metric that indicates the quality of a set of test cases, and hence of the test adequacy criteria behind them.

In short .... we are trying to quantify test adequacy.

A test case/suite is meant to cover or execute test objects such as statements, conditions, paths etc., as per its test adequacy criterion. The test adequacy criteria might at times be part of the agreement between the client/customer and the service provider.

The more test objects the test adequacy criteria cover, the higher the coverage. And the higher the test coverage given by a test case/suite, the better its quality.

General definition says ...
Test coverage = (no. of test objects covered or executed at least once) / (total number of test objects)

The concept can be illustrated as follows:
Suppose one test case/test suite covers 80 statements out of 100, 8 branches out of 10, and 20 paths out of 25. Then it gives 80% statement coverage, 80% branch coverage and 80% path coverage, and it will be a better test case/suite than one providing less than 80% coverage. Ideally a test case/test suite should give 100% test coverage, but in practice that does not happen.
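The formula above is just a ratio, so the illustration can be checked with a few lines of code. This is only a sketch of the arithmetic; the numbers are the ones from the example, not measurements from any real tool.

```python
# Coverage as defined in the post:
# coverage = (test objects covered at least once) / (total test objects)
def coverage(covered, total):
    return covered / total

# The numbers from the illustration above, all working out to 80%:
print(f"statement coverage: {coverage(80, 100):.0%}")
print(f"branch coverage:    {coverage(8, 10):.0%}")
print(f"path coverage:      {coverage(20, 25):.0%}")
```

In practice a coverage tool computes these ratios for you by instrumenting the code, but the underlying metric is exactly this fraction.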

Actually, test coverage analysis is a type of code coverage analysis. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage". Test coverage provides a quantifiable measure of how well the test suite actually tests the product. The most basic form of test coverage is to measure which procedures were and were not executed during the test run.

There are many other test coverage measures.
1. Statement Coverage
2. Decision Coverage
3. Condition Coverage
4. Multiple Condition Coverage
5. Condition/Decision Coverage
6. Path Coverage
7. Function Coverage
8. Call Coverage
9. Data Flow Coverage
10. Object Code Branch Coverage
11. Loop Coverage
12. Relational Operator Coverage and many more.
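These measures are genuinely different, not just different names for the same number. A toy illustration (the function is invented for the example): one test can achieve 100% statement coverage while leaving branch coverage at only 50%.

```python
# Toy function used to show why statement and branch coverage differ.
def absolute(x):
    result = x
    if x < 0:
        result = -x
    return result

# A single test with x = -5 executes every statement of `absolute`
# (100% statement coverage) but takes only the True outcome of the
# `if` decision, so branch coverage is just 50%.
assert absolute(-5) == 5

# A second test with x = 5 exercises the False (fall-through) branch
# as well, bringing branch coverage up to 100%.
assert absolute(5) == 5
```

Stronger criteria such as multiple condition or path coverage subdivide the behaviour even further, which is why the choice of measure matters when a coverage target is written into an agreement.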


Test coverage specifies the extent to which the test suite (or, at times, the test criteria) fulfils its defined objectives across the system's life from inception to retirement: analysis, design, ...... or, in practitioners' terms, BRD, FDD, TDD, ......
In each phase of the SDLC the nature of test coverage varies, but the purpose remains the same.

Test suites, which encompass the test cases and the results of each one of them, provide big-time support during implementation.
In my opinion, test coverage techniques need to mature a little more, so that we avoid situations where one phase (or maybe more, but rarely all) has high test coverage while the others struggle to reach even the average.
The result of that imbalance is that the other phases are either not performed well or end up being eliminated.

Thursday, February 15, 2007

Interesting Debate

Cem and I have been having a very interesting debate over complete testing.

According to Cem, the correct answer to the question below should be choice ‘b’.
Qs ) Which is the best definition of complete testing:
Choose one answer.
a. You have completed every test in the test plan.
b. You have discovered every bug in the program.
c. You have reached the scheduled ship date.
d. You have tested every statement, branch and combination of branches in the program.

And I say the answer should be choice ‘d’. Cem reasons: how can testing be complete if there might still be bugs remaining?

And I go back and say that I feel ‘b’ and ‘d’ complement each other.
‘d’ is probably one way to achieve ‘b’.

I reason that business units across the world (I am leaving small software projects out of scope here, as they can be tested completely most of the time) plan their product releases. Before a release, it is certified that the testing is complete. The products are released, yet we still find any number of customer-reported bugs, which directly or indirectly lead to more regressions being introduced into the software. These regressions are sometimes identified by the QA teams and sometimes reported back as new bugs by the customers.
I have, till date, not seen a single white paper from any of the big business houses where they have not had a single bug reported (either internal or external) against a release.
Now, coming back to my interpretation ….. what do you actually try to do in a complete testing scenario ?? You try to have test cases that check all possible statements, branches and combinations of branches in the program. Basically, I try to ensure that all my complexity node points are well covered during testing.

Yes, when I say ‘complete testing’, my idea is that, ideally, with everything I can think of as a test engineer (all my white box and black box test cases), I have covered all the possible scenarios I can imagine. But factually, there is always a chance that a few uncovered, un-trodden pathways are lurking among the numerous lines of code. The more the components, the more the dependencies, the more the chances of failure. That is the simple testing ‘mantra’ that fits my thoughts ……..

It would be nice to have your views on my thoughts ……..

Before I end …. can anyone recall the embarrassment of Bill Gates when a Windows 98 demo machine crashed to a blue screen live on stage, in front of the press ……….broadcast all over the world ……

And last but not least ....... I am not debating against the definition, but the thought that goes behind the definition ......