
Saturday, 9 November 2013

How to write a better test case!!!

(Image courtesy: http://www.mindmeister.com)

Starting with the basic definition: a Test Case is a set of input pre-conditions and output post-conditions that are verified for a particular test condition identified from the given specification. My own definition is simpler: a test case is nothing but investigating the functionality in various ways, and its outcome is determined by a Pass/Fail status after execution. Everywhere, requirements are mapped to only two types of cases, positive test cases and negative test cases, and the same mapping continues down into the sub-requirements too.

Usually test cases are written to estimate the test coverage of the application. Most companies that follow standards author test cases before testing starts. It is better to write test cases before official testing begins than to do endless adhoc testing.

1. Before you start writing a test case, read the functional document for the application area. Analyze the test environment carefully and understand the expected behavior of the area under test.
2. Try to prepare a checklist of all the functionalities. That checklist becomes your test scenarios, and our target is to identify the various cases for each scenario.
3. For every front-end design point mentioned in the document, prepare at least one positive and one negative case (see the sketch after this list).
4. Ensure your test cases cover all the points mentioned in the specification document. Functional and GUI cases are to be written separately.
5. Start with the high-priority scenarios, the features that matter most to the application.
6. Test steps must be clear and accurate, but not too long. Avoid duplication of test cases.
7. Test data should be prepared for every test case. Expected and actual results must be logged during execution.
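
To make point 3 concrete, here is a minimal pytest sketch, assuming a hypothetical login(username, password) function; it pairs one positive and one negative case for the same design point and keeps the test data right next to each case, as point 7 suggests:

    import pytest

    # Hypothetical function under test; replace with your application's real API.
    def login(username, password):
        return username == "alice" and password == "secret"

    # One positive and one negative case for the same design point,
    # with the test data kept alongside each case.
    @pytest.mark.parametrize(
        "username, password, expected",
        [
            ("alice", "secret", True),    # positive: valid credentials
            ("alice", "wrong", False),    # negative: invalid password
        ],
    )
    def test_login(username, password, expected):
        assert login(username, password) == expected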

With the above characteristics, a better test case can be written. A simple test case template looks like this:

Test Case ID: ID number
Test Scenario: Can be taken from our checklist
Test Case: The case identified from the scenario, positive or negative and more
Test Case Description: Proper and accurate steps for executing the test case
Test Data: Input data to be used for executing the test case
Expected Result: The result as per the specification document
Actual Result: The result determined from the application after executing the test case
Status: Pass/Fail
Review Comments: Comments written by the reviewer
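
As an illustration only, the same fields could be captured in code. This is a minimal sketch with hypothetical names and values, not any specific tool's record format:

    from dataclasses import dataclass

    @dataclass
    class TestCaseRecord:
        # Fields mirror the template above.
        case_id: str
        scenario: str
        case: str
        description: str
        test_data: dict
        expected_result: str
        actual_result: str = ""
        status: str = "Not Run"     # becomes Pass/Fail after execution
        review_comments: str = ""

    # Hypothetical example: a negative case for a login scenario.
    tc = TestCaseRecord(
        case_id="TC-042",
        scenario="User login",
        case="Negative: login with a wrong password",
        description="1. Open the login page. 2. Enter a valid user name "
                    "and an invalid password. 3. Click 'Login'.",
        test_data={"username": "alice", "password": "wrong-pass"},
        expected_result="An 'Invalid credentials' error is shown and the "
                        "user stays on the login page",
    )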


Happy Testing...





Friday, 18 October 2013

How to report a BUG???

Firstly, what do we call it: a BUG or a DEFECT?
As far as I'm concerned, both are practically the same. A defect is what the tester raises as an issue, whereas we call it a bug once the developer officially accepts the defect.

It's a professional war :) between a developer and a tester. However, it's a friendly battle :) Only one can have the upper hand: either quality code or a strong tester.
When a bug or a defect is identified, the tester should report it to the respective developer. But how? What are the steps to follow while assigning a bug to the developer?



With the following template, I guess the bug/defect can be reported accurately:

1. Title: A short, one-line description of the bug.
2. Identified by: Name of the tester.
3. Date: Date the bug was identified.
4. Assigned to: Name of the respective developer.
5. Environment: The environment in which the bug was identified, such as Windows, Linux or Solaris.
6. Build no: The build release number in which the bug was identified.
7. Bug Type: States what type of bug it is. Typically these are:
  • Functional: Bugs that deviate from the expected flow.
  • Usability: When an end-to-end scenario is accomplished in a different way than the intended one.
  • GUI: Bugs that affect the presentation, look or layout of pages, images and form elements.
8. Bug Severity: This renders the impact of the bug on the application, whatever its type (functional, usability, GUI or security). Severity levels can differ according to the process a company follows; mine are Critical, High, Medium and Low (see the sketch after this list).
  • Critical: The screen/application encounters unexpected errors, cannot be tested further, and the bug needs to be resolved immediately. Ex: cannot log in to the app.
  • High: The functionality in the screen/app deviates from the expected result. Ex: clicking on a link takes you to page X instead of the intended page Y.
  • Medium: A record is saved into the database but is shown improperly in the user interface.
  • Low: Bugs that do not interfere with core functionality and are just annoyances that may or may not ever be fixed; typically spelling mistakes or color legends. Ex: search results display in an incorrect format in different browsers.
9. Priority: How fast the bug has to be resolved; the decision is taken by the developer/manager.
10. Test data: What kind of test data was used while testing, and with which data the bug was identified.
11. Module name: Name of the module in the application where the bug was identified.
12. Screen name: Name of the screen, under the respective module, where the bug was identified.
13. Description: A detailed description of the identified bug, with proper steps to reproduce.
14. Root cause: Specify the proper reason the bug was caused.
15. Attachment: A proper screenshot of the bug, if required.
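
To tie the fields together, here is a minimal sketch of this template in code. The class, field names and example values are all hypothetical, not the API of any real bug-tracking tool:

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        CRITICAL = 1   # blocks further testing; resolve immediately
        HIGH = 2       # functionality deviates from the expected result
        MEDIUM = 3     # saved correctly but shown improperly in the UI
        LOW = 4        # annoyances: spelling mistakes, color legends

    @dataclass
    class BugReport:
        title: str
        identified_by: str
        date: str
        assigned_to: str
        environment: str       # Windows / Linux / Solaris ...
        build_no: str
        bug_type: str          # Functional / Usability / GUI
        severity: Severity
        priority: str          # decided by the developer/manager
        test_data: str
        module_name: str
        screen_name: str
        description: str       # detailed, with steps to reproduce
        root_cause: str = ""
        attachment: str = ""   # path to a screenshot, if required

    # Hypothetical example of a High-severity functional bug.
    report = BugReport(
        title="Login link navigates to the wrong page",
        identified_by="tester-name",
        date="2013-10-18",
        assigned_to="developer-name",
        environment="Windows",
        build_no="1.2.3",
        bug_type="Functional",
        severity=Severity.HIGH,
        priority="High",
        test_data="valid user credentials",
        module_name="Authentication",
        screen_name="Login",
        description="1. Open the home page. 2. Click the 'Login' link. "
                    "Expected page Y, actual page X.",
    )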

"Although tester has classified the bug, lead or manager has right to re-classify the bug."

Happy Testing...






Thursday, 27 June 2013

What should you test first when a new functionality change occurs

In my experience of testing an application when a build is released, as a team we must first read the release notes provided by the development team properly, and then prioritize our testing levels:

1. Sanity Testing (initial): Go through each and every link in the application and ensure that there are no blockers, finally certifying that the build is ready for further testing (a minimal link-check sketch follows this list).

2. Adhoc Testing for a while: Testing the application with random scenarios for an hour or two.

"After all we are most likely  to find bugs on new features which were released with the build, then why doing this Sanity and a sample adhoc?"

"Of-course I do accept that our first priority would be testing the new functionality. But before moving to that tester has to certify that build was properly deployed which is the initial phase before actual testing starts isn't it?  After successful completion of sanity we will proceed according to the release notes and all..."


3. Executing Test Scenarios (High Level): After a while of adhoc testing, the tester must execute all the high-priority scenarios according to the release notes, which is to say the positive way of testing. The tester can surely filter out the high-priority bugs at this stage itself.

4. Executing Test Scenarios (Low Level): Exercising the application in depth, including the medium- and low-priority scenarios, along with negative scenarios.

5. Re-Testing: Retesting of the fixed bugs.
After a few iterations of the same process, the big task ends in the final iteration with...


6. Regression Testing: Ensuring the new changes do not affect the existing functionality.
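
As a rough illustration of the sanity pass in step 1, here is a minimal sketch assuming the third-party requests library plus a hypothetical base URL and page list; it only verifies that key pages respond without server errors:

    import requests

    BASE_URL = "http://example-app.local"   # hypothetical application URL

    # Pages a sanity pass might touch; adjust the list to your application.
    PAGES = ["/", "/login", "/search", "/reports"]

    def sanity_check():
        """Return a list of (page, problem) pairs that block further testing."""
        blockers = []
        for page in PAGES:
            try:
                resp = requests.get(BASE_URL + page, timeout=10)
                # Treat any server error (5xx) as a blocker.
                if resp.status_code >= 500:
                    blockers.append((page, resp.status_code))
            except requests.RequestException as exc:
                blockers.append((page, str(exc)))
        return blockers

    if __name__ == "__main__":
        found = sanity_check()
        if found:
            print("Build NOT ready for further testing:", found)
        else:
            print("Sanity passed: build is ready for further testing")

A real sanity pass would of course click through the application by hand, but even a small script like this catches deployment blockers before the actual testing starts.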

As someone who has only recently started my journey in testing, this is the type of process I have experienced when a build is released to the testing team.

HAPPY TESTING!!!




