SOFTWARE TESTING
Testing is vital to the success of the system; it helps to identify and correct errors in the system. During system testing, performance and acceptance standards are developed. The proposed system was tested at every stage of project development: after each section of a module was completed, the completed portion was tested with sample data inputs, ensuring that each module was error free.
Input data for testing was obtained from the organization itself, and each module was found to be free of errors. Testing was also done with invalid data that the system should reject, and the system's performance was monitored over different ranges of inputs.
Proper error-handling methods were adopted to make the system more robust, which simplified the testing process. The expected output of the system was compared with the actual output obtained and found to match; inputs were given, the outputs were analysed, and the results were found to be error free and satisfactory.
1. UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design: the module. Unit testing exercises specific paths in a module's control structure to ensure complete coverage and maximum error detection. This test focuses on each module individually, ensuring that it functions properly as a unit; hence the name, unit testing.
In this project, each functionality is considered a unit and tested. For example, the registration step is treated as a unit and tested on its own; the errors found are taken care of and appropriate bug fixes are made.
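As an illustration, a registration step might be unit tested as sketched below; `register_user` and its validation rules are hypothetical stand-ins, not the project's actual code:

```python
# Hypothetical registration step treated as a unit under test.
def register_user(username, password):
    """Register a user; returns True on success, raises ValueError on bad input."""
    if not username or not username.isalnum():
        raise ValueError("username must be non-empty and alphanumeric")
    if len(password) < 8:
        raise ValueError("password must be at least 8 characters")
    return True

# Unit tests exercise the module in isolation with sample data inputs.
def test_register_valid():
    assert register_user("alice", "s3cretpass") is True

def test_register_rejects_short_password():
    try:
        register_user("alice", "short")
        assert False, "expected ValueError"
    except ValueError:
        pass  # the unit rejected the bad input as intended

test_register_valid()
test_register_rejects_short_password()
```

Each test targets one behaviour of the unit, so a failure points directly at the code responsible.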
Unit testing comprises the set of tests performed by an individual programmer prior to integration of the unit into a larger system. The situation is illustrated as follows:
Coding and debugging → Unit testing → Integration testing
A program unit is usually small enough that the programmer who developed it can test it in great detail, and certainly in greater detail than will be possible when the unit is integrated into an evolving software product.
There are four categories of tests that a programmer will typically perform on a program unit:
§ Functional tests
§ Performance tests
§ Stress tests
§ Structure tests
Functional test cases involve exercising the code with nominal input values for which the expected results are known, as well as boundary values (minimum values, maximum values, and values just outside the functional boundaries) and special values, such as logically related inputs.
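A boundary-value sketch in Python, assuming a hypothetical eligibility rule for ages 18 to 60 inclusive:

```python
# Hypothetical validator with a functional boundary: ages 18..60 inclusive.
def is_eligible(age):
    return 18 <= age <= 60

# Functional test cases: nominal, boundary, and just-outside values.
cases = {
    30: True,   # nominal value
    18: True,   # minimum boundary
    60: True,   # maximum boundary
    17: False,  # just below the lower boundary
    61: False,  # just above the upper boundary
}
for age, expected in cases.items():
    assert is_eligible(age) == expected
```

The just-outside values are the ones most likely to catch off-by-one mistakes such as writing `<` where `<=` was intended.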
Performance tests determine the amount of execution time spent in various parts of the unit, as well as program throughput, response time, and the program unit's device utilization. Performance testing is most productive at the subsystem and system levels.
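A simple performance probe in Python, using a hypothetical unit and an assumed time budget:

```python
import time

# Hypothetical unit whose execution time we want to measure.
def summarise(records):
    return sum(records) / len(records)

data = list(range(100_000))

# Time repeated calls and compute the average per call.
runs = 50
start = time.perf_counter()
for _ in range(runs):
    summarise(data)
elapsed = time.perf_counter() - start
avg_ms = (elapsed / runs) * 1000

# A performance test asserts against a budget rather than an exact figure;
# the 1000 ms budget here is an arbitrary illustrative value.
assert avg_ms < 1000, f"average call took {avg_ms:.2f} ms, over budget"
```

Averaging over many runs smooths out timer noise; in practice the budget would come from the system's response-time requirement.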
Stress tests are those tests designed to intentionally break the unit. A great deal can be learned about the strengths and limitations of a program by examining the manner in which a program unit breaks.
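A stress-test sketch in Python: a hypothetical parser is fed input far beyond its stated limit so we can observe how it breaks:

```python
# Hypothetical parser unit; stress tests probe how it behaves under extremes.
def parse_csv_line(line, max_fields=1000):
    fields = line.split(",")
    if len(fields) > max_fields:
        raise ValueError(f"too many fields: {len(fields)}")
    return fields

# Stress the unit with an input ten times its stated limit and check
# that it fails in a controlled, documented way rather than misbehaving.
huge_line = ",".join(str(i) for i in range(10_000))
try:
    parse_csv_line(huge_line)
    failed_cleanly = False
except ValueError:
    failed_cleanly = True
assert failed_cleanly
```

Observing a clean, explicit failure here tells us the unit's limitation is enforced, not merely documented.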
Structure tests are concerned with exercising the internal logic of a program and traversing particular execution paths.
1.1 Black Box Testing
Black-box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that fully exercise all functional requirements for a program. Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
Black-box testing attempts to find errors in the following categories:
1) Incorrect or missing functions.
2) Interface errors.
3) Errors in data structures or external database access.
4) Behavior or performance errors.
5) Initialization and termination errors.
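A tiny black-box sketch in Python: the test cases below are derived from a hypothetical functional requirement (a 10% discount on orders of 100 or more), not from the code's internal structure:

```python
# The spec (hypothetical): discount(total) returns 10% off orders of
# 100 or more, and 0 otherwise. Tests are derived from that spec alone.
def discount(total):
    return round(total * 0.10, 2) if total >= 100 else 0.0

# Black-box test cases come from the functional requirement,
# without looking at how discount() is implemented.
assert discount(100) == 10.0    # boundary stated in the spec
assert discount(99.99) == 0.0   # just outside the boundary
assert discount(250) == 25.0    # nominal value
```

The same three tests would remain valid if the implementation were rewritten, which is the defining property of black-box testing.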
1.2 White Box Testing
White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that
1) Guarantee that all independent paths within a module have been exercised at least once.
2) Exercise all logical decisions on their true and false sides.
3) Execute all loops at their boundaries and within their operational bounds.
4) Exercise internal data structures to ensure their validity.
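The path- and branch-coverage idea above can be sketched in Python with a hypothetical unit containing two decisions:

```python
# Hypothetical unit with two decisions; white-box tests exercise each
# logical decision on both its true and false sides.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Three cases cover every independent path through the control structure.
assert classify(-5) == "negative"  # first decision true
assert classify(0) == "zero"       # first false, second true
assert classify(7) == "positive"   # both decisions false
```

Unlike the black-box cases, these tests were chosen by reading the code: one test per independent path through it.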
2. INTEGRATION TESTING
Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors related to interfacing. The tested modules are combined into subsystems, and the interfaces among system parts are verified. Strategies for integrating software components into a functioning product include the bottom-up strategy, the top-down strategy, and the sandwich strategy.
Integration testing addresses the issues associated with the dual problems of verification and program construction. After the software has been integrated, a set of high-order tests is conducted. The main objective of this testing process is to take unit-tested modules and build the program structure dictated by the design.
2.1. TOP-DOWN INTEGRATION
Top-down integration is an incremental approach to the construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main program module; the modules subordinate to it are incorporated into the structure in either a depth-first or breadth-first manner. Integration starts with the main routine and one or two immediately subordinate routines. Once this top-level "skeleton" has been thoroughly tested, it becomes the test harness for its immediately subordinate routines; a test harness consists of the driver programs and data necessary to exercise the modules. Top-down integration requires the use of program stubs to simulate the effect of lower-level routines that are called by those being tested.
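A minimal sketch of the stub idea in Python; the function names are illustrative, not from the project:

```python
# Top-down sketch: the high-level routine is tested first, with a stub
# standing in for the lower-level routine it will eventually call.
def fetch_balance_stub(account_id):
    # Stub simulating the not-yet-integrated data-access routine:
    # it returns a fixed balance instead of querying anything.
    return 500.0

def can_withdraw(account_id, amount, fetch_balance):
    # Top-level routine under test; the real fetch_balance arrives later.
    return amount <= fetch_balance(account_id)

# The top-level logic is verified before any lower-level code exists.
assert can_withdraw("A1", 200.0, fetch_balance_stub) is True
assert can_withdraw("A1", 900.0, fetch_balance_stub) is False
```

When the real data-access routine is integrated, the stub is simply replaced and the same tests are re-run.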
2.2. BOTTOM-UP INTEGRATION
Bottom-up integration is the traditional strategy used to integrate the components of software into a functioning whole. It consists of unit testing, followed by subsystem testing, followed by testing of the entire system; modules are tested in isolation from one another in an artificial environment known as a "test harness". Construction and testing begin with the modules at the lowest level in the program structure. Since the modules are integrated from the bottom up, the processing required by modules subordinate to a given level is always available, and the need for stubs is eliminated. Unit testing has the goal of discovering errors in the individual modules of the system; the main aim of subsystem testing is to verify the operation of the interfaces between modules in the subsystem; and system testing is concerned with subtleties in the interfaces, decision logic, control flow, recovery procedures, throughput, and timing characteristics.
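A small driver sketch in Python (names are hypothetical) showing a low-level module exercised by its harness before any caller exists:

```python
# Bottom-up sketch: the lowest-level module exists first, so a small
# driver (the "test harness") exercises it before any caller is integrated.
def normalise_name(raw):
    # Low-level module under test: collapse whitespace, title-case the name.
    return " ".join(raw.split()).title()

def driver():
    # Driver program supplying the data a future caller would provide.
    samples = {"  alice   smith ": "Alice Smith", "BOB": "Bob"}
    for raw, expected in samples.items():
        assert normalise_name(raw) == expected
    return "all driver cases passed"

result = driver()
```

The driver plays the role a stub plays in top-down integration, but inverted: it substitutes for the missing *caller* rather than the missing callee.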
2.3. SANDWICH INTEGRATION
The sandwich strategy is predominantly top-down, but bottom-up techniques are used on some modules and subsystems. This mix alleviates most of the problems of top-down integration while retaining its advantages at the subsystem and system levels.
3. VALIDATION TESTING
Validation testing provides the final assurance that the software meets all the functional, behavioral and performance requirements. Validation testing begins when the software functions in a manner that can reasonably be expected by the customer: the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of tests confirms that the software behaves as the user expects. Validation testing refers to the process of using the software in a live environment in order to find errors; if a failure occurs during validation, the software is modified accordingly.
After each validation test case is executed, one of two possible conditions exists:
a. The function or performance characteristics conform to the specification and are accepted.
b. A deviation from the specification is uncovered and a deficiency list is created.
Once the application was made free of all the uncovered errors, dummy data was input to ensure that the software developed satisfied all the requirements of the customer.
4. OUTPUT TESTING
After performing validation testing, the next step is output testing of the proposed system; no system is useful if it does not produce the required output in the specified format. The outputs generated or displayed by the system under consideration are tested by asking the users about the format they require. The output format is considered in two ways: on screen, and as e-mails. The data required by the user is displayed in the form of static and dynamic HTML pages on user commands. The pages are tested for displaying accurate and up-to-date information, and for providing all the functionality the user requires in the page itself.
5. USER ACCEPTANCE TESTING
Acceptance testing involves the planning and execution of functional tests, performance tests, and stress tests to verify that the implemented system satisfies its requirements. In addition, stress tests are performed to determine the limitations of the system.
Main Testing Principles
§ All tests should be traceable to the customer requirements. From the customer's point of view, the most severe defects are those that cause the program to fail to meet its requirements.
§ Tests should be planned long before the actual testing begins. All testing should be planned and designed before any code is generated.
§ The Pareto principle applies to software testing. The Pareto principle implies that 80% of all errors uncovered will likely be traceable to 20% of all program components. The problem is to isolate these suspected components and thoroughly test them.
§ Testing should begin "in the small" and progress towards testing "in the large". The first tests focus on individual components. As testing progresses, the focus shifts to integrated clusters of components and finally to the entire system.
§ Exhaustive testing is not possible. The number of path combinations in even a small program is very large. So it is not possible to test all these paths. But it is possible to test the program logic and ensure that all conditions have been met.
§ To be most effective, testing should be conducted by an independent third party. The software engineer who created the program is not the best person to conduct tests for the software. So, in order to find maximum number of errors in the software, an independent third party is preferred.
Attributes of a good test are:
§ A good test has a high probability of finding an error. To achieve this goal, the tester must understand the software to realize how the software might fail.
§ A good test is not redundant. Testing time and resources are limited. So every test must have a different purpose.
§ A good test should be "best of breed". There can exist a group of tests having the same intention; in such cases, only the best subset of these tests is used.
§ A good test should be neither too simple nor too complex. It is possible to combine a series of tests into one test, but this can mask certain errors. Hence each test should be executed separately.