Posted in Information Technology & Systems
Definition: Unit Testing
Unit testing is a software testing method in which individual units of a program (e.g. procedures, classes, functions, interfaces) are tested for fitness for use. The tests are written by the developers themselves to ensure the code functions and behaves as expected. The basic goal of unit testing is thus to test each isolated part of the program and determine whether it behaves as expected: each function or procedure should return the correct values for a given set of input values, and for any incorrect input it should handle failures and exceptions gracefully.
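As a minimal sketch of this idea, the test below checks a small (hypothetical) `divide` function both for correct return values on valid input and for proper exception handling on invalid input, using Python's standard `unittest` module:

```python
import unittest

def divide(a, b):
    """Return a / b, raising ValueError for a zero divisor."""
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_correct_value(self):
        # Valid input: the unit should return the expected value.
        self.assertEqual(divide(10, 2), 5)

    def test_invalid_input(self):
        # Invalid input: the unit should raise, not fail silently.
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```

Each test method exercises one behaviour of the unit in isolation, so a failure points directly at the behaviour that broke.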
A common approach to unit testing involves writing drivers (to simulate the calling unit) and stubs (to simulate the called unit). These drivers and stubs can be reused at subsequent stages, which allows the regular changes that occur during development to be re-tested periodically without writing extra code.
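To illustrate, the sketch below uses a hypothetical `get_price` function that depends on an external price service. The test class acts as the driver, and a stub (built with Python's `unittest.mock`) stands in for the called unit so `get_price` can be tested in isolation:

```python
import unittest
from unittest import mock

def fetch_price_from_service(product_id):
    """The called unit: in production this would hit a remote service."""
    raise RuntimeError("network unavailable in tests")

def get_price(product_id, fetch=fetch_price_from_service):
    """The calling unit under test: applies a 10% discount
    to the price returned by the service."""
    return round(fetch(product_id) * 0.9, 2)

class TestGetPrice(unittest.TestCase):
    def test_discount_applied(self):
        # The stub replaces the called unit with a canned response,
        # so the test never touches the real service.
        stub = mock.Mock(return_value=100.0)
        self.assertEqual(get_price("sku-1", fetch=stub), 90.0)
        # The driver also verifies how the unit used its dependency.
        stub.assert_called_once_with("sku-1")

if __name__ == "__main__":
    unittest.main()
```

Because the stub is reusable, the same test continues to cover `get_price` as the code changes, without rewriting the test harness each time.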
Unit testing is a form of white-box testing. It offers several advantages:
a) Issues and errors in the code are detected at an early stage and rectified, preventing impact on other dependent pieces of code.
b) Unit testing allows developers to maintain and change the code as requirements evolve, by making the code less interdependent.
c) Because bugs are detected early, the cost of fixing them is lowered. Unit testing also helps automate the testing process.
d) Unit testing makes debugging simpler: if a unit test fails, only the most recent changes to the code need to be debugged.
Unit testing requires patience and can be tedious. A lot of documentation is required, so it can be time consuming. However, it is one of the most important stages of testing and should be done frequently and continuously.