Joined: 22 Nov 2005 Posts: 700 Location: Troy, Michigan USA
Just curious, how do you execute parallel testing when testing your programs? Please, someone share their experiences with parallel testing.
I'm going to assume (always gets me into trouble) that you are comparatively new to programming (coding, testing), and take some literary license.
I am used to several degrees of testing.
The most basic testing is the UNIT test, where you prepare input data that will exercise the change(s) you are making to the program. In an ideal situation you will create input test data that exercises every branch in the logic (each IF statement in the program), including exceptions and error conditions. The object of testing, particularly unit testing, is not only to see whether your program runs correctly with good input data, but even more so, to see if there is any way you can break the program with bad input data. Run your program with this input data, and justify (explain) each field value in the output data.
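To make the branch-coverage idea concrete, here is a minimal sketch in Python (the program, its rules, and the charge amounts are hypothetical, not from the original post): one test input per branch, good data and bad.

```python
# Hypothetical "program" with three branches, plus unit tests that
# exercise every branch, including the error path for bad input data.

def shipping_charge(weight_lbs):
    """Return the shipping charge for a parcel (hypothetical rules)."""
    if weight_lbs <= 0:
        raise ValueError("weight must be positive")  # bad-input branch
    if weight_lbs <= 10:
        return 5.00                                  # light-parcel branch
    return 5.00 + (weight_lbs - 10) * 0.50           # heavy-parcel branch

# One test input per IF branch, including an attempt to break the program.
assert shipping_charge(5) == 5.00           # light branch
assert shipping_charge(20) == 10.00         # heavy branch: 5.00 + 10 * 0.50
try:
    shipping_charge(0)                      # bad input: try to break it
    assert False, "expected ValueError"
except ValueError:
    pass                                    # error branch behaved as designed
```

The same discipline applies whatever the language: list the IF statements, and make sure at least one test record drives each leg of each one.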
The next step is to do a Production Parallel test. In this testing you run the Production program with Production input data, and save the Production output data. Now, with the same Production input data, run your Test program, and save the Test output data.
Now the fun part: you get to compare the two sets of output data. Hopefully your shop has a good compare utility. I usually compare the Test output data with the Production output data until I have maybe 10 records that do not match 100%. I look at each field that does not match, and explain why. Is it one of the fields that we expect to be different? This is good. If it is not one of the fields identified in your UNIT test, you need to explain why that field's data changed; your change may have affected another part of the program that you did not anticipate. Now, run the compare utility again, excluding the fields identified in the previous runs. Continue this cycle until the compare utility (with the excluded fields) reports that all records match.
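If your shop's compare utility doesn't support field exclusions, the compare-and-exclude cycle can be sketched in a few lines of Python (the record layout, field names, and values here are invented for illustration):

```python
def compare_records(prod_records, test_records, key_field, excluded_fields=()):
    """Field-level compare of Production vs. Test output records
    (each record a dict of field name -> value).

    Returns (key, field, prod_value, test_value) for every mismatch,
    skipping fields already explained and added to the exclusion list.
    """
    mismatches = []
    for prod, test in zip(prod_records, test_records):
        for field in prod:
            if field in excluded_fields:
                continue
            if prod.get(field) != test.get(field):
                mismatches.append((prod[key_field], field,
                                   prod[field], test.get(field)))
    return mismatches

# Hypothetical output records: one account, where the RATE change is expected.
prod = [{"ACCT": "001", "BAL": "100.00", "RATE": "0.05"}]
test = [{"ACCT": "001", "BAL": "100.00", "RATE": "0.06"}]

# First pass: no exclusions; examine and explain each mismatch.
print(compare_records(prod, test, "ACCT"))
# -> [('001', 'RATE', '0.05', '0.06')]

# Second pass: RATE is an expected change, so exclude it and rerun.
print(compare_records(prod, test, "ACCT", excluded_fields={"RATE"}))
# -> []
```

The cycle ends when the second style of run, with all explained fields excluded, returns no mismatches at all.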
At this point, you should have a set of fields whose values have changed due to the modifications you made to the program, with explanations for all.
Now, depending on the magnitude of your change, you may have to do one or both of the following.
STRING testing. Set up a small job stream of production programs that are predecessor and successor to your program. Run this small job stream to make sure you can handle the input data from your predecessor, and that your successor can handle your output data.
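In a real shop this job stream would be JCL, but the shape of a STRING test can be sketched in Python (the three steps and the surcharge field are hypothetical):

```python
# Hypothetical three-step job stream: the predecessor produces your
# input, your changed program transforms it, and the successor consumes
# your output. The string test runs them in sequence and verifies that
# each step accepts the prior step's output.

def predecessor():
    # Predecessor job: produces the records your program will read.
    return [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]

def changed_program(records):
    # The program under test: adds a (hypothetical) 2% surcharge field.
    return [{**r, "surcharge": round(r["amount"] * 0.02, 2)} for r in records]

def successor(records):
    # Successor job: totals the surcharges; fails if the field is absent.
    return sum(r["surcharge"] for r in records)

# Run the small job stream end to end.
total = successor(changed_program(predecessor()))
```

If the successor step runs clean, your output layout is compatible with the downstream jobs; if the predecessor's output breaks your program, you have found an input-handling problem before production did.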
INTEGRATION testing is a superset of the STRING test, where you may want to run entire applications, or even entire job streams with multiple applications involved. Because of the massive amount of data, you probably will want to identify, in advance of the test, specific records that you expect to change, and how they will affect other programs within your application and programs outside it. Run the data and spot-check these results in the appropriate jobs.
Thank you so much for the detailed description of parallel testing. I was involved mostly with unit testing, where my programs are tested in the test region, with the test region data as input. Usually these test regions are refreshed with current production data every fortnight.