somunote
New User
Joined: 23 Jan 2010 Posts: 7 Location: Toronto
When comparing two versions of a program to determine which one is faster, I focus only on the elapsed time of the step.
Is that correct?
This time is affected by pauses and stops during execution and may not reflect reality.
Should I pay attention to other variables (EXCP, TCB, etc.)?
Thanks.
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
The actual CPU time usage is the TCB time. SRB time is the system overhead involved in your task and is usually considered part of the overall CPU usage. Using elapsed time to measure which program is faster is not a good thing to do -- if the system is heavily loaded during one run and lightly loaded during another, there can be 2 or 3 orders of magnitude difference in the results (yes, one may run 1000+ times faster than the other) even though they use exactly the same amount of CPU time. EXCP count measurement is important if you're looking at different I/O schemes but otherwise doesn't typically have a major impact (although it can if there are several million EXCPs being done for your programs).

While it can be instructive to look at elapsed time, you must satisfy yourself that the system load is equivalent for the two runs, since even minor differences in system load can have major impacts on elapsed time. If you do not do so, you are wrong to use elapsed time as a comparative measure.
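For reference, the TCB (CPU) and SRB times discussed above appear in the step-termination messages in the job log. An illustrative pair of messages (the exact layout varies by z/OS release and installation options) might look something like:

```
IEF373I STEP/STEP1   /START 2010023.1002
IEF374I STEP/STEP1   /STOP  2010023.1005 CPU    0MIN 12.34SEC SRB    0MIN 00.21SEC
```

Elapsed time is the difference between the START and STOP timestamps; the CPU and SRB figures are the numbers that stay comparable between runs regardless of system load.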
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello and welcome to the forum,

While the goal is often to reduce elapsed time (it is elapsed time that is most visible to managers and users<g>), elapsed time is nearly worthless for measuring how one process compares to another. As Robert mentioned, it is quite difficult to ensure the exact same load for both (or more) tests.

For the most part, I've had the best results in reducing elapsed time by reducing the amount of i/o a process requires and/or improving the way the i/o that is needed is accomplished.

Once upon a time, organizations sometimes invested in creating benchmark jobs to be run on a completely empty mainframe. I have not seen this done for a long time. . .
somunote
dick scherrer wrote:

Quote:
For the most part, I've had the best results in reducing elapsed time by reducing the amount of i/o a process requires and/or improving the way the i/o that is needed is accomplished.
Hi,
Could you please let me know how to improve the i/o?
Do you mean adjusting the BUFNO? Avoiding VB records?
Thank you very much.
Robert Sample
The quick and easy fixes are to change the block size to half-track blocking and make sure there are enough buffers in the JCL. Long term, looking at the program to determine if there is any way to cut down the reads and writes can help, but that is not a fast solution.
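As an illustrative sketch of those two quick fixes (the dataset names, space allocation, and buffer count here are invented; many sites simply code BLKSIZE=0 and let the system pick the optimum half-track block size), the JCL-level changes might look like:

```
//* Half-track blocking: BLKSIZE=0 asks the system to choose the
//* largest block size that fits a half track on the device
//OUTFILE  DD DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
//* Extra buffers on a heavily read sequential input file
//INFILE   DD DSN=MY.INPUT.FILE,DISP=SHR,
//            DCB=BUFNO=20
```

The buffer count is a trade-off: more buffers reduce waits for physical I/O at the cost of additional virtual storage in the region.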
dick scherrer
Hello,

Quote:
could you please let me know how to improve the i/o?

Other than what Robert mentioned, there is little "generic" to do. Typically, some amount of study is needed to determine which i/o might be reduced or eliminated. One such example is the case where 10 processes each read several hundred million records to produce multiple small reports and output files. Changing these to pass the huge volume only once and create extract files for use in the subsequent processes reduced total i/o by over 90%.
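As a sketch of that single-pass approach (the dataset names, field position, and selection values below are invented for illustration), DFSORT's OUTFIL can split one read of the big file into several extracts:

```
//SPLIT    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=PROD.MASTER.FILE,DISP=SHR
//EAST     DD DSN=MY.EXTRACT.EAST,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//WEST     DD DSN=MY.EXTRACT.WEST,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//SYSIN    DD *
* Copy the input once; route records to extracts by a region code
* assumed (for this example) to be in column 10
  SORT FIELDS=COPY
  OUTFIL FNAMES=EAST,INCLUDE=(10,1,CH,EQ,C'E')
  OUTFIL FNAMES=WEST,INCLUDE=(10,1,CH,EQ,C'W')
/*
```

The huge volume is read once; each downstream report process then works from a much smaller extract instead of re-reading the master file.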
Quote:
avoiding vb records?

No (and how did this thought originate?) -- often, variable-length data saves significant amounts of storage space and reduces the number of i/o's required.