I am working on a project where I need to reduce the CPU consumption of a batch application. I want to know all the methods available to do this, whether it is optimizing SORT, tuning SQL, or any other utilities that use less CPU.
First, unless you find a way to reduce the number of records being sorted, "optimizing SORT" is generally futile -- SORT has already been very optimized and there is usually little that can be done to reduce CPU usage.
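To illustrate the point about record counts: the cheapest record to sort is one you never pass to the sort at all (on the mainframe, an INCLUDE/OMIT statement or an input exit). A minimal Python sketch of the idea, using made-up keys:

```python
# Illustrative only: the win comes from shrinking n before the
# O(n log n) sort runs -- the same effect as a SORT INCLUDE/OMIT.
records = [(i * 7919) % 10_000 for i in range(100_000)]  # made-up keys

# Filter first, then sort only the records you actually need.
wanted = [r for r in records if r < 1_000]
wanted.sort()

print(len(records), len(wanted))  # the sort sees 10,000 rows, not 100,000
```

The filtering pass is linear and cheap; the sort's cost grows faster than linearly, so every record you drop up front pays off more than once.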
Second, you need a run-time analysis program (such as STROBE) to be effective at reducing CPU usage. Programmers are generally poor at understanding what causes their programs to use CPU time, and hence having a tool will be vastly helpful.
Third, your best candidate for CPU reduction will be the SQL code. However, you may have to add disk space (for indexes, as one example) in order to reduce CPU usage.
Fourth, as Bill suggested -- talk to your production support group; they most likely know the best candidates to look at for CPU reduction!
Fifth, your post title "MIPS/CPU consumption reduction in Batch" is completely wrong. MIPS is a measurement that should NEVER be applied to anything less than the entire machine. MIPS can be reduced by removing the current machine and replacing it with a slower machine -- MIPS cannot otherwise be reduced. Yes, it is common to use MIPS and CPU interchangeably -- but they are not the same and should NEVER be swapped!
Speed or performance of a computer is measured in millions of instructions per second (MIPS). In the early days, mainframe capacity was measured by running a standard routine over and over again. For example, an early IBM S/370 computer could run 1 million instructions a second.
However, MIPS is no longer used to measure speed. Mainframe computers have simple as well as complex instructions. A program of 5 million complex instructions takes a lot more time than a program of 5 million simple instructions, because complex instructions take more CPU cycles. Many computer engineers jocularly dubbed MIPS the "misleading indicator of performance".
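The simple-versus-complex point falls straight out of the classic CPU performance equation: time = instruction count x cycles per instruction / clock rate. A rough sketch, where the CPI figures and the 1 GHz clock are invented purely for illustration:

```python
def run_time_seconds(instruction_count, cycles_per_instruction, clock_hz):
    """Classic performance equation: time = IC * CPI / clock rate."""
    return instruction_count * cycles_per_instruction / clock_hz

CLOCK_HZ = 1_000_000_000  # hypothetical 1 GHz machine

# The same "5 million instructions", wildly different CPU time:
simple = run_time_seconds(5_000_000, cycles_per_instruction=1, clock_hz=CLOCK_HZ)
complex_mix = run_time_seconds(5_000_000, cycles_per_instruction=20, clock_hz=CLOCK_HZ)

print(simple, complex_mix)  # 0.005 vs 0.1 seconds for the same instruction count
```

Two workloads with identical "MIPS" can differ twentyfold in elapsed CPU time, which is exactly why the metric misleads.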
CPU seconds are still often used by programmers for measuring performance and for chargeback. The problem is that the work done by a zEC12 machine in one CPU second is not the same as the work done by other mainframe models in one CPU second: one zEC12 CPU second is different from one z10 CPU second. So IBM introduced a standard measure called Service Units (SU). SUs and MSUs are an ingenious way to measure mainframe capacity irrespective of the processor or the workload.
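The SU idea can be sketched as a normalization step: raw CPU seconds are multiplied by a model-specific SRM constant to get service units, and an MSU is a million service units of capacity per hour. The constants below are invented for illustration only; the real per-model values are published by IBM:

```python
# Hypothetical SRM constants (CPU service units per CPU second).
# Real values differ per processor model; these numbers are made up.
SRM_CONSTANT = {"z10": 13_000.0, "zEC12": 50_000.0}

def service_units(cpu_seconds, model):
    """Normalize raw CPU seconds into model-independent service units."""
    return cpu_seconds * SRM_CONSTANT[model]

def msu_per_hour(model, engines=1):
    """MSU rating = millions of service units deliverable per hour."""
    return SRM_CONSTANT[model] * 3600 * engines / 1_000_000

# Ten CPU seconds represent very different amounts of work on each box:
print(service_units(10, "z10"), service_units(10, "zEC12"))
print(msu_per_hour("zEC12"))  # one engine's capacity, in MSUs
```

Because the SRM constant absorbs the per-model speed difference, two jobs that burn different CPU seconds on different boxes can still be compared (and billed) in the same SU currency.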
One of the most misused terms in IT has to be MIPS. It's supposed to stand for "millions of instructions per second," but many alternate meanings have been substituted:
Misleading indicator of processor speed
Meaningless indicator of processor speed
Marketing indicator of processor speed
Management's impression of processor speed
Jokes aside, management has a tendency to want one figure to represent a processor's capacity. And companies are spending large amounts of money based on a poorly understood indicator, for both software and hardware acquisitions.
Unfortunately, no one number describes capacity. Processor speed varies depending on many factors, including (but not restricted to):
I/O access density
Software levels (OS and application subsystems)
Workload mix (the largest contributor to the variability of capacity figures)
If you Google "ibm mips", you can find about 486,000 web sites, and many of them discuss how MIPS has not been valid for comparisons or for measuring performance for decades.