vicky1373
New User
Joined: 21 Jan 2006 Posts: 7
I have a job that has been running for a long time and I don't know why.
The EXCP-CNT is increasing and the CPU% also changes, but the job still appears to be running after 3 hours... The REAL is 6017 and not changing.
Can anybody help me out?
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
If EXCP count is increasing, there's I/O going on. The CPU% is changing, so we know it's not in a hard loop. REAL probably won't change a whole lot once the program starts unless you do dynamic calls or other run-time items that change memory.
How long do you think the job should take? What is that estimate based on? Could there be more data than expected, causing the longer run time? Is this a test or a production job? If you're convinced the job is running longer than it should, you'll need to run it under a debugger or add some DISPLAY (if COBOL) or other output statements to track what it's doing.
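A minimal sketch of the kind of progress DISPLAY suggested above, assuming a COBOL batch program with a main read loop; the counter name and the 100,000-record interval are illustrative:

```cobol
      * Hypothetical progress counter -- add to WORKING-STORAGE:
       01  WS-REC-COUNT        PIC 9(9) COMP VALUE 0.

      * In the main read loop, after each successful READ:
           ADD 1 TO WS-REC-COUNT
           IF FUNCTION MOD(WS-REC-COUNT, 100000) = 0
               DISPLAY 'RECORDS READ: ' WS-REC-COUNT
           END-IF
```

Watching how quickly the count grows in the job log tells you whether the program is still making steady progress or has slowed to a crawl at some point in the input.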
Bill O'Boyle
CICS Moderator
Joined: 14 Jan 2008 Posts: 2501 Location: Atlanta, Georgia, USA
Check the COBOL file "ASSIGN" (SELECT) statements in the program(s) and verify that they are correct for the type of access.
For example, if the program is accessing a file SEQUENTIALLY but the file is defined with "ACCESS DYNAMIC" in the program, change it to SEQUENTIAL.
Also, in the JCL, the AMORG parameter (on AMP=) can help with VSAM buffering, as can Batch LSR (BLSR) for VSAM files.
Bill
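A sketch of the SELECT clause this refers to, assuming an indexed (KSDS) input file read front to back; the file and field names are illustrative:

```cobol
      * File is read front-to-back, so SEQUENTIAL access avoids
      * the extra overhead that ACCESS DYNAMIC can bring:
           SELECT IN-FILE ASSIGN TO INFILE
               ORGANIZATION IS INDEXED
               ACCESS MODE  IS SEQUENTIAL
               RECORD KEY   IS IN-KEY
               FILE STATUS  IS WS-IN-STATUS.
```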
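A hedged JCL sketch of both ideas; the dataset name, DD names, and buffer counts are hypothetical, and BLSR setup conventions can vary by site:

```jcl
//* VSAM buffering via AMP (AMORG identifies the DD as VSAM):
//VSAMIN   DD DSN=PROD.MASTER.KSDS,DISP=SHR,
//            AMP=('AMORG','BUFNI=10','BUFND=20')
//* Batch LSR subsystem, pointing at the real dataset DD:
//MASTER   DD SUBSYS=(BLSR,'DDNAME=VSAMIN')
```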
vicky1373
New User
Joined: 21 Jan 2006 Posts: 7
Thanks for the responses.
The job is running in test, and the only difference is that the input is a bit heavy... Nothing was changed in the program, and in production this job runs in 10 minutes. The input is only tripled..
But compared to the input, the time taken is far too long... it's over 5 hours now..
Also, the ASSIGN statements are correct...
Bill Dennis
Active Member
Joined: 17 Aug 2007 Posts: 562 Location: Iowa, USA
Perhaps the system is just very busy?
Another cause can be reading or writing an UNBLOCKED file (one record per block).
If this is a new input or output file, ensure the BLKSIZE= in the JCL allows for large blocks and that a "BLOCK CONTAINS 1 RECORD" clause in the program is not overriding the JCL.
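A sketch of the FD wording that lets the JCL control blocking, assuming a fixed-length QSAM file; the file name is illustrative:

```cobol
      * BLOCK CONTAINS 0 defers blocking to JCL/the system,
      * instead of forcing one record per block:
       FD  OUT-FILE
           BLOCK CONTAINS 0 RECORDS
           RECORDING MODE IS F.
```

On the JCL side, coding BLKSIZE=0 (or omitting BLKSIZE) on a new output dataset lets the system pick an efficient block size for the device.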
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
The code may be such that as the volume increases, the resources needed to complete the run grow exponentially rather than linearly.
When the job ran in production, how much CPU and I/O did it use? How much has the current test execution taken so far?
If your mainframe is nearly "pegged", it may be that test jobs are greatly reduced in priority.
The more info on how the process works, the more we may be able to offer suggestions.
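A quick back-of-the-envelope on the figures already in this thread (10 minutes in production, input tripled), assuming run time scales as some power k of the input size n:

```latex
T(3n) \approx 3^{k}\,T(n)
\quad\Rightarrow\quad
k=1:\ \sim 30\ \text{min},\qquad
k=2:\ \sim 90\ \text{min},\qquad
\text{observed} > 300\ \text{min}\ (>30\times)
```

Five-plus hours is worse than even quadratic scaling would predict, which suggests something beyond pure data volume: buffering/blocking, I/O configuration, or test-system priority, as discussed above.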