Priya_Shankar
New User
Joined: 07 Aug 2007 Posts: 22 Location: Chennai
A batch job takes a long time to read a file which contains millions of records. In what ways can the performance of reading the file be improved?
Also, if an abend such as S0C4, S0C7, SB37, or SE37 occurs in the middle of the job during the night batch cycle, how can it be rectified?
Since I (we) don't have access to the production jobs, I'm not able to find and analyse the cause.
KReddy5
New User
Joined: 26 Jun 2007 Posts: 1 Location: Chennai
Hi,
My assumption is that you are updating either DB2 tables or VSAM files after reading each record, and in that scenario it can take some time to perform the updates for all those records.
You can fine-tune the DB2 queries if you are using DB2 in your programs.
For space abends like SB37 and SE37, there are third-party tools like TSO BLKSIZE: you pass parameters such as the approximate number of records, the length of each record, and the record format, and the tool suggests the primary and secondary space allocations.
For S0C4 or S0C7 abends, we need to find the offending records from the job spool, and you need to delete/update those records after consulting with the DBA or the client. (It is always better to take a backup of the offending records before deleting them.)
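The kind of allocation such a sizing tool would produce looks roughly like this in the JCL; the dataset name and the numbers below are hypothetical (about five million 80-byte records, which is around 500 cylinders on a 3390), not values from the original poster's job:

```jcl
//* Hypothetical DD for ~5 million 80-byte records.
//* Generous primary + secondary extents help avoid SB37/SE37;
//* RLSE gives unused space back when the dataset is closed.
//OUTFILE  DD DSN=MY.OUTPUT.FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(500,100),RLSE),
//            DCB=(RECFM=FB,LRECL=80)
```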
Priya_Shankar
New User
Joined: 07 Aug 2007 Posts: 22 Location: Chennai
Thanks for the reply.
We are not using DB2; everything is maintained through VSAM files. Also, for space abends you suggested the third-party tool TSO BLKSIZE. If no such third-party tool is available, what remedy could be taken in that case?
sandeep1dimri
New User
Joined: 30 Oct 2006 Posts: 76
Hi,
I am not sure if you can implement the setup below.
1. Read the input file.
2. Keep a status file that contains information about the record being processed when the abend occurred. It works like this:
Read the input file and write the corresponding record to the status file with status 'A' (anticipating an abend); if all processing goes well for that record, change its status to 'C' in the status file.
3. If your job abends, restart processing from the record after the last record with status 'C' in the status file. This can work fine if both files are KSDS.
Thanks,
sandeep
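Sandeep's status-file idea could be sketched along these lines in COBOL. This is only an outline of the restart logic under the assumptions above (both files KSDS, same key); the file, record, and field names are hypothetical, and file definitions and the actual business processing are omitted:

```cobol
      * Hypothetical sketch: STATUS-FILE is a KSDS keyed on the
      * same key as INPUT-FILE; STAT-FLAG is 'A' while a record
      * is in flight and 'C' once it completed successfully.
       PROCESS-RECORDS.
           PERFORM UNTIL END-OF-INPUT
               READ INPUT-FILE NEXT RECORD
                   AT END SET END-OF-INPUT TO TRUE
               END-READ
               IF NOT END-OF-INPUT
                   MOVE IN-KEY TO STAT-KEY
                   READ STATUS-FILE KEY IS STAT-KEY
                       INVALID KEY MOVE SPACE TO STAT-FLAG
                   END-READ
      *            Skip records already completed on a prior run
                   IF STAT-FLAG NOT = 'C'
      *                Mark in flight before doing the real work
                       MOVE 'A' TO STAT-FLAG
                       WRITE STATUS-RECORD
                           INVALID KEY REWRITE STATUS-RECORD
                       END-WRITE
                       PERFORM PROCESS-ONE-RECORD
      *                Mark complete only after a clean update
                       MOVE 'C' TO STAT-FLAG
                       REWRITE STATUS-RECORD
                   END-IF
               END-IF
           END-PERFORM.
```

The key point of the design is that a record is flagged 'C' only after its update finishes, so after an abend the rerun simply skips everything flagged 'C' and resumes at the first record still flagged 'A' or missing.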
PeD
Active User
Joined: 26 Nov 2005 Posts: 459 Location: Belgium
What exactly do you mean by just "reading" the file?
Also, some test phases such as a volume test and a stress test must be done before going into production.
And when abends occur, usually the production follow-up team fixes the problem and then passes the documented problem to the maintenance team.
It is reasonable not to have direct access to the production environment.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
In what ways can the performance of reading the file be improved?
Until you know what resources are being used, it will be difficult to improve it. You (or someone there) should be able to find statistics on the attributes of the file(s) being read, as well as the number of I/Os used in a given process. The usage information is available in the SMF records.
Are you reading VSAM files sequentially or via some key? Are you reading large QSAM files?
Your x37 abends are space-related, and to correct and re-run them you would not need access to production. For an S0C7, you will need access to the abend information so that you can determine where the code or the data is wrong. If you can re-create the abend in a non-production environment, you can debug without access to production. As Pierre mentioned, it is common for developers not to have access to production JCL and/or datasets.
dr_te_z
New User
Joined: 08 Jun 2007 Posts: 71 Location: Zoetermeer, the Netherlands
The one thing you can (and should stop doing by hand): blocking.
Are there BLOCK CONTAINS clauses in your code? Are there block sizes coded in your JCL?
Take them all out; that is obsolete. Let the system determine the optimum block size, and re-create the dataset on your fastest DASD.
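For example, with system-determined blocksize you simply leave BLKSIZE out of the DCB (or code BLKSIZE=0) and the system picks the largest block size that fits the device. The dataset name below is hypothetical:

```jcl
//* No BLKSIZE coded: the system chooses the optimum block
//* size for the device (system-determined blocksize, SDB).
//NEWFILE  DD DSN=MY.QSAM.FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(100,20),RLSE),
//            DCB=(RECFM=FB,LRECL=80)
```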
ramfrom84
New User
Joined: 23 Aug 2006 Posts: 93 Location: chennai
Reading millions of records does not, by itself, consume much time; you need to look into the logic. You can tune the program to make it run faster.
SB37 and SE37 are volume/space-related issues. Sometimes too little space is allocated for the file, for reasons such as: no space left on the corresponding volume serial number, only limited space on the given volume, or the volume being used by other files. To eliminate this you can use a tape dataset or a GDG.
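A GDG is set up once with an IDCAMS DEFINE step, after which each run can write a new generation (+1) instead of refilling a single dataset. A minimal sketch, with a hypothetical base name and a limit of seven generations:

```jcl
//* Hypothetical one-time step: define a GDG base that keeps
//* the last 7 generations and scratches older ones.
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(MY.DAILY.FILE) LIMIT(7) SCRATCH)
/*
```

A subsequent job would then write to MY.DAILY.FILE(+1) and read the previous run's data as MY.DAILY.FILE(0).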