adarsh.bhalke
New User
Joined: 06 May 2007 Posts: 16 Location: pune
Hi,
I am performing file I/O on a vast amount of data, but when I submit the JCL it takes a long time to produce the result.
Can anybody tell me how to monitor the performance here?
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
The I/O for any file is dependent on the access method being used.
Is it VSAM? If yes, is it a KSDS, and is the access random, sequential, or skip sequential?
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
How many records equals "vast"? Does this job use a single data source (i.e. the vast file)? Does this job interface with any database tables?
What is "taking more time" based on - more than some similar job takes, or just more than you'd prefer? Has this been running for some period of time, or is this a new process with no history?
What kind of monitoring did you have in mind?
Once you post more info about your process and reply to the questions asked, we may be able to clarify things.
Devzee
Active Member
Joined: 20 Jan 2007 Posts: 684 Location: Hollywood
Does your data reside on TAPE?
adarsh.bhalke
New User
Joined: 06 May 2007 Posts: 16 Location: pune
No. Actually my first sequential file contains more than 10,000,000 records and the second file contains 100 match keys. If a match key from the second file matches any record in the first file, I have to write that record to a different (third) file. The first file is multivolume. When I submit the job it takes more than 5 hours, because for a particular match key from the second file there are more than 10,000 matching records in the first file.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Are the records in the second file the match keys and nothing else?
How long does it take to read the 10 million records if the match is not being performed? (If you don't have some code that will do this, just "copy" the file with IEBGENER or SORT and assign the output file to DUMMY.) Knowing how long it takes to pass the data will help in making an estimate of how long the "real" process should run.
I would expect we can get your process to run in the time it takes to read all of the records plus 10% (or less), if I've correctly understood your requirement. If it takes almost 5 hours to merely read the data, we will have to look further.
Please post back with the answers to the questions above.
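If nothing handy exists to time that read, a bare-bones sketch along these lines should do it - the JOB card and dataset name are only placeholders for your shop's standards:

Code:
//READTIME JOB (ACCT),'READ TIMING',CLASS=A,MSGCLASS=X
//* Pass the big file once with the output DUMMYed out, purely
//* to see how long a single read of the data takes.
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=YOUR.BIG.INPUT.FILE
//SYSUT2   DD DUMMY,DCB=*.SYSUT1
//SYSIN    DD DUMMY

The elapsed and CPU figures in the job output for that step give the baseline to compare the "real" process against.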
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
What are you saying here .... that you read the first file, then go through the second file looking for matches, and then do it all again for the next record of the first file?
Please tell me that this is NOT happening here.
Have you considered making file 1 a KSDS and then doing random reads (on key) driven by the second file? That way you only read file 2 once.
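If you try that, a rough IDCAMS sketch is below. The dataset names, key position/length and space figures are only placeholders; REPRO expects the input already in key sequence, and a KSDS primary key must be unique, so check how duplicate keys in the big file would be handled:

Code:
//MAKEKSDS JOB (ACCT),'BUILD KSDS',CLASS=A,MSGCLASS=X
//* Define a KSDS and load it from the big sequential file so it
//* can be read directly by key.  All names and numbers are
//* placeholders - change them to match the real record layout.
//STEP010  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SEQIN    DD DISP=SHR,DSN=YOUR.BIG.INPUT.FILE
//SYSIN    DD *
  DEFINE CLUSTER (NAME(YOUR.BIG.KSDS) -
                  INDEXED             -
                  KEYS(10 0)          -
                  RECORDSIZE(80 80)   -
                  CYLINDERS(500 50)   -
                  FREESPACE(10 10))
  REPRO INFILE(SEQIN) OUTDATASET(YOUR.BIG.KSDS)
/*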
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
If the "second" file is only the 200 keys, they could be put into an array and SEARCHed using only a single pass of the 10mil record file. . . If the array was build "in sequence", SEARCH ALL might save even more time.
If i've understood the requirement, there wouldn't need to be any other processing. . . . |
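To make that concrete, here is a rough COBOL skeleton of the single-pass approach. The file names (KEYFILE, BIGFILE, MATCHOUT), field names and key length are only placeholders - the SELECT/FD statements and record layouts are assumed to exist elsewhere in the program:

Code:
       WORKING-STORAGE SECTION.
       01  WS-KEY-COUNT            PIC 9(4)  COMP  VALUE ZERO.
       01  WS-KEY-TABLE.
      *    Table of match keys, loaded in ascending sequence so
      *    SEARCH ALL (binary search) can be used.
           05  WS-KEY-ENTRY        OCCURS 1 TO 200 TIMES
                                   DEPENDING ON WS-KEY-COUNT
                                   ASCENDING KEY IS WS-KEY
                                   INDEXED BY KX.
               10  WS-KEY          PIC X(10).
       01  WS-FLAGS.
           05  KEYFILE-EOF         PIC X     VALUE 'N'.
           05  BIGFILE-EOF         PIC X     VALUE 'N'.

       PROCEDURE DIVISION.
       0000-MAIN.
      *    Load the small file of keys into the table.
           PERFORM UNTIL KEYFILE-EOF = 'Y'
               READ KEYFILE
                   AT END MOVE 'Y' TO KEYFILE-EOF
                   NOT AT END
                       ADD 1 TO WS-KEY-COUNT
                       MOVE KEYFILE-KEY TO WS-KEY (WS-KEY-COUNT)
               END-READ
           END-PERFORM
      *    Single pass of the big file - binary search the table
      *    and write matching records to the third file.
           PERFORM UNTIL BIGFILE-EOF = 'Y'
               READ BIGFILE
                   AT END MOVE 'Y' TO BIGFILE-EOF
                   NOT AT END
                       SEARCH ALL WS-KEY-ENTRY
                           AT END CONTINUE
                           WHEN WS-KEY (KX) = BIGFILE-KEY
                               WRITE MATCHOUT-REC FROM BIGFILE-REC
                       END-SEARCH
               END-READ
           END-PERFORM
           GOBACK.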