
Performance monitoring in file I/O operation


IBM Mainframe Forums -> JCL & VSAM
adarsh.bhalke

New User


Joined: 06 May 2007
Posts: 16
Location: pune

PostPosted: Tue Jun 12, 2007 8:55 pm

Hi,
I am performing file I/O operations on a vast amount of data, but when I submit the JCL it takes a long time to produce the result.

Can anybody tell me how to monitor the performance here?
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8797
Location: Welsh Wales

PostPosted: Tue Jun 12, 2007 9:00 pm

The I/O for any file is dependent on the access method being used.

Is it VSAM? If yes, is it a KSDS, and is the access random, sequential, or skip sequential?
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Jun 13, 2007 2:57 am

Hello,

How many records equals "vast"? Does this job use a single data source (i.e. the vast file)? Does this job interface with any database tables?

On what do you base "taking more time"? More than what some similar job takes, or just more than you'd prefer? Has this been running for some period of time, or is this a new process that has no history?

What kind of monitoring did you have in mind?

Once you post more info about your process and reply to the questions asked, we may be able to clarify things.
Devzee

Active Member


Joined: 20 Jan 2007
Posts: 684
Location: Hollywood

PostPosted: Wed Jun 13, 2007 9:15 am

Does your data reside on TAPE?
adarsh.bhalke

New User


Joined: 06 May 2007
Posts: 16
Location: pune

PostPosted: Mon Jun 25, 2007 5:17 pm

No, actually my first sequential file contains more than 10,000,000 records and the second file contains 100 match keys. If a match key from the second file matches any record in the first file, I have to write that record to a different (third) file. The first file is multivolume. When I submit the job it takes more than 5 hours, because for a particular match key from the second file there are more than 10,000 matching records in the first file.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Jun 25, 2007 6:27 pm

Hello,

Are the records in the second file the match keys and nothing else?

How long does it take to read the 10 million records if the match is not being performed? (If you don't have code that will do this, just "copy" the file with IEBGENER or SORT and assign the output file to DUMMY.) Knowing how long it takes to pass the data will help in estimating how long the "real" process should run.
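A minimal sketch of such a timing step, assuming the big file is a cataloged sequential data set (the DSN is illustrative):

Code:
//COPYTEST EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD  DSN=YOUR.BIG.FILE,DISP=SHR
//* DUMMY output discards the records; referring back to SYSUT1's DCB
//* gives IEBGENER valid attributes for the dummy output
//SYSUT2   DD  DUMMY,DCB=*.SYSUT1
//SYSIN    DD  DUMMY

The elapsed and CPU figures in the job output then give a baseline for one pure sequential pass of the data.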

I would expect we can get your process to run in the time it takes to read all of the records plus 10% (or less), if I've correctly understood your requirement. If it takes almost 5 hours merely to read the data, we will have to look further.

Please post back with the answers to the questions above.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8797
Location: Welsh Wales

PostPosted: Mon Jun 25, 2007 6:28 pm

What are you saying here ... that you read the first file and then go through the second file looking for matches, and then do it all again for the next record of the first file?

Please, tell me that this is NOT happening here.

Have you considered making file 1 a KSDS and then doing random reads (on key) driven by the second file? That way you only read file 2 once.
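A rough sketch of turning the flat file into a KSDS with IDCAMS, assuming (purely for illustration) a 10-byte key in columns 1-10 and 80-byte records; all names and space figures are placeholders. Note that KSDS keys must be unique and the input must be in key sequence before the REPRO, so with 10,000+ records per match key the key would have to be extended with a sequence field and the program would browse via START on the leading 10 bytes:

Code:
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
   DEFINE CLUSTER (NAME(YOUR.BIG.KSDS) -
          INDEXED -
          KEYS(10 0) -
          RECORDSIZE(80 80) -
          CYLINDERS(500 100))
   REPRO INDATASET(YOUR.BIG.FILE) OUTDATASET(YOUR.BIG.KSDS)
/*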
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Jun 25, 2007 6:58 pm

Hello,

If the "second" file is only the 200 keys, they could be put into an array and SEARCHed using only a single pass of the 10mil record file. . . If the array was build "in sequence", SEARCH ALL might save even more time.

If I've understood the requirement, there wouldn't need to be any other processing.
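On more recent DFSORT levels the same single-pass match can also be done entirely in JCL with JOINKEYS, rather than a COBOL table lookup; a sketch, with the data set names, the 80-byte record length, and the key in columns 1-10 all illustrative:

Code:
//MATCH    EXEC PGM=SORT
//SYSOUT   DD  SYSOUT=*
//SORTJNF1 DD  DSN=YOUR.BIG.FILE,DISP=SHR
//SORTJNF2 DD  DSN=YOUR.KEY.FILE,DISP=SHR
//SORTOUT  DD  DSN=YOUR.MATCHED.FILE,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(100,100),RLSE)
//SYSIN    DD  *
* Inner join: keep only big-file records whose key appears in the key file
  JOINKEYS FILE=F1,FIELDS=(1,10,A)
  JOINKEYS FILE=F2,FIELDS=(1,10,A)
  REFORMAT FIELDS=(F1:1,80)
  SORT FIELDS=COPY
/*

Each input is passed once and only matching records reach SORTOUT.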