smaru
New User
Joined: 22 Oct 2008 Posts: 49 Location: India
Hi,
We have a KSDS file which is concurrently accessed by more than 30 Jobs. We see the Jobs are getting delayed while accessing this file. When we run the same job outside this window avoiding the contention, it runs a lot quicker.
I would like to know what actually delays the jobs. Is it the type of read (most of them are reads by key) or the number of concurrent jobs?
Is there a limitation on concurrent reads against a VSAM file?
Please let us know your views on this.
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
More than 30 jobs, reading randomly? Curiosity makes me ask: what is the CI size for your VSAM file?
This sounds like an ideal opportunity to get your site support group involved in identifying and, to whatever degree possible, mitigating the bottleneck(s) in your processing.
smaru
New User
Joined: 22 Oct 2008 Posts: 49 Location: India
I am not sure of the importance of the CISZ. The value for the KSDS file is 10240.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
I am not sure of the importance of the CISZ.
Which is why you should read VSAM Demystified from cover to cover. . .
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
30 batch jobs, doing 50 I/O per second each, is 1,500 attempted I/O per second. Times 10,240 bytes per I/O (which is why CI size is important) is over 15 megabytes per second. If the path from the disk system to the CPU is ESCON, which has a maximum throughput of 17 Mbytes/s, then your batch jobs are attempting to use over 80% of the channel by themselves. Add any other jobs that might need data from the same disk, and the index reads (the index is frequently on the same disk as well), and you can rapidly attempt to push more data through the pipe than it can physically handle. Delays become inevitable, and only a major system redesign can alleviate the issue.
And 50 I/O per second is not a high rate -- peak rates can be well over 1000 per second. The math explains your delays.
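The back-of-the-envelope arithmetic above can be sketched in a few lines. The figures (30 jobs, 50 I/Os per second each, one 10,240-byte CI per I/O, 17 MB/s ESCON) are taken from the post; the assumption that every read transfers exactly one CI is a simplification for illustration.

```python
# Channel-utilization estimate for 30 batch jobs reading a KSDS.
# Simplifying assumption: each random read transfers one full CI.

jobs = 30
ios_per_job_per_sec = 50          # a modest rate, per the post
ci_size_bytes = 10_240            # CISZ reported for the KSDS
escon_bytes_per_sec = 17_000_000  # ESCON maximum throughput

total_ios_per_sec = jobs * ios_per_job_per_sec        # 1,500 I/Os per second
bytes_per_sec = total_ios_per_sec * ci_size_bytes     # 15,360,000 bytes/s
utilization = bytes_per_sec / escon_bytes_per_sec     # share of the channel

print(f"{total_ios_per_sec} I/Os/s, "
      f"{bytes_per_sec / 1_000_000:.1f} MB/s, "
      f"{utilization:.0%} of the channel")
```

At the posted figures this already exceeds 80% of the channel before counting index reads or any other workload on the same disk, which is the point of the argument.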
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
peak rates can be well over 1000 per second.
Especially with batch jobs - there is no "break" waiting on the user as there is with online tasks. Also, by design, batch jobs tend to do higher volume, which only compounds the bottleneck.
What (dare I say business?) reason is there to run all 30 concurrently? Most of the times I've seen something like this, it was because someone simply scheduled them all to start at the same time. There would be no impact if they were staggered to relieve the bottleneck.
These jobs are surely interfering with each other, and they are most likely causing problems for many other processes as well.
smaru
New User
Joined: 22 Oct 2008 Posts: 49 Location: India
Guys,
That was really useful information. I really appreciate your quick response.
I will get back to you with more questions.