yoursavi
New User
Joined: 16 Apr 2008 Posts: 11 Location: chennai
Hi all,
JOB-A fails while deleting a dataset with the message DATA SET RESERVATION UNSUCCESSFUL.
I found out that JOB-B also uses the same dataset, but it finished 10 seconds before the delete step in JOB-A executed.
One more point to note: JOB-A used the dataset in 2-3 steps in SHR mode, but when it tried to delete it in the last step, JOB-A failed.
Is it a contention problem? If yes, why didn't JOB-B release the dataset as soon as it finished?
Please help me, as I need to reschedule the jobs if required.
Thanks in advance!!
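For reference, a minimal sketch of the kind of JCL being described (dataset and program names are made up, and the real job may well delete through IDCAMS rather than IEFBR14): the earlier steps allocate the dataset with DISP=SHR, while the final delete step needs exclusive control, which is exactly what fails if another job still holds an enqueue on the dataset.
Code:
//* Earlier step: reads the dataset shared (placeholder names)
//STEP010  EXEC PGM=MYREAD
//INDD      DD DSN=PROD.JOBB.OUTFILE,DISP=SHR
//*
//* Final step: the delete needs exclusive control of the dataset
//DELSTEP  EXEC PGM=IEFBR14
//DELDD     DD DSN=PROD.JOBB.OUTFILE,DISP=(OLD,DELETE,DELETE)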
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10886 Location: italy
The issue is just a moot point: certainly there is a contention problem, due to poor planning.
If two jobs are using a dataset and one of them is also deleting it, you must set things up so that they do not run concurrently; use a scheduler to put a dependency on them.
yoursavi
New User
Joined: 16 Apr 2008 Posts: 11 Location: chennai
There is a dependency already: JOB-B releases JOB-A. That is why I mentioned that JOB-B finished 10 seconds before the deletion attempt by JOB-A.
I hope this clearly explains the issue. Please let me know if you need more information.
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10886 Location: italy
Quote:
I found out that JOB-B also uses the same dataset, but it finished 10 seconds before the delete step in JOB-A executed.
Quote:
JOB-B releases JOB-A.
I just reread the thread. Are you sure that JOB-B is the culprit?
I would suggest analyzing the definitions and dependencies more closely.
HappySrinu
Active User
Joined: 22 Jan 2008 Posts: 194 Location: India
Arvind,
Can you try using DISP=OLD instead of SHR in steps 2/3? (A sketch follows below.)
I suspect JOB-B had not yet released the dataset.
Or, the better option I think, is to make the dataset a trigger for your JOB-A instead of a job trigger.
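A rough sketch of the DISP change being suggested (step, program and dataset names are placeholders): with OLD the step asks for exclusive control, so JOB-A should wait for the dataset at initiation rather than fail while something else still holds it.
Code:
//* Was: //INFILE DD DSN=PROD.JOBB.OUTFILE,DISP=SHR
//* Now request exclusive use in every step that touches the dataset
//STEP020  EXEC PGM=PROCPGM
//INFILE    DD DSN=PROD.JOBB.OUTFILE,DISP=OLD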
yoursavi
New User
Joined: 16 Apr 2008 Posts: 11 Location: chennai
Hi Srini,
Actually these jobs run in production every day. In the last month JOB-A has failed only once. JOB-B creates the dataset and releases JOB-A, hence these jobs never run together, but yes, JOB-A starts executing as soon as JOB-B has ended. JOB-A uses this dataset in shared mode, processes the data available in it, and finally deletes it in the last step. This dataset is used only in these two jobs.
I put this question here to clarify whether it is possible that a job has ended but its datasets are not released for some time. It is a rare scenario... I thought that if somebody had encountered it previously, it might help.
A possible solution is straightforward: put some delay, say 5 minutes, on the execution of JOB-A after JOB-B releases it (a sketch of one way to do that follows below). But the doubt remains the same: why is the dataset not getting released even though JOB-B has ended?
Anyway, thanks to all for your precious time.
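If a scheduler delay is awkward to set up, one possible way (a sketch only, assuming the job's userid has an OMVS segment; the step name and the 300 seconds are arbitrary) to build the wait into JOB-A itself is a leading step that simply sleeps under z/OS UNIX:
Code:
//* Wait about 5 minutes before the first step that touches the dataset
//WAIT5M   EXEC PGM=BPXBATCH,PARM='SH sleep 300'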
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10886 Location: italy
If the situation is really the way you describe it, and JOB-A starts only after JOB-B has ended, process the SMF data and the RMF contention report to be sure and to obtain evidence.
If so, I guess the situation is APARable (open a problem with IBM support).
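For the evidence, something along these lines should produce the RMF enqueue activity report from already dumped SMF data (a sketch only: the dataset name is a placeholder, and the report is only meaningful if the installation was collecting the RMF enqueue records, SMF type 77, at the time of the failure):
Code:
//* RMF post-processor: enqueue activity report from dumped SMF data
//RMFPP    EXEC PGM=ERBRMFPP
//MFPINPUT  DD DISP=SHR,DSN=SYS1.SMFDUMP.DAILY
//MFPMSGDS  DD SYSOUT=*
//SYSIN     DD *
  REPORTS(ENQ)
/*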
yoursavi
New User
Joined: 16 Apr 2008 Posts: 11 Location: chennai
Thanks for the updates. As you suggested, I'll go ahead and escalate this issue to the support team.
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
If you plan on deleting the ds in a job step, why not use DISP=OLD for all steps referencing the ds?
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Hello,
Quote:
why not use DISP=OLD for all steps referencing the ds?
Definitely.
Also, as this only fails rarely, might there be a 3rd process that also allocated the dataset? If the allocations are SHR, there may be another process involved that caused the allocation failure.
gcicchet
Senior Member
Joined: 28 Jul 2006 Posts: 1702 Location: Australia
Hi,
I have seen this message when HSM has got hold of the dataset, so, as Enrico mentioned, you need to run the SMF data report.
Gerry
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2593 Location: Silicon Valley
I do not think it is possible for a job that has ended to continue holding a dataset.
As Dick mentioned, there is probably a 3rd process that is holding the dataset. I suggest adding a step to JOB-A which lists the users of the dataset; a sketch follows below.
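One way to do that (a sketch, assuming SDSF batch is available and the job's userid is authorized to issue operator commands; the dataset name is a placeholder) is a step ahead of the delete that displays the GRS resource for the dataset and prints the response from the user log:
Code:
//* Show who holds the SYSDSN enqueue on the dataset at this point
//WHOHAS   EXEC PGM=SDSF
//ISFOUT    DD SYSOUT=*
//ISFIN     DD *
/D GRS,RES=(SYSDSN,PROD.JOBB.OUTFILE)
ULOG
/*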