abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi,
In our production environment there is a file that is NDMed from the client side, and we need to trigger a job when the NDM process completes. There is a running issue with the NDM process: the file being NDMed gets catalogued in the middle of the transfer. This causes the job (which is scheduled, via the scheduler, to run when the file is catalogued) to start and then abend due to dataset contention.
One solution I have seen is to add a dependency in the NDM card to trigger a "good" job when the NDM process is complete, and then have that good job trigger the normal job. But the NDM card is on the client side, and modifying it is not advisable.
Is there any other way we can automate the process?
superk
Global Moderator
Joined: 26 Apr 2004 Posts: 4652 Location: Raleigh, NC, USA
Hmm. Tough one. I don't know your standards or how things are normally done at your end, so I can only offer these random thoughts:
1. Do you have console automation software? Can you set up an event trigger using the automation software to indicate when the process has completed? Then have the scheduler use two dependencies: first, the dataset being catalogued, and second, the end-of-process event.
2. Can you set up the triggered job to just attempt to allocate the dataset, using something like the TSO ALLOC command? If the command fails because the dataset is in contention, exit out cleanly and either wait a little while and try again, or have the job re-schedule itself for a future time and complete.
3. Part of the Connect:Direct software suite is a file watcher that you can use to trigger events when certain criteria for datasets are matched.
4. How real-time does the batch process need to be? Can you just run a job at regular intervals and "sweep" for the required dataset, similar to my option 2? Once the dataset has been processed, make sure it gets deleted so that subsequent sweep jobs won't re-process it.
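Option 2 could be sketched as a small TSO/E REXX step, for example. This is only a hedged sketch: the dataset name, DD name, retry count and wait interval are all placeholders, and the sleep assumes z/OS UNIX syscall services are available at your shop.

```
/* REXX - sketch of the allocate-and-retry idea (option 2).       */
/* PROD.CLIENT.FILE, CHKDD, the retry count and the 60-second     */
/* wait are placeholders - adjust to shop standards.              */
call syscalls 'ON'                  /* enable z/OS UNIX services  */
dsn = "'PROD.CLIENT.FILE'"
do try = 1 to 10
  "ALLOC FI(CHKDD) DA("dsn") OLD"   /* OLD = exclusive use        */
  if rc = 0 then do
    "FREE FI(CHKDD)"                /* transfer done: release it  */
    exit 0                          /* RC 0: safe to run the job  */
  end
  address syscall "sleep 60"        /* still held: wait and retry */
end
exit 8                              /* still busy after 10 tries  */
```

A following step (or the scheduler) could then test the return code and release the real job only when it is 0.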
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
Why doesn't your job get a WAITING FOR DATASETS message and then wait? That seems to be what I see sometimes.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
abending due to contention
Might the abend be an operator CANCEL because of a
Quote:
a WAITING FOR DATASETS message
?
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi Superk,
Thanks for the ideas, and sorry for the late reply. I was out of office.
Quote:
1. Do you have console automation software? Can you set up an event trigger using the automation software to indicate when the process has completed? Then have the scheduler use two dependencies: first, the dataset being catalogued, and second, the end-of-process event.
I should check this one with our shop; it may take some time. I am not sure what is meant by console automation software here. We use Control-M for scheduling jobs in production. Is this what is meant by console automation?
Quote:
2. Can you set up the triggered job to just attempt to allocate the dataset, using something like the TSO ALLOC command? If the command fails because the dataset is in contention, exit out cleanly and either wait a little while and try again, or have the job re-schedule itself for a future time and complete.
The file from the client may come only once a year. So if we set up the listening process, how will we know when to attempt allocating? Should we listen all the time, all year round?
Quote:
3. Part of the Connect:Direct software suite is a file watcher that you can use to trigger events when certain criteria for datasets are matched.
Thanks for this new info. I need to check on this.
Quote:
Might the abend be an operator CANCEL because of a WAITING FOR DATASETS message?
Usually when two jobs are in contention for exclusive access to a dataset (one job writing and the other reading it), one of the jobs abends. This is what is happening.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
But if your writing job has DISP=OLD, how does the job (or step, I forget which) even start until it has exclusive access? And how does the system know whether it will read or write, since it has yet to do either?
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi Phil,
Here the NDM process, which runs on the client system, is writing the file into our site. We cannot control it, and I don't even know the particulars of that job. It has exclusive access to the new version it is creating. The job that abends is the one running at our end: it tries to read the new version while the NDM process is still creating it. It is started by the scheduler, because a dependency was added in the scheduler to run the job whenever a new version of the file is catalogued.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
Does your job have DISP=OLD?
abin
Active User
Joined: 14 Aug 2006 Posts: 198
No, it's DISP=SHR.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
I don't know if it will work, but why not try OLD?
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Sure, but I cannot test it in production, so could you please explain what difference OLD would make? Then I can give an explanation for the change.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
OLD
Indicates that the data set existed before this step started and that this step needs exclusive (non-shared) usage of the data set.
It's worth a try.
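For illustration, the change is just the DISP subparameter on the reading job's DD statement. The dataset and DD names below are placeholders, not the real production names:

```
//* Current: SHR allows concurrent use, so the reader can start
//* while the transfer still holds the dataset.
//INFILE   DD DSN=PROD.CLIENT.FILE(0),DISP=SHR
//*
//* Proposed: OLD requests exclusive use, so the step should not
//* get the dataset until the transfer's ENQ is released.
//INFILE   DD DSN=PROD.CLIENT.FILE(0),DISP=OLD
```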
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi,
I tried submitting two jobs (the same program) that use the same file with OLD. The second job waited for some time and then abended.
To me, the second job should never even be submitted until the first job is over. In reality the first job is an NDM job running on the client system, and I presently have no way of knowing whether that NDM job has completed successfully or not.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
Second submitted job waited for some time and abended.
Yes, it either timed out or was canceled by the operator.
Quote:
For me the second job should never be submitted until the first job is over.
Which appears to make this a job for the scheduler. If you define the second job to run after the successful completion of the first job, you should have what you need.
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi,
Quote:
Which appears to make this a job for the scheduler. If you define the second job to run after the successful completion of the first job, you should have what you need.
But the problem, as I mentioned earlier, is that the first job runs on the client side, and the scheduler at my end has no way of knowing about the successful completion of that job.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
But the problem, as I mentioned earlier, is that the first job runs on the client side
Sorry 'bout that - i remembered that, but when i quickly looked back before posting, i missed it.
Maybe the process needs to be reversed, and the data pulled from your end rather than pushed from the client system?
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
It's odd that the file would be seen as catalogued before the job finishes.
Perhaps if it were landed as the (+1) of a GDG it might work.
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Quote:
Maybe the process needs to be reversed, and the data pulled from your end rather than pushed from the client system?
Hmm. If nothing else comes up we may need to do that. But it's difficult to change anything on the client side, mainly because our shop system has no way of knowing when a file is created on the client side. As I said earlier, our shop system is blind to the client system.
Quote:
It's odd that the file would be seen as catalogued before the job finishes.
Odd, yes, but that's what happens with NDM, especially when the file is huge.
Quote:
Perhaps if it were landed as the (+1) of a GDG it might work.
It is landed as the (+1) of a GDG.
Thanks for all the support. We implemented dependencies for the job to run when the file is NDMed. If it abends in the system, well, we will deal with it at that time.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
How about the sender's Connect:Direct process executing, after its COPY, a RUN TASK with PGM=DMRTSUB, supplied with a JCL library and member name of your job to kick off?
You can also pass parameters such as the DSNAME landed.
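As a rough sketch of what the sender's process might look like: the step names, dataset names and JCL library/member below are placeholders, and the exact SYSOPTS and IF/EIF syntax should be checked against the Connect:Direct Process Language reference for your release.

```
/* After a successful COPY, submit the trigger job at the  */
/* receiving node via DMRTSUB.  All names are placeholders. */
STEP1  COPY FROM (PNODE DSN=CLIENT.OUT.FILE)              -
            TO   (SNODE DSN=PROD.CLIENT.FILE(+1)          -
                  DISP=(NEW,CATLG))
IF (STEP1 = 0) THEN
  SUBJOB RUN TASK (PGM=DMRTSUB) SNODE                     -
         SYSOPTS="DSN=PROD.JCLLIB(TRIGJOB)"
EIF
```

Because the RUN TASK only fires after the COPY step completes, the submitted job would no longer race the transfer for the dataset.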
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi,
Quote:
How about the sender's Connect:Direct process executing, after its COPY, a RUN TASK with PGM=DMRTSUB, supplied with a JCL library and member name of your job to kick off?
That means changing the client NDM card, correct? As I mentioned earlier, any change on the client side is discouraged. Well, I should say "not possible" instead of discouraged.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
Well, if the sender and receiver cannot cooperate, then I certainly give up.
superk
Global Moderator
Joined: 26 Apr 2004 Posts: 4652 Location: Raleigh, NC, USA
abin wrote:
It is landed as the (+1) of a GDG.
abin, that's a rather EXTREMELY IMPORTANT detail to omit from your original post! CONNECT:Direct has to place an enqueue on the entire GDG so that it can do its job and properly update a relative generation without contention and/or interference from outside processes. We could have eliminated a lot of the guesses in this thread had we known that fact.
abin
Active User
Joined: 14 Aug 2006 Posts: 198
Hi Kevin,
Sorry that I missed that information in my original post.
So, do we have a solution?
Quote:
CONNECT:Direct has to place an enqueue on the entire GDG so that it can do its job and properly update a relative generation without contention and/or interference from outside processes
What I understand is that when a new version of the GDG is NDMed, it is catalogued at the beginning, and the NDM process keeps an exclusive hold until the process is over.