IBM Mainframe Forum Index
 
 

Automating NDM process


IBM Mainframe Forums -> All Other Mainframe Topics
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Thu Dec 13, 2007 7:40 pm

Hi,

In our production environment there is a file that is NDMed from the client side, and we need to trigger a job when the NDM process is complete. There is an ongoing issue with the NDM process: the file being NDMed gets catalogued in the middle of the transfer. This causes the job (which is set up in the scheduler to run when the file is catalogued) to start running and then abend due to contention.

One solution I have seen is to add a dependency in the NDM card to trigger a "good" job when the NDM process is complete, and then add a dependency on that good job to trigger the normal job. But the NDM card is on the client side, and modifying it is not advisable.

Is there any other way we can automate the process?
superk

Global Moderator


Joined: 26 Apr 2004
Posts: 4652
Location: Raleigh, NC, USA

PostPosted: Fri Dec 14, 2007 6:46 pm

Hmm. Tough one. I don't know your standards or how things are normally done at your end, so I can only offer these random thoughts:

1. Do you have console automation software? Can you set up an event trigger using the automation software to indicate when the process has completed? Then have the scheduler use two dependencies: first, the dataset being catalogued, and second, the end-of-process event.

2. Can you set up the triggered job to just attempt to allocate the dataset, using something like the TSO ALLOC command? If the command fails because the dataset is in contention, exit out cleanly and either wait a little while and try again, or have the job reschedule itself for a future time and complete.

3. Part of the Connect:Direct software suite is a file watcher that you can use to trigger events when certain criteria for datasets are matched.

4. How real-time does the batch process need to be? Can you just run a job at regular intervals and "sweep" for the required dataset, similar to my option 2? Once the dataset has been processed, make sure it gets deleted so that subsequent sweep jobs won't re-process it.
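Option 2 could be sketched as a batch TSO step. Everything here is hypothetical (the dataset and program names, and the choice of the IKJEFT1B entry point, which unlike IKJEFT01 passes the failing command's return code back as the step condition code), so treat it as a starting point, not a tested implementation:

```jcl
//* Hypothetical check job: attempt an exclusive allocation of the
//* incoming dataset under batch TSO. If NDM still holds it, the
//* ALLOC fails and the step ends with a non-zero return code.
//CHKALLOC EXEC PGM=IKJEFT1B
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  ALLOC DA('PROD.CLIENT.NDMFILE') OLD
  FREE  DA('PROD.CLIENT.NDMFILE')
/*
//* Run the real processing only when the check step got RC=0;
//* otherwise end cleanly and let the scheduler retry later.
//PROCESS  EXEC PGM=YOURPGM,COND=(0,NE,CHKALLOC)
```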
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Fri Dec 14, 2007 7:09 pm

Why doesn't your job get a WAITING FOR DATASETS message and then wait? That seems to be what I see sometimes.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Fri Dec 14, 2007 9:59 pm

Hello,

Quote:
abending due to contention

Might the abend be an operator CANCEL because of a "WAITING FOR DATASETS" message?
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Mon Dec 17, 2007 9:31 am

Hi Superk,

Thanks for the ideas and sorry for late reply. I was out of office.

Quote:
1. Do you have console automation software? Can you set up an event trigger using the automation software to indicate when the process has completed? Then have the scheduler use two dependencies: first, the dataset being catalogued, and second, the end-of-process event.

I should check this with our shop; that may take some time. I am not sure what is meant by console automation software here. We use Control-M for scheduling jobs in production. Is this what is meant by console automation?

Quote:
2. Can you set up the triggered job to just attempt to allocate the dataset, using something like the TSO ALLOC command? If the command fails because the dataset is in contention, exit out cleanly and either wait a little while and try again, or have the job reschedule itself for a future time and complete.

The file from the client may come only once a year. So if we set up the listening process, how will we know when to attempt allocating? Should we keep listening all year round?

Quote:

3. Part of the Connect:Direct software suite is a file watcher that you can use to trigger events when certain criteria for datasets are matched.


Thanks for this new info. I need to check on this.


Quote:
Might the abend be an operator CANCEL because of a "WAITING FOR DATASETS" message?

Usually when two jobs contend for exclusive access to a dataset (one job writing while the other is reading it), one of the jobs abends. That is what is happening here.
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Mon Dec 17, 2007 7:17 pm

But if you have DISP=OLD on your writing job, then how does the job (or step - I forget which) even start until it has exclusive access, and then how does the system know if it is reading or writing, since it is yet to do either?
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Mon Dec 17, 2007 8:07 pm

Hi Phil,

Here the NDM process, which runs on the client system, is writing the file to our site. We cannot control it, and I don't even know the particulars of that job; it has exclusive access over the new version it is creating. The job that abends is the one running at our end, which tries to read the new version while the NDM process is still creating it. It is started by the scheduler because a dependency is defined there to run the job whenever a new version of the file is catalogued.
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Mon Dec 17, 2007 8:30 pm

Does your job have DISP=OLD?
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Mon Dec 17, 2007 8:31 pm

No, it's DISP=SHR.
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Mon Dec 17, 2007 8:38 pm

I don't know if it will work, but why not try OLD?
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Tue Dec 18, 2007 3:04 pm

Sure, but I cannot test it in production. So could you please explain what difference OLD will make, so that I can give an explanation?
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Tue Dec 18, 2007 8:26 pm

OLD

Indicates that the data set existed before this step started and that
this step needs exclusive (non-shared) usage of the data set.

It's worth a try.
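In the reading job's JCL that is just a disposition change on the input DD; for example (dataset name hypothetical):

```jcl
//* DISP=OLD requests exclusive control, so the step should not be
//* dispatched while another job (or the NDM transfer, if it holds a
//* conventional enqueue) still owns the dataset.
//INFILE   DD DSN=PROD.CLIENT.NDMFILE(0),DISP=OLD
```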
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Thu Dec 20, 2007 12:40 am

Hi,

I tried submitting two jobs (same program) that use the same file with OLD. The second submitted job waited for some time and then abended.

To my mind, the second job should never even start until the first job is over. In the real case, the first job is an NDM job that runs on the client system, and I presently have no way of knowing whether the NDM job has completed successfully or not.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Thu Dec 20, 2007 12:47 am

Hello,

Quote:
Second submitted job waited for some time and abended.
Yes, it either timed out or was cancelled by the operator.

Quote:
For me the second job should never be submitted until first job is over.
Which appears to make this a job for the scheduler. If you define the second job to be run after the successful completion of the first job, you should have what you need.
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Thu Dec 20, 2007 1:30 am

Hi,

Quote:
Which appears to make this a job for the scheduler. If you define the second job to be run after the successful completion of the first job, you should have what you need.


But the problem is, as I mentioned earlier, that the first job runs on the client side, and the scheduler at my end has no way of knowing about the successful completion of that job.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Thu Dec 20, 2007 1:48 am

Hello,

Quote:
BUt the problem is as I mentioned earlier the first job is running at client side
Sorry 'bout that - i remembered that, but when i quickly looked back before posting, i missed it.

Maybe the process needs to be reversed and the data pulled from your end rather than pushed from the client system?
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Thu Dec 20, 2007 8:03 am

It's odd that the file would be seen as catalogued before the job finishes.

Perhaps if it were landed as the (+1) of a GDG it might work.
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Sat Dec 22, 2007 3:42 pm

Quote:
Maybe the process needs to be reversed and the data pulled from your end rather than pushed from the client system?


Hmm. If nothing else comes up, we may need to do that. But it's difficult to change anything on the client side, mainly because how would our shop's system know whether a file has been created on the client side? As I said earlier, our shop's system knows nothing about the client system.

Quote:
It's odd that the file would be seen as catalogued before the job finishes.

Odd, yes, but that's what happens with NDM, especially when the file is huge.

Quote:
Perhaps if it were landed as the (+1) of a GDG it might work.

It is landed as (+1) of a GDG.

Thanks for all the support. We implemented dependencies so the job runs when the file is NDMed. If it abends in the system, well, we will deal with it at that time.
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Sun Dec 23, 2007 9:20 am

How about the sender's Connect:Direct process executing, after its COPY, a RUN TASK with PGM=DMRTSUB supplied with a JCL lib and member name of your job to kick off?

You can also pass parameters such as the DSNAME landed.
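A sender-side process along those lines might look like the sketch below. All node, dataset, and member names are made up, and the exact RUN TASK/PARM syntax for DMRTSUB varies by Connect:Direct release, so check the Process Language guide for your version:

```text
SENDFILE PROCESS SNODE=RECV.NODE
STEP01   COPY FROM (DSN=CLIENT.SEND.FILE PNODE) -
              TO   (DSN=PROD.CLIENT.NDMFILE(+1) -
                    DISP=(NEW,CATLG) SNODE)
STEP02   IF (STEP01 EQ 0) THEN
           RUN TASK (PGM=DMRTSUB -
             PARM=('PROD.JCL.LIB(TRIGJOB)')) SNODE
         EIF
```

Because the RUN TASK fires only after the COPY step completes successfully, the submitted JCL member would start the receiving-side job only once the transfer is truly over.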
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Mon Dec 24, 2007 6:42 pm

Hi,

Quote:
How about the sender's Connect:Direct process executing, after its COPY, a RUN TASK with PGM=DMRTSUB supplied with a JCL lib and member name of your job to kick off?


That means changing the client NDM card, correct? As I mentioned earlier, any change on the client side is discouraged. Well, I should say "not possible" rather than discouraged.
Phrzby Phil

Senior Member


Joined: 31 Oct 2006
Posts: 1042
Location: Richmond, Virginia

PostPosted: Mon Dec 24, 2007 6:46 pm

Well, if the sender and receiver cannot cooperate, then I certainly give up.
superk

Global Moderator


Joined: 26 Apr 2004
Posts: 4652
Location: Raleigh, NC, USA

PostPosted: Thu Dec 27, 2007 8:55 pm

abin wrote:
It is landed as (+1) of a GDG.


abin, that's an EXTREMELY IMPORTANT detail to omit from your original post! CONNECT:Direct has to place an enqueue on the entire GDG so it can do its job and properly update a relative generation without contention and/or interference from outside processes. We could have eliminated a lot of the guesswork had we known that fact.
abin

Active User


Joined: 14 Aug 2006
Posts: 198

PostPosted: Mon Dec 31, 2007 5:48 pm

Hi Kevin,

Sorry that I missed that information in my original post.
So, do we have a solution?

Quote:
CONNECT:Direct has to place an enqueue on the entire GDG so it can do its job and properly update a relative generation without contention and/or interference from outside processes

What I understand is that when a new generation of the GDG is NDMed, it is catalogued at the beginning, and the NDM process keeps an exclusive hold until the transfer is over.