IBM Mainframe Forum Index
 
GDG version conflict


IBM Mainframe Forums -> All Other Mainframe Topics
srini_igsi
Currently Banned

New User


Joined: 09 Dec 2005
Posts: 30
Location: Pune

PostPosted: Sat Sep 22, 2007 3:49 pm

Hi,

All of our jobs are dataset triggered. As soon as a job is triggered, we first take a backup of the file into a GDG, and then we delete the original file in the immediately following step. The job uses the current GDG version (+0) created in the previous step for the rest of the processing.
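A minimal JCL sketch of that flow, with made-up dataset and program names. One point worth noting: on z/OS, relative generation numbers are fixed for the duration of a job, so a generation created as (+1) in the backup step is still referenced as (+1), not (+0), by later steps of the same job.

```jcl
//* Hypothetical names throughout.
//BACKUP   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=PROD.NDM.INPUT,DISP=(OLD,DELETE,KEEP)
//SYSUT2   DD DSN=PROD.INPUT.BACKUP(+1),
//            DISP=(NEW,CATLG,DELETE),
//            LIKE=PROD.NDM.INPUT
//*
//* Later steps in this same job must say (+1), not (+0);
//* (+0) here would still mean the generation that was
//* current when the job started.
//PROCESS  EXEC PGM=MYAPP,COND=(0,NE)
//INFILE   DD DSN=PROD.INPUT.BACKUP(+1),DISP=SHR
```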

Let us assume that while processing the first file, we receive another file. The next instance of the job then gets triggered and, as usual, it takes a backup of the second file and deletes it.

So here my doubt is: both parallel jobs, running only milliseconds apart, may refer to the current GDG version (+0) created for the first file, which triggered the first job.

Is my understanding correct? I just want to relate the first file to the first instance of the job, the second file to the second instance of the job, and so on.

Any suggestions, please let me know.

Thanks in advance.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8797
Location: Welsh Wales

PostPosted: Sat Sep 22, 2007 3:57 pm

Well, if the same dataset is created and the same job is submitted by the scheduling package, then the two jobs will not run in parallel, because they both have the same job name.
srini_igsi
PostPosted: Sat Sep 22, 2007 9:25 pm

Does that mean the scheduler itself will stop the second instance of the job from running while the first instance of the same job is executing?

The actual issue is that whenever the server receives a file, it immediately NDMs the file to the mainframe with a specific name so that the corresponding job gets triggered and executed. However, whenever the server receives another file, NDM fails to deliver it to the mainframe, because the first file received may still be processing on the mainframe, thereby blocking the next file from coming in. Currently we delete the actual file only at the end.

My primary goal is that whenever the mainframe receives more than one file at a time, I shouldn't let NDM fail to deliver the next file, and at the same time I should be able to process all the files I come across at any point in time.

Because of this problem, the client is suggesting that we move the file deletion step to the top, so that we first take a backup of the file, then delete it, and then use the GDG version created for the rest of the processing.

Please let me know if you have any suggestions.
murmohk1

Senior Member


Joined: 29 Jun 2006
Posts: 1436
Location: Bangalore,India

PostPosted: Sat Sep 22, 2007 10:00 pm

Reddy,

Quote:
Does that mean the scheduler itself will stop the second instance of the job from running while the first instance of the same job is executing?

Have you ever submitted the same job more than once while the first instance is still running? If not, try it once (a dummy job that takes some time to complete) and notice what happens in the spool.
superk

Global Moderator


Joined: 26 Apr 2004
Posts: 4652
Location: Raleigh, NC, USA

PostPosted: Sat Sep 22, 2007 10:22 pm

I totally get the problem, and I don't have an easy solution. If it were me, I'd have them send the data to a unique dataset, or directly to a GDG +1 generation for each cycle, and I'd use the static dataset as a dummy dataset-name trigger to run the schedule, not to actually hold any real data. That way, there'd never be a chance to inadvertently overlay the data between cycles.
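A sketch of that idea, with hypothetical names throughout: the transfer copies each incoming file straight into a new generation, while the static dataset is only a flag. The triggered job deletes the flag and reads the newest generation:

```jcl
//* PROD.INPUT.TRIGGER holds no real data; its creation only
//* fires the scheduler. Each file lands in PROD.INPUT.GDG(+1)
//* on the transfer side, so cycles never overlay each other.
//DELTRIG  EXEC PGM=IEFBR14
//TRIGGER  DD DSN=PROD.INPUT.TRIGGER,DISP=(OLD,DELETE,DELETE)
//PROCESS  EXEC PGM=MYAPP
//INFILE   DD DSN=PROD.INPUT.GDG(0),DISP=(OLD,DELETE,KEEP)
```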

Any chance you can have the process changed so that it can call the schedule upon completion, and avoid the whole dataset trigger mess completely?
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Sat Sep 22, 2007 11:24 pm

Hello,

IMHO, using the creation of a dataset to trigger a job or some other process only works well when the frequency of file creation is very limited. If there is a chance of the "next" file arriving before "this" one is processed, I believe the trigger will not work well, as you are seeing.

What other approaches have been looked at?
srini_igsi
PostPosted: Sat Sep 22, 2007 11:24 pm

Hi,

To be straightforward, I don't have any exposure to job scheduling.

All the problems started when NDM began failing to deliver files to the mainframe. Whenever multiple files are queued up on the server, NDM frequently fails to push the files to the mainframe.

Whenever a file is being processed and subsequent files are queued up on the server, NDM fails, and every time we have had to log on to the server and manually push the files to the mainframe.


With regards to my first post, assuming parallel job processing is allowed: when parallel jobs are running milliseconds apart, do both jobs refer to the current GDG version (+0) created for the first file, which triggered the first job? Or will the first instance of the job refer to the GDG version created for it, and the second instance refer to the version created for the second file?
dick scherrer

PostPosted: Sun Sep 23, 2007 2:12 am

Hello,

One of the difficulties with the current process is that the server can "Push" whenever it wants.

Is there any chance that the mainframe could run a process periodically that would "Pull" whatever was ready?

Another thought might be to change the process to send the server files to a queue and have the mainframe process pull one "file" at a time from the queue. If the process finds data available, it processes it and re-submits itself via the internal reader. When the process detects no more data, it would just end. This process would be scheduled to run every n minutes, processing anything available or simply ending when there was no data.
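One way to sketch the re-submit part. CHKQUEUE is an assumed site-written program that ends with RC=0 when more data is waiting; all names are hypothetical:

```jcl
//CHKQ     EXEC PGM=CHKQUEUE
//*
//* RESUB runs only when CHKQ ended with RC=0: COND=(0,NE,CHKQ)
//* bypasses the step whenever 0 is not equal to CHKQ's RC.
//RESUB    EXEC PGM=IEBGENER,COND=(0,NE,CHKQ)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=PROD.JCLLIB(PULLJOB),DISP=SHR
//SYSUT2   DD SYSOUT=(*,INTRDR)
```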

I believe trying to use the creation of a generation to trigger the process through the scheduling system will cause more head-scratching than provide anything useful.
srini_igsi
PostPosted: Sun Sep 23, 2007 3:06 pm

Hi Dick,

With regards to your last point, the GDG version is not going to trigger the job. The actual file, with a standard dataset name, will first trigger the job. But as soon as the job is triggered, our client is suggesting that we take a backup of the original file and then delete it, so that the job can use the backup for the rest of the processing, and at the same time the next file queued up on the server can find its way to the mainframe and trigger another instance of the same job, and so on.

So here is what I suspect: while the first instance of the job is referring to the current version of the GDG, the second file, which triggers the second instance of the job milliseconds later, might also be referring to that same current version of the GDG. Thereby the second instance of the job may refer to the wrong file (the first file) rather than the second. This is because the GDG version is updated only after the first job has successfully completed.

If my assumption is incorrect, we will not have any issues, as each instance of the job will refer to the instance of the file that triggered it.

However, I will ask our scheduling guys if there is a way to send all the files to a queue and have the mainframe process pull one "file" at a time from the queue. I will also confirm with them whether two instances of the same job can run in parallel.
dick scherrer

PostPosted: Sun Sep 23, 2007 9:43 pm

Hello,

Quote:
the GDG version is not going to trigger the Job
My bad, I used "GDG" incorrectly. My thought was that the uncontrolled incoming was causing the problem.

Quote:
our Client is suggesting us to take the original file into a bckup and then delete the same so that the job can make use of the back up taken for the reset of the processing and at the same time the other file which may be queued up in the server can find a way to Mainframe and triggers another instance of the same Job and so and so.
I believe the same problem would exist if the time between arrivals was small enough.

If a job reads the current generation of the gdg with OLD,DELETE,KEEP, no other process will be able to read that same generation. The same is true if a process is creating a +1. Only one process will create a given generation.
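To illustrate the point above with hypothetical names: DISP=OLD takes an exclusive ENQ on the generation's dataset name, so a second job referencing the same name waits on the dataset rather than reading it concurrently. DELETE uncatalogs the generation on normal step end, and KEEP preserves it if the step abends:

```jcl
//* Exclusive access: any other job naming this generation
//* waits until this step releases it.
//READER   EXEC PGM=MYAPP
//INFILE   DD DSN=PROD.INPUT.BACKUP(0),DISP=(OLD,DELETE,KEEP)
```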

If the incoming files are queued and there is just one jobname to pull entries from the queue and that job creates output, it will always run single-thread and the collisions might be avoided.
Srihari Gonugunta

Active User


Joined: 14 Sep 2007
Posts: 295
Location: Singapore

PostPosted: Mon Sep 24, 2007 10:29 am

My thought:
Split the job into two.
First job: copy the file to a GDG generation and delete the original.
Second job: process the current version of the GDG.
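Sketched with hypothetical names, the split might look like this. Because the second job is a separate job, its (0) resolves at its own start and picks up the generation the first job created as (+1):

```jcl
//* Job 1: back up and delete the triggering file.
//BACKUP   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=PROD.NDM.INPUT,DISP=(OLD,DELETE,KEEP)
//SYSUT2   DD DSN=PROD.INPUT.BACKUP(+1),
//            DISP=(NEW,CATLG,DELETE),LIKE=PROD.NDM.INPUT
//*
//* Job 2, run after job 1: (0) now resolves to that generation.
//PROCESS  EXEC PGM=MYAPP
//INFILE   DD DSN=PROD.INPUT.BACKUP(0),DISP=(OLD,DELETE,KEEP)
```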
srini_igsi
PostPosted: Mon Sep 24, 2007 12:00 pm

Srihari,

If two or more jobs are running in parallel, and the jobs finish backing up different datasets only milliseconds apart, the jobs are liable to refer to the current generation created by the most recent job. So I don't think this logic is always reliable.