venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Hi all,
I am back to request your inputs on the scenario below:
* I am executing a REXX exec through an event-triggered job.
* The job is triggered by the OPC scheduler whenever a new generation of a GDG is created (through FTP/NDM).
* The exec is coded to read and process the latest generation.
* This logic holds good as long as there is a single event.
* When multiple generations are created consecutively, multiple jobs sit in the execution queue. Since the job names are identical (and cannot be changed), the first job may end up consuming the latest generation.
* This must not happen: no generation should be missed, no generation should be processed more than once, and each job must process its corresponding file in order.
* Furthermore, during business hours the jobs are triggered but do not execute immediately, due to operations restrictions in the Dev region. This adds to the complexity: the very first instance of the job will end up processing the latest generation.
Please share your expertise on achieving the desired processing logic: each job should process exactly one generation, and all generations must be processed.
Akatsukami
Global Moderator
Joined: 03 Oct 2009 Posts: 1787 Location: Bloomington, IL
Have the exec write the absolute generation of the data set it processed. Have it select for processing the most recent generation not written out.
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2593 Location: Silicon Valley
Quote:
Have the exec write the absolute generation of the data set it processed
The exec needs a file where it keeps the status of its processing. The next instance of the exec will read the status file and compare it against the actual GDG generations; it will be able to determine which have already been processed and which have not. It will then process the appropriate generation and save a new status file.
Quote:
Please share your expertise
My preference is for the first job to process any generations that need processing. Any subsequent jobs will start, find no work to do, and simply end. Yes, this goes contrary to your requirement list, but in a scenario where numerous jobs can be submitted before they run, it seems better, IMHO, to make sure all of the data is processed by the jobs that do run. That is, what if one of the jobs does not run for some external reason?
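A minimal TSO/E REXX sketch of that status-file mechanic, assuming a pre-allocated one-record PS tracker ('YOUR.HLQ.TRACKER' and the generation values are placeholders); DISP OLD serializes access between queued jobs:
Code:
/* REXX - read and rewrite a one-record status file (sketch)  */
"ALLOC FI(TRACKER) DA('YOUR.HLQ.TRACKER') OLD REUSE"
"EXECIO 1 DISKR TRACKER (STEM last. FINIS"
lastGen = Strip(last.1)            /* e.g. G0042V00            */
Say 'Last generation processed:' lastGen
/* ... determine and process the next unprocessed generation,  */
/* and only then record it:                                    */
out.1 = 'G0043V00'                 /* placeholder value         */
"EXECIO 1 DISKW TRACKER (STEM out. FINIS"
"FREE FI(TRACKER)"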
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10886 Location: italy
Well ... not too much work, after all.
Suppose the TS (topic starter) writes two REXX scripts ...
The wrapper:
* lists (via IGGCSI00) the GDG generations involved
* reads - DISP(OLD) - a PS dataset with just one record holding the last generation processed
* loops forever:
  - determines the generation to be processed (leaves the loop if no more)
  - calls the script that processes the <current> generation
  - on success, rewrites the tracking PS
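A rough TSO/E REXX shape for that wrapper, using LISTCAT under OUTTRAP rather than IGGCSI00 to keep the sketch short; the dataset names are placeholders and PROCGEN is a hypothetical worker exec:
Code:
/* REXX - wrapper sketch: process every pending generation      */
base = 'YOUR.HLQ.GDGBASE'
x = Outtrap('cat.')                  /* capture LISTCAT output   */
"LISTCAT LEVEL('"base"')"
x = Outtrap('OFF')
n = 0                                /* collect GnnnnVnn names   */
Do i = 1 To cat.0
  p = Pos(base'.G', cat.i)
  If p > 0 Then Do
    n = n + 1
    gen.n = Substr(cat.i, p + Length(base) + 1, 8)
  End
End
"ALLOC FI(TRACKER) DA('YOUR.HLQ.TRACKER') OLD REUSE" /* serialize */
"EXECIO 1 DISKR TRACKER (STEM t. FINIS"
lastGen = Strip(t.1)
Do i = 1 To n                        /* catalog order ascends    */
  If gen.i > lastGen Then Do
    Call 'PROCGEN' base'.'gen.i      /* hypothetical worker exec */
    out.1 = gen.i                    /* rewrite tracking record  */
    "EXECIO 1 DISKW TRACKER (STEM out. FINIS"
  End
End
"FREE FI(TRACKER)"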
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8796 Location: Welsh Wales
Or maybe the job processes the whole GDG group and deletes/renames the whole group at the end.
That's the way I've done it in the past.
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 590 Location: London
Yes, or write with DISP=MOD to an existing flat file, sweep that up at the end, and then either write an EOF or reallocate the file to empty it, ready for the next cycle.
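In TSO/E REXX terms, that append-and-reset could look like this sketch (the dataset name is a placeholder); EXECIO 0 DISKW rewrites just an EOF, emptying the file:
Code:
/* REXX - append to a collector file, later reset it (sketch) */
"ALLOC FI(COLL) DA('YOUR.HLQ.COLLECT') MOD REUSE"  /* MOD appends */
out.1 = 'record to add'
"EXECIO 1 DISKW COLL (STEM out. FINIS"
"FREE FI(COLL)"
/* after the sweep: empty the file for the next cycle          */
"ALLOC FI(COLL) DA('YOUR.HLQ.COLLECT') OLD REUSE"
"EXECIO 0 DISKW COLL (OPEN FINIS"                  /* writes EOF */
"FREE FI(COLL)"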
venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Thank you very much for your inputs.
I am on a long weekend and don't have connectivity to work from home. I will take your design inputs and work on the solution on Monday (IST).
Thanks again.
daveporcelan
Active Member
Joined: 01 Dec 2006 Posts: 792 Location: Pennsylvania
I second expat's approach.
It has worked for me many times.
One addition: before processing the whole GDG group, create an empty +1 generation. Otherwise, if there are no generations to process, you will get a JCL error.
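In JCL that is just a DD with DISP=(NEW,CATLG); from a REXX exec, a sketch using BPXWDYN might look like this (the base name and DCB attributes are placeholders):
Code:
/* REXX - catalog an empty +1 generation (sketch)               */
cmd = "ALLOC DD(NEWGEN) DSN('YOUR.HLQ.GDGBASE(+1)') NEW CATALOG" ,
      "TRACKS SPACE(1,1) RECFM(F,B) LRECL(80) MSG(2)"
If BPXWDYN(cmd) = 0 Then Do
  "EXECIO 0 DISKW NEWGEN (OPEN FINIS" /* write EOF: valid empty file */
  "FREE FI(NEWGEN)"
End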
venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Thanks, daveporcelan.
Another task pre-empted this one, and I had to put the coding of this logic in my tool on hold. I am expected to complete it as early as possible.
I will share my findings once I resume the coding.
Thanks again.
venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Hi all,
I have coded the logic below and tested it manually. I also intend to test it during US business hours with actual system triggers.
Note: I allocated one Tracker file; it contains the absolute generation number that was processed during the previous execution, so I update it during every execution with the absolute generation number being processed.
Pseudocode:
1. Once the job is triggered, find the latest generation on DASD (using BPXWDYN or LISTCAT). Parse out the absolute generation number (say, GEN1).
2. Read the Tracker file. Parse out the absolute generation number (say, GEN2).
3. Compare GEN1 and GEN2:
3.a) If GEN2 < GEN1: increment GEN2, build the fully qualified GDG name, and update the Tracker file (allocated with DISP OLD). Free the Tracker file.
3.b) Else (i.e., GEN2 >= GEN1): something has gone wrong. Send an email to the support team to verify the integrity of the Tracker file and the last generation processed.
4. Process the generation corresponding to GEN2 (its new value) in the current execution of the job.
I sincerely thank everyone who shared a design perspective. If I face any logical issues during testing, I will share those too.
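A hedged sketch of the compare-and-increment in step 3, assuming absolute generation qualifiers in the usual GnnnnVnn form with the version fixed at V00 (the wrap at G9999 is ignored here); the generation values are placeholders:
Code:
/* REXX - compare tracker generation against latest on DASD   */
gen1 = 'G0045V00'          /* latest on DASD, from step 1      */
gen2 = 'G0042V00'          /* from the Tracker file, step 2    */
n1 = Substr(gen1, 2, 4)    /* numeric part of GnnnnVnn         */
n2 = Substr(gen2, 2, 4)
If n2 < n1 Then Do                         /* step 3.a         */
  nextGen = 'G'Right(n2 + 1, 4, '0')'V00'
  Say 'Processing' nextGen', updating Tracker (DISP OLD)'
  /* update and free the Tracker, then process nextGen (step 4) */
End
Else Do                                    /* step 3.b         */
  Say 'Tracker is not behind the catalog - alerting support'
  /* the notification mechanism is site-specific                */
End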
Nic Clouston
Global Moderator
Joined: 10 May 2007 Posts: 2454 Location: Hampshire, UK
You should not update/free the tracker file until you have successfully processed the data.
JPVRoff
New User
Joined: 06 Oct 2009 Posts: 45 Location: Melbourne, Australia
I have a process like this. Each job simply processes and deletes the current generation. If I needed to, I'd create a backup of the file I was processing - but I don't need to.
Seems to work OK for me...
venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Nic,
Thanks for your valuable input. You are absolutely right about not updating/freeing the tracker file until the processing is done. However, a restart mechanism cannot be built in to process a failed request again (the job runs event-triggered). The process will notify the team of the success or failure of processing a file; if the process fails even before that email is sent, I will have to wait for the requester to come back before further analysis.
Since all the jobs will run sequentially (the job names are the same), I will depend on the tracker to identify the next file to be processed. I can also keep another tracker-like file that records the request number, the GDG generation, and the result (pass/fail), with the result updated after the processing is done. That way I can tell whether processing was attempted for a particular request, whatever the result (see the sketch below).
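One simple shape for that second file is an append-only journal; this sketch invents the record layout and dataset name, and writes a STARTED record before processing so that a failure mid-way still leaves evidence:
Code:
/* REXX - append-only audit journal (sketch)                   */
reqNum = 'R00123'; gen = 'G0043V00'    /* placeholder values    */
Call Journal reqNum gen 'STARTED'
/* ... process the generation, setting ok to 1 or 0 ...         */
ok = 1
If ok Then Call Journal reqNum gen 'PASS'
Else Call Journal reqNum gen 'FAIL'
Exit

Journal: Procedure
Parse Arg rec
"ALLOC FI(AUDIT) DA('YOUR.HLQ.AUDIT') MOD REUSE"
j.1 = Date('S') Time() rec             /* timestamped record    */
"EXECIO 1 DISKW AUDIT (STEM j. FINIS"
"FREE FI(AUDIT)"
Return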
JPVRoff,
Thanks for sharing your logic. I believe it suits the case where one generation is created at a time. The scenario discussed here involves multiple unprocessed generations of the request file being present when the job executes. Each execution should process the oldest of the unprocessed generations, not the latest - though the latest is what (0) refers to on DASD.
Thanks all for sharing your insights and helping me with the solution. My process is yet to be rolled out to users. I will share any issues after the implementation.
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 590 Location: London
'Issues' should be resolved in testing!
Good luck anyway.
venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Affirmative, Pete!
I wrote that with the intention of sharing real-world scenarios as they arise.
The scope is ever widening!
venksiv
New User
Joined: 20 Jun 2015 Posts: 26 Location: INDIA
Hi all,
My tool is live now, and it has processed the different generations of the GDG sequentially using the tracker logic. Thank you all for your guidance.