IBM Mainframe Forum Index
 
 

GDG gens missed when they are remotely created without break


IBM Mainframe Forums -> All Other Mainframe Topics
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 10:49 am

I am facing a classic problem which has many classic solutions, but none of them can be used because of the special constraints I am supposed to work within. I could not find a solution that is guaranteed to work across system-level upgrades.

So I am hoping that this august forum would bless me with a solution.

Our mainframe-based application receives data from remote distributed systems via various file transfer protocols (NDM, FTPS, etc.). These files arrive in quick succession, sometimes within a few seconds of one another.

They are also accompanied by TWS triggers that kick off the jobs that process them.

The files are written as generations under the same GDG. My jobs are designed to read the (0) gen of the files. So if gen1 is written and the trigger is set, and almost simultaneously gen2 is written and the trigger is set again, JobA gets triggered twice: two instances of JobA with different job numbers are queued, waiting to run.

When the first JobA runs, gen2 will already be cataloged, so gen1 does not get processed. I wrote a process using REXX to do a LISTCAT of the GDG every time and use the saved G0000V00 name from the previous run to locate new generations of the file, and thus sequentially step through the G0000V00s and prevent them from being missed.

This has been working more or less, but has two problems. 1) On rare occasions TWS does not trigger jobs even if SRSTAT triggers are set. On those occasions, new G0000V00s get processed only during the next run, which introduces a lag in the processing. 2) This solution is not guaranteed to work across system upgrades, since it lists G0000V00s using LISTCAT; if the format of the LISTCAT output changes, there is a remote possibility that the process may not continue to work. I have enhanced the process to set alerts if there are additional new generations, and I have also made the process more independent of the LISTCAT format.

But I have been told to look for solutions that are guaranteed to work beyond Armageddon. Is there any such solution available?

We cannot: a) ask the sending systems to put a delay between two successive files; b) hold the incoming G0000V00s and have them concatenated and processed periodically by scheduled processes; c) hold the files at the sending end, as they have to be sent out as and when they arrive; d) use anything but GDGs, so no file names with date and time; and e) I will be shot if I mention HFS files or Java.

Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2454
Location: Hampshire, UK

PostPosted: Sat Dec 14, 2013 4:45 pm

Why not have each transfer write to a specific, non-GDG dataset, and after the dataset has been processed copy it to a GDG? In my personal experience, that is how it is generally done.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 6:07 pm

I don't think this would solve the issue. It could cause the data to be overwritten and completely lost:
1) The remote system writes Batch1 to the input sequential file and sets the SRSTAT trigger.
2) Jobnum1 is triggered by TWS and comes into the queue, but does not start running yet.
3) The remote system overwrites the input sequential file with Batch2 and sets the trigger.
4) Jobnum2 comes into the queue.
5) Jobnum1 gets an initiator, starts running, processes Batch2 from the input sequential file, and finishes.
6) Jobnum2 starts running, processes Batch2 again, and finishes.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 6:22 pm

Batch1 is lost since it was overwritten. Jobnum1 and Jobnum2 were two instances of JobA that were trying to copy the input sequential file to the GDG.
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8700
Location: Dubuque, Iowa, USA

PostPosted: Sat Dec 14, 2013 6:51 pm

1. If the job doesn't get started, there is nothing you can do about processing lag, unless you start a job every so many minutes to see what work is out there to do. Your site needs to investigate what is causing TWS to not submit the job at the appropriate time.

2. Use dynamic allocation instead of reading LISTCAT output. Dynamically allocate successive generations and process them until you get a dynamic allocation failure.
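The dynamic-allocation probe Robert describes can be sketched in REXX roughly as follows. This is a sketch only: the dataset name and DD name are hypothetical, BPXWDYN is assumed to be available (it is the standard z/OS dynamic allocation interface callable from REXX), and its exact behaviour should be verified against your system's documentation.

```rexx
/* Sketch: probe one absolute GDG generation with BPXWDYN.          */
/* A zero return code means the allocation (and so the generation)  */
/* exists; a non-zero code is examined instead of parsing LISTCAT.  */
dsn = 'PROD.INPUT.GDG.G0042V00'            /* hypothetical DSN      */
rc = bpxwdyn("ALLOC DD(INGDG) DA('"dsn"') SHR MSG(WTP)")
if rc = 0 then do
   /* generation exists: read/process DD INGDG here, then free it  */
   call bpxwdyn "FREE DD(INGDG)"
end
else
   say 'Allocation failed, RC='rc' (no such generation, or in use)'
```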
PeterHolland

Global Moderator


Joined: 27 Oct 2009
Posts: 2481
Location: Netherlands, Amstelveen

PostPosted: Sat Dec 14, 2013 7:59 pm

I would set a trigger BEFORE the data is received.

And jobs submitted by TWS that are not able to run? Then there is something terribly wrong with the job scheduling/job class/initiator definitions.

I, at least, have never seen that happen.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 10:00 pm

The scheduling glitch that results in the job not getting triggered occurs very rarely. I have been told that it happens when an ETT is set while the current day's plan is loading. But that is only one of the issues. I need something that has no chance of breaking as a result of a future system upgrade.
I am required to process the gens in the order they arrive, so I cannot process the (0) gen and work backwards. So I think I do need to save the G0000V00 and do a LISTCAT. The new generation to be processed is written to a sequential file, picked up by a subsequent step, allocated dynamically, and processed.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 10:22 pm

Even if there were no scheduling glitch, the issue of gens getting skipped would still remain, because the jobs getting initiators and beginning to run is independent of new generations being created by the remote system. Ideally the remote system should wait for a signal from the mainframe after a gen is processed before sending the next gen, but we cannot ask the remote system to change their process.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 10:34 pm

The issue will still be there even if the trigger is set before the file is sent. It does not guarantee that the first job will have run and completed before the next file is sent by the remote system.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 10:39 pm

Robert Sample's solution would have been good if I did not have to process the older generations first. I could have dynamically read each generation and moved them off the GDG one by one via the REXX.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sat Dec 14, 2013 10:42 pm

Is there a way to dynamically process generations starting with the oldest one without doing the LISTCAT? LISTCAT is the one wildcard being used that could potentially produce output in a different format after a future system-level upgrade.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sun Dec 15, 2013 12:15 am

The Catalog Search Interface via REXX? Would that be unaffected by future system changes?
gcicchet

Senior Member


Joined: 28 Jul 2006
Posts: 1702
Location: Australia

PostPosted: Sun Dec 15, 2013 3:25 am

Hi,

even if you manage to correctly generate the job(s) with the correct GDG(s),
in the event of 2 jobs waiting to run and the first one failing, you will now process files out of sequence, unless the process aborts on incorrect sequencing.

Gerry
Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2454
Location: Hampshire, UK

PostPosted: Sun Dec 15, 2013 3:44 am

Why not implement a signalling system: a file is not sent to you until the sender receives confirmation from you that the last file was processed?
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8700
Location: Dubuque, Iowa, USA

PostPosted: Sun Dec 15, 2013 4:28 am

Quote:
Robert samples solution would have been good if I did not have to process the older generations first. I could have dynamically read each generation and moved them off the gdg one by one via the rexx
You obviously do not understand what I said. Your update program should write, into a file, the last generation processed. When the update program runs, it first reads this file to get the last generation processed and then uses dynamic allocation to access the next generation followed by the next followed by the next until the latest (most recent) generation has been processed. This handles them in the creation sequence, which is what you need.
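Robert's scheme could be sketched in REXX roughly as below. Everything named here is a hypothetical stand-in (the GDG base, the DD name, the checkpoint value), BPXWDYN is assumed as the dynamic allocation interface, and error handling is reduced to the essentials; treat it as a sketch of the technique, not a drop-in implementation.

```rexx
/* Sketch: process generations in creation (FIFO) order, resuming    */
/* from the last generation recorded in a checkpoint dataset.        */
base    = 'PROD.INPUT.GDG'                 /* hypothetical GDG base  */
lastgen = 42                               /* read from checkpoint   */

gen = lastgen + 1
do forever
   gname = base'.G'right(gen, 4, '0')'V00' /* absolute GnnnnV00 name */
   if bpxwdyn("ALLOC DD(INGDG) DA('"gname"') SHR") <> 0 then leave
   /* ... read and process DD INGDG here ...                         */
   call bpxwdyn "FREE DD(INGDG)"
   lastgen = gen                           /* rewrite checkpoint     */
   gen = gen + 1
end
say 'Caught up: last generation processed was' lastgen
```

Note the built-in assumption that generation numbers simply increase by one: a generation deleted and recreated (giving a V01 suffix), or wrap-around at G9999, would defeat a naive increment, which is exactly the kind of edge case debated later in this thread.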

And the entire thread is a perfect example of a BROKEN DESIGN. If the design were done correctly, all of this would have been considered and the design of the system would have been considerably different. While it is possible to fix broken design operationally, it always takes much more effort and time to fix than doing the proper design in the first place.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sun Dec 15, 2013 6:55 am

As I said before: 1) the existing process does save the generation; 2) it does use it as a pointer into the LISTCAT listing to find the next generation; 3) a subsequent step then dynamically allocates and processes the gen. All of that is already being done, and the process does work ordinarily. But please read this: the main issue is the reliance on LISTCAT to identify the gens that are there. The listing is parsed based on certain assumptions about the format. If those assumptions become false due to FUTURE SYSTEM UPGRADES, the process may break. THAT is the main issue. If there is a way to process gens in FIFO order without using LISTCAT, we will have a process that will work across system upgrades. Please let me know.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sun Dec 15, 2013 7:12 am

See, if this had been possible, I would not have had to save the generation or do a LISTCAT. I would simply have had to dynamically read each generation and move them off the GDG, so the next time the job reads the GDG the processed generation would not be there and the next gen gets picked up. BUT I need to process in FIFO order, so this won't work.
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8700
Location: Dubuque, Iowa, USA

PostPosted: Sun Dec 15, 2013 7:18 am

You do NOT need to run LISTCAT to find generations -- if the dynamic allocation failed, look at the return code to see why.

And your process is broken if you think LISTCAT is required -- even more broken when you think a system upgrade will change LISTCAT to make your process not work. Why are you spending all this time worrying about something THAT MAY NEVER OCCUR? And even if LISTCAT output is changed, it would take minutes to change the code to fix the problem -- so you have ALREADY spent far more time and energy worrying about this "problem" than could ever be justified.

And do NOT try to say this is "the requirement" -- that is a phrase used on this forum by those who do not care to think (or cannot think) independently to solve system problems.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Sun Dec 15, 2013 8:47 am

I do see the wisdom in your words, Mr Sample.
I can use one of the dynamic allocation programs with REXX to pull DSNs from (0) backwards till I find the saved DSN from the last run, and then pick the DSN directly after that. No need for LISTCAT. Thank you so much for your patience and time. You rock :)
Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2454
Location: Hampshire, UK

PostPosted: Sun Dec 15, 2013 11:42 pm

Or you could read the file holding the last generation processed and then allocate the next: no need to read backwards through the GDSs. Simply read forward, processing one at a time, until allocation fails because there are no more generations left to process.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Mon Dec 16, 2013 4:03 am

Are you suggesting reading the next gen by incrementing the G0000V00 number manually (for example, processing G0002V00 if the saved gen is G0001V00), rather than reading using relative numbers (0), (-1), and so on?
Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2454
Location: Hampshire, UK

PostPosted: Mon Dec 16, 2013 4:23 pm

Naturally. I do not know if you could dynamically allocate using relative generations so, of course, you have to specify the full DSN.
harisukumaran

New User


Joined: 14 Jun 2005
Posts: 75

PostPosted: Mon Dec 16, 2013 5:28 pm

Manually incrementing the G0000V00 could lead to problems. Also, I cannot start from the oldest gen and read forward without determining the number of active gens; if you know how to do this without parsing a LISTCAT listing, please let me know. I can use BPXWDYN to read the GDG using relative numbers and get the DSN. If I start from (0) and read backwards, I don't need to know the number of active gens in advance.
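The backwards walk described here might look roughly like the REXX sketch below. It assumes, as the poster suggests, that BPXWDYN accepts a relative GDG reference in DA() and that its RTDSN key returns the absolute dataset name of what was allocated; both should be checked against your system's BPXWDYN documentation, and all dataset names here are hypothetical.

```rexx
/* Sketch: walk backwards from (0), resolving each relative          */
/* generation to its absolute name via BPXWDYN's RTDSN key, and      */
/* stop at the DSN saved from the previous run.                      */
base  = 'PROD.INPUT.GDG'                   /* hypothetical GDG base  */
saved = 'PROD.INPUT.GDG.G0041V00'          /* from checkpoint file   */
n = 0
do rel = 0 by -1
   if bpxwdyn("ALLOC DD(W) DA('"base"("rel")') SHR RTDSN(DSN)") <> 0 then leave
   call bpxwdyn "FREE DD(W)"
   if dsn = saved then leave               /* older gens already done */
   n = n + 1
   todo.n = dsn                            /* remembered newest-first */
end
do i = n to 1 by -1                        /* process oldest first    */
   say 'would process' todo.i
end
```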
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8796
Location: Welsh Wales

PostPosted: Mon Dec 16, 2013 5:43 pm

Why not have just one job that processes the GDG base, then copies all processed data to a new GDG and deletes the current input GDS datasets?
Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2454
Location: Hampshire, UK

PostPosted: Mon Dec 16, 2013 5:54 pm

Quote:
Manually incrementing the g0000v00 could lead to problems.
What problems if you code it correctly?
Quote:
I cannot start from the oldest gen and read forward without determining number of active gens
Why not? You can either loop through, processing each dataset one by one, or you can determine the number to be processed by incrementing the generation number and testing for the dataset's existence. Once you find one that does not exist, you know how many you need to process.
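The existence-testing loop could be sketched with the TSO/E REXX SYSDSN function, which returns 'OK' for a cataloged, existing dataset. The GDG base and starting generation number below are hypothetical, and the same caveat applies as elsewhere in the thread: this assumes contiguous GnnnnV00 numbers with a V00 suffix.

```rexx
/* Sketch: count forward from a known generation, testing existence  */
/* with SYSDSN, until a generation is missing.                       */
base = 'PROD.INPUT.GDG'                    /* hypothetical GDG base  */
gen  = 43                                  /* first candidate        */
do while sysdsn("'"base".G"right(gen, 4, '0')"V00'") = 'OK'
   gen = gen + 1                           /* this one exists        */
end
say 'Highest existing generation is G'right(gen - 1, 4, '0')'V00'
```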