Multiple jobs writing to same gdg. Best practice?


dbzTHEdinosauer

Global Moderator


Joined: 20 Oct 2006
Posts: 6966
Location: porcelain throne

PostPosted: Fri May 16, 2008 8:19 pm

Jason,

you will always have the problem: if the copy job goes bust and abends, leaving the GDG unprotected (NOT exclusively locked), the +1 jobs are going to pee on your parade.
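(By "+1 jobs" I mean the feeder jobs that each just catalog a new generation against the base. Roughly something like this, with made-up dataset and program names:)

Code:
//ADDGEN   EXEC PGM=FIRMLOAD                <-- hypothetical load program
//NEWGEN   DD   DSN=PROD.FIRM.TRANS(+1),    <-- made-up GDG base
//          DISP=(NEW,CATLG,DELETE),
//          UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE),
//          DCB=(RECFM=FB,LRECL=200,BLKSIZE=0)

Nothing in that job knows or cares what state your copy/delete is in; the new generation gets catalogued regardless.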

your reluctance to 'program' the scheduler - job dependencies - leaves you in a no-win situation. Schedulers were created to solve this (and many other) production processing problems. Using job dependencies (relying on the EOJ return code of a previous job) provides you with a map of how things are interrelated.
Most shops I have been in are of the 'as few steps per job as possible' mentality, which means you have a gazillion jobs, but the scheduler does not care, and if you fully use all the options of the scheduler you can really lock down your shop and prevent this ripple-type event from occurring. Even if you had 1,000 jobs, two people could schedule them in a couple of days. How long have you been playing around trying to solve this with DISP parms and intermediate dependent jobs?

Plus, once you have it done, adding or deleting a single job is not that resource-intensive.
jasorn

Active User


Joined: 12 Jul 2006
Posts: 191
Location: USA

PostPosted: Sat May 17, 2008 4:07 am

dbzTHEdinosauer wrote:
Jason,

you will always have the problem: if the copy job goes bust and abends, leaving the GDG unprotected (NOT exclusively locked), the +1 jobs are going to pee on your parade.

Yes, the copy job has this risk, but it seems worthwhile versus trying to manage 300-500 jobs turning over constantly in a scheduler: we might have one copy job abend in a year, resolving a copy job abend is fast, the many jobs that write to the GDG base don't typically run during the time the copy step runs, and we have a process to detect when generations were dropped.
Quote:

your reluctance to 'program' the scheduler - job dependencies - leaves you in a no-win situation. Schedulers were created to solve this (and many other) production processing problems. Using job dependencies (relying on the EOJ return code of a previous job) provides you with a map of how things are interrelated.

There is no reluctance. The 300-500 jobs that write to the GDG base can't be coded as conflicts with the copy job because the scheduler's limit on conflicts is 255.

And as far as the 300-500 jobs being part of some schedule where they're dependent upon one another, that isn't an option because they absolutely are not dependent upon one another. They have totally independent schedules and run based on when outside firms send us files.

But this isn't an issue, as MODding the +1 generation and then checking to see whether it's empty works perfectly, is contained within the job, and isn't dependent upon external forces. The copy job that runs once a day is the exception, but that's not an issue, as discussed above.
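For anyone reading along, this is roughly what I mean. Dataset, program, and step names are made up, the dispositions are just one way to code it, and the empty test assumes the usual IDCAMS PRINT COUNT(1) trick (RC=4 when the file has no records):

Code:
//STEP010  EXEC PGM=FIRMLOAD                <-- hypothetical program
//OUTFILE  DD   DSN=PROD.FIRM.TRANS(+1),    <-- made-up GDG base
//          DISP=(MOD,CATLG,CATLG),
//          UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE),
//          DCB=(RECFM=FB,LRECL=200,BLKSIZE=0)
//*
//* within the same job, (+1) keeps referring to the generation
//* created above, so later steps can test it and drop it
//*
//STEP020  EXEC PGM=IDCAMS
//NEWGEN   DD   DSN=PROD.FIRM.TRANS(+1),DISP=SHR
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  PRINT INFILE(NEWGEN) COUNT(1)
/*
//* RC=4 from STEP020 means the generation is empty, so delete it
//* rather than leave an empty generation sitting in the base
//CHKEMPTY IF (STEP020.RC = 4) THEN
//STEP030  EXEC PGM=IEFBR14
//DELGEN   DD   DSN=PROD.FIRM.TRANS(+1),
//          DISP=(OLD,DELETE,DELETE)
//CHKEND   ENDIF

Everything there is self-contained in the one job, which is the point.
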
Quote:

Even if you had 1,000 jobs, two people could schedule them in a couple of days. How long have you been playing around trying to solve this with DISP parms and intermediate dependent jobs?

I've posted a bit and asked around, but I spent about a day working out a solution once I finally concluded that none of the solutions anyone offered solved the problem.

Quote:

Plus, once you have it done, adding or deleting a single job is not that resource-intensive.

That's not true at our shop. Even if there were a way to put these into the scheduler that would prevent the generations from being overwritten, it takes us half a day to modify one job in the schedule. Given our job turnover, that's too expensive.

But all of the solutions using a scheduler were based on holding up the other jobs while the one that abended was down. Not only do we not want to hold the other jobs up, we can't, as they're independent jobs and have data from other firms that needs to be processed.
Bill Dennis

Active Member


Joined: 17 Aug 2007
Posts: 562
Location: Iowa, USA

PostPosted: Mon May 19, 2008 6:41 pm

Use DISP=(OLD,DELETE,KEEP) in the copy and get rid of the separate DELETE step. Just be sure the job will actually ABEND on an error. For example, if you use SORT to do the copy/delete, it might just pass CC=16 on an error and the files could still delete! Two copy steps would be even better: OLD,KEEP on the first and OLD,DELETE,KEEP on the next.
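Roughly like this for the two-step flavor. Dataset names are made up, IEBGENER is just a stand-in for whatever does the copy, the second step here is only an IEFBR14 carrying the delete disposition rather than an actual second copy, and I've added an IF guard so a bad return code (the SORT CC=16 case) also leaves the input alone:

Code:
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   DUMMY
//SYSUT1   DD   DSN=PROD.FIRM.TRANS,        <-- base name = all generations
//          DISP=(OLD,KEEP,KEEP)
//SYSUT2   DD   DSN=PROD.FIRM.TRANS.BKUP(+1),
//          DISP=(NEW,CATLG,DELETE),
//          UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE),
//          DCB=*.SYSUT1
//*
//* the delete happens only here, and only if the copy gave RC=0;
//* on an abend or a nonzero RC the input generations stay put
//CHKCOPY  IF (COPY.RC = 0) THEN
//DELGENS  EXEC PGM=IEFBR14
//OLDGENS  DD   DSN=PROD.FIRM.TRANS,
//          DISP=(OLD,DELETE,KEEP)
//CHKEND   ENDIF
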
jasorn

Active User


Joined: 12 Jul 2006
Posts: 191
Location: USA

PostPosted: Tue May 20, 2008 4:15 am

Bill Dennis wrote:
Use DISP=(OLD,DELETE,KEEP) in the copy and get rid of the separate DELETE step. Just be sure the job will actually ABEND on an error. For example, if you use SORT to do the copy/delete, it might just pass CC=16 on an error and the files could still delete! Two copy steps would be even better: OLD,KEEP on the first and OLD,DELETE,KEEP on the next.

Yes, this is the original solution I proposed for the copy job. It's not clear to me whether we're going to do this or not.