I am having a serious problem:
My job runs in region2, and it only triggers when we get a file from region1.
Consider 6 files (1,2,3,4,5,6).
If file1 is created at region1, a job triggers and NDMs file1 from region1 to region2; then a job runs in region2 and file1 is processed.
Now suppose file2 is generated and the NDM job is triggered, but region2 is down, so file2 is not processed in region2.
Meanwhile file3 is generated at region1 and is NDMed to region2 as well.
Now we have two files, file2 and file3, sitting in region2, and region2 comes back up only after file3 has arrived.
Because of the time and file dependency, the job will pick up file3 and will not process file2, but we need to process all 6 files in sequence: file(1,2,3,4,5,6).
Do we have any solution for this? After processing a file in region2 we delete it, but in region1 we store it in a GDG with a limit of 255 versions.
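To make the failure mode concrete, here is a minimal sketch (hypothetical names, not the real job logic) of the behaviour described above: when region2 comes back up, a time/file-triggered job grabs the newest file present and processes only that one, so a file that landed while region2 was down gets skipped.

```python
def current_region2_trigger(files_present):
    """Sketch of the current behaviour: pick the newest file in region2
    and process only that one, ignoring anything older left behind."""
    newest = max(files_present, key=lambda f: f["arrived"])
    return newest["name"]

# file2 landed while region2 was down; file3 landed just before region2 came up.
files = [{"name": "file2", "arrived": 10}, {"name": "file3", "arrived": 20}]
print(current_region2_trigger(files))  # file3 - file2 is skipped
```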
Joined: 26 Apr 2004 Posts: 4650 Location: Raleigh, NC, USA
The only way I see this process working is to logically serialize the data content when the datasets are produced on region1 (for example, embed a sequence number). Then the process on region2 just needs to look at the content of the data and figure out which of the six datasets need to be processed, and in which order.
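The serialization idea above can be sketched as follows. This is a minimal illustration, assuming a naming convention where the trailing digits of the filename carry the sequence number and region2 keeps a checkpoint of the last sequence it finished; only the next expected file (and any contiguous run after it) is processed, so a gap like a missing file2 blocks file3 until it arrives.

```python
import re

def next_to_process(pending, last_done):
    """Return the files that may be processed now, strictly in sequence,
    plus the updated checkpoint. Files after a gap stay pending until
    the missing sequence number shows up."""
    # Hypothetical convention: trailing digits in the name are the sequence.
    numbered = sorted((int(re.search(r"(\d+)$", f).group(1)), f) for f in pending)
    runnable = []
    for seq, name in numbered:
        if seq == last_done + 1:
            runnable.append(name)
            last_done = seq
        else:
            break  # gap in the sequence: wait for the missing file
    return runnable, last_done

# region2 comes up holding file2 and file3; file1 was already processed.
print(next_to_process(["file3", "file2"], last_done=1))  # (['file2', 'file3'], 3)

# If only file3 had arrived, nothing would run until file2 shows up.
print(next_to_process(["file3"], last_done=1))  # ([], 1)
```

The checkpoint (`last_done`) would have to live somewhere persistent in region2, e.g. a small control dataset, since the files themselves are deleted after processing.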
Assuming that both the region1 and region2 jobs run under the same scheduler, you can place a dependency/resource on both jobs: Job1 cannot be triggered until Job2 completes, and vice versa (this works even if both jobs also have a time dependency). We use ZEKE, so we placed a RESOURCE on both jobs. Make sure that for the first cycle Job1 runs and then Job2, and the cycle of dependency continues from there.
If the jobs are not maintained under the same scheduler, then you must make application changes to both jobs. The file created in Job1 should be a GDG, not a flat file. In Job2, place exclusive control over the GDG (so that Job1 cannot create a new version while Job2 runs). When Job2 executes, it should pick up all the pending GDG generations and, at the end of Job2, delete all of them. Make sure you take a backup of these generations before you delete them in the same Job2.
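The Job2 logic described above can be sketched as a small simulation. This is an illustration only, assuming the GDG is modelled as a mapping of generation number to contents (all names here are hypothetical): drain every pending generation oldest-first, back each one up before anything is deleted, then delete all generations at the end of the job.

```python
def drain_gdg(generations, backup, process):
    """Process every pending GDG generation in creation order, back each
    one up, then delete all generations at end of job - mirroring the
    Job2 logic described above. 'generations' maps generation number to
    that generation's contents."""
    for gen in sorted(generations):      # oldest generation first
        backup(gen, generations[gen])    # backup taken before any delete
        process(generations[gen])
    generations.clear()                  # delete all generations at end of Job2

# file2 and file3 both piled up while region2 was down.
backed_up, done = [], []
pending = {2: "file2 data", 3: "file3 data"}
drain_gdg(pending, lambda g, d: backed_up.append(g), done.append)
print(done)     # ['file2 data', 'file3 data'] - processed in sequence
print(pending)  # {} - all generations deleted
```

Because Job2 holds exclusive control of the GDG while it runs, no new generation can slip in between the processing loop and the delete, which is what keeps the sequence safe.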