I've used two approaches in the past:
- Maintain a log file. When you use a generation, record it in the log file. On the next batch cycle, compare the existing generations against the log file. If you find a new one, process it, add it to the log file, and continue the processing cycle.
- Rename the processed generation. I'd rename the version number from (V00) to (V01) as the indicator that it has been processed. This can be really handy if you ever need to re-process an older generation: just rename it back to (V00) and let the batch process pick it up on the next cycle.
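The two approaches above can be sketched generically off-platform. This is a minimal simulation, not actual mainframe code: the function names, the log-file layout (one data set name per line), and the idea of flipping a V00/V01 suffix on a plain string are all illustrative assumptions standing in for the real log file and IDCAMS rename.

```python
import os

def unprocessed_generations(gen_names, log_path):
    """Log-file approach: return generations not yet recorded in the log.

    gen_names: list of generation data set names currently cataloged.
    log_path:  path to the log file, one processed name per line (assumed layout).
    """
    processed = set()
    if os.path.exists(log_path):
        with open(log_path) as f:
            processed = {line.strip() for line in f if line.strip()}
    return [g for g in gen_names if g not in processed]

def mark_processed(gen_name, log_path):
    """Append a generation to the log file once it has been handled."""
    with open(log_path, "a") as f:
        f.write(gen_name + "\n")

def mark_by_rename(gen_name):
    """Rename approach: flip the V00 suffix to V01 as the 'processed' flag.

    Purely a string-level illustration; on the mainframe the rename itself
    would be done with a catalog utility, not Python.
    """
    if gen_name.endswith("V00"):
        return gen_name[:-3] + "V01"
    return gen_name
```

A batch cycle would call `unprocessed_generations` first, handle each name it returns, and then either `mark_processed` or `mark_by_rename` so the next cycle skips it.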
Either set up individual files for each time the job creating the file can run, or have your job process AND DELETE all generations of the GDG. If you need to retain the data, you would need to define a second generation data group to hold it.
Suppose my new file-creation job runs at 5:30. Then any GDG generations created before 5:30 shouldn't be considered for processing (it should consider only the generations created by the 6 PM and 7 PM jobs).
The last I heard, time travel was still considered impossible by the laws of physics. If YOUR job runs at 5:30, how is it going to know about the generations created at 6 or 7 without time travel? If you mean your job is going to wait until they are created, that is what data set triggers in job schedulers are for.
My conclusion is that either you cannot describe your situation adequately for us to help you, or you are attempting to subvert the laws of physics for some reason.