archana12
New User
Joined: 24 Jan 2008 Posts: 4 Location: Bangalore
I have a job that pulls files from another server via FTP; it keeps pulling different files until all of the data has been transferred.
I am facing a system issue: due to traffic congestion, the job hangs in the middle and not all of the files are pulled.
I am thinking of changing the setup so that if this job does not complete within a specific time, another job is triggered which would be errored out, so that we are notified of the problem.
The hung job does not return any RC; it keeps running continuously until there is manual intervention.
I want to avoid this manual intervention. If the job hangs and does not complete within half an hour, another job in the schedule should be triggered which will end in error.
Can this solution be provided in scheduling? If yes, please let me know whether such a mechanism is available in TWS scheduling.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
You might consider having the remote systems "push" the files to the mainframe. If there is a "last file" that is uploaded, you could use a dataset trigger to submit the job after the last dataset was uploaded.
If there is no predictable last file, you could periodically (say every 10 minutes) run a process to check whether all of the files are available. When all are available, submit the job that uses these files. When they are not all on the mainframe, store the time of the first try somewhere (some file or database table). Once the existence-check job has been trying for 1/2 hour, submit the job that presents the error.
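The check-and-timeout logic above can be sketched as a small decision routine. This is only an illustration under assumptions: the file names, the place where the first-try time is stored, and the job-submission hooks are all hypothetical and site-specific; in practice this would run as the scheduled 10-minute step.

```python
# Decide what the periodic existence-check job should do on each run.
#   "run"   -> all files have arrived; submit the real job
#   "wait"  -> still missing files, within the half-hour window
#   "error" -> missing files and the window has expired; submit the error job
def decide(present, first_try, now, required, timeout_secs=1800):
    if all(f in present for f in required):
        return "run"      # everything is available
    if first_try is None:
        return "wait"     # first failed check: caller records `now` somewhere
    if now - first_try >= timeout_secs:
        return "error"    # half an hour of trying is enough; raise the alarm
    return "wait"         # keep checking on the next cycle

# Example: two of three files present, 35 minutes after the first try
print(decide({"A", "B"}, first_try=0, now=2100, required=["A", "B", "C"]))
# -> error
```

The caller persists `first_try` (a file or database row, as suggested above) and clears it once the job runs, so each daily cycle starts a fresh window.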
If the process needs many files, you might consider running with what is available and using the rest on the next run. One of the systems I supported was the daily processing for 322 manufacturing and distribution points in several time zones. With so many, it was quite regular to be "missing files", so we could not wait until all were received.
archana12
New User
Joined: 24 Jan 2008 Posts: 4 Location: Bangalore
Thanks Dick. My concern is that the very first step of the job is the FTP, and that is what gets hung. If this job does not complete in 10 minutes, can I trigger another job which will be errored out?
Any reference or info on how exactly I can proceed would be really helpful.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
It appears that my reply was not clear. To do as I suggested, there would be no FTP step in the job - the FTP processes would be initiated by the remotes and the data "pushed" to the mainframe. Then either a "last dataset" would trigger the submission of "the job", or a periodic check of the availability of the datasets would trigger the job when all were available.
None of my sites use TWS, so I don't know whether checking how long a job has been running can be used to trigger some other job (as well as terminate the problem job).
You might also talk with whoever supports your FTP product and see if there is a way to change the time-out so that the FTP job does not "wait forever". . .
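As a hedged sketch of that last suggestion: if the site uses the z/OS Communications Server FTP client, time-outs are set with timer statements in the client's FTP.DATA. The statement names, defaults, and availability vary by FTP product and release, so treat the values below as assumptions to verify against your own FTP documentation:

```
; FTP.DATA client timer statements (values in seconds)
; give up if there is no data-connection activity for 2 minutes
DATACTTIME 120
; close an inactive control connection after 5 minutes
INACTTIME  300
```

With timers like these in place, a stalled transfer ends with a non-zero RC instead of hanging, so ordinary scheduling dependencies can drive the error notification without a separate watchdog job.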