@shim
New User
Joined: 28 Oct 2021 Posts: 10 Location: India
I have a requirement which I am implementing in REXX, but I am not sure whether that is a good approach or whether it will hurt performance. If you could share your views, that would be helpful.
An external system puts data on the mainframe with a specific dataset name pattern. The transfers are not scheduled and the files can arrive at any time (more than 20 times in a business day), and they want the files to be processed immediately, not at a scheduled interval.
In the REXX exec I use LISTCAT inside a DO FOREVER loop to identify the datasets, do the subsequent processing and finally delete them.
My question: I am doing this in a forever loop without any sleep/wait, so the code applies LISTCAT continuously to check for new datasets. My understanding is that this will have a performance impact. Is there a better way to achieve this that can be implemented in REXX? Any advice is appreciated. Thanks in advance!
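The check inside the loop is roughly along these lines (only a sketch; EXTSYS.INBOUND is a made-up qualifier standing in for the real dataset name pattern):
Code:
/* one pass of the polling check - simplified sketch               */
x = outtrap('cat.')
address tso "LISTCAT LEVEL('EXTSYS.INBOUND')"
listc_rc = rc                    /* nonzero usually means no match */
x = outtrap('off')
if listc_rc = 0 then
  do i = 1 to cat.0
    if pos('IN CATALOG', cat.i) > 0 then iterate  /* skip headers  */
    dsn = strip(cat.i)                   /* one dataset name/line  */
    /* ... process "dsn", then delete it ... */
  end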
Joerg.Findeisen
Senior Member
Joined: 15 Aug 2015 Posts: 1335 Location: Bamberg, Germany
Consider using a dataset trigger in the job scheduler and let the EXEC run when that trigger is activated, i.e. whenever a new dataset is created.
Willy Jensen
Active Member
Joined: 01 Sep 2015 Posts: 734 Location: Denmark
A trigger, as stated above, is the best solution. Failing that, you can certainly use a REXX program, but I don't believe they really need immediate processing, just processing within a reasonable time.
REXX unfortunately does not have a built-in WAIT function, but you can use the USS sleep service instead, like:
Code:
call syscalls 'ON'
do forever
   /* check for the dataset here */
   address syscall 'sleep 10'
end
to wait 10 seconds between iterations. Introducing such a wait means there is virtually no impact on the system, whereas an unrestricted loop should be avoided and would quickly lead to an S322 abend (time limit exceeded).
@shim
New User
Joined: 28 Oct 2021 Posts: 10 Location: India
Thanks Joerg & Willy for your responses. This helps!
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2594 Location: Silicon Valley
Quote:
an external system puts the data on the mainframe
If I were doing this...
That external system knows exactly when it uploaded a new file. I would have that external system use FTP to submit a job to process the new file. This has the benefit that it knows the exact name of the file, so you do not have to search for it.
In FTP, set the file type to JES and then use the PUT command for a file containing JCL to submit a job.
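Something along these lines from the sending side once it is logged in to the z/OS FTP server (process.jcl is just a made-up name for the local file holding the JCL):
Code:
site filetype=jes
put process.jcl
quit
With FILETYPE=JES in effect the PUT does not create a dataset; the file is handed to JES and the server replies with the job id it was assigned. From a client that does not pass SITE through directly, the same thing can be sent as QUOTE SITE FILETYPE=JES.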
Joerg.Findeisen
Senior Member
Joined: 15 Aug 2015 Posts: 1335 Location: Bamberg, Germany
Pedro wrote:
In FTP, set the file type to JES and then use the PUT command for a file containing JCL to submit a job.
What can be submitted this way may be restricted. Users can specify FILETYPE=JES on an FTP PUT to get jobs into the system in batch, but JES under z/OS makes a SAF call against the JESJOBS class when batch jobs enter the reader, and this is done for all batch jobs, not just those coming in through FTP.
See also JESINTERFACELEVEL (https://www.ibm.com/docs/en/zos/2.4.0?topic=protocol-jesinterfacelevel-ftp-server-statement)
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10888 Location: italy
If the people sending the file through FTP belong to a different organization (as quite often happens these days), it would be a very bad idea to have them submit jobs. FTP with file triggering would be the best solution, IMO.