Rene Vincent
New User
Joined: 28 Dec 2020 Posts: 3 Location: Canada
Hi,
I have a JCL that generates a GDG, MY.FILE(+1).
I would like to capture the generated volume's actual name, MY.FILE.G0001V00, in a file later in the same JCL.
Is this possible?
Thanks.
Rohit Umarjikar
Global Moderator

Joined: 21 Sep 2010 Posts: 2992 Location: NYC,USA
Welcome!
Try LISTCAT, and use DFSORT or SYNCSORT to copy only what you need from the LISTCAT output.
One more hint: search this forum and you will find similar posts discussed in the past.
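For example (all dataset names here are placeholders, and both the IDCAMS SYSPRINT attributes and the column layout of LISTCAT output should be verified on your system before relying on the INCLUDE positions), an IDCAMS step followed by a SORT copy step could look like:

Code:
//LISTGDG  EXEC PGM=IDCAMS
//SYSPRINT DD DSN=&&LCOUT,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(TRK,(5,5)),DCB=(RECFM=FBA,LRECL=133)
//SYSIN    DD *
  LISTCAT LEVEL(MY.FILE) NAME
/*
//EXTRACT  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=&&LCOUT,DISP=(OLD,DELETE)
//SORTOUT  DD DSN=MY.GDG.NAMES,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=44)
//SYSIN    DD *
  OPTION COPY
* Column 1 of SYSPRINT is carriage control, so the entry type
* starts in column 2 and the dataset name around column 18.
  INCLUDE COND=(2,7,CH,EQ,C'NONVSAM')
  OUTREC FIELDS=(18,44)
/*

This writes the G0000V00 names of all cataloged generations to MY.GDG.NAMES; isolating only the newest generation is a small additional sort step.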
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1037 Location: Richmond, Virginia
You want the generated dataset name, not the volume name.
A volume refers to the storage medium: tape, disk, etc.
Joerg.Findeisen
Senior Member

Joined: 15 Aug 2015 Posts: 1023 Location: Bamberg, Germany
If REXX is preferred, you can use something like this:
Code:
address "TSO"
/* Ask BPXWDYN for the real DSN behind the DD named ALLOC */
if bpxwdyn('info fi(ALLOC) inrtdsn(dsn)') = 0 then say dsn
ALLOC would be the DD statement from which you want to retrieve the DSN.
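For completeness, a batch wrapper for such an exec might look like the following (the exec library, the member name GETDSN, and the output dataset are made up for the example; within the same job, MY.FILE(+1) still resolves to the generation created earlier). The exec itself would write dsn to DSNOUT with EXECIO rather than just SAY:

Code:
//CAPTURE  EXEC PGM=IKJEFT01
//SYSEXEC  DD DISP=SHR,DSN=MY.REXX.LIB         placeholder exec library
//ALLOC    DD DISP=SHR,DSN=MY.FILE(+1)         DD queried by BPXWDYN
//DSNOUT   DD DSN=MY.DSN.LIST,DISP=(MOD,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  %GETDSN
/*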
expat
Global Moderator

Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
I just have to ask why you would want to do this.
sergeyken
Senior Member

Joined: 29 Apr 2008 Posts: 1751
Rene Vincent wrote:
Hi,
I have a JCL that generates a GDG, MY.FILE(+1).
I would like to capture the generated volume's actual name, MY.FILE.G0001V00, in a file later in the same JCL.
Is this possible?
Thanks.
Do you need this just for fun?
The main reason GDGs were introduced many years ago was to avoid, by all means, the need to know the real DSN of each particular dataset in the group.
Rene Vincent
New User
Joined: 28 Dec 2020 Posts: 3 Location: Canada
The question is 'Why do I want to do this?'.
The JCL will read the GDG, split the contents into multiple files, then FTP each of the files to different directories on our MFT server.
The process that creates the GDG runs every 10 minutes.
If we lose connection to the MFT server, the job will fail and will continue to fail until the connection is restored. At that time, the failed transfers will have to be reissued.
When the JCL fails, it creates a new JCL that will be used to resend the transfers. For each failure, the JCL is appended with a step to transmit the latest failures. For each of those steps I need to know the GDG dataset name. I cannot use (-1), (-2), etc., as these will become invalid once the connection is restored and transfers are working.
sergeyken
Senior Member

Joined: 29 Apr 2008 Posts: 1751
Dynamically creating new JCL with fixed DSNs after FTP failures is a good way to multiply the mess in the processing logic.
I’m pretty sure the whole process could be reorganized to avoid this problem. The initial idea is to split the original JCL into two independent parts:
1. The “splitter” job: every 10 minutes or so it takes the original GDG(+0) and splits it into several secondary datasets, ...PARTx.TOFTP.GDG(+1). All those secondary TOFTP.GDG generations accumulate until the second JCL takes care of them.
2. The “transfer” job: whenever the FTP connection is operational, it takes the ...PARTx.TOFTP.GDG(+0) datasets, all at once or one by one, and transfers them to the required directories. On success, those GDG(+0) are deleted via DISP=(...,DELETE), so that the next non-transferred TOFTP.GDG(-1) becomes TOFTP.GDG(+0), ready for the subsequent run of the same “transfer” job. If a transfer fails, use DISP=(...,KEEP) to keep the same dataset(s) for the next attempt.
Once all pending datasets have finally been transferred, the next attempt to run this “transfer” JCL should fail due to the missing TOFTP.GDG(+0).
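A sketch of such a “transfer” job (server name, directory, dataset names, and credentials are placeholders; in practice the credentials would come from a NETRC dataset rather than instream, and the z/OS FTP client can read the local file through a DD):

Code:
//FTPSTEP  EXEC PGM=FTP,PARM='(EXIT'
//TOSEND   DD DISP=SHR,DSN=MY.PART1.TOFTP.GDG(0)
//SYSPRINT DD SYSOUT=*
//INPUT    DD *
mft.example.com
myuser
mypass
put //DD:TOSEND /target/dir/part1.dat
quit
/*
//* Delete the generation only when the transfer ended with RC=0;
//* otherwise it stays cataloged for the next attempt.
//DELSTEP  EXEC PGM=IEFBR14,COND=(0,NE,FTPSTEP)
//DELDD    DD DSN=MY.PART1.TOFTP.GDG(0),DISP=(OLD,DELETE)

With the (EXIT option the FTP client ends with a nonzero return code on any error, so the COND on DELSTEP makes the delete run only after a clean transfer.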
This is just the first idea that came to mind within three minutes. I’m pretty sure many other processing schemes can be devised when thinking about the task more globally.
Rohit Umarjikar
Global Moderator

Joined: 21 Sep 2010 Posts: 2992 Location: NYC,USA
In an ideal scenario, 'the process that creates the GDG runs every 10 minutes' needs to be stopped immediately until the previous failure is resolved by the production support group. If it is not, you can end up in an endless loop: if transfers keep failing for the next hour, you will have that many more GDGs to concatenate, you may run out of the GDG LIMIT, and the older generations will get scratched. Redesign the process to avoid all this.
By the way, did you look at any of the solutions suggested earlier? These links should help you get the last cataloged generation:
ibmmainframes.com/about44245.html
ibmmainframes.com/about50967.html
Rene Vincent
New User
Joined: 28 Dec 2020 Posts: 3 Location: Canada
In no way, shape, or form is this an 'ideal scenario'. It is a Q&D solution to an issue brought on by COVID.
I cannot stop the GDG creation, as it is performed by a self-triggering CICS transaction outside of my control.
What I am trying to achieve is a way to resend the failed transmissions when they occur (rarely, I hope). At most I could see 20 generations affected (3 hours of FTP downtime).
Thank you all for the suggestions.
Rohit Umarjikar
Global Moderator

Joined: 21 Sep 2010 Posts: 2992 Location: NYC,USA
Good luck, you have a suggested solution!