hemanthj642
New User
Joined: 14 Sep 2005 Posts: 21
Hi,
I have a request to copy a mainframe file to Notepad.
I used the transfer-file option after entering 6 (Command) on the ISPF menu.
As the file is very big, the transfer was taking a long time, so I aborted it.
After that I googled and learnt that a mainframe file can be zipped using the tool PKZIP.
I got the job below from one of the sites.
//STEP01 EXEC PGM=PKZIP
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//RTFILE DD DISP=SHR,DSN=TMU.FTP1AQF.CDW.INPUT1.G0009V00
//ZIPFILE DD DSN=MJI07.FTP1AQF.CDW.INPUT1.G0009.MANO,
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(5,5),RLSE)
//SYSIN DD *
-ECHO
-INFILE(RTFILE)
-ARCHIVE_OUTFILE(ZIPFILE)
-ACTION(ADD)
//*
I got an S806 abend (load module not found) when I executed the above job.
Can anyone please tell me what needs to be installed on my system to avoid this error?
Thanks
PeterHolland
Global Moderator
Joined: 27 Oct 2009 Posts: 2481 Location: Netherlands, Amstelveen
PKZIP needs to be installed, or if it is, you probably need a STEPLIB.
But it is better to ask your system maintenance people.
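If PKZIP is installed but its load library is not in the system linklist, adding a STEPLIB DD to the step is usually enough to clear the S806. A minimal sketch, assuming a library name of SYS2.PKZIP.LINKLIB (the real dataset name is site-specific; your systems programmers will know it):

//STEP01   EXEC PGM=PKZIP
//STEPLIB  DD DISP=SHR,DSN=SYS2.PKZIP.LINKLIB
//SYSPRINT DD SYSOUT=*

The rest of the step stays as in the original job; only the STEPLIB is new.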
vasanthz
Global Moderator
Joined: 28 Aug 2007 Posts: 1742 Location: Tirupur, India
Instead of option 6, you could try FTP GET'ing the file from a command prompt.
AFAIK FTP has always been much faster than the option 6 IND$FILE transfer, and FTP does not hold your session.
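A rough sketch of what that looks like from a Windows command prompt; the host name and output path here are made up, and you would log in with your own TSO credentials:

C:\> ftp mvs.example.com
User: youruser
Password: ********
ftp> get 'TMU.FTP1AQF.CDW.INPUT1.G0009V00' C:\temp\input1.txt
ftp> quit

Quoting the dataset name makes the FTP server treat it as fully qualified instead of prefixing your user id.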
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10873 Location: italy
we are just wasting time on the silliness of people
picking <random> jcl from the net and complaining that it does not run in their environment
the topic should have never appeared here
topic will be vaporized in a while
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
Mainframes do not support zip by default. If your site has not spent the money to install zip software, what you want to do cannot be done, since there is no zip software to execute. As was said, talk to your site support group, as only someone working at your site can tell you what is available there.
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 580 Location: London
You could create a compressed DFDSS (ADRDSSU) DUMP of the dataset and XMIT that, although both sites would need DFDSS to be able to dump and restore the dataset. Also consider RACF access and dataset aliases at the target site.
1. Run the backup at the source site:
//S0 EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//BKP DD DSN=dump.file.name,DISP=(,CATLG),
// SPACE=(1,(1000,100),RLSE),AVGREC=M
//SYSIN DD *
DUMP ODD(BKP) ALLE COMPRESS OPT(4) SHR TOL(ENQF) -
DS(INC( -
your.file.name -
))
2. XMIT your dump.file.name to the target site.
3. At the target site run this:
//S0 EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//BKP DD DSN=dump.file.name,DISP=SHR
//SYSIN DD *
REST IDD(BKP) TGTA(SRC) NMC NSC VOLCOUNT(ANY) CAT -
DS(INC( -
your.file.name -
))
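Step 2 above can be done with the TSO TRANSMIT/RECEIVE commands. A sketch, with a made-up node and user id; the intermediate XMIT-format dataset must be moved between sites as binary (C:D, FTP in binary mode, etc.):

At the source site (TSO):
  XMIT NODEID.USERID DATASET('dump.file.name') OUTDATASET('userid.dump.xmit')

At the target site (TSO), after the XMIT-format file has arrived:
  RECEIVE INDATASET('userid.dump.xmit')
  (reply to the prompt with: DATASET('dump.file.name'))

OUTDATASET/INDATASET keep the transmission in a dataset instead of sending it over NJE, which is what lets you move it by FTP or C:D.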
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
Pete, your method is NOT guaranteed to work. From the Storage Administration Reference manual, with emphasis added:
Quote:
2.3.7 COPYDUMP Command for DFSMSdss
With the COPYDUMP command, you can make from 1 to 255 copies of DFSMSdss-produced dump data. The data to be copied, a sequential data set, can be on a tape or a DASD volume, and copies can be written to a tape or a DASD volume. If the dump data is produced from multiple DASD volumes by using a physical data set dump operation, you can selectively copy the data from one or more of those volumes.
The COPYDUMP command cannot change the block size of the DFSMSdss dump data set. If you are copying a dump data set to a DASD device, the source block size must be small enough to fit on the target device.
Notes:
1. Extra dump tapes can be used for such things as disaster recovery backup or distribution of dumped data (for example, a newly generated system).
2. COPYDUMP is the only supported method for copying DFSMSdss dump data sets. Using a copy produced by any other method or utility as input to a RESTORE operation can produce unpredictable results.
I have tried to use an XMIT file of a DF/DSS dump and the RESTORE failed; whether or not it will work in any particular case I don't know for sure.
Anuj Dhawan
Superior Member
Joined: 22 Apr 2006 Posts: 6250 Location: Mumbai, India
First, this sounds to me like a site-specific question. As the subject line says, "How to zip mainframe file": well, if you have the zip software, go ahead and use it -- but how would we know what you have installed? As has been mentioned by Enrico and Robert, only your site support can say.
I thought of suggesting an SMTP e-mail, but that will have its own size limits set up at the local site, and you say the file is "huge".
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 580 Location: London
Robert Sample wrote:
Pete, your method is NOT guaranteed to work. From the Storage Administration Reference manual, with emphasis added:
Quote:
2.3.7 COPYDUMP Command for DFSMSdss
With the COPYDUMP command, you can make from 1 to 255 copies of DFSMSdss-produced dump data. The data to be copied, a sequential data set, can be on a tape or a DASD volume, and copies can be written to a tape or a DASD volume. If the dump data is produced from multiple DASD volumes by using a physical data set dump operation, you can selectively copy the data from one or more of those volumes.
The COPYDUMP command cannot change the block size of the DFSMSdss dump data set. If you are copying a dump data set to a DASD device, the source block size must be small enough to fit on the target device.
Notes:
1. Extra dump tapes can be used for such things as disaster recovery backup or distribution of dumped data (for example, a newly generated system).
2. COPYDUMP is the only supported method for copying DFSMSdss dump data sets. Using a copy produced by any other method or utility as input to a RESTORE operation can produce unpredictable results.
I have tried to use an XMIT file of a DF/DSS dump and the RESTORE failed; whether or not it will work in any particular case I don't know for sure.
Robert - I have used this successfully a number of times. I'm not suggesting using COPYDUMP, just creating a standard logical dataset DUMP file which is then XMIT'd. The dump file can potentially be screwed up depending on the method used to transmit it, but maybe I've been lucky! In any case it's a one-off, so there's no harm in trying: you'll find out immediately if the RESTORE doesn't like the input dump file.
Actually, thinking further on this, it may be enough just to create a compressed copy of the original file by specifying a compressed DATACLAS, then transmit that (by XMIT I mean C:D or FTP or whatever) and ensure the file being created at the target site also gets a compressed DATACLAS. DATACLAS compression is more or less the equivalent of ZIP. The only issue is that the target site must have an equivalent DATACLAS available to create the target file, or it will fail.
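A sketch of that compressed-copy idea using IEBGENER; the data class name COMPRESS is an assumption, and your storage team can tell you which data class (if any) at your site allocates compressed extended-format datasets:

//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=your.file.name
//SYSUT2   DD DSN=your.file.copy,DISP=(NEW,CATLG,DELETE),
//            LIKE=your.file.name,DATACLAS=COMPRESS
//SYSIN    DD DUMMY

LIKE= picks up the record format and length of the source file, so only the data class differs on the copy.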