IBM Mainframe Forum Index
 
Sending more than one file from a UNIX system to a mainframe system using C:D


IBM Mainframe Forums -> All Other Mainframe Topics
ravidhiman

New User


Joined: 09 Oct 2006
Posts: 23
Location: London, UK

PostPosted: Tue Jan 10, 2012 9:14 pm

Hello All,

I have a situation in my project, and I am not quite sure how it can be handled. We have a UNIX system which sends files to a mainframe system using Connect:Direct.

The Connect:Direct script runs every 30 seconds, takes the files from the UNIX box, and sends them to the mainframe. When the mainframe system sees a file, it triggers a job that copies the file into another file with a unique name and deletes the original file received from the UNIX system.

Suppose the UNIX system has 5 files in the box: the Connect:Direct script will run at the scheduled time, pick the files one by one, and send them to the mainframe system. The file going to the mainframe must always have the same name, otherwise the mainframe job will not run.

Now it is possible that Connect:Direct sends the second file before the mainframe finishes copying the first. In that case the second file can be lost, because the UNIX system cannot create the second file on the mainframe while the first file still exists with the same name.

If the mainframe finishes copying before Connect:Direct sends the second file, there is no problem, because the mainframe job deletes the original file after copying, so the file cannot be duplicated.

Has anybody faced a similar situation before? If yes, please let me know the solution. We don't want to lose any file. We are open to changes in any system: mainframe, Connect:Direct, or UNIX.


Many Thanks
Ravi Dhiman
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Jan 11, 2012 5:58 am

Hello,

One way to do this (if I understand what is happening) is to stop using the single dataset name. We've used a GDG to "stack" several inbound transmissions; then, when the mainframe process is run, all generations are processed, copied for backup to a different GDG, and deleted.

I suggest that running this less frequently than every 30 seconds would not cause a problem.
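For reference, a GDG base like this is defined once with IDCAMS before any generations can be created. A minimal sketch only; the base name HLQ.XXXX.DATA, the limit, and the job card are placeholders to adapt to your site's standards:

```
//DEFGDG   JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(HLQ.XXXX.DATA) -
              LIMIT(255)          -
              NOEMPTY             -
              SCRATCH)
/*
```

With NOEMPTY and SCRATCH, only the oldest generation is rolled off (and deleted) when the limit is exceeded, which suits a "stack and process" pattern.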
ravidhiman

New User


Joined: 09 Oct 2006
Posts: 23
Location: London, UK

PostPosted: Wed Jan 11, 2012 7:58 pm

Hello d.sch.

Thanks for suggesting the solution. We are working on it, and I will post the test results soon.

Do you think running the Connect:Direct process every 30 seconds would cause any problem?

Thanks
Ravi.
ravidhiman

New User


Joined: 09 Oct 2006
Posts: 23
Location: London, UK

PostPosted: Wed Jan 11, 2012 11:17 pm

If I suffix (+1) at the end of the file name on the UNIX side, will it create a new GDG version on the mainframe system?

For example : HLQ.XXXX.DATA(+1)

Many Thanks
Ravi
superk

Global Moderator


Joined: 26 Apr 2004
Posts: 4652
Location: Raleigh, NC, USA

PostPosted: Wed Jan 11, 2012 11:59 pm

If you have an existing GDG base HLQ.XXXX.DATA and you use a relative generation number of (+1), then you should get a new GENERATION cataloged each time.

To me, this looks more like an issue for your scheduling system. I'm thinking it would make more sense to have a job triggered when a new file is created on the UNIX system. This event would cause a mainframe batch job to be started, which would invoke DMBATCH and run the process to retrieve the single file. Then that file could be copied to the target dataset. This could happen as many times as necessary, since only one batch job of the same name can execute at a time.
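To illustrate the relative-generation idea, a Connect:Direct COPY step could name the target dataset with (+1) so each transfer catalogs a fresh generation. This is a rough sketch only; the process name, node name, UNIX path, and disposition are placeholders, and the exact syntax should be verified against your Connect:Direct process-language documentation:

```
GDGSEND  PROCESS SNODE=MVS.NODE
STEP01   COPY FROM (FILE=/u/outbound/data.txt PNODE)
              TO   (DSN=HLQ.XXXX.DATA(+1)         -
                    DISP=(NEW,CATLG,DELETE)       -
                    SNODE)
```

Because every send creates its own generation, the second file no longer collides with the first, which removes the race described in the original post.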
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Thu Jan 12, 2012 12:16 am

Hello,

Suggest you work with your storage management people for the mainframe and whoever supports connect direct.

Quote:
Do you think running the Connect:Direct process every 30 seconds would cause any problem?
Only if this generates more clutter than it is worth. Long ago (35+ years) we used this GDG "stacking" approach to receive multiple sets of data from about 330 manufacturing and distribution sites. Once a day the mainframe process was run to process these files and send return data to all of the remotes. As there was no FTP or Connect:Direct then, we used custom hardware (store-and-forward message switches) that provided the same style of service Connect:Direct does for your system.

If your business requires something more frequent than once a day, that is not a problem, but I do wonder about every 30 seconds. This could make troubleshooting more difficult whenever there is a problem with the transmission or the subsequent mainframe process.
superk

Global Moderator


Joined: 26 Apr 2004
Posts: 4652
Location: Raleigh, NC, USA

PostPosted: Thu Jan 12, 2012 3:52 am

I also wonder why the process couldn't have been designed so that the mainframe always receives a unique dataset name, with or without the use of GDGs.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Thu Jan 12, 2012 10:59 am

Hi Kevin,


I wouldn't guess about the design of the topic process - maybe the TS will tell us.

In the case I mentioned, we did not want several hundred dataset names to manage.

Everything that came into the GDG went into a single mainframe process that split the records as needed for the nightly processing. Likewise, at the end of this nightly process, a single output file was split into all of the individual outbound files to send to the mfg/dist sites.

There was no problem if a site submitted none, one, or multiple files.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8797
Location: Welsh Wales

PostPosted: Thu Jan 12, 2012 3:20 pm

At one site I worked at, the mainframe ran a scheduled FTP job every hour to search the servers, download the available files, delete the server copies, and process the downloaded data.

Maybe a reverse approach might help more?
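For this reverse (pull) approach, the z/OS batch FTP client can fetch the server file into a new generation and then delete the server copy. A hedged sketch: the host name, credentials, and paths below are placeholders, and the job assumes the GDG base already exists:

```
//FTPPULL  EXEC PGM=FTP,PARM='unixhost (EXIT'
//SYSPRINT DD SYSOUT=*
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
ftpuser ftppass
get /u/outbound/file1 'HLQ.XXXX.DATA(+1)'
delete /u/outbound/file1
quit
/*
```

Pulling from the mainframe side serializes the transfers inside one job, so two files can never collide on the same target name.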
ravidhiman

New User


Joined: 09 Oct 2006
Posts: 23
Location: London, UK

PostPosted: Fri Jan 13, 2012 8:22 pm

Hello All,

Thanks everybody.

We are going to implement the GDG approach as mentioned above. The C:D process will suffix (+1) at the end of the dataset name before sending it over to the mainframe.

I read in the documentation that a GDG has a limit of 255 versions; a GDG cannot accept more than 255 versions at one time.

Suppose the source system (UNIX in our case) sends more than 255 files in one unit of work to the mainframe system - how can this situation be handled?

Thanks
Ravi Dhiman
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8697
Location: Dubuque, Iowa, USA

PostPosted: Fri Jan 13, 2012 8:45 pm

Terminology is critical in IT, where similar terms may mean very different things. There is NEVER more than one version available at a time of a GDG generation. You cannot, under any circumstances, have access to 255 versions. A GDG can have 255 GENERATIONS, but each generation will have only one VERSION (V00 normally, but could be V01 or other).
superk

Global Moderator


Joined: 26 Apr 2004
Posts: 4652
Location: Raleigh, NC, USA

PostPosted: Fri Jan 13, 2012 9:10 pm

The GDG LIMIT can be defined as a maximum value of 255. Once the limit is reached, the next generation is "rolled-in" to the generation data group, and the oldest generation is "rolled-out". Depending on how you defined the EMPTY/NOEMPTY and SCRATCH/NOSCRATCH values, those datasets that are "rolled-out" can remain cataloged and, as such, are still available to your applications.
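The roll-off behaviour is chosen when the GDG base is defined. A sketch of the IDCAMS attributes involved, using the same hypothetical base name as above; with NOSCRATCH, a rolled-out generation's dataset is kept rather than deleted, which matters if more than LIMIT files can arrive in one unit of work:

```
  DEFINE GDG (NAME(HLQ.XXXX.DATA) -
              LIMIT(255)          -
              NOEMPTY             -
              NOSCRATCH)
```

NOEMPTY rolls out only the oldest generation when the limit is exceeded; EMPTY would roll out all generations at once. Check the retained rolled-out datasets with your storage management team, since they still consume space and catalog handling varies with SMS.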