Mike 1304
New User
Joined: 15 Oct 2011 Posts: 46 Location: Germany
My problem:
I have to send a file (variable size, 5 KB to 2 GB) from z/OS to a Java program running under WebSphere Application Server on an AIX server.
After processing of this file, another file (variable size, 100 KB to 8 GB) has to be sent from the server back to z/OS.
My 1st approach was to use MQSeries.
But the in-house support is very poor, and the administrative overhead and the effort for error handling are very high.
My 2nd try was to use FTP.
But the first file is ASCII and contains multiple X'00' bytes, so I lose the data after the X'00'.
Translating on z/OS and sending it with type=binary results in one big string instead of separate records.
The final showstopper is that the team responsible for FTP has defined a new standard: every FTP step must be a separate job under their responsibility.
This would result in creating 2,500 new jobs instead of changing 500 existing ones
(preprocessing, FTP, BETA42, FTP, postprocessing = 5 steps => 5 jobs).
My 3rd try is to use DB2:
store the data in a BLOB with a COBOL program on z/OS and read it with the Java program;
store the result file in a BLOB with the Java program and read it with a COBOL program on z/OS.
As far as I understand, BLOB data is streamed through the network similar to FTP streaming.
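On the Java side, reading such a BLOB as a stream could look roughly like the sketch below. The table and column names (FILE_XFER, DOC_ID, DOC_BODY) are hypothetical, not from the original post; adjust to your own schema.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: streaming a BLOB from DB2 on z/OS to a local file
// over JDBC. Schema names are hypothetical.
public class BlobDownload {

    static final String SELECT_SQL =
        "SELECT DOC_BODY FROM FILE_XFER WHERE DOC_ID = ?";

    // Copies the BLOB column to 'target' without materializing the
    // whole LOB in memory: getBinaryStream() lets the driver fetch
    // the data in chunks as the stream is consumed.
    public static long download(Connection conn, long docId, Path target)
            throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(SELECT_SQL)) {
            ps.setLong(1, docId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new IllegalArgumentException("no row for id " + docId);
                }
                try (InputStream in = rs.getBinaryStream(1)) {
                    return Files.copy(in, target);
                }
            }
        }
    }
}
```

Whether the driver truly streams rather than buffering the whole LOB depends on driver configuration; with the IBM JDBC driver, progressive streaming of LOBs is typically enabled by default, but that is worth verifying for your driver level.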
My question:
How can I optimize the coding and the transport by using stored procedures?
What is the maximum length of a parameter in a stored procedure?
Can parameters of SPs be variable length (up to 8 GB)?
Or, if e.g. the max is 2 GB, can I define 4 parameters of 2 GB each?
Is the content of the parameters streamed through the network like FTP / BLOB?
I searched for answers in the redbook
"DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond",
but couldn't find any.
Anuj Dhawan
Superior Member
Joined: 22 Apr 2006 Posts: 6250 Location: Mumbai, India
Quote:
But the first file is ASCII and contains multiple X'00' bytes, so I lose the data after the X'00'.
Why do you say so, and what does this actually mean?
FTP is designed for doing file transfers and worked quite well for us in my previous shop with files up to 6 or more gigabytes in size. I've not done this myself, but I know for sure that one shop I know of averaged over 13 GB per day of FTP traffic, with peak traffic over 30 GB. So yours comes well within FTP's range.
And NDM is not among the choices?
Mike 1304
New User
Joined: 15 Oct 2011 Posts: 46 Location: Germany
Thanks a lot for your answer,
but I wrote the history to avoid a discussion about FTP and MQ!
Nevertheless:
I transferred a file in which a record contains five X'00' bytes.
The FTP parameters were ASCII and locsite NOTRAILINGBLANKS.
This combination caused the record to be cut off before the X'00', and the following 2 records (the last in the file) were also cut off.
The z/OS FTP team told me that this is not possible;
the server FTP team told me that multiple X'00' bytes are not allowed with type ASCII.
I did not explore further, because the change to the in-house standard made the rollout effort explode.
NDM: I can't buy a license for another file-transfer tool.
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
X'00' typically causes problems -- it is nothing special on a mainframe, but many PC / Unix systems treat it as a string terminator and stop handling input after it appears. This is NOT an FTP issue in any way, but an issue with the way the data is perceived after the FTP -- FTP transferred the full file, but the program used to look at the data may not be going past the X'00'.
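This point can be demonstrated in a few lines: the bytes after X'00' are still present in the buffer, but any tool that scans the record as a C-style (null-terminated) string will appear to "lose" everything after the first X'00'. A minimal illustration:

```java
// Demonstrates why data "disappears" after X'00': the bytes are still
// there, but a C-style string scan stops at the first zero byte.
public class NullByteDemo {

    // Length as a null-terminated-string scanner would see it.
    public static int cStyleLength(byte[] record) {
        int i = 0;
        while (i < record.length && record[i] != 0) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        byte[] record = {'A', 'B', 0x00, 0x00, 'C', 'D'};
        System.out.println("real length:    " + record.length);        // 6
        System.out.println("C-style length: " + cStyleLength(record)); // 2
    }
}
```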
Mike 1304
New User
Joined: 15 Oct 2011 Posts: 46 Location: Germany
Thanks for the clarification!
Does anybody have answers to my questions?
Quote:
My question:
How can I optimize the coding and the transport by using stored procedures?
What is the maximum length of a parameter in a stored procedure?
Can parameters of SPs be variable length (up to 8 GB)?
Or, if e.g. the max is 2 GB, can I define 4 parameters of 2 GB each?
Is the content of the parameters streamed through the network like FTP / BLOB?
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
I would agree with the assumption
that writing from the server to the z/OS DB2 instance
(and vice versa)
is the better approach.
Why the interest in the parm length?
INSERT everything into the z/OS DB2 instance in the first step,
then respond to the webserver with only the key to the BLOB row(s).
The server then activates a module with the
appropriate ORDER clause (the parm),
processes the BLOB,
generates results, which are INSERTed into a BLOB row on the z/OS DB2 instance,
and returns the 'new' ORDER BY clause.
I personally would do away with any parms,
since the BLOB in question would/should always be the latest
or the only one; and if more than one per day is allowed, program for it with your key structure for the BLOB rows.
Somewhere (in some job) somebody is going to have to
pay the freight for the data transfer.
Now, when does it occur (triggered by what action)?
FTP invocation,
or
availability of a row after a DB2 INSERT/UPDATE/SELECT of a BLOB on z/OS,
by either a z/OS task or a server request.
I personally would try to buy out of the FTP option:
it means dealing with another group, apparently overloaded with bureaucracy.
Though the responsibility then lies completely with you,
the job flow is simpler:
z/OS JOB: write to z/OS DB2, signal server - z/OS job complete
server JOB: read from z/OS DB2, write to z/OS DB2, signal z/OS - server job complete
z/OS JOB: read from z/OS DB2, generate something.
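The server-side middle step of this flow could be sketched as below: the server streams its result into a BLOB row on the z/OS DB2 instance and hands back only the generated key, which serves as the "signal" to the next z/OS job. The table and column names (RESULT_XFER, DOC_BODY) are hypothetical.

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the "server JOB" step: write the result BLOB to z/OS DB2,
// return only the generated row key. Schema names are hypothetical.
public class ResultUpload {

    static final String INSERT_SQL =
        "INSERT INTO RESULT_XFER (DOC_BODY) VALUES (?)";

    public static long upload(Connection conn, InputStream data, long length)
            throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                INSERT_SQL, Statement.RETURN_GENERATED_KEYS)) {
            // Streaming variant: the driver reads 'length' bytes from
            // the stream instead of needing the whole LOB in memory.
            ps.setBinaryStream(1, data, length);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1); // identity key = the "signal"
            }
        }
    }
}
```

This assumes the table has an identity (or similar generated) key column; how the key is then signalled to the waiting z/OS job is a separate design choice (polling, trigger, MQ, etc.).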
Mike 1304
New User
Joined: 15 Oct 2011 Posts: 46 Location: Germany
Hi Dick,
you describe exactly what I'm going to do.
My additional questions:
1. Is it possible to handle multiple-BLOB-row processing hidden in a stored procedure?
and therefore:
2. Is it possible to have parameters > 2 GB in an SP?
3. What about the performance of SPs compared to "normal" SQL in a distributed environment?
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
You are using the BLOB as a message container.
I would attempt to
populate and 'de-populate' the BLOB (apparently on both sides)
with DB2 utilities.
This is the type of thing that could become a monster real quick.
Any data manipulation by programming will require resources.
That said, I think you should limit your programming to preparing the data and using it,
and leave the INSERT/UPDATE and SELECT to utilities.
Mike 1304
New User
Joined: 15 Oct 2011 Posts: 46 Location: Germany
I understand that your advice is not to use stored procedures?
(Of course stored procedures use INSERT/UPDATE and SELECT.)
Is "direct" INSERT/UPDATE and SELECT via JDBC better than stored procedures?
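The two options being weighed here look very similar from the JDBC side; only the packaging differs. A sketch of both, where the table name and the stored procedure name/signature (SYSPROC.STORE_DOC) are hypothetical:

```java
import java.io.InputStream;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;

// The two alternatives discussed above, side by side.
// Table name and SP name/signature are hypothetical.
public class InsertVariants {

    static final String DIRECT_SQL =
        "INSERT INTO FILE_XFER (DOC_BODY) VALUES (?)";
    static final String CALL_SQL =
        "CALL SYSPROC.STORE_DOC(?)";

    // Option 1: "direct" SQL via JDBC -- the LOB is streamed by the
    // driver as a parameter.
    public static void directInsert(Connection conn, InputStream data, long len)
            throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(DIRECT_SQL)) {
            ps.setBinaryStream(1, data, len);
            ps.executeUpdate();
        }
    }

    // Option 2: the same insert wrapped in a stored procedure. The LOB
    // still crosses the network once as a CALL parameter, so for a
    // single plain insert the SP mostly adds packaging, not transport
    // savings; SPs pay off when several statements can be bundled into
    // one network round trip.
    public static void spInsert(Connection conn, InputStream data, long len)
            throws Exception {
        try (CallableStatement cs = conn.prepareCall(CALL_SQL)) {
            cs.setBinaryStream(1, data, len);
            cs.execute();
        }
    }
}
```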