MichaelKBS
New User
Joined: 10 Jan 2006 Posts: 24 Location: Germany
Hi, I hope one of you can solve my problem:
I have to unload specific rows of one table that correspond to the contents
of a second table. Here is my SELECT statement:
SELECT * FROM firsttable A WHERE EXISTS (
SELECT * FROM secondtable B WHERE
B.INBRVSNR = A.DXBRVSNR
);
The result will be a very large number of records.
Since the table rows contain a VARCHAR column of 32000 bytes,
I run out of volume space when I use DSNTIAUL: it only creates
FB datasets, so each result record takes the full 32016 bytes.
Is there a way to force DSNTIAUL to create VB datasets?
Or can I use the DB2 UNLOAD utility with my SELECT?
(I think the WHEN clause won't work with the EXISTS keyword or table joins)
wanderer
Active User
Joined: 05 Feb 2007 Posts: 199 Location: Sri Lanka
What is the expected size of the record, assuming the VARCHARs are unloaded at full length?
The maximum LRECL possible with DSNTIAUL (and, for that matter, in any dataset) is 32760 bytes, so you have around 2760 bytes left for the rest of your columns.
Since the file will be very big, you could use multiple tape volumes if that is possible.
And perhaps changing your query to the following will speed things up:
SELECT * FROM firsttable A WHERE EXISTS (
SELECT 1 FROM secondtable B WHERE
B.INBRVSNR = A.DXBRVSNR
);
I hope you have an index on B.INBRVSNR in secondtable.
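If it is missing, a minimal sketch of the DDL would be as follows (the index name is hypothetical; only the table and column names come from the query above):

```sql
-- Hypothetical supporting index for the correlated EXISTS lookup;
-- the name IXINBRV1 is an assumption, not from the original post.
CREATE INDEX IXINBRV1 ON secondtable (INBRVSNR);
```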
MichaelKBS
New User
Joined: 10 Jan 2006 Posts: 24 Location: Germany
Hello wanderer,
thank you for your answer!
The maximum record length is 32016 bytes (including the 32000-byte VARCHAR). I expect up to 500,000 rows to match the select conditions.
The tables are indexed, and performance is not a problem.
But the space requirement is. Even multivolume is not a solution, because I can't get enough volumes in my department, and I wanted to avoid using tapes.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
How will the unloaded file be used later?
How much of the 32K VARCHAR field is usually unused?
If this were my task, I would probably use a bit of programming code so I could control the creation of the output file. I'd most likely create some kind of delimited (tab, backslash, tilde) output, which is quite easy to use as input to many different types of processes later.
MichaelKBS
New User
Joined: 10 Jan 2006 Posts: 24 Location: Germany
Hi Dick,
as you suggested, I've written a small COBOL program to unload the table, using a VB output dataset.
That saved over 90% of the required volume space, since the average size
of the 32K VARCHAR was about 150 bytes.
But I still wonder whether there is a DB2 utility that can do this.
Anyway, Wanderer and Mr. Scherrer, thank you for your answers.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
You're welcome.
There are possibly add-on utility products (other than those that are part of the actual DB2 product) that would do what you want.
Also, someone here in the forums may possibly know of a "DB2" way.
wanderer
Active User
Joined: 05 Feb 2007 Posts: 199 Location: Sri Lanka
One crude method would be to use the SUBSTR function on that column in the DSNTIAUL SELECT and take only the first 150 bytes, although this '150' would need to be changed whenever the table data changes in the future.
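As a sketch, assuming the long column is named DXLONGTXT (a hypothetical name; only the tables and join columns come from the earlier posts), the DSNTIAUL SELECT could look like this:

```sql
-- Truncate the 32000-byte VARCHAR to its first 150 bytes so the
-- fixed-length DSNTIAUL output record stays small.
-- DXLONGTXT is an assumed column name.
SELECT A.DXBRVSNR, SUBSTR(A.DXLONGTXT, 1, 150)
FROM firsttable A
WHERE EXISTS (SELECT 1 FROM secondtable B
              WHERE B.INBRVSNR = A.DXBRVSNR);
```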
There is an option in the UNLOAD utility called NOPAD which is supposed to leave variable-length records unpadded, so that the output file is VB. You may give that a try; I haven't tried it so far.
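A sketch of the control statement, with hypothetical database and tablespace names (check the exact syntax against your DB2 version's Utility Guide and Reference):

```
-- DBNAME.TSNAME is a placeholder for the tablespace holding firsttable.
UNLOAD TABLESPACE DBNAME.TSNAME
  FROM TABLE firsttable
  NOPAD
```

Note that, as mentioned in the first post, the utility's WHEN clause cannot express the EXISTS join, so this would unload rows filtered by simple predicates only.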
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
The SUBSTR may cause problems, as the 150 is only the average output record length.
The NOPAD option sounds promising. . .
wanderer
Active User
Joined: 05 Feb 2007 Posts: 199 Location: Sri Lanka
Oops, I missed the 'average' in Michael's post. Thanks, Dick, for pointing that out. I meant the length of the longest populated VARCHAR string. This of course is not a very good way, especially if you have even one very long string.