'010000017',
'010000034',
'010000035',
'010000037',
'010000049',
'010000050',
'010000063',
'010000067',
'010000085',
'010000087',
'010000100',
and so on, up to 1,750,000 records.
Now I want to create SQL like this:
Select * from policytab where
policynumber IN (
'010000017',
'010000034',
'010000035',
'010000037',
'010000049',
so on till 450 records of above file....
.........
.......
);
Select * from policytab where
policynumber IN (
another 450 records
);
and so on.....
The total number of records is not a multiple of 450, so after the last record only the closing ); needs to be attached.
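To make the requirement concrete, here is a hedged sketch of the grouping logic in Python (not the DFSORT or COBOL solution the replies discuss; the function name and the assumption that each input line holds one quoted policy number followed by a comma are mine):

```python
# Sketch: split lines of quoted policy numbers into IN-list queries
# of at most 450 values each; the final, smaller group gets the same
# closing ); as the others.

def build_queries(lines, chunk=450):
    """Yield one SELECT per group of up to `chunk` policy-number lines."""
    values = [ln.strip().rstrip(',') for ln in lines if ln.strip()]
    for i in range(0, len(values), chunk):
        group = values[i:i + chunk]
        yield ("SELECT * FROM policytab WHERE policynumber IN (\n"
               + ",\n".join(group)
               + "\n);")
```

For example, 1,000 input values would produce three queries (450, 450, and 100 values); nothing special is needed for the last partial group.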
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
For what it is worth, I'd suggest that you write a simple bit of COBOL that would read your "policy number" QSAM file and, for each policy#, issue the "SELECT * FROM policytab". When the row is retrieved, create whatever output you need. If the policy number does not exist, skip it, report it, or abend as best suits your situation.
You will have complete control in the COBOL code. If you get the sort to generate what you want, you still have to run all of the SQL queries and deal with an unpredictable (per the posted requirement) number of queries and outputs.
If you are concerned that a single run would be too big, you can include logic in the code to start and stop at runtime-provided first/last policy numbers or by actual count (i.e. start at zero; stop at 450). If you set up the code to process "groups", you do not want to hard-code them.
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
It's possible. Without DFSORT, probably not as neatly.....
With sort you could treat the SELECT as a header and the closing paren as a trailer. The only problem with that is that the max page size (LINES) is only 255.
Another way might be to use an INREC sequence number incrementing by 2 and a control break on the 4th digit, 500 to a group.
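One reading of that idea (a hedged sketch only, since the post leaves the details open): start the sequence number at 0 and increment by 2, and everything above the low three digits then changes only once per 500 records, giving a natural control-break value:

```python
# Assumed interpretation: a sequence number assigned START=0, INCR=2;
# dividing away the low three digits yields a value that stays constant
# for 500 consecutive records, so it can serve as a break/group key.

def break_value(record_no):
    seqnum = (record_no - 1) * 2     # 0, 2, 4, ... as the sort would assign
    return seqnum // 1000            # changes every 500 records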
Since you have ICETOOL, I'd wait to see what magic Frank will perform.
Dick, William
Thanks for your comments.
I started with a single query, but realized that the system supports only around 450 values inside the parentheses of an IN list.
The trouble is that there is a one-to-many situation here, which would demand a cursor if I go for a COBOL module. Considering the bulk of the data, I think that would make it a bit inefficient.
In the above situation as well, the multiple data sets created by the multiple queries will be a problem.
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
With 1.75 million records, the number of entries per "IN" won't much matter. If there is a one-to-many, the number of selected rows increases.
Using a cursor and selecting the "many" will not be much of a performance issue if the rows returned are determined by the policy# key.
If you use code rather than sort, you can start/stop wherever you want. As Bill suggests, you could "stack" the outputs in a gdg for use later.
For ease of implementation, and ease of subsequent enhancement/modification if this requirement grows or another opportunity presents itself, I'd recommend you use "real" code. Also, I'd imagine that if you wrote the COBOL, you would be running in a couple of hours. . . . A SELECT/ASSIGN, an FD, get the start/stop control if used, a file OPEN, a file READ, a cursor DECLARE, a FETCH, an output write for each row FETCHed, end process.
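The read-file/fetch-per-key loop outlined above can be sketched outside COBOL as well. Here is a hedged illustration in Python, with an in-memory SQLite table standing in for the DB2 policytab (table contents and names are assumptions; in the real job this would be a DECLARE CURSOR / FETCH loop):

```python
import sqlite3

# In-memory stand-in for the DB2 table, populated with sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policytab (policynumber TEXT, detail TEXT)")
conn.executemany("INSERT INTO policytab VALUES (?, ?)",
                 [("010000017", "row-a"), ("010000017", "row-b"),
                  ("010000034", "row-c")])

def process(policy_numbers):
    """For each policy#, fetch all matching rows (the one-to-many case);
    collect keys that do not exist instead of abending."""
    hits, misses = [], []
    for pol in policy_numbers:
        rows = conn.execute(
            "SELECT * FROM policytab WHERE policynumber = ?", (pol,)
        ).fetchall()
        if rows:
            hits.extend(rows)      # write to the output file in real code
        else:
            misses.append(pol)     # skip / report / abend as appropriate
    return hits, misses
```

Because each fetch is driven by the policy# key, the one-to-many case simply yields several rows per key, and missing keys fall out naturally.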
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Quote:
Without LINES, how would I cause "page" headers and footers to generate?
I actually have a solution in mind that uses SECTION headers rather than page headers, so REMOVECC could be used and LINES=n wouldn't matter. I'll wait until hernikiten tells me the RECFM and LRECL before I see if I can work it out.
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
hernikiten,
Here's a DFSORT job that will do what you asked for. Since you said the input file has RECFM=FBA, I assumed the first character was the ANSI carriage control character and the first apostrophe starts in position 2. I also assumed that you wanted the output file to be FB/40 without the ANSI character. I used a division by 450 trick to give each group of 450 records the same value so we could use SECTIONS on that value. I removed the comma from the last record of each group using a modulo 450 trick, and I removed the comma from the last record of the file using a symbol for the sequence number of the last record. Quite tricky, but it does work.
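The arithmetic behind the two tricks can be illustrated outside DFSORT (a hedged Python sketch of the grouping math only, not the actual control statements):

```python
# Sketch of the grouping arithmetic: a 1-based sequence number, less 1
# and divided by 450, gives every group of 450 records the same value
# (usable for SECTIONS), while seqnum modulo 450 == 0 marks the last
# record of each group, whose trailing comma must be dropped.

def group_and_flag(seqnum, group_size=450):
    group = (seqnum - 1) // group_size       # 0 for records 1..450, 1 for 451..900, ...
    last_of_group = (seqnum % group_size == 0)
    return group, last_of_group
```

The last record of the whole file is the remaining special case, which the job above handles with a symbol holding the final sequence number.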