I have data like this, ColA (int) and ColB (varchar(1024)):
Code:
ColA ColB
0 05UVWXY06ZABCD07EFGHI08JKLMN09OPQRS10ZZZZZ99NOPEE99zzzzz
1 01ABCDE02FGHIJ03KLMNO04PQRST99ZZZZZ99NOPEE99zzzzz99zzzzz
2 11TUVWX12YZABC13DEFGH14IJKLM99ZZZZZ99NOPEE99zzzzz99zzzzz
In ColB, the data comes in multiple sets: in each set the first 2 characters are numeric and the next 5 characters are alphabetic. Each of these sets in a row needs to be inserted into a separate table as its own row, and once a set starts with 99 (as its first 2 characters), that set and everything after it in the row can be ignored/discarded.
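To make the requirement concrete, taking row 0 above as an example, the sets before the first 99 would land in the target table roughly like this (the column names here are only placeholders):
Code:
ColA  Set  Value
0     05   UVWXY
0     06   ZABCD
0     07   EFGHI
0     08   JKLMN
0     09   OPQRS
0     10   ZZZZZ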
Currently I have the following 2 options.
I can prepare a query like the one below using unions. Since each row has around 88 sets of data, 88 unions might be required (not the best method: 88 tablespace scans, no way).
Code:
SELECT ID,SUBSTR(DUMMY_TEXT,1,2), SUBSTR(DUMMY_TEXT,3,5)
FROM BOBT1.TRY1
UNION ALL
SELECT ID,SUBSTR(DUMMY_TEXT,8,2), SUBSTR(DUMMY_TEXT,10,5)
FROM BOBT1.TRY1
A one-time conversion COBOL program (the best method I can think of right now, because it will involve only one tablespace scan).
I just want to know if there is any other way of doing it.
I'm not sure whether the following solution avoids doing 88 tablescans; it depends on the number of rows:
* create a table tpos with 3 integer columns (nr, strt1, strt2)
you can generate its contents in QMF with the following query (a CREATE/INSERT sketch follows after the second query below):
Code:
with cte(cnt, strt1, strt2) as (
  select 1, 1, 3 from sysibm.sysdummy1
  union all
  select cnt + 1, strt1 + 7, strt2 + 7 from cte
  where cnt < 88)
select * from cte
* then execute this query :
Code:
insert into result1
select colA,
       integer(substr(colb,strt1,2)) as colBB,
       substr(colb,strt2,5) as colCC
from tpos, t1
where substr(colb,strt1,2) <> '99'
  and strt1 < length(colb)
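For anyone who wants to set this up outside QMF, here is a rough sketch of the DDL plus an INSERT that fills tpos straight from the recursive CTE. The result1 layout is only my assumption about the target table, and if your DB2 level does not accept a common table expression as the source of an INSERT, just run the SELECT above in QMF and save the result into tpos instead.
Code:
-- positions table: set number plus start positions of the 2-char and 5-char parts
create table tpos
      (nr    integer,
       strt1 integer,
       strt2 integer);

-- assumed layout of the target table: key, numeric prefix, 5-char value
create table result1
      (colA  integer,
       colBB integer,
       colCC char(5));

-- fill tpos with the 88 (strt1, strt2) pairs: set n starts at position 1 + 7*(n-1)
insert into tpos (nr, strt1, strt2)
with cte(cnt, strt1, strt2) as (
  select 1, 1, 3 from sysibm.sysdummy1
  union all
  select cnt + 1, strt1 + 7, strt2 + 7 from cte
  where cnt < 88)
select cnt, strt1, strt2 from cte;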
Wow, that's really superb, GuyC!
The real table has 20 million rows, the size of this VARCHAR column is 21000, and it contains a mixture of DATE, CHAR and COBOL COMP-3 values stored directly in the column. So currently the query involves joins, conversions from COMP-3 to numeric and other things; a huge number of functions will be involved.
I will try this approach out with the test data.