Joined: 28 Sep 2005 Posts: 210 Location: St Katherine's Dock London
Hi - we keep getting requests from the business to provide them random extracts from tables. We have around 1,230 tables / 20k columns in all. Based on the business requirements, we write a program to extract the data from the tables using SELECTs.
We could use a DB2 utility to unload the data, but the locks that the UNLOAD utility puts on the table space make it unavailable for the duration of the unload. Hence, we write a new program each time.
I am thinking of making this extraction process generic, so that whenever the business comes up with such a request, we break it down into a simple SELECT query, pass that query into a generic program, and fetch all the data.
I have written a small prototype like below for clarity.
//X01 EXEC PGM=IKJEFT01
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSABOUT DD SYSOUT=*
//SYSDBOUT DD SYSOUT=*
//SYSTSIN DD *
RUN PROGRAM(GENEXT) PLAN(PLAND)
//INSQL DD *
SELECT CUST_NUM, NAME
  FROM CUSTOMER
 WHERE GENDER = 'M'
I am declaring a cursor in the program using this SQL and then doing a FETCH. Now here is where I am stuck...
I can DECLARE the cursor using the SELECT statement that I get from the instream data. But how can I make sure that I fetch correctly into the correct host variables (which could be any random combination of INT, SMALLINT, DATE, TIMESTAMP, CHAR, VARCHAR)?
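In DB2 terms, this is the problem that varying-list dynamic SQL is meant to solve: you PREPARE the arbitrary statement, DESCRIBE it into an SQLDA so the database tells you each result column's type and length, allocate matching host storage, and FETCH ... USING DESCRIPTOR. As a rough illustration of the same idea in Python (using sqlite3 as a stand-in for DB2, with an invented CUSTOMER table), the driver's `cursor.description` plays the role of the SQLDA - the column list is discovered at run time, not coded into the program:

```python
import sqlite3

# Stand-in database; in DB2 the equivalent metadata would come from
# PREPARE ... / DESCRIBE into an SQLDA, not from cursor.description.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTOMER (CUST_NUM INTEGER, NAME TEXT, GENDER TEXT)")
conn.execute("INSERT INTO CUSTOMER VALUES (1, 'SMITH', 'M')")
conn.execute("INSERT INTO CUSTOMER VALUES (2, 'JONES', 'F')")

def generic_extract(conn, sql):
    """Run an arbitrary SELECT and return (column names, rows) without
    knowing the column list or types at compile time."""
    cur = conn.execute(sql)
    colnames = [d[0] for d in cur.description]  # discovered at run time
    return colnames, cur.fetchall()

cols, rows = generic_extract(
    conn, "SELECT CUST_NUM, NAME FROM CUSTOMER WHERE GENDER = 'M'")
print(cols)   # ['CUST_NUM', 'NAME']
print(rows)   # [(1, 'SMITH')]
```

The point is that the program never names the host variables; it sizes its output from the metadata the database hands back for each query.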
Could you please share your thoughts on this? I might be trying to do something that is not technically possible; if so, could you share your ideas on any other way to achieve this?
Do let me know if any more details are required - I might not have explained this clearly.
Your input doesn't need to be an "actual" SQL statement. You could "annotate" it further to indicate field descriptions, then make use of that knowledge in the program and "strip off" the extra information to leave plain SQL.
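For example (the annotation format below is entirely invented), each column in the input could carry a type tag that the program records and then strips off before preparing the statement. A rough sketch in Python:

```python
import re

def strip_annotations(annotated_sql):
    """Split an annotated SELECT such as
         SELECT CUST_NUM{INT}, NAME{VARCHAR(30)} FROM ...
    into plain SQL plus the declared column types.
    The {TYPE} annotation syntax is made up for illustration."""
    types = re.findall(r"\{([^}]*)\}", annotated_sql)       # collect the tags
    plain_sql = re.sub(r"\{[^}]*\}", "", annotated_sql)     # strip them off
    return plain_sql, types

sql, types = strip_annotations(
    "SELECT CUST_NUM{INT}, NAME{VARCHAR(30)} FROM CUSTOMER WHERE GENDER = 'M'")
print(sql)    # SELECT CUST_NUM, NAME FROM CUSTOMER WHERE GENDER = 'M'
print(types)  # ['INT', 'VARCHAR(30)']
```

The program then knows, per column, what kind of host variable to map the fetched value into, and DB2 only ever sees the plain SQL.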
Presumably you could arrange to look up the characteristics of the requested data, then you'd know.
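On DB2 for z/OS, that lookup can be done against the catalog table SYSIBM.SYSCOLUMNS, whose COLTYPE, LENGTH and SCALE columns describe each column of a table. A small sketch of building that catalog query from the requested SELECT (assuming a simple single-table FROM clause; the helper name is made up):

```python
import re

def catalog_query(select_stmt):
    """Given the requested SELECT, build the DB2 catalog query that
    returns each column's type and length, so the program can map
    them to host storage.  Assumes one plain table in the FROM clause."""
    m = re.search(r"\bFROM\s+(\w+)", select_stmt, re.IGNORECASE)
    if not m:
        raise ValueError("no FROM clause found")
    table = m.group(1)
    # SYSIBM.SYSCOLUMNS is the DB2 for z/OS catalog table;
    # NAME/COLTYPE/LENGTH/SCALE/COLNO are its actual columns.
    return ("SELECT NAME, COLTYPE, LENGTH, SCALE "
            "FROM SYSIBM.SYSCOLUMNS "
            f"WHERE TBNAME = '{table}' ORDER BY COLNO")

print(catalog_query("SELECT CUST_NUM, NAME FROM CUSTOMER WHERE GENDER = 'M'"))
```

Run once per request, that gives the program everything it needs to lay out its fetch buffers before opening the cursor.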
Joined: 20 Oct 2006 Posts: 6970 Location: porcelain throne
Hi Gnanas - thanks for the reply. Yes, it does look the same to me as well. If only these utilities could be run without locking the tablespace...
1. you are not going to be able to 'beat' ibm (or 3rd party) unload utilities with a program...
2. the locks taken by these utilities are based on bind parms, which your dba's can modify.
3. you are spinning your wheels trying to write a program. at best, as bill suggested, dynamic sql may/can/might/possibly reduce your overall programming effort.
4. essentially, you are trying to improve the performance of your car by using different sparkplugs.
your business units, especially marketing, drive your requirements, and they have to be satisfied.
it does not matter if you have 1 table or a gazillion, you will always have this problem.
back to the sparkplug analogy:
you have to involve your dba's,
you have to get a handle on what kind of requests are made,
what tables are impacted more than others,
what can be satisfied with an image of 'yesterday's data'.
there are varied ways to handle these situations:
1. stuff that is asked for daily, weekly, monthly, periodically,
could/should be satisfied with a scheduled batch job.
2. yesterday's data that can satisfy many requests can be downloaded to pc/servers and made available there in whatever format fits your current pc/server tools. that way, the user can fulfill his requirements through excel and/or other tools.
3. there are various db2 tools to unload, and they can be configured to NOT lock everything forever.
4. utilize qmf, if you bought it. it is designed to access the db's with as little interference as possible.
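on point 3: the ibm UNLOAD utility, for instance, accepts a SHRLEVEL option, and SHRLEVEL CHANGE ISOLATION UR reads uncommitted data without draining the table space. a sketch of the control statement (database and table names here are made up - check your site's utility docs and standards):

```
UNLOAD TABLESPACE MYDB.MYTS
  FROM TABLE CUSTOMER
  SHRLEVEL CHANGE ISOLATION UR
```

with that, readers and writers keep going while the unload runs, at the cost of possibly seeing uncommitted rows - which is usually fine for the kind of ad-hoc business extracts described above.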