subratarec
Active User
Joined: 22 Dec 2007 Posts: 126 Location: Bangalore
Hi,
Need your advice on the scenarios below.
One of the DW teams is trying to work with DB2 tables on z/OS. These tables have referential integrity among them: one main parent table and six child tables. They are using an IDE (DBeaver) to run their queries.
1. They are trying to delete a huge number of rows (from all 7 tables) using an ad hoc DELETE query that joins these tables. The query appears to delete around 29 million rows, and that is only in Dev. In PROD it may be far more!
2. After deleting around 2 million rows, the tables get locked because of excessive locks and no further deletes are allowed. They told me the DBA mentioned the current LOCKSIZE (PAGE) is 32KB.
When I suggested they use a multi-row fetch cursor with DELETE WHERE CURRENT OF in a stored procedure, issuing a DB2 COMMIT after a certain number of deletes, they said they are not allowed to create any stored procedures. So that option is gone.
They want to use an anonymous block of SQL code, which I believe in DB2 for z/OS is called compound SQL (inline SQL PL), to do the delete. I don't have much knowledge of compound SQL (I actually never got a chance to use it).
1. So the first question is: can we write compound SQL (a BEGIN ATOMIC ... END block) for DB2 on z/OS?
2. I came across this link - https://bytes.com/topic/db2/answers/615547-writing-compound-complex-sql
But that link clearly states that we cannot use COMMIT/ROLLBACK in compound SQL. Now I don't know what to suggest, because without a COMMIT their delete query will lock the table.
If compound SQL can be used, could someone point me to detailed documentation on its rules in DB2 for z/OS? In this scenario, what can they do to fix their delete? Would rewriting the DELETE query to use an index (if at all possible) ease the delete operation?
So I need help/suggestions on this problem. Please share your advice.
Thanks
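Since stored procedures are ruled out, one generic pattern worth considering is to drive the delete from the client side in small batches, committing after each batch so locks are released before they pile up. The sketch below is only an illustration using Python's sqlite3 as a stand-in for a DB2 connection; the table name `child`, the column `parent_id`, and the batch size are invented. On DB2 the per-batch statement would be something along the lines of `DELETE FROM (SELECT 1 FROM T WHERE <cond> FETCH FIRST :n ROWS ONLY)` - check which form your DB2 for z/OS version actually supports.

```python
import sqlite3

def batched_delete(conn, batch_size=1000):
    """Delete matching rows in batches, committing after each batch
    so the table is never locked for the whole 29-million-row run."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM child WHERE rowid IN "
            "(SELECT rowid FROM child WHERE parent_id = ? LIMIT ?)",
            (42, batch_size),
        )
        conn.commit()          # release the locks held by this batch
        if cur.rowcount == 0:  # nothing left to delete
            break
        total += cur.rowcount
    return total

# Hypothetical data: 2500 rows to delete, 10 rows to keep.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER)")
conn.executemany("INSERT INTO child (parent_id) VALUES (?)",
                 [(42,)] * 2500 + [(7,)] * 10)
conn.commit()

deleted = batched_delete(conn, batch_size=1000)
print(deleted)  # 2500
```

The key point is that each COMMIT happens between batches, outside any single statement, which is exactly what an ATOMIC compound block cannot give you.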
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3051 Location: NYC,USA
I would consider this approach when the number of rows to delete is that large:
1. Unload the current DB2 table into DS-A.
2. Keep the rows to be deleted in DS-B.
3. Use DFSORT to pick what's in DS-A and not in DS-B into DS-C.
4. LOAD REPLACE the DB2 table using DS-C.
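The four steps above boil down to a set difference: keep every DS-A record whose key does not appear in DS-B. On the mainframe that filtering step would typically be a DFSORT/ICETOOL JOINKEYS job; as a language-neutral illustration (the record layout and key position here are invented), it looks like:

```python
# DS-A: full unload of the table; DS-B: keys of the rows to delete.
# DS-C keeps every DS-A record whose key is NOT in DS-B -- the
# "in A and not in B" step that DFSORT would perform.
ds_a = ["001|alice", "002|bob", "003|carol", "004|dave"]
delete_keys = {"002", "004"}  # contents of DS-B

ds_c = [rec for rec in ds_a if rec.split("|", 1)[0] not in delete_keys]
print(ds_c)  # ['001|alice', '003|carol']
```

Loading DS-C with LOAD REPLACE then swaps in only the surviving rows, avoiding row-by-row delete locking entirely.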
subratarec
Active User
Joined: 22 Dec 2007 Posts: 126 Location: Bangalore
Hi Rohit,
Thanks for your reply. Yes, the solution you provided is applicable for mainframe users. But they are from a distributed team (DW, working on a Hadoop cluster) and they neither have mainframe access nor know how to work on the mainframe. They use a common PROID to connect their IDE to the DB2 LPAR on z/OS.
That's the reason they are keen to know whether, like Oracle or DB2 LUW, DB2 on z/OS has this compound SQL facility. If that facility is available, they will make use of it. So do you by any chance know if they can use inline SQL PL?
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3051 Location: NYC,USA
subratarec,
The approach I listed can also be done in Hadoop/DW: instead of DFSORT, the DW team would use inner joins / NOT EXISTS, and instead of DS-A, DS-B and DS-C they would need three temp tables.
I have never worked with compound SQL and am not sure it is even allowed, but if I get to know more I will share it here.
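The temp-table adaptation above can be sketched as follows. This is a minimal illustration run through Python's sqlite3; the table names (`temp_a`, `temp_b`, `temp_c`) and single-column key are invented, and on the actual platform these would be Hive or DB2 temporary tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp_a (k INTEGER);  -- full copy of the table (like DS-A)
CREATE TABLE temp_b (k INTEGER);  -- keys of rows to delete (like DS-B)
INSERT INTO temp_a VALUES (1), (2), (3), (4), (5);
INSERT INTO temp_b VALUES (2), (4);

-- temp_c = survivors: rows of temp_a with no match in temp_b (like DS-C)
CREATE TABLE temp_c AS
  SELECT a.k FROM temp_a a
  WHERE NOT EXISTS (SELECT 1 FROM temp_b b WHERE b.k = a.k);
""")

rows = [r[0] for r in conn.execute("SELECT k FROM temp_c ORDER BY k")]
print(rows)  # [1, 3, 5]
```

Replacing the original table with `temp_c` (the final LOAD REPLACE step) then sidesteps the mass-delete locking problem the same way the mainframe unload/reload approach does.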