xsray
New User
Joined: 16 Sep 2008 Posts: 19 Location: illinois
We have a consistent contention issue on a table that many applications access to get a unique number. The process obtains a random number between 1 and 999, then uses that number to read the row on the table that holds the last-used unique number for that range. I would like to eliminate this contention by using sequence objects, which would mean creating 999 of them, each starting at a specified value. We have several tables partitioned on the returned number. Is there a problem with creating 999 sequence objects?
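For readers who want to see the shape of the scheme, here is a minimal in-memory sketch (the counter table is simulated with a dict; all names are assumptions, not the actual schema):

```python
import random

# Simulated "next available" table: one row per range 1..999,
# each holding the last-used number in that range (names are assumptions).
last_used = {r: 0 for r in range(1, 1000)}

def get_unique_number():
    """Pick a random range, then increment that range's counter.
    In the real table, the read/update holds a row lock until commit,
    which is where the contention comes from when many transactions
    collide on the same row."""
    r = random.randint(1, 999)
    last_used[r] += 1
    return r, last_used[r]

rng, nbr = get_unique_number()
```

Each (range, counter) pair is unique, so the pair can serve as the unique number; the contention is purely in the concurrent updates to the hot counter rows.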
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Why do you believe doing basically the same thing with sequence objects would eliminate the contention?
Creating multiple new "next available" tables might help. Changing to row-level locking instead of page-level locking might help more. . .
xsray
According to the DB2 manual, sequence objects avoid this kind of lock contention. Also, the table is already using row-level locking. Your thought on multiple tables sounds like a good possibility, thanks.
We process on average 40 transactions a second that need a unique number.
From the manual:
"The use of sequences can avoid the lock contention problems that can result when applications implement their own sequences, such as in a one-row table that contains a sequence number that each transaction must increment. With DB2 sequences, many users can access and increment the sequence concurrently without waiting. DB2 does not wait for a transaction that has incremented a sequence to commit before allowing another transaction to increment the sequence again."
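A rough in-memory analogy of what the manual describes (this simulates the idea only, not DB2 internals): a sequence increment is a short atomic bump, not a lock held until the caller's transaction commits:

```python
import itertools
import threading

# A shared counter guarded by a lock that is held only for the
# increment itself -- the analogue of a DB2 sequence, where getting
# the next value does not wait on other transactions' commits.
_counter = itertools.count(1)
_lock = threading.Lock()

def next_value():
    # Lock scope covers just the increment; callers never block
    # on each other's open transactions.
    with _lock:
        return next(_counter)

values = []
threads = [threading.Thread(target=lambda: values.append(next_value()))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All 100 concurrent callers get distinct values with no caller ever waiting longer than one increment, which is the property the manual is contrasting with the one-row-table approach.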
GuyC
Senior Member
Joined: 11 Aug 2009 Posts: 1281 Location: Belgium
The problem with 999 sequences is that (AFAIK) you can't parameterize which sequence is used in one static SQL statement.
Thus you would need to code 999 SQL statements and execute the appropriate one depending on your first random number,
or code one giant SQL statement with a CASE expression.
GuyC
You could solve it with one sequence:
X = NEXT VALUE FOR the sequence
unique nr = MOD(X, 1000) * 1,000,000,000 + X / 1000
and your inserts spread nicely over the partitions.
The only drawback is that inserts will always land at the end of each partition (if the clustering index is also the unique number).
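GuyC's mapping can be sanity-checked outside DB2; a sketch in Python, assuming integer division for X/1000:

```python
def unique_nr(x):
    """Map the x-th sequence value to a unique number: the low three
    digits of x select the partition key, and x // 1000 is the
    ascending number within that partition."""
    return (x % 1000) * 1_000_000_000 + x // 1000

# Consecutive sequence values land in different partitions:
print(unique_nr(1))     # 1000000000
print(unique_nr(2))     # 2000000000
# Within one partition the values are strictly ascending, i.e. new
# rows always insert at the end of that partition (GuyC's caveat):
print(unique_nr(1001))  # 1000000001
```

The mapping is injective as long as X / 1000 stays below 1,000,000,000, i.e. for the first ~10^12 sequence values, so a single sequence covers the whole scheme.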
xsray
Thanks Guy, that seems to work nicely.
GuyC
You're welcome, thanks for the feedback.