We have a problem with a table in a non-partitioned tablespace that has a high level of concurrency on it.
The current definitions of the tablespace, table, and indexes are as follows:
Code:
CREATE TABLESPACE TBLSPACE1
IN DBHI
USING STOGROUP SGSMS
PRIQTY 40 SECQTY 14400
ERASE NO
FREEPAGE 10 PCTFREE 5
GBPCACHE CHANGED
TRACKMOD YES
BUFFERPOOL BP1
LOCKSIZE PAGE
LOCKMAX SYSTEM
CLOSE NO
COMPRESS NO
CCSID EBCDIC
DEFINE YES
MAXROWS 255;
CREATE TABLE MDOD.TABLE30
(FIELD001 CHAR(4) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD002 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD003 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD004 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD005 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD006 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD007 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD008 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD009 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD010 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD011 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD012 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD013 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD014 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD015 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD016 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD017 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD018 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD019 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD020 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD021 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD022 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD023 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD024 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD025 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD026 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD027 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD028 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD029 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD030 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD031 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD032 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD033 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD034 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD035 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD036 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD037 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD038 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD039 CHAR(2) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD040 CHAR(7) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD041 CHAR(4) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD042 CHAR(4) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD043 CHAR(8) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD044 CHAR(8) FOR SBCS DATA NOT NULL WITH DEFAULT,
FIELD045 TIMESTAMP NOT NULL WITH DEFAULT) IN
DBHI.TBLSPACE1
AUDIT NONE
DATA CAPTURE NONE
CCSID EBCDIC
NOT VOLATILE;
---------------------------------------------------------------------
DATABASE=DBHI
INDEX=MDOD.INDEX001 ON MDOD.TABLE30
---------------------------------------------------------------------
CREATE UNIQUE INDEX MDOD.INDEX001
ON MDOD.TABLE30
(FIELD001 ASC)
USING STOGROUP SGSMS
PRIQTY 40 SECQTY 14400
ERASE NO
FREEPAGE 10 PCTFREE 5
GBPCACHE CHANGED
NOT CLUSTER
BUFFERPOOL BP2
CLOSE NO
COPY NO
DEFINE YES
PIECESIZE 2 G;
We are using DB2 V8, and this table is accessed in batch by many programs simultaneously, each inserting records through a call to a common routine.
This routine has a cursor with a FOR UPDATE clause that is opened on FIELD001; after processing and changing the values of other fields of that row, it updates the held row and returns the updated data to the calling COBOL program.
The current problem is the high concurrency of programs on this table: many applications need to execute simultaneously, and each needs an invoice number that, because of a business rule, is stored in a combination of several fields. The routine has to supply the calling program with its invoice number (the updated value), and all programs are required to use the same routine.
Since the key field, a commercial office number, sees a high level of concurrency (applications such as loans, accounting, savings, etc. generate operations for many offices), the table usually locks at the page level, the timeout limit is sometimes exceeded, and we have failures in production. We are looking for a way to reduce the locking.
Some people here are suggesting changing the tablespace parameter MAXROWS 255 to MAXROWS 1; however, I'm not so sure about it. What would the effects of this be? The table is used on-line as well, and I was told it would be better to change the parameters like this:
Code:
ALTER TABLESPACE DBHI.TBLSPACE1
LOCKSIZE ROW
LOCKMAX SYSTEM
MAXROWS 255;
I think it would be better to lock only the row being updated, and that would be all.
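For reference, the MAXROWS 1 alternative that others suggest would look something like the sketch below (as I understand it, it keeps page-level locks but puts only one row on each page, so a page lock effectively covers a single row, at the cost of roughly 255 times more pages and buffer pool usage, and it only applies to newly added rows until a REORG is run):
Code:
-- keep LOCKSIZE PAGE, but store one row per page so each
-- page lock covers exactly one row (much more DASD/getpages)
ALTER TABLESPACE DBHI.TBLSPACE1
  LOCKSIZE PAGE
  MAXROWS 1;
-- a REORG is then needed for existing rows to be redistributed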
Anyway, a calling program processes, for example, 100,000 records; if the first 10,000 records are from office 1, the following records are from office 2, and so on, then by the time it reaches the 100,000th record it would have locked rows across all 10 offices, unless each lock is released as soon as the routine internally executes its UPDATE.
More often, the calling programs use restart routines, which issue a physical COMMIT after every 5,000 updates, so I think the locks are released after each commit, but I'm not so sure about this either.
What I'd like is to reduce the locking on this table and its applications to an acceptable minimum, since it is a critical point in the batch, and running the jobs sequentially one after the other is not an option.
And since I'm asking about this: what effect does the TRACKMOD clause have?
I hope my question is clear enough; if more data is needed, let me know.
Joined: 13 Feb 2004 Posts: 551 Location: Bangalore
MAXROWS specifies the maximum number of rows to be kept in a data page. If you are facing contention with LOCKSIZE PAGE, you can consider LOCKSIZE ROW, but there are trade-offs: resource usage for row-level locks will be higher than for page-level locks. Lock avoidance is very important when row locking is used; use ISOLATION(CS) with CURRENTDATA(NO), or ISOLATION(UR).
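For example, at bind time (the collection and member names here are invented; substitute your own):
Code:
BIND PACKAGE(BATCHCOL) MEMBER(INVRTN) -
     ISOLATION(CS) CURRENTDATA(NO)    -
     ACTION(REPLACE)
CURRENTDATA(NO) is what enables lock avoidance for the read-only parts of the access under ISOLATION(CS).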
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
You could make the called routines stored procedures.
You could rewrite the batch cursors to process in segments
(i.e., office 1, then office 2: open/process/close office 1, open/process/close office 2, ...).
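A sketch of what such a segmented cursor could look like, assuming the office number is what FIELD001 holds and :HV-OFFICE is a host variable your driving program sets before each OPEN (the columns chosen are guesses from your description):
Code:
-- one office per open/process/close cycle;
-- :HV-OFFICE is set by the driving program before each OPEN
DECLARE CSR-OFC CURSOR FOR
  SELECT FIELD002, FIELD003
    FROM MDOD.TABLE30
   WHERE FIELD001 = :HV-OFFICE
   FOR UPDATE OF FIELD002, FIELD003;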
You could run Strobe (or something similar) against the commit frequency.
Reducing the UOW size, both by different cursor usage and by committing more frequently, could reduce contention.
Your timeouts will pinpoint (sort of) which processes are your bottlenecks.
Unfortunately, experience has proven that reducing bottleneck A will create bottleneck B.
You need to run a performance analyzer to determine whether the locksize is appropriate.
As Ashimer said, there are trade-offs.
Ashimer wrote:
MAXROWS specifies the maximum number of rows to be kept in a data page. If you are facing contention with LOCKSIZE PAGE, you can consider LOCKSIZE ROW, but there are trade-offs: resource usage for row-level locks will be higher than for page-level locks. Lock avoidance is very important when row locking is used; use ISOLATION(CS) with CURRENTDATA(NO), or ISOLATION(UR).
Hello there Ashimer
I have been reading about isolation levels; however, can you use those clauses when declaring a cursor? And can they work around this lock issue?
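For example, could the isolation be specified on the cursor's SELECT itself, something like the sketch below? (The host variable :HV-KEY and the columns chosen are just for illustration; as I understand it, WITH UR applies only to read-only statements, so it could not be combined with FOR UPDATE.)
Code:
-- statement-level isolation override on the cursor's SELECT
DECLARE CSR1 CURSOR FOR
  SELECT FIELD041, FIELD042
    FROM MDOD.TABLE30
   WHERE FIELD001 = :HV-KEY
   FOR UPDATE OF FIELD041, FIELD042
   WITH CS;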
dbzTHEdinosauer wrote:
You could make the called routines stored procedures.
You could rewrite the batch cursors to process in segments
(i.e., office 1, then office 2: open/process/close office 1, open/process/close office 2, ...).
Sadly, in this client installation we aren't allowed to use stored procedures (MVS mainframe), so we are bound to use called routines instead of SPs.
dbzTHEdinosauer wrote:
You could run Strobe (or something similar) against the commit frequency.
Reducing the UOW size, both by different cursor usage and by committing more frequently, could reduce contention.
We were already thinking of running a performance analysis first, before making the change, however.
dbzTHEdinosauer wrote:
Your timeouts will pinpoint (sort of) which processes are your bottlenecks.
Unfortunately, experience has proven that reducing bottleneck A will create bottleneck B.
You need to run a performance analyzer to determine whether the locksize is appropriate.
As Ashimer said, there are trade-offs.
Your DBA's should be heavily involved in this process.
Thanks. Unfortunately, I was designated DBA with no experience whatsoever in the area, so pretty much everything I know is empirical, and I'm doing my best to solve this.
I see... Well, as a first step I'll request a performance analysis of the failing process and ask for the report before changing anything on the tables. And I'll ask desperately for training in this area...