Check for a record Lock


IBM Mainframe Forums -> DB2
ojdiaz

New User


Joined: 19 Nov 2008
Posts: 98
Location: Spain

PostPosted: Wed Sep 23, 2009 1:04 pm

Hi there. Maybe this is a silly question, but I have an issue with locks and we are looking for an approach to prevent them, if possible.

In SQL Server I think I once saw an instruction to check whether a record was locked. We want to do the same in DB2 in order to set up a wait before attempting to perform an update operation.

We are having some timeouts or deadlocks in a routine that opens a cursor for update on a single record of a table, in a tablespace with LOCK=PAGE and MAXROWS=1. That gives us a lock of one row per data page, and we want to know if there is a way to check whether a particular record is locked before attempting to update it.

The COBOL routine that processes that record is as follows:

Code:

DECLARE HIDC0030 CURSOR FOR
  SELECT FIELD1,
         FIELD2,
         FIELD3, ...
  FROM HIDV0031
  WHERE FIELD1 = :HOST-VAR
  FOR UPDATE OF FIELD2, FIELD3, ...


This cursor, once opened, fetches a single record via the index on FIELD1; the routine then adds 1 to the values of FIELD2 or FIELD3 and updates them with the next statement:

Code:

UPDATE HIDV0031
  SET FIELD2 = :HOST-VAR2,
      FIELD3 = :HOST-VAR3,
      ...
  WHERE CURRENT OF HIDC0030
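Taken together, the two statements fetch one row, add 1 to its counters, and write the new values back. As a toy, in-memory Python sketch of that flow (the dictionary stands in for the DB2 table; the key and values are invented for illustration):

```python
def increment_fields(table, key):
    """Toy stand-in for the cursor logic: fetch the row for `key`,
    add 1 to FIELD2 and FIELD3, write the new values back, and
    return them to the caller, as the COBOL routine does."""
    row = table[key]                      # FETCH via the index on FIELD1
    row["FIELD2"] += 1                    # increments described in the post
    row["FIELD3"] += 1
    return row["FIELD2"], row["FIELD3"]   # values returned to the caller


# hypothetical usage with one invented row
table = {"0001": {"FIELD2": 5, "FIELD3": 9}}
print(increment_fields(table, "0001"))
```

In the real routine the lock acquired by the FOR UPDATE fetch is what the parallel jobs contend for, which the dictionary version obviously cannot show.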


The problem is that this COBOL routine is used in batch processing and many processes access it simultaneously, and sometimes locks happen because the other applications cannot order their processing in a way that prevents them. So we were wondering whether it is possible to issue an SQL statement, or something similar, to check whether the record that satisfies the WHERE clause of the DECLARE CURSOR is locked. That would let us set a little timer before attempting to update the record with the next UPDATE statement.

By the way, the bind option used in our installation is ISOLATION(CS) for all COBOL source that uses DB2.

Can this be accomplished?

We are getting some timeouts because of this.

I'm sorry if the request is not clear enough. I'll clarify if needed.
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Wed Sep 23, 2009 1:14 pm

Quote:
We want to do the same in DB2 in order to set up a wait before attempting to perform an update operation


useless...
what makes You think that after a while the situation might be different?
if You issue a timed wait, what if after the time has expired the resource is still locked?

if You find a way of waiting/enqueuing on a specific resource, the waiting task will be stuck anyway, maybe locking other resources, and so on and so on..

let DB2 do the dirty work, it can do it better than any in-house workarounds
ojdiaz

New User


Joined: 19 Nov 2008
Posts: 98
Location: Spain

PostPosted: Wed Sep 23, 2009 6:20 pm

enrico-sorichetti wrote:
Quote:
We want to do the same in DB2 in order to set up a wait before attempting to perform an update operation


useless...
what makes You think that after a while the situation might be different?
if You issue a timed wait, what if after the time has expired the resource is still locked?


I know that situation could happen, but we'd want to minimize its effects. The locks last a few seconds, but long enough to create the timeouts. Since the parallel processes are known to release their locks after a few records are processed (we have a high commit frequency), it is quite probable that once the timer has run out the record will be unlocked.

enrico-sorichetti wrote:
if You find a way of waiting/enqueuing on a specific resource, the waiting task will be stuck anyway, maybe locking other resources, and so on and so on..

let DB2 do the dirty work, it can do it better than any in-house workarounds


The problem is the abends and the restart processing involved. We would like to prevent the DB2 errors and instead retry the update until the resource is free, but without incurring the timeouts. Restarting this process is quite difficult, and if we could just avoid the DB2 errors involved in the lock, it would solve our problem. Moreover, since at that time those DB2 tables are used exclusively by these processes, there would be very low risk of resource after resource being locked. I'm not sure if I'm making myself clear.

Anyway, aside from the functional recommendation, which is quite useful since I'm learning a lot, is there a way to check this?
GuyC

Senior Member


Joined: 11 Aug 2009
Posts: 1281
Location: Belgium

PostPosted: Wed Sep 23, 2009 7:19 pm

DB2 already waits a while when there is a lock (unless it's a deadlock; you can't resolve a deadlock by waiting).

Maybe you can check out some zparms like IRLMRWT.

Also check that you don't have lock escalation.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Sep 23, 2009 9:40 pm

Hello,

If you fix the design flaw that allows the deadlock to happen, many or most of the time-outs may go away.

If there are still time-outs when there is no deadlock, I suggest the process(es) be looked at to see whether locks are acquired only when needed rather than on every read. Some code is written to always lock rows, just in case an update is required. IMHO, a very bad design.
ojdiaz

New User


Joined: 19 Nov 2008
Posts: 98
Location: Spain

PostPosted: Fri Sep 25, 2009 2:26 pm

Hello Dick

Thanks for your feedback

We have a few deadlocks in some on-line apps, but they are caused by inconsistent programming: process 1 updates table A, then table B, processes some stuff, and finally releases its locks by issuing a commit; process 2 updates table B, then table A, processes some stuff and then issues a commit. Those deadlocks are being corrected by their app owners.

However, the batch problem that affects us is caused not by deadlocks but by timeouts. Well, we'll see if we can optimize some processes.

The code in question is written as I posted it in my first post:

a cursor opened for update of a single record, which is fetched (the value must be returned to the caller program) and then updated.

In this case, the timeouts happen because of parallel processes that access the same resources, and there is no chance to prevent that, since time is an issue and the batch must be completed before 8:00 am.

Anyway, thanks for the support. I guess then there is no way to check this. I just wanted to try it and evaluate the results.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Fri Sep 25, 2009 10:04 pm

Hello,

Quote:
In this case, the timeouts happen because of parallel process that access the same resources,
Parallel processing is a requirement in many systems. As I mentioned earlier, the process should only lock the row(s) to update when they are to be updated - not for the duration of some process(es).

Locking some row(s) for the length of a process is almost always due to a lack of experience on the part of the designer(s) or laziness on the part of the developer(s).

Consider the online systems that support the same functional requirement with hundreds or thousands of concurrent users. Such an implementation would probably not be tolerated by management.
ojdiaz

New User


Joined: 19 Nov 2008
Posts: 98
Location: Spain

PostPosted: Mon Sep 28, 2009 1:44 pm

dick scherrer wrote:
Hello,

Quote:
In this case, the timeouts happen because of parallel process that access the same resources,
Parallel processing is a requirement in many systems. As I mentioned earlier, the process should only lock the row(s) to update when they are to be updated - not for the duration of some process(es).

Locking some row(s) for the length of a process is almost always due to a lack of experience on the part of the designer(s) or laziness on the part of the developer(s).

Consider the online systems that support the same functional requirement with hundreds or thousands of concurrent users. Such an implementation would probably not be tolerated by management.

Hello Dick

thanks for taking the time to reply

We are locking only one row for each parallel batch job's update operation (there are 6 batch jobs in total running in parallel, segmented by ranges: job 1 deals with range 1 through 100, job 2 with range 101 through 200, and so on).

However, we have a peculiar case (or not?) here. All DB2 batch jobs (COBOL software executing DB2) that perform update operations are required to use a "Restart Routine". This routine keeps track of two things: the last input record of the caller program for which a commit was issued, and that record's sequence number. That provides a sync point for restarting the program, preventing damage to the data from processing the same records several times, and also avoids reprocessing all the records again. In this case, the routine is set to commit after every 20 input records of the caller program.
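The restart routine described above can be sketched roughly as follows. This is a pure-Python illustration of the checkpointing logic, not the actual COBOL; the commit frequency of 20 comes from the post, while the function and variable names are invented:

```python
COMMIT_FREQUENCY = 20  # the routine commits after every 20 input records

def process_records(records, start_after=0):
    """Sketch of the restart routine: work through the input records,
    committing every COMMIT_FREQUENCY records and remembering the
    sequence number of the last committed record as the restart point.
    Record numbers are assumed to be 1..len(records)."""
    pending = []                # work done since the last commit
    committed = []              # stands in for work made permanent by COMMIT
    last_checkpoint = start_after
    for number, record in enumerate(records, start=1):
        if number <= start_after:
            continue            # on restart, skip records already committed
        pending.append(record)
        if number % COMMIT_FREQUENCY == 0:
            committed.extend(pending)    # COMMIT: locks released here
            pending.clear()
            last_checkpoint = number     # the sync point saved for restart
    if pending:                          # final commit for the tail
        committed.extend(pending)
        last_checkpoint = len(records)
    return committed, last_checkpoint
```

A restart then simply calls `process_records(records, start_after=saved_checkpoint)`, which skips everything already made permanent; the commit is also the moment the row locks are released, which is why the frequency matters for the timeouts discussed here.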

The tablespace that causes the timeout problems is already set up to lock only the "rows" that are being updated until each commit is issued: we have LOCK=PAGE and MAXROWS=1.

The DB2 cost of issuing a commit for each update operation is far too expensive, and it is quite slow. In a test environment (an exact copy of the production environment) we changed the commit frequency of the routine to 1, so that a commit was issued after processing each input record; the execution time was far too high, around 5 hours (clock). With the original, lower commit frequency (every 20 records) the execution time is considerably lower, around 30 minutes.

We can't afford to commit after each record because this process performs the invoicing interest calculation for an accounts application in a bank, and it is a "must" that the whole batch finishes before 8:00 am, or the commercial offices won't be able to operate once they open to the public. Hence our concern.

Anyway, thanks again. I'm sorry if I went on too long. Sometimes it is frustrating to explain to my users, especially non-technical ones, that some technical problems can't be solved that easily.

We'll keep working on an alternative, and if I find something useful, I'll post it.

Thanks Again

Best Regards

Oliver
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Sep 28, 2009 7:49 pm

Hello,

Quote:
Anyway, thanks again. I'm sorry if I extended myself too much. Sometimes is frustrating to explain to my users, especially non technical ones, that some technical problems can't be solved that easily.
You're welcome :)

Yup, it can be frustrating. . .

Quote:
The DB2 cost for issuing an commit for each update operation is way to expensive
Has anyone identified why? It sounds like there may be something else going on as well. I'm not familiar with the mechanics of DB2's lock mechanism, but some that I am familiar with are exceedingly trivial. . .

Quote:
We have lock=PAGE
Page locks have caused many problems on many DB2 systems. If your DBA believes this is the way you need to operate, I suggest you change the data so that only one row is on a page. . . Not elegant, but if all 6 of your "control" rows happen to be on the same page, this could cause lots of time-outs. . .
John Alexander1

New User


Joined: 22 Sep 2009
Posts: 7
Location: Sydney Australia

PostPosted: Thu Oct 01, 2009 5:10 am

Hi,
I think checking first whether the page or row is locked is unnecessary overhead. Why not just try the UPDATE: if it works, well and good, and if you get SQLCODE -911 or -913, then go to sleep & retry again later a number of times.
Also, if the table is not large, consider having the DBAs add a lot of free space or specify MAXROWS 1, so that you have one row on a page. That way the only deadlock you'll get is when two processes try to update the same row; otherwise the updates should work.
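The try-then-retry approach above might look roughly like this. This is a minimal Python sketch under stated assumptions: `attempt_update` is a hypothetical callable standing in for the embedded-SQL UPDATE and returns an SQLCODE; -911 and -913 are the codes mentioned above, and the retry count and delay are invented placeholders:

```python
import time

SQLCODE_DEADLOCK = -911  # unit of work rolled back (deadlock or timeout)
SQLCODE_TIMEOUT = -913   # statement failed, resource unavailable

def update_with_retry(attempt_update, max_retries=3, delay_seconds=1.0):
    """Try the update; on -911/-913, sleep and retry a small number of
    times before giving up and surfacing the error."""
    for attempt in range(max_retries + 1):
        sqlcode = attempt_update()
        if sqlcode not in (SQLCODE_DEADLOCK, SQLCODE_TIMEOUT):
            return sqlcode            # success (0) or an unrelated error
        if attempt < max_retries:
            time.sleep(delay_seconds)  # let the competing job commit
    return sqlcode                     # still locked after all retries
```

Note that after a -911 DB2 has already rolled back the unit of work, so in the real COBOL routine the retry would have to reposition the cursor and redo any uncommitted work, not just reissue the UPDATE.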

Good luck
john
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Thu Oct 01, 2009 7:58 pm
Reply with quote

Hello,

Quote:
go to sleep & retry again later a number of times
A small number of times. . . ;)