I have been trying to find a simple way to check the performance of DL/I calls. There have been some discussions on this in the past, but I could not get a clear understanding.
I just want to know if there is a simple way of finding the performance (in terms of CPU time) of an individual DL/I call (something like DB2's EXPLAIN tables).
My intention is to carry out a simple analysis and comparison between a set of IMS DB calls and DB2 queries for research purposes.
I don't know of anything that will tell you "this will cause a sequential scan of the index" like you get with EXPLAIN.
Remember that with IMS, you retrieve segments one call at a time. You don't submit a query and then process a list of results.
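To make the one-call-at-a-time model concrete, here's a minimal Python sketch. It's my own illustration, not a real IMS API: `dli_get_next` and `FAKE_DB` are made-up stand-ins for a GN (Get Next) call and the segments in a database.

```python
# Toy stand-in for DL/I "get next" processing -- NOT a real IMS interface.
FAKE_DB = ["seg1", "seg2", "seg3"]   # pretend these are segments in the database
_cursor = 0

def dli_get_next():
    """One GN call returns ONE segment; returns None where a real DL/I
    call would come back with status code 'GB' (end of database)."""
    global _cursor
    if _cursor >= len(FAKE_DB):
        return None
    seg = FAKE_DB[_cursor]
    _cursor += 1
    return seg

# IMS style: one call into the database per segment, loop until 'GB'.
segments = []
while (seg := dli_get_next()) is not None:
    segments.append(seg)
print(segments)

# DB2 style, for contrast: one query, then walk a result set the engine built.
# rows = cursor.execute("SELECT NAME FROM PEOPLE WHERE CASE_ID = ?", (case_id,)).fetchall()
```

That's also why per-call cost matters so much for your comparison: whatever one DL/I call costs gets multiplied by the number of segments you retrieve.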
Biggest thing to avoid if you can: randomized reads. That means the segments being retrieved are not stored sequentially. IMS has a database type called HDAM (PHDAM for partitioned) that uses a randomizing routine to scatter segments across the physical database, so the newest records don't sit together on the disk packs. It helps prevent bottlenecks when accessing the data.
I've seen this with "people in a case." Case and People are separate databases, both randomized. A program reads the Case database sequentially, then looks up the names of the people in each case. The reads of the People database are then random, so buffering can't help, and there is always at least one I/O for each person.
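Here's a toy simulation of that effect (again my own illustration, with made-up block and buffer-pool sizes, not IMS internals). With sequential placement the buffer pool absorbs almost every read; with randomized placement nearly every person costs a physical I/O.

```python
import random

NUM_PEOPLE = 10_000
PEOPLE_PER_BLOCK = 50      # assumed segments per physical block
POOL_SIZE = 20             # assumed buffer pool size, in blocks

def count_ios(block_of):
    """Count physical reads through a simple FIFO buffer pool."""
    pool, ios = [], 0
    for person in range(NUM_PEOPLE):
        block = block_of(person)
        if block not in pool:          # buffer miss -> physical I/O
            ios += 1
            pool.append(block)
            if len(pool) > POOL_SIZE:
                pool.pop(0)
    return ios

# Sequential placement: neighbors share a block, so buffering pays off.
print("sequential:", count_ios(lambda p: p // PEOPLE_PER_BLOCK))   # 200 I/Os

# Randomized placement: each person lives on an effectively random block.
random.seed(1)
home = {p: random.randrange(NUM_PEOPLE // PEOPLE_PER_BLOCK) for p in range(NUM_PEOPLE)}
print("randomized:", count_ios(lambda p: home[p]))                 # roughly 9,000 I/Os
```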
Another big killer is using an indexed database but qualifying the lookups with fields that aren't in the index. It's a little more insidious. You have a nice index on your database, say SSN for the People. That means you have a sequential list of SSNs, each one pointing directly to a People segment. Now your program comes along and says "find me the SSNs of people in zip code 43230." IMS says fine: I'll go to each SSN in the index, randomly read the People segment it points to, check the zip code, and get back to you.
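A toy sketch of that access pattern, with made-up data (mine, not IMS internals): because the SSN index can't narrow anything down by zip code, every index entry turns into one random segment read.

```python
# 10,000 people, indexed by SSN; only 1 in 100 lives in zip 43230.
people = {ssn: {"zip": "43230" if ssn % 100 == 0 else "10001"}
          for ssn in range(100_000, 110_000)}
ssn_index = sorted(people)       # the index: ordered SSNs pointing at segments

def find_by_zip(zip_code):
    """Qualify on a field that's NOT in the index: full index walk,
    plus one (random) segment read per index entry."""
    hits, reads = [], 0
    for ssn in ssn_index:        # every index entry...
        reads += 1               # ...forces a read of the segment behind it
        if people[ssn]["zip"] == zip_code:
            hits.append(ssn)
    return hits, reads

hits, reads = find_by_zip("43230")
print(f"{len(hits)} matches, {reads} segment reads")   # 100 matches, 10,000 reads
```

An index on zip code (a secondary index, in IMS terms) would flip that around: only the matching entries get touched.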