
MSU calculation for DB2 (to decide best DB2 licence)


IBM Mainframe Forums -> DB2
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Wed Mar 23, 2016 6:07 pm

As stated in my earlier posts, I have to present on the topic "Measuring DB2 activity in terms of MSU / MIPS". My client wants to know the current MSU / MIPS utilization trend for his existing DB2 v8 so that he can decide how many MSUs / MIPS he should licence for a new DB2 v10 purchase.

My earlier post:

ibmmainframes.com/about64937.html

I came to know of certain MSU calculation tools, SCPT / SCRT (Sub-Capacity Planning Tool and Sub-Capacity Reporting Tool). I also learned that MSU data is recorded in RMF Type 70 records. Beyond these hints I am not finding further material.

The fact is there are several links on SCPT / SCRT and on RMF Type 70 / Type 72 records, and even some JCLs to calculate MSU. But I am looking to calculate MSU specifically for the DB2 subsystem: what is the breakup of the MSU measurement for a whole z/OS system, and how much of it belongs to DB2? Once the DB2 MSU figure is available, my customer can gauge the size of his current business and correlate it with the existing licence cost factor (z/OS and z/OS-based software licences are sold on the basis of MSUs).

So I began looking at how DB2 activity is currently measured. The DB2 performance measurement techniques I could find on Google measure performance in terms of the time taken for a DB2 package to execute (while in an "IN-SQL" state). The standard performance measurement tools are Omegamon / Strobe / BMC MainView.

As stated earlier, these "time-taken-to-complete" statistics are not helping me achieve my goal of measuring DB2 performance in terms of MSU / MIPS. My customer says he finds it difficult to decide on a DB2 computing capacity purchase using them.

My friends are willing to help. If I give them some DB2 code to execute (into which I can code some DISPLAY statements), they can run it at their end and mail me the results. I will then have to work out how to convert the resulting "space / bytes occupied" and "time-taken-to-complete" statistics into MSU / MIPS.

At least I want to make the best use of the available help, though there are some constraints in terms of the accesses my friends currently have.

My friends say they don't have permission to create new plans / bind packages in their shops. All the existing packages that they monitor in Omegamon / Strobe / BMC MainView are closed for further modification (meaning they can't insert my DB2 statements into them).

But they do have access to a basic DB2 subsystem and the privilege to execute COBOL programs, meaning they can create tables and perform insert / update / delete operations through front-end COBOL applications, etc.

So now I am limited to basic table creation / basic COBOL stuff.

Of course DB2 DML / DDL access and COBOL setups differ from shop to shop, but certain things are common. No matter what types of tablespaces / stogroups are present, and no matter what extent / page / locking characteristics are set, some things hold everywhere. Whatever the COBOL compiler (say VS COBOL II or a later compiler), the basic DISPLAY statements and EXEC SQL statements remain the same.

Since I am limiting myself to calculating "space / bytes occupied" and "processing time taken" statistics, I am planning to write some basic SQL / COBOL with DISPLAY statements to help me collect the statistics I need.
-----------------------------------------

I am assuming DB2 v8

A sample COBOL-DB2 project

The VisionPLUS MBS (Merchant BankCard System) module generates several output datasets every day, some of which contain details of daily ATM transactions. Once the MBS transactions are validated by TRAMS, several VSAM KSDS files are created, each containing about a billion records (an imagined average).

Each record is 4000 characters long and contains various fields (labels and values) such as:

CUST_ID
CUST_NAME
CUST_HISTORY_CODE
CUST_TYPE_CD
ATM_CARD_BANK_ID
ATM_ID
DT_OF_ATM_TRANSACTION
ATM_TRANSACTION_TYPE_CD
ATM_TRANSACTION_RESULT_CD
CASH_WITHDRAWN
ATM_TRANSACTION_ERR_CD

There are several hundred fields in all. The above are core fields I used in the past for troubleshooting (of course, troubleshooting / testing VisionPLUS MBS is complicated; for the purposes of this DB2 MSU / MIPS presentation I have taken just the above 11 fields).

So, to start normalization:

1st Normal Form: remove all repeating data groups and identify primary keys
2nd Normal Form: remove partial functional dependencies on the key (foreign key / primary key relationships)
3rd Normal Form: remove all transitive dependencies

1st Normal Form: the key fields will be CUST_ID, CUST_TYPE_CD, ATM_CARD_BANK_ID, ATM_ID, ATM_TRANSACTION_TYPE_CD, ATM_TRANSACTION_RESULT_CD and ATM_TRANSACTION_ERR_CD (7 key fields)
2nd Normal Form: the above fields become foreign keys in other tables
3rd Normal Form: nothing to remove yet in this example


So right now 7 tables, each with its own primary key, have been identified.

CREATE TABLE ATM_CUSTOMER(
CUST_ID VARCHAR(10) NOT NULL,
CUST_NAME VARCHAR(30) NOT NULL,
CUST_ACCT_NO DECIMAL(15) NOT NULL,
CUST_TYPE_CD VARCHAR(4) NOT NULL,
CUST_ADDR VARCHAR(50) NOT NULL,
CUST_PH_NO VARCHAR(15) NOT NULL,
-- VARCHAR to allow for characters like + in phone numbers
CUST_EMAIL_ID VARCHAR(20),
CUST_ATM_CARD_NO VARCHAR(30) NOT NULL,
-- imagining some banks allow alphanumeric characters in ATM card numbers
ATM_CARD_BANK_ID VARCHAR(4) NOT NULL,
PRIMARY KEY(CUST_ID),
FOREIGN KEY(CUST_TYPE_CD) REFERENCES CUSTOMER_TYPES (CUST_TYPE_CD) ON DELETE CASCADE,
FOREIGN KEY(ATM_CARD_BANK_ID) REFERENCES ATM_BANK_MASTER (ATM_CARD_BANK_ID) ON DELETE CASCADE,
FOREIGN KEY(CUST_ACCT_NO) REFERENCES CUSTOMER_ACCOUNT_MASTER (CUST_ACCT_NO) ON DELETE CASCADE
) IN <database-name.tablespace-name>;


CREATE TABLE CUSTOMER_TYPES(
CUST_TYPE_CD VARCHAR(4) NOT NULL,
CUST_TYPE_SHORT_DESC VARCHAR(10),
CUST_TYPE_LONG_DESC VARCHAR(40),
LOAN_ALLOWED CHAR(1) NOT NULL,
PRIMARY KEY(CUST_TYPE_CD)
) IN <database-name.tablespace-name>;

CREATE TABLE ATM_BANK_MASTER(
ATM_CARD_BANK_ID VARCHAR(4) NOT NULL,
BANK_NAME VARCHAR(15) NOT NULL,
ATM_CARD_TYPE VARCHAR(15),
-- say MasterCard or Visa; planning to make this nullable since there could be additional types in future
ATM_OTHER_BANK_TRANSACTION_ALLOWED CHAR(1) NOT NULL,
ASSOCIATED_ATM_ID VARCHAR(4),
PRIMARY KEY(ATM_CARD_BANK_ID),
FOREIGN KEY(ASSOCIATED_ATM_ID) REFERENCES ATM_MASTER (ATM_ID) ON DELETE NO ACTION
-- ATM ids may change but bank name records shouldn't, so NO ACTION on this table even if ATM_MASTER changes
) IN <database-name.tablespace-name>;

CREATE TABLE ATM_MASTER(
ATM_ID VARCHAR(4) NOT NULL,
ATM_LOCATION_AREA VARCHAR(10) NOT NULL,
ATM_LOCATION_CITY VARCHAR(10) NOT NULL,
ATM_LOCATION_STATE VARCHAR(10) NOT NULL,
ATM_LOCATION_COUNTRY VARCHAR(20) NOT NULL,
ATM_SUPPLIER_ID VARCHAR(4) NOT NULL,
ATM_BANK_NETWORK_ID VARCHAR(4) NOT NULL,
ATM_OTHER_BANK_TRANSACTION_ALLOWED CHAR(1) NOT NULL,
ATM_OPERATIONAL_CHARACTERISTICS_CD VARCHAR(4) NOT NULL,
-- taking into account that an ATM may be cash-dispense only, or cash-dispense-cum-cheque-drop, etc.
PRIMARY KEY(ATM_ID),
FOREIGN KEY(ATM_LOCATION_AREA, ATM_LOCATION_CITY, ATM_LOCATION_STATE, ATM_LOCATION_COUNTRY)
REFERENCES COUNTRY_MASTER (ATM_LOCATION_AREA, ATM_LOCATION_CITY, ATM_LOCATION_STATE, ATM_LOCATION_COUNTRY),
FOREIGN KEY(ATM_SUPPLIER_ID) REFERENCES ATM_SUPPLIER_MASTER (ATM_SUPPLIER_ID),
FOREIGN KEY(ATM_BANK_NETWORK_ID) REFERENCES ATM_BANK_NETWORK_MASTER (ATM_BANK_NETWORK_ID)
) IN <database-name.tablespace-name>;


(Will attach more)

So the DB design I have come up with so far looks like this (primary entry point for the COBOL apps: the ATM_MASTER table):

ATM_CUSTOMER ----references----> CUSTOMER_ACCOUNT_MASTER
     |
     +-------references-------> CUSTOMER_TYPES
     |
     +-------references-------> ATM_BANK_MASTER

There are more tables but I will bring them later.

Now for a COBOL program that inserts records into these tables and simultaneously tells me the "space / bytes occupied" and "time-taken-to-complete" statistics.

Before I put in the INSERT statements, let me calculate the maximum length of each record in ATM_CUSTOMER:

CUST_ID VARCHAR(10) +
CUST_NAME VARCHAR(30) +
CUST_ACCT_NO DECIMAL(15) +
CUST_TYPE_CD VARCHAR(4) +
CUST_ADDR VARCHAR(50) +
CUST_PH_NO VARCHAR(15) +
CUST_EMAIL_ID VARCHAR(20) +
CUST_ATM_CARD_NO VARCHAR(30) +
ATM_CARD_BANK_ID VARCHAR(4)


(10) + (30) + (15) + (4) + (50)
+ (15) + (20) + (30) + (4) = 178 bytes

So as I keep adding each record I will be totalling the string lengths of each field value. But I know that a maximum of 178 bytes of declared data can be occupied per record, no more. I need this upper limit for a later calculation.
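Note that 178 is the declared maximum of the column lengths; internally DB2 stores DECIMAL(15) in 8 bytes (packed decimal), adds a 2-byte length prefix to each VARCHAR, and adds a 1-byte indicator to each nullable column, so the actual maximum row size is a bit different. A quick sketch of both figures (Python, purely illustrative; not something the COBOL program would run):

```python
# Sketch: declared vs. internal maximum row size for ATM_CUSTOMER.
# Assumptions: DB2 for z/OS storage rules -- DECIMAL(p) takes p//2 + 1
# bytes, VARCHAR(n) takes n + 2 bytes (2-byte length prefix), and a
# nullable column adds 1 indicator byte.
columns = [
    # (name, type, declared_length, nullable)
    ("CUST_ID",          "VARCHAR", 10, False),
    ("CUST_NAME",        "VARCHAR", 30, False),
    ("CUST_ACCT_NO",     "DECIMAL", 15, False),
    ("CUST_TYPE_CD",     "VARCHAR",  4, False),
    ("CUST_ADDR",        "VARCHAR", 50, False),
    ("CUST_PH_NO",       "VARCHAR", 15, False),
    ("CUST_EMAIL_ID",    "VARCHAR", 20, True),
    ("CUST_ATM_CARD_NO", "VARCHAR", 30, False),
    ("ATM_CARD_BANK_ID", "VARCHAR",  4, False),
]

def stored_bytes(col_type, length, nullable):
    """Internal bytes DB2 needs for one column value at maximum length."""
    if col_type == "DECIMAL":
        size = length // 2 + 1        # packed decimal
    else:                             # VARCHAR
        size = length + 2             # data + 2-byte length prefix
    return size + (1 if nullable else 0)

declared = sum(length for _, _, length, _ in columns)
internal = sum(stored_bytes(t, l, n) for _, t, l, n in columns)
print(declared, internal)             # 178 declared vs. 188 internal
```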

Similarly for records of other tables

Need to stop here. Will be back soon to complete
Nic Clouston

Global Moderator


Joined: 10 May 2007
Posts: 2455
Location: Hampshire, UK

PostPosted: Wed Mar 23, 2016 8:08 pm

Please keep your questions concise - I gave up reading after about 3 lines. I do not even know if there is a question in there!
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Wed Mar 23, 2016 9:37 pm

Quote:
But I am looking to calculate MSU specifically for DB2 subsystem, meaning what is the breakup of MSU measurement for a whole z/OS system and how much is the MSU measurement for DB2.
Does the site have zIIP or zAAP processors, or are they planning on using them? If so, your calculations will be complicated since these specialty engines reduce the workload on the processor (CP) for DB2.

I would think you could get a good approximation of the value you want by looking at the SMF type 72 subtype 3 records for the service class (or service classes) for DB2. The Service/Report Class Period Data Section has the service units by total, TCB, IOC, central storage, and SRB. And converting service units to MSU is simple. You may have to deal with multiple periods, depending on how the WLM policy is put together, and how much data you need to collect. But at least you won't be spending hours writing posts and analyzing data that adds little, if any, incremental value to the results.
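By the classic definition, one MSU is one million CPU service units consumed per hour, which is why the conversion is simple: normalize each RMF interval's service units to an hourly rate and divide by a million. A minimal Python sketch with made-up numbers (editorial illustration, not part of the original post):

```python
# Sketch: converting SMF type 72 subtype 3 CPU service units to MSU.
# Assumption: 1 MSU = 1,000,000 CPU service units consumed per hour;
# the service-unit counts here are invented for illustration.
def interval_msu(cpu_service_units, interval_seconds):
    """Normalize one RMF interval's CPU service units to an hourly MSU rate."""
    su_per_hour = cpu_service_units * 3600.0 / interval_seconds
    return su_per_hour / 1_000_000.0

# e.g. a DB2 service class consuming 4,500,000 SUs in a 15-minute interval
msu = interval_msu(4_500_000, 15 * 60)
print(msu)   # 18.0 MSU for that interval
```

Summing the DB2 service class(es) this way, interval by interval, gives the DB2 share of the system's MSU consumption.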

And if you think people are actually reading your entire posts, you are VERY mistaken. We don't care about the DB2 details of what you're doing, especially since we don't need to know those details to point you to the SMF type 72 subtype 3 records.
vasanthz

Global Moderator


Joined: 28 Aug 2007
Posts: 1742
Location: Tirupur, India

PostPosted: Wed Mar 23, 2016 9:58 pm

Hello,

Quote:
I came to know of certain MSU calculation tools SCPT / SCRT (Sub Capacity Planning Tool and Sub Capacity Reporting Tool).


You have your answer here. The SCRT report shows how many MSUs the DB2 subsystem is using, broken down by LPAR. There is a value called TOOL MSU; that is what IBM uses for licensing.

Rest of the post: TL;DR.

Why do you want to write your own tool when IBM supplies a tool just for licensing purpose?

SCRT is not hard to run: just download it from IBM, upload it to a PS file and execute it. Voilà, you have your SCRT report.

Just make sure you have appropriate access permissions.
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Mon Mar 28, 2016 11:47 am

Hi,

Apologies for the lengthy post. To be frank, I was getting very few pointers from Google, and my customer wanted me to train him on preparing "Return on DB2 Investment" reports. I gave him a presentation on Omegamon / Strobe / BMC MainView, which measure DB2 activity in terms of "time taken to complete" (so many minutes / hours, etc.), but he said he was unable to correlate these "time" statistics with the original licence cost factor: MSUs.

Robert Sample wrote:
-----------------------------------------------------------
"Does the site have zIIP or zAAP processors, or are they planning on using them? If so, your calculations will be complicated since these specialty engines reduce the workload on the processor (CP) for DB2."
--------------------------------------------------------------

I will take a note of this in my presentation.

Robert Sample wrote:
-------------------------------------------------------------------
"I would think you could get a good approximation of the value you want by looking at the SMF type 72 subtype 3 records for the service class (or service classes) for DB2. The Service/Report Class Period Data Section has the service units by total, TCB, IOC, central storage, and SRB. And converting service units to MSU is simple. You may have to deal with multiple periods, depending on how the WLM policy is put together, and how much data you need to collect."
---------------------------------------------------------------

Thanks Robert for this lead. I will work on this further.

vasanthz wrote:
---------------------------------------------------------------
"The SCRT report would give you how much MSU, the DB2 subsystem is using broken up by LPAR. There would be a value called TOOL MSU, that is what IBM uses for licensing.
[...]

SCRT is not hard to run, just download from IBM, upload to a PS file, execute. Voilà.. you have your SCRT report.

Just make sure you have appropriate access permissions."

------------------------------------------------------------------

Thanks vasanthz for your inputs. I will work on this further.

Thanks everyone.