Need training ppt for DB2 v10 MSU / MIPS (zCAP/VWLC licence)


IBM Mainframe Forums -> DB2
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Tue Mar 15, 2016 10:30 pm

Hi DB2 Experts,

Looking for good links / materials for a corporate training PowerPoint on the topics below (one combined presentation for all 6 topics):

1) Measuring z/OS DB2 v10 performance (in terms of MSUs / MIPS)
2) Measuring storage disk performance (DS8870 and DS8000 disks), again in terms of MSUs / MIPS
3) Generating weekly DB2 and storage disk activity reports and, in parallel, MSU / MIPS reports
4) Correlating MSU / MIPS reports with the original MSU / MIPS allotted as part of the DB2 purchase (VWLC and zCAP licencing), clearly showing the weekly trend of MSUs / MIPS (so that MSU / MIPS utilization trends can be shown)
5) Giving high-level reports on the zEnterprise Analytics System (focusing on DS8870 utilization)
(there is a 6th topic which comes afterwards)

All these training topics are different from the standard DB2 performance measurement trainings / ppts available on Google. So I am putting down some facts in a pointwise manner (starting with what the customer exactly requires), which I believe might help me get more visibility on the above topics.

Requirement: The customer needs a weekly one-line summary on "Return on Investment" for DB2 v10 (I believe the customer has not yet purchased it, but as and when he does he will compare the cost-versus-returns statistics for both the VWLC and the zCAP licencing for DB2). Since the DB2 v10 purchase has not yet been done, I don't have official IBM reports available with me. (The customer is emphasizing training on the 4th topic, "Correlation of measured MSU / MIPS statistics with original allotted capacity". Maybe he is considering LPAR soft capping during the DB2 v10 purchase.)

So this is what I understand from my Google searches so far:

1) Currently DB2 performance is measured (and reports generated) based on the following factors:

DASD volume / space occupied
Number of DB2 jobs running on a daily basis, the average time they take to complete, the shortest and longest DB2 jobs
Number of output files generated by each job
Business value ratings of the DB2 outputs (organization / shop specific)
Frequency of job abends (total number of hours taken to resolve these abends)
If a job abend required a REORG: the total number of times in a month (or maybe 2 months) the advisory REORG-pending indicator (AREO*) showed up while the job was running smoothly, and the reason why the job abended for want of a REORG even though the indicator didn't show up
Trend in database / database-accessing-application redesign, with the efficiency of the redesign measured in terms of new business value generated (say, a number of new output files generated), or maybe in terms of reduction in DB2 job completion times
................... (am still googling)

2) From the above factors / reports it is difficult to judge cost-vs-returns, as per the customer (maybe the above reports are good enough / standard enough the world over, but not for this customer). This customer can't make out the weekly MSU / MIPS trend (the critical cost factor while purchasing DB2).

3) For example, if there are reports on how much time (how many hours) business-critical DB2 jobs took, he says it is difficult for him to correlate this fact with the original cost of the purchased DB2 MSU / MIPS capacity. Same for the other reports: he finds it difficult to correlate DASD volume / space with MSU / MIPS, and difficult to correlate abend-resolution times / redesign-efficiency statistics with MSU / MIPS, etc.

4) So over and above the aforementioned reports this customer wants:

How many MSUs / MIPS each DB2 job took (for each cycle that it ran)
If any non-DB2-related abends occurred (for example, COBOL apps suddenly went out of control and started dumping data into DB2 tablespaces, causing an abend, etc.), then what redesigning is possible so that MSUs / MIPS can be saved

-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
For a bit more info: the customer already has z/OS v1.13 in his shop with DB2 v8 running on it, and he is using an old Hitachi storage system. He has not yet started the official migration to DB2 v10 (even 3-4 years after IBM announced end of support for earlier DB2 versions; some organizations are slow). And nobody knows whether he is going for an upgrade of his storage systems as well (at least he is not telling me). I bet he has to upgrade his storage drives: Hitachi has ended official support for its Lightning 9900 (which is what this customer is using).

Looking at the above combination, I guess this customer does not have enough business to necessitate upgrades. I heard that this customer is going for a merger.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*


After this corporate training is finished, the customer will start the MSU / MIPS measurement for the existing DB2 v8 and the existing Lightning 9900 storage system.

If the customer were going for a DB2 upgrade, then the above measurements / reports would not matter much to him (in any case, IBM has a set marketing standard for a DB2 v10 upgrade sale). And the MSU / MIPS measurements of the Hitachi storage drives definitely won't matter. But still the customer will be doing the MSU / MIPS measurement. Strange.

So here's the twist: this customer IS GOING FOR A NEW PURCHASE ALTOGETHER OF DB2 v10 (everything will be a new purchase: a new z System / new CICS / new IMS / new storage, etc.), probably a zEnterprise Analytics System.

So this is where his DB2 v8 MSU / MIPS measurements become useful: for a new DB2 v10 purchase. With the current DB2 v8 MSU / MIPS reports he will have a fair idea of the infrastructure utilization of his business and whether this trend is likely to increase.

Of course, for the existing setup the customer will be upgrading version 8 to a higher version (and he doesn't want me to even think about it or consider it for this training).

Now here's the complication: is there a way (any tool / any framework) to forecast the MSU / MIPS measurement by evaluating the enhanced features of DB2 v10 without actually implementing it? This is a very tricky situation.

The customer has applications built in COBOL, PL/I and SAS (and of course JCL, silly me) and a couple of Windows-based J2EE applications, and ALL of them connect to the existing DB2 v8.

Now, considering that he already has a couple of applications running in his shop, he wants to know if there is a way that MSUs / MIPS can be calculated if he ported his existing apps to the new MLC z/OS servers (again, the MLC setup doesn't exist yet). So he wants training on a tool that will simulate a new MLC DB2 v10 (and if required even a new MLC z/OS) and then calculate the MSUs / MIPS accordingly.

So the fact is I don't have a real DB2 v10 where all the existing DB2 v8 applications can be ported and the MSU / MIPS measurement done: BY ACTUALLY REDESIGNING APPLICATION BY APPLICATION, INTRODUCING ENHANCED FEATURES OF DB2 v10, AND GIVING AN MSU / MIPS BREAKUP FOR EACH ENHANCED DB2 v10 CAPABILITY.

At this stage I am not even talking of simulating the DS8000 / DS8870 storage drives, just a simulation of DB2 v10 (he already has z/OS 1.13). I am pretty sure there are no DB2 or storage simulators. I will have to collect real reports and put them in my ppt.

So now there are additional reports to be generated

5) Over and above the existing MSU / MIPS reports for the existing DB2 jobs, new reports are required that can tell that if such-and-such a DB2 v10 feature is used, then so many MSUs / MIPS are saved. For example, pureXML. Considering the fact that there is no real DB2 v10 setup available: WHAT WOULD BE THE MSU / MIPS SAVINGS IF pureXML WERE USED?

So now this is the 6th topic: "Forecasted MSU / MIPS measurement (correlated with real DB2 v10 reports) for each enhanced DB2 v10 capability, feature by feature".

I am trying to dig up as many real reports on DB2 v10 from Google as I can.

So, to sum up everything, I am looking for good links / materials to prepare a combined ppt on:

1) Measuring z/OS DB2 v10 performance (in terms of MSUs / MIPS)
2) Measuring storage disk performance (DS8870 and DS8000 disks), again in terms of MSUs / MIPS
3) Generating weekly DB2 and storage disk activity reports and, in parallel, MSU / MIPS reports
4) Correlating MSU / MIPS reports with the original MSU / MIPS allotted as part of the DB2 purchase (VWLC and zCAP licencing), clearly showing how many MSUs / MIPS were used in a week and how much processing capacity still remains (i.e. untapped / unutilized processing power)
5) Giving high-level reports on the zEnterprise Analytics System (focusing on DS8870 utilization)
6) Forecasted MSU / MIPS measurement (correlated with real DB2 v10 reports) for each enhanced DB2 v10 capability, feature by feature
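For topic 4, the weekly utilization-versus-allotment summary could be sketched as below. This is a minimal illustration only: the allotted capacity and the weekly peak figures are invented, not taken from any real report.

```python
# Hypothetical weekly peak MSU versus the capacity allotted at purchase.
# All numbers here are invented for illustration.
ALLOTTED_MSU = 400  # assumed MSU capacity from the VWLC / zCAP purchase

weekly_peak_msu = {"week-01": 310, "week-02": 285, "week-03": 342}

for week, peak in weekly_peak_msu.items():
    used_pct = 100.0 * peak / ALLOTTED_MSU
    headroom = ALLOTTED_MSU - peak          # untapped processing power
    print(f"{week}: peak {peak} MSU ({used_pct:.1f}% of allotment), "
          f"{headroom} MSU still unutilized")
```

The real weekly peaks would have to come from measurement tooling; the point is only that, once MSU figures exist, the "how much capacity remains" line is simple arithmetic.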

Thanks for your patience

SRICOBSAS
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Tue Mar 15, 2016 10:33 pm
Reply with quote

Summary of my lengthy post above.

I need to give corporate training on the topics below. Looking for good links / materials.

1) Measuring z/OS DB2 v10 performance (in terms of MSUs / MIPS)
2) Measuring storage disk performance (DS8870 and DS8000 disks), again in terms of MSUs / MIPS
3) Generating weekly DB2 and storage disk activity reports and, in parallel, MSU / MIPS reports
4) Correlating MSU / MIPS reports with the original MSU / MIPS allotted as part of the DB2 purchase (VWLC and zCAP licencing), clearly showing how many MSUs / MIPS were used in a week and how much processing capacity still remains (i.e. untapped / unutilized processing power)
5) Giving high-level reports on the zEnterprise Analytics System (focusing on DS8870 utilization)
6) Forecasted MSU / MIPS measurement (correlated with real DB2 v10 reports) for each enhanced DB2 v10 capability, feature by feature

Constraint 1: No real DB2 v10 setup available.

Constraint 2: No real DS8000 or DS8870 or zEnterprise Analytics Server setup available.

The customer is serious about going for a new MLC purchase of a new z/OS (and a new DB2 v10). The customer is ignoring my pleas to expand the current licencing rather than going for a new one (a rich customer!!).

Even though the customer is willing to go for the expensive option of a new MLC altogether, he still wants me to find ways to reduce licence costs. The customer is asking me to compare different MLC licencing schemes (he is more interested in zCAP, collocated licencing, rather than VWLC). Strange.

The customer wants to know the size of his current business. He wants me to train his staff on MSU / MIPS measurement (specifically for DB2 v8 and the Hitachi Lightning 9900). If the customer sees very little peak MSU then he might go for LPAR soft capping (still finding ways to reduce the new licence purchase, and still stubbornly refusing to expand the current z/OS licence).

The customer wants me to find simulators (or possibly rented DB2 v10 environments, for temporary periods) so that he can get a feature-wise DB2 v10 MSU / MIPS report (i.e. an MSU / MIPS report for pureXML, an MSU / MIPS report for universal table spaces, an MSU / MIPS report for FlashCopy on DS8000 / DS8870 storage, etc.). He says this will influence his purchase of a new z/OS & DB2 MLC.

No date is set for the training, but I need to be ready within 1.5 months.

Again thanks for your patience.
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Fri Mar 18, 2016 3:36 am

So I researched a bit further and googled Omegamon DB2 performance statistics.

One Omegamon screen titled "Object Analysis" throws up measurement factors like "Interval Time", "Interval Elapsed", "Total I/O" and "Total Getpage". A lot of other statistics are shown which are dependent on "Getpage":

% of Getpage
% of IO
Getpage per RIO
Getpage
Sync Read
Pre Fetch


The above high-level statistics are shown for core databases, including catalog databases like DSNDB06.

So the Object Activity Summary and database-wise statistics look something like this:

Object Activity Summary: Total Getpage= 44

Database level statistics

Database: DB2PM
% of Getpage: 45.4%
Getpage : 20

(....more object activity statistics)

Database: DB2PM DPROPR DSNADMB DSNDB06
% of Getpage: 45.4% 27.2% 13.6% 13.6%
Getpage : 20 12 6 6

So 20+12+6+6= 44 = Total Getpage
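The "% of Getpage" column can be re-derived from the raw Getpage counts above; a quick sketch, using the exact figures from the screen:

```python
# Per-database Getpage counts from the Object Analysis screen above.
getpage = {"DB2PM": 20, "DPROPR": 12, "DSNADMB": 6, "DSNDB06": 6}

total = sum(getpage.values())    # matches Total Getpage = 44
for db, gp in getpage.items():
    # Note: Omegamon shows 45.4% for DB2PM, which suggests it truncates
    # rather than rounds (20/44 is 45.45...%).
    print(f"{db}: Getpage {gp}, {100.0 * gp / total:.1f}% of Getpage")
```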

A bit more digging shows me that Omegamon measures DB2 thread activity by "Elapsed Time":

Plan Name
Author
Elapsed Time
CPU Rate
In-DB2 Elapsed time

For example:

Plan Name : KO2PLAN
Author : OMPEUSER
Elapsed Time : 13d 14h
CPU Rate : 0.0
In-DB2 Elapsed time : 20m 43s


The above 5 factors are expanded further (bringing in other factors like Wait Time)

Elapsed Time
Plan Name
Package DBRM (Unicode)
CP CPU Rate
Thread Status
In DB2 CP CPU Time
In DB2 Time
Wait Time
Getpage
Updates
Commits
Interval Start

For example

Elapsed Time : 1 day
Plan Name : DB2PM
Package DBRM (Unicode) : DGO@PC1
CP CPU Rate : 0.0
Thread Status : NOT-IN-DB2
In DB2 CP CPU Time : 00:00:45.456
In DB2 Time : 00:00:50.196
Wait Time : 00:00:01.248
Getpage : 712035
Updates : 0
Commits : 105781
Interval Start : 07/12/11 12:34:03

(there's more......)

Elapsed Time : 1 day 16:40:38.8
Plan Name : DB2PM KO2PLAN
Package DBRM (Unicode) : DGO@PC1 FPE@WR2C
CP CPU Rate : 0.0 0.0
Thread Status : NOT-IN-DB2 NOT-IN-DB2
In DB2 CP CPU Time : 00:00:45.456 00:00:15.756
In DB2 Time : 00:00:50.196 00:07:01.453
Wait Time : 00:00:01.248 00:06:43.423
Getpage : 712035 108054
Updates : 0 32016
Commits : 105781 16808
Interval Start : 07/12/11 12:34:03 07/12/11 12:34:03

Bringing in statistics for an "IN-SQL-CALL" thread

Elapsed Time : 1 day 16:40:38.8 00:02:20.6
Plan Name : DB2PM KO2PLAN DYNSELP1
Package DBRM (Unicode) : DGO@PC1 FPE@WR2C DYNSEL04
CP CPU Rate : 0.0 0.0 0.0
Thread Status : NOT-IN-DB2 NOT-IN-DB2 IN-SQL-CALL
In DB2 CP CPU Time : 00:00:45.456 00:00:15.756 00:00:00.63
In DB2 Time : 00:00:50.196 00:07:01.453 00:01:52.630
Wait Time : 00:00:01.248 00:06:43.423 00:00:01.084
Getpage : 712035 108054 21
Updates : 0 32016 0
Commits : 105781 16808 0
Interval Start : 07/12/11 12:34:03 07/12/11 12:34:03 07/12/11 12:34:03

So the key Omegamon measurement factors are:

Getpage
Elapsed Time
In DB2 CP CPU Time
In DB2 Time
Wait Time

Maybe there are more, but the above are the ones I have patience for. I haven't googled Strobe so far (Compuware's Strobe was a performance tool I used long ago).

Another key observation is that the measurement factors change if the plan / package is actually in a SQL call. So, for example, the "DYNSELP1" statistics might change if the status were "NOT-IN-DB2" (will need to google further).

And there are a lot of DB2 thread statuses:

www.ibm.com/support/knowledgecenter/SSUSPS_5.2.0/com.ibm.omegamon.xe.pe_db2.doc_5.2.0/ko2ci/ko2ci00084.htm

So let me now try getting these key Omegamon statistics closer to my goal: MSU / MIPS measurement.

Googled a bit on MSU

en.wikipedia.org/wiki/Million_service_units

"A million service units (MSU) is a measurement of the amount of processing work a computer can perform in one hour."

This document

public.dhe.ibm.com/eserver/zseries/zos/wlm/Capping_Technologies_and_4HRA_Optimization.pdf

gives us:

• Average consumption in LPAR in the last 4h (rolling)
• MSU ≡ "Million Service Units per hour" = Service Units (per second) ∙ 3600 / 1,000,000 (I could not fully interpret this at first; it appears to convert a per-second service-unit rate into millions of service units per hour)
• Tracked as an array of 48 intervals of 5 min = 4h

There is a graph below which compares MSU against time (hours). The graph shows roughly "100 MSUs at 1 hour, 300 MSUs at 2 hours....." (approximate rounded-off values).
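The "array of 48 intervals of 5 min" bookkeeping can be sketched as a rolling window. This is my own reading of the slide, not IBM's implementation, and the MSU samples are invented:

```python
from collections import deque

# 48 buckets of 5 minutes each = the 4-hour rolling window (4HRA).
buckets = deque(maxlen=48)

def record_interval(msu):
    """Record one 5-minute MSU sample and return the current rolling average."""
    buckets.append(msu)                  # the oldest bucket falls off at 48
    return sum(buckets) / len(buckets)

for sample in (100, 120, 110, 130):      # invented 5-minute MSU samples
    print(f"4HRA after a {sample} MSU interval: {record_interval(sample):.1f}")
```

The point of the window is that sub-capacity licence charges key off this rolling 4-hour average rather than instantaneous peaks, which is why soft capping targets it.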

Nothing much else in the above PDF, but it triggered an idea in me.

A bit more digging tells me that MSU info can be had from RMF (RMF TYPE70 and TYPE72 records). Some key RMF terms:

RMF, TYPE70, TYPE72, SMF70WLA, R723MADJ. And lastly, a very, very important term: SU_SEC.

SU_SEC is the key to MSU. This is the only factor I can see with which I can correlate allotted MSU / MIPS with Omegamon / Strobe stats.
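My working assumption (not an official IBM formula, just how I read the pieces so far): SU_SEC gives the service units one CPU second is worth on a given processor model, so measured CPU time can be converted to a service-unit rate and then to MSUs via the 3600 / 1,000,000 factor from the PDF. A sketch, with an invented SU_SEC value:

```python
SU_SEC = 12_000.0   # hypothetical service units per CPU second (model-specific)

def msu_rate(cpu_seconds, interval_seconds):
    """Average MSU consumption over an interval, given CPU seconds burned."""
    su_per_second = cpu_seconds * SU_SEC / interval_seconds
    return su_per_second * 3600.0 / 1_000_000.0   # SU/s -> million SU per hour

# e.g. a DB2 job burning 300 CPU seconds over a 1-hour measurement interval
print(f"{msu_rate(300.0, 3600.0):.2f} MSU")
```

If this interpretation holds, it is the bridge from Omegamon's "In DB2 CP CPU Time" figures to an MSU number; the real SU_SEC for a machine would come from the RMF records themselves.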

....................To be continued

visionplus.trams@outlook.com

Wordpress link

visionplustrams.wordpress.com/2016/03/12/zos-db2-performance-tuning-tools/

LinkedIn post link:

www.linkedin.com/pulse/need-help-prepare-training-ppt-zos-db2-performance-msu-development

IBMMainframes Link

ibmmainframes.com/about64937.html

IDUG post

www.idug.org/p/fo/et/thread=45803

Database-ITTOOLBOX Link

database.ittoolbox.com/groups/technical-functional/Database-Development-l/training-ppt-on-db2-performance-measurement-in-terms-of-msus-mips-5877521
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Sat Mar 19, 2016 10:41 pm

Hi,

I got some information from this link:

www.storagecommunity.org/easyblog/entry/system-z-software-pricing-at-ibm-enterprise2014

Alan Radding and David Chase have delivered some ppts on different licencing schemes for the purchase of z/OS and DB2.

Looks like I might soon be closer to my PowerPoint on z/OS DB2 purchasing licences and "Return On Investment" summary reports.

Thanks & Regards
Email:

infra.app.supp.se@gmail.com
visionplus.trams@outlook.com

SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Mon Mar 21, 2016 6:25 am

Hi,

An IBMer has asked the same question as me (about conversion of Omegamon performance timings to MSU / MIPS):

www.linkedin.com/groups/2318931/2318931-6117367620002930688

I am continuing to research and am following all other articles in this regard.

Thanks & Regards


Storage Community links

www.storagecommunity.org/easyblog/entry/system-z-software-pricing-at-ibm-enterprise2014
www.storagecommunity.org/component/easyblog/entry/need-storage-cost-comparison-analysis-ds8000-ds8780?Itemid=255
SRICOBSAS

New User


Joined: 07 Dec 2015
Posts: 19
Location: India

PostPosted: Mon Mar 21, 2016 6:45 pm

So this is what I have got so far: mostly all shops use DB2 job completion times as a performance indicator. However, as stated earlier, these "job time" statistics do not give our customer a summarized "Return-On-Investment" report (as stated earlier, our customer needs DB2 activity in terms of MSU / MIPS so that he can directly correlate it with the licence purchase cost). In fact this customer is already running tools which report on DB2 job completion times (and again, he doesn't want me to even think of them).

As stated earlier, I am continuing to google the MSU / MIPS calculation utilities (SCPT / SCRT). However, I haven't made much progress.

Some of my LinkedIn friends are willing to help; they say they can run some real DB2 code (could be any DB2 stuff: stored procedures / triggers / cursors / a sequence of database-intensive SQL queries, etc.). My friends state that if I give them DB2 code to run, they can execute it on their DB2 subsystems and in parallel run Omegamon / Strobe / BMC MainView, etc. (and if I give them guidance they are ready to run RMF as well; most of my friends do not know anything about RMF). They can give me performance statistics in terms of time taken.

Although these friends call themselves "friends", they are not willing to give me job completion timing statistics on the existing DB2 code they already use in their organizations (these friends say that "professional ethics" overrides "friendship").

Seriously though, the above statement was a joke. I am fully aware of what would happen if you went out of your way to put spectacles on my face so that I got to take a look at how your z/OS shop works (if anyone approached me with such a preposterous idea I would cut off the friendship immediately!!!). So I am not complaining, as long as I manage to suppress my selfish and lazy alter ego (I don't want my selfishness and laziness to force my friends to quickly get materials to complete my ppt and put them in a hot spot). I will have to work hard to complete my ppt, and I will burn the midnight oil for as long as it takes.

So, back to our example DB2 code which I need to generate and send to my friends. Now I have some flexibility here. I can tweak the DB2 code so that in between I can display how much actual storage would be occupied after a particular DDL / DML statement (the DB2 code could be stored procedures / triggers / cursors / a sequence of DML statements (select / insert / update / delete) and DDL statements (create / alter / drop), etc.). In parallel I can put in display statements that give me execution times.

So let me make a long story short here with a stepwise example:

1) I create a set of tables. Since I created them I would know how many bytes a record would take.

2) I can decide how many records I want to insert (could be a hundred / ten thousand / one lakh (100,000), whatever). Again, I can calculate the current space occupied with the insertion of each record.

3) So let's say I have a table with 2 VARCHAR fields totalling, say, VARCHAR 100 (some combination like "NAME VARCHAR(20), ADDRESS VARCHAR(80)", so 20 + 80 = 100, etc.). So if I insert 2 strings with an exact total length of 100, then each record occupies 100 data bytes (ignoring row overhead such as the length field each VARCHAR column carries). Ten records will occupy 10 * 100 = 1000 bytes, and so on.

4) Now I create a stored procedure with some standard parameters (ISOLATION LEVEL / LIBRARY / ISO / ENCODING, etc.).

5) In this stored procedure I put in lots of insert statements. Before and after each insert statement I put in displays that tell me the current size of the table (based on our logic above) and the current time.

6) I run the stored procedure, and this way I get some "space / bytes occupied" and "time taken to complete" statistics.

7) I strongly suspect that MSU / MIPS is somehow related to the factors "space / bytes occupied" and "time taken to complete"

8) Meanwhile I can google to find out what other statistics I can incorporate in my stored procedures (space occupied and time to complete are the primary ones I can think of right away)
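Steps 1-3 above can be modelled in a few lines. This is a toy model only: real DB2 rows carry overhead (for example, a 2-byte length field per VARCHAR column, plus page and row headers), so actual space would be somewhat larger than this lower bound.

```python
# NAME VARCHAR(20) + ADDRESS VARCHAR(80), every row filled to full width.
ROW_DATA_BYTES = 20 + 80

def space_after(rows_inserted):
    """Data bytes occupied after inserting this many full-width rows."""
    return rows_inserted * ROW_DATA_BYTES

for n in (100, 10_000, 100_000):   # a hundred / ten thousand / one lakh
    print(f"{n} rows -> {space_after(n)} data bytes")
```

The stored procedure's display statements would effectively be printing `space_after(rows_so_far)` alongside a timestamp after each insert.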

So, until my example DB2 stored procedure completes and until my friends send me the Omegamon / Strobe / BMC MainView statistics, I will try to make some headway with MSU / MIPS research. If I hit the jackpot before my friends send me the statistics, I can convert the timing and space statistics immediately to MSU / MIPS. If my friends pour out their statistics before I hit the jackpot, then at least I will have some statistics at hand.

Either way I will feel elated that I am trying to make some progress.

Anyway, I need to take out some time and think of good DB2 code. Additionally, I can think of some good COBOL / PL/I / SAS / REXX / JCL code to get some more statistics.

Will be back soon.
