
Tuning the job using Multiple VSAMs


IBM Mainframe Forums -> JCL & VSAM
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Tue Jun 18, 2013 3:07 pm

Hi All,

I need your views/inputs/ideas on this, please.

I need to tune a job that uses multiple VSAM files and runs several programs to process them as per the business needs. This job runs for almost 90-130 minutes, depending on the volume of data to process.

I am in the process of tuning this job.

What I am doing is splitting all the VSAMs into 30 partitions. The challenge here is that we do not have the same key in all the VSAM files; however, the field used as the key for the master file is present in almost all the files. Hence I decided to split the files based on that field only, which is a file number defined as S9(13).

I am approaching it like below:

1) Delete/define 30 partition VSAM files
2) Split the main VSAM and load it into the partitions
3) Use the partition files in 30 parallel jobs
4) The same process will occur on the next schedule

I will be happy to receive any inputs/corrections/suggestions on this, please.

Questions: Can I split the VSAM directly into my partitioned VSAMs?
What is the best idea to tune this job, please, so that it takes minimum CPU/run time to complete?
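
For step 2, something like this IDCAMS REPRO per partition is what I have in mind. This is a rough sketch only; the dataset names and key ranges are placeholders, and the partition clusters must already be defined:

Code:
//SPLIT01  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* COPY ONE KEY RANGE OF THE MASTER KSDS INTO PARTITION 01      */
  /* DATASET NAMES AND THE 13-DIGIT KEY VALUES ARE PLACEHOLDERS   */
  REPRO INDATASET(PROD.MASTER.KSDS)    -
        OUTDATASET(PROD.MASTER.P01)    -
        FROMKEY(0000000000000)         -
        TOKEY(0433333333333)
/*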

Thanks
Sumit
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8796
Location: Welsh Wales

PostPosted: Tue Jun 18, 2013 3:33 pm

Is the problem with one VSAM file, several, or every VSAM file?

You will need to analyse the usage of the VSAM files before doing anything else: is it a sequential process, a random process, or a skip-sequential process? Each method has its own optimisation techniques.

What are the perceived bottlenecks: is it I/O, is it CPU, is it something else? If the bottleneck is CPU then you can probably try anything you like and get no discernible improvement. Speak to your capacity planning team to see if they can help find where the problem lies.

If it is I/O, maybe take a look at the implementation of SDS across the VSAM file(s).

You may also speed up processing by sorting the input file(s) into the correct order.
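
As a sketch only, a pre-sort step could look like this, assuming the key is a 13-byte zoned-decimal field at the start of the record (dataset names, positions and space values are placeholders):

Code:
//SORTTXN  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//*        PLACEHOLDER DATASET NAMES
//SORTIN   DD DSN=PROD.TRANS.INPUT,DISP=SHR
//SORTOUT  DD DSN=PROD.TRANS.SORTED,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//SYSIN    DD *
* SORT ON THE 13-BYTE ZONED-DECIMAL KEY, ASCENDING
  SORT FIELDS=(1,13,ZD,A)
/*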
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8700
Location: Dubuque, Iowa, USA

PostPosted: Tue Jun 18, 2013 4:36 pm

Why do you think that splitting into 30 VSAM files will "tune" your job? Do you have any tools to analyze your VSAM file access, such as STROBE or VIO+? Questions that come to mind include:

- How many records in the VSAM files?
- Are the VSAM files being updated or only read?
- If you are updating the VSAM files, are you doing only inserts or are you also rewriting records?
- If you are doing inserts, what is the pattern -- only end of file, only beginning of file, or scattered through the file?
- How are the CI and CA splits for the files?
- Have you validated the record length, data component CI size, and index component CI size?
- Have you addressed the issues expat raised?
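
Most of those figures can be pulled with an IDCAMS LISTCAT, for example (the dataset name is a placeholder):

Code:
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* ATTRIBUTES, STATISTICS AND ALLOCATION FOR ONE CLUSTER */
  LISTCAT ENTRIES(PROD.MASTER.KSDS) ALL
/*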
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Tue Jun 18, 2013 5:27 pm

Hi Robert and Expat,

Thanks for your reply!!

I will get back to you with the answers to your questions soon (perhaps by tomorrow morning IST). I just got stuck with a critical issue here in production. Thanks.
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Wed Jun 19, 2013 12:53 pm

Hi Expat, please find my answers below.

Is the problem with one VSAM, several VSAM, or every VSAM file? You will need to analyse the usage of the VSAM files before doing anything else: is it a sequential process, random process, or skip-sequential process? Each method has its own optimisation processes.
--> All VSAMs used in this job are KSDS and accessed dynamically.

What are the perceived bottlenecks: is it I/O, is it CPU, is it something else?
--> It takes a very long time to process the files (especially in 2 particular core processing steps).

If the bottleneck is CPU then you can probably try anything you like and get no discernible improvement. Speak to your capacity planning team to see if they can help find where the problem lies.
--> I don't think it's a CPU problem, since it is always consuming CPU time and the SIO and EXCP counts are always moving, so it's active and running.

You may also speed up processing by sorting the input file(s) into the correct order.
--> They are all KSDS.
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Wed Jun 19, 2013 1:19 pm

Why do you think that splitting into 30 VSAM files will "tune" your job? Do you have any tools to analyze your VSAM file access such as STROBE or VIO+?
--> No, I don't have one. I am thinking of the split approach since we have many sets of jobs in the system tuned exactly this way: the applications run as 12-, 30- or 60-way partitioned jobs and time is saved that way. So if the same process runs in parallel in parts/partitions, I think it will finish earlier than one single process.

- How many records in the VSAM files?
--> 15,940,069 (it depends on the transactions coming in every day, but it stays somewhere around this 8-digit figure).

- Are the VSAM files being updated or only read?
--> They are read, written, rewritten and deleted (the program matches the accounts against some other VSAM/sequential files and does the operation needed).

- If you are updating the VSAM files, are you doing only inserts or are you also rewriting records?
--> Doing an insert for a new record, a rewrite for any change to an already existing record, or a delete if the record is no longer needed.

- If you are doing inserts, what is the pattern -- only end of file, only beginning of file, or scattered through the file?
--> It is scattered (i.e. wherever it finds the record, it updates it if needed).

- How are the CI and CA splits for the files?
- Have you validated the record length, data component CI size, and index component CI size?
--> I am pasting the statistics below:
Code:
  KEYLEN----------------31     AVGLRECL-------------227     BUFSPACE-----------20480     CISIZE--------------8192
  RKP--------------------0     MAXLRECL-------------227     EXCPEXIT----------(NULL)     CI/CA-----------------90
  STRIPE-COUNT-----------1
  SHROPTNS(2,3)      SPEED     UNIQUE           NOERASE     INDEXED       NOWRITECHK     UNORDERED        NOREUSE
  NONSPANNED      EXTENDED     EXT-ADDR
STATISTICS
  REC-TOTAL-------15991894     SPLITS-CI--------------1     EXCPS------------9554815
  REC-DELETED-----------10     SPLITS-CA--------------0     EXTENTS---------------13
  REC-INSERTED--------2575     FREESPACE-%CI---------20     SYSTEM-TIMESTAMP:
  REC-UPDATED-------928675     FREESPACE-%CA---------30          X'CB88C27263725B92'
  REC-RETRIEVED--377811450     FREESPC-------2326126592

These statistics are from today's version of one of the VSAM files; I can let you know these statistics for all the VSAM files. All files are deleted/defined every day and loaded from the after-cycle backup.

Have you addressed the issues expat raised?
--> I provided the answers to expat above.

In total, 17 VSAM files are used in the job; 10 of them have a huge volume of data, while the other files are smaller and contain common data for the processing.

Please suggest whether I can tune this job.

Thanks

dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19243
Location: Inside the Matrix

PostPosted: Wed Jun 19, 2013 6:57 pm

Hello,

The job may be "tunable" . . . From what you have posted, there is not much for someone to work with.

If you have 2 main problem processes, I suggest you analyze why they take so much more resource to run.

Your organization may need to bring in a VSAM expert to help for a start, and to train you/others to do more.
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Wed Jun 19, 2013 7:19 pm

Ok..Thanks
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8700
Location: Dubuque, Iowa, USA

PostPosted: Wed Jun 19, 2013 7:42 pm

First, it is not at all clear that your VSAM files need "tuning". 90 to 130 minutes to process 16 million records is pretty reasonable based on my experience. If the problem is that your processing fails to fit in the batch window, then your application may need an entire redesign, rather than "tuning" of the VSAM files.

Second, "tuning" VSAM files is not what you are doing. Splitting the records into multiple files does not "tune" the VSAM processing and may actually INCREASE the amount of time required to process, since additional time is then required to determine which of the files to access the record in.

Third, if you really wanted to tune your VSAM file usage, you would be looking at using BLSR, and reviewing the JCL to make sure the data and index buffers are sufficient, changing your free space percentages to something reasonable, and so forth. If you are deleting / defining the VSAM each day, 20% and 30% for the free space values are ludicrous -- especially since the insert count is less than .02 percent of the records.
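
As a sketch only, a daily DEFINE with no free space might look like this; the names and space values are placeholders, while the key, record, and CI sizes are taken from the LISTCAT posted above:

Code:
//DEFINE   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* FREESPACE(0 0): DAILY RELOAD PLUS <.02% INSERTS = NO NEED */
  DEFINE CLUSTER (NAME(PROD.MASTER.KSDS)        -
                  INDEXED                       -
                  KEYS(31 0)                    -
                  RECORDSIZE(227 227)           -
                  FREESPACE(0 0)                -
                  SHAREOPTIONS(2 3))            -
         DATA    (NAME(PROD.MASTER.KSDS.DATA)   -
                  CONTROLINTERVALSIZE(8192)     -
                  CYLINDERS(1000 100))          -
         INDEX   (NAME(PROD.MASTER.KSDS.INDEX))
/*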

If your site feels there is benefit in partitioning the data, then by all means do so. But do not think that partitioning the data is "tuning" the VSAM -- it is not. And do not believe that you are reducing processing time by partitioning the data.
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Thu Jun 20, 2013 1:18 am

You haven't provided the information available with LISTCAT for the index, which may be interesting.

For the dataset you have shown, in addition to Robert's comments, you seem to have a very high number of "records retrieved". If that is from just one day, it means that each record is, on average, being "retrieved" more than 20 times. The EXCP count is high in relative terms, but low enough to indicate possible multiple sequential reads of all, or large parts of, the data.

You can get someone to look at all the points Robert has mentioned, but I'd strongly suspect that if someone looks at the batch programs accessing this file (and perhaps others, if they have similar use-characteristics) they'd find some very dumb coding which would save a huge amount of resources once corrected/re-designed and successfully implemented.

I'd not expect "partitioning" to get you anywhere. The payback is going to be in finding the dumb code. If this has been running for any length of time (months, years), some people are going to feel very, very sick. Reading every record 20 times has to be wrong.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8796
Location: Welsh Wales

PostPosted: Thu Jun 20, 2013 11:53 am

It's a long long time since I've played with any VSAM tuning in earnest.

But, if my memory serves me well, isn't Dynamic skip-sequential?
If so, doesn't it clear and refresh the VSAM buffers every time the access method changes?
Ed Goodman

Active Member


Joined: 08 Jun 2011
Posts: 556
Location: USA

PostPosted: Thu Jun 20, 2013 8:29 pm

Find the IBM redbook called "VSAM Demystified". I think it's still applicable.

To make them work faster, you can add buffers to the jobs so that the index is loaded into memory; each keyed read can then be done with one fewer I/O. The trick is to pick the right number of buffers, and to use BLSR (Batch Local Shared Resources).

To get a real measurement of how much you are saving, make sure to note the EXCPs for the VSAM files before and after you make changes.

The BLSR stuff looks like this:
Code:

//*******************************************       
//* BATCH LOCAL SHARED RESOURCES (BLSR)     *       
//* BLSR VSAM BUFFER PROCESSING STATEMENTS  *       
//*******************************************       
//BLSRVA    DD DSN=WTSO.ACTUAL.VSAM.FILE,DISP=SHR
//*                                                 
//*******************************************       
//* INPUT DATASETS                          *       
//*******************************************       
//VSAMDD  DD SUBSYS=(BLSR,'DDNAME=BLSRVA')         
//*                                                 


And your program uses 'VSAMDD' in the SELECT statement.

You might want to just try it with no special BUFNI or BUFND parms and see how it goes. It might be such an improvement that everyone is happy with the default buffers.
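
If the defaults are not enough, the buffer counts can also be stated on the SUBSYS parameter itself; the values below are purely illustrative:

Code:
//* ILLUSTRATIVE VALUES -- TUNE BUFND (DATA) AND BUFNI (INDEX)
//VSAMDD  DD SUBSYS=(BLSR,'DDNAME=BLSRVA','BUFND=20','BUFNI=30')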

Be sure to increase the memory allowance for the job to make room for all those extra buffers.
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Fri Jun 21, 2013 2:54 am

Access Dynamic. Skip-sequential.

I've actually now calculated the ratio of "retrieves" to "records": 377,811,450 / 15,991,894 = 23.625184734.

In one day, each record, on average, is retrieved 23.blahblah times. For 16 million records, that has to be plain nuts. Or is it just me?

We used to use a rule-of-thumb. Random access > 5% of file, it'll be faster to process it sequentially with a "two-file match".

It is a long time since I verified it, but I certainly feel that if 2300% of the file is being accessed (only one job is alluded to, perhaps there are more, but it would be surprising if 20 sequential passes of the entire file had been "forgotten") then "something" is just plain, plain, plain, wrong.

/edit-for-clarity on

I'm not suggesting that the 23.blahblah reads are indexed, just conflating the rules we used to consider for accessing a file on a key to the idea that EACH RECORD CAN BE READ 23 TIMES A DAY!

/edit-for-clarity off

"Tuning" the VSAM file in this situation will get IOs down (to some extent) but put CPU up (as CPU is consumed managing data in "memory" much more).

To some extent? Unless the 23.blahblah accesses are one-after-the-other, there is going to be no dramatic fall in IOs (and even minimal buffering would already be masking that). If, from the 16 "other" VSAM files, this data is processed in its entirety each time, the reading of the same data "next time" is going to be the whole distance of the file away.

Even if the accesses are one-after-the-other, "tuning" should only be an absolute "interim and very short term" solution. Given that we've not seen data for the index, first thing I'd consider for "tuning" is to get rid of the freespace. No, it isn't. I'd maybe give that to someone else to look at, but I'd look at the <insert your own favourite description here> program(s).

Look at the program. Expect 90% or more drop in IOs once fixed. Dramatic reduction in elapsed and CPU as well. Cut the freespace. Then assess the potential for further benefits against time spent "tuning".

I can be wrong, of course :-)

...but from what little we've been shown...
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19243
Location: Inside the Matrix

PostPosted: Fri Jun 21, 2013 3:04 am

Hello,

If the majority of the reads are NOT sequential, I suggest you look at the code and see if it re-reads the same record over and over rather than using the already-read record. If an update needs to be done, do it after all of the same "key" has been processed.
Pete Wilson

Active Member


Joined: 31 Dec 2009
Posts: 590
Location: London

PostPosted: Fri Jun 28, 2013 4:42 pm

You're doing ~9.5 million EXCPs when there are only ~931K updates/inserts! Until I worked out that the records-retrieved figure equates to 23 reads for every record, it didn't make sense. In fact it still doesn't. As everyone has suggested, look to the application program: there is something seriously wrong there.


"All files are deleted/defined every day and loaded from the after-cycle backup" -- so is the data from this backup sorted by account number before it is loaded? Also, what creates this 'backup'? Is it just a sequential unload of the VSAM file, or is it the output of the batch process that duplicates what goes into the VSAM file?

"They read, write, rewrite and delete (the program matches the accounts in some other VSAM/seq files and does the operation needed)" -- I'd like to place a bet that there are something like 23 of these other files, and that as a record is read from each one, the program then goes to find a match and update/insert into the problem VSAM file you want to tune. Hence the 23 x the number of records being retrieved.
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Fri Jun 28, 2013 5:53 pm

Hi All,

Thanks for inputs and replies !!!

I am pasting the index info below:


Code:
INDEX ------ XXX.YYYY.ZZZZ
  IN-CAT --- CATALOG.ONLINE
  HISTORY
    DATASET-OWNER-----(NULL)     CREATION--------2013.174
    RELEASE----------------2     EXPIRATION------0000.000
    PROTECTION-PSWD-----(NULL)   RACF----------------(NO)
  ASSOCIATIONS
    CLUSTER--XXX.YYYY.ZZZZ
  ATTRIBUTES
    KEYLEN----------------31     AVGLRECL---------------0     BUFSPACE---------------0     CISIZE--------------4096
    RKP--------------------0     MAXLRECL------------4089     EXCPEXIT----------(NULL)     CI/CA-----------------12
    SHROPTNS(2,3)   RECOVERY     UNIQUE           NOERASE     NOWRITECHK     UNORDERED     NOREUSE         EXTENDED
    EXT-ADDR
  STATISTICS
    REC-TOTAL-----------9090     SPLITS-CI--------------0     EXCPS-------------516101     INDEX:
    REC-DELETED------------0     SPLITS-CA--------------0     EXTENTS----------------2     LEVELS-----------------3
    REC-INSERTED-----------0     FREESPACE-%CI----------0     SYSTEM-TIMESTAMP:            ENTRIES/SECT-----------9
    REC-UPDATED------------2     FREESPACE-%CA----------0          X'CB9417491CF48983'     SEQ-SET-RBA------------0
    REC-RETRIEVED----------0     FREESPC----------7004160                                  HI-LEVEL-RBA-----1597440
  ALLOCATION
    SPACE-TYPE------CYLINDER     HI-A-RBA--------44236800
    SPACE-PRI-------------40     HI-U-RBA--------37232640
    SPACE-SEC-------------20
  VOLUME
    VOLSER------------DVO062     PHYREC-SIZE---------4096     HI-A-RBA--------44236800     EXTENT-NUMBER----------2
    DEVTYPE------X'3010200F'     PHYRECS/TRK-----------12     HI-U-RBA--------37232640     EXTENT-TYPE--------X'40'
    VOLFLAG------------PRIME     TRACKS/CA--------------1
    EXTENTS:
    LOW-CCHH-----X'00540000'     LOW-RBA----------------0     TRACKS---------------600
    HIGH-CCHH----X'007B000E'     HIGH-RBA--------29491199
    LOW-CCHH-----X'00AA0000'     LOW-RBA---------29491200     TRACKS---------------300
    HIGH-CCHH----X'00BD000E'     HIGH-RBA--------44236799
  VOLUME
    VOLSER-----------------*     PHYREC-SIZE------------0     HI-A-RBA---------------0     EXTENT-NUMBER----------0
    DEVTYPE------X'3010200F'     PHYRECS/TRK------------0     HI-U-RBA---------------0     EXTENT-TYPE--------X'FF'
    VOLFLAG--------CANDIDATE     TRACKS/CA--------------0


I will post the other info/answers shortly. Thanks.

FYI: I was off work for a few days; I am back today, hence the delay in responding.
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Fri Jun 28, 2013 6:16 pm

Hi Wilson, good day.

Please find my answers below.

So is the data from this backup sorted by account number before it is loaded? Also, what creates this 'backup'? Is it just a sequential unload of the VSAM file, or is it the output of the batch process that duplicates what goes into the VSAM file?
--> Yes, they are loaded from just the sequential unload.

I'd like to place a bet that there are something like 23 of these other files, and that as a record is read from each one, the program then goes to find a match and update/insert into the problem VSAM file you want to tune.
--> Yes, you won :-) There are more than 17 VSAM files doing the same operation as you stated. Thanks.
thesumitk

Active User


Joined: 24 May 2013
Posts: 156
Location: INDIA

PostPosted: Fri Jun 28, 2013 6:41 pm

Below is how the input files look and the parameters used:

Code:
66 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        67 'DATASET NAME                              '                 ,
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        68 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        70 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        71 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        72 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        73 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        74 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        75 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        76 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        77 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        78 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        79 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        80 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        82 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        83 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        84 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        85 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        86 'DATASET NAME                              '
           XX             DISP=SHR,
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        87 'DATASET NAME                              '                 ),
           XX             DISP=SHR,BUFNO=50
        88 XX         DD  DSN=&SEQ&RE..SB200P&SUF1..XXXXXXXX.XXXXXXXX(+1),         00003400
           XX             DISP=SHR
        89 'DATASET NAME                              '                 ),
           XX             DISP=SHR
        90 'DATASET NAME                              '                   PPA1443  00003400
           XX             DISP=SHR,                                       PPA1443  00003500
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        92 'DATASET NAME                              '                   PPA1443
           XX             DISP=SHR,                                       PPA1443  00003500
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        94 'DATASET NAME                              '                   P129446
           XX             DISP=SHR,                                                00003500
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
        96 'DATASET NAME                              '                   P129446
           XX             DISP=SHR,                                                00003500
           XX             AMP=(AMORG,'BUFND=40','BUFNI=05')
           XX*



FYI: for a few VSAMs we are using PATHs as well.

Please suggest. Thanks.
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Mon Jul 01, 2013 1:13 pm

There is little of real benefit that you can get by "tuning" the files.

You have 17 VSAM inputs which cause your main VSAM file to be "processed" in some way.
During that processing, each record on your main VSAM file is read approximately 23 times, on average.

It would be rare that an add/change/delete process, even from 17 sources, could not operate by reading each record ONCE, at a maximum.

You need to look at the whole process, and find out where the dumb things are being done.

This is more important, and will have more impact, than "tuning" the files.

If you are close to your "window" now (no, I don't mean in the office), then "tuning" may ease things a little. But be careful. If you increase buffers, you'll increase CPU. If your site runs currently with high CPU usage, you may not see improvements in throughput, you may even see degradation.

If you fix/redesign the processing, you will get, at a conservative estimate, a 90% reduction in IO with concomitant reduction in CPU which should see elapsed times reduce significantly whatever the load balance on your machine is.

The IDCAMS LISTCAT of the index confirms no nutty mass direct reads. Beyond that, it is too truncated, and not from the same day as the data, to be more useful. However, it is probably not so much needed for now.