vasanthz
Global Moderator
Joined: 28 Aug 2007 Posts: 1742 Location: Tirupur, India
In our shop we have had SVC dumps enabled since inception. Occasionally an SVC dump fills up real storage and slows down CICS transactions and other workloads.
These SVC dumps are sporadic, occurring when, for example, a DB2 RUNSTATS fails or a third-party software product fails.
We have had about 30 SVC dumps in the past 3 months and have not looked at the dumps or sent them to anyone. We generally know the reason for the failures and have not needed to look at the dump data sets.
Do you have SVC dumps enabled in your shops, or are they needed only on an ad-hoc basis?
Regards,
Vasanth.S
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
It's hard to give you good advice about this.
I used to work for an [unnamed] ISV. As you can imagine, SVC dumps were a major problem on systems where products were alpha- and sometimes beta-tested. There the goal was to get rid of them fairly quickly, which sometimes hurt those of us who wanted to analyze them. We often had to make our own copies, quickly, before automation killed the real dump.
Now it seems to me you are describing two problems.
The first problem is a performance hit while the dump is being prepared for writing. Here I don't have any advice I trust, but look at the storage limits for hiperspaces; I suspect that's your issue.
The second problem is what to do with the dumps. That's much harder. Someone has to screen the darn things promptly and delete any trash. There's not enough there for automation, so it has to be done manually.
vasanthz
Hi Steve, thank you for your thoughts.
Quote:
Look at storage limits for hiper spaces
I had not heard of hiperspaces before. I thought an SVC dump just dumped all the virtual storage into a data set; where does hiperspace come into play?
Regards,
Vasanth.S
steve-myers
vasanthz wrote:
I have not heard about hiper spaces before. I thought an SVC dump just dumped all the virtual memory into a dataset, where does hiper space come into play.
Well, yes, but first SVC dump captures an image of the address space to a hiperspace and then writes the dump data set from that image. Before this technique, the address space was frozen while the dump data set was being written; the theory is that capturing to a hiperspace first frees the non-failing tasks up more quickly.
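For anyone who wants to see what capture limit is in effect on their own system, the current dump options (including the SDUMP MAXSPACE value) can be displayed from the console with the DISPLAY DUMP command; the exact output format varies by release:
Code:
D D,O
The response lists the SDUMP options currently in effect, MAXSPACE among them.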
vasanthz
This link www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieag100/slpsetp.htm
describes the MAXSPACE parameter:
Quote:
SDUMP,MAXSPACE=xxxxxxxxM
Specifies the maximum amount of virtual storage that SVC dump can use to capture volatile virtual storage data, summary dump data, and component-specific data before writing the dump to DASD. The default value is 500 megabytes. The value that can be specified may range from 1 to 99999999 (with, or without, an M suffix). The new value takes effect immediately. If the value specified is lower than the space used, SVC dump will not continue to capture data.
I looked at our SVC dump data sets and they are at most 1.5 GB. So if we set the MAXSPACE parameter to 2000MB, would it help avoid performance issues due to paging?
The current setting is
Code:
SET,SDUMP,MAXSPACE=6000M,BUFFERS=500M,Q=NO
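If the team does decide to lower the limit, MAXSPACE can be changed dynamically with the CHNGDUMP command (CD is the short form); the 2000M value here is only illustrative:
Code:
CD SET,SDUMP,MAXSPACE=2000M
Per the documentation quoted above, the new value takes effect immediately, and an M suffix is optional.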