Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
I have a VB file to which millions of records are written. The file's data is now redundant. Will making the file DD DUMMY save any CPU time?
Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
I am planning to make the file DD DUMMY in the JCL, without removing the file-write code in the COBOL program.
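Something like this, as a minimal sketch (the DD name, data set name, and attributes here are made up for illustration):

Code:
//*  Before: records are physically written
//OUTFILE  DD DSN=PROD.REDUNDANT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(500,50),RLSE),
//            DCB=(RECFM=VB,LRECL=1000)
//*  After: same DD name, but the writes are discarded
//OUTFILE  DD DUMMY

The program still OPENs, WRITEs, and CLOSEs exactly as before; the access method simply throws the records away.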
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Yes, but not as much as removing the processing that creates the file, which is no longer used.
Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
Thanks for the quick response.
Rohit Umarjikar
Global Moderator

Joined: 21 Sep 2010 Posts: 3083 Location: NYC,USA
Quote:
I am planning to make the file DD DUMMY in the JCL, without removing the file-write code in the COBOL program.

You should get rid of the step that produces this data set, if it's no longer needed; making it DD DUMMY in another step is just a half fix.

Quote:
Will making the file DD DUMMY save any CPU time?

Why not try it?
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
Quote:
You should get rid of the step that produces this data set, if it's no longer needed; making it DD DUMMY in another step is just a half fix.

The OP was talking about DUMMYing the data set in the very step where it is being written. And the suggestion made was to possibly remove the actual file processing in the program, besides dummying/removing the output DD, with the remaining program functionality (if any) untouched.
Rohit Umarjikar
Global Moderator

Joined: 21 Sep 2010 Posts: 3083 Location: NYC,USA
I know what the TS is asking here. My reply isn't different from what you said; that is why I say DUMMYing the output file is only a half fix, compared to not producing it at all when it is no longer required.
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
Your response seemed to suggest that the OP creates the data set in STEPA and is trying to DUMMY it in STEPB, which is not the case here.
vasanthz
Global Moderator

Joined: 28 Aug 2007 Posts: 1746 Location: Tirupur, India
You would see a difference in the EXCP (EXecute Channel Program) counts, as there would be no writes happening to the DUMMY file.
The TCB time would show a decrease, since no records are written, and there would be a minuscule difference in SRB time, as the device need not be allocated when the file is DUMMYed out.
As for the CPU, as Bill mentioned, the variance from the I/O would be small compared to removing the logic from the program itself.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1050 Location: Richmond, Virginia
The half fix may be the safest.
I once worked at an agency that required every function to be retested if any change was made to a program. Hence, some simple changes were simply never made.
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Yes, there can be several stages. The DD DUMMY is a painless way of not actually doing the physical IOs, without the program requiring change, or being somehow changed by the now non-event - DD DUMMY is entirely transparent to a COBOL program. An easy first stage, to be done as soon as possible.
I was once told, and have never verified, that putting buffers on a DD DUMMY "allows the data to be ignored faster".
I think that all the CPU use within the IO routines stays, and the records are just finally not written. I think. I don't know.
Changing the program itself has at least two obvious potential stages. First, remove the IO statements (and any checking associated with them). This is, or should be, transparent to the "business logic" in the program, but it is a much larger change than just the DD DUMMY.
Then there is the clearing out of the processing associated with the creation of the records (again, this can be staged). You may arrive at a point where code still remains, but it is bound to other processing in such a way that there is too great a risk of unintended consequences.
There is also always a risk in leaving redundant code operating: it can trip up future changes, analysis of the program, and problem determination/impact analysis with false impressions.
Documenting this externally, so that the next person along will be aware, can mitigate the risks.
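For the buffers point above, a minimal JCL sketch (the DD name and DCB values are hypothetical):

Code:
//OUTFILE  DD DUMMY,DCB=(RECFM=VB,LRECL=1000,BUFNO=10)

As said, I have never verified whether the extra buffers make the ignoring any faster.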
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
Well, I did try it. I thought I saw 50 million mentioned in this thread, but all I can actually find now is "millions." I couldn't get enough storage for 50 million, but I did try it with 10 million.
These were run under Hercules: 18.45 seconds to write 10,000,000 records to a data set, 1.06 seconds to "write" them to DD DUMMY. The loop overhead was 0.10 seconds.
The program wrote a fixed record, so there was no record-preparation time; a real program will show higher times.
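Netting out the 0.10-second loop overhead, that works out to roughly (18.45 - 0.10) / 10,000,000 ≈ 1.8 microseconds of CPU per real PUT, versus (1.06 - 0.10) / 10,000,000 ≈ 0.1 microseconds per PUT to DD DUMMY: about a 19:1 reduction in the I/O path for this fixed-record case.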
vasanthz
Global Moderator

Joined: 28 Aug 2007 Posts: 1746 Location: Tirupur, India
Nice research, Steve; I wish there was a like button. How did you find the loop overhead? Just curious.
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
Code:
         L     3,COUNTER          SAME LOOP COUNT AS THE PUT LOOP
         CPUTIME STORADR=STIME,CPU=MIC...   CPU TIME AT START, MICROSECONDS
         BCT   3,*                EMPTY LOOP: DECREMENT R3, BRANCH TO SELF
         CPUTIME STORADR=ETIME,CPU=MIC...   CPU TIME AT END
         LG    3,ETIME            64-BIT END TIME
         SG    3,STIME            MINUS START TIME = LOOP CPU, MICROSECONDS
         STG   3,TEMP
         LM    0,1,TEMP           INTO EVEN/ODD REGISTER PAIR FOR DIVIDE
         D     0,=F'1000'         MICROSECONDS TO MILLISECONDS
Since a BCT was also used for the PUT loop, the overhead times had to be basically the same.
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
I'd propose several stages- Change the data set name on the existing output data set, so if some as yet unknown task attempts to use the current data set it can be detected and fixed.
- Scan SMF 14s and 15s, or perhaps just 14s, for use of the current data set. This can extend back as far as convenient. Collecting data set names in these records is simple as they are at a fixed position in the records.
- If there has not been any detected problems from the first bullet go with the DD DUMMY idea.
- If no issues have been detected look at removing the code that creates the data set.
|
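A minimal sketch of the SMF scan stage using IFASMFDP (the data set names here are hypothetical, and many shops dump SMF to history data sets rather than reading the live SYS1.MANx directly):

Code:
//SMFDUMP  EXEC PGM=IFASMFDP
//INDD1    DD DISP=SHR,DSN=SYS1.MAN1
//OUTDD1   DD DSN=MYID.SMF.TYPE1415,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INDD(INDD1,OPTIONS(DUMP))
  OUTDD(OUTDD1,TYPE(14:15))
/*

The extracted records can then be searched for the old data set name at its fixed offset.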
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
This is the PUT loop from the Assembler listing. The PUT macro is expanded.
Code:
                                 53 LOOP     PUT   OUTPUT,OUTREC
000084 4110 C9B4        009B4    55+LOOP     LA    1,OUTPUT
000088 4100 CB31        00B31    56+         LA    0,OUTREC
00008C 1FFF                      57+         SLR   15,15
00008E BFF7 1031        00031    58+         ICM   15,7,49(1)
000092 05EF                      59+         BALR  14,15
000094 4630 C084        00084    60          BCT   3,LOOP
I also located the code the PUT macro calls.
Code:
PUTDUMMY NOPR  0
         NOPR  0
         NOPR  0
         LR    1,0
         BR    14
I still remember my reaction when I saw the SLR/ICM business in the PUT macro replacing the OS/360 L 15,48(0,1). The SLR/ICM code works in an AMODE 31 program, though I didn't know that was the reason when I first saw the code.
I have no idea what the purpose of the 3 NOPR instructions is. The cynic in me says they're there to sell more hardware.
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
When I prepared the PUT subroutine code, I screwed up. The 3 NOPR 0 instructions are not true NOPRs. Well, for all intents and purposes they are NOPRs, but more like slow NOPRs. A true NOPR is BCR 0,0; these instructions are BCR 15,0, which performs serialization. You can read about the difference in Principles of Operation.
I won't try to guess why IBM used them, unless my cynic theory is correct!