DD DUMMY & CPU TIME

 
IBMMAINFRAMES.com Support Forums -> Testing & Performance analysis
Author Message
Nileshkul

New User


Joined: 09 May 2016
Posts: 16
Location: India

PostPosted: Mon Dec 19, 2016 11:35 pm    Post subject: DD DUMMY & CPU TIME

I have a VB file to which millions of records are written. Now the file data is redundant. Will making the file DD DUMMY save any CPU time?

Nileshkul

New User


Joined: 09 May 2016
Posts: 16
Location: India

PostPosted: Mon Dec 19, 2016 11:37 pm    Post subject: Just to add

I am planning to make the file DD DUMMY in the JCL without removing the file-write code in the COBOL program.
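
For reference, the change in question would look something like this in the JCL. This is a sketch with made-up program, DD, and data set names; if the COBOL program does not supply full DCB attributes, they may need to be coded on the DUMMY statement:

Code:
//STEP010  EXEC PGM=MYPROG
//* Original statement, writing millions of VB records:
//*OUTFILE  DD DSN=PROD.REDUNDANT.FILE,DISP=(NEW,CATLG,DELETE),
//*            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE),
//*            DCB=(RECFM=VB,LRECL=32756,BLKSIZE=32760)
//* Replaced with DUMMY -- the OPEN/WRITE/CLOSE still execute,
//* but no device is allocated and no physical I/O is done:
//OUTFILE  DD DUMMY,DCB=(RECFM=VB,LRECL=32756,BLKSIZE=32760)

DD DUMMY is transparent to the program: writes are discarded, and a read against a DUMMY data set returns end-of-file immediately.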
Bill Woodger

DFSORT Moderator


Joined: 09 Mar 2011
Posts: 7315

PostPosted: Mon Dec 19, 2016 11:49 pm    Post subject: Reply to: DD DUMMY & CPU TIME

Yes, but not as much as removing the processing that creates the file, which is no longer used.
Nileshkul

New User


Joined: 09 May 2016
Posts: 16
Location: India

PostPosted: Mon Dec 19, 2016 11:51 pm    Post subject: Thanks

Thanks for quick response
Rohit Umarjikar

Senior Member


Joined: 21 Sep 2010
Posts: 1788
Location: NY,USA

PostPosted: Tue Dec 20, 2016 2:37 am    Post subject:

Quote:
I am planning to make the file DD DUMMY in the JCL without removing the file-write code in the COBOL program.
You should get rid of the step that produces this data set, if it is no longer needed; making it DD DUMMY in another step is just a half fix.
Quote:
Will making the file DD DUMMY save any CPU time?
Why not try it?
Arun Raj

Moderator


Joined: 17 Oct 2006
Posts: 2285
Location: @my desk

PostPosted: Wed Dec 21, 2016 12:20 am    Post subject:

Quote:
You should get rid of the step that produces this data set, if it is no longer needed; making it DD DUMMY in another step is just a half fix
The OP was talking about DUMMYing the data set in the very step where it is being written. And the suggestion made was to possibly remove the actual file processing in the program, besides dummying/removing the output DD, leaving the remaining program functionality (if any) untouched.
Rohit Umarjikar

Senior Member


Joined: 21 Sep 2010
Posts: 1788
Location: NY,USA

PostPosted: Wed Dec 21, 2016 12:32 am    Post subject:

I know what the TS is asking here; my reply is no different from what you are saying, and that is why I say dummying the output file is a half fix, rather than not producing it at all when it is no longer required.
Arun Raj

Moderator


Joined: 17 Oct 2006
Posts: 2285
Location: @my desk

PostPosted: Wed Dec 21, 2016 12:38 am    Post subject:

Your response seemed to suggest that the OP is creating the data set in stepA and trying to DUMMY it in stepB, which is not the case here.
vasanthz

Global Moderator


Joined: 28 Aug 2007
Posts: 1521
Location: Chennai

PostPosted: Wed Dec 21, 2016 12:44 am    Post subject:

You would see a difference in the EXCP (EXecute Channel Program) counts, as there would be no writes happening to the DUMMY file.

The TCB time would show a decrease, as no records are written, and there would be a minuscule difference in SRB time, as no device need be allocated when the DD is DUMMIED out.

As for the CPU, as Bill mentioned, the CPU saving from the I/O would be small compared to removing the logic from the program itself.
Phrzby Phil

Active Member


Joined: 31 Oct 2006
Posts: 965
Location: Richmond, Virginia

PostPosted: Wed Dec 21, 2016 2:03 am    Post subject:

The half fix may be the safest.

I once worked at an agency that required every piece of functionality to be retested if any change was made to a program. Hence, some simple changes were just never made.
Bill Woodger

DFSORT Moderator


Joined: 09 Mar 2011
Posts: 7315

PostPosted: Wed Dec 21, 2016 3:40 am    Post subject: Reply to: DD DUMMY & CPU TIME

Yes, there can be several stages. DD DUMMY is a painless way of avoiding the physical I/Os without the program requiring change, or being somehow affected by the now non-event: DD DUMMY is entirely transparent to a COBOL program. It is an easy first stage, and can be done as soon as possible.

I was once told, and have never verified, that putting buffers on a DD DUMMY "allows the data to be ignored faster".

I think that all the CPU use within the I/O routines stays, and the records are just finally not written. I think. I don't know.

Changing the program itself has at least two obvious potential stages. First, remove the I/O statements (and any checking associated with them). This is, or should be, transparent to the "business logic" in the program, but it is a much larger change than just the DD DUMMY.

Then there is the clearing out of the processing associated with the creation of the records (again, this can be staged). You may arrive at a point where code still remains, but it is bound to other processing in such a way that there is too great a risk of unintended consequences.

There is also a risk in leaving redundant code operating (it can trip up future changes, complicate analysis of the program, and give false impressions during problem determination and impact analysis).

Documenting this externally, so that the next person along is aware, can mitigate the risks.
steve-myers

Active Member


Joined: 30 Nov 2013
Posts: 562
Location: The Universe

PostPosted: Thu Dec 22, 2016 1:32 am    Post subject:

Well, I did try it. I thought I saw 50 million mentioned in this thread, but all I can actually find now is "millions." I couldn't get enough storage for 50 million records, but I did try it with 10 million.

These were run under Hercules: 18.45 seconds to write 10,000,000 records to a data set, 1.06 seconds to "write" them to DD DUMMY. The loop overhead was 0.10 seconds.

The program wrote a fixed record, so there was no record-preparation time; a real program will show higher times.
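
A comparison like this could be staged as two otherwise identical steps in one job (a sketch with hypothetical program, DD, and data set names, not the actual job used above):

Code:
//* Run 1: write 10,000,000 records to a real (temporary) data set
//REAL     EXEC PGM=TESTPGM
//OUTPUT   DD DSN=&&TESTOUT,DISP=(NEW,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(500,100),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//* Run 2: same program, same record count, "writing" to DUMMY
//DUMMY    EXEC PGM=TESTPGM
//OUTPUT   DD DUMMY,DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)

Comparing the CPU and EXCP figures for the two steps in the job log then isolates the cost of the physical I/O.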
vasanthz

Global Moderator


Joined: 28 Aug 2007
Posts: 1521
Location: Chennai

PostPosted: Thu Dec 22, 2016 1:37 am    Post subject:

Nice research Steve, I wish there was a like button. How did you find the loop overhead? Just curious.
steve-myers

Active Member


Joined: 30 Nov 2013
Posts: 562
Location: The Universe

PostPosted: Thu Dec 22, 2016 2:33 am    Post subject:

Code:
         L     3,COUNTER          LOOP COUNT INTO R3
         CPUTIME STORADR=STIME,CPU=MIC...
         BCT   3,*                EMPTY LOOP: DECREMENT R3, BRANCH TO *
         CPUTIME STORADR=ETIME,CPU=MIC...
         LG    3,ETIME            64-BIT END TIME
         SG    3,STIME            SUBTRACT START TIME
         STG   3,TEMP             STORE ELAPSED CPU MICROSECONDS
         LM    0,1,TEMP           LOAD INTO EVEN/ODD PAIR R0:R1
         D     0,=F'1000'         CONVERT TO MILLISECONDS

Since a BCT was used for the PUT loop, the overhead times had to be basically the same.
steve-myers

Active Member


Joined: 30 Nov 2013
Posts: 562
Location: The Universe

PostPosted: Thu Dec 22, 2016 10:00 am    Post subject:

I'd propose several stages:
  • Change the data set name of the existing output data set, so that if some as-yet-unknown task attempts to use the current data set it can be detected and fixed.
  • Scan SMF type 14 and 15 records, or perhaps just the 14s, for use of the current data set. This can extend back as far as is convenient. Collecting data set names from these records is simple, as they are at a fixed position in the records.
  • If no problems have been detected from the first bullet, go with the DD DUMMY idea.
  • If still no issues have been detected, look at removing the code that creates the data set.
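
The first bullet, renaming the data set so that any as-yet-unknown user fails visibly at allocation, could be done with IDCAMS along these lines (hypothetical data set names):

Code:
//RENAME   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER PROD.REDUNDANT.FILE -
        NEWNAME(PROD.REDUNDANT.FILE.RETIRED)
/*

Any job still referencing the old name then fails with a JCL error, which is easy to detect, and the rename is easy to back out.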
steve-myers

Active Member


Joined: 30 Nov 2013
Posts: 562
Location: The Universe

PostPosted: Thu Dec 22, 2016 5:14 pm    Post subject:

This is the PUT loop from the Assembler listing. The PUT macro is expanded.
Code:
                                     53 LOOP     PUT   OUTPUT,OUTREC
000084 4110 C9B4            009B4    55+LOOP     LA    1,OUTPUT
000088 4100 CB31            00B31    56+         LA    0,OUTREC
00008C 1FFF                          57+         SLR   15,15
00008E BFF7 1031            00031    58+         ICM   15,7,49(1)
000092 05EF                          59+         BALR  14,15
000094 4630 C084            00084    60          BCT   3,LOOP
I also located the code the PUT macro calls.
Code:
PUTDUMMY NOPR  0
         NOPR  0
         NOPR  0
         LR    1,0
         BR    14
I still remember my reaction when I saw the SLR/ICM business in the PUT macro replacing the OS/360 L 15,48(0,1). The SLR/ICM code works in an AMODE 31 program, though I didn't know that was the reason when I first saw the code.

I have no idea what the purpose of the 3 NOPR instructions is. The cynic in me says they're there to sell more hardware.
steve-myers

Active Member


Joined: 30 Nov 2013
Posts: 562
Location: The Universe

PostPosted: Fri Dec 23, 2016 12:06 am    Post subject:

When I prepared the PUT subroutine code, I screwed up. The 3 NOPR 0 instructions are not true NOPRs. For all intents and purposes they are NOPRs, but more like slow NOPRs. A true NOPR is BCR 0,0; these instructions are BCR 15,0. You can read about the difference in Principles of Operation.

I won't try to guess why IBM used them, unless my cynic theory is correct!