which execution will be fast


IBM Mainframe Forums -> Mainframe Interview Questions
Catherine Wesley

New User


Joined: 18 Jan 2015
Posts: 2
Location: India

PostPosted: Mon Jan 26, 2015 2:27 pm

Hi all,
01 APPLE PIC 9(3) VALUE 100.
01 ORANG PIC 9(3) COMP-3 VALUE 200.
01 GUAV  PIC 9(3) COMP VALUE 300.
01 PINEA PIC 9(3) VALUE 400.
01 STRAW PIC 9(3) COMP-3 VALUE 500.
01 JACK  PIC 9(3) COMP VALUE 600.

ADD APPLE TO ORANG
ADD APPLE TO GUAV
ADD APPLE TO PINEA
ADD ORANG TO GUAV
ADD ORANG TO STRAW
ADD GUAV TO JACK.
In the above code, please let me know which of the additions will be fastest.
Akatsukami

Global Moderator


Joined: 03 Oct 2009
Posts: 1788
Location: Bloomington, IL

PostPosted: Mon Jan 26, 2015 2:35 pm

It really doesn't matter; all will execute at the rate of hundreds of millions of machine instructions per second.
Catherine Wesley

New User


Joined: 18 Jan 2015
Posts: 2
Location: India

PostPosted: Mon Jan 26, 2015 3:34 pm

Hi,
But this is an interview question asked recently... any answers please?
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Mon Jan 26, 2015 5:02 pm

The last one (ADD GUAV TO JACK) will be "fastest": both operands are binary (COMP), so there is no conversion, and binary arithmetic is faster than decimal.
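
For what it's worth, a sketch of my own (not part of the original reply) of how each USAGE is stored and why the all-COMP addition avoids any conversion, assuming IBM Enterprise COBOL:

Code:
      * DISPLAY (zoned decimal): one byte per digit, 3 bytes here;
      * the value must be converted to packed before any arithmetic.
       01  APPLE  PIC 9(3)         VALUE 100.
      * COMP-3 (packed decimal): two digits per byte plus a sign
      * nibble, 2 bytes here; decimal arithmetic, no conversion needed.
       01  ORANG  PIC 9(3) COMP-3  VALUE 200.
      * COMP (binary): a halfword here; the fastest arithmetic.
       01  GUAV   PIC 9(3) COMP    VALUE 300.
       01  JACK   PIC 9(3) COMP    VALUE 600.

      * APPLE has to be converted to packed before a decimal ADD:
           ADD APPLE TO ORANG
      * Both operands are already binary, so this is a straight binary
      * ADD with no conversion - the "fastest" of the six additions:
           ADD GUAV TO JACK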
prino

Senior Member


Joined: 07 Feb 2009
Posts: 1306
Location: Vilnius, Lithuania

PostPosted: Mon Jan 26, 2015 7:02 pm

Catherine Wesley wrote:
But this is an interview question asked recently... any answers please?

Tell the interviewer that (s)he's an imbecile for asking such totally bogus questions.
Terry Heinze

JCL Moderator


Joined: 14 Jul 2008
Posts: 1249
Location: Richfield, MN, USA

PostPosted: Mon Jan 26, 2015 7:54 pm

prino wrote:
Catherine Wesley wrote:
But this is an interview question asked recently... any answers please?

Tell the interviewer that (s)he's an imbecile for asking such totally bogus questions.

But use tact in the way you word it. :-)
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Mon Jan 26, 2015 8:36 pm

Quote:
But use tact in the way you word it.


It would be nice to express concern for the solitude of his/her only neuron. 8-)
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Mon Jan 26, 2015 11:01 pm

Well, there is such a thing as being too blasé about this.

It's not clear how the question was asked, but the answer would reveal whether the interviewee has knowledge of how data is represented and stored in a COBOL program, and the performance impacts of that.

Also, until all CPU-time is "free", someone pays, no matter how little a single execution of an individual program would be impacted.

IBM's COBOL Performance Tuning guides have always referenced such things.

With Enterprise COBOL 5.2 (due at the end of February, announced with the z13, and making ARCH(11) instructions available immediately, a first for COBOL; the same is true for the other languages) there is a new RULES compiler option with a sub-option aptly named "LAXPERF".

With LAXPERF (lax consideration of performance) your compile will be identical to now.

With NOLAXPERF, you'll get warnings about doing dumb things.
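
(A sketch of my own, just for context: options like these are normally set either in the compile step's PARM or on a CBL/PROCESS statement at the top of the source, so a NOLAXPERF compile might be requested something like this.)

Code:
       CBL OPT(2),ARCH(11),RULES(NOLAXPERF)
      *  OPT(2)           - full optimization
      *  ARCH(11)         - allow z13 instructions (COBOL 5.2 and later)
      *  RULES(NOLAXPERF) - warn about inefficient coding constructs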

If IBM have gone to the trouble of including this, then it is not only some interviewer who (maybe) finds this important.

The details of NOLAXPERF are not yet available, but from the Announcement Letter:
Quote:

RULES suboption LAXPERF|NOLAXPERF

NOLAXPERF causes the compiler to issue warning messages for the following instances of inefficient COBOL usage:

- A loop (PERFORM VARYING ix1 FROM 1 BY 1 UNTIL ix1 > ix1-max) is flagged if:
  - ix1 is coded as a USAGE DISPLAY value (not packed or binary), or
  - different data types are used for the different operands of the VARYING clause.
- Accessing a table item with a subscript that is not binary, packed, or an index-name.
- MOVEs (COMPUTEs, comparisons) that convert numbers because the operands have different storage representations.
- A MOVE of a character string to another variable with lots of padding (100 bytes or more), for example from-field PIC X(10) moved to to-field PIC X(3200): 10 bytes are moved and the rest of the 3,200-byte field is filled with spaces.
- Slow or non-optimal compiler options: NOAWO, NOBLOCK0, NOFASTSRT, NUMPROC(NOPFD), OPT(0), SSRANGE, TRUNC(STD|BIN).

NOLAXPERF provides performance tips to COBOL programmers on a per-program basis.


I'm not sure that the implementation of this is the best, but it seems an excellent idea.
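
Purely as an illustration of my own (the Announcement Letter doesn't include one), the first flagged case might look something like this:

Code:
       01  WS-SUB      PIC 9(4).
       01  WS-SUB-BIN  PIC 9(4) COMP.
       01  WS-TAB.
           05  WS-ROW  PIC X(10) OCCURS 100 TIMES.

      * A USAGE DISPLAY subscript forces a conversion on every
      * reference - the sort of usage NOLAXPERF would warn about:
           PERFORM VARYING WS-SUB FROM 1 BY 1 UNTIL WS-SUB > 100
               MOVE SPACES TO WS-ROW (WS-SUB)
           END-PERFORM

      * The same loop with a binary subscript needs no conversions:
           PERFORM VARYING WS-SUB-BIN FROM 1 BY 1
                   UNTIL WS-SUB-BIN > 100
               MOVE SPACES TO WS-ROW (WS-SUB-BIN)
           END-PERFORM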
prino

Senior Member


Joined: 07 Feb 2009
Posts: 1306
Location: Vilnius, Lithuania

PostPosted: Tue Jan 27, 2015 12:56 am

Having this option in COBOL may seem useful, but the level of knowledge of individual programmers is often far too low for them to understand what is going on. Also, these micro-optimizations are likely to have only a minimal impact on the generated code.

On "that" system we recently got access to Enterprise PL/I V4.3, and, being a long-time PL/I-only user, I had a look at some of the code generated. Yes, EPLI uses (and has done for a long time) lots of newer instructions when the higher ARCH levels are specified, but when the compiler generates code that is shite in the first place, no amount of micro-optimization will save any time.

You want an example? Look at this:

Given the following PL/I declarations:

Code:
2 idata, /* static, */
  3 #c   fixed bin (31) init (0),
  3 km   fixed    (9,1) init (0),
  3 time fixed      (9) init (0),
  3 c2   char       (2) init ((2)'00'x),

Code:
2 ld,
  3 data(0:17),
    4 #c   fixed bin (31),
    4 km   fixed    (9,1),
    4 time fixed      (9),
    4 c2   char       (2),

Code:
2 sp,
  3 data(21),
    4 #c   fixed bin (31),
    4 km   fixed    (9,1),
    4 time fixed      (9),
    4 c2   char       (2),

The old (AD 1990) OS PL/I V2.3.0 compiler translates the following statements

Code:
ld.data = idata;
sp.data = idata;

into the following short and sweet code:

Code:
L     4,208(0,6)
L     9,1556(0,3)
MVC   LIFT_WORK.LD.DATA.#C(16),LIFT_STATIC.IDATA
MVC   LIFT_WORK.LD.DATA.#C+16(256),LIFT_WORK.LD.DATA.#C
MVC   LIFT_WORK.LD.DATA.#C+272(16),LIFT_WORK.LD.DATA.#C+256

MVC   LIFT_WORK.SP.DATA.#C+16(16),LIFT_STATIC.IDATA
MVC   LIFT_WORK.SP.DATA.#C+32(256),LIFT_WORK.SP.DATA.#C+16
MVC   LIFT_WORK.SP.DATA.#C+288(64),LIFT_WORK.SP.DATA.#C+272

However, the recent (AD 2012) Enterprise PL/I V4R3 compiler produces the following:

Code:
011999 |          LA       r0,0
000000 |          LA       r2,17
000000 |          IIHF     r0,F'18'
000000 |          LAY      r14,IDATA(,r15,9880)
011999 | @4L470   DS       0H
011999 |          SLLK     r1,r0,4
011999 |          AHI      r0,H'1'
011999 |          ALRK     r1,r5,r1
011999 |          LAY      r1,LIFT_WORK(,r1,22624)
011999 |          MVC      LIFT_WORK(16,r1,0),IDATA(r14,0)
011999 |          BRCTH    r0,@4L470
012000 |          LA       r0,1
000000 |          IIHF     r0,F'21'
012000 | @4L471   DS       0H
012000 |          SLLK     r1,r0,4
012000 |          AHI      r0,H'1'
012000 |          ALRK     r1,r5,r1
012000 |          LAY      r1,LIFT_WORK(,r1,23056)
012000 |          MVC      LIFT_WORK(16,r1,0),IDATA(r14,0)
012000 |          BRCTH    r0,@4L471

Now I'm not in the least an expert on compiler construction, nor do I know how efficient the current RISC-ified z/OS CPUs are, but it seems to me that the old code must be an order of magnitude more efficient than the multi-instruction loops generated by a compiler that has benefited(?) from 22 years of compiler development...
Terry Heinze

JCL Moderator


Joined: 14 Jul 2008
Posts: 1249
Location: Richfield, MN, USA

PostPosted: Tue Jan 27, 2015 2:51 am

My 2¢ worth: Although it's still important to consider machine efficiency, I've found myself worrying more about readability and maintainability in recent years, since developer time is much more costly than saving a few microseconds of CPU time. This doesn't excuse sloppy programming techniques though.
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Tue Jan 27, 2015 3:11 am

There's nothing that using a different data-type does which impacts that, is there?
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Tue Jan 27, 2015 3:32 am

Prino,

Yes, the experience/knowledge needed to use all of the information when everything is whacked into the same option is part of my concern with its implementation. So is the possibility of it producing lots of warnings which perhaps can't be avoided but which have to be checked each time; otherwise you end up with the bad case of "compiling that program will give you a 04, so ignore that".

I've no experience of PL/I, but I always assumed it would produce better code than COBOL. Perhaps that used to be true? With COBOL, IBM has often put effort into existing CPU-hogs to reduce their impact. However, COBOL was way, way behind on the use of new machine instructions, which only arrived with V5 in the summer of 2013.

Despite the lack of new instructions, other work through compiler options (NUMPROC and TRUNC, for instance) extended the benefits provided by earlier compiler options, although many sites use the "lax" settings of those, which has an impact on the use of numeric data (which we tend to use a fair amount). But if some idiot changes those options on their system without being aware of what they are doing, they'll cause big problems unless the system is already suitable for those changes (data conforming to PICture).
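
To illustrate what "data conforming to PICture" is about, a hypothetical example of my own for the TRUNC option, assuming Enterprise COBOL semantics:

Code:
       01  WS-COUNT  PIC 9(4) COMP.
      * PIC 9(4) promises four decimal digits, but the halfword behind
      * it can physically hold values up to 32767.
           COMPUTE WS-COUNT = 9999 + 5
      * TRUNC(STD): the result is truncated to the PICTURE, so
      * WS-COUNT ends up as 0004.
      * TRUNC(OPT): the compiler assumes values always conform to the
      * PICTURE and may skip the truncation, so WS-COUNT can end up as
      * 10004 - a different answer from the same source, which is why
      * flipping these options site-wide without checking the data
      * first is asking for trouble.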

Tom Ross, IBM's Captain COBOL, did a presentation about a client of theirs who had moved from PL/I to COBOL. They found it generally faster, but mourned the loss of some PL/I functionality where the COBOL equivalents were slower. I'm prepared to bet the performance figures for PL/I came from the second compiler series that you mention :-)