Hi all,
Code:
       01  APPLE  PIC 9(3) VALUE 100.
       01  ORANG  PIC 9(3) COMP-3 VALUE 200.
       01  GUAV   PIC 9(3) COMP VALUE 300.
       01  PINEA  PIC 9(3) VALUE 400.
       01  STRAW  PIC 9(3) COMP-3 VALUE 500.
       01  JACK   PIC 9(3) COMP VALUE 600.

       ADD APPLE TO ORANG
       ADD APPLE TO GUAV
       ADD APPLE TO PINEA
       ADD ORANG TO GUAV
       ADD ORANG TO STRAW
       ADD GUAV TO JACK.
In the above code, please let me know which of the additions will be fastest.
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Well, there is such a thing as being too blasé about this.
It's not clear how the question was asked, but the answer would reveal whether the interviewee has knowledge of how data is represented and stored in a COBOL program, and the performance impacts of that.
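To make that concrete, here is a sketch of my own (not from the original post) of the conversions the compiler typically has to generate for each mix of USAGEs; the exact instructions depend on compiler options such as NUMPROC and TRUNC:
Code:
       ADD APPLE TO ORANG   *> DISPLAY to COMP-3:  PACK APPLE, then AP
       ADD APPLE TO GUAV    *> DISPLAY to COMP:    PACK, CVB, binary add
       ADD APPLE TO PINEA   *> DISPLAY to DISPLAY: PACK both, AP, UNPK
       ADD ORANG TO GUAV    *> COMP-3 to COMP:     packed/binary conversion
       ADD ORANG TO STRAW   *> COMP-3 to COMP-3:   AP, no conversion
       ADD GUAV TO JACK     *> COMP to COMP:       binary add, no conversion
The two like-for-like additions avoid conversion altogether, and the all-binary ADD GUAV TO JACK is usually the cheapest of the lot.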
Also, until all CPU-time is "free", someone pays, no matter how little a single execution of an individual program would be impacted.
IBM's COBOL Performance Tuning guides have always referenced such things.
With Enterprise COBOL 5.2 (due at the end of February, and announced along with the z13, making the ARCH(11) instructions available immediately, a first for COBOL, and the same for the other languages) there is a new RULES compiler option with a sub-option aptly named "LAXPERF".
With LAXPERF (lax consideration of performance) your compile will be identical to now.
With NOLAXPERF, you'll get warnings about doing dumb things.
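For reference (my example, not from the announcement letter), the sub-option goes wherever you already specify RULES, for instance on a CBL/PROCESS statement:
Code:
       CBL RULES(NOLAXPERF)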
If IBM have gone to the trouble of including this, then it is not only some interviewer who (maybe) finds this important.
The details of NOLAXPERF are not yet available, but from the Announcement Letter:
Quote:
RULES suboption LAXPERF|NOLAXPERF
NOLAXPERF causes the compiler to issue warning messages for the following instances of inefficient COBOL usage:
Loop (Perform varying ix1 from 1 by 1 until ix1 > ix1-max) should be flagged if:
ix1 is coded as USAGE DISPLAY value (not packed or binary).
There are different data types used for different operands in the VARYING clause.
Accessing a Table item with a subscript defined without binary/packed or a non-INDEX-NAME
MOVEs (COMPUTEs, comparisons) with conversion of numbers because of different storage representation
MOVE of character-string to another variable, but with lots of padding (100 bytes or more), like:
from-field pic x(10) to-field pic x(3200)
move from-field to to-field, which will move 10 bytes and then fill spaces into the remaining 3,190 bytes of memory
Slow or non-optimal compiler options: NOAWO, NOBLOCK0, NOFASTSRT, NUMPROC(NOPFD), OPT(0), SSRANGE, TRUNC(STD|BIN)
NOLAXPERF provides performance tips to COBOL programmers on a per-program basis.
I'm not sure that the implementation of this is the best, but it seems an excellent idea.
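To make the first two flagged patterns concrete, here is a sketch of my own (not IBM's example): the counter is USAGE DISPLAY, so every iteration converts it to do the arithmetic and the comparison, and the VARYING clause also mixes data types, so a NOLAXPERF compile should warn on both counts:
Code:
       01  WS-SUB  PIC 9(4).                 *> flagged: USAGE DISPLAY
       01  WS-MAX  PIC 9(4) COMP VALUE 100.  *> different type to WS-SUB
           PERFORM VARYING WS-SUB FROM 1 BY 1
                   UNTIL WS-SUB > WS-MAX
               CONTINUE
           END-PERFORM
Defining WS-SUB as COMP (or using an INDEX-NAME for table access) removes the per-iteration conversion.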
Joined: 07 Feb 2009 Posts: 1316 Location: Vilnius, Lithuania
Having this option in COBOL may seem useful, but the level of knowledge of individual programmers is far too low for them to understand what is going on. Also, these micro-optimizations are likely to have only a minimal impact on the generated code.
On "that" system we recently got access to Enterprise PL/I V4.3, and being a long time PL/I only user, I had a look at some of the code generated. Yes, EPLI uses (and has been doing so for a long time) lots of newer instructions when the higher ARCH levels are specified, but when the compiler generates code that is shite in the first place, no amount of micro-optimization will save any time.
You want an example? Look at this:
Given the following PL/I declarations:
Code:
2 idata, /* static, */
3 #c fixed bin (31) init (0),
3 km fixed (9,1) init (0),
3 time fixed (9) init (0),
3 c2 char (2) init ((2)'00'x),
Code:
2 ld,
3 data(0:17),
4 #c fixed bin (31),
4 km fixed (9,1),
4 time fixed (9),
4 c2 char (2),
Code:
2 sp,
3 data(21),
4 #c fixed bin (31),
4 km fixed (9,1),
4 time fixed (9),
4 c2 char (2),
The old (AD 1990) OS PL/I V2.3.0 compiler translates the following statements
Code:
ld.data = idata;
sp.data = idata;
into the following short and sweet code:
Code:
L 4,208(0,6)
L 9,1556(0,3)
MVC LIFT_WORK.LD.DATA.#C(16),LIFT_STATIC.IDATA
MVC LIFT_WORK.LD.DATA.#C+16(256),LIFT_WORK.LD.DATA.#C
MVC LIFT_WORK.LD.DATA.#C+272(16),LIFT_WORK.LD.DATA.#C+256
Because MVC moves at most 256 bytes and works byte by byte, left to right, the second instruction's overlapping operands propagate the 16-byte IDATA pattern through the next 256 bytes, and the third tops up the last 16: three instructions to fill all 18 × 16 = 288 bytes of the LD array. Now I'm not in the least an expert on compiler construction, nor do I know how efficient the current RISC-ified z/OS CPUs are, but it seems to me that this old code must be an order of magnitude more efficient than the multi-instruction loops generated by a compiler that has benefited(?) from 22 years of compiler development...
Joined: 14 Jul 2008 Posts: 1248 Location: Richfield, MN, USA
My 2¢ worth: Although it's still important to consider machine efficiency, I've found myself worrying more about readability and maintainability in recent years, since developer time is much more costly than saving a few microseconds of CPU time. This doesn't excuse sloppy programming techniques though.
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Prino,
Yes, the experience/knowledge needed to use all of the information when everything is whacked into the same option is part of my concern with its implementation, as is the possibility of it producing lots of warnings which perhaps can't be avoided but which have to be checked each time; otherwise you end up with the bad case of "compiling that program always gives you a 04, so ignore it".
I've no experience of PL/I, but always assumed it would produce better code than COBOL. Perhaps that used to be true? With COBOL, IBM has often put effort into existing CPU-hogs to reduce their impact. However, COBOL was way, way behind on the use of new machine instructions, which only arrived with V5 in the summer of 2013.
Despite the lack of new instructions, other work through compiler options (NUMPROC and TRUNC, for instance) extended the benefits provided by earlier compilers, although many sites use the "lax" settings of those options, which has an impact on the use of numeric data (which we tend to use a fair amount). But if some idiot changes those options on their system without being aware of what they are doing, they'll cause big problems unless the data is already suitable for the change (that is, conforming to its PICture).
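As an illustration of the TRUNC side of that (my own sketch, not from anything IBM published):
Code:
       01  WS-HALF  PIC S9(4) COMP.
      *    MOVE 12345 TO WS-HALF
      *    TRUNC(STD): truncated to the four PICture digits -> 2345
      *    TRUNC(BIN): the full halfword is used -> 12345 is kept
      *    TRUNC(OPT): fastest code, but the result is unpredictable
      *                because the value does not conform to the PICture
If the data always conforms to its PICture, TRUNC(OPT) is both safe and fast; if it doesn't, changing the option changes the results.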
Tom Ross, IBM's Captain COBOL, did a presentation on a client of theirs who had moved from PL/I to COBOL. They found COBOL generally faster, but mourned the loss of some PL/I functionality where the COBOL equivalents were slower. I'm prepared to bet the PL/I performance was from the second compiler series you mention :-)