Joined: 28 Dec 2006 Posts: 27 Location: Baltimore, MD 21215
Thanks Bill
Between the many other things I am doing, I did realize that you and others meant for me to employ a REDEFINES (which I was not doing).
But I finally got down to doing that and it worked !!
So the final working solution is:
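A minimal sketch of the kind of REDEFINES overlay being described (illustrative field names and pictures of my own choosing, not the poster's actual fields):

```cobol
       WORKING-STORAGE SECTION.
      * Illustrative only: four raw bytes overlaid with a fullword
      * binary field via REDEFINES, so the same storage can be
      * treated as a number with no MOVE or conversion.
       01  WS-RAW-BYTES        PIC X(4).
       01  WS-BINARY-VALUE     REDEFINES WS-RAW-BYTES
                               PIC S9(8) COMP.
```

The point of the REDEFINES is that both names map the same four bytes of storage, so referencing WS-BINARY-VALUE reads the bytes directly as a fullword binary.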
Joined: 14 Jan 2008 Posts: 2501 Location: Atlanta, Georgia, USA
Bill said
Quote:
A binary with nine digits is pretty terrible for calculations. According to a reputable source, the compiler will have to convert it to a double-word, call routines to do the double-word maths, then convert it back to a fullword. Did you try making it a packed field? The same REDEFINES works, and no conversions are needed for the calculations either.
Bill,
Wow, haven't looked at an Assembler expansion in quite a while. Calling a run-time routine when the number of digits in a fullword exceeds 8?
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Sorry, a bit of conflation on my part. The subroutine call might occur with TRUNC(BIN); it is definitely not going to use a subroutine for the fullword-to-doubleword conversion if TRUNC(BIN) is not in effect, and I've not checked whether this is a case where TRUNC(BIN) would use a subroutine.
Nine digits is heavier on processing than 10-17 digits, because of the need to convert the fullword to a doubleword, do the maths, then convert back to a fullword.
10-17 digits just does the maths, with no need to convert to/from. So 10-17-digit binary maths from Cobol will mostly be faster than nine-digit. 1-4 digits is fastest, 10-17 second, 9 third, 18 fourth.
If using 9 digits, avoid maths anyway :-)
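For reference, a sketch of the BINARY digit ranges being discussed, with the usual storage mapping (1-4 digits = halfword, 5-9 = fullword, 10-18 = doubleword); the field names are mine, for illustration only:

```cobol
       01  WS-EXAMPLES.
      * 1-4 digits: halfword (2 bytes) - fastest for maths
           05  WS-HALF     PIC S9(4)   BINARY.
      * 5-9 digits: fullword (4 bytes); at 9 digits the generated
      * code may convert to a doubleword for the maths and back
           05  WS-NINE     PIC S9(9)   BINARY.
      * 10-18 digits: doubleword (8 bytes); 10-17 digits just does
      * the maths, no conversion to/from
           05  WS-WIDE     PIC S9(12)  BINARY.
      * Packed-decimal alternative: no binary conversions at all
           05  WS-PACKED   PIC S9(9)   PACKED-DECIMAL.
```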
I once "tuned" some subscripts, effectively holding addresses, from 8 digits to 9. I never checked the effect; if I can find an old compiler somewhere, maybe I'll do it sometime.
I squeezed everything out of the program, a "tool" of mine which was "discovered" and then used across all departments. For our small systems it was about 3-5 seconds of CPU, but for the larger ones, 10-30. So the tuning was for those who didn't want to admit the benefits of using it, because of the extra three minutes it added to the end of the "promotion" process.
I got it down to under one second, irrespective of system size (mainly by doing things in different ways). I calculated this would save a lot of time. I sent the new docs around (SCRIPT/GML/DCF, like the manuals) and highlighted the JCL change to a time limit of one CPU second, explicitly stating that the only way this could be exceeded was if the program were looping eternally.
One guy ran it and got an S322 (CPU time exceeded). He thought to himself, "I'm very important, my system is very important, this took 30 seconds before, I need to change this". In mid-afternoon I noticed a job running with a familiar program name (it was called OCCULT, since general routines in our project group had to start with OC and I'd already used OCTOPUS) and a squid-load of CPU against it. The guy had kept upping the time limit and re-running, till he'd put TIME=1440 on the step and gone out to lunch :-)
The reason for the loop? The tool could deal with both Cobol and Assembler programs. As Assembler programs can be much bigger than Cobol ones, I had a size limit. Before making the change I had asked everyone, "Do you have any really big Assembler programs?" "Oh, no," these particular people said, "we have some Assembler, but they're only small".
One of the "small" programs was allocating a huge lump of storage. In fact, it wasn't really a program at all; it was just a means of allocating a huge lump of storage. When I asked about "really big", they thought in terms of lines of code :-) My program was looping, looking at the same lump of storage for ever, just never all of it, so never finding the next program in the load module.
Of course, when I had asked everyone to "system test" the new version, those lazy lazers had just picked one of their systems, not bothered to run it on all of them. Wonderful to be so important, isn't it :-)
I thought at the time, "well, not worth changing the 8's to 9's, it'll never save the CPU time wasted today".
There is a possibility I actually slowed the thing down doing that change :-)