delago
New User
Joined: 29 Jul 2005 Posts: 21 Location: Brazil
Hi guys.
I need to discuss a topic, because I need suggestions for a future project.
The situation: a project where I need to calculate the risk of a generic financial operation. In the model specified by the user (my client), I will use many arithmetical expressions with a high level of difficulty.
An example of such an arithmetical expression:
VaR_i^cred = max{ max{ K_i; piso_i_EAD_LGD } × LGD_i; piso_i_EAD } × EAD_i + SD_i × LGD_i
In this example, I need to run the arithmetical expression many times, because there are month-by-month payments. Sometimes I use the previous results in future arithmetical expressions.
The concept is very hard to grasp.
Programming this in COBOL, the code becomes very large and hard to understand. And the worst part: the performance is poor, and the user (my client) is left waiting for the result.
My question: which language and environment should I use to get the best results for complex arithmetical expressions?
I welcome any suggestions.
Tks.
Fernando Delago - Brazil/SP
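To make the expression concrete, here is one possible COBOL sketch of it. Every data name is my guess at the subscripted terms (K_i, the two "piso" floors, LGD, EAD, SD); FUNCTION MAX is a genuine Enterprise COBOL intrinsic function.

```cobol
      * Hypothetical rendering of the VaR expression for one item;
      * all data names are assumptions, not the poster's fields.
       COMPUTE WS-VAR-CRED =
           FUNCTION MAX(FUNCTION MAX(WS-K, WS-PISO-EAD-LGD)
                          * WS-LGD,
                        WS-PISO-EAD)
             * WS-EAD
           + WS-SD * WS-LGD
```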
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Are you saying you don't want to use Cobol? So why ask the question in a Cobol forum?
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
How many millions of these calcs are you doing per day?
prino
Senior Member
Joined: 07 Feb 2009 Posts: 1306 Location: Vilnius, Lithuania
Every mainframe compiler worth its price will (should) generate more or less the same code. If you want to speed things up, you will have to move invariant expressions out of loops and possibly cache intermediate results.
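A sketch of that idea in COBOL, with invented data names: anything that does not change from one instalment to the next is computed once, before the loop, instead of on every pass through it.

```cobol
      * Hypothetical example: WK-RATE-FACTOR does not depend on
      * WS-MONTH, so it is hoisted out of the PERFORM loop and
      * computed once instead of once per instalment.
       COMPUTE WK-RATE-FACTOR = 1 + WTX-ANNUAL-RATE / 12
       PERFORM VARYING WS-MONTH FROM 1 BY 1
               UNTIL WS-MONTH > WS-NUM-INSTALMENTS
           COMPUTE WS-BALANCE = WS-BALANCE * WK-RATE-FACTOR
                              - WS-PAYMENT(WS-MONTH)
       END-PERFORM
```

Caching intermediate results works the same way: if two later expressions both need, say, WS-BALANCE * WK-RATE-FACTOR, compute it once into a work field and reuse the field.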
Bill O'Boyle
CICS Moderator
Joined: 14 Jan 2008 Posts: 2501 Location: Atlanta, Georgia, USA
Language Environment offers callable service routines for a wide variety of mathematical calculations. COBOL/370 (and later) offers comparable intrinsic FUNCTIONs as well, but not to the extent of LE.
LE can be used with most languages, including Assembler, but the version/release (for HLLs) must be compatible.
Bill
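For illustration, a few of the COBOL intrinsic functions Bill mentions (SQRT, LOG and MAX are genuine intrinsics; the data names are invented):

```cobol
      * Intrinsic functions avoid hand-coding common mathematics.
       COMPUTE WS-ROOT    = FUNCTION SQRT(WS-VALUE)
       COMPUTE WS-NAT-LOG = FUNCTION LOG(WS-VALUE)
       COMPUTE WS-BIGGEST = FUNCTION MAX(WS-A, WS-B, WS-C)
```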
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
COBOL is perfectly adequate at complex arithmetic. Things like USAGE and number of digits (before AND after the decimal point) can impact the calculations but should not stop you from using COBOL.
FORTRAN was designed for mathematical calculations. I haven't used it in probably 18 - 20 years but it might be worth considering.
In general, though, as prino mentioned, the compilers all generate close to the same code, so any optimization achieved has to be done by hand using the methods described in his post and in numerical analysis books. And don't expect magnificent savings -- once you've got a baseline time for execution, you're only likely to get incremental benefits no matter what you do, because sometimes things just take a certain amount of time, period.
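A small sketch of the point about USAGE and digit counts, with invented names (the precise intermediate-result rules are in the Enterprise COBOL Programming Guide):

```cobol
      * Packed decimal: exact decimal arithmetic, with the number of
      * digits before and after the V affecting intermediate results.
       01  WS-RATE-PACKED   PIC S9(5)V9(6)   COMP-3.
      * Binary: generally the fastest usage for whole numbers.
       01  WS-COUNT-BINARY  PIC S9(9)        COMP.
      * Long floating point: large range, but inexact for decimals.
       01  WS-RATE-FLOAT    USAGE COMP-2.
```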
PeterHolland
Global Moderator
Joined: 27 Oct 2009 Posts: 2481 Location: Netherlands, Amstelveen
PL/I, Fortran, APL?
Kjeld
Active User
Joined: 15 Dec 2009 Posts: 365 Location: Denmark
All compiled mainframe languages basically produce the same code for arithmetic expressions.
The important thing is to define the operands of the COMPUTE statement as binary, in order to avoid any type conversion from packed or display formats during the computation. Intermediate results are usually kept in registers for the fastest computation.
Newer zSeries machines have special floating-point instructions that allegedly can be used by some languages; PL/I, C and Enterprise Java have been mentioned.
I assume that you would be in total control of the computation if you used Assembler.
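A sketch of Kjeld's point, with invented names: when every operand of a COMPUTE shares the same binary usage, the compiler has no need to convert to and from packed or display formats in the middle of the calculation.

```cobol
       WORKING-STORAGE SECTION.
       01  WS-PRINCIPAL     PIC S9(9)V9(4)  COMP.
       01  WS-RATE          PIC S9(1)V9(4)  COMP.
       01  WS-INTEREST      PIC S9(9)V9(4)  COMP.
       PROCEDURE DIVISION.
      * All operands are binary, so no packed/display conversions
      * are needed during the computation.
           COMPUTE WS-INTEREST = WS-PRINCIPAL * WS-RATE
```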
delago
New User
Joined: 29 Jul 2005 Posts: 21 Location: Brazil
Hi folks,
Clarifying some doubts:
- My process grows exponentially with each execution, because it deals with complex arithmetic calculations for each tranche of a financing transaction. Example: for an operation with 60 instalments, over 50 calculations will be performed, using values with up to six decimal places, and this process can be repeated 60 to 10 times (an estimate).
- In the first measurement of the online transaction's performance, we got an elapsed time of almost 7 seconds. After some changes (almost a restructuring of the system) we were able to reduce the time by almost 1 second.
- I'm trying to prevent future re-implementations from degrading the performance.
- Some time ago, I heard that certain lower-level programming languages had better performance on the mainframe. Is this statement true?
Thanks guys.
Fernando Delago - Brazil/São Paulo
Bill O'Boyle
CICS Moderator
Joined: 14 Jan 2008 Posts: 2501 Location: Atlanta, Georgia, USA
Are you using floating-point (COMP-1/COMP-2) or fixed-point (COMP-3) data types?
Floating point in COBOL can be very costly, as it may cause the compiler to BALR/CALL run-time routines.
Fixed point (COMP-3) can support up to 16 bytes, if your compiler supports the ARITH(EXTEND) option (OS/390 COBOL 2.2.1 and later).
Could you post some of your calculations (without causing any trouble with management)? It's hard to visualise what you're doing.
COBOL does come with some extra "baggage", and Assembler may perform these calculations more efficiently.
If you're issuing (for example) WS-AMOUNT ** 6 (WS-AMOUNT to the power of 6), then you're performing exponentiation on fixed-point data, by virtue of the "**".
Bill
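For illustration, a fixed-point declaration taking advantage of ARITH(EXTEND), which raises the packed-decimal limit to 31 digits (the name and the split of digits around the V are invented):

```cobol
      * With ARITH(EXTEND), a packed-decimal item may hold up to 31
      * digits, often enough precision to avoid floating point.
       01  WK-CALC-FIXED    PIC S9(15)V9(16)  COMP-3.
```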
prino
Senior Member
Joined: 07 Feb 2009 Posts: 1306 Location: Vilnius, Lithuania
Can you just show us a few of these calculations? A calculation is a calculation, not a trade secret, and it might give us a much better understanding of what's required.
delago
New User
Joined: 29 Jul 2005 Posts: 21 Location: Brazil
OK folks... here are a few. I "break" the expression into small arithmetic expressions for better performance, because I read in an IBM manual that this change would help me.
Code:
MOVE ZEROS TO WK-VAR-CALC
*----------------------------------------------------------------*
COMPUTE WK-CALC01 = WTXA-ALFA - WUM
COMPUTE WK-CALC02 = WTXA-INI * WTXA-ALFA
COMPUTE WK-CALC03 = WK-CALC01 / WK-CALC02
COMPUTE WK-CALC04 = WK-CALC03 - WVLR-COMP-INI
COMPUTE WK-CALC05 = WK-CALC04 * WTXA-BETA
                  * WTXA-INI * WTXA-INI
COMPUTE WK-CALC06 = WTXA-INI * WPRZ-POT-1
COMPUTE WK-CALC07 = WUM - WTXA-BETA + WK-CALC06
COMPUTE WK-CALC08 = WK-CALC05 / WK-CALC07
*----------------------------------------------------------------*
COMPUTE WTXA-INI = WTXA-INI - WK-CALC08
*----------------------------------------------------------------*
and the variable declarations...
Code:
03  WK-VAR-CALC.
    05  WK-CALC01    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC02    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC03    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC04    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC05    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC06    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC07    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC08    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC09    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC10    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC11    USAGE IS COMP-2 VALUE ZEROS.
    05  WK-CALC12    USAGE IS COMP-2 VALUE ZEROS.
Sometimes I need the previous result to calculate the next result.
I am using the concept of CEA.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
I repeat my question from April 7:
Quote:
How many millions of these calcs are you doing per day?
Bill O'Boyle
CICS Moderator
Joined: 14 Jan 2008 Posts: 2501 Location: Atlanta, Georgia, USA
Is there a good reason why you're using COMP-2 (long floating point)?
Besides being a "challenging" format for COBOL, floating point may not yield the expected results, and truncation can take place within intermediate results.
If COMP-2 is your desired final format, then perform the calculations in fixed point, either packed decimal or binary, and MOVE the result to the COMP-2 field when you're done.
Compile your code with the LIST,NOOFFSET compiler options (this generates the Assembler expansion) and you might be surprised that some of these calculations require a CALL/BALR to a COBOL run-time routine, which you don't want unless you have no other choice.
Bill
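A sketch of that suggestion, reusing field names from the earlier listing where possible (the fixed-point PICTURE is my guess):

```cobol
       01  WK-WORK-FIXED    PIC S9(9)V9(8)  COMP-3.
       01  WK-FINAL-FLOAT   USAGE COMP-2.
      * Do the arithmetic in fixed point ...
           COMPUTE WK-WORK-FIXED = WTXA-ALFA - WUM
      * ... and convert to floating point only once, at the end.
           MOVE WK-WORK-FIXED TO WK-FINAL-FLOAT
```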
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Locate and digest "IBM Enterprise COBOL Version 4 Release 2 Performance Tuning".
I'm imagining that it is an "online" application. Can you estimate how many groups of calculations there are between ENTER and returning the result, in a variety of cases (as Phil has been encouraging)? Blast a couple of hundred thousand calcs through a little batch program, and see how much CPU you use. It'll give you some idea when you scale it down.
It may be a bigger task to convince everyone (including yourself) that the calculations lead to the results expected by the user. As an example, for fun, do the same thing (carefully, with lots of parentheses) as one big COMPUTE, and compare the answers (of the calculations) and the run times.
How is the user doing this currently? It is one thing to have the definition of the calculation, but you might have to get at how that definition is currently implemented to be able to get your answers to match.
Definitely follow Bill's advice about looking at the generated code. Look particularly at your WK-CALC05; I think you should separate that out. Look at the intermediate fields the compiler generates.
I don't know what this
Code:
MOVE ZEROS TO WK-VAR-CALC
is for. As far as I know, that is going to move character zeros to the group item. It shouldn't matter, as you do not rely on any initial values for those fields, so don't do the MOVE. If you want to zeroise all those fields at the group level and rely on an initial value, MOVE LOW-VALUES instead.
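For reference, here is the earlier chain of COMPUTEs collapsed into one statement, as suggested above. This assumes the `* WTXA-INI * WTXA-INI` line in the posted listing is a continuation of the WK-CALC05 COMPUTE rather than a comment; compare the answers and timings against the step-by-step version before trusting it.

```cobol
      * One-statement equivalent of WK-CALC01..WK-CALC08, assuming
      * WK-CALC05 = WK-CALC04 * WTXA-BETA * WTXA-INI * WTXA-INI.
       COMPUTE WTXA-INI = WTXA-INI
           - ( ( (WTXA-ALFA - WUM) / (WTXA-INI * WTXA-ALFA)
                 - WVLR-COMP-INI )
               * WTXA-BETA * WTXA-INI * WTXA-INI )
             / ( WUM - WTXA-BETA + WTXA-INI * WPRZ-POT-1 )
```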
delago
New User
Joined: 29 Jul 2005 Posts: 21 Location: Brazil
Hi Phrzby Phil.
It is very difficult to measure exactly the quantity of calcs I execute per day.
I have an estimate:
On-line: we run 19500 SQL statements, and for each calculation I need to do 3 calcs to get 1 result.
19500 x 3 = 58500 mathematical operations (estimated).
This operation is executed 20,000 times per day.
Now I am altering the process to decrease the number of SQL statements for better performance, but I need some ideas so that future implementations do not degrade the performance.
Tks
Fernando Delago - Brazil/São Paulo
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
I just wanted to be sure that you have enough calcs per day to make computer efficiency more important than human readability.
Human meaning, of course, not just you, the author, but also the entry-level person who will inherit this at 3am when you are on vacation or have been promoted.