Opened 9 years ago
Last modified 7 years ago
#3568 new discussion
Performance issues in DIVISION_SIM
Reported by: Federico Bergero
Owned by: somebody
Priority: low
Milestone: Future
Component: Run-time
Version: (none)
Keywords: (none)
Cc: Francesco Casella, Adrian Pop, Willi Braun
Description
Hi all, I'm running a model that has a lot of divisions.
The generated C code inserts the DIVISION_SIM macro for each division. This macro (as far as I understand) checks for a zero denominator and allows division by zero during initialization (when initial()==1).
This is OK for user-friendliness, but in my case I see a ~50% performance penalty for these checks, and my model has no divisions by zero.
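For readers unfamiliar with the macro, a minimal sketch of such a guard might look like the following. This is an illustration only, not the actual OpenModelica implementation; guarded_div and initial_phase are invented names standing in for the runtime's internals.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch only -- not the real OpenModelica macro.
 * A DIVISION_SIM-style guard: tolerate a zero denominator during
 * initialization (initial_phase != 0), abort with a message otherwise. */
static int initial_phase = 1;   /* stands in for initial() */

static double guarded_div(double num, double den, const char *expr)
{
    if (den == 0.0) {
        if (initial_phase)
            return 0.0;          /* tolerated while initializing */
        fprintf(stderr, "division by zero in %s\n", expr);
        exit(1);
    }
    return num / den;
}

/* stringify the operands so the error message names the expression */
#define DIVISION_SIM(a, b) guarded_div((a), (b), #a " / " #b)
```

The per-call comparison and function call are exactly the overhead the ticket is about: in a tight ODE right-hand side with thousands of divisions, this guard competes with a single machine division instruction.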
With the original DIVISION_SIM macro the simulation takes 26.81 seconds.
If I replace the macro with a plain division it takes 14.15 s.
Are these checks needed for anything other than showing a meaningful error message? If not, perhaps plain division
should be the default behavior, showing a not-so-meaningful "Division by zero" message and asking the user to re-run the model with a specific flag that enables the check.
Also, the initial() check is not necessary after initialization. Perhaps the equations could be split into initial equations that allow division by zero and dynamic ones that don't, without checking initial() on every division.
What do you guys think?
Attachments (1)
Change History (16)
by , 9 years ago
Attachment: airconds_det.mo added
comment:1 by , 9 years ago
Cc: added; removed
comment:2 by , 9 years ago
The generated code already contains separate equations for simulation and initialization. Therefore, it should be easy to remove the initial() check.
comment:3 by , 9 years ago
Cc: added; removed
comment:4 by , 9 years ago
Also, in this particular example the denominator of the division is a parameter-dependent expression.
Thus it doesn't make sense to check it on every evaluation, since after initialization its value never changes.
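As a hedged sketch of what the generated code could do instead (the names Params, check_denominators, and derivative are invented for illustration, not taken from the code generator): validate the parameter-dependent denominator once during initialization, then use plain division in the hot simulation loop.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: check a parameter-dependent denominator once,
 * at initialization, instead of on every simulation step. */
typedef struct { double R; double C; } Params;

/* run once, after parameter values are bound */
static void check_denominators(const Params *p)
{
    if (p->R * p->C == 0.0) {
        fprintf(stderr, "division by zero: R*C\n");
        exit(1);
    }
}

/* hot simulation loop: plain division, no per-step guard */
static double derivative(double x, const Params *p)
{
    return -x / (p->R * p->C);
}
```

Since parameters cannot change after initialization, a check performed once there gives the same safety as the per-step macro at zero steady-state cost.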
comment:5 by , 9 years ago
I think we need a compiler flag to choose between a normal and a performance version of the simulation runtime.
A ~50% performance penalty for a small user-friendliness macro is quite surprising, at least to me.
comment:6 by , 9 years ago
I understand from Federico that it can actually be more than 50%, which really shocked me. He started from the same model (a huge array of explicit ODEs, with a lot of divisions by parameters on the right-hand side), compiled it with both OMC and u-Modelica, and then simulated it with DASSL in both cases. Both simulations took the same number of steps (the model and the ODE integrator are the same), but the u-Modelica one was 4X faster! Part of this was due to deeper C compiler optimization, but a significant part was due to these macros (there are lots of divisions in his model), and also to multiple indirections to access variable and parameter values, which apparently take up much time.
I think we should look into this issue more carefully. Federico, maybe you can provide the full test case in this ticket, it shouldn't take much space.
I agree that we should be able to choose between different degrees of optimization, as more optimizations to reduce the simulation time do increase the compilation time, and it might not be worth it in many cases. However, in this specific case I don't think it ever makes sense to check conditions on parameters (or parameter-dependent expressions) at each time step. When generating the code, the check for these conditions should be moved to the initialization phase, and performed only once. I think this should be easily done, without any significant penalty to the compilation time.
In fact, one could even get slightly faster performance by replacing the division by a parameter expression with the multiplication by its inverse, to be computed once during initialization.
I guess all these optimizations get done automatically if the parameters are evaluated, but there is no reason not to perform them also in the case they are not.
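A sketch of the inverse-parameter idea above (the names Model, initialize, and der_x are assumptions for illustration): compute the reciprocal once during initialization, so the per-step division becomes a multiplication.

```c
/* Sketch: precompute the reciprocal of a parameter expression during
 * initialization so the per-step division becomes a multiplication. */
typedef struct {
    double tau;      /* parameter */
    double inv_tau;  /* 1/tau, computed once at initialization */
} Model;

static void initialize(Model *m, double tau)
{
    m->tau = tau;
    m->inv_tau = 1.0 / tau;   /* the single division, done once */
}

static double der_x(double x, const Model *m)
{
    return -x * m->inv_tau;   /* multiplication in the hot loop */
}
```

On most hardware a floating-point multiply has noticeably lower latency than a divide, which is why this transformation can pay off even before considering the removed zero check.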
comment:7 by , 9 years ago
Hi, the example is attached (as aircont_det.mo).
I didn't use any extra flag for parameter evaluation. I could try that, which flag should I turn on?
comment:9 by , 9 years ago (follow-up: comment:10)
Replying to casella:
I guess
--preOptModules+=evaluateAllParameters
Unfortunately, this module (as well as a lot of other modules) does nothing even if it is selected for pre/post optimization unless additional flags are set. This needs to be cleaned up soon. I think we should simply remove a lot of the debug flags.
However, for now the evaluateAllParameters module additionally needs the debug flag -d=evalAllParams.
comment:10 by , 9 years ago (follow-up: comment:11)
Replying to lochel:
Replying to casella:
I guess
--preOptModules+=evaluateAllParameters
Unfortunately, this module (as well as a lot of other modules) does nothing even if it is selected for pre/post optimization unless additional flags are set. This needs to be cleaned up soon. I think we should simply remove a lot of the debug flags.
Do you mind opening a ticket, so we keep track of this? Thanks!
comment:11 by , 9 years ago
Cc: added
Replying to casella:
Do you mind opening a ticket, so we keep track of this? Thanks!
Yes, I will do so. However I will first bring this up on the next OSMC dev meeting on Monday.
comment:12 by , 9 years ago
Hi, I tried the
-d=evalAllParams --preOptModules+=evaluateAllParameters
options, and indeed the generated code no longer uses the macro. But at least for my model the results are completely wrong (DASSL takes a different number of steps). Perhaps there is something wrong in the model?
comment:14 by , 7 years ago
Cc: added
This is the example. I'm compiling it with -O3.