Opened 9 years ago

Closed 9 years ago

Last modified 7 years ago

#3553 closed defect (fixed)

The evalfunc backend module is slow due to removeSimpleEquations

Reported by: Francesco Casella
Owned by: Volker Waurich
Priority: high
Milestone:
Component: Backend
Version:
Keywords:
Cc: Willi Braun, andrea.bartolini@…

Description

The current implementation of the evalfunc module makes one call to removeSimpleEquations to handle the case when function evaluation introduces constant variable assignments.

The current implementation of removeSimpleEquations is very slow; as a consequence, evalfunc is slow as well.

Once the new removeSimpleEquations module is complete, we should re-evaluate whether to keep this tail call, or rather replace the evaluated variables directly, without the convenience of a full removeSimpleEquations pass.
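
For illustration only, such a targeted replacement could look roughly like the following Python sketch (hypothetical names throughout; the actual module is written in MetaModelica and operates on BackendDAE structures):

# Hedged sketch: substitute the constants produced by function evaluation
# directly, instead of running a full removeSimpleEquations pass afterwards.
def substitute_evaluated_constants(equations, evaluated):
    # equations: list of (lhs, rhs_tokens) pairs; evaluated: name -> constant
    result = []
    for lhs, rhs in equations:
        if lhs in evaluated:
            continue  # drop the trivial "x = const" assignment entirely
        # replace every occurrence of an evaluated variable by its value
        result.append((lhs, [evaluated.get(tok, tok) for tok in rhs]))
    return result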

Change History (15)

comment:1 by Willi Braun, 9 years ago

In the test case ScalableTestSuite.Thermal.Advection.ScaledExperiments.SteamPipe, the evalFunc module is quite time-consuming.

comment:2 by Francesco Casella, 9 years ago

See also #3695.

Until #3695 is fixed, could you please introduce a debug flag to disable the call to removeSimpleEquations in evalfunc?

comment:3 by Volker Waurich, 9 years ago

I will do so, but I am not sure whether everything will work.

comment:4 by Volker Waurich, 9 years ago

I have tested the model, and it seems that removeSimpleEquations is never actually called, since no function call can be evaluated. However, every function call is traversed completely, and there are 2*N calls which are quite deep, including expensive if-else constructs. So removeSimpleEquations is not to blame in this case, but evalfunc itself. At least it scales linearly, but it still takes the lion's share of the backend time. I recommend simply switching off the module if needed.

Last edited 9 years ago by Volker Waurich

comment:5 by Martin Sjölund, 9 years ago

If evalfunc scales so poorly, would it not be better to analyze each function first to determine if there are branches that could be selected by having a constant input expression? At least functions without constant inputs can be skipped, right? Do you know which function calls take so long to evaluate?
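
A rough sketch of that pre-filter (hedged Python pseudocode with hypothetical names, not the actual backend code):

def worth_evaluating(call_args, is_constant):
    # Skip the expensive traversal when no input argument is constant,
    # since only a constant input could select a branch statically.
    return any(is_constant(arg) for arg in call_args)

(As comment:7 below explains, this filter alone would miss the Spice-model case, where all branches of an if-else yield the same result even though no input is constant.)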

comment:6 by Francesco Casella, 9 years ago

There are N function calls to Modelica.Media.Water.WaterIF_97_base.setState_ph and N function calls to Modelica.Media.Water.WaterIF_97_base.density.

The latter simply returns state.d and should be inlined, so I don't see any problem with that.

The former includes the complete IF97 whiz-bang thing, including lateInline black magic. Of course one could switch off evalfunc entirely, but in general there could be other functions in the model that would benefit from being partially evaluated. It would be nice if OMC could quickly tell that there's not much to be gained, without traversing the whole thing.

In fact, there's nothing particularly wrong with traversing this function once; the problem is that OMC does it N times, without recognizing that it is always the same function. So another line of attack could be to employ some caching: remember the results of previous analyses of the same function with input arguments of the same variability (or at least remember the cases where there was nothing to do).
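
A minimal sketch of such a cache, keyed by the function and the variability of its inputs (hedged Python pseudocode; variability and analyze_and_evaluate are hypothetical placeholders, not OMC functions):

def variability(arg):
    ...  # placeholder: would return e.g. "constant", "parameter", "variable"

def analyze_and_evaluate(func_name, args):
    ...  # placeholder for the expensive recursive traversal

cache = {}

def evaluate_with_cache(func_name, args):
    # Repeated calls to the same function with the same input variability
    # reuse the earlier analysis instead of traversing the function again.
    key = (func_name, tuple(variability(a) for a in args))
    if key not in cache:
        cache[key] = analyze_and_evaluate(func_name, args)  # may be "nothing to do"
    return cache[key]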

BTW, without implementing #3488 it will be quite complicated to test models that need special flags for improved performance. Volker, do you think it would take much time to implement such an annotation?

in reply to: 5; comment:7 by Volker Waurich, 9 years ago

Replying to sjoelund.se:
This evalFunc thing was implemented to get the Spice models running. Therefore, you need to evaluate function calls even if all inputs are unknown. Besides that, you have to check whether all branches of an if-else output the same value, even if the conditions cannot be evaluated. I implemented this using hash tables, with a new one initialized for every new function. And since there are many function calls inside function calls, everything is recursive.
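
As a hedged illustration of the if-else case (Python pseudocode, not the actual hash-table-based MetaModelica code; branch_values holds what each branch would assign to the output, or None if unknown):

def common_branch_value(branch_values):
    # If every branch assigns the same value to the output, the output
    # is that value even though the condition itself cannot be evaluated.
    first = branch_values[0]
    if first is not None and all(v == first for v in branch_values):
        return first
    return None  # branches differ or are unknown: nothing to simplify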

in reply to: 6; comment:8 by Volker Waurich, 9 years ago

Replying to casella:

> In fact, there's nothing particularly wrong with traversing this function once; the problem is that OMC does it N times, without recognizing that it is always the same function. So another line of attack could be to employ some caching: remember the results of previous analyses of the same function with input arguments of the same variability (or at least remember the cases where there was nothing to do).

That's how it should be done, from my point of view. I will put it on my schedule if no one else has a better idea.

> BTW, without implementing #3488 it will be quite complicated to test models that need special flags for improved performance. Volker, do you think it would take much time to implement such an annotation?

If the frontend provides these annotations, it should be no problem to adapt the optimization modules. But I think this testing procedure should be discussed with more people.

in reply to: 8; comment:9 by Francesco Casella, 9 years ago

Replying to vwaurich:

> Replying to casella:
>
> > BTW, without implementing #3488 it will be quite complicated to test models that need special flags for improved performance. Volker, do you think it would take much time to implement such an annotation?
>
> If the frontend provides these annotations, it should be no problem to adapt the optimization modules. But I think this testing procedure should be discussed with more people.

Feel free to do so in one of the next developers' meetings :)

in reply to: 8; comment:10 by Vitalij Ruge, 9 years ago

> If the frontend provides these annotations, it should be no problem to adapt the optimization modules. But I think this testing procedure should be discussed with more people.

Can you have a look at https://trac.openmodelica.org/OpenModelica/ticket/3197#comment:5 too (from Rüdiger)?

The idea is to have graphical support in OMEdit for such settings, e.g. by defining records like:

record OMCSetting
  Boolean addDerAliases = true;
  String preOptModules[:] = {"resolveLoops", "remove", ..};
  annotation(__OpenModelica(OMCSetting=true));
end OMCSetting;

record SimSetting
  String solver = "dassl";
  Real tol = 1e-8;
  annotation(__OpenModelica(SimSetting=true));
end SimSetting;

or, alternatively, with the annotation placed on the model that uses the record:

record OMCSetting
  Boolean addDerAliases = true;
  String preOptModules[:] = {"resolveLoops", "remove", ..};
end OMCSetting;

model M
  OMCSetting omcSetting;
  annotation(__OpenModelica(OMCSetting=omcSetting));
end M;

and later we could add these records to some internal library; new flags would then just mean updating the library.

if possible :)

Last edited 9 years ago by Vitalij Ruge

comment:11 by Francesco Casella, 9 years ago

The idea is interesting, but I am a bit uncomfortable with adding an OMCSetting object to my models.

For example, the ScalableTestSuite library is by no means meant to be used in OMC only. Should we add tool-specific code to those models for each tool that we want to try the library on?

The Modelica code should be (as much as possible) tool independent. I'd rather restrict the tool-specific stuff to vendor annotations, which were designed for this purpose. Those annotations should be the only place where tool-specific stuff shows up.

My 2 cts :)

Last edited 9 years ago by Francesco Casella

comment:12 by Martin Sjölund, 9 years ago

Milestone: 1.9.4 → 1.9.5

Milestone pushed to 1.9.5

comment:13 by Martin Sjölund, 9 years ago

Milestone: 1.9.5 → 1.10.0

Milestone renamed

comment:14 by Volker Waurich, 9 years ago

Resolution: fixed
Status: new → closed

I have committed functionality to cache already-evaluated functions and to stop evaluating them again if evaluation was unsuccessful before.
For ScalableTestSuite.Thermal.Advection.ScaledExperiments.SteamPipe, the time for evalFunc is now independent of the model scaling.
Therefore, I will close the ticket.
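
Conceptually, the committed fix amounts to a cache with negative entries for unsuccessful evaluations. The following is a hedged Python sketch only (the real implementation lives in the MetaModelica backend; evaluate is a hypothetical callback for the expensive traversal):

evaluated = {}   # function name -> previously computed result
failed = set()   # functions for which evaluation yielded nothing before

def try_evaluate(func_name, evaluate):
    if func_name in failed:
        return None  # skip: a previous attempt was unsuccessful
    if func_name in evaluated:
        return evaluated[func_name]  # reuse the earlier successful result
    result = evaluate(func_name)  # the expensive traversal, done at most once
    if result is None:
        failed.add(func_name)
    else:
        evaluated[func_name] = result
    return result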

comment:15 by Martin Sjölund, 7 years ago

Milestone: 1.10.0 → (none)

Milestone deleted
