Opened 9 years ago

Closed 9 years ago

Last modified 7 years ago

#3487 closed enhancement (fixed)

Size-dependent strategies for solving systems of equations in OMC

Reported by: Francesco Casella Owned by: Patrick Täuber
Priority: high Milestone:
Component: Backend Version:
Keywords: Cc: Adrian Pop

Description

Some methods that are used in OMC to solve systems of equations may only make sense for a certain range of system sizes: their complexity could prevent them from being usable on larger systems, while their overhead could make them inconvenient for smaller ones. This concerns both the optimization flags and the simulation flags.

For example, moderately sized sparse linear systems can be handled effectively by tearing, which has the advantage of moving most of the computational load to the code generation phase, which is carried out only once. However, as the system size grows, the time taken by the tearing algorithm could become prohibitively large, and the torn system might still end up being very sparse, so at some point it is better to avoid tearing and use a sparse solver instead. The same probably applies to nonlinear systems as well.

The break-even point might actually change depending on the end user's needs: if the simulation is very long or repeated many times without recompiling, it makes sense to spend more time in the compilation phase; the opposite holds when one is trying to debug a model with a short compile-run-modify cycle.

Currently, it is only possible to specify which method to use on a system-wide basis, but if a model contains both small and large systems, it is not possible to handle this situation appropriately.

It should then be possible to specify the range of sizes where the optimization should be applied, e.g.
--disableLinearTearing>1000 or --dynamicTearing>3<50.

For the simulation options, the -ls and -nls flags allow selecting one linear and one nonlinear solver globally, which is not flexible enough. It should instead be possible to make multiple selections depending on the system size, and possibly also on other features of the system (e.g. sparsity ratio, availability of an analytical Jacobian, etc.). I'm not sure what the best syntax is here to allow for maximum generality and flexibility without getting too complicated.
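As a rough illustration of the kind of per-system selection described above, here is a minimal sketch in Python. The function name, parameters, and thresholds are all hypothetical, chosen only to show the decision logic (size and density gating between a dense and a sparse solver), not an actual OMC interface:

```python
# Hypothetical per-system solver selection by size and sparsity.
# Names and default thresholds are illustrative only, not OMC flags.

def pick_linear_solver(n, nnz, min_sparse_size=10, max_density=0.01):
    """Pick a linear solver for an n-by-n system with nnz nonzero entries."""
    density = nnz / (n * n)
    if n >= min_sparse_size and density <= max_density:
        return "klu"     # large and sparse: use a sparse LU solver
    return "lapack"      # small or dense: use a dense LAPACK solver

print(pick_linear_solver(5, 20))        # small system -> 'lapack'
print(pick_linear_solver(5000, 20000))  # large, very sparse -> 'klu'
```

The same dispatch shape could be extended with further predicates (e.g. availability of an analytical Jacobian) once a flag syntax for expressing them is settled.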

The ultimate goal is to select good default values for all these flags, so that the compiler automatically selects the best solution strategy for the specific problem, without requiring input from the end user.

Change History (8)

comment:1 by Martin Sjölund, 9 years ago

A linear solver option sounds OK, along with some settings for both sparsity and size. Say:

-linearSparseSolverMaxDensity=0.01 -linearSparseSolverMinSize=10 -lss=klu -ls=lapack. I guess it would be rather cheap to implement support for different linear solvers at runtime.

in reply to:  1 comment:2 by Francesco Casella, 9 years ago

Replying to sjoelund.se:

A linear solver option sounds OK, along with some settings for both sparsity and size. Say:

-linearSparseSolverMaxDensity=0.01 -linearSparseSolverMinSize=10 -lss=klu -ls=lapack. I guess it would be rather cheap to implement support for different linear solvers at runtime.

Sounds good! I guess the minimum size ought to be bigger, but we should probably do some testing with the ScalableTestSuite library before deciding on a reasonable break-even value.

comment:3 by Martin Sjölund, 9 years ago

Milestone: 1.9.4 → 1.9.5

Milestone pushed to 1.9.5

comment:4 by Martin Sjölund, 9 years ago

Milestone: 1.9.5 → 1.10.0

Milestone renamed

comment:5 by Patrick Täuber, 9 years ago

Owner: changed from Lennart Ochel to Patrick Täuber
Status: new → accepted

comment:6 by Francesco Casella, 9 years ago

@ptaeuber introduced this feature in this commit and PR 529. It already had a very positive effect on the performance of the ScalableTestSuite library.

Compare the Hudson log before and after these changes, specifically the models DistributionSystemModelica_N_XX_M_XX, which feature a large implicit linear system of equations.

The changes substantially reduced the backend time and simulation times:

Model       size    density   Backend (old)  Sim (old)  Backend (new)  Sim (new)
N_40_M_40    6479   0.0%        176          12.2        37            6.1
N_56_M_56   12655   0.0%       1105          20.8       121           14.8

The smaller models have fewer than 4000 equations in the big linear system, so they still resort to tearing. It seems to me that the N_28_M_28 model, which has 3191 equations and a very low density, would still benefit a lot if we skipped tearing. Probably also N_20_M_20, with 1596 equations.

For the time being, I would suggest changing the defaults to maxSizeLinearTearing=1000 and -lssMinSize=1001.
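A toy sketch of the suggested handoff between the two strategies, assuming these flag semantics (tearing handles linear systems up to maxSizeLinearTearing, and the sparse solver takes over from lssMinSize on); with the values 1000/1001 the two ranges cover all sizes with no gap or overlap:

```python
# Assumed semantics of the suggested defaults; illustrative only.
MAX_SIZE_LINEAR_TEARING = 1000  # suggested default for maxSizeLinearTearing
LSS_MIN_SIZE = 1001             # suggested default for -lssMinSize

def strategy(n):
    """Pick the solution strategy for a linear system of size n."""
    if n <= MAX_SIZE_LINEAR_TEARING:
        return "tearing"
    assert n >= LSS_MIN_SIZE  # 1000/1001 leaves no gap between the regimes
    return "sparse-solver"

print(strategy(1000))  # tearing
print(strategy(1001))  # sparse-solver
```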

The implemented changes also made it possible to compile and simulate the much larger models N_80_M_80, N_112_M_112, and N_160_M_160, thanks to the reduced back-end time. Unfortunately, the time to compile the generated C code becomes very large; the reason for this should be investigated. I have commented out the experiment annotation of the N_160_M_160 model to skip it in future runs, because processing it would take several hours, which makes no sense.

comment:7 by Francesco Casella, 9 years ago

Resolution: fixed
Status: accepted → closed

There are various criteria for deciding what the "optimal" default for maxSizeLinearTearing is, and they give different results. For the time being, I guess the current defaults are OK.

comment:8 by Martin Sjölund, 7 years ago

Milestone: 1.10.0

Milestone deleted
