#3486 closed defect (invalid)
Performance issue with optimization module constantLinearSystem
| Reported by: | Lennart Ochel | Owned by: | Lennart Ochel |
|---|---|---|---|
| Priority: | high | Milestone: | never |
| Component: | Backend | Version: | |
| Keywords: | constantLinearSystem | Cc: | |

Description
This module scales badly for large systems. The following table compares the time spent in this module with the total back-end time for ScalableTestSuite.Electrical.TransmissionLine.ScaledExperiments.TransmissionLineModelica_N_xxx.
| N | constantLinearSystem time [s] | back end time [s] |
|---|---|---|
| 10 | 0.03 | 0.10 |
| 20 | 0.05 | 0.17 |
| 40 | 0.10 | 0.38 |
| 80 | 0.29 | 0.86 |
| 160 | 1.46 | 3.47 |
| 320 | 16.75 | 27.64 |
| 640 | 260.1 | 333.84 |
That means the module needs more than 75% of the total back-end time for N = 640.
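For reference, the empirical scaling exponent can be read off the table: between successive doublings of N, t₂/t₁ ≈ (N₂/N₁)^p, so p = log(t₂/t₁)/log 2. A short illustrative script (measured values copied from the table above):

```python
import math

# (N, constantLinearSystem time [s]) taken from the table above
times = [(80, 0.29), (160, 1.46), (320, 16.75), (640, 260.1)]

# Local scaling exponent p in t ~ N^p between successive rows:
# p = log(t2/t1) / log(N2/N1)
for (n1, t1), (n2, t2) in zip(times, times[1:]):
    p = math.log(t2 / t1) / math.log(n2 / n1)
    print(f"N = {n1} -> {n2}: p ~ {p:.2f}")
```

The exponent grows from roughly 2.3 to almost 4.0 for the largest systems, i.e. the observed behaviour is even worse than the O(N^3) one would expect from dense linear algebra.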
Change History (8)
comment:1 by , 10 years ago
| Description: | modified (diff) |
|---|
comment:2 by , 10 years ago
comment:3 by , 10 years ago
One option could be to add >XXX and <XXX suffixes to the pre- and post-optimization flags, meaning that an optimization module is applied to a BLT block only if it contains more or fewer than XXX equations.
After some experimentation with test cases, we could determine reasonable default values to apply automatically.
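A minimal sketch of how such suffixed flags could be parsed and applied per BLT block; the suffix syntax (e.g. `constantLinearSystem<200`) is the hypothetical one proposed above, not an existing omc option:

```python
import re

def parse_module_flag(flag):
    """Split a hypothetical module flag like 'constantLinearSystem<200'
    into (module name, predicate on the BLT block's equation count).
    A plain name without a suffix always matches."""
    m = re.fullmatch(r"(\w+)([<>])(\d+)", flag)
    if not m:
        return flag, lambda size: True
    name, op, limit = m.group(1), m.group(2), int(m.group(3))
    if op == "<":
        return name, lambda size: size < limit
    return name, lambda size: size > limit

# Apply the module only to BLT blocks whose size satisfies the predicate
name, applies = parse_module_flag("constantLinearSystem<200")
print([size for size in (10, 150, 500) if applies(size)])
```

With this scheme the default flag set could ship with tuned limits, while users remain free to override them.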
comment:4 by , 10 years ago (follow-up: 5)
Yes, I agree. This module is currently deactivated only for optimizing the initialization system. I did this because no test covers any improvement from using this module on the initialization system. I think it could be helpful for performing the symbolic consistency check on over-determined models; however, there is no such test case yet.
This module is still activated for post-optimizing the simulation system, so we need a strategy to handle the bad scaling for large systems. We may gain some performance by improving the implementation itself, but I expect that this module cannot do better than O(N^3). One way to handle this would be to introduce a (configurable) maximum system size above which a system is skipped.
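The O(N^3) expectation comes from dense Gaussian elimination on an N×N constant system; a rough operation count illustrates it (a sketch of the textbook algorithm's cost, not the module's actual implementation):

```python
def gauss_flops(n):
    # Forward elimination: for each pivot k, the remaining
    # (n-k-1) x (n-k) block is updated with one multiply and one
    # add per entry, giving ~(2/3) n^3 flops in total.
    return sum(2 * (n - k - 1) * (n - k) for k in range(n))

for n in (160, 320, 640):
    print(n, gauss_flops(n))
```

Doubling N multiplies the count by roughly a factor of 8, which is why a single size cutoff is an effective guard against runaway post-optimization times.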
comment:7 by , 10 years ago
| Resolution: | → invalid |
|---|---|
| Status: | new → closed |
The time measurements above are wrong. It seems that the analysis of the initialization system itself takes most of the time. I will investigate this more closely.
comment:8 by , 10 years ago
| Milestone: | Future → never |
|---|
Moving tickets closed as invalid, won't fix, or duplicate out of the Future milestone.

One general comment: as far as I understand, the back end handles algebraic systems of equations with the methods specified by the pre- and post-optimization options irrespective of system size. This is not a good idea, because some techniques are excellent for systems up to a limited size but do not scale up acceptably due to their inherent complexity.
I don't think that turning individual modules on and off when calling omc solves this, because the same model may contain subsystems of different sizes that call for different optimization techniques.
To handle large-scale systems, we need a framework that provides this flexibility and, with good heuristics, makes the choice of the optimal method transparent to the end user.
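In its simplest form, such a framework could be a size-driven method table consulted once per algebraic subsystem; all method names and thresholds below are purely illustrative, not OpenModelica code:

```python
# Hypothetical heuristic: pick a solution method per algebraic
# subsystem based on its equation count, so the choice is
# transparent to the end user.
HEURISTICS = [
    (50,   "symbolic"),         # small blocks: exact symbolic solution
    (500,  "dense_lu"),         # medium blocks: dense LU factorization
    (None, "sparse_iterative"), # anything larger: sparse/iterative solver
]

def choose_method(n_equations, table=HEURISTICS):
    for limit, method in table:
        if limit is None or n_equations <= limit:
            return method
    raise ValueError("heuristic table must end with a catch-all entry")

print([choose_method(n) for n in (10, 320, 10000)])
```

The thresholds themselves would be the tunable part, determined from benchmark suites such as the ScalableTestSuite and overridable by expert users.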