Opened 9 years ago

Closed 7 years ago

#3816 closed defect (fixed)

Matching and sorting scales as O(N^3) in models with large numbers of discrete variables

Reported by: Francesco Casella Owned by: Patrick Täuber
Priority: critical Milestone: 1.12.0
Component: Backend Version:
Keywords: Cc:

Description

Consider the models BreakerNetwork_N_XX in the ScalableTestSuite test results. These are the times spent in the prepare postOptimizeDAE phase:

N time
160 0.514
320 4.7
640 49
1280 436

which is O(N³).
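To check the claimed cubic scaling, a least-squares fit of log(time) against log(N) over the four data points above gives an exponent close to 3. A quick sketch in Python (the numbers are copied from the table; this is just an illustration, not part of any test suite):

```python
import math

# Times reported above for BreakerNetwork_N_XX (prepare postOptimizeDAE)
sizes = [160, 320, 640, 1280]
times = [0.514, 4.7, 49, 436]

# Slope of log(time) vs log(N) estimates the scaling exponent
xs = [math.log(n) for n in sizes]
ys = [math.log(t) for t in times]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"estimated exponent: {slope:.2f}")  # comes out near 3
```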

Note that in other similarly large models without discrete variables, the scaling is roughly linear; see, e.g., DistributionSystemModelica_N_XX_M_XX:

NxM time
196 0.138
400 0.652
784 0.414
1600 0.883

Change History (13)

comment:1 by Francesco Casella, 9 years ago

Component: *unknown* → Backend
Milestone: Future → 1.10.0
Owner: changed from somebody to Lennart Ochel
Priority: high → critical

comment:2 by Henning Kiel, 9 years ago

The culprit is mapSortEqnsDAE (Tarjan's algorithm).
It would be worth looking into: I see it called 27 times during simulation, and three of those calls have considerable running times (one of them in the prepare postOptimizeDAE phase).

in reply to:  2 comment:3 by Francesco Casella, 8 years ago

Replying to hkiel:

The culprit is mapSortEqnsDAE (Tarjan's algorithm)

I understand that the complexity of Tarjan's algorithm is linear in the number of nodes and edges of the graph, so this step should scale as O(N). I guess there is something very inefficient in the algorithm's implementation.
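For reference, Tarjan's strongly connected components algorithm is indeed O(V + E): each node is pushed and popped exactly once and each edge is inspected exactly once. A minimal, self-contained Python sketch of the iterative version (not the MetaModelica implementation used in the backend, just to show the linear cost structure):

```python
def tarjan_sccs(graph):
    """Tarjan's SCC algorithm, iterative form.

    graph: dict mapping node -> list of successor nodes.
    Returns a list of strongly connected components (lists of nodes).
    Each node is pushed/popped once and each edge visited once: O(V+E).
    """
    index_of, low = {}, {}
    on_stack, stack = set(), []
    sccs, counter = [], 0

    for root in graph:
        if root in index_of:
            continue
        work = [(root, iter(graph.get(root, ())))]
        while work:
            node, it = work[-1]
            if node not in index_of:
                index_of[node] = low[node] = counter
                counter += 1
                stack.append(node)
                on_stack.add(node)
            advanced = False
            for succ in it:
                if succ not in index_of:
                    # Descend into an unvisited successor
                    work.append((succ, iter(graph.get(succ, ()))))
                    advanced = True
                    break
                elif succ in on_stack:
                    low[node] = min(low[node], index_of[succ])
            if not advanced:
                work.pop()
                if work:
                    parent = work[-1][0]
                    low[parent] = min(low[parent], low[node])
                if low[node] == index_of[node]:
                    # node is the root of an SCC: pop it off the stack
                    scc = []
                    while True:
                        w = stack.pop()
                        on_stack.discard(w)
                        scc.append(w)
                        if w == node:
                            break
                    sccs.append(scc)
    return sccs
```

With a linear algorithm, the cubic blow-up observed here has to come from somewhere around the sorting step, not from Tarjan itself.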

comment:4 by Francesco Casella, 8 years ago

In the meantime, the performance reported by execstat on Hudson has improved by a constant factor of about 2.5 w.r.t. the values reported when I opened this ticket, but it still scales as O(N³).

comment:5 by Martin Sjölund, 8 years ago

Milestone: 1.10.0 → 1.11.0

Ticket retargeted after milestone closed

in reply to:  description comment:6 by Francesco Casella, 8 years ago

Status update, with results from libraries.openmodelica.org

N time
160 0.4
320 3.6
640 27.9
1280 237.3

which is still O(N³).

comment:7 by Martin Sjölund, 8 years ago

Milestone: 1.11.0 → 1.12.0

Milestone moved to 1.12.0 due to 1.11.0 already being released.

comment:8 by Francesco Casella, 7 years ago

Owner: changed from Lennart Ochel to Patrick Täuber
Status: new → assigned

@ptaeuber, could you have a look at this? According to @hkiel's comment above, this large amount of time is spent in matching and sorting.

Maybe there is something wrong (or grossly naive) in how we handle the sorting of hybrid systems? This phenomenon only occurs when many events and discrete variables are involved.

Thanks!

comment:9 by Francesco Casella, 7 years ago

Summary: Prepare PostOptimizeDAE scales as O(N^3) in models with large numbers of discrete variables → Matching and sorting scales as O(N^3) in models with large numbers of discrete variables

BTW, the BreakerNetwork_N_XX models show similar times for tearingSystem as well. I guess the root cause is the same; please have a look at that too.

I am changing the summary to reflect more precisely what the problem is.

comment:10 by Patrick Täuber, 7 years ago

Status: assigned → accepted

comment:11 by Henning Kiel, 7 years ago

Most of the time is spent in Differentiate.differentiateExp() for big models:

N     time tearing (init)  time tearing (sim)  time tearing w/o LC (init)  time tearing w/o LC (sim)
320   3.86                 4.04                1.44                        1.41
640   33.7                 33.3                7.97                        7.27
1280  281.6                275.9               46.85                       42.3

Most of that time is spent on loop checking during the differentiation, so with the loop check removed (w/o LC) the times are much lower. See #4454. I would go for removing the check completely, as proposed in PR1698.
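As a toy illustration (not actual OpenModelica code) of why a per-call loop check dominates: if each of the N differentiation calls performs an O(N) scan over the equation system, the total work is O(N²), whereas without the check it stays O(N):

```python
def differentiate_all(equations, loop_check=True):
    """Toy cost model: differentiate N equations, optionally running
    an O(N) loop check inside every differentiation call.

    Counts abstract operations only, to show how a per-call linear
    scan turns O(N) total work into O(N^2).
    """
    ops = 0
    for eq in equations:
        if loop_check:
            ops += len(equations)  # scan all equations for a cycle
        ops += 1                   # the differentiation itself
    return ops
```

For N = 100 this counts 10100 operations with the check and 100 without; the gap grows quadratically with N, which is consistent with the large constant-free speedups in the table above.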

comment:12 by Patrick Täuber, 7 years ago

Since the loop check was removed, much less time is spent in prepare postOptimizeDAE (which is actually the matching and sorting of the initial system) as well:

N     time (prepare postOptimizeDAE)
320   0.226
640   0.6898
1280  2.188

comment:13 by Francesco Casella, 7 years ago

Resolution: fixed
Status: accepted → closed

Excellent result, thanks for your efforts! A two orders of magnitude improvement on the largest model sounds really impressive :)

I guess I can now close this ticket for good.

Note: See TracTickets for help on using tickets.