#3813 closed defect (fixed)
Execution time of simCode: created modelInfo and variables scales badly
Reported by: | Francesco Casella | Owned by: | Lennart Ochel |
---|---|---|---|
Priority: | critical | Milestone: | |
Component: | Backend | Version: | |
Keywords: | | Cc: | Per Östlund, Adrian Pop |
Description
Based on results from the ScalableTestSuite and from my own large network models, the phase of SimCode which creates modelInfo and variables scales as O(N^1.48), N being the number of equations. DistributionSystemModelica_N_56_M_56, at 65000 equations, takes about 10 seconds. A system with a million equations would take over 10 minutes.
There is most likely some operation in this part of the code that scales as O(N^2); removing it would give dramatic improvements on the largest models. I see no reason why this part of the code should take more than O(N) time to execute.
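(As a quick sanity check of the extrapolation above, a few lines of Python reproduce the arithmetic; this sketch is illustrative and not part of the original report.)

```python
# Sanity check of the extrapolation (illustrative, not from the ticket):
# scale the measured ~10 s at N = 65000 equations by (N'/N)^1.48 to
# estimate the time for one million equations.
t_measured, n_measured = 10.0, 65_000
n_target = 1_000_000

t_target = t_measured * (n_target / n_measured) ** 1.48
print(f"{t_target:.0f} s ~ {t_target / 60:.1f} min")  # ~570 s, on the order of ten minutes
```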
Change History (7)
comment:1 by , 9 years ago
Cc: | added |
---|
comment:2 by , 9 years ago
Sorting is one of the potential bottlenecks. The other one, which takes about the same amount of time for some models (not scaled up fully), is:
```
((outVars, _, _, hs)) := BackendVariable.traverseBackendDAEVars(knvars1, extractVarsFromList, (outVars, aliasVars1, knvars1, hs));
((outVars, _, _, hs)) := BackendVariable.traverseBackendDAEVars(aliasVars1, extractVarsFromList, (outVars, aliasVars1, knvars1, hs));
((outVars, _, _, hs)) := BackendVariable.traverseBackendDAEVars(extvars1, extractVarsFromList, (outVars, aliasVars1, knvars1, hs));
```
comment:3 by , 9 years ago
```
Notification: Performance of createVars pre-calc start values: time 35.76/1074, memory: 0.8873 GB / 115.7 GB
Notification: Performance of createVars eval start values: time 0.1095/1074, memory: 0 / 115.7 GB
Notification: Performance of createVars simulation: time 61.78/1135, memory: 1.167 GB / 116.9 GB
Notification: Performance of createVars initialization: time 24.73/1160, memory: 1.488 MB / 116.9 GB
Notification: Performance of createVars sort: time 4.961/1165, memory: 0.9805 GB / 117.9 GB
Notification: Performance of createVars fixIndex: time 0.2962/1165, memory: 213.9 MB / 118.1 GB
Notification: Performance of createVars set var index: time 0.3655/1166, memory: 242.4 MB / 118.3 GB
```
Pre-calculating the start values can be disabled, but that might generate bigger C code? I'm a bit unsure about this.
comment:4 by , 9 years ago
extractVarsFromList seems to have been based on a hash set of default size, which makes it go from the expected linear time to quadratic, since the data structure used for conflict handling is a linear search. With 720k variables and a hash set of size 4000, each bucket's list holds ~180 elements, and we will search on average all of them (there are few actual conflicts)... So roughly 720k²/4000 comparisons between long strings. Yaaay. I'll see if just changing the size has a good effect.
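(A minimal Python sketch of the effect, assuming straightforward chained hashing; this is illustrative code, not the actual MetaModelica HashSet. With a fixed bucket count B, inserting N distinct keys costs on the order of N²/(2B) key comparisons, so growing B with N restores linear behavior.)

```python
# Illustrative sketch (plain Python, not the MetaModelica HashSet) of a
# chained hash set with a fixed bucket count. Inserting N distinct keys
# costs ~N^2/(2B) comparisons, because every insert scans its bucket's
# list for conflicts. Scaled down to run quickly; the ticket's numbers
# (N = 720k, B = 4000) would give on the order of 6e10 comparisons.
class FixedHashSet:
    def __init__(self, num_buckets):
        self.buckets = [[] for _ in range(num_buckets)]
        self.comparisons = 0

    def add(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for existing in bucket:  # linear search for a conflicting entry
            self.comparisons += 1
            if existing == key:
                return
        bucket.append(key)

for num_buckets in (100, 20_000):
    hs = FixedHashSet(num_buckets)
    for i in range(20_000):
        hs.add(f"system.component[{i}].x")
    print(num_buckets, hs.comparisons)  # ~2e6 vs ~1e4 comparisons
```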
comment:5 by , 9 years ago
With a more reasonable hash-size:
```
Notification: Performance of createVars pre-calc start values: time 8.55/1072, memory: 0.8762 GB / 115.6 GB
Notification: Performance of createVars eval start values: time 0.1186/1072, memory: 1.406 kB / 115.6 GB
Notification: Performance of createVars simulation: time 9.263/1082, memory: 1.159 GB / 116.8 GB
Notification: Performance of createVars initialization: time 0.368/1082, memory: 1.488 MB / 116.8 GB
Notification: Performance of createVars sort: time 5.334/1087, memory: 0.9805 GB / 117.8 GB
Notification: Performance of createVars fixIndex: time 0.3115/1088, memory: 213.9 MB / 118 GB
Notification: Performance of createVars set var index: time 0.3956/1088, memory: 242.4 MB / 118.2 GB
```
(Most variables in initialization overlap with simulation, so it is expected that this phase is a lot quicker. Before, it took a long time due to resolving hash conflicts.)
comment:6 by , 9 years ago
Resolution: | → fixed |
---|---|
Status: | new → closed |
The situation has improved a lot: now DistributionSystemModelica_N_56_M_56 takes 4 seconds and, more importantly, it seems that the time is now close to O(N) or O(N*log(N)). DistributionSystemModelica_N_28_M_28 has 16000 equations and takes 0.95 s, while DistributionSystemModelica_N_112_M_112, which has 16 times as many, takes 22 s.
One may argue that 85 µs per equation is still too much time, but it's now less than 1% of the total build time on the larger models, and it scales properly, so I'd close the ticket, at least for the time being.
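(As a quick consistency check, illustrative and not from the original ticket: N*log(N) predicts a time ratio of about 21 between the two models above, close to the measured ~23, and the per-equation time works out to ~86 µs.)

```python
import math

# Consistency check of the figures above (illustrative, not from the
# ticket): does 16000 eq / 0.95 s vs. 256000 eq / 22 s look like N log N?
n_small, t_small = 16_000, 0.95
n_large, t_large = 16 * 16_000, 22.0

predicted = (n_large * math.log(n_large)) / (n_small * math.log(n_small))
print(f"N log N predicts ratio {predicted:.1f}, measured {t_large / t_small:.1f}")
print(f"{t_large / n_large * 1e6:.0f} us per equation")  # ~86 us
```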
comment:7 by , 9 years ago
So this has to do with sorting variables (I am fairly sure). The problem is that there are a lot of variables, so the comparisons become expensive. There is even some memory allocation going on (which could be removed as well).
One way to deal with this would be to do a bucket-sort kind of thing instead of our default sort (for things like Absyn.Path, DAE.ComponentRef, etc.), i.e. sending in functions that tell us: bucketIsSorted, nextBucket. For example:
```
A.B.C, B.A, A.C.D, A.B.D
->
A: B.C, C.D, B.D
B: A (len=1, trivial)

A: B.C, C.D, B.D
->
B: C, D
C: D (len=1, trivial)
```
I guess everyone knows radix sort; this would be similar to a postman's sort. It would be interesting to implement anyway. It costs twice the amount of memory compared to quicksort, but I guess it might consume less than mergesort.
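(A minimal sketch of the idea in Python, illustrative only and not OpenModelica code: bucket the dotted paths on their first identifier, then recurse into each bucket on the remainder, treating single-element buckets as trivially sorted.)

```python
from collections import defaultdict

def postman_sort(paths):
    """Sort dotted paths like 'A.B.C' by recursively bucketing on identifiers.

    Illustrative sketch of an MSD radix / postman's sort: full paths are
    never compared directly, only the short identifiers at each level.
    """
    def rec(split_paths):
        if len(split_paths) <= 1:
            return split_paths                        # len = 1 is trivially sorted
        exhausted = [p for p in split_paths if not p] # shorter paths sort first
        buckets = defaultdict(list)
        for p in split_paths:
            if p:
                buckets[p[0]].append(p[1:])           # bucket on the first identifier
        out = exhausted
        for head in sorted(buckets):                  # few distinct short heads: cheap
            out += [[head] + rest for rest in rec(buckets[head])]
        return out

    return [".".join(p) for p in rec([p.split(".") for p in paths])]

print(postman_sort(["A.B.C", "B.A", "A.C.D", "A.B.D"]))
# ['A.B.C', 'A.B.D', 'A.C.D', 'B.A']
```

Each level only orders the distinct first identifiers, so the expensive comparisons of full component references (and the allocations they cause) are avoided.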