Opened 9 years ago
Last modified 9 years ago
#3668 assigned defect
Memory allocation of the back-end and code generation grows quadratically with the system size
Reported by: | Francesco Casella | Owned by: | Willi Braun |
---|---|---|---|
Priority: | high | Milestone: | Future |
Component: | Backend | Version: | v1.9.4-dev-nightly |
Keywords: | | Cc: | andrea.bartolini@… |
Description (last modified)
Consider the attached test package: it has one large algebraic system of equations driven by a scalar differential equation.
I have used these settings, which include some that are essential for the efficient handling of the large algebraic system:
setCommandLineOptions("--preOptModules-=clockPartitioning --postOptModules-=detectJacobianSparsePattern --postOptModules+=wrapFunctionCalls --disableLinearTearing --removeSimpleEquations=new --indexReductionMethod=uode --tearingMethod=omcTearing -d=dumpSimCode,gcProfiling,execstat,nogen,initialization,backenddaeinfo,discreteinfo,stateselection");
simulate(LargeAlgebraic.M_2000, method = "rungekutta", stopTime = 1, numberOfIntervals = 10, simflags = "-lv LOG_STATS,LOG_LS -ls=klu");
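The attached package itself is not reproduced here. Purely as a hypothetical sketch of the structure described above (one scalar state driving a large algebraic system with 3 non-zero entries per row of the incidence matrix), and not the actual attachment, such a model could look like:

```modelica
// Hypothetical sketch only, NOT the actual LargeAlgebraic.mo attachment:
// one scalar state x drives an algebraic system of N equations whose
// incidence matrix is tridiagonal, i.e. 3 non-zeros per interior row.
model M_2000
  constant Integer N = 2000;
  Real x(start = 0, fixed = true);
  Real y[N];
equation
  der(x) = 1 - x;                           // scalar differential equation
  0 = 2*y[1] - y[2] - x;                    // boundary row
  for i in 2:N - 1 loop
    0 = -y[i - 1] + 3*y[i] - y[i + 1] - x;  // interior rows: 3 non-zeros
  end for;
  0 = -y[N - 1] + 2*y[N] - x;               // boundary row
end M_2000;
```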
Under Windows, the size of memory allocated by the back-end and code generation phases grows approximately as O(N²), N being the size of the algebraic system:
N | Memory (MB) |
---|---|
2000 | 260 |
4000 | 1000 |
6000 | 2000 |
Note that the number of non-zero elements in the incidence matrix of the system grows as O(N), as there are 3 non-zero elements in each row.
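As a quick consistency check of the quadratic trend, using only the figures in the table above: scaling the N = 2000 measurement quadratically predicts 260 MB × (4000/2000)² = 1040 MB against the measured 1000 MB, and 260 MB × (6000/2000)² = 2340 MB against the measured 2000 MB, so the growth is indeed close to O(N²).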
This is not sustainable for systems with more than a few thousand unknowns.
Attachments (3)
Change History (11)
by , 9 years ago
Attachment: | LargeAlgebraic.mo added |
---|
comment:1 by , 9 years ago
Cc: | added |
---|
comment:2 by , 9 years ago
Description: | modified (diff) |
---|---|
Summary: | Memory allocation of the back-end grows quadratically with the system size → Memory allocation of the back-end and code generation grows quadratically with the system size |
by , 9 years ago
Attachment: | LargeAlgebraic_MemoryUsage.pdf added |
---|
comment:3 by , 9 years ago
According to these results, it seems that the main culprit is matching and sorting, followed by preparePostOptimizeDAE, postOptWrapFunctionCalls, and postOptRemoveSimpleEquations.
Is there any reason why any of these functions should allocate O(N²) memory? I am particularly baffled by matching and sorting: the number of E-V nodes and of edges is definitely O(N), so why should O(N²) memory be needed?
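For a rough bound (assuming a standard augmenting-path matching on the bipartite equation-variable graph, which may differ from what the backend actually implements): such an algorithm only needs the adjacency lists plus a constant number of work arrays, i.e. O(|V| + |E|) memory; with |V| = 2N vertices and roughly 3N edges here, that is O(N). The matching itself should therefore not account for a quadratic allocation unless the incidence matrix is stored densely or copied on every pass.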
comment:4 by , 9 years ago
Status: | new → accepted |
---|
This seems to be connected to very inefficient structures in the backend. I have already managed to dramatically reduce the memory consumption for the first 13 modules that follow matching/sorting.
comment:5 by , 9 years ago
Basically all the memory is consumed by the computation of symbolic Jacobians.
This computation is part of the analysis of strong components, which is first performed right after matching/sorting. The analysis may be updated after each post-optimization module, which makes things even worse.
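Assuming a dense Jacobian representation (one plausible reading of the measurements, not something stated explicitly here), the quadratic growth follows directly: differentiating each of the N residuals of the single large algebraic loop with respect to each of its N unknowns materializes up to N² derivative expressions, i.e. 4·10⁶ already at N = 2000, even though with 3 non-zeros per row only about 3N of them are non-zero.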
by , 9 years ago
Attachment: | LargeAlgebraic_MemoryUsage2.pdf added |
---|
Impact of symbolic Jacobian computation for SCC analysis on the memory consumption for model LargeAlgebraic.
comment:6 by , 9 years ago
This is probably due to double recursion in the function that computes the symbolic Jacobians. Willi will look into it and reimplement it more efficiently.
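As a general illustration of how such a blow-up can arise (not a claim about this particular function): applying the textbook product rule d(a·b) = d(a)·b + a·d(b) to a chain e = x1·x2·…·xn while copying subtrees instead of sharing them yields a derivative expression of size Θ(n²), and recursing into both children while also re-traversing them to rebuild results has the same effect. A single post-order pass that shares already-computed subresults keeps the output linear in the input.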
comment:7 by , 9 years ago
Owner: | changed from | to
---|---|
Status: | accepted → assigned |
comment:8 by , 9 years ago
Owner: | changed from | to
---|
A first analysis of the backend memory usage for model LargeAlgebraic.