Opened 9 years ago
Last modified 9 years ago
#3668 assigned defect
Memory allocation of the back-end and code generation grows quadratically with the system size
Reported by: | casella | Owned by: | wbraun |
---|---|---|---|
Priority: | high | Milestone: | Future |
Component: | Backend | Version: | v1.9.4-dev-nightly |
Keywords: | Cc: | andrea.bartolini@… |
Description (last modified by casella)
Consider the attached test package: it has one large algebraic system of equations driven by a scalar differential equation.
I have used these settings, which include some that are essential for the efficient handling of the large algebraic system:
setCommandLineOptions("--preOptModules-=clockPartitioning --postOptModules-=detectJacobianSparsePattern --postOptModules+=wrapFunctionCalls --disableLinearTearing --removeSimpleEquations=new --indexReductionMethod=uode --tearingMethod=omcTearing -d=dumpSimCode,gcProfiling,execstat,nogen,initialization,backenddaeinfo,discreteinfo,stateselection");
simulate(LargeAlgebraic.M_2000, method = "rungekutta", stopTime = 1, numberOfIntervals = 10, simflags = "-lv LOG_STATS,LOG_LS -ls=klu");
Under Windows, the size of memory allocated by the back-end and code generation phases grows approximately as O(N²), N being the size of the algebraic system:
N | Memory (MB)
---|---
2000 | 260
4000 | 1000
6000 | 2000
Note that the number of non-zero elements in the incidence matrix of the system grows as O(N), as there are 3 non-zero elements in each row.
This is not sustainable for systems that have more than a few thousand unknowns.
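As a quick sanity check on the figures above, the three data points fit quadratic growth much better than linear growth (a rough sketch using only the numbers reported in the table):

```python
# Memory figures from the table above (Windows, back-end + code generation).
sizes = [2000, 4000, 6000]
mem_mb = [260, 1000, 2000]

# If memory ~ c * N^2, then mem/N^2 should be roughly constant across sizes;
# if memory ~ c * N, then mem/N would be roughly constant instead.
quad_ratios = [m / n**2 for n, m in zip(sizes, mem_mb)]
lin_ratios = [m / n for n, m in zip(sizes, mem_mb)]

print(max(quad_ratios) / min(quad_ratios))  # close to 1: quadratic fits
print(max(lin_ratios) / min(lin_ratios))    # well above 1: linear does not
```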
Attachments (3)
Change History (11)
Changed 9 years ago by casella
comment:1 Changed 9 years ago by casella
- Cc andrea.bartolini@… added
comment:2 Changed 9 years ago by casella
- Description modified (diff)
- Summary changed from Memory allocation of the back-end grows quadratically with the system size to Memory allocation of the back-end and code generation grows quadratically with the system size
Changed 9 years ago by lochel
comment:3 Changed 9 years ago by casella
According to these results, it seems that the main culprit is matching and sorting, followed by preparePostOptimizeDAE, postOptWrapFunctionCalls, and postOptRemoveSimpleEquations.
Is there any reason why any of these functions should allocate O(N²) memory? I am in particular baffled by matching and sorting: the number of E-V nodes and of edges is definitely O(N), so why should O(N²) memory be needed?
comment:4 Changed 9 years ago by lochel
- Status changed from new to accepted
This seems to be connected to a very bad structure in the backend. I have already managed to dramatically reduce the memory consumption for the first 13 modules, continuing from matching/sorting.
comment:5 Changed 9 years ago by lochel
Basically all the memory is consumed by computation of symbolic Jacobians.
This calculation is part of the analysis of strong components, which is first performed right after the matching/sorting. The analysis may be updated after each post-optimization module, which makes things even worse.
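To illustrate the scaling issue being described (an illustrative sketch, not OMC code): a dense representation of the symbolic Jacobian holds one entry per equation/variable pair, i.e. O(N²) entries, even though the incidence matrix of this model has only about 3 non-zeros per row, so a sparse representation would stay O(N):

```python
# Sketch: entry counts for dense vs. sparse storage of an N x N Jacobian.
# The function names here are illustrative only, not OpenModelica internals.
def dense_jacobian_entries(n):
    # one (possibly zero) derivative expression per equation/variable pair
    return n * n

def sparse_jacobian_entries(n, nnz_per_row=3):
    # only the structurally non-zero derivatives are stored
    return n * nnz_per_row

for n in (2000, 4000, 6000):
    print(n, dense_jacobian_entries(n), sparse_jacobian_entries(n))
```

Doubling N quadruples the dense count but only doubles the sparse one, matching the O(N²) memory growth reported in the description.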
Changed 9 years ago by lochel
Impact of symbolic Jacobian computation for SCC analysis on the memory consumption for model LargeAlgebraic.
comment:6 Changed 9 years ago by casella
Probably due to double recursion in the function that computes the symbolic Jacobians. Willi will look into that and reimplement it more efficiently.
comment:7 Changed 9 years ago by casella
- Owner changed from lochel to wbr
- Status changed from accepted to assigned
comment:8 Changed 9 years ago by casella
- Owner changed from wbr to wbraun
A first analysis of the backend memory usage for model LargeAlgebraic.