There are improvements, but the problem is still there.

I first tried to run a batch of simulations. It is a simple bash script
{{{
ulimit -s 65000
omc Test_N_2_M_4.mos > log_test.txt
omc Test_N_3_M_4.mos >> log_test.txt
omc Test_N_4_M_4.mos >> log_test.txt
omc Test_N_6_M_4.mos >> log_test.txt
omc Test_N_8_M_4.mos >> log_test.txt
omc Test_N_11_M_4.mos >> log_test.txt
}}}
that runs several {{{.mos}}} scripts in sequence and piles up the results in a log file. Each {{{.mos}}} file loads the models, sets some flags, and runs {{{simulate}}}:
{{{
loadModel(Modelica);getErrorString();
loadFile("../GIT/PowerGrids/PowerGrids/package.mo");getErrorString();
loadFile("../GIT/ScalableTestGrids/ScalableTestGrids/package.mo");getErrorString();
cd("Temp");getErrorString();
simulate(ScalableTestGrids.Models.Type1_N_2_M_4, cflags="-O0");getErrorString();
}}}
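
To compare how the individual runs scale, one option (just a sketch, assuming GNU time is installed as {{{/usr/bin/time}}} and following the file-name pattern above) would be to wrap each call so that wall-clock time and peak resident memory also end up in the log:
{{{
ulimit -s 65000
for n in 2 3 4 6 8 11; do
  echo "=== Test_N_${n}_M_4 ===" >> log_test.txt
  # GNU time's -v prints elapsed time and maximum resident set size to stderr
  /usr/bin/time -v omc Test_N_${n}_M_4.mos >> log_test.txt 2>&1
done
}}}
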
All tests up to {{{N_8_M_4}}} went fine and scaled more or less linearly in all phases, as expected. The {{{N_11_M_4}}} test failed with {{{mmap(PROT_NONE) failed}}} and dumped core, apparently somewhere during the first phases of compilation, because no {{{.c}}} file was produced and no output made it to the log file, or, if it did, the buffer was not flushed before the core dump. I saw a Python script working for a few minutes, so the core dump should have been transmitted to the home base. I guess it was huge; maybe you can do some post-mortem analysis on it.
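
To avoid losing the buffered output next time, and to make sure the core file ends up somewhere findable, something along these lines could help (a sketch; {{{stdbuf}}} only helps if omc relies on default stdio buffering, which I have not verified):
{{{
ulimit -c unlimited                    # allow a full-size core dump
cat /proc/sys/kernel/core_pattern      # check where core files actually end up on this machine
# force line buffering and tee the output, so the log keeps whatever was printed before a crash
stdbuf -oL -eL omc Test_N_11_M_4.mos 2>&1 | tee -a log_test.txt
}}}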

Shortly before the crash, {{{htop}}} reported about 16 GB used out of the 72 GB available, so there was still plenty of memory left. Of course I have no idea about fragmentation.
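
Next time it might be worth recording memory over time instead of relying on an occasional look at {{{htop}}}; a minimal sketch (file names are just examples) that samples both system-wide usage and the omc process every 30 seconds:
{{{
omc Test_N_11_M_4.mos >> log_test.txt &
OMC_PID=$!
while kill -0 "$OMC_PID" 2>/dev/null; do
  date >> mem_log.txt
  free -m >> mem_log.txt                                              # system-wide usage
  grep -E 'VmSize|VmRSS' /proc/$OMC_PID/status >> mem_log.txt 2>/dev/null  # omc virtual/resident size
  sleep 30
done
}}}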

I then re-ran {{{Test_N_11_M_4.mos >> log.txt}}} (400,000 equations) outside the bash script, and this time it went fine. I'm not sure why. I understand that Linux does not release memory until a process ends, but that should not matter across the whole bash script, since each omc invocation in it is a separate process started sequentially, so I'm a bit puzzled.
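
A cheap way to check that assumption would be to log free memory between the sequential runs, e.g. (sketch):
{{{
free -m >> log_test.txt    # baseline before the first run
omc Test_N_8_M_4.mos >> log_test.txt
free -m >> log_test.txt    # should be back near the baseline if the previous omc released its memory
omc Test_N_11_M_4.mos >> log_test.txt
free -m >> log_test.txt
}}}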

I'm now running {{{Test_N_16_M_4.mos}}} (800,000 equations); let's see how it goes. At some point we should also see larger models in this [https://libraries.openmodelica.org/branches/master/ScalableTestGrids_noopt/ScalableTestGrids_noopt.html report], though I'm afraid the memory and time limits will not allow going beyond {{{Type1_N_6_M_4}}}. Unless @sjoelund.se gives it a bit more slack, that is :)
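
In case the 800,000-equation model does not fit, one option (just a sketch; the cap and the log-file name are arbitrary) would be to limit the address space so that omc fails quickly with an out-of-memory error instead of pushing the machine into swap:
{{{
ulimit -s 65000
ulimit -v $((64 * 1024 * 1024))   # cap virtual memory at ~64 GB (ulimit -v takes KiB)
omc Test_N_16_M_4.mos > log_test_16.txt 2>&1
}}}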

I guess 400,000 equations is currently the limit for practical applications, even though in some cases you may be willing to go a bit further and wait one or two hours for compilation, provided that the simulation is fast. A notable case was reported in [https://www.sciencedirect.com/science/article/pii/S0920379617300832?via%3Dihub this paper], which we were able to write a few years ago. In that case the simulation was run many times and was really fast compared to CFD, so waiting for the compilation was not a big deal.

I understand that the long-term way out of this problem is array-based code generation, but that will take some time, and it would be a shame to have a regression compared to what we could accomplish in 2016.