#5200 closed defect (fixed)
The backend does not handle reductions correctly
Reported by: | Francesco Casella | Owned by: | Lennart Ochel |
---|---|---|---|
Priority: | blocker | Milestone: | 1.14.0 |
Component: | New Instantiation | Version: | |
Keywords: | Cc: | Martin Sjölund, Willi Braun, Per Östlund |
Description
Please consider the following test model
```modelica
model TestMixture
  package Medium = Modelica.Media.IdealGases.MixtureGases.FlueGasSixComponents;
  Medium.BaseProperties medium;
  Medium.MolarMass MM[:] = Medium.data.MM;
equation
  medium.p = 1e5;
  medium.T = 300;
  medium.X[1] = time;
  medium.X[2] = 1 - time;
  medium.X[3:6] = zeros(4);
end TestMixture;
```
Run it with the new front-end (`-d=newInst`), as the old front-end does not flatten it correctly. The following error is reported:
```
assert | debug | division by zero at time 0, (a=1) / (b=0), where divisor b expression is:
sum(medium.state.X[j] / ({0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095})[j] for j in 1:6)
```
This comes from the inlining of the `molarMass` function:
```modelica
function TestMixture.Medium.molarMass
  input Medium.ThermodynamicState state;
  output Real MM(quantity = "MolarMass", unit = "kg/mol", min = 0.001, max = 0.25, nominal = 0.032);
algorithm
  MM := 1.0 / sum(state.X[j] / {0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095}[j] for j in 1:6);
end TestMixture.Medium.molarMass;
```
that contains an expression with a reduction, which is not handled correctly. Reductions in equations are normally handled by the front-end, but in this case the equation is created by inlining, so the back-end must handle it.
Apparently the back-end cannot handle this properly. So far this kind of expression never showed up, because the old front-end was flattening it incorrectly. In fact it shows up in all the models using mixture gases from Modelica.Media, so it should definitely be handled correctly when the NF is rolled out, and possibly already in 1.14.0, so that models using mixture gases can actually be used in OMC even before the NF replaces the OF. This has been an open problem for a long time, see #2858.
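For reference, the semantics the back-end must reproduce for this reduction can be sketched in plain Python (a minimal illustration, not OMC code; the constants are the molar masses from the error message above):

```python
# Molar masses [kg/mol] of the six flue-gas components, taken verbatim
# from the inlined molarMass function above.
data_MM = [0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095]

def molar_mass(X):
    """Mirror of MM := 1.0 / sum(X[j] / data_MM[j] for j in 1:6)."""
    return 1.0 / sum(X[j] / data_MM[j] for j in range(6))

# At time 0 the model equations give X = [0, 1, 0, 0, 0, 0]
# (X[1] = time, X[2] = 1 - time), so the divisor is nonzero and the
# mixture molar mass equals the molar mass of component 2.
print(molar_mass([0.0, 1.0, 0.0, 0.0, 0.0, 0.0]))
```

So the model itself is well-posed at time 0; the division by zero can only come from evaluating the equation with uninitialized mass fractions.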
Change History (6)
comment:1 by , 6 years ago
comment:2 by , 6 years ago
Cc: | added |
---|
As discussed in today's webmeeting, one option is to have the NF handle this reduction.
As we already did elsewhere, I would suggest to do this optionally under some flag, so that we can get the models to work, and then later disable the flag and deal with this properly in the back-end.
comment:3 by , 6 years ago
The issue seems to be that the generated code for the sum reduction doesn't actually do any summing. It just calculates each value and then throws it away, meaning that the result is initialized to 0.0 and then never touched again. It might be that the NF sets the wrong type somewhere in the reduction, tripping up the code generation and causing it to skip some parts of the generation.
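The failure mode described here can be illustrated with a small sketch (hypothetical pseudocode in Python, not the actual generated C): a sum reduction needs a fold step that accumulates each element into the result, and without it the result keeps its initial 0.0:

```python
values = [1.0, 2.0, 3.0]

def broken_sum(xs):
    """What the generated code effectively did: no fold step."""
    result = 0.0       # result initialized to 0.0 ...
    for x in xs:
        tmp = x        # each value is computed ...
        # missing fold: the value is thrown away instead of accumulated
    return result      # ... and the result is never touched again

def correct_sum(xs):
    """With the fold expression present, each element is accumulated."""
    result = 0.0
    for x in xs:
        result += x    # the fold step: result = fold(result, x)
    return result

print(broken_sum(values), correct_sum(values))  # 0.0 6.0
```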
follow-up: 5 comment:4 by , 6 years ago
Component: | Backend → New Instantiation |
---|---|
Resolution: | → fixed |
Status: | new → closed |
Fixed in 11d7be6. The NF wasn't generating the fold expression used by the code generation. I'm assuming the code generation would work correctly with this fix, but it seems the backend now optimizes away the whole function. The model now simulates; I'll leave it to Hudson to check whether the result is correct or not.
comment:5 by , 6 years ago
Replying to perost:

> The issue seems to be that the generated code for the sum reduction doesn't actually do any summing. It just calculates each value and then throws them away, meaning that the result is initialized to 0.0 and then never touched again. It might be that the NF sets the wrong type somewhere in the reduction and tripping the code generation up, causing it to skip some parts of the generation.
I'm afraid the issue here runs much deeper.
As far as I understand (but I may be wrong, please correct me if so), the current back-end cannot really perform proper structural analysis on equations that are not fully flattened out. We wish it could, and we want the NF to support this in the future and are working on it (see #5144), but that's not the default behaviour, and it won't be for a long time. In general, the current back-end was designed with fully scalarized equations in mind, so what we get when that is not the case is not guaranteed to be correct.
More specifically, if you turn on `-d=optdaedump`, you see that even though `medium.MM` obviously depends on `medium.X[1]` and `medium.X[2]`, which are time-varying variables, the back-end incorrectly moves the equation computing `medium.MM` to the known variables section
```
Known variables only depending on parameters and constants - globalKnownVars (17)
========================================
1: MM[1]:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 0.0280134 type: Real [6]
2: MM[2]:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 0.00201588 type: Real [6]
3: MM[3]:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 0.0280101 type: Real [6]
4: MM[4]:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 0.0319988 type: Real [6]
5: MM[5]:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 0.01801528 type: Real [6]
6: MM[6]:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 0.0440095 type: Real [6]
7: medium.p_bar:VARIABLE(flow=false unit = "bar" ) = 1.0 "Absolute pressure of medium in [bar]" type: Real
8: medium.T_degC:VARIABLE(flow=false unit = "degC" ) = 26.85000000000002 "Temperature of medium in [degC]" type: Real
9: medium.MM:VARIABLE(flow=false min = 0.001 max = 0.25 unit = "kg/mol" nominal = 0.032 ) = 1.0 / sum(medium.state.X[j] / {0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095}[j] for j in 1:6) "Molar mass (of mixture or single fluid)" type: Real
```
and of course when it computes `medium.MM` it gets a division by zero, because `medium.X[1]` and `medium.X[2]` haven't been computed yet.
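Numerically, this failure mode is easy to reproduce: if the equation is evaluated while `medium.X` still holds its default (zero) start values, the divisor of the reduction is exactly zero. A sketch, using the molar masses from the dump above:

```python
data_MM = [0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095]

# Mass fractions as they look before the time-dependent equations have
# been evaluated: all still at their default start value of 0.0.
X_uninitialized = [0.0] * 6

divisor = sum(X_uninitialized[j] / data_MM[j] for j in range(6))
print(divisor)  # 0.0 -> evaluating 1.0 / divisor triggers the
                # division-by-zero assert reported in the ticket
```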
We have two clear development goals for OMC. The first is (tentatively) set to Spring 2019, and is to replace the old FE with the NF, hopefully solving all the coverage issues that have been plaguing OMC so far. The second is to progressively move towards an architecture that avoids the expansion of arrays for increased efficiency. I don't know when this will be accomplished, but it's definitely in a much farther future time frame, particularly considering the lack of manpower currently available for this purpose.
IMHO, relying on the accomplishment of the second goal to accomplish the first would be a very bad strategic mistake.
As we already discussed, I think the only reasonable way to proceed is to design the NF to support unexpanded arrays, but then add a further expansion phase, conditioned by flags that will be disabled in the future, so that we can keep using the existing back-end, which is well-tested and really not bad except for large array-based models. I don't think it's worth asking ourselves the same question "should we try to fix this specific array-induced issue in the back-end right now?" time and again. The answer is no, because we lack the resources to do so and, most importantly, because this must be done in a coherent way with a well-laid general plan, not as a collection of spot developments.
I would then recommend to solve this issue (as well as #5163 and all other similar ones) according to this principle.
comment:6 by , 6 years ago
I checked `TestMixture` with the nightly build. The algorithm in the function is still flattened with the iterator

```modelica
MM := 1.0 / sum(state.X[j] / {0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095}[j] for j in 1:6);
```

but then the outcome of the inlining in the `evalFunc` back-end module is the correct and expected one
```
1/1 (1): medium.MM = 1.0 / (35.69720205330306 * medium.state.X[1] + 496.0612734885013 * medium.state.X[2] + 35.70140770650587 * medium.state.X[3] + 31.25117191894696 * medium.state.X[4] + 55.50843506179199 * medium.state.X[5] + 22.7223667617219 * medium.state.X[6]) [dynamic |0|0|0|0|]
```
which is later optimized to
```
6/6 (1): medium.MM = 1.0 / (35.69720205330306 * medium.Xi[1] + 496.0612734885013 * medium.Xi[2]) [dynamic |0|0|0|0|]
```
because the last four elements of `medium.Xi[:]` are identically zero.
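As a sanity check on the inlined equation, the coefficients printed by `evalFunc` are simply the reciprocals of the molar masses from the original reduction, which can be verified directly:

```python
# Molar masses from the original reduction and the coefficients from the
# inlined equation printed by evalFunc, both copied from the ticket.
data_MM = [0.0280134, 0.00201588, 0.0280101, 0.0319988, 0.01801528, 0.0440095]
coeffs  = [35.69720205330306, 496.0612734885013, 35.70140770650587,
           31.25117191894696, 55.50843506179199, 22.7223667617219]

# Each coefficient in the inlined equation should equal 1 / MM[j].
for mm, c in zip(data_MM, coeffs):
    assert abs(1.0 / mm - c) < 1e-9 * c
print("all coefficients match 1/MM")
```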
So it seems that the fold expression was the missing piece, and the backend can complete the flattening to scalars in this case.
The results of the simulation are correct.
Maybe it is possible to use the same code used by the NF to handle the reduction, which should be turned into