Opened 6 years ago

Last modified 5 years ago

#5288 assigned enhancement

Achieve successful simulation of 100% MSL 3.2.3 models with NF

Reported by: Francesco Casella Owned by: Francesco Casella
Priority: blocker Milestone: 2.0.0
Component: *unknown* Version:
Keywords: Cc: Lennart Ochel, Andreas Heuermann, Karim Abdelhak, Per Östlund

Description (last modified by Francesco Casella)

We have a strategic goal of simulating 100% of the MSL models with annotation(experiment(StopTime)), using the new frontend. This involves some remaining issues with the NF, but also some back-end/runtime issues that were already present with the old FE and are not solved by the new one.

At this point in time, 15 models do not simulate successfully in MSL 3.2.3, and another 23 models do not pass the verification phase. There are also 55 models that still don't have a reference result file, see #5296.

I have already identified the following open issues:

I will update the list as I identify the root cause of the remaining failing models.

I think it would be nice if we could fix them before we release 1.14.0. That would be a clearly recognizable milestone for that release.

Attachments (1)

MSL NF Issues.ods (24.4 KB ) - added by Francesco Casella 5 years ago.


Change History (26)

comment:1 by Francesco Casella, 6 years ago

Description: modified (diff)

comment:2 by Francesco Casella, 6 years ago

Description: modified (diff)

comment:3 by Francesco Casella, 6 years ago

Description: modified (diff)

comment:4 by Francesco Casella, 6 years ago

Description: modified (diff)

comment:5 by Per Östlund, 6 years ago

Regarding #5240, the lack of function evaluation for external functions, this only affects one model as far as I know: ReadRealMatrixFromFile. The issue with that model is that it has declarations such as:

final parameter Real[:, :] A = readRealMatrix(file, matrixName, dim1[1], dim1[2]);

I.e. the size of A is determined by an external function call. We could evaluate the call and determine the dimensions of A from the result, but this means that the contents of the matrix file will be hardcoded into the model. The alternative would be to allow A to have unknown dimensions and determine the size during simulation, but that might be tricky.

So it's a rather weird model, and something you'd normally use external objects for. The model also doesn't work with the old frontend even though it flattens, because the old frontend just sets the dimensions of A to [1, 1] and hopes no one will notice.
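
For context, here is a minimal sketch of the external-object pattern alluded to above; all names and the C functions are hypothetical and not part of ModelicaIO. The idea is that the file stays open as an external object and individual values are read through function calls at simulation time, so no array dimensions have to be fixed during translation.

class MatrixFile "Hypothetical external object holding an open matrix file"
  extends ExternalObject;
  function constructor
    input String fileName;
    input String matrixName;
    output MatrixFile handle;
    external "C" handle = myMatrixFile_open(fileName, matrixName);
  end constructor;
  function destructor
    input MatrixFile handle;
    external "C" myMatrixFile_close(handle);
  end destructor;
end MatrixFile;

function readElement "Hypothetical accessor for one matrix element"
  input MatrixFile handle;
  input Integer i;
  input Integer j;
  output Real value;
  external "C" value = myMatrixFile_get(handle, i, j);
end readElement;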

in reply to:  5 comment:6 by Francesco Casella, 6 years ago

Replying to perost:

Regarding #5240, the lack of function evaluation for external functions, this only affects one model as far as I know: ReadRealMatrixFromFile.

Correct.

The issue with that model is that it has declarations such as:

final parameter Real[:, :] A = readRealMatrix(file, matrixName, dim1[1], dim1[2]);

I.e. the size of A is determined by an external function call. We could evaluate the call and determine the dimensions of A from the result, but this means that the contents of the matrix file will be hardcoded into the model.

I guess that is the intention. If there ever was a structural parameter, A is certainly one. I would just evaluate it and go ahead.

comment:7 by Francesco Casella, 6 years ago

Description: modified (diff)

comment:8 by Francesco Casella, 6 years ago

Description: modified (diff)

comment:9 by Per Östlund, 6 years ago

I added the ModelicaIO_readMatrixSizes function from ModelicaIO as a known external function in the NF (in e4913d6f), so that it can now be evaluated without needing general support for external functions. That's enough to handle the ReadRealMatrixFromFile model, since e.g. the dimensions of the readRealMatrix calls are determined by the function arguments and not by the file contents.

The flat model isn't very good since A in the example above is split into scalars, with one subscripted call to readRealMatrix for each element of the matrix, but at least it works. I've implemented support for evaluating readRealMatrix too, but the NF doesn't evaluate those bindings since they're non-structural parameter expressions.
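
To make the scalarization concrete, the flat model ends up looking roughly like this (a sketch only, not actual NF output; the sizes are made up):

final parameter Real A[1,1] = readRealMatrix(file, matrixName, 3, 3)[1, 1];
final parameter Real A[1,2] = readRealMatrix(file, matrixName, 3, 3)[1, 2];
// ... one binding with its own subscripted readRealMatrix call per element of A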

But anyway, we now have 100% flattening of MSL 3.2.3 with the NF, while ModelicaTest trunk is at 99.6%, or 566 out of 568 models (ModelicaTest 3.2.3 is a bit worse due to some missing each prefixes, which have been fixed in trunk).

The two models still not working in ModelicaTest are ModelicaTest.Math.TestMatrices2, which fails because of some mismatched dimensions that may or may not be equal, and ModelicaTest.Fluid.Dissipation.Verifications.PressureLoss.StraightPipe.dp_twoPhaseOverall_DP, which fails because a function is called with an argument that is a record with more fields than expected. The latter is a bit dubious, but the specification doesn't state the rules for type compatibility of function arguments (I've opened a ticket about it).

in reply to:  9 ; comment:10 by Francesco Casella, 6 years ago

Replying to perost:

The flat model isn't very good since A in the example above is split into scalars, with one subscripted call to readRealMatrix for each element of the matrix, but at least it works. I've implemented support for evaluating readRealMatrix too, but the NF doesn't evaluate those bindings since they're non-structural parameter expressions.

I'm not sure I understand why this happens. Since A is a final parameter, why isn't it evaluated outright?

in reply to:  10 ; comment:11 by Per Östlund, 6 years ago

Replying to casella:

Replying to perost:

The flat model isn't very good since A in the example above is split into scalars, with one subscripted call to readRealMatrix for each element of the matrix, but at least it works. I've implemented support for evaluating readRealMatrix too, but the NF doesn't evaluate those bindings since they're non-structural parameter expressions.

I'm not sure I understand why this happens. Since A is a final parameter, why isn't it evaluated outright?

Because the NF currently doesn't evaluate non-complex parameters, final or not; it only evaluates structural parameters. But evaluating all final parameters would be fairly trivial if you think that's a good idea.

in reply to:  11 ; comment:12 by Francesco Casella, 6 years ago

Replying to perost:

But evaluating all final parameters would be fairly trivial if you think that's a good idea.

In fact, I thought this was already implemented; see your own comment in ticket:5301#comment:3. Maybe this specific case was left out for some reason, please check.

In fact, in principle I think one should only evaluate final parameters if everything in the binding equation is constant or a structural parameter; otherwise you may want to change those values at runtime and have the final parameter updated accordingly.

However, I understand most library developers assume that final parameters are just constant-evaluated no matter what, and that's consistent with what Dymola does, so I think it would be fine, at least for the time being.

We started some discussion about future Modelica at the Design Meeting in Regensburg, and one proposal from Henrik Tidefelt was to redesign the whole concept of constants and parameters and deprecate the Evaluate annotation. I guess there will be a lot of discussion; that's probably the right direction, but it's a long-term goal. Anyway, for the time being let's try to get the same coverage as Dymola, which seems a reasonable goal to me.

in reply to:  12 comment:13 by Per Östlund, 6 years ago

Replying to casella:

Replying to perost:

But evaluating all final parameters would be fairly trivial if you think that's a good idea.

In fact, I thought this was already implemented; see your own comment in ticket:5301#comment:3. Maybe this specific case was left out for some reason, please check.

That comment was only for complex parameters, i.e. parameters that have child elements. In that case we try really hard to evaluate the binding, to avoid having to move the binding to an initial equation.

A in this case is just an array of Reals, and the size of arrays in a non-function context must always be known. So for parameters of basic type, array or not, there's never any need to move their bindings to an initial equation, since we can just subscript them when scalarizing the array. The NF doesn't evaluate bindings of non-structural basic-type parameters at all; it only simplifies them.
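
A minimal sketch of the distinction being made here (names are made up): the NF must evaluate n because it is structural (it fixes an array size), while the binding of the plain Real parameter p is kept and only simplified.

model StructuralVsPlain
  parameter Integer n = 3 "Structural: determines the size of x, so it is evaluated";
  parameter Real p = 2.5 "Basic-type, non-structural: binding is kept, not evaluated";
  Real x[n];
equation
  x = fill(p, n);
end StructuralVsPlain;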

comment:14 by Francesco Casella, 6 years ago

OK.

That said, I still think that evaluating all final parameters (those that do not depend on fixed = false parameters, that is; see #5385) is probably a good idea, at least for the time being, because that's what people expect to happen. I understand it's really not written anywhere in the specification, which only says that using modifiers or redeclarations to change them is verboten.
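
As a hedged illustration of the fixed = false caveat (names made up): p below is only known after solving the initial equations, so a final parameter bound to it clearly cannot be evaluated at translation time.

model FixedFalseSketch
  parameter Real p(fixed = false) "Computed during initialization";
  final parameter Real q = 2*p "Final, but cannot be evaluated at compile time";
initial equation
  p = 4.0;
end FixedFalseSketch;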

in reply to:  14 comment:15 by Per Östlund, 6 years ago

Replying to casella:

That said, I still think that evaluating all final parameters (those that do not depend on fixed = false parameters, that is; see #5385) is probably a good idea, at least for the time being, because that's what people expect to happen. I understand it's really not written anywhere in the specification, which only says that using modifiers or redeclarations to change them is verboten.

That would mean that if a final parameter depends on a non-final parameter, we'd have to evaluate the non-final parameter too, as well as any parameter that parameter depends on, and so on. In other words, declaring a parameter as final would basically turn it and all its dependencies into constants that can't be changed after compilation. Is that really the expected behaviour?

If a final parameter only depends on other final parameters or constants though, then there's not really any reason to not evaluate it. And from what I've observed it's very common that final parameters are given constant values (like extends PartialMedium(final reducedX = true)), but it's not really something I've reflected on when looking at various libraries.
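
A small sketch of the situation described above (names made up): if s were evaluated at translation time, a would implicitly become a constant as well, even though it is not final and a user might expect to change it before simulation.

model FinalChain
  parameter Real a = 1.0 "Non-final, meant to remain user-settable";
  final parameter Real s = 2*a "Final, but its binding depends on the non-final a";
end FinalChain;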

comment:16 by Francesco Casella, 6 years ago

If you look at the case mentioned in #5301, it seems to me that the expected behaviour is to evaluate the final parameter no matter what. The fact that the parameter is a record in this case is not essential. I suspect there are many other such cases in the MSL (and in other libraries), though I don't have concrete references at hand right now.

Maybe we could actually use the testsuite to figure out how many final parameters have binding equations that are not constant (or structural parameter) expressions, and then double-check by inspection whether evaluating them gives any significant advantage or not.

comment:17 by Francesco Casella, 6 years ago

I'll try to discuss this a bit on the modelica-design list.

in reply to:  17 comment:18 by Per Östlund, 6 years ago

Replying to casella:

I'll try to discuss this a bit on the modelica-design list.

Please do; I'm not convinced that evaluating final parameters that depend on non-final parameters is a good idea. In the case mentioned in #5301 I don't see why body1.R_start would normally be expected to be evaluated. We still do evaluate it, but only to work around issues in the backend (and the model still doesn't work correctly).

It looks more to me like R_start is declared final because it describes a fixed relationship between sequence_start and angles_start that shouldn't change, whereas angles_start is meant to be set by the user and could very well be given at simulation time. sequence_start, on the other hand, is declared with Evaluate = true like many of the other parameters in Body, but this is not the case for angles_start. Whether this is an oversight by the developer of the model or a conscious decision, I don't know.

comment:19 by Francesco Casella, 6 years ago

@perost, you are right, see discussion in ticket:5301#comment:6, as well as #5390.

comment:20 by Francesco Casella, 6 years ago

Cc: Lennart Ochel, Andreas Heuermann, Karim Abdelhak, Per Östlund added
Owner: changed from somebody to Francesco Casella
Status: new → assigned

Status update: as of today there are 30 models in MSL 3.2.3 that either do not simulate with the NF or produce results that fail the verification against the reference results provided by the MA. This doesn't take into account the 9 models for which we don't have reference results (yet); I wrote to Leo Gall about this last week, and for the time being I assume they are OK.

I analyzed the problem for each of them; see the attached spreadsheet. The list of open tickets with an identified root cause as of May 13 is: #5170, #5328, #5350, #5452, #5458, #5459, #5462, #5470, #5473, #5475.

I would invite the task force formed by Lennart, Andreas and Karim to check on the tickets mentioned above.

I will update this comment as soon as I get new results from the analysis of the remaining issues.

Last edited 6 years ago by Francesco Casella (previous) (diff)

comment:21 by Francesco Casella, 5 years ago

Milestone: 1.14.0 → 2.0.0

We tried to resolve all these issues in time for 1.14.0, but it turns out this is not really possible, so I'll reschedule this ticket as a blocker (this time for good) for 2.0.0.

comment:22 by Francesco Casella, 5 years ago

Status update as of 7 June 2019, MSL 3.2.3 simulation using the NF

  • 11 models do not simulate
  • 6 models simulate but produce wrong results
  • 9 models fail the verification phase, probably due to a false negative of the CSV compare tool, as the errors are tiny
  • 3 models fail the verification phase, certainly due to a false negative of the CSV compare tool (e.g., chaotic systems)

There are also 9 models for which we still don't have the reference results; I am pushing Leo Gall to provide them ASAP.

So, we are left with 11+6 = 17 problematic models out of 424, which is 4%. We should get to 0% in 2.0.0, but I guess this is as good as it can get for 1.14.0.

comment:23 by Francesco Casella, 5 years ago

There are currently only 7 MSL 3.2.3 models that do not simulate with OMEdit in OpenModelica 1.14.0-dev. Overall, there are 14 models which either do not simulate or produce clearly wrong results.

Additionally, there are 13 models whose results probably differ from the reference ones only because of small numerical errors which are not scaled correctly.

Last edited 5 years ago by Francesco Casella (previous) (diff)

comment:24 by Francesco Casella, 5 years ago

List of remaining open issues:

  • #5328 ODE solver behaves erratically in two MultiBody examples (issue with state selection?)
  • #5350 MSL 3.2.3 FFT function does not work correctly
  • #5452 Numerical issues due to missing index reduction
  • #5459 Unnecessary choice of dynamic state selection leads to code generation failure
  • #5462 Inaccurate simulation of systems with noEvent and saturations using the C runtime
  • #5473 AST_BatchPlant test case fails
  • #5595 Equations involving semiLinear are not solved correctly for tearing

Last edited 5 years ago by Francesco Casella (previous) (diff)

in reply to:  24 comment:25 by Francesco Casella, 5 years ago

List of remaining open issues:

  • #5328 ODE solver behaves erratically in two MultiBody examples (issue with state selection?)
  • #5350 MSL 3.2.3 FFT function does not work correctly
  • #5452 Numerical issues due to missing index reduction
  • #5462 Inaccurate simulation of systems with noEvent and saturations using the C runtime
  • #5473 AST_BatchPlant test case fails
  • #5767 Issues with BranchingDynamicPipes and homotopy-based initialization
  • #5768 Model HeatExchangerSimulation fails when using homotopy

As of 15/01/2020, only 15 of the 424 models have issues.

Last edited 5 years ago by Francesco Casella (previous) (diff)

by Francesco Casella, 5 years ago

Attachment: MSL NF Issues.ods added