Opened 5 years ago

Last modified 5 years ago

#5947 new discussion

Backend cannot solve for variable indexed array elements

Reported by: Karim Abdelhak Owned by: Karim Abdelhak
Priority: high Milestone: Future
Component: Backend Version:
Keywords: variable index, residual Cc: Francesco Casella, Per Östlund, Philip Hannebohm, Bernhard Bachmann, Lennart Ochel, massimo ceraolo, adrien.guironnet@…

Description

Indexing an array with non-constant expressions is allowed by the Modelica specification. Our current backend can only handle such variable indices on the right-hand side of an equation and can never solve for the indexed variables. This is a fundamental matching problem: we cannot evaluate at compile time which values the index can take.

This problem was identified in ticket #4120.

Consider the following model:

model testArr
  Boolean x(start = false);
  Integer y[2];
  Real z[2];
equation
  when sample(0.0, 0.1) then
    x = not pre(x);
  end when;
  y[1] = if x then 1 else 2;
  y[2] = if x then 2 else 1;
  z[y[1]] = sin(time);
  z[y[2]] = cos(time);
end testArr;

The indexing of z is not constant, so our current backend cannot solve for these variables (in this particular example it could be resolved statically, but the index could just as well depend on a state). Equations 4 and 5 each have to be solved for either z[1] or z[2]; if processed correctly this would work, but currently it fails.
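
For reference, a correct causalization would have to reproduce the following behaviour at runtime (worked out by hand from the equations above):

x = true   ->  y = {1, 2}  ->  z[1] = sin(time);  z[2] = cos(time);
x = false  ->  y = {2, 1}  ->  z[2] = sin(time);  z[1] = cos(time);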

Dymola is able to solve this, and I suppose they reformulate such equations as residuals. There is a way I could implement this in the backend (I would need to adapt the adjacency matrix), but that would also let a lot of invalid models pass. We would only be able to detect those at runtime, which could be a headache.
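
For illustration, a residual-style treatment of equations 4 and 5 might look like the following sketch (this is only a guess at the idea, not necessarily how Dymola handles it): the two equations are kept together as one implicit block

0 = z[y[1]] - sin(time);
0 = z[y[2]] - cos(time);

which is solved simultaneously for z[1] and z[2], with the index expressions y[1] and y[2] evaluated at runtime.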

Q: Is this relevant for real models?
Q: Should we only care for special cases?
Q: Should we care for invalid models failing with bad or unrelated error messages?

Change History (3)

comment:1 by Francesco Casella, 5 years ago

Cc: massimo ceraolo adrien.guironnet@… added

A1: I'd say it is probably not too relevant for today's models (we have much more urgent and pressing needs to get existing models to run well in the backend), but as we move towards supporting large array-based models, these kinds of constructs may become more and more relevant in the future.

A2: Probably, but we need to issue good diagnostic messages when models that are not covered get compiled.

A3: Absolutely! Models failing with incomprehensible error messages cast doubt on the soundness of the whole tool. Most importantly, we should avoid at all costs situations in which invalid models get compiled anyway and produce arbitrary results.

A related issue is how we handle this situation with sparse solvers. Consider this example:

model foo
  parameter Integer N = 1000;
  input Integer idx;
  Real x[N];
equation
  for i in 1:N loop
    der(x[i]) = f(i, x[idx]);
  end for;
end foo;

The Jacobian of this system has a very sparse structure: it only has 1000 non-zero elements. However, I understand that in the current backend, as soon as an array is indexed with a variable index, the structural analysis assumes a dependency on the entire vector, which is only later specialized to the specific element at runtime.

Of course this is sub-optimal but acceptable if arrays have 5 or 10 elements; it cannot work if we have thousands or hundreds of thousands of elements.
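
To make the structural issue concrete (a worked illustration with N shrunk to 3 and idx = 2 at some instant): each scalar equation der(x[i]) = f(i, x[idx]) then depends on the single element x[2], so the runtime incidence pattern of der(x) with respect to x is

row 1:  . X .
row 2:  . X .
row 3:  . X .

whereas a structural analysis that treats x[idx] as a dependency on the whole vector marks every entry of this pattern as potentially non-zero.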

in reply to: comment:1    comment:2 by Karim Abdelhak, 5 years ago

Replying to casella:

A related issue is how we handle this situation with sparse solvers. Consider this example:

model foo
  parameter Integer N = 1000;
  input Integer idx;
  Real x[N];
equation
  for i in 1:N loop
    der(x[i]) = f(i, x[idx]);
  end for;
end foo;

The Jacobian of this system has a very sparse structure: it only has 1000 non-zero elements. However, I understand that in the current backend, as soon as an array is indexed with a variable index, the structural analysis assumes a dependency on the entire vector, which is only later specialized to the specific element at runtime.

Of course this is sub-optimal but acceptable if arrays have 5 or 10 elements; it cannot work if we have thousands or hundreds of thousands of elements.

In the new backend this will not be considered a variable index, because all (countable) values the index can take are known from the for-loop iteration range. It is still in the design phase, but the idea would be to have slice entries instead of single-index entries in adjacency matrices and matchings. That would allow efficient processing of these kinds of constructs.
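
As a rough sketch of that idea (my reading of it, not a committed design): the for-equation above could be treated as a single equation over the slice x[1:N], i.e. the equivalent array form

der(x) = {f(i, x[idx]) for i in 1:N};

where one slice entry in the adjacency matrix covers all N scalar incidences at once instead of N separate single-index entries.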

Also, the access of x in this example is not really important, since we never have to solve for it; it is a state. If it were some other arbitrary unknown, that might be a problem, though. It might also pose a problem for initialization.

I guess we will have to keep these things in mind for the new backend.

comment:3 by Francesco Casella, 5 years ago

After the discussion, we also noticed that in many cases it could be possible to deal with variable indices, provided they end up "in the left-hand side" of the causalized equations. This is what happens if the selection is due to some control action, which is normally causal. Then, one could have something like this:

model foo
  parameter Integer N = 1000;
  input Integer idx;
  Real aux;
  Real x[N];
algorithm
  aux := x[idx];
equation
  for i in 1:N loop
    der(x[i]) = f(i, aux);
  end for;
end foo;