Opened 4 years ago
Last modified 3 years ago
#6236 new enhancement
Solve simple linear systems in closed form if that leads to simple solution
Reported by: | Francesco Casella | Owned by: | Karim Abdelhak |
---|---|---|---|
Priority: | high | Milestone: | |
Component: | Backend | Version: | 1.16.0 |
Keywords: | Cc: | federico.terraneo@…, Philip Hannebohm, Andreas Heuermann, Vitalij Ruge |
Description
Consider the following linear system, which is obtained when building object-oriented 1D, 2D, and 3D thermal models because of pairs of conductances being series connected:
Q = 2*g*(Tb - T2);
Q = 2*g*(T1 - Tb);

with unknowns Q and Tb (T1 and T2 are states).

This system has a simple closed-form solution:

Q := g*(T1 - T2);
Tb := (T1 + T2)/2;

that can be computed with much less overhead than is needed by a numerical linear solver such as LAPACK's dgesv.
We could consider trying to solve 2x2 (and possibly 3x3) systems by Gaussian elimination using a variant of the ASSC algorithm. If the outcome is simple assignments, we could put them in the generated code; otherwise we should discard it and continue using the numerical solution instead.
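To make the intended saving concrete, here is an illustrative Python sketch (not OMC code; all names are made up) that solves the series-conductance pair both with a generic 2x2 Cramer solve and with the closed form quoted above, confirming they agree:

```python
# Illustrative sketch: generic 2x2 solve vs. the closed-form solution.
# Equations: Q = 2*g*(Tb - T2) and Q = 2*g*(T1 - Tb), unknowns Q and Tb.

def solve_numeric(g, T1, T2):
    # Rewrite as A*x = b with x = [Q, Tb]:
    #   Q - 2*g*Tb = -2*g*T2
    #   Q + 2*g*Tb =  2*g*T1
    a11, a12, b1 = 1.0, -2.0 * g, -2.0 * g * T2
    a21, a22, b2 = 1.0,  2.0 * g,  2.0 * g * T1
    det = a11 * a22 - a12 * a21  # = 4*g, singular exactly when g = 0
    return ((b1 * a22 - b2 * a12) / det,   # Q
            (a11 * b2 - a21 * b1) / det)   # Tb

def solve_closed_form(g, T1, T2):
    # the simple assignments the backend could emit instead
    return g * (T1 - T2), (T1 + T2) / 2.0
```

The closed form needs three floating-point operations per step, against the full assemble-and-solve path of the generic version.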
Change History (15)
comment:1 by , 4 years ago
Cc: added
comment:3 by , 4 years ago
Cramer's rule always works, if the system is not singular, but in general it is the least efficient way to solve a system. I guess for 2x2 systems this could still be ok.
In any case, the problem with the singularity when g = 0 is always there. In this case, for example:
Q - 2*g*Tb = -2*g*T2;
Q + 2*g*Tb =  2*g*T1;

A = [ 1  -2*g ]    b = [ -2*g*T2 ]    x = [ Q  ]
    [ 1   2*g ]        [  2*g*T1 ]        [ Tb ]

x = A^-1 * b = 1/(4*g) * [ 2*g  2*g ] * [ -2*g*T2 ]  =  1/(4*g) * [ -4*g*g*T2 + 4*g*g*T1 ]
                         [ -1    1  ]   [  2*g*T1 ]               [  2*g*T2  + 2*g*T1    ]
Now, if g <> 0 you can simplify it and you get

Q = g*(T1 - T2)
Tb = 0.5*(T1 + T2)

otherwise the system is singular. If the backend can figure out that g <> 0 is required to have a non-singular system, then it could add assert(g <> 0, "Singular system") and then proceed with the simplification.
What do you think?
follow-up: 5 comment:4 by phannebohm, 4 years ago
You're absolutely right. Your remark about the assert made me realize the final equations for Q and Tb have no trouble with any choice of parameter or state value. So the singularity is not really an issue, I guess.
For a general system
[ a11  a12 ] * [ x1 ]  =  [ b1 ]
[ a21  a22 ]   [ x2 ]     [ b2 ]
Gauß initially gives something like (depending on the choice of pivots and implementation details)
x1 = b1/a11 - a12/a11 * (b2 - b1*a21/a11) / (a22 - a21*a12/a11)
x2 = (b2 - b1*a21/a11) / (a22 - a21*a12/a11)

where you divide by a11 a lot. This was my problem with zero pivots. Initially I thought this was bad, and that the problem would get worse for larger systems. But of course it can be simplified away at the symbolic stage; I didn't realize this at first.
Anyway, with Cramer you immediately get
x1 = (b1*a22 - b2*a12) / (a11*a22 - a21*a12)
x2 = (a11*b2 - a21*b1) / (a11*a22 - a21*a12)

which is obviously the same. But I think subsequent simplification will have less work to do; maybe I'm wrong about that. Also, for this small system, building three determinants (or hard-coding this solution) should be less effort than doing elimination and back-substitution with all the bookkeeping.

My point is that by using Cramer we can arrive at the equations for x in a simpler and more systematic way. If we were to solve the system numerically, Gauß would surely always win. But symbolic Gauß isn't necessarily much faster for small systems.
A 3x3 system is more work admittedly. So let's do the hard coded version for 2x2 and try Gauß elimination beyond that.
follow-up: 7 comment:5 by casella, 4 years ago
Replying to phannebohm:
You're absolutely right. Your remark about the assert made me realize the final equations for Q and Tb have no trouble with any choice of parameter or state value.
Hmm, not really. The final equations are valid for g <> 0. When g = 0, Tb is actually undefined. Of course you could take it to be the mid-point between T1 and T2, but I'm really hesitant to silently accept singular systems and pick one solution arbitrarily.
follow-up: 8 comment:6 by casella, 4 years ago
As to Gaussian elimination, I'm really unsure it makes sense to perform it symbolically if you cannot guarantee that the divisors are nonzero. Pivoting is an essential part of that algorithm, and you can't really do that symbolically; it is a numerical algorithm.
I'd just stick to Cramer's rule for 2x2 systems. However, we really need to make sure there are good symbolic simplifications, otherwise you may end up with unnecessarily repeated computations, e.g. (a11*a22 - a21*a12) would be computed twice.
I'm not sure if we really want to go above 2x2. Maybe it would make sense for sparse equations, e.g. if there are no more than 2N nonzero elements in a NxN system, otherwise the complexity will just blow up very quickly and it's very unlikely you'll get anything simple out of that.
comment:7 by , 4 years ago
Replying to casella:
Hmm, not really. The final equations are valid for g <> 0. When g = 0, Tb is actually undefined. Of course you could take it to be the mid-point between T1 and T2, but I'm really hesitant to silently accept singular systems and pick one solution arbitrarily.
This is a removable singularity, so it's not entirely arbitrary. But technically you're correct. For parameters or constants I tend to agree with your approach. If the singular assert condition (i.e. the determinant) depends on other variables however, I would prefer to silently accept it as long as the resulting equations don't become singular. Except maybe for the initial system, I'm not sure.
comment:8 by , 4 years ago
Replying to casella:
As to Gaussian elimination, I'm really unsure it makes sense to perform it symbolically, if you cannot guarantee that the divisors are nonzero. Pivoting is an essential part of that algorithm, and you can't really do that symbolically, it is a numerical algorithm.
Whoops, your example cannot really be solved numerically, except with a numerical linear solver such as LAPACK's dgesv, so I thought we were talking about symbolic solutions.
I'd just stick to Cramer's rule for 2x2 systems. However, we really need to make sure there are good symbolic simplifications, otherwise you may end up with unnecessarily repeated computations, e.g. (a11*a22 - a21*a12) would be computed twice.
Yes, we could introduce auxiliary variables for the determinants D and Di, so that xi = Di/D and nothing gets computed twice. But then we cannot simplify the fractions beforehand.
I'm not sure if we really want to go above 2x2. Maybe it would make sense for sparse equations, e.g. if there are no more than 2N nonzero elements in a NxN system, otherwise the complexity will just blow up very quickly and it's very unlikely you'll get anything simple out of that.
This is a good question. For sparse systems there aren't as many determinants to be built. There may be an optimal sparsity and size NxN for which this is still faster than a numerical solver. Also, for the Di there are a lot of sub-determinants that can be computed only once.
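A sketch of that reuse for a 3x3 system: the 2x2 minors along the first column appear both in det(A) and in the first numerator determinant D1, so they are computed once (illustrative Python, not OMC code):

```python
# Cramer's rule for a 3x3 system with shared 2x2 sub-determinants:
# c1, c2, c3 are reused between det(A) and D1.

def cramer_3x3(A, b):
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    b1, b2, b3 = b
    # 2x2 minors along the first column, shared by det(A) and D1
    c1 = a22 * a33 - a23 * a32
    c2 = a12 * a33 - a13 * a32
    c3 = a12 * a23 - a13 * a22
    det = a11 * c1 - a21 * c2 + a31 * c3
    d1 = b1 * c1 - b2 * c2 + b3 * c3
    # remaining numerator determinants (column 2 resp. 3 replaced by b)
    d2 = -b1 * (a21 * a33 - a23 * a31) + b2 * (a11 * a33 - a13 * a31) \
         - b3 * (a11 * a23 - a13 * a21)
    d3 = b1 * (a21 * a32 - a22 * a31) - b2 * (a11 * a32 - a12 * a31) \
         + b3 * (a11 * a22 - a12 * a21)
    return d1 / det, d2 / det, d3 / det
```

With sparsity, many of the minors vanish or are trivially zero, which is exactly where this approach would beat the dense numerical solve.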
follow-ups: 10 12 comment:9 by sjoelund.se, 4 years ago
You need to be careful if you symbolically introduce Cramer's rule into the system. You end up duplicating all the expressions if you do it wrong...
If you are just going to do it numerically as fallback, start by implementing it as a numerical solver automatically used for 2x2/3x3.
Perhaps have a special kind of handling in the SimCode to translate 2x2 or 3x3 linear systems to an algorithm section (or perhaps better a function in order to have temporaries) where all constantly known cells are precalculated and everything else is calculated at most 1 time.
follow-up: 11 comment:10 by casella, 4 years ago
Replying to sjoelund.se:
You need to be careful if you symbolically introduce Cramer's rule into the system. You end up duplicating all the expressions if you do it wrong...
Sure. This only makes sense if there are simplifications. Possibly guarded by asserts that catch singular values of parameters. We should check that the final expressions do not contain all that duplicate stuff, and if they do, discard this step.
Perhaps have a special kind of handling in the SimCode to translate 2x2 or 3x3 linear systems to an algorithm section (or perhaps better a function in order to have temporaries) where all constantly known cells are precalculated and everything else is calculated at most 1 time.
I'm not sure an algorithm section will be translated into something faster than dgesv at the end of the day. If it requires dynamic memory allocation, we may as well keep the good old FORTRAN stuff. The idea is to do this only if the solution is a really simple one that can be computed explicitly with a few floating point operations.
comment:11 by , 4 years ago
Cc: added
Replying to casella:
Sure. This only makes sense if there are simplifications. Possibly guarded by asserts that catch singular values of parameters.
BTW, these asserts should be moved to the initial section of the code. Currently, if we simulate

model Test
  Real y = x*p;
  parameter Real p = 3;
  Real x;
equation
  y = time;
end Test;
which is solved by the backend as
x := y/p;
the generated code contains
data->localData[0]->realVars[0] /* x variable */ = DIVISION_SIM(data->localData[0]->realVars[1] /* y variable*/, data->simulationInfo->realParameter[0] /* p PARAM */, "p",equationIndexes);
The division involves a macro that calls a function which tests for a nonzero denominator and handles all the exception paraphernalia at each time step. This is a waste of time; we should only do the check once, at the beginning of the simulation. Afterwards, the simulation code could just be
data->localData[0]->realVars[0] = data->localData[0]->realVars[1] / data->simulationInfo->realParameter[0];
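The check-once pattern might be sketched like this (Python for illustration only; the actual change would of course live in the generated C code and its initialization section):

```python
# Sketch: hoist the nonzero-denominator check out of the hot path.

class Simulation:
    def __init__(self, p):
        # one-time check at initialization, analogous to an assert in the
        # initial section of the generated code
        if p == 0.0:
            raise ZeroDivisionError("parameter p must be nonzero")
        self.p = p

    def step(self, y):
        # hot path: plain division, no per-step DIVISION_SIM-style guard
        return y / self.p
```

Since p is a parameter, it cannot change during the simulation, so the per-step guard is provably redundant once the initialization check has passed.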
comment:12 by , 4 years ago
Replying to sjoelund.se:
Perhaps have a special kind of handling in the SimCode to translate 2x2 or 3x3 linear systems to an algorithm section (or perhaps better a function in order to have temporaries) where all constantly known cells are precalculated and everything else is calculated at most 1 time.
There is no need for that. We can use local auxiliary variables to store the values, like we do when calculating multi-valued functions to determine which branch we are on.
follow-up: 14 comment:13 by casella, 4 years ago
Cc: added
@vitalij points out in ticket:6254#comment:8 that we have a --maxSizeSolveLinearSystem flag in OMC already. Is this already doing something like what we want to do now?
comment:14 by , 4 years ago
Replying to casella:
@vitalij points out in ticket:6254#comment:8 that we have a --maxSizeSolveLinearSystem flag in OMC already. Is this already doing something like what we want to do now?
For this task, the default qrDecompositionHouseholder:
https://github.com/OpenModelica/OpenModelica/blob/master/OMCompiler/Compiler/BackEnd/ResolveLoops.mo#L2793
has an issue with normalization. I guess removing the normalization and removing makeTmpEqnForExp could work!

makeTmpEqnForExp adds some helper variables, which can prevent the desired simplification here! I don't believe any more that it's a good idea to add variables here.

Note: removing the normalization makes the approach numerically weak, but for small systems it should work well.
We actually talked about this some time ago. In principle the ASSC algorithm already does part of the job so this looks like a good idea.
Problems arise, however, as soon as we have non-literal values as coefficients, because we can no longer guarantee a good choice of pivot in each step of the elimination. Even in your small example, we would need to know that (in the second step) 4*g is not zero. In that example g = 0 makes the system singular anyway, but a 3x3 system could be solvable and we might just pick the wrong pivot and never know, until simulation.

Since scaling is not an issue here, I suggest we resort to Cramer's rule instead, since it is a deterministic (no pun intended) way of solving these small systems. And then let's hope simplify can tidy up. What do you think?