Opened 7 years ago
Last modified 3 years ago
#4630 new defect
Rethink mathematical simplifications
Reported by: | Henning Kiel | Owned by: | somebody |
---|---|---|---|
Priority: | high | Milestone: | |
Component: | *unknown* | Version: | v1.13.0-dev-nightly |
Keywords: | | Cc: | Martin Sjölund, Francesco Casella, Patrick Täuber, Lennart Ochel, Willi Braun, Volker Waurich |
Description
I propose to have in this ticket a list of (more complex) mathematical simplifications to be implemented in OMC (ExpressionSimplify).
To quote sjoelund.se from the GitHub discussion:
acos(cos(x)) is not equal to x... For example acos(cos(13)) = 0.4336293856408271 And cos(acos(13)) is an error, not 13... acos(cos(x)) could be mod(x, pi)?
The same argument holds for the typical math simplification
e^(ln(x)) = x
which is an error for x <= 0.
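A minimal Modelica check model (my own, not part of the ticket) makes both pitfalls observable when simulated:

```modelica
model SimplificationPitfalls
  "Why acos(cos(x)) = x and exp(log(x)) = x are unsafe rewrites"
  Real a = acos(cos(13)) "evaluates to about 0.4336, not 13";
  Real b = exp(log(2 - time)) "defined only while 2 - time > 0; fails for time >= 2";
end SimplificationPitfalls;
```

If the simplifier rewrote b to 2 - time, the simulation would continue silently past time = 2 instead of flagging the domain violation.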
always correct simplifications - implement (a numerical spot-check is sketched below)
- ln(e^x) = x
- sin(x)^2 + cos(x)^2 = 1
- cosh(x)^2 - sinh(x)^2 = 1
- sin(acos(x)) = sqrt(1-x^2)
- cos(asin(x)) = sqrt(1-x^2)
- tan(atan(x)) = x
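The spot-check mentioned above: a small Modelica model (my sketch, not from the ticket) whose residuals should all stay at machine precision over the whole simulation if the identities are indeed safe:

```modelica
model IdentityChecks
  "Numerical spot-check of the identities listed above, for x in (-1, 1)"
  Real x = 0.99*sin(time) "stays inside (-1, 1), the common domain of asin/acos";
  Real r1 = log(exp(x)) - x;
  Real r2 = sin(x)^2 + cos(x)^2 - 1;
  Real r3 = cosh(x)^2 - sinh(x)^2 - 1;
  Real r4 = sin(acos(x)) - sqrt(1 - x^2);
  Real r5 = cos(asin(x)) - sqrt(1 - x^2);
  Real r6 = tan(atan(x)) - x;
end IdentityChecks;
```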
questionable simplifications
- asin(sin(x)) = ?
- atan(tan(x)) = ?
- sin(asin(x)) = ?
- acos(cos(x)) = mod(x, pi) -> wrong; acos(cos(x)) is actually a triangle wave (closed form below)
- asin(sin(x)) is a shifted triangle wave
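For reference, both are periodic triangle waves with period 2*pi (my own derivation, not from the ticket):
- acos(cos(x)) = abs(mod(x + pi, 2*pi) - pi)
- asin(sin(x)) = abs(mod(x - pi/2, 2*pi) - pi) - pi/2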
incorrect simplifications - do not implement (counterexample sketched below)
- cos(acos(x)) = x (only true for -1<=x<=1)
- sin(asin(x)) = x (only true for -1<=x<=1)
- e^(ln(x)) = x (only true for x>0)
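The counterexample mentioned above, for the first two entries (my own one-liner, not from the ticket):

```modelica
model DomainViolation
  "cos(acos(x)) only reproduces x for -1 <= x <= 1"
  Real bad = cos(acos(1.5)) "acos(1.5) is undefined, so this fails (or yields NaN) instead of 1.5";
end DomainViolation;
```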
Change History (12)
comment:1 by , 7 years ago
Description: | modified (diff) |
---|---|
comment:2 by , 7 years ago
Cc: | added |
---|---|
Description: | modified (diff) |
comment:4 by , 7 years ago
Replying to hkiel:
incorrect simplifications - do not implement
- e^(ln(x)) = x (only true for x>0)
We could indeed substitute e^(ln(x)) with x if we also add
assert(x>0, "x originally appeared as argument of log() in the system equations");
to the set of equations. And, if we are lucky, maybe the tool can infer that x>0 always holds, e.g. because of min/max attributes.
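A sketch of what this could look like at the model level (model and variable names are mine; the assertion message is the one suggested above):

```modelica
model GuardedSubstitution
  "Sketch: replace exp(log(x)) by x, but keep a guard on the original log() argument"
  Real x(min = 0) = time + 1 "min/max attributes are the kind of hint the tool could use";
  Real y;
equation
  // original equation: y = exp(log(x));
  // substituted form, only admissible together with the assertion below
  y = x;
  assert(x > 0, "x originally appeared as argument of log() in the system equations");
end GuardedSubstitution;
```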
follow-up: 6 comment:5 by , 7 years ago
ExpressionSimplify has the simplification
exp(... * log(x) * ...) -> x ^ (... * ...)
which is wrong for x < 0. However, log(exp(x)) is not simplified.
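A concrete instance (my own example) showing why the rule over-simplifies: instantiating the "..." pattern with the constant 2, the original and the rewritten expression differ as soon as x becomes non-positive:

```modelica
model ExpLogRewrite
  "exp(2*log(x)) and its rewritten form x^2 agree only for x > 0"
  Real x = 1 - time "becomes non-positive for time >= 1";
  Real original = exp(2*log(x)) "fails as soon as x <= 0";
  Real rewritten = x^2 "stays defined and silently hides the domain error";
end ExpLogRewrite;
```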
follow-up: 7 comment:6 by , 7 years ago
Replying to hkiel:
ExpressionSimplify has the simplification
exp(... * log(x) * ...) -> x ^ (... * ...)
which is wrong for x<0.
Other tools do, e.g.,
exp(log(-1)) = exp(i*pi) = -1
see
http://m.wolframalpha.com/input/?i=exp%28log%28-42%29%29
comment:7 by , 7 years ago
Replying to anonymous:
Replying to hkiel:
ExpressionSimplify has the simplification
exp(... * log(x) * ...) -> x ^ (... * ...)
which is wrong for x<0.
Other tools do, e.g.,
exp(log(-1)) = exp(i*pi) = -1
see
http://m.wolframalpha.com/input/?i=exp%28log%28-42%29%29
That's possible of course, but then one has to leave the scope of real numbers...
comment:9 by , 5 years ago
Milestone: | 1.14.0 → 1.16.0 |
---|
Releasing 1.14.0, which is stable and has many improvements w.r.t. 1.13.2. This issue is rescheduled to 1.16.0.
comment:11 by , 4 years ago
Milestone: | 1.17.0 → 1.18.0 |
---|
Retargeted to 1.18.0 because of 1.17.0 timed release.
I support the idea.
In fact, I think this topic is much broader than just getting the right simplifications for trigonometric functions. For example, it is obvious that replacing cos(x)^2 + sin(x)^2 with 1 only gives benefits, but what if I carefully write a polynomial in a model using Horner's rule, e.g.
y = a0 + x*(a1 + x*(a2 + x*a3))
and the back-end "simplifies" this equation to the expanded form
y = a0 + a1*x + a2*x^2 + a3*x^3?
Although from a mathematical point of view these two equations are equivalent, it is well known that from a numerical point of view the first one is way better than the second one; a sketch follows below.
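As an illustration (the cubic and its coefficients are my own, chosen only to show the two shapes of the same polynomial):

```modelica
model HornerVsExpanded
  "The same cubic written in the two forms discussed above"
  Real x = time;
  // form the modeller wrote deliberately (Horner's rule: 3 multiplications, 3 additions)
  Real yHorner = 1 + x*(2 + x*(3 + x*4));
  // form a back-end might produce by expanding it (explicit powers of x)
  Real yExpanded = 1 + 2*x + 3*x^2 + 4*x^3;
end HornerVsExpanded;
```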
Another example is an equation such as
exp(y) = f(c)
being transformed to
y = log(f(c))
or vice-versa. The two equations are equivalent from a mathematical point of view; however, the residual of the first is always well-defined, while the residual of the second can only be computed if f(c) > 0, which could be problematic for a numerical solver if c is selected as a tearing variable; a minimal sketch follows after this paragraph. Another example is reported in #4293.
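A minimal Modelica sketch of the tearing point, using an illustrative f(c) = c^2 - 1 (my choice, not necessarily the function from the original post): both models describe the same relation, but only the first has a residual that a nonlinear solver can evaluate for arbitrary trial values of c.

```modelica
model ResidualAlwaysDefined
  "residual exp(y) - (c^2 - 1) can be evaluated for any trial value of c"
  Real c(start = 2);
  Real y;
equation
  exp(y) = c^2 - 1;
  y + c = 4 "second equation just to close the system";
end ResidualAlwaysDefined;

model ResidualMayFail
  "mathematically equivalent, but log() fails whenever the solver tries c^2 - 1 <= 0"
  Real c(start = 2);
  Real y;
equation
  y = log(c^2 - 1);
  y + c = 4;
end ResidualMayFail;
```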
The general question is: when is a certain symbolic transformation actually beneficial?
Here, beneficial could mean several things: better numerical accuracy, as in the Horner example, or a residual that remains well-defined for any trial value of the iteration variables, as in the second example.
This question may be hard or even impossible to answer in some cases, particularly when there is a long chain of symbolic transformations involved and a transformation applied at an early stage could have bad consequences later on. The main source of difficulty is that all transformations are applied locally, but have global consequences on the system of equations to be solved, which are hard to take into account in general. For example, some transformations of one equation could be good or bad depending on which other equations are solved together with it.
As far as I know, this issue has been investigated extensively in the context of tearing, where it is of course crucial that equations being solved symbolically in the torn section do not cause trouble. I wonder how much this aspect has been taken into account with other symbolic transformations; what I saw in #4293 did not leave me completely at ease.
I am adding some more people in cc: who may be interested.