#5281 closed defect (fixed)
Add support for spatialDistribution
Reported by: | Francesco Casella | Owned by: | Andreas Heuermann
---|---|---|---
Priority: | blocker | Milestone: | 1.18.0
Component: | Run-time | Version: |
Keywords: | | Cc: | Karim Abdelhak, Michael Wetter
Description (last modified by )
The `spatialDistribution` operator is still not supported by OMC. A basic implementation as suggested in the specification is not too difficult, and should be available for the 2.0.0 release.
The operator is used by several libraries, see, e.g., Buildings.Fluid.FixedResistances.Validation.PlugFlowPipes.PlugFlowAIT
Attachments (10)
Change History (48)
comment:1 by , 4 years ago
Milestone: | 2.0.0 → 1.18.0 |
---|
comment:2 by , 4 years ago
Owner: | changed from | to
---|---|
Status: | new → assigned |
comment:3 by , 4 years ago
Cc: | added |
---|---|
Owner: | changed from | to
comment:4 by , 4 years ago
Our first try for the C interface will be:

```c
void spatialDistributionFunc(Data* data, threadData* threadData, long unsigned int index,
                             double in0, double in1, double posX, int positiveVelocity);

void initialSpatialDistributionFunc(Data* data, threadData* threadData, long unsigned int index,
                                    double* initialPoints, double* initialValues, int length);
```
comment:5 by , 4 years ago
Thank you guys! I made some test cases for you, see the attached package.
There are three test cases of increasing complexity. The first involves direction reversal and events; the second also involves non-trivial initialization of the buffer, but still has constant velocity except for the sign change. The last example has a continuously changing velocity, with sign change, events, and nontrivial initialization.
All test cases feature the exact analytical solution, computed with the integral formula of the specification. They were successfully tested with Dymola 2021.
Enjoy :)
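For readers less familiar with the operator, this is roughly what it computes (my paraphrase of the specification; the last relation assumes the velocity has stayed positive and the initial content has already left the unit interval):

```latex
% Transport equation approximated by spatialDistribution, with v(t) = der(x):
\frac{\partial z(y,t)}{\partial t} + v(t)\,\frac{\partial z(y,t)}{\partial y} = 0,
\qquad y \in [0,1], \qquad out0(t) = z(0,t), \qquad out1(t) = z(1,t)

% Method of characteristics (positive velocity, initial data already flushed):
out1(t) = in0(\tilde{t}) \quad \text{where} \quad x(t) - x(\tilde{t}) = 1
```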
comment:6 by , 4 years ago
P.S. These tests should probably go into the testsuite. I guess we could also add them to ModelicaTest, but I have no time to take care of that right now. Maybe you can open a PR on ModelicaTest yourself, once everything has been smoothed out.
comment:7 by , 4 years ago
OK, as usual Dymola had unsaved changes and I only figured that out now. Please use the updated attachment.
follow-up: 9 comment:8 by , 4 years ago
@casella I'm happily coding away on the runtime code and I need some clarification on which cases of `initialPoints` and `initialValues` section 3.7.2.2 spatialDistribution of the specification allows:

- Having one or multiple events in the initialized spatial distribution

```modelica
parameter Real initialPoints[:] = {0, 0.5, 0.5, 1};
parameter Real initialValues[:] = {1, 1, 0, 0};
```

Initialized to have an event at `y=0.5`; it will trigger an event iteration when it reaches `out0` or `out1`. This should be allowed.

- initialPoints not covering all of the interval [0,1]

```modelica
parameter Real initialPoints[:] = {0.5};
parameter Real initialValues[:] = {1};
```

I would just add value `0` at positions `0` and `1`. Would work just fine.

- Too many data points for one position. So if case 1 is allowed, more than two points at the same position:

```modelica
parameter Real initialPoints[:] = {0, 0.5, 0.5, 0.5, 1};
parameter Real initialValues[:] = {1, 1, 100, 0, 0};
```

Intuitively I would not allow it, because when adding values to our internal representation of the spatial distribution `z(y,t_i)` I would only save the pre value and the value after the event iteration (if an event occurred). If it is allowed I would just remove the intermediate (point, value) pairs. I can't think of a case where an event iteration with out1 = 0, 100, 1 would differ from just out1 = 0, 1.

- initialPoints not being in the interval [0,1]

```modelica
parameter Real initialPoints[:] = {-1, 0};
parameter Real initialValues[:] = {0, 1};
```

I guess it is not allowed, but it would make sense to me. Then you would save the points `x` for the time before `0`. (Or something similar.)

- unsorted initialPoints

```modelica
parameter Real initialPoints[:] = {0, 0.75, 0.5, 0.25, 1};
parameter Real initialValues[:] = {0, 7.5, 5, 2.5, 10};
```

I would not allow it, because it is most likely a user error, but I could also just sort the pairs by increasing position and that's it.
comment:9 by , 4 years ago
Replying to AnHeuermann:
> @casella I'm happily coding away on the runtime code

Great!

> and I need some clarification on which cases of `initialPoints` and `initialValues` section 3.7.2.2 spatialDistribution of the specification allows:
> - Having one or multiple events in the initialized spatial distribution
>   `parameter Real initialPoints[:] = {0, 0.5, 0.5, 1}; parameter Real initialValues[:] = {1, 1, 0, 0};`
>   Initialized to have an event at `y=0.5`; it will trigger an event iteration when it reaches `out0` or `out1`. This should be allowed.

Absolutely. I guess the only way to identify events is when you have two different values at the same point. If there was a discontinuity in the derivatives only, I guess that will be lost, but it's ok.

> - initialPoints not covering all of the interval [0,1]
>   `parameter Real initialPoints[:] = {0.5}; parameter Real initialValues[:] = {1};`
>   I would just add value `0` at positions `0` and `1`. Would work just fine.

I guess we were lazy with the specification; we should have clearly stated that those points must start at 0 and end at 1. Maybe we can open a PR about it. I would be inclined to at least issue a warning (if not an error) in this case, because probably you meant to cover the whole 0-1 interval but you messed up somehow.

> - Too many data points for one position. So if case 1 is allowed, more than two points at the same position:
>   `parameter Real initialPoints[:] = {0, 0.5, 0.5, 0.5, 1}; parameter Real initialValues[:] = {1, 1, 100, 0, 0};`
>   Intuitively I would not allow it, because when adding values to our internal representation of the spatial distribution `z(y,t_i)` I would only save the pre value and the value after the event iteration (if an event occurred).

Yes.

> If it is allowed I would just remove the intermediate (point, value) pairs. I can't think of a case where an event iteration with out1 = 0, 100, 1 would differ from just out1 = 0, 1.

Absolutely. This could also be clarified in the specification.

> - initialPoints not being in the interval [0,1]
>   `parameter Real initialPoints[:] = {-1, 0}; parameter Real initialValues[:] = {0, 1};`
>   I guess it is not allowed, but it would make sense to me. Then you would save the points `x` for the time before `0`. (Or something similar.)

I would reject this case with an error. Don't waste your time trying to support it.

> - unsorted initialPoints
>   `parameter Real initialPoints[:] = {0, 0.75, 0.5, 0.25, 1}; parameter Real initialValues[:] = {0, 7.5, 5, 2.5, 10};`
>   I would not allow it, because it is most likely a user error, but I could also just sort the pairs by increasing position and that's it.

The specification is very clear in this case: "Elements in the initialPoints array must be sorted in non-descending order." If they aren't, just give an error.
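Putting these rules together, a minimal validation pass at initialization could look something like the following sketch (the function name and the error reporting are illustrative only, not the actual OMC runtime API):

```c
#include <stdio.h>

/* Returns 0 if the initialization data is acceptable, 1 otherwise. Sketch only. */
static int checkInitialData(const double *initialPoints, const double *initialValues,
                            int nPoints, int nValues)
{
  int i;
  if (nPoints != nValues || nPoints < 1) {
    fprintf(stderr, "initialPoints and initialValues must be non-empty and of equal size\n");
    return 1;
  }
  for (i = 0; i < nPoints; i++) {
    if (initialPoints[i] < 0.0 || initialPoints[i] > 1.0) {
      fprintf(stderr, "initialPoints must lie in the interval [0,1]\n");         /* error */
      return 1;
    }
    if (i > 0 && initialPoints[i] < initialPoints[i-1]) {
      fprintf(stderr, "initialPoints must be sorted in non-descending order\n"); /* error */
      return 1;
    }
  }
  if (initialPoints[0] != 0.0 || initialPoints[nPoints-1] != 1.0) {
    /* tolerated with a warning (or rejected), as discussed above */
    fprintf(stderr, "warning: initialPoints should start at 0 and end at 1\n");
  }
  return 0;
}
```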
follow-up: 11 comment:10 by , 4 years ago
We now have a spatialDistribution operator that can handle a fairly complicated `in0` and changes in velocity, as long as the velocity is positive and we don't have events.
Now I'm working on handling events. For that I need to know what we expect to happen:
Case 1
Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
Let's have positive velocity `v=der(x)` and `in0(0)=1`.
We now need to add an event at the left side of the spatial distribution and when the event is transported to the right it has to trigger an event.
Case 2
Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
Let's have positive velocity `v=der(x)` and `in0(0)=0`.
And now let's say we trigger an event at time `t=0.5` that doesn't change the value of in0 (`in0(0.5) = pre(in0(0.5))`). Should we add an event to the spatial distribution?
Case 3
Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
Let's have positive velocity `v=der(x)` and `in0(0)=0`.
And now let's say we trigger an event at time `t=0.5` that does change `x` by doing `reinit(x, pre(x)+0.1)`. We could interpolate on the left side and extrapolate on the right side, and add an event to the left side. Since `in0(0.5)` doesn't change, that event would look like
`(0,in0)<->(0,in0)<->[rest of list]<->(1,someValue)`
in our internal double-ended list used to store `z(y,t)`. Then we will trigger an event once the new event has been transported to the right.
That should probably work.
Case 4
Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
Let's have positive velocity `v=der(x)` and `in0(0)=0`.
And now let's say we trigger an event at time `t=0.5` that does change `x` by doing `reinit(x, pre(x)+100)`. We could try to extrapolate to this completely new interval, but in my opinion one would need to provide new values for `initialPoints` and `initialValues` to do this correctly.
comment:11 by , 4 years ago
Replying to AnHeuermann:
> We now have a spatialDistribution operator that can handle a fairly complicated `in0` and changes in velocity, as long as the velocity is positive and we don't have events.
Very good.
> Case 1
> Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
> Let's have positive velocity `v=der(x)` and `in0(0)=1`.
> We now need to add an event at the left side of the spatial distribution
Yes, because you want to fill in the circular buffer with a sharp transition from 0 to 1. Note that, in general, detecting whether `in0` is equal to the left-most initialization value (hence whether or not to trigger an event) is non-trivial, because equality conditions on Real numbers are not numerically sound. You may use something like

`abs(in0 - initialValues[1])/max(in0.nominal, abs(in0)) < 1e-12`

Of course, if nominal is undefined, use 1 instead.
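As a sketch of that check in the runtime (the helper name and the treatment of a missing nominal value are assumptions, not actual OMC code):

```c
#include <math.h>

/* Sketch only: returns 1 if a and b are equal up to a relative tolerance,
 * scaled by the variable's nominal value (falling back to 1.0 when no
 * nominal value is available). */
static int realAlmostEqual(double a, double b, double nominal)
{
  double scale = fmax(nominal > 0.0 ? nominal : 1.0, fabs(a));
  return fabs(a - b) / scale < 1e-12;
}
```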
> and when the event is transported to the right it has to trigger an event.

Yes, because after some delay we also want a sudden change to emerge from `out1`.
> Case 2
> Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
> Let's have positive velocity `v=der(x)` and `in0(0)=0`.
> And now let's say we trigger an event at time `t=0.5` that doesn't change the value of in0 (`in0(0.5) = pre(in0(0.5))`). Should we add an event to the spatial distribution?
In principle we should, because there could be a sharp change in the derivatives. But that is probably overkill. I guess it's enough to check that there are no discontinuities, and then let the error control algorithm somehow take care of sharp corners. BTW, please check the equality as shown above.
> Case 3
> Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
> Let's have positive velocity `v=der(x)` and `in0(0)=0`.
> And now let's say we trigger an event at time `t=0.5` that does change `x` by doing `reinit(x, pre(x)+0.1)`.
This doesn't make much sense from a modelling point of view. When you use a transport delay you always have a finite velocity. If you kick the conveyor belt by a finite length in zero time, the values in the corresponding interval on the left are really undefined, in particular if the input changes during the same event, which is not unlikely. Do you use the value before or after the event? That's like the medieval discussions on how many angels can fit on the tip of a pin.
I would definitely just throw an error in this case, like "You are not allowed to reinit the x input of spatialDistribution". Just make sure one can identify which call to spatialDistribution is actually involved in the original model.
> Case 4
> Initialization with `initialPoints=(0.0, 1.0)`, `initialValues=(0.0, 0.0)`, so `z(y,0)=0` for all `y` in [0,1].
> Let's have positive velocity `v=der(x)` and `in0(0)=0`.
> And now let's say we trigger an event at time `t=0.5` that does change `x` by doing `reinit(x, pre(x)+100)`.
Ditto.
follow-ups: 14 15 comment:12 by , 4 years ago
So now I have mostly finished the implementation of the spatialDistribution operator in the runtime.
It seems to work for positive and negative velocity (as long as the sign doesn't change) and can handle events.
Regarding:
> In principle we should, because there could be a sharp change in the derivatives. But that is probably overkill. I guess it's enough to check that there are no discontinuities, and then let the error control algorithm somehow take care of sharp corners.
I have the code for doing both, but at the moment I don't generate events when the input value didn't change. Otherwise we will get very many events, and whenever one event leaves the spatial distribution we will add a new one on the other side.
I could add a flag to enable that, though.
What's missing:
- Changes in direction (v flipping its sign).
- Don't detect loops for models like the following:

```modelica
model SingleBeltEvent
  Real leftInput;
  Real rightInput = 0;
  Real leftOutput;
  Real rightOutput;
  constant Real[:] initialPoints(each min = 0, each max = 1) = {0.0, 1.0};
  constant Real[:] initialValues = {0.0, 0.0};
  Real v = 10;
  Boolean posV;
  Real x(start=0, fixed=true);
equation
  if rightOutput > 0.5 then
    leftInput = 1;
  else
    leftInput = 0;
  end if;
  v = der(x);
  posV = v >= 0;
  (leftOutput, rightOutput) = spatialDistribution(leftInput, rightInput, x, posV, initialPoints, initialValues);
  annotation(experiment(StartTime = 0, StopTime = 2, Tolerance = 1e-06, Interval = 0.004));
end SingleBeltEvent;
```
- Performance improvements. We are very precise and only interpolate when a value leaves the spatial distribution, but you can add a lot of data points to that list if the velocity is very close to zero. So we could do:
  a) some magic to reduce the list length after reaching some number `N` of list elements, or
  b) have a minimum `deltaX` between two list elements.
Also, we are currently allocating and freeing data every time we add or remove an element from the double-ended list we use to save `z(y,t)`. We can probably use the garbage collector to improve this.
I'll fix the first one and I hope @kabdelhak can take care of the second one.
For the third point we can wait until we have everything running correctly.
follow-up: 16 comment:13 by , 4 years ago
> I'll fix the first one
Done. That was kinda easy...
But I must admit that my solution is not perfect. We overshoot a tiny bit (more or less around the step size).
Now I'm testing the tests provided by @casella in comment 7. It seems my linear interpolation is not good enough and my event update not steep enough.
See the attached picture for Test1, comparing `out1` to the exact solution `out1_exact`.
Also, my function insists that `x` is changing during an event. I'm probably just updating my saved pre-value of x wrong.
by , 4 years ago
Attachment: | Test1_out1_OMEdit.png added |
---|
comment:14 by , 4 years ago
Replying to AnHeuermann:
>> In principle we should, because there could be a sharp change in the derivatives. But that is probably overkill. I guess it's enough to check that there are no discontinuities, and then let the error control algorithm somehow take care of sharp corners.
> I have the code for doing both, but at the moment I don't generate events when the input value didn't change. Otherwise we will get very many events, and whenever one event leaves the spatial distribution we will add a new one on the other side.
> I could add a flag to enable that, though.
This seems like a really tiny detail, but why not, if you have already implemented it. I'd keep it off by default.
comment:15 by , 4 years ago
Replying to AnHeuermann:
> We are very precise and only interpolate when a value leaves the spatial distribution, but you can add a lot of data points to that list if the velocity is very close to zero.
Well, if the velocity is very small, you have a large delay, so you need many points. That's how it is.
As long as the queue can be expanded dynamically (via malloc) I don't see a problem. I mean, even a million data points would take 8 MB; on today's computers that's nothing.
> So we could do:
> a) some magic to reduce the list length after reaching some number `N` of list elements, or
> b) have a minimum `deltaX` between two list elements.
I don't think that is necessary, as long as the buffer length can be increased dynamically, see above.
> Also, we are currently allocating and freeing data every time we add or remove an element from the double-ended list we use to save `z(y,t)`. We can probably use the garbage collector to improve this.
If you use a list, absolutely. But I'm afraid this could cause some memory management issues. One nice application of this operator is the modelling of pipes in district heating networks: you'll have a lot of pipes and very long simulations. I don't think having millions of mallocs and frees for 8-byte chunks will be particularly efficient.
I would have used a circular buffer, where you just need to allocate an array and keep a couple of pointers to the two ends of the buffer and a counter of the total number of elements. To add an element, you move the pointer to the right (possibly wrapping around when you reach the end) or to the left (ditto). To remove one, you just move the pointer back to the left (resp. right). When the buffer is full, you allocate one that is two (or four, or ten) times bigger and copy the values of the full buffer there. What do you think?
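To make the circular-buffer idea concrete, here is a minimal sketch of a growable buffer of (position, value) pairs (all names are illustrative only, not the actual OMC runtime data structures; push-front/pop-back would be implemented symmetrically):

```c
#include <stdlib.h>

/* Growable circular buffer storing the (y, z) samples of the spatial
 * distribution. Sketch only. */
typedef struct {
  double *pos;      /* positions y_i              */
  double *val;      /* values z(y_i)              */
  size_t head;      /* index of the first element */
  size_t count;     /* number of stored elements  */
  size_t capacity;  /* allocated slots            */
} SpatialBuffer;

static void sbInit(SpatialBuffer *b, size_t capacity)
{
  b->pos = (double*)malloc(capacity * sizeof(double));
  b->val = (double*)malloc(capacity * sizeof(double));
  b->head = 0; b->count = 0; b->capacity = capacity;
}

/* Double the capacity, copying the elements in logical order. */
static void sbGrow(SpatialBuffer *b)
{
  size_t i, newCap = 2 * b->capacity;
  double *newPos = (double*)malloc(newCap * sizeof(double));
  double *newVal = (double*)malloc(newCap * sizeof(double));
  for (i = 0; i < b->count; i++) {
    size_t j = (b->head + i) % b->capacity;
    newPos[i] = b->pos[j];
    newVal[i] = b->val[j];
  }
  free(b->pos); free(b->val);
  b->pos = newPos; b->val = newVal;
  b->head = 0; b->capacity = newCap;
}

/* Append a sample at the right end (the y = 1 side). */
static void sbPushBack(SpatialBuffer *b, double y, double z)
{
  size_t tail;
  if (b->count == b->capacity) sbGrow(b);
  tail = (b->head + b->count) % b->capacity;
  b->pos[tail] = y; b->val[tail] = z;
  b->count++;
}

/* Remove the sample at the left end (the y = 0 side); assumes count > 0. */
static void sbPopFront(SpatialBuffer *b, double *y, double *z)
{
  *y = b->pos[b->head]; *z = b->val[b->head];
  b->head = (b->head + 1) % b->capacity;
  b->count--;
}
```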
comment:16 by , 4 years ago
Replying to AnHeuermann:
> Now I'm testing the tests provided by @casella in comment 7. It seems my linear interpolation is not good enough and my event update not steep enough.
The Dymola output is much closer. But I guess they may be using higher-order interpolation; of course, piecewise linear is very crude.
follow-up: 18 comment:17 by , 4 years ago
The problem with the strange approximation of the outputs is now solved.
We now only change the saved spatial distribution after a step has been accepted, so if the integrator decides to re-do a step it will not affect the saved spatial distribution.
Currently @kabdelhak and myself are trying to improve event detection.
Our first approach was to simply trigger an event after it was thrown off the spatial distribution. That means we are about one time step too late with the event iteration.
To solve this we created a zero-crossing function that changes sign at events. While the idea sounded very promising, we are now a bit stuck. It seems that the function itself is still somewhat wrong.
Our idea: `zeroCrossFunc(x(time))` outputs 1 or -1, switching at every saved event point. But apparently if we remove one event from our stored spatial distribution, the sign of `zeroCrossFunc()` changes again, and that prevents the runtime from detecting the event.
So we need to come up with a better idea to improve the zero crossing function.
comment:18 by , 4 years ago
Replying to AnHeuermann:
> The problem with the strange approximation of the outputs is now solved.
> We now only change the saved spatial distribution after a step has been accepted, so if the integrator decides to re-do a step it will not affect the saved spatial distribution.
Sounds like a good idea :)
> So we need to come up with a better idea to improve the zero crossing function.
I would suggest the following approach.
You allocate memory for a certain maximum number of zero-crossing functions. Every time `in0` has a discontinuity and `v > 0`, you set `x0 = x` and create a zero-crossing function `zcf = x0 + 1 - x`. This function is initially positive, and will become negative when the event reaches the other end of the pipeline, i.e. `x = x0 + 1`. Once that event has been processed, you delete the corresponding `zcf` from the pool, because the discontinuity has left the pipeline and is gone forever.
Every time `in1` has a discontinuity and `v < 0`, you create `zcf = x - x0 + 1`.
Then, you can return the product of all active `zcf`'s to the solver, as the crossing function associated with that instance of `spatialDistribution()`. If there are none, you just return `1`. The function will always start positive, and then it will cross zero when one discontinuity reaches either boundary of the pipeline. Once the event has been processed, you discard its `zcf`, the product becomes positive again, and the game commences again.
Of course, this solution can accommodate only a finite number of events in the pipeline at any point in time. If that number is exceeded, you should give an error, or reallocate a larger memory pool, as with circular buffers.
What do you think?
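As a concrete illustration of this proposal (a sketch only: the names, the fixed pool size and the bookkeeping are made up, not actual OMC runtime code):

```c
#include <stddef.h>

#define MAX_PENDING_EVENTS 64  /* arbitrary pool size for the sketch */

typedef struct {
  double x0;     /* value of x when the discontinuity entered the buffer */
  int fromLeft;  /* 1: entered at y = 0 with v > 0; 0: entered at y = 1 with v < 0 */
} PendingEvent;

typedef struct {
  PendingEvent events[MAX_PENDING_EVENTS];
  size_t count;
} ZcfPool;

/* Register a discontinuity that just entered the spatial distribution. */
static void zcfAdd(ZcfPool *pool, double x, int fromLeft)
{
  if (pool->count < MAX_PENDING_EVENTS) {
    pool->events[pool->count].x0 = x;
    pool->events[pool->count].fromLeft = fromLeft;
    pool->count++;
  }
  /* else: report an error or grow the pool, as discussed above */
}

/* Crossing function handed to the solver: the product of all active zcf's,
 * or 1.0 if none are active. It starts positive and becomes negative when
 * a discontinuity reaches the opposite end of the pipeline; the processed
 * event is then removed from the pool. */
static double zcfEvaluate(const ZcfPool *pool, double x)
{
  double product = 1.0;
  size_t i;
  for (i = 0; i < pool->count; i++) {
    const PendingEvent *e = &pool->events[i];
    double zcf = e->fromLeft ? (e->x0 + 1.0 - x)   /* zero at x = x0 + 1 */
                             : (x - e->x0 + 1.0);  /* zero at x = x0 - 1 */
    product *= zcf;
  }
  return product;
}
```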
follow-up: 24 comment:19 by , 4 years ago
I found a working solution yesterday evening.
I'll attach my description of it. I think I switched some sign while implementing it, though.
by , 4 years ago
Attachment: | ZeroCrossingFunction.png added |
---|
comment:20 by , 4 years ago
There are currently two problems we still need to solve:
- For TestSpatialDiscretization.Test1 we are behind the correct solution by one time step. That is very strange. I'll investigate. Simple solution: smaller steps...
- Also for TestSpatialDiscretization.Test1 one needs to change

```modelica
(out0, out1) = spatialDistribution(in0, in1, x, v > 0, {0, 1}, {0, 0});
```

  to

```modelica
isPosVel = v > 0;
(out0, out1) = spatialDistribution(in0, in1, x, isPosVel, {0, 1}, {0, 0});
```

  because for `v > 0` we generate

```c
RELATIONHYSTERESIS(tmp1, data->localData[0]->realVars[20] /* v variable */, 0.0, 3, Greater);
```

  and it doesn't seem to change its value at the event when the velocity becomes negative.
@kabdelhak Can you look into this one?
comment:21 by , 4 years ago
@AnHeuermann, I think I know what we could do. I currently do not look inside the `spatialDistribution` call for events, but if I traverse its arguments I should find all of them.
If that does not work I will just create an auxiliary variable as you suggested.
Will try that later today, currently busy with something else!
comment:22 by , 4 years ago
And for

```
loadModel(Modelica, {"3.2.3"}); getErrorString();
loadModel(Buildings); getErrorString();
simulate(Buildings.Fluid.FixedResistances.Validation.PlugFlowPipes.PlugFlowAIT); getErrorString();
```
there seems to be a problem where the function calls for `storeSpatialDistribution` need to have `equationIndexes` for the `DIVISION_SIM` macro.
I gave 'em the spatial distribution index but would like to use the unique equation index for that.
@kabdelhak Can you do that?
Also, we are getting some broken pointers to double-ended lists:

```
assert | debug | dataDoubleEndedList: invalid node-pointer
```
I'll look into that.
comment:23 by , 4 years ago
All our small examples are now running fine.
And I can now simulate Buildings.Fluid.FixedResistances.Validation.PlugFlowPipes.PlugFlowAIT. But because the simulation runs for so long, the step size becomes too big, resulting in `deltaX := |old_x - x| > 1`, and that case is not handled yet.
We could reduce the step size, but with a maximum step size of `1e-6` I need around a minute until it complains at time 4. So if I need at least a minute of real time for 4 seconds of simulation, and the stopTime of the model is 603900, I may want to get a new cup of tea.
So handling `deltaX >> 1` is obviously a must-have.
comment:24 by , 4 years ago
Replying to AnHeuermann:
> I found a working solution yesterday evening.
> I'll attach my description of it. I think I switched some sign while implementing it, though.
@AnHeuermann, I have one fundamental doubt about your implementation. I always thought that ZCFs should be continuous functions that cross the zero axis at the event but are continuous in a neighbourhood of it. In other words, `x > 2` generates `zcf = x - 2`, not `zcf = noEvent(if x > 2 then 1 else -1)`. My proposal in comment:18 fulfilled this requirement.
I checked the IDA solver user's guide, section 2.3 Rootfinding. It says that the roots are sought with a modified secant method. From what I understand of your proposal, the zcf would be discontinuous when the event is triggered. The secant method won't be very efficient in this case.
Am I missing something?
comment:25 by , 4 years ago
In fact, the secant method applied to a step function is just the bisection method, whereby you gain a factor of 2 in accuracy at each iteration, i.e. a factor of about 1000 every 10 iterations.
Conversely, if the zcf is twice continuously differentiable, the convergence is in general superlinear. In this specific case, though, the zcf's I propose in comment:18 are linear, so the secant method will actually converge in one step.
I'm not sure if this issue is part of the inefficiency that you are reporting in comment:23, but I guess going from bisection to single-step convergence would probably accelerate the event handling process by one order of magnitude.
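For completeness, a one-line check of the last claim: applying the secant update to a linear crossing function f(x) = a·x + b gives the exact root after a single step, regardless of the two starting points:

```latex
x_{k+1} = x_k - f(x_k)\,\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}
        = x_k - \frac{a\,x_k + b}{a}
        = -\frac{b}{a}
```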
comment:26 by , 4 years ago
Okay, I fixed some more bugs and now we can simulate examples from Buildings. I added results for:
- Buildings.Fluid.FixedResistances.Validation.PlugFlowPipes.PlugFlowAIT
- Buildings.Fluid.FixedResistances.Validation.PlugFlowPipes.PlugFlowULg
- Buildings.Fluid.FixedResistances.Validation.PlugFlowPipes.TransportWaterAir
Not sure how to feed the txt reference files from GitHub/modelica-buildings into OMEdit to compare the results.
@casella or @kabdelhak: can you comment on the results I'm going to upload and compare them to Dymola?
by , 4 years ago
by , 4 years ago
by , 4 years ago
comment:27 by , 4 years ago
> You allocate memory for a certain maximum number of zero-crossing functions. Every time in0 has a discontinuity and v > 0, you set x0 = x and create a zero-crossing function zcf = x0 + 1 - x. This function is initially positive, and will become negative when the event reaches the other end of the pipeline, i.e. x = x0 + 1. Once that event has been processed, you delete the corresponding zcf from the pool, because the discontinuity has left the pipeline and is gone forever.
> Every time in1 has a discontinuity and v < 0, you create zcf = x - x0 + 1.
> Then, you can return the product of all active zcf's to the solver, as the crossing function associated with that instance of spatialDistribution(). If there are none, you just return 1. The function will always start positive, and then it will cross zero when one discontinuity reaches either boundary of the pipeline. Once the event has been processed, you discard its zcf, the product becomes positive again, and the game commences again.
> What do you think?
Sounds good.
> I have one fundamental doubt about your implementation. I always thought that ZCFs should be continuous functions that cross the zero axis at the event but are continuous in a neighbourhood of it. In other words, x > 2 generates zcf = x - 2, not zcf = noEvent(if x > 2 then 1 else -1).
I understand your concerns and you are right. It would be better to use zero crossings that are continuously differentiable around events. But for most events we generate discrete functions looking like

```c
tmp3 = GreaterEqZC(data->localData[0]->realVars[102] /* pipe.cor.timDel.u variable */, 0.0, data->simulationInfo->storedRelations[1]);
gout[2] = (tmp3) ? 1 : -1;
```

So we would need to change most zero-crossing functions to improve the convergence speed of the root-finding algorithms. For the moment this zero crossing is just as inefficient as all the others, but otherwise it works fine.
comment:28 by , 4 years ago
For better zcf's see #6411. For the time being we can keep them as they are.
comment:29 by , 4 years ago
@AnHeuermann, please check the attached TestSpatialDistribution.mo. The third test there works fine, while the first two fail at time = 0.5 with

```
Simulation process failed. Exited with code -1.
Boolean isPositiveDirection doesn't match with direction x is moving.
```
Maybe related to #6372?
I would also suggest to add them to the testsuite.
by , 4 years ago
Attachment: | PlugFlowAIT.mat added |
---|
by , 4 years ago
Attachment: | PlugFlowULg.mat added |
---|
by , 4 years ago
Attachment: | TransportWaterAir.mat added |
---|
comment:30 by , 4 years ago
@AnHeuermann, I loaded the simulation results of the three Buildings models using the operator. The results are mostly good, but there are still some discrepancies, please check.
Thanks!
by , 4 years ago
Attachment: | TestSpatialDistribution.mo added |
---|
comment:31 by , 4 years ago
I just updated the test package, as per email to @AnHeuermann on 02/04/2021.
Test1 works relatively fine if you simulate until time = 0.5. However, there are some not-so-nice numerical glitches if you compare `out1` with `out1_exact`. I guess they should be avoided - it is fine if we have some approximation, but at least monotonicity and smoothness should be preserved, otherwise bad artefacts could be generated downstream, e.g. if there is a high-pass filter at the output of the `spatialDistribution` operator.
At time = 0.5 the velocity changes abruptly from +4 to -4. This should be allowed, according to the specification, but unfortunately there is some inconsistency with event handling that triggers an assert in that case, when you simulate with stopTime = 1.
```
Boolean isPositiveDirection doesn't match with direction x is moving.
Simulation process failed. Exited with code 0xffffffffffffffff.
```
I suspect this is related to the assert issue of #6372.
Same story for Test2.
The third test is mostly successful, though it shows that there are still some issues with the handling of events: when discontinuities in the buffer reach the output position, an event is either not generated or not handled correctly, so the output transient is spread out until the next communication time step (every 0.001 s in this case). It's not that bad, but I understood we wanted to actually store the two discontinuous values when an event is generated at the input, and generate another event when the discontinuity reaches either end of the spatial buffer, so that the discontinuity is delayed but preserved. Did I get this right?
@AnHeuermann, it would be good if you could have a look at these issues on the test model I just updated and fix them. Then, I can have a closer look at the Buildings examples and finally close this ticket for good.
comment:32 by , 4 years ago
Cc: | added |
---|
follow-up: 35 comment:34 by , 4 years ago
The first of the issues is solved; the "input" side doesn't contain glitches any more.
Now to the problem with the wrong `isPositiveDirection`:

```
stdout | error | Boolean isPositiveDirection doesn't match with direction x is moving
```
The main problem is that we have one event too many, search for it, and while doing so step back and forth.
For example, in TestSpatialDistribution.Test2 there is a state event at time `t=0.5`. We do the event iteration and then want to do the next step to `t=0.502`. We detect another event, so we search for the event with bisection and "find" it at `t=0.51`.
Apparently at the end of the event iteration the value of `out1` is still at the pre-value. If I manage to fix that, I have a way to handle the `isPositiveDirection` issue. Then we would still have one event too many, but at least it wouldn't break anything.
Compare the attached screenshot of `out1` and the reference `out1_exact` to see how it currently looks.
comment:35 by , 4 years ago
Replying to AnHeuermann:
> The first of the issues is solved; the "input" side doesn't contain glitches any more.
Excellent, thanks! In fact, this may not be such a big deal, as long as `out0` is wrong only when `v > 0`, since in that case it shouldn't be used. But I think it is better to have it smooth and correct anyway.
> Now to the problem with the wrong `isPositiveDirection`:
> `stdout | error | Boolean isPositiveDirection doesn't match with direction x is moving`
> The main problem is that we have one event too many, search for it, and while doing so step back and forth.
> For example, in TestSpatialDistribution.Test2 there is a state event at time `t=0.5`. We do the event iteration and then want to do the next step to `t=0.502`.
So far so good.
> We detect another event
Why does this ever happen?
> so we search for the event with bisection and "find" it at `t=0.51`.
> Apparently at the end of the event iteration the value of `out1` is still at the pre-value.
I guess you need to update the values at the buffer end after the event iteration. Do I get this right?
> If I manage to fix that, I have a way to handle the `isPositiveDirection` issue. Then we would still have one event too many, but at least it wouldn't break anything.
> Compare the attached screenshot of `out1` and the reference `out1_exact` to see how it currently looks.

Doesn't look very nice, though it's true that once v < 0, the value of out1 is largely irrelevant.
comment:36 by , 4 years ago
@AnHeuermann, I understand you have a fix that allows the simulation to continue in this situation, even though the event handling is not as good as it should be. Since flow reversals are rare events in simulations, I would suggest you push it, so we can close this issue, because the feature is now implemented.
We can then open another one for (small) improvements to it.
comment:37 by , 4 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
The feature is now implemented in master.
comment:38 by , 4 years ago
Description: | modified (diff) |
---|
@casella, @kabdelhak, and myself had a meeting about it and came up with a rough plan on how to implement the `spatialDistribution` operator in the C runtime of OpenModelica. @kabdelhak will work on the backend and code generation, and I will work on the simulation runtime and the test cases.
We are working on https://github.com/AnHeuermann/OpenModelica/tree/spatialDistribution.