﻿id	summary	reporter	owner	description	type	status	priority	milestone	component	version	resolution	keywords	cc
3678	Efficient flattening (and code generation) for large-scale network models	Francesco Casella	Per Östlund	"I have tried to summarize in a representative model the features of large-scale power system models that are currently stressing the compiler performance in terms of code generation time.

Please have a look at the attached test package.

The basic model {{{ResistorSource}}} is a resistor in series with a controlled voltage source, whose voltage is determined by a sub-model of type {{{FirstOrder}}} containing a first-order linear system. The forcing signal of the first-order system is bound to zero by default, but it can be changed with a binding equation when instantiating the {{{ResistorSource}}} model. Many basic models are instantiated and connected together; some have a modifier on the binding equation, some don't.

{{{SystemSmall}}} shows a simple example, while {{{GenerateSystemLarge}}} automatically generates the Modelica source code for systems of arbitrary size. With the default parameters, the large system has 40000 equations, and the front-end takes 43 seconds to flatten it on my PC. If you want to go up to the scale we need, you can multiply the size by a factor of 10-20, though, to use Per's words, life's too short to study that case with the current compiler :)

We need to show that there are strategies that can be implemented in OMC to shorten this processing time drastically.

Regarding the front-end, I understand from Adrian that it would be possible to use some caching strategy to avoid re-doing all the lookup time and again for each instance. Basically, the first time the compiler encounters this declaration
{{{
  TestCaching.ResistorSource rs_1(R = 1, u = sin(time));
}}}
it will do all the lookup and flattening for the ResistorSource class with a time-varying binding equation on u and a constant binding equation on R. This structure could be labelled as a specific type. Then, the next time the compiler encounters another declaration with exactly the same type (except for the numerical values of parameters!), it could avoid re-doing the flattening and instantiation and just use the cached results.
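As a reference point, here is a minimal sketch of such a structural cache in Python (all names and data structures here are hypothetical, not actual OMC internals): the cache key combines the class path with the *structure* of the modifiers, so declarations that differ only in parameter values hit the same cached flattening.

```python
# Hypothetical sketch of modifier-structure caching; none of these
# names are actual OMC internals.

def modifier_signature(modifiers):
    # Collapse literal parameter values to a placeholder so that
    # declarations differing only in values get the same signature;
    # general binding expressions are kept verbatim.
    sig = []
    for name, expr in sorted(modifiers.items()):
        tag = 'value' if isinstance(expr, (int, float)) else expr
        sig.append((name, tag))
    return tuple(sig)

flatten_cache = {}
flatten_count = 0

def flatten_class(class_path, modifiers):
    # Stand-in for the expensive lookup + flattening work.
    global flatten_count
    flatten_count += 1
    return ('flat', class_path, modifier_signature(modifiers))

def instantiate(class_path, modifiers):
    key = (class_path, modifier_signature(modifiers))
    if key not in flatten_cache:
        flatten_cache[key] = flatten_class(class_path, modifiers)
    # A real compiler would now substitute the instance-specific
    # values into the cached template.
    return flatten_cache[key]

# rs_1 and rs_2 differ only in the value of R: one flattening.
instantiate('TestCaching.ResistorSource', {'R': 1, 'u': 'sin(time)'})
instantiate('TestCaching.ResistorSource', {'R': 2, 'u': 'sin(time)'})
# rs_3 leaves u at its default binding: new structure, new entry.
instantiate('TestCaching.ResistorSource', {'R': 3})
```

With this scheme the 40000-equation system would trigger only a handful of flattenings, one per distinct modifier structure, instead of one per instance.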

Would it be possible to come up with a prototype in the front-end that can use this strategy to process the {{{SystemLarge}}} model? It would then be very interesting to compare its performance with the currently available one, and understand if this kind of strategy pays off.

A second, further stage could involve the back-end. Assuming we use a native DAE solver (IDA/KINSOL), we could avoid actually flattening all the individual instances in the front-end, and instead pass the collected types, and just pointers to the various instances, to the back-end. Code to compute the DAE residuals could be generated only once for each type, and then just be called many times, once for each instance. This could save a tremendous amount of time and space which is currently spent generating the same code N times and then compiling it to executable form.
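As a toy illustration of the once-per-type idea (hypothetical Python, not generated OMC code; for concreteness I assume a first-order residual res = der(x) - (u - x)/T per instance, while the actual equations are in the attached package): the residual function is built once for the type, and the solver-facing residual just loops over instance data.

```python
# Hypothetical sketch, not OMC-generated code. Assumes (for
# illustration only) a first-order residual res = xdot - (u - x)/T
# per instance; the real equations live in the attached package.

def make_resistor_source_residual():
    # Emitted once for the ResistorSource *type*.
    def residual(t, x, xdot, params):
        T = params['T']
        u = params['u'](t)   # forcing signal, possibly time-varying
        return xdot - (u - x) / T
    return residual

residual_fn = make_resistor_source_residual()  # single code-gen step

# N instances share the compiled residual; only their data differs.
instances = [
    {'T': 0.1, 'u': lambda t: 0.0},   # default: forcing bound to zero
    {'T': 0.2, 'u': lambda t: 1.0},   # modified binding
]

def system_residual(t, x, xdot):
    # What a DAE solver such as IDA would call with the full state
    # vectors; one state slot per instance, for simplicity.
    return [residual_fn(t, x[i], xdot[i], p)
            for i, p in enumerate(instances)]
```

Generating and compiling {{{residual_fn}}} once, instead of N textually identical copies, is where the time and space saving would come from.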

Note that if we use a DAE integrator and a priori knowledge about the system, we could skip most of the current back-end stages: matching and sorting, index reduction, etc.

I understand this second step is a much longer shot, but would it be possible to come up with a prototype that could run on this demo example, to gauge the performance improvements?"	enhancement	new	high	Future	*unknown*				Andrea Bartolini
