Opened 10 years ago
Last modified 10 years ago
#2884 new discussion
Performance tracing for compiler and simulation stages
| Reported by: | Lennart Ochel | Owned by: | somebody |
|---|---|---|---|
| Priority: | high | Milestone: | Future |
| Component: | *unknown* | Version: | trunk |
| Keywords: | | Cc: | Adrian Pop, Martin Sjölund, Adeel Asghar, Volker Waurich, Marcus Walther |
Description
I think it is necessary to trace the performance of both the OpenModelica compiler and the generated simulations. To that end, a new Hudson job could be added that runs a (small) set of models for each revision and measures the elapsed time of the different compiler and simulation stages:
- Compiler: Front end, back end (in total and for each optimization module), SimCode, …
- Simulation: Non-linear solvers, linear solvers, event iteration, …
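As a rough illustration of how such per-stage timings could be collected, here is a minimal Python sketch that parses stage-timing lines out of a compiler log. The log format shown is an assumption, loosely modeled on omc's execstat-style output rather than the exact format:

```python
import re

# Hypothetical per-stage timing lines; the exact wording and the
# "time <stage-seconds>/<cumulative-seconds>" layout are assumptions.
LOG = """\
Performance of FrontEnd: time 1.502/1.502
Performance of Backend: time 0.874/2.376
Performance of SimCode: time 0.215/2.591
"""

STAGE_RE = re.compile(r"Performance of (\S+): time ([0-9.]+)/([0-9.]+)")

def parse_stage_times(log):
    """Return {stage: seconds spent in that stage} for every matching line."""
    return {m.group(1): float(m.group(2)) for m in STAGE_RE.finditer(log)}
```

Such a dictionary per revision would be the raw material for both the text summary and the graphs proposed below.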
For reliable measurements, this job should probably run on a dedicated machine.
Of course it would also be interesting to test different OS and hardware configurations, but that would be too much for now. For a start, a single fixed setup for the measurements is good enough.
The set of models should contain a couple of the biggest models from some libraries, as well as a couple of scalable models (probably somewhat artificial) to see whether everything scales as expected and to detect bottlenecks and bad commits.
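For the scalable models, one simple way to check whether everything scales as expected is to fit the slope of log(time) versus log(size): a slope near 1 suggests linear scaling, near 2 quadratic, and so on. A minimal sketch using plain least squares, with no external dependencies:

```python
import math

def scaling_exponent(sizes, times):
    """Least-squares slope of log(time) vs log(size).

    A result of ~1.0 indicates linear scaling, ~2.0 quadratic, etc.
    """
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```

Run against measurements of a scaled model (e.g. an N-pendulum at several N), a rising exponent across revisions would point directly at a newly introduced bottleneck.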
The results of such performance tracing should be provided both as a text summary and as intuitive graphs. With these results, many performance issues could be detected and fixed far more efficiently, and it would help to keep every OpenModelica build efficient.
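The text summary could, for instance, compare per-stage times of a new revision against a baseline and flag regressions. A hypothetical sketch (the 10% threshold, the stage names, and the data layout are all assumptions):

```python
def summarize(base, new, threshold=0.10):
    """Build a text summary comparing per-stage timings of two revisions.

    base, new: dicts mapping stage name -> seconds.
    Stages that slowed down by more than `threshold` are flagged.
    """
    lines = []
    for stage in sorted(base):
        b, n = base[stage], new[stage]
        delta = (n - b) / b
        flag = "  <-- regression" if delta > threshold else ""
        lines.append(f"{stage:10s} {b:8.3f}s {n:8.3f}s {delta:+7.1%}{flag}")
    return "\n".join(lines)
```

A nightly job could mail this summary and feed the same numbers into per-stage trend graphs.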
Attachments (1)
Change History (3)
comment:1 by , 10 years ago
| Type: | defect → discussion |
|---|---|
by , 10 years ago
| Attachment: | ExecStatScaling.png added |
|---|---|
comment:2 by , 10 years ago
We have such a benchmark running on our HPCOM-Jenkins. With the help of the synthetic N-Pendulum we can scale the model easily and see how long the individual HPCOM back-end parts take, which helps us improve their performance. I have attached a screenshot (ExecStatScaling.png) where you can see that we have some performance problems with the GRS and the DAE parts of HPCOM.
HPCOM-Jenkins benchmark