Opened 9 years ago
Closed 7 years ago
#3862 closed discussion (fixed)
Improving coverage testing
Reported by: | Lennart Ochel | Owned by: | Martin Sjölund |
---|---|---|---|
Priority: | high | Milestone: | |
Component: | Testing Framework | Version: | |
Keywords: | | Cc: | Adrian Pop, Martin Sjölund, Rüdiger Franke, Francesco Casella |
Description
Recently I pushed some bad changes that broke two PNlib models. As a result, the coverage (the number of verified simulations) went down, but nobody noticed. It would be good to catch such cases as soon as possible, and not only for PNlib but for all the other libraries as well.
I already had a short discussion with Adrian, and what I understood is that Hudson can only send emails for broken tests, i.e. tests that were at 100% before and then dropped.
I don't think that is good enough: we should somehow get a notification whenever one of the coverage signals (compile, simulation, verified) goes down, even if it was not at 100% before.
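To make this concrete, here is a minimal sketch of the comparison I mean: diff the per-library coverage counts of two runs and flag every signal that decreased, even if it was never at 100%. This is illustrative only, not an existing Hudson job or script; the report layout and signal names are assumptions for the example.

```python
# Illustrative sketch only: compare two coverage reports and list regressions.
# The report layout {library: {signal: count}} and the signal names are
# assumptions for this example, not the actual Hudson data format.

def coverage_regressions(previous, current):
    """Return (library, signal, old, new) for every signal whose count dropped."""
    regressions = []
    for library, old_signals in previous.items():
        new_signals = current.get(library, {})
        for signal, old_count in old_signals.items():
            new_count = new_signals.get(signal, 0)
            if new_count < old_count:
                regressions.append((library, signal, old_count, new_count))
    return regressions


if __name__ == "__main__":
    previous = {"PNlib": {"compile": 30, "simulation": 28, "verified": 25}}
    current = {"PNlib": {"compile": 30, "simulation": 28, "verified": 23}}
    for library, signal, old, new in coverage_regressions(previous, current):
        print(f"{library}: {signal} dropped from {old} to {new}")
```

Running a comparison along these lines after each library job and mailing the result whenever the list is non-empty would have caught the PNlib regression described above.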
Any comments?
Change History (3)
comment:1 by , 9 years ago
Description: modified (diff)
comment:2 by , 7 years ago
Milestone: → Future
Owner: changed
Status: new → accepted
comment:3 by , 7 years ago
Resolution: → fixed
Status: accepted → closed
This was resolved by the new library coverage testing (https://github.com/OpenModelica/OpenModelicaLibraryTesting).