id	summary	reporter	owner	description	type	status	priority	milestone	component	version	resolution	keywords	cc
4948	Issues with verification reporting on Hudson	Francesco Casella	Adrian Pop	"Please check the [https://libraries.openmodelica.org/branches/newInst/ScalableTestSuite/ScalableTestSuite.html report of the ScalableTestSuite Hudson test]. For some models, e.g. {{{SimpleAdvection_N_XXX}}}, the smaller instances pass verification successfully, while for the larger ones the verification result cell is brown.

The reference files were created using the same logic for all sizes, so I don't think there is anything wrong with them (though there could still be).

It doesn't seem to be a matter of the number of variables, either. If you check {{{SimpleAdvection_N_XXX}}}, the first two cases have the same number of checked signals despite their different sizes. On the other hand, in this class of models the number of simulation intervals grows with N, so maybe the problem is that some data files have too many time points (rather than too many signals). In some cases you really need a lot of detail, so if such a limit exists, I would suggest removing it, or at least setting it to a much higher value than the current one.

The other issue is that some of the models that fail verification have broken links to the JavaScript and CSV files that demonstrate the mismatch; see, e.g., [https://libraries.openmodelica.org/branches/newInst/ScalableTestSuite/files/ScalableTestSuite_ScalableTestSuite.Power.ConceptualPowerSystem.ScaledExperiments.PowerSystemStepLoad_N_4_M_4.diff.html this link]. This does not affect all failing models, though; see, e.g., [https://libraries.openmodelica.org/branches/newInst/ScalableTestSuite/files/ScalableTestSuite_ScalableTestSuite.Electrical.TransmissionLine.Verification.TransmissionLineCheck.diff.html this link]."	defect	closed	critical	1.13.0	Testing Framework		fixed		
