The white paper on dataset algorithm performance assessment based upon all efforts is now available for discussion. When posting, please remember to follow the house rules. Please also take the time to read the full PDF before commenting and, where possible, refer to one or more of the section titles, page numbers and line numbers so that your comment can easily be cross-referenced with the document.
The recommendations are reproduced below:
• Assessment criteria should be developed entirely independently of the dataset developers and should be pre-determined and documented in advance of any tests.
• It is crucial that the purpose to which a dataset could be put be identified and that a corresponding set of assessment criteria, suitable for that purpose, be derived.
• The output of an assessment should be to determine whether a dataset is fit for a particular purpose and to enable users to determine which datasets are most suitable for their needs. Outputs should be clearly documented in a form that provides a clear decision tree for users.
• Validation of an algorithm should always be carried out on a different dataset from that used to develop and tune the algorithm.
• A key issue is determining how well the uncertainty estimates supplied with datasets represent the difference between the derived value and the “true” real-world value.
• It would be worthwhile to consider the future needs for the development of climate services by identifying an appropriate set of regions or stations that any assessment should include.
• New efforts resulting from this initiative should be coordinated with on-going regional and national activities to rescue and homogenize data.
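The recommendation that validation use data withheld from algorithm development can be sketched as a simple holdout split. This is only an illustrative sketch, not a procedure from the white paper: the station identifiers, the `holdout_split` helper and the 30% validation fraction are all hypothetical.

```python
import random

def holdout_split(stations, validation_fraction=0.3, seed=42):
    """Partition station records into development and validation sets.

    Hypothetical helper: the recommendation only requires that validation
    be carried out on data not used to develop or tune the algorithm.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(stations)
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    # First slice is held out for validation; the rest is for development.
    return shuffled[n_val:], shuffled[:n_val]

# Illustrative station list (hypothetical identifiers).
stations = [f"station_{i:03d}" for i in range(100)]
dev, val = holdout_split(stations)
assert not set(dev) & set(val)  # the two sets must not overlap
```

In practice the split for climate station data would usually need to respect spatial and temporal correlation (e.g. withholding whole regions or periods rather than random stations), which a purely random split like this does not address.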
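One simple way to probe whether stated uncertainties reflect the difference from the “true” value is a coverage check: for well-calibrated Gaussian one-sigma uncertainties, roughly 68% of differences from a reference truth should fall within ±1 sigma. The sketch below assumes such a reference is available; the function name and all numbers are illustrative, not from the paper.

```python
def coverage_fraction(values, truths, sigmas, k=1.0):
    """Fraction of cases where |value - truth| <= k * sigma.

    For well-calibrated Gaussian uncertainties, expect roughly 0.68 at k=1.
    Hypothetical helper for illustration only.
    """
    hits = sum(1 for v, t, s in zip(values, truths, sigmas)
               if abs(v - t) <= k * s)
    return hits / len(values)

# Illustrative data: derived values with a stated 1-sigma uncertainty of 0.5
# compared against a reference "truth" of 10.0.
values = [10.2, 9.7, 10.6, 9.9, 11.4, 10.1]
truths = [10.0] * 6
sigmas = [0.5] * 6
frac = coverage_fraction(values, truths, sigmas)  # 4 of 6 fall within 1 sigma
```

A coverage well below the nominal level would suggest the dataset's uncertainties are too optimistic; well above, too conservative. A real assessment would of course need far more cases and a defensible reference truth.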