The study summarizes the results of comparing solar irradiance model data from various institutional and commercial providers against ground measurements collected at 129 sites distributed globally. It helps solar developers navigate the available databases.
As a result of a collaborative effort of international experts in the field of solar energy, the International Energy Agency (IEA) has recently released the report “Worldwide Benchmark of Modeled Solar Irradiance Data 2023”. The work was carried out under Task 16 of the IEA Photovoltaic Power Systems Programme (PVPS), titled “Solar Resource for High Penetration and Large-Scale Applications”.
The report presents a benchmark of model-derived direct normal irradiance (DNI) as well as global horizontal irradiance (GHI) data at the sites of 129 globally distributed ground-based radiation measurement stations.
The development of accurate solar irradiance models plays a vital role in designing, financing, and operating solar power assets.
Their ability to provide multi-year, sub-hourly datasets allows solar developers to carry out detailed studies of future assets. In addition, keeping the models up to date gives solar operators and asset managers a reliable reference for their performance assessments.
Most solar models evaluated in the IEA study use imagery from geostationary satellites (such as Meteosat Second Generation) as the main data source. Some modeled data sets combine imagery from more than one satellite to reach global coverage, whereas others cover only a part of a single satellite’s field of view.
Models that use different methodologies are also included in the report: one model based on Numerical Weather Prediction (NWP) reanalysis and another based on imagery from polar-orbiting satellites.
The IEA study looked at 129 ground stations, selected from an initial set of 161 sites after discarding stations that did not fulfil the data quality requirements.
The study underlined the importance of reference data quality. One of its conclusions confirmed that “without a stringent quality control procedure, no real validation can be done, with the risk of obtaining invalid results”.
The applied quality control covers several aspects and includes automated tests for missing timestamps, missing values, K-tests, closure tests, extremely-rare-limits tests, physically-possible-limits tests, and tracker-off tests.
The quality control included visual inspections as well, covering aspects such as shading assessment, closure test, AM/PM symmetry check for GHI, and calibration check using the clear-sky index.
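To illustrate what such automated tests look like, here is a minimal sketch of two of them: a physically-possible-limits test for GHI and a three-component closure test. This is not the Task 16 implementation; the thresholds follow the commonly used BSRN quality-control recommendations and are assumptions for illustration.

```python
import numpy as np

S0 = 1361.0  # approximate solar constant in W/m^2 (Earth-Sun distance correction ignored)

def ghi_physically_possible(ghi, sza_deg):
    """BSRN-style physically-possible-limits test for GHI.

    Returns True where the measurement passes: above -4 W/m^2 and
    below the elevation-dependent upper bound 1.5*S0*mu0^1.2 + 100.
    """
    mu0 = np.maximum(np.cos(np.radians(sza_deg)), 0.0)
    upper = 1.5 * S0 * mu0**1.2 + 100.0
    return (ghi > -4.0) & (ghi < upper)

def closure_test(ghi, dhi, dni, sza_deg):
    """Three-component closure test.

    GHI should agree with DHI + DNI*cos(SZA) within roughly 8 %
    (BSRN recommendation for solar zenith angles below 75 degrees).
    The test is skipped (treated as passing) at very low irradiance,
    where the ratio becomes unstable.
    """
    mu0 = np.cos(np.radians(sza_deg))
    recomputed = dhi + dni * mu0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(recomputed > 50.0, ghi / recomputed, np.nan)
    ok = np.abs(ratio - 1.0) < 0.08
    return np.where(np.isnan(ratio), True, ok)
```

In practice such tests are run per timestamp on the full measurement series, and only periods passing all of them enter the validation.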
The benchmark results show “noticeable deviations in performance between the various modeled data sets” that were evaluated.
In particular, deviation metrics of data sets based mainly on geostationary satellite imagery are closer to each other than to the NWP-based and polar satellite-based data sets. Specifically, the report mentions that “lowest average deviation metrics are often achieved by a single data set (Solargis)”.
A complete set of performance metrics was calculated, including the relative mean bias deviation (rMBD), which is of paramount interest for the analysis as it directly reflects the overall under- or over-estimation.
Besides rMBD, the study also calculated the Combined Performance Index (CPI), which combines several aspects of the model performance into one index. This includes the magnitude of deviations between modeled and ground-measured data (described with rRMSD), the similarity of data distributions (described with rKSI) and the relative frequency of exceedance situations (described with rOVER).
A small CPI indicates good performance of the test data set.
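The metrics above can be sketched in a few lines. The rMBD and rRMSD definitions below are standard; the CPI weighting shown is the commonly cited formulation from the literature (CPI = (rKSI + rOVER + 2·rRMSD) / 4, after Espinar et al.) and is an assumption here, as the report's exact definition is not reproduced in this article.

```python
import numpy as np

def rmbd(model, meas):
    """Relative mean bias deviation in %: overall under-/over-estimation."""
    model, meas = np.asarray(model, float), np.asarray(meas, float)
    return 100.0 * np.mean(model - meas) / np.mean(meas)

def rrmsd(model, meas):
    """Relative root mean square deviation in %: magnitude of deviations."""
    model, meas = np.asarray(model, float), np.asarray(meas, float)
    return 100.0 * np.sqrt(np.mean((model - meas) ** 2)) / np.mean(meas)

def cpi(rrmsd_val, rksi_val, rover_val):
    """Combined Performance Index (assumed weighting, see lead-in).

    Combines deviation magnitude (rRMSD), distribution similarity (rKSI)
    and exceedance frequency (rOVER) into a single score; smaller is better.
    """
    return (rksi_val + rover_val + 2.0 * rrmsd_val) / 4.0
```

For example, a model that reads 110 W/m² wherever the ground station measured 100 W/m² has rMBD = rRMSD = 10 %.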
Future work for the IEA Task 16 should include updates of this benchmark study, including sites over regions that are so far not covered well or at all.
It will also involve new modeled data sets and updated versions of the current data sets. In addition, the participants of this Task are planning further analysis to evaluate the expected positive impact of various post-processing methods on the modeled data (known as “site adaptation”).
All these activities will provide valuable insights for the solar energy industry and keep strengthening collaboration among the members of this Task. Its ultimate goal is to lower planning and investment costs for PV power systems by enhancing the quality of resource assessments and solar forecasts.
If you want to learn more about Solargis solar data validation, see our documentation.