Solar power’s star is rising resolutely, as lower production costs open up new markets around the world and solar projects scale up rapidly. Over the last two decades, the size of solar installations has increased dramatically, and we are witnessing the rise of ‘megaprojects’.
Such giants include the 1.6GW Benban project in Egypt, India’s 2.2GW Bhadla park, and the Asian Renewable Energy Hub, a planned 15GW wind/solar complex in Australia. But these massive projects are not the only evidence of increasing scale – the typical size of utility-scale solar farms has grown from tens of MW in 2013 to hundreds of MW in 2020.
This brings new opportunities, as scale reduces costs further and ensures the continued growth of the solar sector into untapped markets. Yet this growth may expose inadequacies in traditional approaches to managing weather-related risks.
Climate change, grid integration, large-scale project design in the hyper-competitive post-subsidy market, a trend towards ‘data shopping’: these factors could hamper solar’s growth if they are not addressed responsibly.
Faced with these challenges, solar developers relying on outdated standards of solar resource assessment and forecasting may be left unable to accurately evaluate the future production of their assets. Unreliable data affects valuations of proposed and existing projects, with an impact that is felt through the entire project lifecycle.
And, in the world of ‘megaprojects’, there is little room for error. On average, a 1% decrease in asset performance translates to lost revenue of EUR 1,000/MW/year. For a gigawatt-scale project, this could equate to €25,000,000 over a typical 25-year project lifetime. An entirely avoidable loss – but also an entirely achievable gain, with the right data partner.
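The arithmetic behind that figure is worth making explicit. A minimal sketch, assuming a 25-year lifetime and the EUR 1,000/MW/year-per-percent rule of thumb above (function name and parameters are illustrative, not from any standard tool):

```python
def lifetime_revenue_loss(capacity_mw, performance_drop_pct, lifetime_years,
                          eur_per_mw_year_per_pct=1000.0):
    """Lost revenue (EUR) over a project's lifetime, using the rule of
    thumb of ~EUR 1,000/MW/year for each 1% drop in performance."""
    return (performance_drop_pct * eur_per_mw_year_per_pct
            * capacity_mw * lifetime_years)

# A 1 GW project losing 1% of performance over a 25-year lifetime:
loss = lifetime_revenue_loss(capacity_mw=1000, performance_drop_pct=1.0,
                             lifetime_years=25)
# loss -> 25,000,000 EUR
```

The same rule scales linearly, so a 0.5% data-driven improvement on a 500MW plant is still worth over €6 million across its lifetime.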
To support solar developers as they build at greater scale than ever before, we have ranked the five most critical data challenges facing the successful global deployment of large scale solar. Over the coming weeks, we’ll be covering each of these challenges in more depth.
Unlike fossil fuels, solar energy has stood unaided in many markets around the world in recent years. Financial support and subsidies from governments are no longer a certainty, so the solar sector has learnt to compete on financial grounds alone, adopting new technologies such as bi-facial PV modules and trackers to support ever-lower prices at auctions.
In this environment, project design is being pushed to the limits, with very little margin for error. To identify the optimum configuration for a project, solar developers need to run hundreds of simulations. Without good data inputs, developers risk making costly mistakes as assets fail to deliver on projected production.
Using the best modern data sets enables solar developers to create profitable project designs that squeeze every kilowatt-hour out of assets. The full potential of next-generation technologies such as bi-facial modules, trackers, and high DC/AC ratios can only be unlocked with accurate, high-resolution data – ensuring that solar stays competitive in the years ahead.
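To illustrate the kind of design sweep those simulations involve, the sketch below searches a grid of candidate configurations for the highest simulated yield. The `simulated_yield` function is a toy stand-in for a real energy-yield simulation (its numbers are made up for illustration); the point is the search pattern, where pushing one parameter too far is punished by a trade-off such as inverter clipping:

```python
from itertools import product

def simulated_yield(dc_ac_ratio, tilt_deg):
    """Toy stand-in for a real energy-yield simulation (illustrative
    numbers only): yield grows with DC/AC ratio until inverter clipping
    starts to dominate, and falls away from an assumed optimal tilt."""
    clipping_penalty = max(0.0, dc_ac_ratio - 1.3) * 4000  # made-up cost of clipping
    tilt_penalty = abs(tilt_deg - 25) * 2                  # made-up cost of mis-tilt
    return 1900 * dc_ac_ratio - clipping_penalty - tilt_penalty

# Sweep candidate configurations and keep the best-performing one.
candidates = list(product([1.1, 1.2, 1.3, 1.4], [15, 20, 25, 30]))
best_config = max(candidates, key=lambda cfg: simulated_yield(*cfg))
# best_config -> (1.3, 25): beyond a 1.3 DC/AC ratio, the clipping
# penalty outweighs the extra DC capacity in this toy model.
```

With accurate input data, a sweep like this converges on a genuinely optimal design; with poor data, it converges just as confidently on the wrong one.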
Solar is facing a changing risk profile. Resource risks are increasing due to fluctuations of established weather patterns caused by climate change. Solar resource and meteorological data reveal weather extremes already affecting solar asset owners, as seen, for example, in the recent underperformance of US East Coast solar assets.
Meanwhile, wildfires in Australia and on the US West Coast have threatened the physical integrity of assets and led to widespread underperformance. This challenge will intensify as the industry experiences an increasingly volatile climate.
Reducing resource risk is seen as a high priority for leading businesses in solar energy. Controlling the weather is outside of our capabilities, but what solar developers, owners and operators can control is their knowledge. Vital, trusted intelligence on the solar resource, albedo, and specific geographical conditions at project sites can be the difference between an asset which retains its value throughout its lifecycle, and one which fails to live up to its potential.
The industry has increasingly realised the value of using a range of validated meteorological and environmental data to underpin a more complete evaluation of risks faced by projects. Extreme wind or temperature, increased soiling, corrosion, dust, snow, changing albedo or flash floods – all require specific, accurate data to evaluate and forecast with confidence.
A critical challenge of the global solar energy rollout has been grid integration. Solar generation is seasonal, and it stops at night. Grids can also be affected by high variability in solar output due to changes in cloud cover. This challenge drives continued investment in ‘solar plus storage’ and solar/wind hybrid projects, as the industry readies for large-scale grid integration and for the technical requirements of modern smart grids.
As energy infrastructure increases in complexity, serious questions arise about the standard of data currently in use across the solar sector. To overcome the limitations of solar PV and to meet new grid requirements, the industry must rely on accurate and highly granular data. Underestimating the need for reliable data may result in curtailment due to overproduction, or in penalties from grid operators when plants fail to deliver on their commitments. For large-scale projects, reliable data is essential – no operator wants to be held responsible for a blackout.
Large-scale solar projects take longer to progress from project award to financial close. In that time, developers can gain more certainty around their projects’ profitability by taking measurements of solar radiation and, increasingly, albedo. Yet reliable on-site measurement is often neglected, forfeiting a valuable source of information needed to adapt satellite-based model outputs to site-specific geographical conditions. Even where measurements are taken, they are often not up to the standard required to add value.
Running on-site measurement campaigns is not easy – the high-frequency data needs to be gap-free, precise and highly accurate. If the industry works with professionals to ensure time is not wasted on poor quality data acquisition, then developers will be able to boost investor confidence in their projects – vital in an environment of narrow financial margins. The best available uncertainty of solar data, from the satellite-based models, is currently around 5%. When cross-checked with 24 months of high-quality on-site solar measurements, the uncertainty of long-term estimates can be reduced to around 3%.
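One simplified way to see why independent on-site measurements tighten a satellite-based estimate is inverse-variance combination of the two uncertainties. This is only a sketch – real site-adaptation methods are more involved – and the 4% measurement-campaign uncertainty below is an assumed figure, not one from the article:

```python
def combined_uncertainty(u_satellite_pct, u_ground_pct):
    """Inverse-variance combination of two independent uncertainty
    estimates: a simplified model of how on-site measurements tighten
    a satellite-based long-term estimate."""
    inv_var = 1 / u_satellite_pct**2 + 1 / u_ground_pct**2
    return (1 / inv_var) ** 0.5

# 5% satellite-model uncertainty combined with an assumed 4% uncertainty
# from a 24-month on-site measurement campaign:
combined = combined_uncertainty(5.0, 4.0)
# combined -> roughly 3.1%, in line with the ~3% achievable after
# site adaptation of long-term estimates
```

The combined figure is always lower than either input, which is exactly why even a modest but well-run measurement campaign adds value to a satellite-based assessment.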
As solar asset owners seek valuation for their projects, they may receive several energy production estimates, based on different solar radiation datasets. The obvious course of action to deliver value for shareholders is to pick the highest estimate of a project’s production potential. Or is it?
This practice, called ‘data shopping’, sweeps fundamental questions about the reliability and accuracy of solar resource assessment under the carpet. The industry needs data it can trust – but how can the most accurate data providers be identified?
Solar asset stakeholders throughout the value chain are increasingly scrutinising this reliance on suboptimal data. It is vital that the industry is empowered to differentiate between the best solar data and less accurate datasets that may result in long-term underperformance.