Solargis Evaluate combines the most extensively validated commercial irradiance dataset — independently ranked #1 by the IEA PVPS — with the design and simulation tools that turn it into a bankable yield assessment. There's no gap between the data you sourced and the report you submit.
| | What dedicated data providers offer | What Solargis Evaluate delivers |
| --- | --- | --- |
| Data methodology | Some providers rely on empirical models built on statistical correlations — these can perform well in familiar regions but degrade in complex terrain or high-aerosol environments. Some mix datasets from multiple sources and convert monthly averages into synthetic hourly values, requiring manual tweaking to produce acceptable results. | Physics-based solar model — simulates how sunlight travels through the atmosphere accounting for clouds, aerosols, water vapor, and terrain. Consistent accuracy across all climates and geographies; results are fully traceable and reproducible. |
| Historical record | Some providers cover 15–20 years of data. They explicitly exclude older satellite generations, citing lower resolution and geolocation quality, limiting their ability to capture long-term climate variability and interannual patterns. | 30+ years of validated solar and meteorological data: the longest commercially available record, built into the same platform as your simulation. |
| Time resolution | Sub-hourly satellite data is available from some providers, but often from a shorter historical record. | 15-minute time series data as standard: 30+ years of history. 1-minute data available as an add-on. Data is directly fed into simulation without format conversion. |
| Validation transparency | Providers typically publish headline accuracy statistics. Site locations, raw ground measurements, and quality control methodology are rarely disclosed for independent verification. | Full transparency: validation site coordinates, raw ground-measured data, quality control reports, and statistics are all publicly available. Every claim is independently verifiable. |
| Validation breadth | Validation studies focus on irradiance accuracy and are usually limited to GHI across a defined set of stations. | Validated at 320 GHI sites and 235 DNI sites globally. In addition, separate validations of soiling, snow, and meteorological parameters across diverse climates are publicly available. |
| Independent benchmark | Accuracy comparisons are typically self-published and based on the provider's own selection of stations and methodology. | Ranked #1 in the IEA PVPS 2023 Worldwide Benchmark of Modeled Solar Irradiance Data for lowest average deviation metrics (IEA PVPS, 2023). The most comprehensive independent comparison across 10 global models, conducted by a neutral third party. |
| Ground albedo | Most providers offer monthly albedo at ~1 km resolution, if at all. Some apply a single fixed reflectance value regardless of location or season. | Monthly ground albedo from MODIS satellite data at 1 km resolution, varying by location, season, and surface cover. Daily time series at 0.5 km resolution are available for due diligence analysis. |
| Uncertainty calculation | P90 scenarios often rely on assumed uncertainty values rather than systematic propagation. Where uncertainty is quantified, it's typically a geographic or regional average. | Uncertainty (P90 and other Pxx) is calculated by combining three components: solar irradiance model, PV simulation, and interannual variability, using the root-sum-square method (see the sketch below this table). Each is quantified separately from at least 10 years of historical data and derived site-specifically, not as a regional average. |
| Integration with simulation | Data is delivered as a file for import into a separate simulation tool. Any mismatch between the data version used for simulation must be managed manually. | Data flows directly into simulation within the same platform. No additional imports and exports, no versioning risk. The dataset that powers your layout is the dataset that powers your energy yield report. |
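For readers who want to see the combination step, here is a minimal sketch of the root-sum-square method described in the table above. All component values are placeholders, not Solargis figures, and the P90 conversion assumes normally distributed uncertainty:

```python
import math

# Placeholder component uncertainties, expressed as fractions of the P50
# yield. Illustrative values only -- not Solargis figures.
sigma_irradiance = 0.040   # solar irradiance model
sigma_simulation = 0.030   # PV simulation
sigma_interannual = 0.025  # interannual variability

# Root-sum-square combination of the three independent components.
sigma_total = math.sqrt(sigma_irradiance**2
                        + sigma_simulation**2
                        + sigma_interannual**2)

# P90 from P50, assuming a normal distribution (z90 ~ 1.282).
p50_yield = 2150.0  # kWh/kWp per year, placeholder
p90_yield = p50_yield * (1 - 1.282 * sigma_total)

print(f"combined uncertainty: {sigma_total:.1%}")   # ~5.6%
print(f"P90 estimate: {p90_yield:.0f} kWh/kWp")     # ~1996
```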
The most widely used simulation tools were designed when projects were simpler and computing power was scarce. Hourly datasets, view factor shading, string-level mismatch, and expert-guessed loss inputs were reasonable approximations then.
For modern utility-scale PV, with bifacial modules, single-axis trackers, complex terrain, and 25-year forecasts, those approximations introduce systematic error that compounds over the project lifetime.
Solargis Evaluate's simulation engine handles today's utility-scale PV: bifacial modules, single-axis trackers, complex terrain, and validated loss models.
| | What dedicated PV simulation tools offer | What Solargis Evaluate delivers |
| --- | --- | --- |
| Input data resolution | Hourly TMY data is the standard input: 8,760 data points representing a synthetic 'typical' year. It misses actual year-to-year variability, systematically underestimates risk exposure, and introduces error margins that can easily reach 10%. | 15-minute time series spanning 30+ years as default: more than 1,000,000 data points per parameter, 120x more than hourly TMY (see the arithmetic after this table). Captures real historical variability, including extreme weather events and short-term irradiance fluctuations. |
| Simulation speed | Detailed 3D simulation of large, terrain-complex projects on a desktop CPU takes hours. Running multiple design variants in a single session isn't practical. | Cloud GPU processing delivers a step change in simulation speed: large, terrain-complex projects that take hours on a desktop CPU are processed in the cloud in minutes. |
| Shading model | View factor models are computationally fast but use simplified geometric approximations. Accuracy degrades for complex terrain, irregular layouts, and bifacial rear-side modeling. | 3D Monte Carlo backward ray tracing: accurate near-shading simulation across any terrain complexity (a minimal sketch follows this table). Shading objects and nearby structures can be modeled directly in the Energy System Designer, including obstacles that only physically rigorous ray tracing can handle correctly. |
| Bifacial rear irradiance | View factor models average rear irradiance uniformly using a fixed albedo, ignoring mounting hardware, terrain, and row geometry. Rear-side shading from frames and adjacent rows is simplified or ignored. Soiling and spectral correction are not applied to the rear side. Ray tracing, where offered, is typically applied to the rear side only: front-side irradiance is calculated at module or string level. | 3D ray tracing calculates irradiance at cell level on both front and rear sides, accounting for mounting hardware, row structure, terrain geometry, and inter-row shadows. Every obstruction that reduces rear-side gain is modeled explicitly. |
| Ground albedo for bifacial | A single fixed albedo value (typically 0.20) is applied uniformly across the site throughout the year. Bifacial gain is highly sensitive to albedo; a static estimate introduces systematic error at sites with seasonal snow cover or varying vegetation. | Monthly ground albedo from satellite captures snow cover, vegetation cycles, and surface type. Fed directly into Monte Carlo rear-side ray tracing for bifacial systems, ensuring consistency between what you designed and what gets simulated. |
| Tracker–bifacial interaction | Backtracking algorithms minimize front-side inter-row shading. The influence of tilt angle on rear-side shadow patterns and bifacial gain is not factored into tracker optimization. | Simulation links tracker tilt angle, rear-side shading geometry, and ground albedo, providing an accurate bifacial energy model for single-axis tracker configurations across the full annual cycle (see the backtracking sketch after this table). |
| Electrical mismatch | String-level mismatch models are the norm. Where cell-level simulation is available, it is typically restricted to projects under 5 MWp and degrades in performance at larger scales, making it impractical for utility-scale projects. | Cell-level IV curve aggregation using the De Soto single diode model: detailed mismatch simulation for any project size (sketched after this table). |
| Soiling losses | Most tools rely on a user-entered percentage. Industry surveys show 62% of practitioners use an expert guess for soiling and snow losses (SERENDI-PV, 2023). | Physics-based soiling model driven by satellite-derived atmospheric inputs (PM2.5, PM10, dust, precipitation, and local climate conditions), validated at 51 sites globally. Mean bias +0.1%, standard deviation 0.9% (see the soiling sketch after this table). |
| Snow losses | Rarely modeled explicitly. When included, it's usually a fixed percentage based on the user's judgment. | Snow loss model validated at 27 sites. Included as standard in the simulation chain. |
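The 120x figure in the first row is plain arithmetic, easy to verify:

```python
# One synthetic TMY year at hourly resolution
tmy_points = 365 * 24                  # 8,760 data points

# 30 years of continuous 15-minute time series (4 samples per hour)
ts_points = 30 * 365 * 24 * 4          # 1,051,200 data points

print(ts_points // tmy_points)         # 120 -> 120x more samples per parameter
```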
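Solargis's ray tracer is proprietary, but the idea behind backward Monte Carlo shading is compact enough to sketch: sample points on the module surface, trace a ray from each point toward the sun, and count how many rays are blocked. The geometry below (a flat module at 1 m height, one neighboring structure modeled as a box, a fixed sun vector) is entirely made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder sun direction (unit vector from the ground toward the sun).
sun = np.array([0.3, -0.8, 0.52])
sun /= np.linalg.norm(sun)

# Module: 2 m x 1 m rectangle at 1 m height (kept horizontal for brevity).
# Neighboring structure: an axis-aligned box south of the module.
box_min = np.array([-1.0, -3.0, 0.0])
box_max = np.array([3.0, -2.0, 2.5])

# Backward Monte Carlo: sample random points on the module surface...
n = 100_000
pts = np.column_stack([
    rng.uniform(0.0, 2.0, n),   # x across the module
    rng.uniform(0.0, 1.0, n),   # y along the module
    np.full(n, 1.0),            # z: module height
])

# ...and trace a ray from each point toward the sun (slab test vs. the box).
t1 = (box_min - pts) / sun
t2 = (box_max - pts) / sun
t_near = np.minimum(t1, t2).max(axis=1)   # latest entry across the 3 slabs
t_far = np.maximum(t1, t2).min(axis=1)    # earliest exit
blocked = (t_far >= t_near) & (t_far >= 0)

print(f"shaded fraction of direct irradiance: {blocked.mean():.3f}")  # ~0.31
```

A production ray tracer adds tilted module planes, many scene objects, diffuse and ground-reflected components, and spectral effects; the sampling-and-intersection core stays the same.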
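Evaluate's coupling of tracker angle, rear-side shading, and albedo is its own model, but the standard backtracking geometry it extends is public. A sketch using the open-source pvlib library (assumed installed), with placeholder site coordinates and ground coverage ratio:

```python
import pandas as pd
from pvlib import solarposition, tracking

# One summer day at a hypothetical site (placeholder coordinates).
times = pd.date_range("2025-06-21 04:00", "2025-06-21 21:00",
                      freq="15min", tz="Europe/Bratislava")
solpos = solarposition.get_solarposition(times, latitude=48.1, longitude=17.1)

# Compare true tracking with backtracking at a ground coverage ratio of 0.40.
for backtrack in (False, True):
    angles = tracking.singleaxis(
        apparent_zenith=solpos["apparent_zenith"],
        apparent_azimuth=solpos["azimuth"],
        axis_azimuth=180,        # horizontal north-south axis
        backtrack=backtrack,
        gcr=0.40,
    )
    label = "backtracking" if backtrack else "true tracking"
    print(f"{label}: max |rotation| = "
          f"{angles['tracker_theta'].abs().max():.1f} deg")
```

With these placeholder inputs, true tracking saturates near ±90° around sunrise and sunset, while backtracking rolls the rows back toward horizontal at low sun to avoid row-to-row shading.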
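The De Soto single diode model itself is published (De Soto et al., 2006), so the mechanics of IV-curve aggregation can be sketched directly. Below is a minimal series-string example with made-up module parameters and no bypass diodes; Evaluate's cell-level implementation is, of course, far more detailed:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical single-diode parameters for one module (placeholder values,
# not a real datasheet): saturation current, nNsVth, Rs, Rsh.
I0, A, RS, RSH = 3e-10, 1.8, 0.3, 300.0
IL_REF = 10.5   # photocurrent at 1000 W/m^2

def v_from_i(i, il):
    """Module voltage at operating current i (single-diode equation)."""
    f = lambda v: (il - I0 * (np.exp((v + i * RS) / A) - 1)
                   - (v + i * RS) / RSH - i)
    return brentq(f, -5.0, 60.0)

def module_mpp(il):
    """Maximum power of one module operated at its own MPP."""
    currents = np.linspace(0.0, il * 0.999, 500)
    return max(i * v_from_i(i, il) for i in currents)

# Two modules in series, one partially shaded. In series, the string carries
# one common current, so voltages add at each current level.
il_bright, il_shaded = IL_REF * 1.0, IL_REF * 0.6
currents = np.linspace(0.0, il_shaded * 0.999, 500)
p_string = max(i * (v_from_i(i, il_bright) + v_from_i(i, il_shaded))
               for i in currents)
p_ideal = module_mpp(il_bright) + module_mpp(il_shaded)

print(f"mismatch loss: {1 - p_string / p_ideal:.1%}")
```

Aggregating at cell level follows the same principle: sum voltages along each series path at a common current, then combine parallel paths at a common voltage.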
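Solargis's soiling model is proprietary, but it belongs to the same family of physics-based models driven by particulate-matter and rainfall inputs as the published HSU model. A sketch using pvlib's implementation (assumed installed), with synthetic placeholder inputs:

```python
import pandas as pd
from pvlib import soiling

# One year of synthetic hourly inputs (placeholder values throughout).
idx = pd.date_range("2025-01-01", periods=8760, freq="60min", tz="UTC")

rainfall = pd.Series(0.0, index=idx)   # mm of rain per hour
rainfall.iloc[::240] = 5.0             # a cleaning rain roughly every 10 days

# Particulate-matter concentrations in g/m^3, the HSU model's unit
# (15 and 30 µg/m^3 converted).
pm2_5 = pd.Series(15e-6, index=idx)
pm10 = pd.Series(30e-6, index=idx)

# Soiling ratio time series (1.0 = perfectly clean array).
sr = soiling.hsu(
    rainfall=rainfall,
    cleaning_threshold=1.0,   # mm of rain needed to clean the array
    surface_tilt=25,          # degrees
    pm2_5=pm2_5,
    pm10=pm10,
)

print(f"mean annual soiling loss: {1 - sr.mean():.2%}")
```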
A bankable yield assessment isn't just a number — it's a methodology lenders can follow from first principles to final output. One desktop tool dominates the market, accepted as the de facto standard through industry habit, while specialist data providers underpin the resource side of the study. Both leave gaps.
Solargis Evaluate closes both gaps in one platform.
| | What standalone tools offer | What Solargis Evaluate delivers |
| --- | --- | --- |
| Basis for acceptance | Desktop simulation tools are accepted by lenders and technical advisors largely because of their ubiquity. That acceptance is built on familiarity, not on independent validation of the underlying methodology or its accuracy. | Solargis provides bankable data accepted by banks, investors, and other stakeholders. Our algorithms are based on real-world physics, grounded in peer-reviewed scientific literature, and built on transparent, traceable, and validated models. Our scientific, rigorously documented approach leaves no room for doubt at any simulation step. |
| Audit trail | When data, simulation, and reporting live in separate tools, the audit trail must be reconstructed manually from exported files and saved inputs. Lenders and technical advisors increasingly flag these gaps under scrutiny. | From raw satellite data to bankable reports, everything runs in one integrated system. The audit trail is unbroken: no file handoffs, no version mismatches, and no gaps to explain. |
| Software version traceability | Desktop simulation tools are version-dependent. Studies run on different software versions produce different results, with no central record of which version generated which output. Reproducing a result months later requires discipline the workflow doesn't enforce. | Cloud platform with a single maintained version. Every simulation is logged with the software state at the time it ran; results are reproducible and auditable across projects, teams, and time. |
| Data origin | Resource data is sourced separately: imported from a TMY file or third-party dataset and then used as simulation input. The connection between the source data and the final yield figure depends on the analyst maintaining the link manually. | Your simulation runs on data produced by the same organization that built the simulation engine. Solargis owns the full chain, from satellite data to simulation output. |
| Independent data recognition | Data providers typically back their accuracy claims with self-published comparisons or third-party validation studies of their own selection. Desktop tools rely on the user to source and validate their own input data. | Ranked #1 in the IEA PVPS 2023 benchmark for lowest average deviation metrics (IEA PVPS, 2023). 320 GHI validation sites and publicly available QC reports give reviewers a methodology they can verify. |
| Validated loss models | Soiling, snow, and degradation inputs typically rely on the user's judgment or flat-percentage assumptions, regardless of whether the simulation engine is a desktop tool or a data provider's add-on. These are exactly the assumptions technical advisors challenge most frequently. | Data-driven, validated models for key losses: soiling loss model validated at 51 sites globally and snow loss model validated at 27 sites across the US and Europe. |
| Track record | Desktop simulation tools have broad market acceptance but limited transparency about how their models have evolved. Data-focused providers have credibility in resource assessment but limited history as end-to-end yield modeling platforms. | On the market since 2010; 16 years of continuous R&D. 9,000+ utility-scale projects supported annually across 100+ countries. Accepted and trusted by banks, investors, and other market stakeholders since day one as a basis for bankable yield assessments. |
| Transparency | Simulation methodology details are often limited or require specialist knowledge to interpret. Validation statistics may be published, but underlying data (site coordinates, raw measurements, QC results) is rarely disclosed for independent verification. | Full transparency: every model is documented in the public knowledge base, and every validation claim is backed by publicly available site coordinates, raw ground-measured data, and QC reports. Any lender or technical advisor can verify independently. |
This comparison is based on publicly available information as of April 2026.