PV System Energy Test

Sarah Kurtz,a Pramod Krishnani,b Janine Freeman,a Robert Flottemesch,c Evan Riley,d Tim Dierauf,e Jeff Newmiller,f Lauren Ngan,g Dirk Jordan,a and Adrianne Kimberh

a National Renewable Energy Laboratory, Golden, CO, USA; b Belectric, Newark, CA, USA; c Constellation, Baltimore, MD, USA; d Black & Veatch Energy, San Francisco, CA, USA; e SunPower Corporation, San Jose, CA, USA; f DNV GL, San Ramon, CA, USA; g First Solar, San Francisco, CA, USA; h Incident Power Consulting, Oakland, CA, USA

Abstract — The performance of a photovoltaic (PV) system depends on the weather, seasonal effects, and other intermittent issues. Demonstrating that a PV system is performing as predicted requires verifying that the system functions correctly under the full range of conditions relevant to the deployment site. This paper discusses a proposed energy test that applies to any model, explores the effects of differences between historical and measured weather data, and shows how the weather and system performance are intertwined in subtle ways. Implementation of the Energy Test in a case study concludes that test uncertainty could be reduced by separating the energy-production model from the model used to transpose historical horizontal irradiance data to the relevant plane.

Index Terms — photovoltaic systems, model verification, photovoltaic performance, energy yield, performance guarantee.

I. INTRODUCTION

Accurate prediction and verification of photovoltaic (PV) plant output is essential for improving predictive models and for proving a performance guarantee [1]. Reducing the uncertainty of the verification is beneficial to all parties. Variable weather conditions complicate verification of PV plant performance and require revision of the projected energy yield based on differences between the historical weather data and the weather observed during the test. A test that extends through an entire year is beneficial because it can identify seasonal issues such as wintertime row-to-row shading or summertime shading from local trees. If a model has high accuracy at all times of year, then the test time can be reduced accordingly. But the purpose of an energy test (which measures the integrated output over time), rather than a capacity test (which measures the power generated under specific conditions), is to confirm performance over all conditions.

An energy test is meant to determine whether the measured system performance agrees with what is expected from the installed equipment, as computed with the model. However, because local weather strongly affects the generated electricity, it is useful to differentiate the system being tested from external factors, namely the weather. A well-designed test defines a test boundary that isolates the system being characterized (the system boundary). The system output is defined by identifying the production meter. Identifying the locations and types of sensors for characterizing the weather defines the test boundary. For example, measuring module (instead of ambient) temperature implies that overheating of the modules due to poor air circulation will be interpreted as hot weather rather than as a poorly performing system. The final test result may therefore depend on the choice of the test boundary. Two examples of test boundaries are shown in Fig. 1. The choices shown in Fig. 1a are consistent with the data available in historical weather files. The choices shown in Fig. 1b raise two concerns: 1) these data are not typically available from historical weather data, and 2) poor system installation may not be detected by the test.

Fig. 1a. Test boundary when global horizontal irradiance, ambient temperature, and wind speed are measured.

Fig. 1b. Test boundary when plane-of-array irradiance and module temperature are measured.

It is the purpose of this paper to discuss the subtleties of an energy test and to provide insight into its use. First, we define terminology for discussing the models and the proposed Energy Test, explain why differences between historical and measured weather data can confuse the test, and show how each of these affects the test boundary discussed above. The Test is then applied in a case study. The lessons learned from the case study are described, related to 1) inconsistencies in the historical and measured weather data, 2) systematic errors introduced by excluding questionable data, and 3) valuable data-screening techniques. Finally, some key challenges in accurately applying the test are summarized and strategies for reducing test uncertainty are presented.

II. DEFINING THE ENERGY TEST TERMINOLOGY

An accurate prediction of system energy output [1] typically requires an energy-production model [2-7] of the effects of irradiance, temperature, shading, system design, and the local environment. The terms used for describing the energy in this study are defined in Fig. 2 [8]. These definitions provide an unambiguous way to differentiate the predicted energy (based on historical weather data) from the expected energy (based on the actual weather data during the test), as shown in Fig. 2.

Fig. 2. Definition of predicted, expected, and measured energies.

To be consistent, the predicted and expected outputs must be calculated using the same model. If the expected energy differs from the predicted energy, this difference is considered to be outside of the test boundary and is not relevant to the test result (which compares the expected and measured energies). The expected energy differs from the predicted energy because the plant rarely operates under measured weather that is identical to the historical data.

III. RAMIFICATIONS OF HISTORICAL WEATHER DATA FORMAT

The power generation of a PV system is most directly linked to the plane-of-array (POA) irradiance, but historical weather data are most conveniently supplied for the horizontal plane and transposed to the POA using a transposition and decomposition model (Fig. 3). A choice is made to use global horizontal irradiance (GHI) (consistent with historical weather data) or POA irradiance (more closely predicting the PV system output) in the energy-production model. Multiple transposition models have been developed to correct for the variation in diffuse light that strikes different surfaces depending on the weather and other conditions [9]. If GHI is measured, an important part of the PV performance model is the choice of the transposition and decomposition model. If POA irradiance is measured, then no transposition or decomposition model is needed, although the sensor azimuthal and tilt orientations must be verified. Studies have often found that inaccuracies in the transposition model are the largest source of bias error in energy-production models based on GHI (referred to here as "GHI-based models"). To avoid this bias error, an energy-production model based on POA irradiance (referred to here as a "POA-based model") may be used, but then historical weather files are usually not available. Transposition to the POA for a tracking system is complicated by the tolerances in the tracking system and is not addressed here.

Fig. 3. Drawing to show how an energy-production model based on global horizontal irradiance (GHI) requires two steps: 1) a transposition model to convert GHI to plane-of-array (POA) irradiance, and 2) the model predicting PV output from POA irradiance. If only GHI data are available, a decomposition model is also used, adding uncertainty.

To be consistent, the predicted and expected outputs must be calculated using the same GHI-based model. If POA irradiance is measured, then a POA-based model may be used, but this breaks the direct link between the predicted and expected outputs. If both GHI and POA irradiance are measured, then the transposition model can be verified independently from the PV performance (POA-based) model. If the test boundary requires measurement of weather data not found in the historical weather files (such as module temperature and POA irradiance), then the test boundary may penetrate the intended system boundary, as shown in Fig. 1b. In this case, some attributes of system performance may fall outside of the test, as described in Table I.
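To make the two-step GHI-based chain concrete, the sketch below uses the open-source pvlib-python library (not part of the draft standard or the case study) with a hypothetical fixed-tilt site and clear-sky GHI as a stand-in for weather data: an Erbs decomposition estimates DNI and DHI from GHI, a Hay-Davies transposition produces POA irradiance, and a simple PVWatts-style step stands in for the POA-based performance model.

```python
import pandas as pd
import pvlib

# Hypothetical site and array geometry; clear-sky GHI stands in for weather data.
site = pvlib.location.Location(latitude=37.0, longitude=-120.0, tz="Etc/GMT+8")
times = pd.date_range("2021-06-01", periods=96, freq="15min", tz=site.tz)
solpos = site.get_solarposition(times)
ghi = site.get_clearsky(times)["ghi"]

# Step 1a: decomposition, needed only when GHI alone is available.
erbs = pvlib.irradiance.erbs(ghi, solpos["zenith"], times)
# Step 1b: transposition of GHI/DNI/DHI to the plane of array.
poa = pvlib.irradiance.get_total_irradiance(
    surface_tilt=25, surface_azimuth=180,
    solar_zenith=solpos["apparent_zenith"], solar_azimuth=solpos["azimuth"],
    dni=erbs["dni"], ghi=ghi, dhi=erbs["dhi"],
    dni_extra=pvlib.irradiance.get_extra_radiation(times),
    model="haydavies")
# Step 2: the POA-based performance model; a measured-POA test would start here
# and skip steps 1a and 1b entirely.
dc_kw = pvlib.pvsystem.pvwatts_dc(poa["poa_global"], temp_cell=45.0,
                                  pdc0=1000.0, gamma_pdc=-0.004)
```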

IV. INTRODUCTION TO TEST METHOD

The Test Method has been described in detail elsewhere and is being further developed by the International Electrotechnical Commission (IEC) as a draft standard [10]. In short, the model and weather data used to generate the predicted energy are carefully documented. The measured data are screened for anomalies according to recommended guidelines, including exclusion of data periods for which more than 10% of the production or irradiance data are missing or erroneous. Excluded or corrected data must be discussed and agreed upon.
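As an illustration of the screening rule, the following sketch (pandas, with an assumed variable layout; not taken from the draft standard) flags hours for exclusion when more than 10% of their 15-min points are missing or erroneous. With 15-min data there are only four points per hour, so a single bad point already exceeds the 10% threshold, a consequence discussed in the case study below.

```python
import pandas as pd

def hours_to_exclude(valid_15min: pd.Series, points_per_hour: int = 4) -> pd.Series:
    """valid_15min: boolean Series on a 15-min index, True where the point passed
    all data checks. Returns a boolean Series (hourly index) that is True for hours
    in which more than 10% of the expected points are missing or erroneous."""
    good_points = valid_15min.resample("1h").sum()
    return good_points < 0.9 * points_per_hour
```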

TABLE I
EXAMPLES OF CHOICES THAT CAN MIX SYSTEM PERFORMANCE WITH WEATHER BY PLACING SOME ASPECTS OF SYSTEM PERFORMANCE OUTSIDE OF THE TEST BOUNDARY (SEE FIG. 3)

Measurement penetrating system boundary | System installation detail | System performance | Associated uncertainty for one-year evaluation
Module temperature | Circulation of air around modules | Module temperature affects system efficiency | Module operating temperature increase of 5°C leads to 0.5%–1.5% loss
POA irradiance | Inaccurate positioning of sensor | Available irradiance may be erroneously measured | An error of 10° azimuth in the positioning of the POA sensor may cause ~1% error
POA irradiance | Ground albedo | Ground reflection affects irradiance reaching array | A change in albedo of 0.2 causes <1% change for tilt of 30°
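Sensitivities of the kind listed in Table I can be explored with a transposition model. The rough clear-sky sketch below (pvlib-python, hypothetical site, not the systems or validated models used for the table) varies the azimuth and the albedo while holding everything else fixed, to give a feel for how such annual POA changes can be estimated.

```python
import pandas as pd
import pvlib

# Hypothetical site; one year of clear-sky irradiance as a stand-in for weather data.
site = pvlib.location.Location(latitude=37.0, longitude=-120.0, tz="Etc/GMT+8")
times = pd.date_range("2021-01-01", "2022-01-01", freq="1h", tz=site.tz, inclusive="left")
solpos = site.get_solarposition(times)
cs = site.get_clearsky(times)
dni_extra = pvlib.irradiance.get_extra_radiation(times)

def annual_poa(surface_azimuth, albedo, surface_tilt=30):
    # Transpose clear-sky GHI/DNI/DHI to the (possibly mis-specified) plane of array.
    poa = pvlib.irradiance.get_total_irradiance(
        surface_tilt=surface_tilt, surface_azimuth=surface_azimuth,
        solar_zenith=solpos["apparent_zenith"], solar_azimuth=solpos["azimuth"],
        dni=cs["dni"], ghi=cs["ghi"], dhi=cs["dhi"],
        dni_extra=dni_extra, albedo=albedo, model="haydavies")
    return poa["poa_global"].sum()

base = annual_poa(surface_azimuth=180, albedo=0.2)
print("10 deg azimuth error:", annual_poa(190, 0.2) / base - 1)
print("albedo 0.2 -> 0.4:   ", annual_poa(180, 0.4) / base - 1)
```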

The modeling is repeated using the measured meteorological data to create the expected output. Then the measured data are compared with the expected data. High-quality data are of utmost importance for any performance test. Key elements include accurate sensor calibrations, consistency of data logging, prompt response to equipment (sensor or data logger) malfunction, and frequent cleaning of irradiance sensors. Missing or inaccurate data will compromise the quality of any performance test.
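Once the screening is agreed upon, the final comparison can be expressed very simply; the sketch below (illustrative names, not taken from the Test Method) compares measured and expected energy over the identical set of retained hours. A performance guarantee would then typically compare this ratio, or the equivalent percentage difference, against an agreed target.

```python
import pandas as pd

def energy_test_ratio(measured_kwh: pd.Series, expected_kwh: pd.Series) -> float:
    """Ratio of measured to expected energy over the common, retained test hours.
    Both series are assumed to be hourly energies with excluded periods dropped."""
    common = measured_kwh.index.intersection(expected_kwh.index)
    return measured_kwh.loc[common].sum() / expected_kwh.loc[common].sum()
```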

V. CASE STUDY

The methods described above were implemented in a case study to verify that multiple analysts can apply the method to the same data set with consistent results and to identify opportunities for improving the method. A year of 15-min data from a large fixed-tilt PV system in California was chosen. Data from each of two meteorological stations included GHI from both a thermopile and a reference cell, ambient and module temperature, and average and maximum wind speed. The electrical data included data directly from the meter at the point of grid connection as well as utility-grade meter data from each individual inverter. Anomalies were added to the data to increase the probability of seeing differences in the evaluations by different individuals.

The party supplying the data also supplied a PVsyst [2] model for the electricity production. The model documentation included the hardware and meteorological input files, a listing of the settings in PVsyst, the report generated by the PVsyst simulation, and a complete listing of the results for the 8,760 hours in the year. Based on this documentation, the modeled results could be duplicated within 0.1%. The use of 15-min data resulted in exclusion of an entire hour whenever one data point of the 15-min data set was missing or flagged for exclusion, because of the requirement that at least 90% of each hour's data be valid.

Eight individuals evaluated the data; four completed the full analysis from start to finish, including the PVsyst modeling. The results of the case study were evaluated to compare the experience of each analyst according to:
• methodologies used
• data anomalies identified and how they were addressed
• filtered data (aggregated measured generated electricity and irradiance after filtering and addressing anomalies)
• end result in terms of the PVsyst output.

The first comparison of results showed differences of between 0.1% and 8%, with the inclusion or exclusion of system outages being the primary variable. No information had been provided about the cause of system outages, so some analysts included data for outages. Others excluded the data, providing an analysis consistent with model verification. Part of the Test Method is a discussion of which points should be included or excluded. When the handling of data was agreed upon, the results converged to within 0.1%.

VI. LESSONS LEARNED FROM CASE STUDY

A. Alignment of meteorological files

As part of the case study evaluation, it was noted that the original predicted energy was calculated using a TMY3 file, which contains more information than the meteorological files created from the measured data. If PVsyst has only GHI data, it estimates the diffuse horizontal irradiance (DHI) from the clearness index. To quantify the effect of using the measured or estimated DHI, the TMY3 data were extracted in different ways and the PVsyst model was rerun. The annual predicted energies are summarized in Table II for calculations using direct-normal irradiance (DNI) or DHI data along with the GHI data. The largest difference (0.88%) was found between using the entire TMY3 data set and using only the GHI data from the TMY3 file. This uncertainty appears in the predicted energy rather than in the expected energy, so this difference would not affect the outcome of a performance guarantee, which compares the measured energy to the expected energy. Nevertheless, it raises the question of whether we should require that the same transposition model be used for both the predicted and expected energies in order to be consistent, when using the added data would provide more accurate information on which to base a financial decision.

TABLE II
COMPARISON OF PREDICTED ENERGY FROM TMY3 DATA

Data Used | Annual Predicted Energy (MWh)
All TMY3 data | 1,933
GHI and DNI | 1,940
GHI and DHI | 1,945
GHI only | 1,950
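For reference, the 0.88% figure quoted above follows directly from the first and last rows of Table II: (1,950 − 1,933) / 1,933 ≈ 0.0088, i.e., about 0.88%.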

Even though the philosophy of consistency between the modeling of the original predicted energy (based on historical weather data) and the expected energy (based on measured data) requires that these be treated identically, the predicted energy may be more accurate if additional information is used.

B. Treatment of erroneous data

The draft Test Method required that the entire hour of data be treated as missing if >10% of the data were missing or erroneous. The case study data set showed spurious data when the system was starting up. A decision was made to exclude the spurious data rather than setting those values to zero. The combination of requiring 90% of the data for each hour and excluding a point whenever the system starts up leads to systematic exclusion of most short system outages. To demonstrate the potential impact, consider an extreme scenario in which a system trips off every day, but the maintenance crew resets it within the hour. The actual loss in production could approach 10% (depending on the hour when it trips off and the fraction of the hour when it is off); but if these time periods were excluded because of the start-up transient, the analysis would systematically neglect this loss in production. Because the transient data at start-up or shutdown systematically occur near outages within the data set, the result is especially sensitive to the treatment of these data. The draft Test Method allows replacement of missing data with modeled data rather than setting all values to zero, but this suggestion was not directly discussed as a strategy to "fix" hours with >10% missing data. The implication was that entire hours of data might be replaced with modeled data. However, as discussed above, for hours with partial data there is a clear benefit to replacing the rejected points with modeled data and retaining the remaining data points for the rest of the hour, rather than throwing out the entire hour, which tends to throw out the outages preferentially and introduces bias into the test result.

C. Systematic data verification methodologies

The analysts in the study used a wide range of techniques for identifying potentially erroneous data. Most analysts used flags to keep track of the suspicious data rather than simply changing the data directly. The flagged data were then easily summarized for discussion. Those analysts relying on visual inspection alone sometimes missed erroneous data. Table III summarizes a systematic filtering methodology that was quite successful. Additionally, this system included data streams for each inverter, allowing for additional checks. There were two data loggers; when a data logger was reset, half of the inverters reported zero output. Most of the analysts missed these events. After questionable data were identified, discussions of how to address each situation were essential. Such discussions may be greatly simplified if a contract specifies the treatment of specific situations.

VII. CHALLENGES IN APPLYING THE TEST

A. Inconsistent historical and measured weather data

As described above, a choice is made between 1) using a consistent GHI-based model for both the predicted and expected energy calculations, and 2) using the available data to provide the lowest-uncertainty calculations of the predicted and expected energies. Consistency is preferred, but reduced uncertainty is also desired, causing considerable debate about the preferred approach. Consistency requires that the historical and measured weather data have the same format. But allowing differences between the data sets provides an opportunity to reduce uncertainty in either the predicted or expected energy calculations (Table IV). Some entries in Table IV indicate an opportunity to reduce uncertainty if the historical and measured data are not required to be consistent; others indicate how the data constrain the accuracy. Data recorded at high frequency show short-term measurements of higher irradiance than is seen with hourly averaging [11]. An hour with constant irradiance of 500 W/m2 differs from an hour with no irradiance for half the hour and 1000 W/m2 for the other half, but the two are identical in an hourly data set [12,13]. Full instrumentation to collect GHI, DHI, DNI, and POA irradiance data adds cost to a project, so DHI and DNI are often omitted.
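The half-hour example above can be made concrete with a toy calculation (not from the paper): if the system response is nonlinear, for example because of inverter clipping, the variable hour and the constant 500-W/m2 hour deliver different energy even though their hourly-averaged irradiance is identical.

```python
import numpy as np

poa_15min = np.array([0.0, 0.0, 1000.0, 1000.0])  # W/m2: dark half hour, then 1000 W/m2
poa_hourly = np.full(4, poa_15min.mean())          # the same hour seen as a constant 500 W/m2

def ac_power(poa, dc_per_wm2=1.0, ac_limit=800.0):
    """Linear DC response clipped at a hypothetical inverter AC limit (arbitrary units)."""
    return np.minimum(poa * dc_per_wm2, ac_limit)

print(ac_power(poa_15min).mean())   # 400.0: clipping trims the bright half hour
print(ac_power(poa_hourly).mean())  # 500.0: hourly averaging hides the clipping loss
```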

TABLE III
SYSTEMATIC APPROACH FOR SETTING FLAGS DURING VERIFICATION OF 15-MIN DATA

Flag Type | Description | Irradiance (W/m2) | Temperature (°C) | Wind Speed (m/s) | Power (kW)
Time series | Missing, duplicated timestamps | – | – | – | –
Range | Unreasonable value | < -6 or > 1400 | < -30 or > 50 | < 0 or > 32 | System dependent
Missing | Individual values are missing | – | – | – | –
Dead | Values stuck at a single value over time. Use derivative. | < 0.0001 while value is > 5 | < 0.0001 | < 0.001 while value is > 1 | < 0.0001 while value is > 5% nameplate
Jump | Unreasonable change between data points. Use derivative. | > 800 in 15 min | > 4 in 15 min | > 10 in 15 min | > 80% nameplate
Pairwise | Significant difference between like sensors | > 200 | > 3 | > 0.5 | Power < 0 while irradiance > 100 W/m2
Time-series pairwise | Values between like sensors change over time | Drift of average > 1% | Drift of average > 2 | – | –
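One possible implementation of a subset of the Table III checks is sketched below in pandas; the thresholds follow the table, while the DataFrame layout and column names (ghi, tamb, wind, power) are assumptions for illustration.

```python
import pandas as pd

def flag_15min(df: pd.DataFrame, nameplate_kw: float) -> pd.DataFrame:
    """Boolean flags for a few of the Table III checks on 15-min data.
    df is assumed to have columns 'ghi' (W/m2), 'tamb' (deg C), 'wind' (m/s),
    and 'power' (kW) on a regular 15-min DatetimeIndex."""
    flags = pd.DataFrame(index=df.index)
    # Range: unreasonable values.
    flags["ghi_range"] = (df["ghi"] < -6) | (df["ghi"] > 1400)
    flags["tamb_range"] = (df["tamb"] < -30) | (df["tamb"] > 50)
    flags["wind_range"] = (df["wind"] < 0) | (df["wind"] > 32)
    # Missing: individual values absent.
    flags["missing"] = df[["ghi", "tamb", "wind", "power"]].isna().any(axis=1)
    # Dead: values stuck over time (near-zero derivative while the signal is significant).
    flags["ghi_dead"] = (df["ghi"].diff().abs() < 0.0001) & (df["ghi"] > 5)
    flags["power_dead"] = (df["power"].diff().abs() < 0.0001) & (df["power"] > 0.05 * nameplate_kw)
    # Jump: unreasonable change between consecutive points.
    flags["ghi_jump"] = df["ghi"].diff().abs() > 800
    flags["power_jump"] = df["power"].diff().abs() > 0.8 * nameplate_kw
    # Cross-check: negative power while irradiance is significant.
    flags["power_vs_ghi"] = (df["power"] < 0) & (df["ghi"] > 100)
    return flags
```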

As described above for a model implemented in PVsyst, using the complete TMY3 file instead of only the GHI data changed the predicted energy by 0.9%. This 0.9% could be considered small compared with the variability of the weather and the uncertainty of the absolute values in the historical weather data, but it could also be considered large by stakeholders who are estimating their marginal return on investment.

TABLE IV
ASPECTS OF WEATHER DATA THAT CAN BE CONSISTENT OR DIFFERENT TO REDUCE UNCERTAINTY

Data Aspect | Effect on Predicted Energy | Effect on Expected Energy
Frequency of data aggregation | Hourly historical data limit modeling of variable conditions | Frequent measured data can drive a more accurate model
Use of DNI and DHI | Historical data with DNI and DHI allow more accurate transposition to POA | Allows more accurate transposition, but adds cost; most data sets are limited to GHI
GHI vs POA | Transposition of GHI to POA is required, adding uncertainty | Measurement of POA avoids the uncertainty of the transposition
Sensor type | Historical data have been validated relative to thermopile data | Matched sensors improve repeatability by avoiding angle-of-incidence and spectral corrections

In the unusual case in which historical weather data measured in each plane of array with matched reference cells are available, the choice of measuring POA irradiance with a matched sensor would be obvious. A strategy for providing both consistency and low uncertainty is to separate the GHI-based model into a transposition model and a POA-based model, as shown in Fig. 3. Error in the transposition model is then seen only in the predicted energy, whereas the expected energy has substantially lower uncertainty. Because the outcome of a performance guarantee relies on a comparison of the expected and measured energies, this approach can substantially reduce the uncertainty of the test even if it does not reduce the overall risk to the investor.
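A minimal sketch of this separation (illustrative function names, not from the draft standard): the same POA-based performance model is applied on both sides, and the transposition step, with its uncertainty, appears only in the predicted energy.

```python
def predicted_energy(historical_ghi_weather, transpose, poa_model):
    # GHI -> POA: transposition uncertainty enters only here.
    poa_weather = transpose(historical_ghi_weather)
    return poa_model(poa_weather).sum()

def expected_energy(measured_poa_weather, poa_model):
    # Measured POA feeds the same POA-based model directly; no transposition step.
    return poa_model(measured_poa_weather).sum()
```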

The measured POA irradiance may also differ from the irradiance reaching the array because of error in sensor placement or another effect (e.g., dark gravel in front of the POA sensor). To assess the systematic error that could be introduced by a POA sensor that is incorrectly positioned or located in a spot with an inappropriate albedo, we modeled several fixed-tilt systems, as shown in Figs. 4 and 5. Both the Perez and Hay transposition models were used [14]. The models had been carefully developed for specific systems and validated as described in reference [15]. The albedo and tilt (Fig. 4) and azimuthal orientation (Fig. 5) were then varied while all other model parameters were kept fixed. Figure 4 shows that the effect of albedo depends strongly on tilt, as expected, with plausible variations as high as 2%. A direct measurement of the albedo within the plant could limit this uncertainty to