This study presents a series of automated tests for tower and aircraft time series to identify instrumentation problems, flux sampling issues, and physically plausible but unusual situations. The tests serve as a quality control safety net. Special flags are developed to represent various potential problems, such as inconsistencies between different tower levels and flux errors due to aircraft height fluctuations. Critical values for parameters are empirically determined from real turbulent time series data. When these values are exceeded, the record is flagged for further inspection. The inspection step is necessary to verify instrumentation problems or identify plausible physical behavior. The tests are applied to tower data from the Risø Air Sea Experiment and Microfronts95 and aircraft data from the Boreal Ecosystem–Atmosphere Study.
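The flagging step described above can be illustrated with a minimal sketch. The specific tests and critical values here (a skewness bound of ±2 and a kurtosis window of 1–8) are illustrative assumptions, not the study's exact empirically determined thresholds:

```python
import numpy as np

def flag_record(x, skew_crit=2.0, kurt_range=(1.0, 8.0)):
    """Flag a turbulence record whose higher moments exceed critical values.

    Thresholds are placeholders; in practice they are tuned empirically
    from real turbulent time series.
    """
    x = np.asarray(x, dtype=float)
    xp = x - x.mean()
    sigma = xp.std()
    if sigma == 0:
        # A constant record almost certainly indicates an instrument problem.
        return True
    skew = np.mean(xp**3) / sigma**3
    kurt = np.mean(xp**4) / sigma**4
    return abs(skew) > skew_crit or not (kurt_range[0] <= kurt <= kurt_range[1])

rng = np.random.default_rng(0)
clean = rng.normal(size=4096)   # well-behaved Gaussian-like record
spiked = clean.copy()
spiked[::200] += 25.0           # isolated spikes inflate skewness and kurtosis
print(flag_record(clean), flag_record(spiked))  # → False True
```

A flagged record is not discarded automatically; as the text notes, it is set aside for visual inspection, since extreme moments can also reflect real intermittent turbulence.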
The study focuses on fast-response turbulence time series and does not assume any statistical distribution for the data. It develops techniques to quality-control instrumentation behavior for tower and aircraft time series and formulates simple estimates of flux sampling errors. The quality control and flux sampling procedures assign flags to records. Hard flags identify abnormalities caused by instrumental or data recording problems, while soft flags identify unusual behavior; soft-flagged records may be removed for certain calculations or reserved for special studies.
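The hard/soft flag bookkeeping can be sketched as follows. The test names and outcomes below are hypothetical placeholders; each entry would stand for one of the study's instrumentation or sampling checks:

```python
def classify_record(tests):
    """Classify a record from its test results.

    tests: list of (name, severity, failed) tuples, where severity is
    'hard' (instrument/recording problem) or 'soft' (unusual but
    possibly plausible physical behavior).
    """
    flags = {"hard": [], "soft": []}
    for name, severity, failed in tests:
        if failed:
            flags[severity].append(name)
    # Hard-flagged records go to visual inspection; soft-flagged records
    # may be excluded from some calculations or kept for special studies.
    if flags["hard"]:
        return "inspect", flags
    if flags["soft"]:
        return "use_with_care", flags
    return "accept", flags

status, flags = classify_record([
    ("spike_count", "hard", False),
    ("skewness", "soft", True),
])
print(status, flags)  # → use_with_care {'hard': [], 'soft': ['skewness']}
```

The key design point mirrors the text: hard flags gate the record for human verification, while soft flags only qualify how the record is used downstream.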
The final step in quality control is visual inspection of hard-flagged records to verify instrumental or data recording problems or to identify plausible physical behavior. The study also considers three types of flux sampling errors: systematic error, random error, and mesoscale variability. The systematic error arises from failure to capture the largest transporting scales, the random error from inadequate sampling of the main transporting eddies, and mesoscale variability causes the computed flux to depend on the averaging scale.
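A simple estimate of the random flux sampling error can be sketched by partitioning a record into subrecords and using the between-subrecord variability of the flux. This is a common approach and is only an assumption here; the study's exact estimator may differ in detail:

```python
import numpy as np

def random_flux_error(w, c, n_sub=8):
    """Eddy-covariance flux w'c' and a random sampling error estimate.

    The record is split into n_sub subrecords; the error is taken as the
    standard error of the mean of the subrecord fluxes.
    """
    w, c = np.asarray(w, float), np.asarray(c, float)
    flux = np.mean((w - w.mean()) * (c - c.mean()))
    sub_fluxes = [
        np.mean((ws - ws.mean()) * (cs - cs.mean()))
        for ws, cs in zip(np.array_split(w, n_sub), np.array_split(c, n_sub))
    ]
    err = np.std(sub_fluxes, ddof=1) / np.sqrt(n_sub)
    return flux, err

# Synthetic example: a scalar correlated with vertical velocity,
# constructed so the true covariance is about 0.3.
rng = np.random.default_rng(1)
w = rng.normal(size=8192)
c = 0.3 * w + rng.normal(size=8192)
flux, err = random_flux_error(w, c)
print(f"flux={flux:.3f}, random error={err:.3f}")
```

A short record with few main eddies yields large between-subrecord scatter and hence a large random error, matching the text's point that the random error reflects inadequate sampling of the main eddies.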
The study analyzes data from the RASEX, Microfronts95, and BOREAS experiments and discusses the systematic, random, and mesoscale-variability contributions to flux sampling error. It also examines flux events, observed flux sampling errors, flux induced by aircraft altitude fluctuations, flux loss due to finite temporal (spatial) resolution, and the quality control parameters themselves. The results show that the frequency of soft-flagged flux sampling errors varies with the variable and the flux type, and that the data resolution must be adequate to capture the smallest-scale turbulent flux. The study concludes that visual inspection remains essential for distinguishing instrumental problems from plausible physical behavior, and that the quality control procedures are essential for ensuring the reliability of flux measurements.