Georectification and map projection at Level 1B2 makes use of high frequency navigation information to correct for distortions due to aircraft attitude fluctuations. Here, the data from Fig. 4 have been mapped to a Universal Transverse Mercator projection.  
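As a rough illustration of the map-projection step mentioned in the caption, the sketch below projects hypothetical georectified latitude/longitude coordinates into a UTM zone using pyproj; the library choice and the sample coordinates are assumptions for illustration, not part of the AirMSPI Level 1B2 processing chain.

```python
# Minimal sketch: project georectified pixel coordinates to UTM.
# Assumes pyproj is available; the lat/lon arrays are hypothetical placeholders.
import numpy as np
from pyproj import Transformer

lat = np.array([36.60, 36.61, 36.62])     # hypothetical pixel latitudes (deg)
lon = np.array([-97.49, -97.48, -97.47])  # hypothetical pixel longitudes (deg)

zone = int((lon.mean() + 180) // 6) + 1   # UTM zone from the central longitude
transformer = Transformer.from_crs("EPSG:4326", f"EPSG:326{zone:02d}", always_xy=True)
easting, northing = transformer.transform(lon, lat)
print(zone, easting, northing)
```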

Source publication
Article
Full-text available
The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) is an eight-band (355, 380, 445, 470, 555, 660, 865, 935 nm) pushbroom camera, measuring polarization in the 470, 660, and 865 nm bands, mounted on a gimbal to acquire multiangular observations over a ±67° along-track range. The instrument has been flying aboard the NASA ER-2 high altitud...

Citations

... There are a number of spaceborne and airborne remote sensing instruments with polarization capabilities, such as the POLDER instrument (Deschamps et al., 1994), RSP (Cairns et al., 1999), AirHARP (Martins et al., 2018), SPEX airborne (Smit et al., 2019), and AirMSPI (Diner et al., 2013), which have successfully applied various polarization-based retrievals. ...
... It amounts to 5.4 %, 5.4 %, and 6.9 % for the red, green, and blue channels of polLL and 4.8 %, 4.9 %, and 6.2 % for polLR for the same typical signal level and DOLP as in Table 3. The uncertainties in the DOLP are large compared to other polarimetric instruments like RSP, AirHARP, or AirMSPI (Diner et al., 2013). However, the uncertainties in the transfer matrices are a very conservative estimate, as discussed above. ...
Article
Full-text available
The spectrometer of the Munich Aerosol Cloud Scanner (specMACS) is a high-spatial-resolution hyperspectral and polarized imaging system. It is operated from a nadir-looking perspective aboard the German High Altitude and LOng range (HALO) research aircraft and is mainly used for the remote sensing of clouds. In 2019, its two hyperspectral line cameras, which are sensitive to the wavelength range between 400 and 2500 nm, were complemented by two 2D RGB polarization-resolving cameras. The polarization-resolving cameras have a large field of view and allow for multi-angle polarimetric imaging with high angular and spatial resolution. This paper introduces the polarization-resolving cameras and provides a full characterization and calibration of them. We performed a geometric calibration and georeferencing of the two cameras. In addition, a radiometric calibration using laboratory calibration measurements was carried out. The radiometric calibration includes the characterization of the dark signal, linearity, and noise as well as the measurement of the spectral response functions, a polarization calibration, vignetting correction, and absolute radiometric calibration. With the calibration, georeferenced, absolute calibrated Stokes vectors rotated into the scattering plane can be computed from raw data. We validated the calibration results by comparing observations of the sunglint, which is a known target, with radiative transfer simulations of the sunglint.
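To make the last calibration step above concrete, here is a minimal sketch of rotating the linear Stokes components into another reference frame such as the scattering plane; the 2α sign convention and the helper function are illustrative assumptions, not the specMACS implementation.

```python
import numpy as np

def rotate_stokes(I, Q, U, alpha_rad):
    """Rotate the linear Stokes components (Q, U) by alpha_rad into a new
    reference frame, e.g. the scattering plane. I is unchanged; the sign
    convention Q' = Q cos2a + U sin2a is an assumption for this sketch."""
    c, s = np.cos(2.0 * alpha_rad), np.sin(2.0 * alpha_rad)
    return I, Q * c + U * s, -Q * s + U * c

# The degree of linear polarization is invariant under this rotation:
I, Q, U = 1.0, 0.12, -0.05            # hypothetical calibrated Stokes components
_, Qr, Ur = rotate_stokes(I, Q, U, np.deg2rad(30.0))
assert np.isclose(np.hypot(Q, U), np.hypot(Qr, Ur))
```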
... For example, POLDER establishes a radiometric model with improved calibration parameters for the transmittance and polarization phenomena of optics and filter units [28], and this technique obtains good calibration results. The Multiangle SpectroPolarimetric Imager (MSPI) [29], DPC [21], and Cloud and Aerosol Polarimetric Imager (CAPI) [30] also utilize this technique. Although each parameter in the radiometric model accurately describes the optic physical properties of the system, the parameters are often coupled and difficult to determine independently. ...
... Although each parameter in the radiometric model accurately describes the optic physical properties of the system, the parameters are often coupled and difficult to determine independently. Previous studies on polarization calibration of polarization imagers have mainly focused on visible and near-infrared channels [21], [28], [29], [30], and the calibration results of SWIR channels have rarely been reported. For PMAI with only SWIR channels, the conventional radiometric calibration coefficient fitting method can be used for radiometric calibration (as is done by MERSI [25]), but the uncertainty of the results needs to be assessed. ...
Article
The short-wave infrared Polarization and Multi-Angle Imager (PMAI) onboard the Fengyun-3 precipitation satellite is a new spaceborne imaging polarimeter for clouds and aerosols, with polarization channels of 1030, 1370, and 1640 nm. This study presents a detailed description and assessment of the calibration model of PMAI. For radiometric intensity calibration, multiple parameters in the radiometric model are fitted into a single coefficient to simplify calibration. Results show that the radiometric calibration uncertainty of the full image plane is better than 0.02, and the calibration coefficient increases as the field of view increases. The maximal unsaturated incident radiance of all channels is equivalent to 100% albedo, and the signal-to-noise ratio at the referenced radiance is greater than 115 and 182 for the polarized and unpolarized channels, respectively. The response of all channels shows high linearity and good uniformity of the full image plane. Based on the results of intensity calibration, a polarization calibration model using a fully linearly polarized light source is introduced with a polarization measurement matrix established by a simplified method and a calculation method. Assessment of polarization measurement indicates that the uncertainties of the obtained degree of linear polarization (DoLP) and angle of linear polarization (AoLP) based on the two methods are highly consistent. When fully linearly polarized light is incident, the measurement error of DoLP using the simplified polarization measurement matrix is within 0.02 and that of AoLP is less than 1°. Therefore, the simplified radiometric intensity and polarization calibration model meets the measurement accuracy requirements and improves the calibration efficiency.
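As a generic illustration of how a polarization measurement matrix is applied (a sketch under textbook assumptions, not the PMAI calibration code): intensities measured behind analyzers at several orientations can be inverted in a least-squares sense for the linear Stokes vector, from which DoLP and AoLP follow.

```python
import numpy as np

# Hypothetical analyzer orientations (deg) and a generic Malus-law measurement
# matrix; a calibrated instrument would use its characterized matrix instead.
angles = np.deg2rad([0.0, 60.0, 120.0])
M = 0.5 * np.column_stack([np.ones_like(angles),
                           np.cos(2 * angles),
                           np.sin(2 * angles)])

# Simulated measured intensities for a known input Stokes vector (I, Q, U).
s_true = np.array([1.0, 0.3, -0.1])
meas = M @ s_true

# Invert the measurement matrix in a least-squares sense, then derive DoLP/AoLP.
s_est, *_ = np.linalg.lstsq(M, meas, rcond=None)
I, Q, U = s_est
dolp = np.hypot(Q, U) / I
aolp = 0.5 * np.degrees(np.arctan2(U, Q))
print(f"DoLP = {dolp:.3f}, AoLP = {aolp:.1f} deg")
```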
... Satellite-borne spectroradiometers in particular have substantially advanced the way we view our home planet, and their information content will increase in the future as the technology evolves from multispectral to hyperspectral capabilities. Multi-angle polarimeters (MAPs), such as the Polarization and Directionality of the Earth's Reflectance (POLDER) (Deschamps et al., 1994), Airborne Multi-angle Spectro-Polarimetric Imager (AirMSPI) (Diner et al., 2013), Spectro-polarimeter for Planetary EXploration (SPEX) (Smit et al., 2019), Research Scanning Polarimeter (RSP) (Cairns et al., 2003), Multi-viewing Multi-Channel Multi-Polarization Imager (3MI) (Fougnie et al., 2018) and Multi-Angle Imager for Aerosols (MAIA) (Van Harten et al., 2021), have even greater information content compared to other existing single-viewing angle spectroradiometers, such as the MODerate Resolution Imaging Spectrometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ocean and Land Color Instrument (OLCI), owing to their ability to perform measurements at multiple viewing angles and different polarimetric states. ...
Article
Full-text available
Multi-angle polarimeters (MAPs) are powerful instruments to perform remote sensing of the environment. Joint retrieval algorithms of aerosols and ocean color have been developed to extract the rich information content of MAPs. These are optimization algorithms that fit the sensor measurements with forward models, which include radiative transfer simulations of the coupled atmosphere and ocean systems (CAOSs). The forward model consists of sub-models to represent the optics of the atmosphere, ocean water surface and ocean body. The representativeness of these models for observed scenes and the number of retrieval parameters are important for retrieval success. In this study, we have evaluated the impact of three different ocean bio-optical models with one, three and seven optimization parameters on the accuracy of joint retrieval algorithms of MAPs. The Multi-Angular Polarimetric Ocean coLor (MAPOL) joint retrieval algorithm was used to process data from the airborne Research Scanning Polarimeter (RSP) instrument acquired in different field campaigns. We performed ensemble retrievals along three RSP legs to evaluate the applicability of bio-optical models in geographically varying water of clear to turbid conditions. The average differences between the MAPOL aerosol optical depth (AOD) and spectral remote sensing reflectance (Rrs(λ)) retrievals and the MODerate resolution Imaging Spectroradiometer (MODIS) products were also reported. We studied the distribution of retrieval cost function values obtained for the three bio-optical models. For the one-parameter model, the spread of retrieval cost function values is narrow regardless of the type of water even if it fails to converge over coastal water. For the three- and seven-parameter models, the retrieval cost function distribution is water type dependent, showing the widest distribution over clear, open water. This suggests that caution should be used when using the spread of the cost function distribution to represent the retrieval uncertainty. We observed that the three- and seven-parameter models have similar MAP retrieval performances in all cases, though they are prone to converge at local minima over open-ocean water. It is necessary to develop a screening algorithm to divide open and coastal water before performing MAP retrievals. Given the computational efficiency and the algorithm stability requirements, we recommend the three-parameter bio-optical model as the coastal-water bio-optical model for future MAPOL studies. This study provides important practical guides on the joint retrieval algorithm development for current and future satellite missions such as NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission and ESA's Meteorological Operational-Second Generation (MetOp-SG) mission.
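The optimization described above fits MAP measurements with a forward model by minimizing a misfit; a minimal, generic sketch of such a generalized least-squares cost function is shown below, with a placeholder forward model standing in for the radiative transfer simulation (the names and the toy data are assumptions, not MAPOL code).

```python
import numpy as np

def gls_cost(x, measurements, forward_model, cov):
    """Generalized least-squares cost 0.5 * r^T C^{-1} r, where r is the
    residual between observed and modelled quantities. forward_model(x) is a
    placeholder for a coupled atmosphere-ocean radiative transfer model."""
    r = measurements - forward_model(x)
    return 0.5 * r @ np.linalg.solve(cov, r)

# Toy usage with a linear stand-in forward model y = A x.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
x_true = np.array([0.2, 1.0, -0.5])
y = A @ x_true + 0.01 * rng.normal(size=10)
cov = 0.01**2 * np.eye(10)
print(gls_cost(x_true, y, lambda x: A @ x, cov))
```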
... The synthetic measurements used in this study are chosen to mimic an airborne multi-angle imager such as AirMSPI (Airborne Multiangle SpectroPolarimetric Imager) operating in a step-and-stare mode (Diner et al., 2013), with the exception that all measurements are acquired simultaneously in this synthetic scenario. This is a similar configuration to that which can be achieved with the upcoming CloudCT mission (Schilling et al., 2019), which will utilize a constellation of small satellites to obtain simultaneous multi-angle imagery. ...
Article
Full-text available
Our global understanding of clouds and aerosols relies on the remote sensing of their optical, microphysical, and macrophysical properties using, in part, scattered solar radiation. Current retrievals assume clouds and aerosols form plane-parallel, homogeneous layers and utilize 1D radiative transfer (RT) models. These assumptions limit the detail that can be retrieved about the 3D variability in the cloud and aerosol fields and induce biases in the retrieved properties for highly heterogeneous structures such as cumulus clouds and smoke plumes. In Part 1 of this two-part study, we validated a tomographic method that utilizes multi-angle passive imagery to retrieve 3D distributions of species using 3D RT to overcome these issues. That validation characterized the uncertainty in the approximate Jacobian used in the tomographic retrieval over a wide range of atmospheric and surface conditions for several horizontal boundary conditions. Here, in Part 2, we test the algorithm's effectiveness on synthetic data to test whether the retrieval accuracy is limited by the use of the approximate Jacobian. We retrieve 3D distributions of a volume extinction coefficient (σ3D) at 40 m resolution from synthetic multi-angle, mono-spectral imagery at 35 m resolution derived from stochastically generated cumuliform-type clouds in (1 km)3 domains. The retrievals are idealized in that we neglect forward-modelling and instrumental errors, with the exception of radiometric noise; thus, reported retrieval errors are the lower bounds. σ3D is retrieved with, on average, a relative root mean square error (RRMSE) < 20 % and bias < 0.1 % for clouds with maximum optical depth (MOD) < 17, and the RRMSE of the radiances is < 0.5 %, indicating very high accuracy in shallow cumulus conditions. As the MOD of the clouds increases to 80, the RRMSE and biases in σ3D worsen to 60 % and −35 %, respectively, and the RRMSE of the radiances reaches 16 %, indicating incomplete convergence. This is expected from the increasing ill-conditioning of the inverse problem with the decreasing mean free path predicted by RT theory and discussed in detail in Part 1. We tested retrievals that use a forward model that is not only less ill-conditioned (in terms of condition number) but also less accurate, due to more aggressive delta-M scaling. This reduces the radiance RRMSE to 9 % and the bias in σ3D to −8 % in clouds with MOD ∼ 80, with no improvement in the RRMSE of σ3D. This illustrates a significant sensitivity of the retrieval to the numerical configuration of the RT model which, at least in our circumstances, improves the retrieval accuracy. All of these ensemble-averaged results are robust in response to the inclusion of radiometric noise during the retrieval. However, individual realizations can have large deviations of up to 18 % in the mean extinction in clouds with MOD ∼ 80, which indicates large uncertainties in the retrievals in the optically thick limit. Using less ill-conditioned forward model tomography can also accurately infer optical depths (ODs) in conditions spanning the majority of oceanic cumulus fields (MOD < 80), as the retrieval provides ODs with bias and RRMSE values better than −8 % and 36 %, respectively. This is a significant improvement over retrievals using 1D RT, which have OD biases between −30 % and −23 % and RRMSE between 29 % and 80 % for the clouds used here. 
Prior information or other sources of information will be required to improve the RRMSE of σ3D in the optically thick limit, where the RRMSE is shown to have a strong spatial structure that varies with the solar and viewing geometry.
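For reference, the RRMSE and bias metrics quoted above can be computed as in this short sketch; the normalization by the mean of the true field is an assumed convention and may differ in detail from the paper.

```python
import numpy as np

def rrmse(retrieved, truth):
    """Relative root mean square error, normalized by the mean of the truth
    (the normalization choice is an assumption of this sketch)."""
    return np.sqrt(np.mean((retrieved - truth) ** 2)) / np.mean(truth)

def relative_bias(retrieved, truth):
    """Mean difference relative to the mean of the truth."""
    return np.mean(retrieved - truth) / np.mean(truth)

# Hypothetical 3D extinction fields (km^-1) on a small grid.
truth = np.full((4, 4, 4), 20.0)
retrieved = truth * (1.0 + 0.1 * np.random.default_rng(1).normal(size=truth.shape))
print(rrmse(retrieved, truth), relative_bias(retrieved, truth))
```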
... Satellite-borne spectroradiometers in particular have substantially advanced the way we view our home planet, and their information content will increase in the future as the technology evolves from multi- to hyperspectral capabilities. Multi-angle polarimeters (MAPs), such as the POLarization and Directionality of the Earth's Reflectance (POLDER) (Deschamps et al., 1994), Airborne Multi-angle Spectro-Polarimetric Imager (AirMSPI) (Diner et al., 2013), Spectro-polarimeter for Planetary Exploration (SPEX) (Smit et al., 2019), Research Scanning Polarimeter (RSP) (Cairns et al., 2003), Multi-viewing Multichannel Multipolarization Imager (3MI) (Fougnie et al., 2018) and Multi-Angle Imager for Aerosols (MAIA) (Van Harten et al., 2021) have even greater information content compared to other existing single viewing angle spectroradiometers, such as the MODerate resolution Imaging Spectrometer (MODIS), Visible Infrared Imaging Radiometer System (VIIRS), and Ocean and Land Colour Instrument (OLCI), owing to their ability to perform measurements at multiple viewing angles and different polarimetric states (Dubovik et al., 2019). ...
Preprint
Full-text available
Multi-angle polarimeters (MAP) are powerful instruments to perform remote sensing of the environment. Joint retrieval algorithms of aerosols and ocean color have been developed to extract the rich information content of MAPs. These are optimization algorithms that fit the sensor measurements with forward models, which include radiative transfer simulations of the coupled atmosphere and ocean systems (CAOS) based on adjustable atmosphere and ocean properties. The forward model consists of sub-models to represent the optics of the atmosphere, ocean water surface, and ocean body. The representativeness of these models for observed scenes is important for retrieval success. In this study, we have evaluated the impact on MAP retrieval accuracy of three different ocean bio-optical models with 1, 3, and 7 optimization parameters that represent the spectral variation of inherent optical properties (IOP(λ)s) of the water body. The Multi-Angular Polarimetric Ocean coLor (MAPOL) joint retrieval algorithm was used to process data from the airborne Research Scanning Polarimeter (RSP) instrument acquired in different field campaigns. We performed ensemble retrievals along three RSP legs to evaluate the applicability of bio-optical models along geographically varying waters. The average differences between the MAPOL aerosol optical depth (AOD) and spectral remote sensing reflectance (Rrs(λ)) retrievals and the MODerate resolution Imaging Spectroradiometer (MODIS) products are also reported. We studied the distribution of retrieval cost function values obtained for the ensemble retrievals using the 3 bio-optical models under clear to highly turbid waters. For the 1-parameter model, retrieval cost function values show narrow distributions over any type of water, regardless of the cost function values, whereas for the 3- and 7-parameter models, the retrieval cost function distribution is water type dependent, showing the widest distribution over clear, open waters. We observed that the 3 and 7-parameter models have similar MAP retrieval performances relative to the 1-parameter model. We also demonstrated that the 3 and 7-parameter bio-optical models can be used to accurately represent both clear, open, and turbid, coastal waters, whereas the 1-parameter model is most successful over extremely clear waters. Given the computational efficiency requirements, we recommend the 3-parameter bio-optical model as the coastal water bio-optical model for future MAPOL studies. This study guides MAP algorithm development for current and future satellite missions such as NASA’s Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission and ESA’s Meteorological Operational-Second Generation (MetOp-SG) mission.
... This is a carefully considered choice as we want to test the limits of the retrieval on non-smooth media to detect, for example, a smoothing bias. The synthetic measurements used in this study are chosen to mimic an airborne multi-angle imager such as AirMSPI operating in step-and-stare mode (Diner et al., 2013), with the exception that all measurements are acquired simultaneously in this synthetic scenario. This is a similar configuration to that which can be achieved with the upcoming Cloud-CT mission (Schilling et al., 2019), which will utilize a constellation of small satellites to obtain simultaneous multi-angle imagery. ...
Preprint
Full-text available
Our global understanding of clouds and aerosols relies on the remote sensing of their optical, microphysical, and macrophysical properties using, in part, scattered solar radiation. Current retrievals assume clouds and aerosols form plane-parallel, homogeneous layers and utilize 1D radiative transfer (RT) models. These assumptions limit the detail that can be retrieved about the 3D variability of cloud and aerosol fields and induce biases in the retrieved properties for highly heterogeneous structures such as cumulus clouds and smoke plumes. In Part 1 of this two-part study, we validated a tomographic method that utilizes multi-angle passive imagery to retrieve 3D distributions of species using 3D RT to overcome these issues. That validation characterized the uncertainty in the approximate Jacobian used in the tomographic retrieval over a wide range of atmospheric and surface conditions for several horizontal boundary conditions. Here in Part 2, we test the algorithm’s effectiveness on synthetic data to test whether retrieval accuracy is limited by the use of the approximate Jacobian. We retrieve 3D distributions of volume extinction coefficient (σ3D) at 40 m resolution from synthetic multi-angle, mono-spectral imagery at 35 m resolution derived from stochastically-generated ‘cumuliform’ clouds in (1 km)3 domains. The retrievals are idealized in that we neglect forward modelling and instrumental errors with the exception of radiometric noise; thus reported retrieval errors are lower bounds. σ3D is retrieved with, on average, a Relative Root Mean Square Error (RRMSE) < 20 % and bias < 0.1 % for clouds with Maximum Optical Depth (MOD) < 17, and the RRMSE of the radiances is < 0.5 %, indicating very high accuracy in shallow cumulus conditions. As the MOD of the clouds increases to 80, the RRMSE and biases in σ3D worsen to 60 % and −35 %, respectively, and the RRMSE of the radiances reaches 16 %, indicating incomplete convergence. This is expected from the increasing ill-conditioning of the inverse problem with decreasing mean-free-path predicted by RT theory and discussed in detail in Part 1. We tested retrievals that use a forward model that is better conditioned but less accurate due to more aggressive delta-M scaling. This reduces the radiance RRMSE to 9 % and the bias in σ3D to −8 % in clouds with MOD ~80, with no improvement in the RRMSE of σ3D. This illustrates a significant sensitivity of the retrieval to the numerical configuration of the RT model which, at least in our circumstances, improves the retrieval accuracy. All of these ensemble-averaged results are robust to the inclusion of radiometric noise during the retrieval. However, individual realizations can have large deviations of up to 18 % in the mean extinction in clouds with MOD ~80, which indicates large uncertainties in the retrievals in the optically thick limit. Using the better conditioned forward model tomography can also accurately infer optical depths (OD) in conditions spanning the majority of oceanic, cumulus fields (MOD < 80) as the retrieval provides OD with bias and RRMSE better than −8 % and 36 %, respectively. This is a significant improvement over retrievals using 1D RT, which have OD biases between −30 % and −23 % and RRMSE between 29 % and 80 % for the clouds used here. Prior information or other sources of information will be required to improve the RRMSE of σ3D in the optically thick limit, where the RRMSE is shown to have strong spatial structure that varies with the solar and viewing geometry.
... The future spaceborne CloudCT mission (Schilling et al., 2019) will provide the required simultaneous multi-angle imagery for tomographic retrievals. Existing airborne instruments such as AirMSPI (Airborne Multi-angle Spectro Polarimetric Imager; Diner et al., 2013) and AirHARP (Airborne Hyper-Angular Rainbow Polarimeter; McBride et al., 2020) and the space-borne MISR (Multiangle Imaging SpectroRadiometer) and MAIA (Multi-Angle Imager for Aerosols) also have the potential for tomographic retrievals, though they must additionally deal with the effects of cloud evolution (Ronen et al., 2021), as they do not acquire their observations simultaneously. The availability of these measurements makes the continued development of tomographic algorithms especially timely. ...
Article
Full-text available
Our global understanding of clouds and aerosols relies on the remote sensing of their optical, microphysical, and macrophysical properties using, in part, scattered solar radiation. These retrievals assume that clouds and aerosols form plane-parallel, homogeneous layers and utilize 1D radiative transfer (RT) models, limiting the detail that can be retrieved about the 3D variability in cloud and aerosol fields and inducing biases in the retrieved properties for highly heterogeneous structures such as cumulus clouds and smoke plumes. To overcome these limitations, we introduce and validate an algorithm for retrieving the 3D optical or microphysical properties of atmospheric particles using multi-angle, multi-pixel radiances and a 3D RT model. The retrieval software, which we have made publicly available, is called Atmospheric Tomography with 3D Radiative Transfer (AT3D). It uses an iterative, local optimization technique to solve a generalized least squares problem and thereby find a best-fitting atmospheric state. The iterative retrieval uses a fast, approximate Jacobian calculation, which we have extended from Levis et al. (2020) to accommodate open and periodic horizontal boundary conditions (BCs) and an improved treatment of non-black surfaces. We validated the accuracy of the approximate Jacobian calculation for derivatives with respect to both the 3D volume extinction coefficient and the parameters controlling the open horizontal boundary conditions across media with a range of optical depths and single-scattering properties and find that it is highly accurate for a majority of cloud and aerosol fields over oceanic surfaces. Relative root mean square errors in the approximate Jacobian for a 3D volume extinction coefficient in media with cloud-like single-scattering properties increase from 2 % to 12 % as the maximum optical depths (MODs) of the medium increase from 0.2 to 100.0 over surfaces with Lambertian albedos <0.2. Over surfaces with albedos of 0.7, these errors increase to 20 %. Errors in the approximate Jacobian for the optimization of open horizontal boundary conditions exceed 50 %, unless the plane-parallel media providing the boundary conditions are optically very thin (∼0.1). We use the theory of linear inverse RT to provide insight into the physical processes that control the cloud tomography problem and identify its limitations, supported by numerical experiments. We show that the Jacobian matrix becomes increasingly ill-posed as the optical size of the medium increases and the forward-scattering peak of the phase function decreases. This suggests that tomographic retrievals of clouds will become increasingly difficult as clouds become optically thicker. Retrievals of asymptotically thick clouds will likely require other sources of information to be successful. In Loveridge et al. (2023a; hereafter Part 2), we examine how the accuracy of the retrieved 3D volume extinction coefficient varies as the optical size of the target medium increases using synthetic data. We do this to explore how the increasing error in the approximate Jacobian and the increasingly ill-posed nature of the inversion in the optically thick limit affect the retrieval. We also assess the accuracy of retrieved optical depths and compare them to retrievals using 1D radiative transfer.
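As a schematic of the iterative, Jacobian-based generalized least-squares optimization described here (a generic Gauss-Newton sketch with linear stand-ins, not the AT3D implementation):

```python
import numpy as np

def gauss_newton_step(x, y_obs, forward, jacobian, cov_inv):
    """One Gauss-Newton update for a generalized least-squares problem.
    forward(x) and jacobian(x) stand in for a 3D RT model and its
    (approximate) Jacobian; in practice these are the expensive parts."""
    r = y_obs - forward(x)
    J = jacobian(x)
    lhs = J.T @ cov_inv @ J
    rhs = J.T @ cov_inv @ r
    return x + np.linalg.solve(lhs, rhs)

# Toy usage: with a linear "forward model" the update converges in one step.
A = np.array([[1.0, 0.5], [0.2, 1.5], [0.7, 0.3]])
x_true = np.array([2.0, -1.0])
y = A @ x_true
x = gauss_newton_step(np.zeros(2), y, lambda x: A @ x, lambda x: A, np.eye(3))
print(x)  # -> approximately [2.0, -1.0]
```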
... Cloud structure was also recovered from simulated and empirical airborne multi-view images, emulating JPL's Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) (Diner et al., 2013). This was done by computed tomography (CT), which inverts a 3D radiative transfer model, to retrieve the extinction coefficient in 3D; 3D cloud geometry is an important product for constraining the retrieval of cloud properties. ...
Article
Full-text available
A method to derive the 3D cloud envelope and the cloud development velocity from high spatial and temporal resolution satellite imagery is presented. The CLOUD instrument of the recently proposed C3IEL mission lends itself well to observing at high spatial and temporal resolutions the development of convective cells. Space-borne visible cameras simultaneously image, under multiple view angles, the same surface domain every 20 s over a time interval of 200 s. In this paper, we present a method for retrieving cloud development velocity from simulated multi-angular, high-resolution top of the atmosphere (TOA) radiance cloud fields. The latter are obtained via the image renderer Mitsuba for a cumulus case generated via the atmospheric research model SAM and via the radiative transfer model 3DMCPOL, coupled with the outputs of an orbit, attitude, and camera simulator for a deep convective cloud case generated via the atmospheric research model Meso-NH. Matching cloud features are found between simulations via block matching. Image coordinates of tie points are mapped to spatial coordinates via 3D stereo reconstruction of the external cloud envelope for each acquisition. The accuracy of the retrieval of cloud topography is quantified in terms of RMSE and bias that are, respectively, less than 25 and 5 m for the horizontal components and less than 40 and 25 m for the vertical components. The inter-acquisition 3D velocity is then derived for each pair of tie points separated by 20 s. An independent method based on minimising the RMSE for a continuous horizontal shift of the cloud top, issued from the atmospheric research model, allows a ground estimate of the velocity to be obtained from two consecutive acquisitions. The mean values of the distributions of the stereo and ground velocities exhibit small biases. The width of the distributions is significantly different, with a higher distribution width for the stereo-retrieved velocity. An alternative way to derive an average velocity over 200 s, which relies on tracking clusters of points via image feature matching over several acquisitions, was also implemented and tested. For each cluster of points, mean stereo and ground positions were derived every 20 s over 200 s. The mean stereo and ground velocities, obtained as the slope of the line of best fit to the mean positions, are in good agreement.
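To illustrate the final velocity step (a simplified sketch with hypothetical tie-point coordinates; the actual processing involves block matching and full stereo reconstruction):

```python
import numpy as np

def tie_point_velocity(p0, p1, dt=20.0):
    """3D velocity (m/s) between two acquisitions of the same tie point,
    given its reconstructed positions p0, p1 in metres and the
    inter-acquisition interval dt in seconds (20 s for the CLOUD cameras)."""
    return (np.asarray(p1) - np.asarray(p0)) / dt

# Hypothetical reconstructed positions (east, north, up) of one cloud feature.
p_t0 = [1200.0, 3400.0, 2500.0]
p_t1 = [1350.0, 3420.0, 2560.0]
print(tie_point_velocity(p_t0, p_t1))  # -> [7.5, 1.0, 3.0] m/s
```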
... The future space-borne CloudCT mission (Schilling et al., 2019) will provide the required simultaneous multi-angle imagery for tomographic retrievals. Existing airborne instruments such as AirMSPI (Diner et al., 2013) and AirHARP (McBride et al., 2020) and the space-borne MISR and MAIA instruments also have potential for tomographic retrievals, though they must additionally deal with the effects of cloud evolution (Ronen et al., 2021) as they do not acquire their observations simultaneously. The availability of these measurements makes the continued development of tomographic algorithms especially timely. ...
Preprint
Full-text available
Our global understanding of clouds and aerosols relies on the remote sensing of their optical, microphysical, and macrophysical properties using, in part, scattered solar radiation. These retrievals assume clouds and aerosols form plane-parallel, homogeneous layers and utilize 1D radiative transfer (RT) models, limiting the detail that can be retrieved about the 3D variability of cloud and aerosol fields and inducing biases in the retrieved properties for highly heterogeneous structures such as cumulus clouds and smoke plumes. To overcome these limitations, we introduce and validate an algorithm for retrieving the 3D optical or microphysical properties of atmospheric particles using multi-angle, multi-pixel radiances and a 3D RT model. The retrieval software, which we have made publicly available, is called Atmospheric Tomography with 3D Radiative Transfer (AT3D). It uses an iterative, local optimization technique to solve a generalized least-squares problem and thereby find a best-fitting atmospheric state. The iterative retrieval uses a fast, approximate Jacobian calculation, which we have extended from Levis et al. (2020) to accommodate open as well as periodic horizontal boundary conditions (BC) and an improved treatment of non-black surfaces. We validated the accuracy of the approximate Jacobian calculation for derivatives with respect to both the 3D volume extinction coefficient and the parameters controlling the open horizontal boundary conditions across media with a range of optical depths and single scattering properties and find that it is highly accurate for a majority of cloud and aerosol fields over oceanic surfaces. Relative root-mean-square errors in the approximate Jacobian for 3D volume extinction coefficient in media with cloud-like single scattering properties increase from 2 % to 12 % as the Maximum Optical Depths (MOD) of the medium increases from 0.2 to 100.0 over surfaces with Lambertian albedos < 0.2. Over surfaces with albedos of 0.7, these errors increase to 20 %. Errors in the approximate Jacobian for the optimization of open horizontal boundary conditions exceed 50 % unless the plane-parallel media providing the boundary conditions are very optically thin (~0.1). We use the theory of linear inverse RT to provide insight into the physical processes that control the cloud tomography problem and identify its limitations, supported by numerical experiments. We show that the Jacobian matrix becomes increasingly ill-posed as the optical size of the medium increases and the forward scattering peak of the phase function decreases. This suggests that tomographic retrievals of clouds will become increasingly difficult as clouds become optically thicker. Retrievals of asymptotically thick clouds will likely require other sources of information to be successful. In Part 2 of this study, we examine how the accuracy of the retrieved 3D volume extinction coefficient varies as the optical size of the target medium increases using synthetic data. We do this to explore how the increasing error in the approximate Jacobian and increasingly ill-posed nature of the inversion in the optically thick limit affect the retrieval. We develop a method to improve retrieval accuracy in the optically thick limit. We also assess the accuracy of retrieved optical depths and surface irradiances and compare them to retrievals using 1D radiative transfer.
... Cloud structure was also recovered from simulated and empirical airborne multi-view images, emulating JPL's Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) (Diner et al., 2013). This was done by computed tomography (CT), which inverts a 3D radiative transfer model, to retrieve the extinction coefficient in 3D. ...
Preprint
Full-text available
A method to derive the 3D cloud envelope and the cloud development velocity from high spatial and temporal resolution satellite imagery is presented. The CLOUD instrument of the recently proposed C3IEL mission lends itself well to observing at high spatial and temporal resolutions the development of convective cells. Space-borne visible cameras simultaneously image, under multiple view angles, the same surface domain, every 20 s over a time interval of 200 s. In this paper, we present a method for retrieving cloud development velocity from simulated multi-angular, high-resolution TOA radiance cloud fields. The latter are obtained via the radiative transfer model 3DMCPOL, for a deep convective cloud case generated via the atmospheric research model Meso-NH, and via the image renderer Mitsuba for a cumulus case generated via the atmospheric research model SAM. Matching cloud features are found between simulations via block matching. Image coordinates of tie points are mapped to spatial coordinates via 3D stereo reconstruction of the external cloud envelope for each acquisition. The accuracy of the retrieval of cloud topography is quantified in terms of RMSE and bias that are, respectively, less than 25 m and 15 m for the horizontal components and less than 40 m and 25 m for the vertical component. The inter-acquisition 3D velocity is then derived for each pair of tie points separated by 20 s. An independent method based on optimizing the superposition of the cloud top, issued from the atmospheric research model, allows a ground estimate of the velocity to be obtained from two consecutive acquisitions. The distribution of retrieved velocity and ground estimate exhibits small biases but significant discrepancy in terms of distribution width. Furthermore, the average velocities derived from the mean altitude from ground for a cluster of localized cloud features identified over several acquisitions, both in the simulated images and in the physical point cloud, are in good agreement.