Conference Paper

Endmember Extraction Using the Physics-Based Multi-Mixture Pixel Model

Abstract

A method of incorporating the multi-mixture pixel model into hyperspectral endmember extraction is presented and discussed. A vast majority of hyperspectral endmember extraction methods rely on the linear mixture model to describe pixel spectra resulting from mixtures of endmembers. Methods exist to unmix hyperspectral pixels using nonlinear models, but they rely on severely limiting assumptions or estimations of the nonlinearity. This paper will present a hyperspectral pixel endmember extraction method that utilizes the bidirectional reflectance distribution function to model microscopic mixtures. Using this model, along with the linear mixture model to incorporate macroscopic mixtures, this method is able to accurately unmix hyperspectral images composed of both macroscopic and microscopic mixtures. The mixtures are estimated directly from the hyperspectral data without the need for a priori knowledge of the mixture types. Results are presented using synthetic datasets of multi-mixture pixels to demonstrate the increased accuracy in unmixing using this new physics-based method over linear methods. In addition, results are presented using a well-known laboratory dataset.
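To make the mixture model concrete, the following is a minimal forward-model sketch in Python of a multi-mixture pixel. It assumes a commonly used simplification of Hapke's bidirectional reflectance model for the albedo-to-reflectance conversion and a fixed, illustrative viewing geometry; the function and variable names are illustrative and are not taken from the paper.

import numpy as np

def reflectance_from_albedo(w, mu0=1.0, mu=1.0):
    # Simplified Hapke relation (isotropic scattering, no opposition effect);
    # mu0 and mu are the cosines of the incidence and emergence angles.
    g = np.sqrt(1.0 - w)
    return w / ((1.0 + 2.0 * mu0 * g) * (1.0 + 2.0 * mu * g))

def multi_mixture_pixel(E_refl, W_albedo, a, f):
    # E_refl:   (R, B) endmember reflectance spectra (one per row)
    # W_albedo: (R, B) endmember single-scattering-albedo spectra
    # a: macroscopic fractions of length R + 1; the last entry weights the
    #    intimate-mixture "endmember" (non-negative, sums to one)
    # f: microscopic fractions inside the intimate mixture (sums to one)
    w_mix = W_albedo.T @ f                        # intimate mixing is linear in albedo
    intimate_refl = reflectance_from_albedo(w_mix)
    return E_refl.T @ a[:-1] + a[-1] * intimate_refl

Inverting this forward model, i.e., estimating the fractions and the endmembers directly from the data, is the problem the extraction method described above addresses.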

... In [2], the author derives an analytical model to express the measured reflectances as a function of parameters intrinsic to the mixtures, e.g., the mass fractions, the characteristics of the individual particles (density, size) and the single-scattering albedo. Other popular approximating models include the discrete-dipole approximation [16] and Shkuratov's model [17] (interested readers are invited to consult [3] or the more signal processing-oriented papers [18] and [19]). However, these models also strongly depend on parameters inherent to the experiment, since they require perfect knowledge of the geometric positioning of the sensor with respect to the observed sample. ...
... Note that a similar non-LMM coupled with a group-sparse constraint on p_n has been explicitly adopted in [58] and [59] to make the unmixing of hyperspectral pixels more robust. In (19), d_v is greater (respectively, lower) than a threshold h. As in the PPNM-based detection procedure, the threshold h can be related to the PFA and PD through closed-form expressions. ...
Article
When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may be not valid and other nonlinear models need to be considered, for instance, when there are multi-scattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this paper, we present an overview of recent advances in nonlinear unmixing modeling.
... Recent works aim to combine the macroscopic mixture (LMM) and the microscopic mixture characterized by Hapke's model [Close et al., 2012a]. In [Close et al., 2012b; Dranishnikov et al., 2013], the intimate mixture effect is taken as an additional endmember to the LMM, yielding ...
Thesis
This thesis aims to propose new nonlinear unmixing models within the framework of kernel methods and to develop associated algorithms, in order to address the hyperspectral unmixing problem. First, we investigate a novel kernel-based nonnegative matrix factorization (NMF) model that circumvents the pre-image problem inherited from kernel machines. Within the proposed framework, several extensions are developed to incorporate common constraints raised in hyperspectral image analysis. In order to tackle large-scale and streaming data, we next extend the kernel-based NMF to an online setting, while keeping a fixed and tractable complexity. Moreover, we propose a bi-objective NMF model as an attempt to combine the linear and nonlinear unmixing models; the decompositions of both the conventional NMF and the kernel-based NMF are performed simultaneously. The last part of this thesis studies a supervised unmixing model based on the correntropy maximization principle, which is shown to be robust to outlier bands. Two correntropy-based unmixing problems are addressed, considering different constraints in the hyperspectral unmixing problem. The alternating direction method of multipliers (ADMM) is investigated to solve the related optimization problems.
... expressed in the reflective domain, where f_p = [f_1p, …, f_Rp]^T correspond to the microscopic proportions, {w_r} are the albedo-domain endmember signatures and R(·) is the mapping function from the albedo domain to the reflectance domain. The same authors have also introduced an unsupervised algorithm to perform MMP-based SU [120,121]. ...
... The linear assumption in the MVES algorithms takes the form of straight edges between the endmember vertices. If non-linear mixing is present, however, these edges may be better represented by convex (Close et al., 2012) or concave (Keshava and Mustard, 2002) lines. ...
Thesis
Full-text available
The use of Visible and Near Infrared (VNIR) imaging spectroscopy is a cornerstone of planetary exploration. This work shall present an investigation into the limitations of scale, both spectral and spatial, in the utility of VNIR images for identifying small scale hydrothermal and potentially hydrated environments on Mars, and regions of the Earth that can serve as martian analogues. Such settings represent possible habitable environments; important locations for astrobiological research. The ESA/Roscosmos ExoMars rover PanCam captures spectrally coarse but spatially high resolution VNIR images. This instrument is still in development and the first field trial of an emulator fitted with the final set of geological filters is presented here. Efficient image analysis techniques are explored and the ability to accurately characterise a hydrothermally altered region using PanCam data products is established. The CRISM orbital instrument has been returning hyperspectral VNIR images with an 18 m² pixel resolution since 2006. The extraction of sub-pixel information from CRISM pixels using Spectral Mixture Analysis (SMA) algorithms is explored. Using synthetic datasets, a full SMA pipeline consisting of publicly available Matlab algorithms and optimised for investigation of mineralogically complex hydrothermal suites is developed for the first time. This is validated using data from Námafjall in Iceland, the region used to field trial the PanCam prototype. The pipeline is applied to CRISM images covering four regions on Mars identified as having potentially undergone hydrothermal alteration in their past. A second novel use of SMA to extract a unique spectral signature for the potentially hydrated Recurring Slope Lineae features on Mars is presented. The specific methodology presented shows promise and future improvements are suggested. The importance of combining different scales of data and recognising their limitations is discussed based on the results presented, and ways in which to take the results presented in this thesis forward are given.
... expressed in the reflective domain, where f_p = [f_1p, …, f_Rp]^T correspond to the microscopic proportions, {w_r} are the albedo-domain endmember signatures and R(·) is the mapping function from the albedo domain to the reflectance domain. The same authors have also introduced an unsupervised algorithm to perform MMP-based SU [120,121]. ...
Chapter
Mainly due to the limited spatial resolution of the data acquisition devices, hyperspectral image pixels generally result from the mixture of several components that are present in the observed surface. Spectral mixture analysis (or spectral unmixing) is a key processing step which aims at identifying the spectral signatures of these materials and quantifying their spatial distribution over the image. The main purpose of this chapter is to introduce the spectral unmixing problem and to discuss some linear and nonlinear models and algorithms used to solve it. We will show that, capitalizing on several decades of methodological developments in the geoscience and remote sensing community, most of the unmixing algorithms proposed to unmix remotely sensed images can be directly applied in the chemometrics field to process hyperspectral data arising from various scanning microscopic techniques such as scanning transmission electron microscopy and Raman imaging.
... This kernel becomes the linear kernel for γ → 0, and approximates the inverse of the reflectance-SSA relation derived by Hapke for large γ. An additive approach for combining the LMM with intimate mixtures has been proposed in [79]- [81], where an unknown nonlinear mixture was added to the LMM as an additional endmember. The mixing equation of this multi-mixture pixel (MMP) model is ...
Article
Full-text available
In hyperspectral unmixing, the prevalent model used is the linear mixing model, and a large variety of techniques based on this model has been proposed to obtain endmembers and their abundances in hyperspectral imagery. However, it has been known for some time that nonlinear spectral mixing effects can be a crucial component in many real-world scenarios, such as planetary remote sensing, intimate mineral mixtures, vegetation canopies, or urban scenes. While several nonlinear mixing models were proposed decades ago, only recently has there been a proliferation of nonlinear unmixing models and techniques in the signal processing literature. This paper aims to give a historical overview of the majority of nonlinear mixing models and nonlinear unmixing methods, and to explain some of the more popular techniques in detail. The main models and techniques treated are bilinear models, models for intimate mineral mixtures, radiosity-based approaches, ray tracing, neural networks, kernel methods, support vector machine techniques, manifold learning methods, piece-wise linear techniques, and detection methods for nonlinearity. Furthermore, we provide an overview of several recent developments in the nonlinear unmixing literature that do not belong to any of these categories.
... The intimate mixture model is based on the photometric model of Hapke [14], which considers multiple scattering between different materials at the particle level [6]. Close et al. [7], [8] applied Hapke's average albedo model to solve fully unsupervised nonlinear unmixing in the case of intimate mixtures. In the multilayered scene, there are multiple interactions among scatterers at different layers, which often happen between vegetation and soil [1]- [5]. ...
Article
Full-text available
Nonlinear spectral mixture models have recently received particular attention in hyperspectral image processing. In this paper, we present a novel optimization method for nonlinear unmixing based on a generalized bilinear model (GBM), which considers the second-order scattering of photons in a spectral mixture model. Semi-nonnegative matrix factorization (semi-NMF) is used for the optimization to process a whole image in matrix form. When endmember spectra are given, the optimization of abundance and interaction abundance fractions converges to a local optimum by alternating update rules with a simple implementation. The proposed method is evaluated using synthetic datasets with respect to its robustness to endmember extraction accuracy and spectral complexity, and shows smaller errors in abundance fractions than conventional methods. GBM-based unmixing using semi-NMF is applied to the analysis of an airborne hyperspectral image taken over an agricultural field with many endmembers, and it visualizes the impact of nonlinear interactions on abundance maps at reasonable computational cost.
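For reference, the following is a small sketch of the GBM forward model described in this abstract (the semi-NMF optimization itself is not reproduced here); the interface and names are illustrative.

import numpy as np

def gbm_pixel(M, a, gamma, noise_std=0.0, rng=None):
    # M: (B, R) endmember matrix (columns are endmember spectra)
    # a: (R,) abundances, non-negative and summing to one
    # gamma: dict {(i, j): value in [0, 1]} weighting each bilinear interaction
    rng = np.random.default_rng() if rng is None else rng
    B, R = M.shape
    y = M @ a
    for i in range(R - 1):
        for j in range(i + 1, R):        # second-order scattering terms
            y += gamma.get((i, j), 0.0) * a[i] * a[j] * (M[:, i] * M[:, j])
    if noise_std > 0:
        y += rng.normal(scale=noise_std, size=B)
    return y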
... Broadwater and Banerjee derived various kernel-based unmixing techniques that implicitly relied on the Hapke model [14]–[16]. In [17]–[19], Close et al. combined linear and intimate mixing processes in single models to improve flexibility. Conversely, scenes acquired over vegetated areas are also known to be subject to more complex interactions that cannot be properly taken into account by a simple LMM [20]–[27]. ...
Article
Full-text available
Spectral unmixing is a crucial processing step when analyzing hyperspectral data. In such analysis, most of the work in the literature relies on the widely acknowledged linear mixing model to describe the observed pixels. Unfortunately, this model has been shown to be of limited interest for specific scenes, in particular when acquired over vegetated areas. Consequently, in the past few years, several nonlinear mixing models have been introduced to take nonlinear effects into account. These models have been proposed empirically, however, without any thorough validation. In this paper, the authors take advantage of two sets of real and physics-based simulated data to validate the accuracy of various nonlinear models in vegetated areas. These physics-based and analytical models, and their corresponding unmixing algorithms, are evaluated with respect to their ability to fit the measured spectra and to provide an accurate estimation of the abundance coefficients, considered as the spatial distribution of the materials in each pixel.
Article
Hyperspectral unmixing is a crucial task in hyperspectral image processing and analysis. It aims to decompose mixed pixels into pure spectral signatures and their associated abundances. However, most current unmixing methods ignore the reality that the same pixel of a hyperspectral image can exhibit many different reflections simultaneously. To address this issue, we propose a multi-task autoencoding model for multiple reflections, which can improve the algorithm’s robustness in complex environments. Our proposed framework uses 3D-CNN-based networks to jointly learn spectral-spatial priors and adapts to different pixels by complementing the advantages of other unmixing methods. The proposed method can quantitatively evaluate each area of the data, which helps improve the algorithm’s interpretability. This paper presents MAHUM (Multi-tasks Autoencoder Hyperspectral Unmixing Model), which stacks multiple models to deal with the various reflections of complex terrain. We also perform a sensitivity analysis on some parameters and show experimental results demonstrating our method’s ability to quantitatively express the adaptability of different materials in different methods.
Article
Full-text available
The hyperspectral imagery provides images in hundreds of spectral bands within different wavelength regions. This technology has been increasingly applied in different fields of earth sciences, such as mineral exploration, environmental monitoring, agriculture, urban science, and planetary remote sensing. However, despite the ability of these data to detect surface features, the measured spectrum is composed of several components that make it a mixed spectrum, due to the low spatial resolution of the employed sensors or the presence of multiple materials in their instantaneous field of view (IFOV). The existence of mixed spectra severely hinders the accurate processing of hyperspectral data. Therefore, it is necessary to separate these mixtures through so-called spectral unmixing methods. Spectral unmixing decomposes a mixed pixel in hyperspectral images into a set of spectra (endmembers) and their abundances. Typically, two types of spectral mixing models (linear and nonlinear) are considered. In the linear mixing model (LMM), the radiance reflected to the sensor results from interaction with a single material, and a pixel is assumed to be a linear combination of endmembers weighted by their abundances. The nonlinear model, on the other hand, is used when the mixing scale is microscopic or materials are mixed intrinsically. The linear mixing model has been very popular for hyperspectral processing over the last decades, and a large effort has been put into using it for unmixing applications, including the detection of minerals and their abundances, resulting in an abundance of linear unmixing methods and algorithms. However, as early as 40 years ago it was observed that strong nonlinear spectral mixing effects are present in many situations, for instance when there are multiple scattering effects or intimate mineral interactions, yet nonlinear unmixing techniques have received much less attention than linear ones. Therefore, this paper aims to give an overview of the majority of nonlinear mixing models and methods used in hyperspectral image processing, together with many recent developments in this field. Several of the more popular nonlinear unmixing techniques are explained in detail. In this regard, nonlinear unmixing methods can be categorized into two groups: physics-based methods and data-driven techniques. The most important methods in these two groups are bilinear and multi-linear models, intimate mineral mixture models, radiosity-based approaches, ray tracing, neural networks, kernel methods, manifold learning, and topology methods. A comprehensive review of these methods is provided, in which bilinear and multi-linear models and neural networks are shown to have become more popular among researchers over the years. The current study should give the reader who is interested in working with nonlinear unmixing techniques a reasonably good introduction to the most commonly used methods and approaches.
Article
Full-text available
Much work in the study of hyperspectral imagery has focused on macroscopic mixtures and unmixing via the linear mixing model. A substantially different approach seeks to model hyperspectral data non-linearly in order to accurately describe intimate or microscopic relationships of materials within the image. In this paper we present and discuss a new model (MacMicDEM) that seeks to unify both approaches by representing a pixel as both linearly and non-linearly mixed, with the condition that the endmembers for both mixture types need not be related. Using this model, we develop a method to accurately and quickly unmix data which is both macroscopically and microscopically mixed. Subsequently, this method is then validated on synthetic and real datasets.
Conference Paper
Full-text available
Proposed and existing hyperspectral remote sensors provide information about the scene of interest at resolutions ranging from a few meters to a few kilometers in terrestrial and space applications. Understanding the type of information extracted with image exploitation algorithms, and how it relates to actual spectra on the ground, are important problems when we look into algorithms that perform unmixing of hyperspectral images for subpixel analysis. In this paper, we investigate how spatial resolution affects the capability of unmixing algorithms based on geometric models to extract information from a scene. We study the performance of the positive matrix factorization for unmixing of hyperspectral imagery at different spatial resolutions and how it compares with other approaches such as MaxD and SMACC. Hyperspectral imagery collected using the AISA sensor at 1 m and 4 m is used for the experiments. The results obtained illustrate some of the effects that algorithm assumptions have on unmixing results.
Article
Full-text available
Linear and nonlinear spectral mixture analysis has been studied for deriving the fractions of spectrally pure materials in a mixed pixel in the past decades. However, not much attention has been given to the collinearity problem in spectral unmixing. In this paper, quantitative analysis and detailed simulations are provided which show that the high correlation between the endmembers, including the virtual endmembers introduced in a nonlinear model, has a strong impact on unmixing errors through inflating the Gaussian noise. While distinctive spectra with low correlations are often selected as true endmembers, the virtual endmembers formed by their product terms can be highly correlated with the others. Therefore, it is found that a nonlinear model generally suffers more from the collinearity problem than a linear model and may not perform as expected when the Gaussian noise is high, despite its higher modeling power. Experiments were conducted to illustrate these effects.
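The inflation described here is easy to reproduce numerically. The sketch below (illustrative, not the authors' code) augments an endmember matrix with the bilinear product terms and compares correlations and condition numbers before and after augmentation.

import numpy as np

def collinearity_report(M):
    # M: (B, R) endmember matrix. Augment it with the bilinear "virtual"
    # endmembers m_i * m_j and compare conditioning before and after.
    B, R = M.shape
    virtual = [M[:, i] * M[:, j] for i in range(R - 1) for j in range(i + 1, R)]
    M_aug = np.column_stack([M] + virtual)
    for name, X in (("linear", M), ("linear + virtual", M_aug)):
        c = np.corrcoef(X, rowvar=False)
        max_corr = np.abs(c[np.triu_indices_from(c, k=1)]).max()
        print(f"{name}: max |correlation| = {max_corr:.3f}, "
              f"condition number = {np.linalg.cond(X):.2e}")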
Article
Full-text available
Applications of nonlinear unmixing models based on the Shkuratov theory to fit laboratory spectra of intimate mixtures and to derive imaginary refraction indices of minerals are presented. The tests are performed using mafic minerals (pyroxenes, olivine, plagioclase). Abundance estimates of end-members are accurate to within 5-10% for the analyzed mixtures, while the estimated particle sizes are within the intervals of the actual sizes, given the reflectance spectra of the end-members only. The type of mixture (areal versus intimate) can be tested. A quantitative modeling of basalt, a dark rock mainly composed of bright minerals, is also presented. The limitations of our modeling method are discussed.
Article
Full-text available
The shortest path k-nearest neighbor classifier (SkNN), which utilizes nonlinear manifold learning, is proposed for the analysis of hyperspectral data. In contrast to classifiers that deal with the high-dimensional feature space directly, this approach uses the pairwise distance matrix over a nonlinear manifold to classify novel observations. Because manifold learning preserves the local pairwise distances and updates distances of a sample to samples beyond the user-defined neighborhood along the shortest path on the manifold, similar samples are moved into closer proximity. High classification accuracies are achieved by using the simple k-nearest neighbor (kNN) classifier. SkNN was applied to hyperspectral data collected by the Hyperion sensor on the EO-1 satellite over the Okavango Delta of Botswana. Classification accuracies and generalization capability are compared to those achieved by the best basis binary hierarchical classifier, the hierarchical support vector machine classifier, and the k-nearest neighbor classifier on both the original data and a subset of its principal components.
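A rough sketch of the idea (geodesic distances on a neighborhood graph feeding a plain kNN vote) is given below; this is not the authors' exact SkNN implementation, and the graph and neighborhood parameters are illustrative.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def sknn_predict(X, y_train, train_idx, test_idx, n_graph=10, k=5):
    # Build a neighborhood graph over all samples, compute shortest-path
    # (manifold) distances on it, then classify test samples by a kNN vote
    # among the training samples using those distances.
    G = kneighbors_graph(X, n_neighbors=n_graph, mode="distance")
    D = shortest_path(G, method="D", directed=False)           # (N, N) dense
    D_test_train = D[np.ix_(test_idx, train_idx)]
    neighbors = np.argsort(D_test_train, axis=1)[:, :k]
    preds = []
    for row in y_train[neighbors]:                             # majority vote
        values, counts = np.unique(row, return_counts=True)
        preds.append(values[np.argmax(counts)])
    return np.array(preds)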
Conference Paper
Full-text available
There are a number of challenges in developing consistent error-free maps of vegetation at the species level from hyperspectral imagery. One of the primary difficulties stems from bi-directional reflectance distribution function (BRDF) effects. Similarly, in applying classification models from one hyperspectral scene to another, BRDF effects also limit the classification accuracy. Other sources of nonlinearity, especially in coastal environments such as coastal wetlands, arise from the variable presence of water in pixels as a function of position in the landscape. In a previous paper, we developed an approach to modeling these nonlinearities by deriving nonlinear coordinates that describe the hyperspectral data manifold. In this paper, we examine whether a nonlinear manifold model can be aligned from one hyperspectral scene to another.
Conference Paper
Full-text available
Nonlinear manifold learning algorithms, mainly isometric feature mapping (Isomap) and local linear embedding (LLE), determine the low-dimensional embedding of the original high-dimensional data by finding the geometric distances between samples. Researchers in the remote sensing community have successfully applied Isomap to hyperspectral data to extract useful information. Although results are promising, the computational requirements of the local search process are exorbitant. Landmark-Isomap, which utilizes randomly selected sample points to perform the search, mitigates these problems, but samples of some classes are located in spatially disjoint clusters in the embedded space. We propose an alternative approach to selecting landmark points which focuses on the boundaries of the clusters, rather than randomly selected points or cluster centers. The unique Isomap is evaluated by SStress, a goodness-of-fit measure, and reconstructed with reduced computation, which makes implementation with other classifiers plausible for large data sets. The new method is implemented and applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana.
Article
Full-text available
In this paper, we examine the accuracy of manifold coordinate representations as a reduced representation of a hyperspectral imagery (HSI) lookup table (LUT) for bathymetry retrieval. We also explore on a more limited basis the potential for using these coordinates for modeling other in water properties. Manifold coordinates are chosen because they are a data-driven intrinsic set of coordinates, which naturally parameterize nonlinearities that are present in HSI of water scenes. The approach is based on the extraction of a reduced dimensionality representation in manifold coordinates of a sufficiently large representative set of HSI. The manifold coordinates are derived from a scalable version of the isometric mapping algorithm. In the present and in our earlier works, these coordinates were used to establish an interpolating LUT for bathymetric retrieval by associating the representative data with ground truth data, in this case from a Light Detection and Ranging (LIDAR) estimate in the representative area. While not the focus of the present paper, the compression of LUTs could also be applied, in principle, to LUTs generated by forward radiative transfer models, and some preliminary work in this regard confirms the potential utility for this application. In this paper, we analyze the approach using data acquired by the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS) hyperspectral camera over the Indian River Lagoon, Florida, in 2004. Within a few months of the PHILLS overflights, Scanning Hydrographic Operational Airborne LIDAR Survey LIDAR data were obtained for a portion of this study area, principally covering the beach zone and, in some instances, portions of contiguous river channels. Results demonstrate that significant compression of the LUTs is possible with little loss in retrieval accuracy.
Conference Paper
Full-text available
An endmember detection algorithm for hyperspectral imagery using the Dirichlet process to determine the number of endmembers in a hyperspectral image is described. This algorithm provides an estimate of endmember spectra, proportion maps, and the number of endmembers needed for a scene. Updates to the proportion vector for a pixel are sampled using the Dirichlet process. As opposed to previous methods that prune unnecessary endmembers, the proposed algorithm is initialized with one endmember and new endmembers are added through sampling as needed. Results are shown on a two-dimensional dataset and a simulated dataset using endmembers selected from an AVIRIS hyperspectral image.
Conference Paper
Full-text available
A critical step for fitting a linear mixing model to hyperspectral imagery is the estimation of the abundances. The abundances are the percentage of each endmember within a given pixel; therefore, they should be non-negative and sum to one. With the advent of kernel-based algorithms for hyperspectral imagery, kernel-based abundance estimates have become necessary. This paper presents such an algorithm that estimates the abundances in the kernel feature space while maintaining the non-negativity and sum-to-one constraints. The usefulness of the algorithm is shown using the AVIRIS Cuprite, Nevada image.
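A generic sketch of the constrained estimation problem this abstract describes: minimizing the feature-space reconstruction error ||phi(y) - sum_r a_r phi(e_r)||^2 subject to non-negativity and sum-to-one, here via projected gradient descent with an RBF kernel. This illustrates the problem setup, not the paper's specific algorithm; all names and parameters are illustrative.

import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def project_simplex(v):
    # Euclidean projection onto {a : a >= 0, sum(a) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def kernel_fcls(y, E, kernel=rbf_kernel, n_iter=500):
    # E: (R, B) endmembers; y: (B,) pixel. The feature-space objective expands
    # to k(y, y) - 2 a.k_Ey + a.K_EE.a, so only kernel evaluations are needed.
    R = E.shape[0]
    K_EE = np.array([[kernel(E[i], E[j]) for j in range(R)] for i in range(R)])
    k_Ey = np.array([kernel(E[i], y) for i in range(R)])
    a = np.full(R, 1.0 / R)
    step = 1.0 / (2.0 * np.linalg.norm(K_EE, 2))   # safe step from the Lipschitz constant
    for _ in range(n_iter):
        grad = 2.0 * (K_EE @ a - k_Ey)
        a = project_simplex(a - step * grad)
    return a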
Conference Paper
Full-text available
Many available techniques for spectral mixture analysis involve the separation of mixed pixel spectra collected by imaging spectrometers into pure component (endmember) spectra, and the estimation of abundance values for each endmember. Although linear mixing models generally provide a good abstraction of the mixing process, several naturally occurring situations exist where nonlinear models may provide the most accurate assessment of endmember abundance. In this paper, we propose a combined linear/nonlinear mixture model which makes use of linear mixture analysis to provide an initial model estimation, which is then thoroughly refined using a multi-layer neural network coupled with intelligent algorithms for automatic selection of training samples. Three different algorithms for automatic selection of training samples, namely the border training algorithm (BTA), the mixed signature algorithm (MSA) and the morphological erosion algorithm (MEA), are developed for this purpose. The proposed model is evaluated in the context of a real application which involves the use of hyperspectral data sets collected by the Digital Airborne (DAIS 7915) and Reflective Optics System (ROSIS) imaging spectrometers of DLR, operating simultaneously at multiple spatial resolutions.
Article
Full-text available
Spectral mixtures observed in hyperspectral imagery often display nonlinear mixing effects. Since most traditional unmixing techniques are based upon the linear mixing model, they perform poorly in finding the correct endmembers and their abundances in the case of nonlinear spectral mixing. In this paper, we present an unmixing algorithm that is capable of extracting endmembers and determining their abundances in hyperspectral imagery under nonlinear mixing assumptions. The algorithm is based upon simplex volume maximization, and uses shortest-path distances in a nearest-neighbor graph in spectral space, hereby respecting the nontrivial geometry of the data manifold in the case of nonlinearly mixed pixels. We demonstrate the algorithm on an artificial data set, the AVIRIS Cuprite data set, and a hyperspectral image of a heathland area in Belgium.
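A compact sketch of the core idea described here: computing simplex volumes from pairwise geodesic distances (via the Cayley-Menger determinant) and greedily swapping vertices to maximize that volume. The distance matrix D is assumed to come from shortest paths on a nearest-neighbor graph, as in the abstract; this is a simplified illustration, not the published algorithm.

import math
import numpy as np

def simplex_volume_sq(D2):
    # D2: (k+1, k+1) squared pairwise distances between the candidate vertices.
    k = D2.shape[0] - 1
    CM = np.ones((k + 2, k + 2))          # Cayley-Menger matrix
    CM[0, 0] = 0.0
    CM[1:, 1:] = D2
    coef = (-1.0) ** (k + 1) / (2.0 ** k * math.factorial(k) ** 2)
    return coef * np.linalg.det(CM)

def geodesic_volume_max(D, R, n_sweeps=3, seed=0):
    # D: (N, N) geodesic distance matrix over the pixels; R: number of endmembers.
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(D.shape[0], size=R, replace=False))
    best = simplex_volume_sq(D[np.ix_(idx, idx)] ** 2)
    for _ in range(n_sweeps):
        for pos in range(R):
            for cand in range(D.shape[0]):
                trial = list(idx)
                trial[pos] = cand
                vol = simplex_volume_sq(D[np.ix_(trial, trial)] ** 2)
                if vol > best:
                    best, idx = vol, trial
    return idx                              # indices of the selected endmember pixels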
Article
Full-text available
Localized training data typically utilized to develop a classifier may not be fully representative of class signatures over large areas but could potentially provide useful information which can be updated to reflect local conditions in other areas. An adaptive classification framework is proposed for this purpose, whereby a kernel machine is first trained with labeled data and then iteratively adapted to new data using manifold regularization. Assuming that no class labels are available for the data for which spectral drift may have occurred, resemblance associated with the clustering condition on the data manifold is used to bridge the change in spectra between the two data sets. Experiments are conducted using spatially disjoint data in EO-1 Hyperion images, and the results of the proposed framework are compared to semisupervised kernel machines.
Article
Full-text available
Nonlinear spectral mixture analysis for hyperspectral imagery is investigated without prior information about the image scene. A simple but effective nonlinear mixture model is adopted, where the multiplication of each pair of endmembers results in a virtual endmember representing multiple scattering effect during pixel construction process. The analysis is followed by linear unmixing for abundance estimation. Due to a large number of nonlinear terms being added in an unknown environment, the following abundance estimation may contain some errors if most of the endmembers do not really participate in the mixture of a pixel. We take advantage of the developed endmember variable linear mixture model (EVLMM) to search the actual endmember set for each pixel, which yields more accurate abundance estimation in terms of smaller pixel reconstruction error, smaller residual counts, and more pixel abundances satisfying sum-to-one and nonnegativity constraints.
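A minimal sketch of the virtual-endmember construction this abstract relies on: the endmember matrix is augmented with pairwise products and a non-negative least-squares fit is performed. The constraint handling and the EVLMM endmember-set search are simplified away here, and the names are illustrative.

import numpy as np
from scipy.optimize import nnls

def unmix_with_virtual_endmembers(y, M):
    # M: (B, R) endmember matrix; y: (B,) observed pixel spectrum.
    # Each pairwise product m_i * m_j acts as a virtual endmember modeling
    # a multiple-scattering interaction term.
    B, R = M.shape
    pairs = [(i, j) for i in range(R - 1) for j in range(i + 1, R)]
    virtual = np.column_stack([M[:, i] * M[:, j] for i, j in pairs])
    M_aug = np.column_stack([M, virtual])
    coeffs, _ = nnls(M_aug, y)
    return coeffs[:R], dict(zip(pairs, coeffs[R:]))   # linear vs. interaction terms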
Article
Full-text available
In this article we apply an analytical solution of the radiosity equation to compute vegetation indices, reflectance spectra, and the spectral bidirectional reflectance distribution function for simple canopy geometries. We show that nonlinear spectral mixing occurs due to multiple reflection and transmission from surfaces. We compare radiosity-derived spectra with single scattering or linear mixing models. We also develop a simple model to predict the reflectance spectrum of binary and ternary mineral mixtures of faceted surfaces. The two facet model is validated by measurements of the reflectance.
Article
Macroscopic and microscopic mixture models and algorithms for hyperspectral unmixing are presented. Unmixing algorithms are derived from an objective function. The objective function incorporates the linear mixture model for macroscopic unmixing and a nonlinear mixture model for microscopic unmixing. The nonlinear mixture model is derived from a bidirectional reflectance distribution function for microscopic mixtures. The algorithm is designed to unmix hyperspectral images composed of macroscopic or microscopic mixtures. The mixture types and abundances at each pixel can be estimated directly from the data without prior knowledge of mixture types. Endmembers can also be estimated. Results are presented using synthetic data sets of macroscopic and microscopic mixtures and using well-known, well-characterized laboratory data sets. The unmixing accuracy of this new physics-based algorithm is compared to linear methods and to results published for other nonlinear models. The proposed method achieves the best unmixing accuracy.
Article
Linear mixing models are widely used in terrestrial remote sensing, with the errors in these models often being attributed to “nonlinear” mixing. Nonlinear mixing refers to the interaction of light with multiple target materials. Reflectance data from creosote bush in the Manix Basin of the Mojave Desert are used to show the existence and importance of nonlinear mixing in arid region vegetation. It is shown that the difference between the reflectance spectrum of a plant against a soil background and the spectrum of the plant against a dark background is well represented by light that has interacted with both the soil and the plant.
Conference Paper
In previous work, kernel methods were introduced as a way to generalize the linear mixing model for hyperspectral data. This work led to a new adaptive kernel unmixing method that both identified and unmixed linearly and intimately mixed pixels. However, the results from this previous research were limited to lab-based data where the endmembers were known a priori and atmospheric effects were absent. This paper documents the results of the adaptive kernel-based unmixing techniques on real-world hyperspectral data collected over Smith Island, Virginia, USA. The results show that the adaptive kernel unmixing method can readily identify where nonlinear mixtures exist in the image even when perfect knowledge of the endmembers and the reflectance cannot be known.
Conference Paper
In this paper a new spectral unmixing approach applied to hyperspectral imagery using neural network architectures is proposed. In addition to the inversion phase, neural networks are also considered for the dimensionality reduction of the measurement vector. The performance of the entire procedure has been tested on two sets of experimental data provided by the AHS instrument and by the CHRIS Proba mission. The unmixing results given by the neural approach have been compared with those obtained using a more standard technique based on the linear unmixing model.
Article
This paper addresses the unmixing of hyperspectral images when intimate mixtures are present. In these scenarios, the light undergoes multiple interactions among distinct endmembers, which is not accounted for by the linear mixing model. A two-step method to unmix hyperspectral intimate mixtures is proposed: first, based on the Hapke intimate mixture model, the reflectance is converted into an average single-scattering albedo; second, the mass fractions of the endmembers are estimated by a recently proposed method termed simplex identification via split augmented Lagrangian (SISAL). The proposed method is evaluated on a well-known intimate mixture data set.
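The first step of the two-step method, converting reflectance to average single-scattering albedo, has a simple closed form under the commonly used simplified Hapke relation; a sketch is given below with an illustrative fixed geometry. The second step in the paper uses SISAL, which is not reproduced here; for experimentation, any constrained linear unmixing routine can be run in the albedo domain instead.

import numpy as np

def albedo_from_reflectance(r, mu0=1.0, mu=1.0):
    # Invert the simplified Hapke relation r = w / ((1 + 2*mu0*g) * (1 + 2*mu*g)),
    # with g = sqrt(1 - w), for the single-scattering albedo w. The relation is
    # quadratic in g, so the inversion is closed form.
    a = 1.0 + 4.0 * r * mu0 * mu
    b = 2.0 * r * (mu0 + mu)
    c = r - 1.0
    g = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return 1.0 - g * g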
Article
A method of incorporating macroscopic and microscopic reflectance models into hyperspectral pixel unmixing is presented and discussed. A vast majority of hyperspectral unmixing methods rely on the linear mixture model to describe pixel spectra resulting from mixtures of endmembers. Methods exist to unmix hyperspectral pixels using nonlinear models, but they rely on severely limiting assumptions or estimations of the nonlinearity. This paper will present a hyperspectral pixel unmixing method that utilizes the bidirectional reflectance distribution function to model microscopic mixtures. Using this model, along with the linear mixture model to incorporate macroscopic mixtures, this method is able to accurately unmix hyperspectral images composed of both macroscopic and microscopic mixtures. The mixtures are estimated directly from the hyperspectral data without the need for a priori knowledge of the mixture types. Results are presented using synthetic datasets of macroscopic and microscopic mixtures to demonstrate the increased accuracy in unmixing using this new physics-based method over linear methods. In addition, results are presented using a well-known laboratory dataset. Using these results, and other published results from this dataset, increased accuracy in unmixing over other nonlinear methods is shown.
Article
Accurate land cover classification that ensures robust mapping under diverse acquisition conditions is important in environmental studies, where the identification of land cover changes and their quantification have critical implications for management practices, the functioning of ecosystems, and the impact of climate. While remote sensing data have served as a useful tool for large-scale monitoring of the earth, hyperspectral data offer an enhanced capability for more accurate land cover classification. However, constructing a robust classification framework for hyperspectral data poses issues that stem from inherent properties of hyperspectral data, including highly correlated spectral bands, high dimensionality, nonlinear spectral responses, and nonstationarity of samples in space and time. This dissertation addresses these issues in hyperspectral data classification by leveraging the concept of manifolds. A manifold is a nonlinear low-dimensional subspace that is supported by data samples. Manifolds can be exploited in developing robust feature extraction and classification methods that are pertinent to the aforementioned issues. In this dissertation, various manifold learning algorithms that are widely used in the machine learning community are investigated for the classification of hyperspectral data. Performance of global and local manifold learning methods is investigated in terms of (a) parameter values, (b) the number of features retained, and (c) scene characteristics of the hyperspectral data. The empirical study, involving several data sets with diverse characteristics, is outlined in Chapter 3. Results indicate that the manifold coordinates produce generally higher classification accuracies compared to those obtained by linear feature extraction methods, when they are used with proper settings. Chapter 4 addresses two limitations of manifold learning, (a) heavy computational requirements and (b) lack of attention to spatial context, which limit the applicability of manifold learning algorithms to large-scale remote sensing data. Approximation approaches such as the Nyström methods are employed to mitigate the computational burden, where a set of landmark samples is first selected for the construction of the approximate manifolds, and the remaining samples are then linearly embedded in the manifold. While various landmark selection schemes are possible (e.g. random selection, clustering-based approaches), spatially representative samples that are potentially relevant to data on grids can be obtained if the spatial context is considered in the selection scheme. A framework for representing the spatial coherence of samples is proposed using the kernel feature extraction framework. The proposed method produces a set of new features in which a unique spatial coherence pattern for homogeneous regions is captured in the individual features, which yield high classification accuracies and qualitatively superior results. Finally, an adaptive classification framework that exploits manifolds is proposed to obtain robust classification results for hyperspectral data. Spectral signatures can vary significantly across extended areas, often resulting in poor classification of land cover. The proposed adaptive framework employs a manifold regularization classifier, where the classifier is trained with labeled samples in one location and adapted to samples in spatially disjoint areas that exhibit significantly different distributions. In experimental studies, classification accuracies were higher for the proposed approach than for other kernel-based semi-supervised classification methods.
Article
In this paper, we propose a nonlinear unmixing matching algorithm using the bidirectional reflectance distribution function (BRDF) and maximum likelihood estimation (MLE). Spectral unmixing algorithms are used to determine the contribution of multiple substances in a single pixel of a hyperspectral image. For any kind of unmixing model, the basic approach is to describe how different substances combine into a composite spectrum. When a linear relationship exists between the fractional abundances of the substances, linear unmixing algorithms can determine the endmembers present in that particular pixel. When the relationship is not linear, but rather each substance is randomly distributed in a homogeneous way, the mixing is called nonlinear. Though there are plenty of unmixing algorithms based on linear mixing models (LMMs), very few algorithms have been developed to unmix nonlinear data. We propose a nonlinear unmixing technique using the BRDF and MLE and test our algorithm on both synthetic and real hyperspectral data.
Article
A technique is presented for quantitative analysis of planetary reflectance spectra as mixtures of particles on microscopic and macroscopic scales using principal components analysis. This technique allows for determination of the endmembers being mixed, their abundance, and the scale of mixing, as well as other physical parameters. Eighteen lunar telescopic reflectance spectra of the Copernicus crater region, from 600 nm to 1800 nm in wavelength, are modeled in terms of five likely endmembers: mare basalt, mature mare soil, anorthosite, mature highland soil, and clinopyroxene.
Article
In recent years, the Isomap method has been widely used for nonlinear dimensionality reduction of hyperspectral images. However, during the construction of the shortest-path graph, the boundary points, which are not noise points, have always been omitted out of consideration for the stability of the graph. In this paper, we introduce the PLS method to repair and simulate the manifold coordinates of the boundary points (MCBP) in the graph. The simulated manifold coordinates are evaluated from two different aspects to verify our method. The results show that the simulated manifold coordinates agree well with the real ones and preserve the geometric structure of the high-dimensional hyperspectral image, which will be quite useful for further classification or visualization with the low-dimensional manifold image.
Article
When the number of labeled samples is limited, traditional supervised feature selection techniques often fail due to the unrepresentative sample problem. However, in the classification of hyperspectral data, labeled samples are often difficult, expensive, or time-consuming to obtain. Recently, several semi-supervised feature selection algorithms have been proposed, which aim at performing feature selection using some unlabeled data. In this paper, a novel semi-supervised band selection method which aims to improve the classification accuracy of hyperspectral remote sensing data is proposed. This algorithm combines Fisher's criterion and the graph Laplacian, exploiting labeled and unlabeled samples at the same time. With the help of the generalized eigenvalue method, we can easily obtain the loading factors from the linear transformation matrix to determine the weight value for each band. Experimental results demonstrate the effectiveness of the proposed method.
Article
In previous work, kernel methods were introduced as a way to generalize the linear mixing model for hyperspectral data. This work led to a new physics-based kernel that allowed accurate unmixing of intimate mixtures. Unfortunately, the new physics-based kernel did not perform well on linear mixtures; thus, different kernels had to be used for different mixtures. Ideally, a single unified kernel that can perform both unmixing of areal and intimate mixtures would be desirable. This paper presents such a kernel that can automatically identify the underlying mixture type from the data and perform the correct unmixing method. Results on real-world, ground-truthed intimate and linear mixtures demonstrate the ability of this new data-driven kernel to perform generalized unmixing of hyperspectral data.
Article
The spectral unmixing of mixed pixels is a key factor in remote sensing images, especially for hyperspectral imagery. A commonly used approach to spectral unmixing has been linear unmixing. However, the question of whether linear or nonlinear processes dominate spectral signatures of mixed pixels is still an unresolved matter. In this study, we put forward a new nonlinear model for inferring end-member fractions within hyperspectral scenes, and focus on comparing the nonlinear model with a linear model. A detailed comparative analysis of the fractions ‘sunlit crown’, ‘sunlit background’ and ‘shadow’ between the two methods was carried out through visualization and through comparison with supervised classification, using a database of laboratory simulated-forest scenes. Our results show that the nonlinear model of spectral unmixing outperforms the linear model, especially in scenes with a translucent crown on a white background. A nonlinear mixture model is needed to account for the multiple scattering between tree crowns and the background.
Conference Paper
This paper studies a new Bayesian algorithm to unmix hyperspectral images. The algorithm is based on the recent normal compositional model introduced by Eismann. Contrary to the standard linear mixing model, the endmember spectra are assumed to be random signatures with known mean vectors. Appropriate prior distributions are assigned to the abundance coefficients to ensure the usual positivity and sum-to-one constraints. However, the resulting posterior distribution is too complex to obtain a closed-form expression for the Bayesian estimators. A Markov chain Monte Carlo algorithm is then proposed to generate samples distributed according to the full posterior distribution. These samples are used to estimate the unknown model parameters. Several simulations are conducted on synthetic and real data to illustrate the performance of the proposed method.
Conference Paper
In previous work, kernel methods were introduced as a way to generalize the linear mixing model. This work led to a new set of algorithms that performed the unmixing of hyperspectral imagery in a reproducing kernel Hilbert space. By processing the imagery in this space different types of unmixing could be introduced - including an approximation of intimate mixtures. Whereas previous research focused on developing the mathematical foundation for kernel unmixing, this paper focuses on the selection of the kernel function. Experiments are conducted on real-world hyperspectral data using a linear, a radial-basis function, a polynomial, and a proposed physics-based kernel. Results show which kernels provide the best ability to perform intimate unmixing.
Conference Paper
This paper presents a new linear hyperspectral unmixing method of the minimum-volume class, termed simplex identification via split augmented Lagrangian (SISAL). Following Craig's seminal ideas, hyperspectral linear unmixing amounts to finding the minimum-volume simplex containing the hyperspectral vectors. This is a nonconvex optimization problem with convex constraints. In the proposed approach, the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The obtained problem is solved by a sequence of augmented Lagrangian optimizations. The resulting algorithm is very fast and able to solve problems far beyond the reach of current state-of-the-art algorithms. The effectiveness of SISAL is illustrated with simulated data.
Conference Paper
For hyperspectral imagery, the term “spectral edge” has not been clearly defined because of the complexity of the high-dimensional properties in spectral space. In this paper, a new definition of the spectral edge is presented based on a data-driven mathematical approach, manifold learning. It considers both the spectral features in spectral space and the discontinuity of the image function in image space. Experimental analysis using EO-1 hyperspectral imagery shows that the spectral-edge-based method has the desired performance in describing the edge contours in the hyperspectral imagery.
Conference Paper
This paper is an elaboration of the DECA algorithm to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which geometric-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
Article
In this paper, we address the problem of redundancy reduction of high-dimensional noisy signals that may contain anomaly (rare) vectors, which we wish to preserve. Since anomaly data vectors contribute weakly to the l2-norm of the signal as compared to the noise, l2-based criteria are unsatisfactory for obtaining a good representation of these vectors. As a remedy, a new approach, named Min-Max-SVD (MX-SVD), was recently proposed for signal-subspace estimation by attempting to minimize the maximum of data-residual l2-norms, denoted as l2,∞ and designed to represent well both abundant and anomaly measurements. However, the MX-SVD algorithm is greedy and only approximately minimizes the proposed l2,∞-norm of the residuals. In this paper we develop an optimal algorithm for the minimization of the l2,∞-norm of data misrepresentation residuals, which we call Maximum Orthogonal complements Optimal Subspace Estimation (MOOSE). The optimization is performed via a natural conjugate gradient learning approach carried out on the set of n-dimensional subspaces in ℝ^m, m ≫ n, which is a Grassmann manifold. The results of applying MOOSE, MX-SVD, and l2-based approaches are demonstrated on both simulated and real hyperspectral data.
Article
Although hyperspectral remotely sensed data are believed to be nonlinear, they are often modeled and processed by algorithms assuming that the data are realizations of some linear stochastic processes. This is likely because either the nonlinearity of the data may not be strong enough, so that algorithms based on a linear data assumption may still do the job, or effective algorithms that are capable of dealing with nonlinear data are not widely available. The simplification of data characteristics, however, may compromise the effectiveness and accuracy of information extraction from hyperspectral imagery. In this paper, we investigate the existence of nonlinearity in hyperspectral data represented by a 4-m Airborne Visible/Infrared Imaging Spectrometer image acquired over an area of coastal forests on Vancouver Island. The method employed for the investigation is based on a statistical test using surrogate data, an approach often used in nonlinear time series analysis. In addition to the high-order autocorrelation, the spectral angle is utilized as the discriminating statistic to evaluate the differences between the hyperspectral data and their surrogates. To facilitate the statistical test, simulated data sets are created under linear stochastic constraints. Both simulated and real hyperspectral data are rearranged into a set of spectral series where the spectral and spatial adjacency of the original data is maintained as much as possible. This paper reveals that the differences are statistically significant between the values of discriminating statistics derived from the hyperspectral data and their surrogates. This indicates that the selected hyperspectral data are nonlinear in the spectral domain. Algorithms that are capable of explicitly addressing the nonlinearity are needed for processing hyperspectral remotely sensed data.
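A generic sketch of the surrogate-data test described here, using phase-randomized (amplitude-preserving) surrogates and the spectral angle to a reference as the discriminating statistic. The authors' exact statistics and data arrangement differ; this only illustrates the testing procedure, and the names are illustrative.

import numpy as np

def phase_randomized_surrogate(x, rng):
    # Keep the amplitude spectrum (hence the linear autocorrelation),
    # randomize the phases, and return a real-valued surrogate series.
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.size)
    phases[0] = 0.0
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size) + x.mean()

def spectral_angle(x, z):
    cos = np.dot(x, z) / (np.linalg.norm(x) * np.linalg.norm(z))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def surrogate_test(series, reference, n_surrogates=99, seed=0):
    # Rank the observed statistic within the surrogate distribution; an extreme
    # rank is evidence against the linear-stochastic null hypothesis.
    rng = np.random.default_rng(seed)
    observed = spectral_angle(series, reference)
    surrogates = np.array([
        spectral_angle(phase_randomized_surrogate(series, rng), reference)
        for _ in range(n_surrogates)])
    rank = int(np.sum(surrogates <= observed))
    return observed, rank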
Article
Nonlinear models have recently shown interesting properties for spectral unmixing. This paper studies a generalized bilinear model and a hierarchical Bayesian algorithm for unmixing hyperspectral images. The proposed model is a generalization not only of the accepted linear mixing model but also of a bilinear model that has been recently introduced in the literature. Appropriate priors are chosen for its parameters to satisfy the positivity and sum-to-one constraints for the abundances. The joint posterior distribution of the unknown parameter vector is then derived. Unfortunately, this posterior is too complex to obtain analytical expressions of the standard Bayesian estimators. As a consequence, a Metropolis-within-Gibbs algorithm is proposed, which allows samples distributed according to this posterior to be generated and to estimate the unknown model parameters. The performance of the resulting unmixing strategy is evaluated via simulations conducted on synthetic and real data.
Conference Paper
Many high-dimensional datasets can be mapped onto lower-dimensional linear simplexes, parametrized by barycentric coordinates. We present an unsupervised algorithm that is able to find the barycentric coordinates and corresponding vertices of such a high-dimensional dataset, by combining manifold learning with a distance geometry based algorithm for finding a maximal volume inscribed simplex. The performance of the algorithm is demonstrated on a Swiss-roll dataset that is restricted to a simplex, and on the spectral unmixing of hyperspectral imagery.
Conference Paper
In this paper, we present an unmixing algorithm that is capable of determining endmembers and their abundances in hyperspectral imagery under non-linear mixing assumptions. The algorithm is based upon the popular N-FINDR method, but uses distances between points in spectral space instead of the spectral values. These distances are defined as shortest-path distances in a nearest-neighbor graph, hereby respecting the non-trivial geometry of the data manifold in the case of nonlinearly mixed pixels. This allows the algorithm to be applied under non-linear mixing conditions. A demonstration on artificial data is given.
Conference Paper
A local proximity based data regularization framework for active learning is proposed as a means to optimally construct the training set for supervised classification of hyperspectral data, thereby reducing the effort required to acquire ground reference data. Based on the "Consistency Assumption", a local k-nearest-neighborhood Laplacian graph based regularizer is constructed to explore local inconsistency that often results from insufficient description of the current learner for the data space. Two graph regularization methods, which differ in the approach used to construct the graph weights, are investigated. One utilizes only spectral information, while the other further incorporates local spatial information through a composite Gaussian heat kernel. The regularizer queries samples with the greatest violation of the smoothness assumption based on the current model, then adjusts the decision function towards the direction that is most consistent with both labeled and unlabeled data. Experiments show excellent performance on both unlabeled and unseen data for 10-class hyperspectral image data acquired by AVIRIS, as compared to random sampling and the state-of-the-art SVM_SIMPLE.
Article
Approaches to combine local manifold learning (LML) and the k-nearest-neighbor (kNN) classifier are investigated for hyperspectral image classification. Based on supervised LML (SLML) and kNN, a new SLML-weighted kNN (SLML-WkNN) classifier is proposed. This method is appealing as it does not require dimensionality reduction and only depends on the weights provided by the kernel function of the specific ML method. Performance of the proposed classifier is compared to that of unsupervised LML (ULML) and SLML for dimensionality reduction in conjunction with kNN (ULML-kNN and SLML-kNN). Three LML methods, locally linear embedding (LLE), local tangent space alignment (LTSA), and Laplacian eigenmaps, are investigated with these classifiers. In experiments with Hyperion and AVIRIS hyperspectral data, the proposed SLML-WkNN performed better than ULML-kNN and SLML-kNN, and the highest accuracies were obtained using weights provided by supervised LTSA and LLE.
Conference Paper
A novel manifold learning feature extraction approach, preserving neighborhood discriminant embedding (PNDE), for hyperspectral images is proposed in this paper. The local geometrical and discriminant structure of the data manifold can be accurately characterized by a within-class neighboring graph and a between-class neighboring graph. Unlike manifold learning methods such as LLE, Isomap and LE, which cannot deal with new test samples and images larger than 70×70, the method here can process full-scene hyperspectral images. Experimental results on hyperspectral datasets and real-world datasets show that the proposed method can efficiently reduce the dimensionality while maintaining high classification accuracy. In addition, only a small amount of training samples is needed.
Conference Paper
Nonlinear mixture analysis for hyperspectral imagery is investigated in this paper. A simple but effective nonlinear mixture model is adopted, where the multiplication of each pair of endmembers results in another “endmember”, representing the nonlinear scattering effect during the pixel construction process. The analysis is followed by the original linear demixing process. Due to the larger number of nonlinear terms being added, the resulting abundance estimation may contain some error if most of the endmembers do not really participate in the mixture of a pixel. We take advantage of the developed endmember variable linear mixture model (EVLMM) to search the actual endmember set for each pixel, which yields more accurate abundance estimation.
Article
Accurate monitoring of spatial and temporal variation in tree cover provides essential information for steering management practices in orchards. In this light, the present study investigates the potential of Hyperspectral Mixture Analysis. Specific focus lies on a thorough study of non-linear mixing effects caused by multiple photon scattering. In a series of experiments the importance of multiple scattering is demonstrated, while a novel conceptual Nonlinear Spectral Mixture Analysis approach is presented and successfully tested on in situ measured mixed pixels in Citrus sinensis L. orchards. The rationale behind the approach is the redistribution of nonlinear fractions (i.e., virtual fractions) among the actual physical ground cover entities (e.g., tree, soil). These ‘virtual’ fractions, which account for the extent and nature of multiple photon scattering, only have a physical meaning at the spectral level and cannot be interpreted as an actual physical part of the ground cover. Results illustrate that the effect of multiple scattering on Spectral Mixture Analysis is significant, as the linear approach provides a mean relative root mean square error (RMSE) for tree cover fraction estimates of 27%. While traditional nonlinear approaches only slightly reduce this error (RMSE = 23%), important improvements are obtained for the novel Nonlinear Spectral Mixture Analysis approach (RMSE = 12%).
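To illustrate the redistribution idea in this abstract, the sketch below fits linear plus pairwise-interaction terms and then folds each virtual fraction back into its two parent cover classes. The equal split used here is only a plausible illustrative rule; it is not necessarily the redistribution scheme adopted in the paper.

import numpy as np
from scipy.optimize import nnls

def nonlinear_sma_with_redistribution(y, M):
    # M: (B, R) class signatures (e.g., tree, soil); y: (B,) measured pixel.
    B, R = M.shape
    pairs = [(i, j) for i in range(R - 1) for j in range(i + 1, R)]
    M_aug = np.column_stack([M] + [M[:, i] * M[:, j] for i, j in pairs])
    frac, _ = nnls(M_aug, y)
    physical = frac[:R].copy()
    for (i, j), v in zip(pairs, frac[R:]):        # fold virtual fractions back
        physical[i] += 0.5 * v
        physical[j] += 0.5 * v
    return physical / max(physical.sum(), 1e-12)  # interpretable cover fractions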
Article
A simple one-dimensional geometrical-optics model for spectral albedo of powdered surfaces, in particular of lunar regolith, is presented. As distinct from, e.g., the Kubelka–Munk formula, which deals with two effective parameters of a medium, the suggested model uses spectra of optical constants of the medium materials. Besides, our model is invertible, i.e., allows estimations of spectral absorption using albedo spectrum, ifa prioridata on the real part of refractive index and surface porosity are known. The model has been applied to interpret optical properties of the Moon. In particular, it has been shown that: (1) both color indices and depth of absorption bands for regolith-like surfaces depend on particle size, which should be taken into account when correlations between these optical characteristics and abundance of Fe and Ti in the lunar regolith are studied; (2) fine-grained reduced iron occurring in regolith particles affects band minima positions in reflectance spectra of lunar pyroxenes and, consequently, affects the result of determination of pyroxene types and Fe abundance by Adams' method.