Classification of Multisensor Images with Different Spatial Resolution

Authors:

Abstract

The paper is focused on the analysis of classification possibilities of multisensor data with different spatial resolutions using combined classifiers based on Bayes approach with equal prior probabilities and on minimum of the Mahalanobis distance. The task set up for the 2014 IEEE GRSS Data Fusion Contest was chosen as an application example. High resolution RGB image and lower resolution thermal infrared image from the same urban area were processed to perform classification of each higher resolution pixel. Development of a fast and straightforward procedure was targeted and combined classifiers are proposed for that, exploiting spectral features from each data set separately. It is shown that data fusion can be achieved using the proposed classifiers and improvement of classification quality can be obtained with respect to the cases where only one of the data sets is used. The best classification results were obtained using the combined Bayes- Type classifier that provided overall classification accuracy of about 95 % when the ground truth pixels from the high resolution RGB image were used both for design and testing.
ELEKTRONIKA IR ELEKTROTECHNIKA, ISSN 1392-1215, VOL. 21, NO. 5, 2015
Abstract—The paper is focused on the analysis of
classification possibilities of multisensor data with different
spatial resolutions using combined classifiers based on Bayes
approach with equal prior probabilities and on minimum of the
Mahalanobis distance. The task set up for the 2014 IEEE GRSS
Data Fusion Contest was chosen as an application example.
High resolution RGB image and lower resolution thermal
infrared image from the same urban area were processed to
perform classification of each higher resolution pixel.
Development of a fast and straightforward procedure was
targeted and combined classifiers are proposed for that,
exploiting spectral features from each data set separately. It is
shown that data fusion can be achieved using the proposed
classifiers and improvement of classification quality can be
obtained with respect to the cases where only one of the data
sets is used. The best classification results were obtained using
the combined Bayes-type classifier that provided overall
classification accuracy of about 95 % when the ground truth
pixels from the high resolution RGB image were used both for
design and testing.
Index Terms—Remote sensing, hyperspectral imaging,
image classification, data fusion.
I. INTRODUCTION
Remote sensing from airplanes and satellites has become a
widely used tool for solving tasks in management of natural
resources, urban planning, precision agriculture and other
areas. Different types of sensors are used for this purpose, including multispectral, hyperspectral, LiDAR, and SAR, acquiring different kinds of data. It is common practice to employ
several sensors at once to obtain complementary information
from the same area. For example, LiDAR and multispectral
data from the same forest area are often collected to obtain
information for its inventory. LiDAR data can be processed
to obtain a height model of the stand, while spectral data can
be used to detect species or assess the health of trees, etc. Quite often in this case, data from two different sources should be used in a combined way to solve a specific task, i.e., data fusion should be performed during processing. Data from optical sensors are in general acquired in the form of three-dimensional images where each pixel is related to its
spatial coordinates calculated from simultaneously collected
GPS information. Pixel size in this case depends on the
distance to the target, viewing angle and number of sensing
cells in the sensor. LiDAR and SAR data are usually pre-
processed to obtain images characterizing geographical areas
under study and are also registered to geographical
coordinates. Pixel size in this case can be chosen in the pre-
processing procedure but it is limited by the amount of
collected data within the spatial unit.
When data from sensors are obtained with different spatial
resolution, their fusion becomes a challenging task. It is also
crucial to perform precise registration of acquired images to
geographical coordinates. Otherwise, pixels of these images cannot be properly related to the observed physical objects, and their combined use for the analysis of these objects cannot be performed correctly.
II. STATE OF THE ART
One of the major tasks in remote sensing is classification
of geographical areas or distinct objects represented by
acquired images. There are multiple classification
approaches developed for that purpose [1]. If the data representing each class can be interpreted as a sample realisation from a multidimensional universe with a Gaussian distribution, the Bayes classification approach is applicable and has shown good results [2]. Therefore, it is purposeful to consider a Bayesian approach to the classification of multiresolution data obtained from different sensors.
Classification of multisensor data with different spatial
resolutions is usually performed by combining outputs of
separate classifiers each dealing with data from one sensor,
or designing a single classifier operating with a fused image
[3]. The first approach is simpler in general and there are
multiple studies following it [4]–[10]. Approaches to the combination of multiple individual classifiers, including Bayesian ones, were considered by Xu et al. [4] with an application to handwriting recognition. An averaged Bayes classifier was proposed there to combine the results of different classifiers applied to the same data. An enhanced combination of independent Bayesian classifiers was elaborated in [6], based on the original definition in [5]. As a result, an iterative procedure was proposed using steps similar to the expectation-maximization algorithm. Other sophisticated approaches include the use of neural networks [7], Dempster-Shafer theory [8], and fuzzy sets [10]. The authors of [3] and [11] propose forming adequate mathematical models for the description of SAR images on the basis of probability
integral transforms of natural random values. However, the application of classification algorithms obtained in this way has not provided excellent results with the chosen training and test sets.
Fig. 1. Illustration of processed multisensor data: (a) part of the RGB image; (b) the same part of the band 10 image acquired by the thermal infrared sensor, with ground truth for 7 classes.
Storvik et al. [12] proposed a Bayesian approach for the case when pixels of lower resolution images precisely overlap those from the higher resolution image and the pixel dimensions at the lower resolutions are integer multiples of the pixel dimensions at the higher (reference) resolution. However, this situation can be achieved only with a single multi-modal sensor.
III. TASK AND GOALS
A more common application case where the data were
acquired by two different sensors working in different
spectral ranges was considered within the 2014 IEEE GRSS
Data Fusion Contest (DFC) [13]. High resolution (~0.2 m ×
0.2 m pixel) RGB images and lower resolution (~1 m × 1 m
pixel) thermal infrared (TI) hyperspectral data in 84 bands
with wavelengths from 7.8 μm to 11.5 μm acquired over an urban area were provided for this contest. Classification of higher resolution pixels into 7 land cover classes was targeted, and ground truth for all classes was provided to
facilitate that. In this paper we propose a classifier exploiting
DFC data from both sensors in its classification rule, present
and analyse the results obtained using different classifier
designs. The following goals were set up:
– to design a classifier distinguishing all categories of higher resolution pixels with high precision, i.e., a low error rate;
– to provide a general classification approach not exploiting specific features of the analysed scene, i.e., independent of image specifics;
– to propose a solution requiring low computational resources, i.e., to facilitate fast processing of large data sets.
Only “subset” images of the data set “grss_dfc_2014” presented in the initial stage of the DFC were processed, including all ground truth regions (see Fig. 1). The ground truth regions were presented in a separate image with the same spatial resolution as the RGB image and covered ~17 % of the whole area of the “subset” image. To prepare ground truth for the TI image, only pixels fully included in the defined ground truth regions were used, comprising ~12 % of the whole area of the “subset” TI image.
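As an illustration of this ground truth preparation step, the following sketch derives a TI-resolution label map from the high-resolution ground truth mask, keeping only those TI pixels whose footprint is fully covered by a single labelled class. It assumes co-registered, axis-aligned images and an integer resolution ratio (taken here as 5, corresponding to the ~0.2 m and ~1 m pixels); the function and array names are illustrative, not taken from the contest materials.

```python
import numpy as np

def downscale_ground_truth(gt_hi, factor=5, unlabeled=0):
    """Build a TI-resolution ground truth map from the high-resolution one.

    A TI pixel receives a class label only if every high-resolution ground
    truth pixel inside its footprint carries that same (non-zero) label;
    otherwise it stays unlabeled, mirroring the "fully included" criterion.
    """
    rows_lo, cols_lo = gt_hi.shape[0] // factor, gt_hi.shape[1] // factor
    gt_lo = np.full((rows_lo, cols_lo), unlabeled, dtype=gt_hi.dtype)
    for r in range(rows_lo):
        for c in range(cols_lo):
            block = gt_hi[r * factor:(r + 1) * factor,
                          c * factor:(c + 1) * factor]
            labels = np.unique(block)
            if labels.size == 1 and labels[0] != unlabeled:
                gt_lo[r, c] = labels[0]
    return gt_lo
```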
IV. DESIGN OF SEPARATE CLASSIFIERS
To provide a solution to the defined task, a classifier obtained using an appropriate data fusion method should be designed. Our chosen approach presumes two stages: in the first stage, two separate classifiers are designed, each using only one of the available images; in the second stage, these two classifiers are “mated” into a combined one, expecting an increase in precision.
The vector of mean values $\boldsymbol{\mu}_k = (\mu_{k1}, \mu_{k2}, \mu_{k3})^T$ for each class $k$, $k = 1, \dots, 7$, was calculated first from the ground truth pixels of this class within the RGB image. After that, covariance matrices for each class were calculated:

$$\boldsymbol{\Sigma}_k = \frac{1}{c_k - 1} \sum_{i=1}^{c_k} (\mathbf{x}_i - \boldsymbol{\mu}_k)(\mathbf{x}_i - \boldsymbol{\mu}_k)^T, \qquad (1)$$

where $c_k$ is the number of pixels in the design set of the class $k$ and $\mathbf{x}_i$ is the column vector of the RGB intensity values of pixel $i$.

Assuming that the intensity distribution of pixels of the class $k$ in the RGB bands is represented by the random vector $\mathbf{X}_k = (X_{k1}, X_{k2}, X_{k3})^T$, (1) actually gives us a point estimate of the covariance matrix of this random vector.
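A minimal sketch of this estimation step is given below. It assumes the RGB image is stored as a (rows, columns, bands) array and the ground truth as a label map of the same spatial size with 0 marking unlabeled pixels; the names are ours and only illustrate equation (1).

```python
import numpy as np

def class_statistics(image, gt, classes=range(1, 8)):
    """Per-class mean vectors and covariance matrices, cf. (1).

    image : (rows, cols, bands) array of intensity vectors
    gt    : (rows, cols) ground truth label map (0 = unlabeled)
    Returns two dicts: {k: mu_k} and {k: Sigma_k}.
    """
    means, covariances = {}, {}
    for k in classes:
        pixels = image[gt == k].astype(float)          # design set of class k, (c_k, bands)
        means[k] = pixels.mean(axis=0)                  # mu_k
        covariances[k] = np.cov(pixels, rowvar=False)   # divisor c_k - 1, as in (1)
    return means, covariances
```

The same function can be reused for the TI image by supplying the selected bands and the TI-resolution ground truth map.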
To prepare a classifier for the TI image, 8 spectral bands
out of 84 were chosen, featuring lower image noise and
taken from different parts of spectral range of the TI sensor.
The following octet of TI bands was formed: (4, 14, 26, 36,
47, 57, 69, 78). After that, the same procedure as for the
RGB image was applied i.e. a vector of mean values
1 2 8
, ,..., T
k k k k
μ
for each class of pixels was
obtained and the covariance matrix calculated as follows
1
1,
1
k
cT
k k k
k
c
S y μ y μ
(2)
where
k
c
is the number of pixels in the design set of the
class k, formed for the TI image,
is the column vector of
the TI intensity values of pixel
taken from the chosen
bands.
With certain credibility we may assume that the intensity distribution of pixels of the class $k$ in the RGB image is represented by the random vector $\mathbf{X}_k$ with a Gaussian distribution. The authors of [3] and [11] also consider this
distribution as an adequate model for multidimensional
optical images. In this case, the probability density function for this vector can be approximately expressed by

$$f_k(\mathbf{x}) = (2\pi)^{-3/2} \left|\boldsymbol{\Sigma}_k\right|^{-1/2} \exp\!\left(-\tfrac{1}{2} M_k(\mathbf{x})\right), \qquad (3)$$

where $M_k(\mathbf{x})$ is the square of the Mahalanobis distance from the vector $\mathbf{x} = (x_1, x_2, x_3)^T$ to the vector of mean values $\boldsymbol{\mu}_k$.
Accepting the hypothesis that the intensity distribution of pixels of the class $k$ in the chosen 8 bands of the TI image is represented by the random vector $\mathbf{Y}_k$ with a Gaussian distribution, the probability density function for this vector can be approximately expressed by

$$g_k(\mathbf{y}) = (2\pi)^{-8/2} \left|\mathbf{S}_k\right|^{-1/2} \exp\!\left(-\tfrac{1}{2} M_k(\mathbf{y})\right), \qquad (4)$$

where $M_k(\mathbf{y})$ is the square of the Mahalanobis distance from the vector $\mathbf{y} = (y_1, y_2, \dots, y_8)^T$ to the vector of mean values $\boldsymbol{\nu}_k$.
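Since (3) and (4) differ only in dimensionality and in the statistics used, a single pair of helper functions can serve both models. The sketch below computes the squared Mahalanobis distance and the corresponding log-density, omitting the constant (2π) factor, which cancels in the comparisons that follow; it is an illustrative implementation, not code from the study.

```python
import numpy as np

def mahalanobis_sq(v, mean, cov):
    """Square of the Mahalanobis distance M_k(.) from v to the class mean."""
    d = v - mean
    return float(d @ np.linalg.solve(cov, d))

def gaussian_log_density(v, mean, cov):
    """ln f_k / ln g_k from (3)/(4), up to the common (2*pi)^(-d/2) term."""
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + mahalanobis_sq(v, mean, cov))
```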
If the number of pixels that do not belong to any of the 7 classes is negligibly small, we can define two classifiers which ignore these pixels.

We will denote by $W$ a classifier which classifies each pixel in the RGB image as a pixel of class $k$ if and only if the intensity vector $\mathbf{x}$ meets the following condition for each $j$:

$$f_k(\mathbf{x}) / f_j(\mathbf{x}) \geq 1. \qquad (5)$$

In an analogous way we will define a classifier $V$ which classifies each pixel in the TI image as a pixel of class $k$ if and only if its intensity vector $\mathbf{y}$ meets the following condition for each $j$:

$$g_k(\mathbf{y}) / g_j(\mathbf{y}) \geq 1. \qquad (6)$$
Applying the logarithmic operation to (5) and (6), we can rewrite the classification rules in the following equivalent form: classifier $W$ classifies an RGB image pixel with intensity vector $\mathbf{x}$ as a pixel of class $k$ if and only if

$$M_j(\mathbf{x}) - M_k(\mathbf{x}) \geq \ln\frac{\left|\boldsymbol{\Sigma}_k\right|}{\left|\boldsymbol{\Sigma}_j\right|} \qquad (7)$$

for each $j$. Similarly, classifier $V$ classifies a TI image pixel with intensity vector $\mathbf{y}$ as a pixel of class $k$ if and only if

$$M_j(\mathbf{y}) - M_k(\mathbf{y}) \geq \ln\frac{\left|\mathbf{S}_k\right|}{\left|\mathbf{S}_j\right|} \qquad (8)$$

for each $j$.
Apparently, classifiers $W$ and $V$ are qualified as Bayes-type classifiers. Our informative basis here allows us to define two other separate classifiers as well. Let us denote by $W'$ a classifier which classifies an RGB image pixel with intensity vector $\mathbf{x}$ as a pixel of class $k$ if and only if

$$M_k(\mathbf{x}) \leq M_j(\mathbf{x}) \qquad (9)$$

for each $j$. Similarly, classifier $V'$ classifies a TI image pixel with intensity vector $\mathbf{y}$ as a pixel of class $k$ if and only if

$$M_k(\mathbf{y}) \leq M_j(\mathbf{y}) \qquad (10)$$

for each $j$.
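All four separate classifiers reduce to the same per-pixel search over the classes; a sketch is given below, where the bayes flag switches between the Bayes-type rules (7)/(8) and the minimum-distance rules (9)/(10). The same function serves $W$ and $W'$ on RGB statistics and $V$ and $V'$ on TI statistics; the interface is assumed, not prescribed by the paper.

```python
import numpy as np

def classify_pixel(v, means, covariances, bayes=True):
    """Classify one intensity vector against per-class (mean, covariance) pairs.

    bayes=True  maximizes -M_k(v) - ln|cov_k|, equivalent to rules (7)/(8);
    bayes=False minimizes  M_k(v), i.e. rules (9)/(10).
    """
    best_class, best_score = None, -np.inf
    for k, mean in means.items():
        cov = covariances[k]
        d = v - mean
        m = float(d @ np.linalg.solve(cov, d))               # M_k(v)
        score = -m - (np.linalg.slogdet(cov)[1] if bayes else 0.0)
        if score > best_score:
            best_class, best_score = k, score
    return best_class
```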
V. DESIGN OF THE COMBINED CLASSIFIER
Combination of the separate classifiers is performed on the basis of a procedure relating each pixel from the RGB image with a pixel from the TI image such that the bigger pixel from the TI image includes the larger part of the smaller RGB image pixel. We will call such a TI image pixel the associated pixel. As the boundary pixels of the ground truth polygons may have associated pixels only partly corresponding to the defined ground truth areas, they are eliminated from these polygons using morphological erosion and are not used for the design of the combined classifier.
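Assuming co-registered, axis-aligned pixel grids with the approximate pixel sizes quoted above, the associated pixel can be found from the position of the RGB pixel centre, since the TI pixel containing that centre also covers the larger part of the RGB pixel. The sketch below illustrates this; the function name and arguments are ours.

```python
def associated_ti_pixel(row, col, rgb_pixel_size=0.2, ti_pixel_size=1.0):
    """Return the (row, col) of the TI pixel associated with an RGB pixel.

    With aligned grids, the TI pixel whose footprint contains the centre of
    the RGB pixel is the one covering the larger part of that smaller pixel.
    """
    scale = rgb_pixel_size / ti_pixel_size
    return int((row + 0.5) * scale), int((col + 0.5) * scale)
```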
Let us define the classification rule for the RGB image pixels. We will denote by $U$ a classifier of RGB image pixels which classifies an RGB image pixel with intensity vector $\mathbf{x}$ as a pixel of class $k$ if and only if

$$\bigl[M_j(\mathbf{x}) - M_k(\mathbf{x})\bigr] + \bigl[M_j(\mathbf{y}^a) - M_k(\mathbf{y}^a)\bigr] \geq \ln\frac{\left|\boldsymbol{\Sigma}_k\right|}{\left|\boldsymbol{\Sigma}_j\right|} + \ln\frac{\left|\mathbf{S}_k\right|}{\left|\mathbf{S}_j\right|} \qquad (11)$$

for each $j$, where $\mathbf{y}^a$ is the vector of the TI intensity values of the pixel associated with the given RGB pixel. Classifier $U'$ is obtained by replacing rule (11) with rule (12):

$$\bigl[M_j(\mathbf{x}) - M_k(\mathbf{x})\bigr] + \bigl[M_j(\mathbf{y}^a) - M_k(\mathbf{y}^a)\bigr] \geq 0 \qquad (12)$$

for each $j$.
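A sketch of the combined rules follows; maximizing the summed per-image scores is equivalent to conditions (11) and (12). The rgb_stats and ti_stats arguments are assumed to be the (means, covariances) pairs estimated earlier for the two images, and y is the TI vector of the associated pixel; the interface is illustrative.

```python
import numpy as np

def classify_combined(x, y, rgb_stats, ti_stats, bayes=True):
    """Combined classifier U (bayes=True) or U' (bayes=False), cf. (11)/(12)."""
    def score(v, mean, cov):
        d = v - mean
        m = float(d @ np.linalg.solve(cov, d))                # M_k(.)
        return -m - (np.linalg.slogdet(cov)[1] if bayes else 0.0)

    rgb_means, rgb_covs = rgb_stats
    ti_means, ti_covs = ti_stats
    scores = {k: score(x, rgb_means[k], rgb_covs[k]) +
                 score(y, ti_means[k], ti_covs[k])
              for k in rgb_means}
    return max(scores, key=scores.get)
```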
VI. RESULTS AND CONCLUSIONS
Testing of the classifiers $W$, $W'$, $V$, $V'$, $U$, and $U'$ on the basis of the design sets provided the results presented in Table I. To obtain comparable results, the morphological erosion introduced for the design of the combined classifier was applied for the design of all classifiers. The kappa coefficient is based on Cohen's kappa measure and characterizes the classifier's quality in comparison with an ideal classifier. The classification results for individual pixels obtained using the best classifier $U$ are visualized in Fig. 2.
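For reference, the quality measures reported in Table I can be computed from the true and predicted labels of the test pixels as sketched below; these are the standard definitions of overall accuracy and Cohen's kappa, not code used in the study.

```python
import numpy as np

def overall_accuracy_and_kappa(true_labels, predicted_labels):
    """Overall accuracy and Cohen's kappa from two equal-length label arrays."""
    t = np.asarray(true_labels)
    p = np.asarray(predicted_labels)
    n = t.size
    observed = np.mean(t == p)                     # overall accuracy p_o
    # Chance agreement p_e from the marginal class frequencies.
    classes = np.union1d(t, p)
    expected = sum((np.sum(t == c) / n) * (np.sum(p == c) / n) for c in classes)
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```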
As seen from the table, the Bayes-type classifiers provide better overall accuracy than their counterparts based on the minimum Mahalanobis distance principle in all cases. In addition, the combined classifier provides better quality measures than the individual ones. Therefore, we can conclude that the proposed approach to combining separate classifiers is fruitful.
In Fig. 2 one may notice that the obtained pixel classification errors are mainly related to confusion between the vegetation and tree classes, as well as to erroneous classification of roads or concrete roofs as grey roofs. Such errors were somewhat expected due to the spectral similarity of these classes, and more sophisticated classification approaches would be needed to distinguish between them.
TABLE I. CLASSIFICATION RESULTS.
                          W      W'     V      V'     U      U'
User's accuracy, %
  road                    96     53     92     91     96     87
  trees                   90     94     38     31     89     94
  red roof                96     97     48     31     99     99
  grey roof               83     94     57     77     91     96
  concrete roof           97     96     19     59     97     96
  vegetation              91     84     57     34     93     86
  bare soil               94     90     54     50     97     97
Overall accuracy, %       93     83     55     54     95     91
Kappa                     0.91   0.79   0.45   0.44   0.94   0.89
Fig. 2. Classification results of individual ground truth pixels obtained using the combined Bayes-type classifier U.
Analysis of the classification results leads to the conclusion that the assumption about the Gaussian distribution of the intensity vectors of pixels within a land cover category is a sufficiently adequate model of the real processed data. Notwithstanding the results already achieved, classification accuracy can probably be improved by forming subclasses within the categories where the Bayes-type classifier provides relatively worse results. This could be planned as future work. Another possible way to improve the classification results could be a more thorough investigation of the informativeness of the spectral bands in the TI image. However, in this work we focused on the development of the data fusion principle for classification, and the mentioned improvement possibilities were left out of its scope.
The authors of paper [12] obtained good classification results by designing classifiers based on a more sophisticated mathematical model, namely Markov random field theory, describing the distributions of the intensity vectors of pixels. However, it cannot be substantiated that their approach should be followed in our particular application case. Their assumption about the pixel sizes and locations in images from different sensors holds only in a specific case which cannot be related to our task. In addition, the Cohen's kappa measure achieved by these authors for real test data is not higher than 0.9, and it was obtained for a different system of pixel categories. The combined classifier proposed in this paper seems much simpler and more straightforward to implement.
ACKNOWLEDGMENT
The authors would like to thank Telops Inc. (Quebec,
Canada) for acquiring and providing the data used in this
study, the IEEE GRSS Image Analysis and Data Fusion
Technical Committee and Dr. Michal Shimoni (Signal and
Image Centre, Royal Military Academy, Belgium) for
organizing the 2014 Data Fusion Contest, the Centre de
Recherche Public Gabriel Lippmann (CRPGL, Luxembourg)
and Dr. Martin Schlerf (CRPGL) for their contribution of the
Hyper-Cam LWIR sensor, and Dr. Michaela De Martino (University of Genoa, Italy) for her contribution to data preparation. The authors would also like to thank the reviewers for their valuable suggestions as well as Madis Menke for his assistance. This research was performed within the project No. 2013/0031/2DP/2.1.1.1.0/13/APIA/VIAA/010 funded by the European Regional Development Fund.
REFERENCES
[1] J. A. Richards, X. Jia, Remote Sensing Digital Image Analysis. An
Introduction. Springer, 439 p., 2006.
[2] R. Dinuls, G. Erins, A. Lorencs, I. Mednieks, J. Sinica-Sinavskis,
“Tree species identification in mixed Baltic forest using LiDAR and
multispectral data”, IEEE Selected Topics Appl. Earth Observ.
Remote Sensing, vol. 5, no. 2, pp. 594–603, 2012. [Online].
Available: http://dx.doi.org/10.1109/JSTARS.2012.2196978
[3] A. Voisin, V. A. Krylov, G. Moser, S. B. Serpico, J. Zerubia,
“Supervised classification of multisensor and multiresolution remote
sensing images with a hierarchical copula-based approach”, IEEE
Trans. Geosci. Remote Sens., vol. 52, no. 6, pp. 3346–3358, 2014.
[Online]. Available: http://dx.doi.org/10.1109/TGRS.2013.2272581
[4] L. Xu, A. Krzyzak, C. Y. Suen, “Methods of combining multiple
classifiers and their applications to handwriting recognition”, IEEE
Trans. Syst., Man, Cybern., vol. 22, no. 3, pp. 418–435, 1992.
[Online]. Available: http://dx.doi.org/10.1109/21.155943
[5] Z. Ghahramani, H. C. Kim, “Bayesian classifier combination”,
Gatsby Computational Neuroscience Unit Technical Report No.
GCNU-T., London, UK, 9 p., 2003.
[6] E. Simpson, S. Roberts, I. Psorakis, A. Smith, “Dynamic Bayesian
combination of multiple imperfect classifiers”, Decision Making and
Imperfection, Berlin: Springer, pp. 1–38, 2013. [Online]. Available:
http://dx.doi.org/10.1007/978-3-642-36406-8_1
[7] J. A. Benediktsson, P. H. Swain, O. K. Ersoy, “Neural network
approaches versus statistical methods in classification of multisource
remote sensing data”, IEEE Trans. Geosci. Remote Sens., vol. 28,
no. 4, pp. 540–552, 1990. [Online]. Available: http://dx.doi.org/
10.1109/TGRS.1990.572944
[8] S. Foucher, M. Germain, J.-M. Boucher, G. B. Benie, “Multisource
classification using ICM and Dempster–Shafer theory”, IEEE Trans.
Instrum. Meas., vol. 51, no. 2, pp. 277–281, 2002. [Online].
Available: http://dx.doi.org/10.1109/19.997824
[9] G. J. Briem, J. A. Benediktsson, J. R. Sveinsson, “Multiple classifiers
applied to multisource remote sensing data”, IEEE Trans. Geosci.
Remote Sens., vol. 40, no. 10, pp. 2291–2299, 2002. [Online].
Available: http://dx.doi.org/10.1109/TGRS.2002.802476
[10] M. Fauvel, J. Chanussot, J. A. Benediktsson, “Decision fusion for the
classification of urban remote sensing images”, IEEE Trans. Geosci.
Remote Sens., vol. 44, no. 10, pp. 2828–2838, 2006. [Online].
Available: http://dx.doi.org/10.1109/TGRS.2006.876708
[11] B. Storvik, G. Storvik, R. Fjortoft, “On the combination of
multisensor data using meta-Gaussian distributions”, IEEE Trans.
Geosci. Remote Sens., vol. 47, no. 7, pp. 2372–2379, 2009. [Online].
Available: http://dx.doi.org/10.1109/TGRS.2009.2012699
[12] G. Storvik, R. Fjortoft, A. H. S. Solberg, “A Bayesian approach to
classification of multiresolution remote sensing data”, IEEE Trans.
Geosci. Remote Sens., vol. 43, no. 3, pp. 539–547, 2005. [Online].
Available: http://dx.doi.org/10.1109/TGRS.2004.841395
[13] 2014 IEEE GRSS Data Fusion Contest. [Online]. Available:
http://www.grss-ieee.org/community/technical-committees/data-fusion/
The classification of very high resolution remote sensing images from urban areas is addressed by considering the fusion of multiple classifiers which provide redundant or complementary results. The proposed fusion approach is in two steps. In a first step, data are processed by each classifier separately, and the algorithms provide for each pixel membership degrees for the considered classes. Then, in a second step, a fuzzy decision rule is used to aggregate the results provided by the algorithms according to the classifiers' capabilities. In this paper, a general framework for combining information from several individual classifiers in multiclass classification is proposed. It is based on the definition of two measures of accuracy. The first one is a pointwise measure which estimates for each pixel the reliability of the information provided by each classifier. By modeling the output of a classifier as a fuzzy set, this pointwise reliability is defined as the degree of uncertainty of the fuzzy set. The second measure estimates the global accuracy of each classifier. It is defined a priori by the user. Finally, the results are aggregated with an adaptive fuzzy operator ruled by these two accuracy measures. The method is tested and validated with two classifiers on IKONOS images from urban areas. The proposed method improves the classification results when compared with the separate use of the different classifiers. The approach is also compared with several other fuzzy fusion schemes