Combining Thermal and Visual Imaging in Spacecraft
Proximity Operations
Giovanni B. Palmerini
DIAEE – Astronautics, Electrical Engineering and Energetics Dept.
Sapienza Università di Roma
Rome, Italy
giovanni.palmerini@uniroma1.it
Abstract—Advances in multiple spacecraft operations require
precise determination of the relative kinematic state of the
platforms involved. Such knowledge – above all in case of non-
cooperative approaches - can be usefully attained by means of
imaging. However, the extremely varying light conditions in the
orbital environment do not allow for extended operations in the
visible spectrum. The paper investigates the possible coupling of
thermal imagery with visible one in order to have a continued
relative tracking even during eclipse. The focus is on simple
camera devices, suitable for mounting onboard microsatellites,
and on the combined (visual and infrared) image processing and
data fusion helped by the purposeful modeling of the spacecraft
dynamics and bus signature.
Keywords— microsatellite onboard camera, formation flying
navigation, in-space thermal imaging, infrared/visible data fusion
I. INTRODUCTION
Advances in space operations call for a number of
maneuvers where the accurate knowledge of the relative
kinematic state (including position and velocity as well as
attitude and attitude rates) is required onboard to drive
autonomous guidance, navigation and control (GNC) loops.
Examples of these maneuvers include rendezvous and docking between spacecraft or with the International Space Station (ISS), in situ servicing operations, and the grasping of space debris with the help of robotic manipulators. The
required accuracy depends on the phase of the maneuver, and
can be defined – in the frame of proximity operations – as ranging from tens of meters down to a centimeter or a fraction of a centimeter.
Specifically, in case of intermediate phases of rendezvous [1]
or close formation flying [2], the distance between the involved
spacecraft is in the order of hundreds of meters or less and the
requested accuracy is better than one meter. A decrease of two to three orders of magnitude in both figures should be considered in final rendezvous phases, as well as in grasping operations by means of robotic manipulators.
These accuracies can be difficult and expensive to obtain by
means of active sensors, as these present bounded fields of view and ranges and must coexist with the power limitations typical of space applications. At the same time, common radio frequency ranging, with the transmitting source and the receiver located onboard the two platforms, is unsuitable for non-cooperative maneuvers, including debris capture but also operations with older or unprepared spacecraft. Instead, visual
imaging is clearly a suitable option, and the implementation of
closed-loop systems has been studied in depth and then applied
in cooperative rendez-vous (see [3] for a proposed real time
solution process in the ISS case). On the other hand, studies are
currently in progress to better define the characteristics of
visual navigation in non-cooperative formation flying [4].
Visual imaging is of course limited to the fraction of the
orbit exposed to sunlight, which ranges between 75% and 80% of
the orbital period for low Earth circular orbits, i.e. the ones
where the proximity condition is meaningful and expected.
Moreover, the actual usability of the technique can be further reduced by blinding and outages. Aiding by other
sensors, even with a poorer accuracy, is needed to enlarge this
availability window, in order to attain a continuous tracking of
the relative position. Furthermore, this aiding could provide a
coarse but useful estimate in direction and distance for the
visual imaging system to capture the target as soon as the
spacecraft comes out from the eclipse.
Because, in the case of interest, the two platforms involved move along similar orbits, with almost equal light/eclipse cycles, and because of the latency proper to thermal phenomena, a significant infrared (IR) signature is guaranteed even if one of the bodies is a spent satellite or, more generally, debris, i.e. in the non-cooperative case. Therefore,
thermal imaging [5] could provide the required aiding. The
typically poorer accuracy of thermal camera information is not expected to improve the quality of the results, but at least allows for continuous observation and some ranging data. If a
thermal model of the target were available, or could be roughly inferred from a sequence of thermal and visual images, the measurement scenario would change, and even a continuous estimate of the complete kinematic state, including attitude and attitude rate, could be obtained.
This paper is intended to discuss these topics with specific
reference to simple imaging hardware, fitting the requirements
in terms of volume, mass and power that are typical of
microsatellites, i.e. spacecraft in the order of tens of kilograms,
where these sensors are not considered to be the main payload.
The goal is to evaluate possible advantages of a combination of
measurement techniques in visible and infrared spectra. Short
notes on the issues related to the thermographic camera
[Section II] will be provided first, inspired by
recently available low cost, uncooled sensors. Similarly, the
sensing part in the visible band [Section III] will be briefly
introduced. It should be stressed that the reference to hardware is intended both to account for typical microsatellite limitations and to indicate a class of equipment that could be easily adopted in
lab experiments. The processing of the images in both bands,
discussed together with the relevant hardware, would differ
depending upon the previous availability of a reasonable model
for the target or the need to build it on-the-fly by means of the
observations. In this paper, the application to spacecraft
formation flying with prior knowledge of the size of the
target satellite will be the main focus. Details of the
mathematical model assumed for the dynamics are included,
and the observation conditions are depicted [Section IV]. The
fusion of the information generated by the two imaging
systems with different accuracy will be performed on the basis
of Kalman filtering [Sections V and VI]. Finally, the analysis of the
results obtained by numerical simulations will indicate if the
performance can be deemed of interest to enable close
formation flying or proximity operations by microsatellites.
II. THERMAL IMAGING IN SPACE
A. Hardware
The thermographic device is assumed to be based on
standard microbolometers, recently made available for
consumer applications, operating in the 8 to 14 μm band with a
resolution of 160 x 220 pixels. To simplify the design and
reduce the constraints on the accommodation, the detector can
be uncooled, a reasonable choice given the typical range of
temperature of satellites and the capability to properly limit –
by passive means - the thermal excursion. Also the lens is
relatively simple, with a limited focal length.
B. Image processing
With respect to the processing, the first constraint is given
by the refresh rate of the camera, which is assumed capable of delivering readings at a 9 Hz rate (standard performance). Regarding the interpretation of the image content, the two cases of availability and unavailability of a thermal model of the target should be discussed separately.
In case of a model already available, the idea is to process
first a binary version of the thermal image in order to define the
size mask of the target spacecraft and to evaluate its distance
from the imaging satellite. Such a binary image could be
obtained by a quite rough threshold since, within the considered scenario, the background is at 4 K, except in the critical condition where the Sun is at the local zenith, behind the imager, and the target, at nadir, has the Earth in the background. Even in that case, however, the zenith side of the target should be hotter (by at least 10 K) than the Earth side (about 290 K), while the resolution of thermal cameras is remarkably better than 1 °C. Then, strictly depending on the specific characteristics of the target,
the Hough transform can be applied to images generated with a
sequence of different thresholds to identify geometric features
with similar temperature, as edges in conductive materials.
Basic knowledge in spacecraft thermal control systems [6]
greatly helps in obtaining a sketch of the structure behavior and
of the temperature values attained. This approximate
knowledge, which of course should take into account the
spacecraft’s materials, remains useful even given the poorer detail typical of thermal imagery. Indeed, the slow
variation of temperature and the difference with the cold 4 K
background are factors that strongly increase the observability.
Relative position and attitude can then be computed. Note that the benchmark images used to identify possible features consist of a sequence of thermal pictures of the spacecraft at different orientations, stored onboard [a specific approach to limit the size of the benchmark data, storing a limited number of rotation angles and interpolating the intermediate ones, will be considered]. Differences between the size/position of features identified in successive images provide the rate information needed to complete the state estimate.
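As an illustration of this processing chain (coarse thresholding to obtain the target mask, then Hough transforms applied at a sequence of temperature thresholds), a minimal sketch is given below. It assumes an OpenCV/NumPy environment and a single radiometric frame expressed in kelvin; all threshold values and helper names are illustrative, not taken from the hardware described above.

```python
import numpy as np
import cv2

def target_mask(ir_frame_kelvin, margin_kelvin=20.0):
    """Coarse binary mask of the target against the cold sky background.
    ir_frame_kelvin: 2D array of brightness temperatures (K), e.g. 160x220.
    margin_kelvin: margin above the coldest pixel used as threshold
    (illustrative value, not from the paper)."""
    threshold = ir_frame_kelvin.min() + margin_kelvin
    return (ir_frame_kelvin > threshold).astype(np.uint8) * 255

def hough_segments(ir_frame_kelvin, thresholds_kelvin):
    """Hough transform applied to binary images obtained at a sequence of
    temperature thresholds; returns line segments that may correspond to
    edges of conductive structural elements."""
    segments = []
    for t in thresholds_kelvin:
        binary = (ir_frame_kelvin > t).astype(np.uint8) * 255
        edges = cv2.Canny(binary, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180.0,
                                threshold=20, minLineLength=10, maxLineGap=3)
        if lines is not None:
            segments.extend(lines[:, 0, :].tolist())
    return segments

# Synthetic example: a ~300 K square target over a 4 K background.
frame = np.full((160, 220), 4.0)
frame[60:100, 90:130] = 300.0
mask = target_mask(frame)
apparent_side_px = np.sqrt(mask.astype(bool).sum())  # side length in pixels
features = hough_segments(frame, thresholds_kelvin=[150.0, 250.0, 290.0])
```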
On the other hand, the case in which no model is available
approaches a blind start. Only the first operation, i.e. the identification of the coarse mask of the target, is attempted, and by differencing successive images a (non-exhaustive) estimate of the range rate can be obtained. More elaborate operations, aimed at computing the relative state, are delayed until a number of visible images – easier to interpret – become available and a rough thermal model can be inferred by operators. Indeed, a completely
autonomous relative navigation procedure with respect to a
previously unknown orbiting object seems unlikely.
III. IMAGING IN THE VISIBLE BAND
A. Hardware
The visual imaging hardware is represented by
microcamera class devices, which already proved successful
onboard several microsatellites. As in the case of the thermal camera, the focal length is short in order for the camera to be easily accommodated. The available resolution is instead markedly better than that of thermographic cameras, easily attaining 640x480 (320x240 can also be considered, in agreement with previous works [4], [7], [8]).
B. Image processing
At least two techniques to identify the target in the captured
images are available and have proved successful in similar studies: the Scale Invariant Feature Transform (SIFT) [7] and the Hough transform [3]. The larger computational burden associated with SIFT, and a preliminary preference for using the same algorithm in both bands, suggested adopting the Hough transform. In fact, SIFT – whose descriptors at the key points are based on the intensity gradient – has been deemed not robust enough with respect to the poorer quality of the thermal images.
Figure 1. A simple sketch of a side of a cubic microsatellite (side length: 1 m). The inner part, largely covered with solar cells, will have very different optical and thermal properties with respect to the outer part, where the aluminum structure is exposed. Additional components, such as a patch antenna (pink) or a mechanism (grey), also appear.
In some way, the issue of a previously known model becomes less relevant thanks to the increased resolution. It is assumed that the identification of the elements will be easier in visible than in infrared captures, and that smaller details (for example, the patch antenna in Fig. 1), once recognized in a sequence of images, can even help in attitude determination.
IV. APPLICATION TO PROXIMITY FORMATION FLYING
The process to determine the relative kinematic state
between two satellites, one of them being the observer (deputy
or chaser) and the other labeled as chief or target, conveniently
builds upon the available knowledge about their relative
dynamics. The behavior of two spacecraft flying in close
formation is well represented by the linear model known as
Euler-Hill (EH) or Clohessy-Wiltshire equations [9]. With the
assumption that the chief flies on a circular orbit (Fig. 2), and labeling as x and y the radial and tangential (along-track) in-plane coordinates of the chaser in a local vertical, local horizontal frame centered on the chief, the in-plane relative dynamics, whose solution allows for closed relative trajectories, is given by:
$$
\begin{aligned}
x(t) &= \left(4-3\cos\omega t\right)x_{0}+\frac{\sin\omega t}{\omega}\,\dot x_{0}+\frac{2}{\omega}\left(1-\cos\omega t\right)\dot y_{0}\\
y(t) &= 6\left(\sin\omega t-\omega t\right)x_{0}+y_{0}-\frac{2}{\omega}\left(1-\cos\omega t\right)\dot x_{0}+\frac{4\sin\omega t-3\omega t}{\omega}\,\dot y_{0}
\end{aligned}
\qquad (1)
$$
where $\omega=\sqrt{\mu/r^{3}}$ is the mean motion along the orbit of radius $r$, $\mu = 398600\ \mathrm{km^{3}\,s^{-2}}$ is the gravitational parameter, and the index 0 refers to the initial conditions. Closed relative trajectories are obtained when the initial conditions cancel the secular term, i.e. when $\dot y_{0}=-2\omega x_{0}$. The resulting orbit – sketched in Fig. 3 in the Earth-centered inertial (ECI) frame, with the deputy moving on a slightly eccentric path – is an ideal one, valid as far as the linearization of the differential Earth gravitational attraction between the positions of the two satellites holds.
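As a reference for the simulations discussed below, the closed-form solution in Eq. (1) can be evaluated directly. The following is a minimal sketch, assuming a chief on a circular low Earth orbit; the orbit radius, the initial conditions and the function name are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

MU = 398600.0  # km^3/s^2, Earth gravitational parameter

def eh_inplane(t, x0, y0, vx0, vy0, r):
    """Closed-form in-plane Euler-Hill (Clohessy-Wiltshire) solution, Eq. (1).
    x: radial, y: tangential (along-track) coordinates of the deputy in the
    chief-centered local frame; the chief orbit is circular of radius r (km)."""
    w = np.sqrt(MU / r**3)                     # mean motion
    s, c = np.sin(w * t), np.cos(w * t)
    x = (4.0 - 3.0 * c) * x0 + (s / w) * vx0 + (2.0 / w) * (1.0 - c) * vy0
    y = (6.0 * (s - w * t)) * x0 + y0 - (2.0 / w) * (1.0 - c) * vx0 \
        + ((4.0 * s - 3.0 * w * t) / w) * vy0
    return x, y

# Illustrative scenario: ~400 km altitude, deputy initially 50 m above the chief.
r = 6778.0                                     # km (assumed, not from the paper)
w = np.sqrt(MU / r**3)
x0, y0, vx0 = 0.05, 0.0, 0.0                   # km, km, km/s
vy0 = -2.0 * w * x0                            # closure condition (no drift)
t = np.linspace(0.0, 2.0 * np.pi / w, 500)     # one orbital period
x, y = eh_inplane(t, x0, y0, vx0, vy0, r)
```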
This simple behavior, successfully adopted in the
preliminary design of several missions, will be actually
affected by orbital perturbations. Fig.4, represented in the local
frame centered in the chief location and obtained by adding to
the EH model an approximate, but reasonably evaluated effect
of the differential solar radiation pressure, shows that the real
relative trajectory does not close exactly. Knowledge of the relative kinematic state over time is therefore extremely important for GNC loops performing designed maneuvers, or even for safely maintaining the orbital configuration.
With respect to the attitude, the chief (target) is assumed to constantly point toward the Earth (nadir pointing), while the deputy (chaser) is commanded to constantly point toward the center of mass of the target and to keep it at the center of its field of view. Furthermore, a special initial condition can be selected to produce easier-to-compare data without losing the general value of the simulations. Such a condition assumes that the
deputy starts along the radial direction above the target, and
following simulations will refer to the initial configuration
where Earth, target, chaser, Sun are aligned in the given order.
This configuration (position * in Fig.3) corresponds to the
critical condition previously recalled and, due to the common
orbital period of the two spacecraft, ensures that blinding (Sun
and target falling in the sensor’s field of view at the same time)
will be avoided.
V. VISUAL / INFRARED OBSERVABLES
The observables of the two different imaging systems will
follow, in this preliminary analysis, separate paths, meaning
that each of them will be analyzed to provide distinct
information about the target. The observables themselves are
equal in nature for the two systems, but they differ in accuracy (the visible band can safely be assumed to be the better one) and in availability (the infrared sensor output is continuous, while the visible one is not).
The first observable is the distance to the target, which can
be obtained by evaluating the size of the target body in terms of pixels of the digital image, once the value (or a good estimate) of the real chief spacecraft size is known.
The distance to the target will be given by
$$
d=\sqrt{x^{2}+y^{2}}=\frac{L_{real}}{L_{image}}\,f \qquad (2)
$$
where f is the focal length. Due to the (strong) hypothesis of a perfectly centered view, and to the cubic shape assumed for the target (Fig. 1), the characteristic dimension, i.e. the length of the square side represented in the image, $L_{image}$, can be measured and compared to the real one, $L_{real}$ (the overall problem practically becomes two-dimensional).
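As a numerical illustration of Eq. (2) (the detector and lens parameters below are assumptions, not the hardware of Sections II and III):

```python
# Illustrative numbers only (not the hardware of Sections II-III):
L_real = 1.0              # m, known side of the cubic target
pixel_pitch = 6.0e-6      # m, assumed detector pixel size
f = 16.0e-3               # m, assumed focal length
side_pixels = 50          # measured extent of the side in the image

L_image = side_pixels * pixel_pitch   # 0.3 mm on the detector
d = (L_real / L_image) * f            # Eq. (2): about 53.3 m
```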
There is also a second observable, whose determination is based on the capabilities of the imaging systems. In fact, a generic image will include two sides of the cubic microsatellite structure. As these two sides receive very different illumination and can have a different exposure to the surrounding environment, they will have different signatures in both the visible and infrared bands.
Figure 2. A sketch of the orbital configuration analyzed for the dual band
image-based navigation.
Figure 3. Chief (target, with constant nadir-pointing attitude) and deputy (green, constantly pointing to the target) trajectories in the common orbital plane. Note that the chaser is never blinded by the Sun while observing the target, and sees the Earth in the background only when the Sun is behind it, which is also the initial condition labeled as *.
[Plot: relative trajectory, tangential (km) vs. radial (km)]
Figure 4. Relative trajectory in the chief-centered frame. Small triangles indicate that the observer continuously and perfectly points towards the target (chief).
Figure 5. Observable lengths of sides (effective in the IR spectrum)
[Plot: observable side length (m) vs. time (s); curves: Zenith, Side1, Nadir, Side2]
Figure 6. The observable length – along the orbit - of the different chief
satellite’s sides (fully observed orbit, i.e. effective in the IR spectrum only)
[Plot: observable side length in the visible (m) vs. time (s); curves: Zenith, Side1, Nadir, Side2]
Figure 7. The observable length of the different microsatellite’s sides considering eclipse (effective in the visible portion of the spectrum)
[Plot: radial (blue) and tangential (red) components (km) vs. time (s); true and estimated]
Figure 8. True and reconstructed trajectory components in the local frame
[Plot: relative trajectory, tangential (km) vs. radial (km); true and estimated]
Figure 9. True and reconstructed (estimated) relative trajectory in
case of a fully observed orbit (view in a local orbital frame)
[Plot: radial (blue) and tangential (red) components (km) vs. time (s); true and estimated]
Figure 10. Comparison between true and reconstructed trajectory components without the contribution of observables during the eclipse phase
[Plot: relative trajectory, tangential (km) vs. radial (km); true and estimated]
Figure 11. Comparison between true and reconstructed trajectory without the
contribution of observables during the eclipse phase
As sketched in Fig.5, the analysis of these two different
signatures, which always correspond to

$$
L_{real}\cos\delta \;\;\text{(nadir or zenith side)}, \qquad L_{real}\sin\delta \;\;\text{(lateral side)}, \qquad (3)
$$
with δ equal to the angular position of the chaser along its
relative orbit around the target, will provide the additional
observable
$$
\delta=\arctan\frac{L_{image}^{\,lateral}}{L_{image}^{\,nadir/zenith}} \qquad (4)
$$
This observable offers insight into the relative attitude, and actually adds information content, as the angle δ is strictly related to the relative position. In fact, once the pointing of the target is fixed and known, and that of the chaser is assumed to be correct, δ can be directly related to the relative position.
Figures 6 and 7 plot the behavior in time of the length of the
sides listed in Eq.(3) in the two cases where imaging can be
performed all along the orbit (i.e. in the infrared band) or the
eclipse should be considered (i.e. the visible case). From these
values it is possible to compute the different observables
reported in Eqs.(2) and (4).
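A minimal sketch of how Eq. (4) can be evaluated to recover δ from a single frame is given below; the pixel lengths are illustrative values, not measurements from the simulations.

```python
import numpy as np

# Apparent lengths (in pixels) of the two visible sides of the cube in one
# frame; the values are illustrative.
L_nadir_or_zenith_px = 42.0
L_lateral_px = 25.0

# Eq. (4): the ratio of the apparent side lengths gives the angular position
# delta of the chaser along its relative orbit around the target.
delta = np.arctan2(L_lateral_px, L_nadir_or_zenith_px)   # rad
```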
VI. VISUAL / INFRARED DATA FUSION
The two imaging techniques are, at least in the space
environment, totally complementary. In fact, the visible hardware is expected to be more accurate and informative, but also blind for a significant fraction of the time, while the infrared detectors, even with a poorer resolution and with the well-known limits of thermal images in terms of precise edge definition and temporal variation of the thermal field, always provide an output [10]. It is interesting to remark that the simultaneous operation of the two sensors during the sunlit portion of the orbit can be used to adapt the relative weights of their measurements.
The idea to fuse information contained in visible and thermal
images is clearly not new [11], and the issue is basically to
identify the most proficient way to combine them. Advanced
approaches aim to blend the information included in the images
at a primitive level ([12], [13]). Even if usually less performing, the simpler, most immediate and straightforward approach – processing the two synchronous images separately and obtaining from each of them an estimate of the kinematic state [14], or a part of it – has been selected for this preliminary analysis. In order to combine the two separate measurements, to take into account the intervals – outages – where one of them is not available, and to duly exploit the knowledge of the dynamics, recursive Kalman filtering seems a suitable solution. Due to the nonlinear nature of the observables, the extended (EKF) formulation is used, while the dynamic process, based on the EH equations described in the previous section, is linear.
The implemented filter is a standard, total state formulation
written in terms of the kinematic state variables, i.e. $\mathbf{x}=[\,x\;\;y\;\;\dot x\;\;\dot y\,]^{T}$, and not of their errors [15]. The classical relations hold, with a prediction phase:
$$
\hat x^{-}_{k+1}=\Phi\,\hat x^{+}_{k} \qquad (5)
$$
$$
P^{-}_{k+1}=\Phi\,P^{+}_{k}\,\Phi^{T}+Q \qquad (6)
$$
the computation of the Kalman gain K:
$$
K_{k}=\hat P^{-}_{k}H^{T}_{k}\left[H_{k}\hat P^{-}_{k}H^{T}_{k}+R_{k}\right]^{-1} \qquad (7)
$$
and the updates of the state (when measurements are available) and of the covariance:
$$
\hat x^{+}_{k}=\hat x^{-}_{k}+K_{k}\left(z_{k}-\hat z_{k}\right) \qquad (8)
$$
$$
P^{+}_{k}=\left[I-K_{k}H_{k}\right]\hat P^{-}_{k} \qquad (9)
$$
where, as usual, Φ is the transition matrix obtained from the Euler-Hill dynamics represented in Eq. (1), z is the measurement vector, P is the covariance matrix, Q and R are the noise matrices associated with the process and with the measurements, the index k refers to the time instant, and the superscripts − and + denote predicted and updated quantities. Aside from these routine formulas, some attention should be devoted to the measurement matrix H which, following EKF rules, reads (^ indicates the result of an evaluation at the current step):
$$
H_{k}=\left.\frac{\partial z}{\partial x}\right|_{\hat x^{-}_{k}} \qquad (10)
$$
In the present case, when the two observables, i.e. the
distance and the relative angle, recovered from one (either
visible or infrared) image are considered, H reads as
$$
\hat H_{k}^{\,2\times4}=
\begin{bmatrix}
\dfrac{x}{\sqrt{x^{2}+y^{2}}} & \dfrac{y}{\sqrt{x^{2}+y^{2}}} & 0 & 0\\[6pt]
-\dfrac{y}{x^{2}+y^{2}} & \dfrac{x}{x^{2}+y^{2}} & 0 & 0
\end{bmatrix} \qquad (11)
$$
while when both images are considered it becomes
$$
\hat H_{k}^{\,4\times4}=
\begin{bmatrix}
\hat H_{k}^{\,2\times4}\\[2pt]
\hat H_{k}^{\,2\times4}
\end{bmatrix} \qquad (12)
$$
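A compact sketch of the filter in Eqs. (5)-(12) for the in-plane state [x, y, ẋ, ẏ] is reported below; the transition matrix follows from the Euler-Hill model, while the measurement model stacks the range of Eq. (2) and the angle of Eq. (4). The noise levels, the assumed relation δ = arctan(y/x) (consistent with the Jacobian in Eq. (11)) and all function names are assumptions of this sketch, not part of the original implementation.

```python
import numpy as np

MU = 398600.0  # km^3/s^2

def eh_transition(dt, r):
    """Discrete Euler-Hill transition matrix for the state [x, y, vx, vy]."""
    w = np.sqrt(MU / r**3)
    s, c = np.sin(w * dt), np.cos(w * dt)
    return np.array([
        [4 - 3 * c,        0.0, s / w,            2 * (1 - c) / w],
        [6 * (s - w * dt), 1.0, -2 * (1 - c) / w, (4 * s - 3 * w * dt) / w],
        [3 * w * s,        0.0, c,                2 * s],
        [-6 * w * (1 - c), 0.0, -2 * s,           4 * c - 3],
    ])

def h_and_jacobian(state):
    """Predicted measurements [d, delta] and Jacobian H (Eqs. 2, 4, 11)."""
    x, y = state[0], state[1]
    rho2 = x**2 + y**2
    d = np.sqrt(rho2)
    z_pred = np.array([d, np.arctan2(y, x)])
    H = np.array([[x / d,     y / d,    0.0, 0.0],
                  [-y / rho2, x / rho2, 0.0, 0.0]])
    return z_pred, H

def ekf_step(xk, Pk, z, Phi, Q, R):
    """One prediction/update cycle, Eqs. (5)-(9). If z is None (e.g. the
    visible camera during eclipse) only the prediction is carried out."""
    x_pred = Phi @ xk                                        # Eq. (5)
    P_pred = Phi @ Pk @ Phi.T + Q                            # Eq. (6)
    if z is None:
        return x_pred, P_pred
    z_pred, H = h_and_jacobian(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Eq. (7)
    x_upd = x_pred + K @ (z - z_pred)                        # Eq. (8)
    P_upd = (np.eye(len(xk)) - K @ H) @ P_pred               # Eq. (9)
    return x_upd, P_upd
```

When both cameras observe the target simultaneously, the two pairs of observables are simply stacked as in Eq. (12), each with its own block in R, so that the relative weighting between visible and infrared data emerges automatically from the filter.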
VII. FINDINGS AND REMARKS
The preliminary simulations carried out, a sample of which is reported in Figs. 8-11, show that the fusion of visible and infrared images is certainly beneficial, even if the R matrix representing the expected measurement noise is certainly higher for the thermal camera (by one to two orders of magnitude in the performed tests). In detail, comparing Figs. 9 and 11, the reconstructed trajectory is far closer to the real one when measurements are available all along the orbit – i.e. including the infrared ones. The second observable, i.e. the angle δ, representing either the angular position along the relative orbit or the relative attitude between chaser and target, has a recognizable, even if not dominant, effect on the estimate quality.
Additional analysis will consider a larger number of numerical tests to compute statistical indexes for the trajectory reconstruction (so as not to be limited by the effects of the random values involved in the filter runs). Even more importantly, the infrared images for the simulations will be built on the basis of a simple thermal model of the microsatellite, assuming standard rules for conductivity, absorption and emissivity and able to provide temperature maps for all sides. To take into account the different behavior of the portions of the sides covered by solar cells, such a model should include at least nine nodes per side.
Further improvements can be expected by fully exploiting
the attitude information coming from the difference in sides’
imaging (δ observable). The addition of more complete filters,
including multiple behavioral models [16] and recursive
schemes more complex than EKF, already tested in spacecraft
proximity dynamics [17], should work towards this goal.
Overall, the combination of visible and infrared imaging in
spacecraft proximity operations appears an interesting option.
Analysis of scenarios different from planar formation flying referenced to a circular orbit should also be considered. Relaxing some of the hypotheses, mainly those on the perfect camera viewing direction and on the ideal attitude of the chaser, is required in order to validate the system, estimate its robustness, and prepare for lab tests. At the same time, reference to currently available IR hardware will improve the significance of the study.
REFERENCES
[1] CNES, Mécanique Spatiale, Toulouse: Cépaduès Editions, 1997.
[2] K.T. Alfriend, S.R. Vadali, P. Gurfil, J.P. How, L.S. Breger, Spacecraft
Formation Flying: Dynamics, Control and Navigation, Burlington, MA:
Elsevier, 2010.
[3] G. Casonato, G.B. Palmerini, “Visual techniques applied to the
ATV/ISS rendez-vous monitoring,” 2004 IEEE Aerospace Conference
Proceedings, vol. 1, pp. 613-624.
[4] P. Gasbarri, M. Sabatini, G.B. Palmerini, “Ground tests for vision based
determination and control of formation flying spacecraft trajectories,”
Acta Astronautica, vol.102, pp. 378-391, 2014.
[5] R. Gade, T.B. Moeslund, “Thermographic cameras and applications: a
survey,” Machine Vision and Applications, vol. 25, 2014, pp. 245-262.
[6] D.G. Gilmore (Ed.), “Spacecraft Thermal Control Handbook”, 2nd ed. El
Segundo, CA: The Aerospace Press & AIAA, 2002.
[7] G. Palmerini, M. Sabatini, P. Gasbarri, “Analysis and tests of visual
based techniques for orbital rendezvous operations,” 2013 IEEE
Aerospace Conference Proceedings.
[8] M. Sabatini, R. Monti, P. Gasbarri, G.B. Palmerini, “Adaptive and
robust algorithms and tests for visual-based navigation of a space robotic
manipulator,” Acta Astronautica, vol. 83, pp. 65–84, February-March
2013.
[9] W.H. Clohessy, R.S. Wiltshire, “Terminal Guidance System for Satellite
Rendezvous,” Journal of the Aerospace Sciences, Vol. 27, No. 9, 1960,
pp. 653–658.
[10] S.G. Kong, J. Heo, F. Boughorbel, Y. Zheng, B.R. Abidi, A. Koschan,
M. Yi, M.A. Abidi, “Multiscale Fusion of Visible and Thermal IR
Images for Illumination-Invariant Face Recognition,” Int. Journal of
Computer Vision, vol.71, No.2, 2007, pp.215-233.
[11] V. Deodeshmukh, S. Chaudhuri, S. Dutta Roy, “Cooperative Infrared
and Visible Band Tracking,” Proceedings of the International
Conference on Applied Pattern Recognition, 2009.
[12] M.J. Johnson, P. Bajcsy, “Integration of Thermal and Visible Imagery
for Robust Foreground Detection in Tele-immersive Spaces,”
Proceedings of the 11th International Conference on Information Fusion,
2008, pp. 1265-1272.
[13] J. Saeedi, K. Faez, “Infrared and visible image fusion using fuzzy logic
and population-based optimization,” Applied Soft Computing, vol.12,
No. 3, pp. 1041-1054, 2012.
[14] P. Kumar, A Mittal, P. Kumar, “Fusion of Thermal Infrared and Visible
Spectrum Video for Robust Surveillance,” Computer Vision, Graphics
and Image Processing, Springer Lecture Notes in Computer Science Vol.
4338, 2006, pp 528-539.
[15] J.A. Farrell, M. Barth, The Global Positioning System and Inertial
Navigation, New York: McGraw-Hill, 1998.
[16] M. Airouche, L. Bentabet, M. Zamer, G. Gao, “Pedestrian tracking using thermal, color and location clue measurements: a DSmT-based framework,” Machine Vision and Applications, vol. 23, 2012, pp. 999-1010.
[17] F. Reali, G.B. Palmerini, “Estimate Problems for Satellite Clusters,”
2008 IEEE Aerospace Conference Proceedings.