Optical Motion Tracking in Earthquake-Simulation
Shake Table Testing: Preliminary Results
Paul Rodríguez
Abstract—Sensors such as accelerometers and displacement transducers are generally used in earthquake-simulation shake table testing to measure the induced motions. In particular, the Anti-seismic Structure Laboratory at the Pontifical Catholic University of Peru (PUCP) uses LVDT (linear variable differential transformer) sensors, which can achieve accurate measurements. However, there are limitations in the number of measuring points; moreover, the required instrumentation is demanding, and destructive tests cannot be measured with such devices.
We present the preliminary results of an optical motion tracking system to measure the induced motions for shake table testing at the PUCP's Anti-seismic Structure Laboratory.
I. INTRODUCTION
Shake table tests are used to assess how a model or a full-size building responds to the vibrations of a simulated earthquake. The accuracy of the induced-motion measurements is paramount to appraise the behavior and the structural health of the construction during the tests, and their analysis may also be used to propose design enhancements.
Sensors such as accelerometers and displacement transducers are commonly used in Shake Table experiments to measure the earthquake-induced motions. Nevertheless, there are limitations in the number of measuring points; moreover, the required instrumentation is demanding, and destructive tests cannot be measured with such devices. Other motion tracking methodologies include magnetic, acoustic, and optical techniques.
In particular, the PUCP's Anti-seismic Structure Laboratory uses LVDT (linear variable differential transformer, a type of displacement transducer) sensors to measure the earthquake-induced motions. The tests are carried out on a Shake Table (4.40 × 4.40 m) with one degree of freedom. The maximum displacement and acceleration of the Shake Table are 130 mm in each sense and 1 G, respectively. Each test usually lasts about 30 seconds. There are two types of tests carried out in the PUCP's Anti-seismic Structure Laboratory: "resistance" and "wall-bend" tests. The "resistance" test is carried out to assess the structural health of the construction after the simulated earthquake, whereas the "wall-bend" test studies the bending properties of a given wall. In both cases the LVDT sensors measure the construction's induced motion only in the direction and sense of the Shake Table's drive movement.
Even though the LVDT sensors have good accuracy (on the order of millimeters), they must be physically attached to the structure and require cumbersome cabling and configurations and substantial time for setup (up to two days).

Paul Rodríguez is with the Digital Signal Processing Group at the Pontificia Universidad Católica del Perú, Lima, Peru. Email: prodrig@pucp.edu.pe, Tel: +51 1 9 9339 5427.
In the context of this paper, tracking is the problem of
generating an inference about the motion of an object given a
sequence of images. In particular, the object (or objects) for an
optical motion tracking system (i) may be particular features of
the scene (image-based systems) or (ii) may consist of artificial
markers introduced for such purpose (marker-based systems).
Both types of optical motion tracking systems have been successfully employed in earthquake engineering. Image-based systems are reported in [1], [2], [3]; the main drawback of such systems is that they need robust feature detection techniques. For marker-based systems, active and retro-reflective markers are quite popular [4], [5], [6] because the segmentation procedure for such markers is relatively straightforward (based on intensity). Systems that use active markers (see [4]) may need high-speed cameras in order to obtain accurate measurements. Systems based on retro-reflective markers [5], [6] need some type of artificial illumination. Moreover, even though some type of intensity-adapted segmentation may be used (see [7] for instance), these systems are not fully robust to changing levels of illumination.
In this work we adopt a marker-based optical motion tracking system to measure the (simulated) earthquake-induced motions, where one of our main contributions is the use of AM-FM (amplitude-modulated, frequency-modulated) opaque markers. The segmentation process for such markers is robust to changing levels of illumination. They also embed spatial-resolution information used to simplify the measurement of the induced motions. Furthermore, since our markers are opaque (i.e., they neither emit light nor are retro-reflective) and given the modest dynamics of the PUCP's Shake Table, the required video recording speed for our system is within today's standard bounds (30 to 60 fps). In Figure 1 we show our system configuration for the two types of tests carried out in the PUCP's Anti-seismic Structure Laboratory, where several high-resolution digital cameras¹ are used to record the displacement of the markers placed onto the studied structure's surface.
This paper is organized as follows: in Section II we describe the characteristics of our AM-FM passive markers and their robust segmentation procedure based on AM-FM demodulation [9], [10, Ch. 4.4]. In Section III we show our preliminary experimental results. Finally, in Section IV we give our concluding remarks.

¹Sanyo XACTI HD1010, 1920×1080 pixels @ 29.97 fps, 1280×720 pixels @ 59.94 fps. The XACTI HD1010 digital cameras were chosen for this project because they are a good compromise between video recording capabilities (see [8]) and economic constraints.

(a) Configuration for a "resistance" test. (b) Configuration for a "wall-bend" test.
Fig. 1. System configuration for the two types of tests carried out in the PUCP's Anti-seismic Structure Laboratory.
II. TRACKING AM-FM MARKERS
The AM-FM (amplitude-modulated, frequency-modulated) image representation has been successfully employed to segment images in a variety of scenarios [11], [12], [13], where the AM-FM structures were intrinsic to the analyzed image. In contrast, we impose artificial AM-FM structures, in the form of opaque markers, on the scene; for the scope of this paper, the scene is a given structure placed onto a shake table (see Figures 2(a) and 2(b)).
In this section we provide a brief description of AM-FM demodulation and its associated dominant component analysis (DCA) [9], [10, Ch. 4.4]. We then describe the AM-FM marker and the segmentation procedure based on the estimated AM-FM parameters extracted from the scene.
A. AM-FM Demodulation
The AM-FM representation of images allows us to model
non-stationary image content in terms of amplitude and phase
functions using
$$I(\xi) = \sum_{n=1}^{M} a_n(\xi)\,\cos(\varphi_n(\xi)) \qquad (1)$$

where $I(\xi): \mathbb{R}^2 \to \mathbb{R}$ is the input image, $\xi = (\xi_1, \xi_2) \in \mathbb{R}^2$, $M \in \mathbb{N}$, $a_n: \mathbb{R}^2 \to [0, \infty)$ and $\varphi_n: \mathbb{R}^2 \to \mathbb{R}$.
The interpretation of (1) suggests that the $M$ AM-FM component images $a_n(\xi)\cos(\varphi_n(\xi))$ are used to model the essential image modulation structure. The amplitude functions $a_n(\xi)$ embed the energy (intensity in this context) of an image region, whereas the frequency-modulated components $\cos(\varphi_n(\xi))$ capture fast-changing spatial variability in image intensity.
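To make the model concrete, the following Python/NumPy sketch (illustrative only; the implementation used in this work is in Matlab) generates a synthetic single-component AM-FM image, i.e. $M = 1$ in (1), with a slowly varying amplitude and a vertical chirp:

```python
import numpy as np

# Spatial grid xi = (xi1, xi2); xi1 indexes rows (the vertical direction)
xi1, xi2 = np.meshgrid(np.arange(256.0), np.arange(256.0), indexing="ij")

# Slowly varying instantaneous amplitude a(xi): a smooth Gaussian bump
a = 0.5 + 0.5 * np.exp(-((xi1 - 128) ** 2 + (xi2 - 128) ** 2) / (2 * 60.0 ** 2))

# Phase of a vertical chirp: its instantaneous frequency
# d(phi)/d(xi1) grows linearly with xi1
phi = 2 * np.pi * (0.02 * xi1 + 1e-4 * xi1 ** 2)

# Single AM-FM component image, I = a * cos(phi), as in (1) with M = 1
I = a * np.cos(phi)
```

Displaying `I` shows fringes whose spacing shrinks down the image while their contrast follows the amplitude bump, which is exactly the kind of structure the markers of Section II-B impose on the scene.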
(a) AM-FM opaque markers. The 2-D sinusoid pattern may be in the horizontal or vertical direction. Circles are 10 cm apart from each other (vertical and horizontal distances).
(b) Scene with vertical AM-FM markers placed onto a small structure. The image size is 1920 × 1080 pixels; it is a frame from a video recorded at 29.97 fps with a Sanyo XACTI HD1010 digital camera [8].
(c) 2-D dataset $\omega^{(1)}(\xi)$ displayed as an image. Note that $\omega^{(1)}(\xi)$ is the dominant instantaneous frequency (IF) of 2(b) in the vertical direction (see (2)).
(d) 2-D dataset $\omega^{(2)}(\xi)$ displayed as an image. Note that $\omega^{(2)}(\xi)$ is the dominant instantaneous frequency (IF) of 2(b) in the horizontal direction (see (2)).
Fig. 2. AM-FM markers are shown in (a). (b) depicts a typical video frame. In (c) and (d) we display the 2-D datasets $\omega^{(1)}(\xi)$ and $\omega^{(2)}(\xi)$ as images; these are the dominant instantaneous frequencies (IF) of 2(b) in the vertical and horizontal direction, respectively (see (2)). Note the high contrast of (c), where the vertical AM-FM markers are placed.
Given a real image $I(\xi)$, we need to compute the AM-FM parameters. We use the term AM-FM demodulation to mean the computation of the instantaneous amplitude (IA) functions $a_n(\xi)$, the instantaneous phase (IP) functions $\varphi_n(\xi)$, and the instantaneous frequency (IF) vector functions

$$\omega_n(\xi) = \nabla\varphi_n(\xi) = \left(\frac{\partial}{\partial \xi_1}\varphi_n(\xi),\; \frac{\partial}{\partial \xi_2}\varphi_n(\xi)\right) = \left(\omega^{(1)}_n(\xi),\; \omega^{(2)}_n(\xi)\right).$$

For the scope of this paper, the IA, IP and IF computations are carried out via the robust AM-FM demodulation algorithm proposed in [14] (see also [9] and [15] for further details).
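As a simple one-dimensional illustration of what demodulation computes (a stand-in sketch, not the robust 2-D algorithm of [14]), the IA, IP and IF of a real signal can be estimated from its analytic signal:

```python
import numpy as np

def demodulate_1d(x):
    """Estimate IA, IP and IF of a real 1-D signal x = a*cos(phi) via
    its analytic signal (FFT-based Hilbert transform): a = |z|,
    phi = angle(z), omega = d(phi)/dt."""
    N = len(x)
    X = np.fft.fft(x)
    H = np.zeros(N)
    H[0] = 1.0
    H[1:(N + 1) // 2] = 2.0           # keep positive frequencies (doubled)
    if N % 2 == 0:
        H[N // 2] = 1.0
    z = np.fft.ifft(X * H)            # analytic signal
    ia = np.abs(z)                    # instantaneous amplitude a(t)
    ip = np.unwrap(np.angle(z))       # instantaneous phase phi(t)
    iff = np.gradient(ip)             # instantaneous frequency (rad/sample)
    return ia, ip, iff

# Pure tone at an exact FFT bin: IA should be 1 and IF constant
N, k = 1024, 100
omega = 2 * np.pi * k / N
ia, ip, iff = demodulate_1d(np.cos(omega * np.arange(N)))
```

On this bin-aligned tone the estimates are exact (`ia` ≈ 1 and `iff` ≈ `omega` everywhere); the 2-D algorithm of [14] plays the same role for the phase functions $\varphi_n(\xi)$ of (1).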
The AM-FM dominant component analysis (DCA), described in [9], consists of applying a collection of band-pass filters (a filter bank) to the original image, then performing the AM-FM demodulation of each band-pass filtered image and selecting the estimates from the channel with the maximum amplitude estimate $a(\xi) = \max_{n \in [1, M]} \{a_n(\xi)\}$. Hence, the algorithm adaptively selects the bandpass filter with the maximum response, modeling the input image as

$$I(\xi) = a(\xi)\,\cos(\varphi(\xi)) \qquad (2)$$

where $\kappa(\xi) = \operatorname{argmax}_{n \in [1, M]} \{a_n(\xi)\}$, $\varphi(\xi) = \varphi_{\kappa(\xi)}(\xi)$, $\omega(\xi) = \left(\omega^{(1)}(\xi),\, \omega^{(2)}(\xi)\right)$ and $\omega^{(l)}(\xi) = \omega^{(l)}_{\kappa(\xi)}(\xi)$ for $l = 1, 2$. This approach does not assume spatial continuity, and allows the model to quickly adapt to singularities in the image.
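A one-dimensional sketch of DCA follows (the 2-D version in [9] uses a proper filter bank; the ideal band-pass masks and band edges here are illustrative assumptions):

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal of a real 1-D signal."""
    N = len(x)
    H = np.zeros(N)
    H[0] = 1.0
    H[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        H[N // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * H)

def dca_1d(x, bands):
    """Band-pass filter x into channels, demodulate each channel, and
    keep, per sample, the estimates from the channel with the largest
    instantaneous amplitude (the dominant component)."""
    N = len(x)
    freqs = np.abs(np.fft.fftfreq(N))         # |frequency| in cycles/sample
    ia_ch, iff_ch = [], []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)   # ideal band-pass filter
        xb = np.real(np.fft.ifft(np.fft.fft(x) * mask))
        z = analytic(xb)
        ia_ch.append(np.abs(z))
        iff_ch.append(np.gradient(np.unwrap(np.angle(z))))
    ia_ch, iff_ch = np.array(ia_ch), np.array(iff_ch)
    k = np.argmax(ia_ch, axis=0)              # dominant channel per sample
    idx = np.arange(N)
    return ia_ch[k, idx], iff_ch[k, idx]      # a(xi), omega(xi) of (2)

# Strong low-frequency tone plus weak high-frequency tone: DCA should
# return the low-frequency component everywhere
N = 2048
t = np.arange(N)
x = 1.0 * np.cos(2 * np.pi * 100 * t / N) + 0.3 * np.cos(2 * np.pi * 400 * t / N)
ia, iff = dca_1d(x, bands=[(0.01, 0.1), (0.1, 0.3)])
```

Since the low-frequency tone has the larger amplitude, the argmax picks its channel at every sample, so `ia` ≈ 1 and `iff` ≈ 2π·100/2048 throughout; this is the per-pixel selection that (2) formalizes.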
B. AM-FM Markers: description, segmentation and tracking
The design of the AM-FM opaque markers (see Figure 2(a)) includes six black circles (radius 0.5 cm) placed 10 cm apart from each other (vertical and horizontal distances) and a 2-D sinusoid pattern (in the horizontal or vertical direction) whose wavelength is approximately 1 cm. The markers are printed on A4 Canson paper using a standard laser printer and are placed onto the structure (to be analyzed) using a strong glue and/or small upholstery nails; in Figure 2(b) six markers are placed onto a small structure.
Given a video frame (e.g. Figure 2(b)) with the AM-FM markers placed onto a structure, we compute the DCA AM-FM demodulation parameters following (2); in particular, we are interested in the dominant instantaneous frequency (IF) vector function, with vertical and horizontal components given by $\omega^{(1)}(\xi)$ and $\omega^{(2)}(\xi)$ respectively. In Figures 2(c) and 2(d) we display the 2-D datasets $\omega^{(1)}(\xi)$ and $\omega^{(2)}(\xi)$ as images. Clearly, the high-contrast areas of the image shown in Figure 2(c) are related to the 2-D sinusoid pattern in the vertical direction (see Figure 2(b)).

Moreover, the resulting histogram of the 2-D dataset $\omega^{(1)}(\xi)$ (or $\omega^{(2)}(\xi)$ when appropriate) will be multimodal, where the highest-frequency mode (by design) should correspond to the 2-D sinusoid pattern; in Figure 3 we show the histogram of the instantaneous frequency of 2(b) in the horizontal direction (note that this is a very typical histogram).
Using the highest-frequency-mode assumption, we may follow two approaches to first segment each marker as an entity containing the black circles: (i) compute the histogram of $\omega^{(1)}(\xi)$ (or $\omega^{(2)}(\xi)$ when appropriate), identify the lower and upper thresholds related to the highest-frequency mode (for instance, via [7]) and use such thresholds for segmentation, or (ii) manually segment the location of each marker in a given "training" frame, compute the histogram of $\omega^{(1)}(\xi)$ (or $\omega^{(2)}(\xi)$ when appropriate) in the manually segmented regions of interest (ROI), identify the lower and upper thresholds (again via [7]) and use such thresholds to segment all other frames. The first approach is fully automatic but, even though unlikely in a real scenario, the highest-frequency-mode assumption may not hold. The second approach needs one training frame to be manually segmented but, in practice, gives better segmentation results. The latter approach is used in the present report. Once the ROIs (markers) are segmented, we estimate the centroid of each black circle (six per marker); since at this point we know the properties of the object we are dealing with, this is a straightforward procedure.
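The per-pixel thresholding and the centroid step can be sketched as follows (the IF values and thresholds below are synthetic stand-ins for the estimates of Section II-A):

```python
import numpy as np

def segment_by_if(omega, lo, hi):
    """Binary mask of pixels whose dominant IF lies within [lo, hi],
    the thresholds bracketing the markers' highest-frequency histogram
    mode (obtained, e.g., from a manually segmented training frame)."""
    return (omega >= lo) & (omega <= hi)

def centroid(mask):
    """Centroid (row, col) of the foreground pixels of a binary mask,
    applied to each black circle once the marker ROI is known."""
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic dominant-IF image: a 'marker' patch oscillating at ~0.30
# (normalized frequency) against a smooth background near 0.05
omega1 = np.full((100, 100), 0.05)
omega1[20:60, 30:70] = 0.30
roi = segment_by_if(omega1, 0.25, 0.35)
print(centroid(roi))    # -> (39.5, 49.5)
```

In the real pipeline the mask isolates the marker ROIs, and the centroid computation is then run on each of the six dark circles inside a ROI rather than on the ROI itself.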
Fig. 3. Histogram of the instantaneous frequency of 2(b) in the horizontal direction (note that this is a very typical histogram). The mode corresponding to the AM-FM markers is highlighted. The horizontal axis represents normalized frequency.
Since the markers embed spatial information and the displacement characteristics of the Shake Table at the PUCP's Anti-seismic Structure Laboratory (maximum displacement and acceleration of 130 mm in each sense and 1 G, respectively) are modest, it is possible to estimate the displacement of the black circles of each marker (and therefore the structure's displacement) without calibrating the cameras' intrinsic/extrinsic parameters prior to the test (see [16], [17]). Nevertheless, we plan in the short term to include such calibration. Furthermore, it is needed for "wall-bend" tests, where the movement is perpendicular to the cameras' point of view; once the cameras' intrinsic/extrinsic parameters are estimated, it will be possible to estimate the displacement of the individual black circles (and therefore the wall bending properties) in the direction and sense of the Shake Table's drive movement.
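Concretely, since adjacent circle centroids on a marker are a known 100 mm apart, the image scale can be recovered from a reference frame and applied to any tracked centroid. A minimal sketch (the pixel coordinates are made up for illustration, and the marker plane is assumed parallel to the sensor):

```python
import numpy as np

CIRCLE_SPACING_MM = 100.0   # circles are 10 cm apart by marker design

# Centroids (row, col in pixels) of two adjacent circles of a marker
# in a reference frame; values are illustrative, not measured data
c_a = np.array([210.3, 415.8])
c_b = np.array([210.1, 515.0])

# Image scale: known physical spacing over measured pixel spacing
mm_per_pixel = CIRCLE_SPACING_MM / np.linalg.norm(c_b - c_a)

# Displacement of one tracked centroid between two frames, in mm
d_pixels = np.array([530.9, 402.7]) - np.array([530.6, 308.4])
d_mm = d_pixels * mm_per_pixel
```

Once `mm_per_pixel` is known, every centroid trajectory estimated in Section II-B converts directly to millimeters, which is how displacement plots such as Figure 5 are obtained.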
III. PRELIMINARY EXPERIMENTAL RESULTS
The preliminary experimental results presented below come from a collection of destructive tests carried out as part of a final-year course ("Anti-seismic Engineering", from the Civil Engineering Department) at the PUCP's Anti-seismic Structure Laboratory. Four small structures (with different mechanical properties) are placed and secured onto the Shake Table; in Figure 4 we depict the structures' organization. Due to the destructive nature of the tests, no LVDT sensors are used.

The deployment of the cameras (Sanyo XACTI HD1010 digital cameras [8]) is illustrated in Figure 4; note that from the point of view of cameras 1 and 2, the movement of the two frontal structures will be "right" and "left". The videos were recorded at 29.97 fps and 1080p (frames of 1920 × 1080 pixels) in H.264/MPEG-4 AVC format. A simple Matlab MEX interface (available at [18]) based on the FFmpeg library [19] was used to directly decode the videos; a Matlab program was written to estimate the centroids of the black circles of each marker using the procedure described in Section II-B.

Fig. 4. Structures' organization and cameras' deployment for destructive tests carried out as part of a final-year course ("Anti-seismic Engineering", from the Civil Engineering Department) at the PUCP's Anti-seismic Structure Laboratory.
In what follows we focus on the video acquired by camera 2 (see Figure 4), for which the estimated spatial resolution was 1.008 mm/pixel. The structure in front of camera 2 had six markers placed onto its surface and, for the sake of clarity, we choose to present the estimated centroid evolution for the top-most/left-most and bottom-most/right-most black circles (related to the top-most/left-most and bottom-most/right-most markers respectively; see Figure 6). In Figures 5(a) and 5(b) we present the horizontal and vertical displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles (see also Figures 6(a) through 6(f)). We must note that for Figure 5(a), "down" is to be understood as "left" and "up" as "right" in the cameras' perspective. From Figure 5(b) it should be apparent that the structure starts losing its original geometric properties (starts to break down) at about the 11th second, and from the 15th second onwards the structure is severely damaged. The T3 and T4 marks in 5(b) (and 5(d)) highlight two particularly extreme instances. Additionally, between instants T3 through T6, the rocking movement of the top part of the structure is clearly identifiable (see Figures 5(d) and 6(c)-6(f)). It may also be deduced that the base of the structure suffered structural damage: the bottom-most/right-most black circle ends up in a lower position (compare the green line values in Figure 5(b) at seconds 0 and 35).
IV. CONCLUSIONS
The use of AM-FM markers in the proposed optical motion tracking system provides a robust scheme to segment the regions of interest (markers) and consistently track the induced movements. The system has an accuracy comparable to that of the LVDT sensors (on the order of millimeters). Moreover, it dramatically decreases the instrumentation time: from up to two days down to less than 30 minutes.
In the short term we intend to collect full statistics and compare the LVDT measurements with those estimated by the proposed system. We also plan to include the estimation of the cameras' intrinsic/extrinsic parameters to correct the distortion in the video frames and to allow us to estimate the displacement in "wall-bend" tests.

Currently, the proposed system analyzes the acquired videos in an off-line fashion, taking about one hour to estimate the induced movements from one video sequence. In the mid-term, we foresee an on-line system, where the Matlab scripts currently used are replaced by a C library; we also plan to acquire the uncompressed video directly via an HDMI frame-grabber [20].
V. ACKNOWLEDGMENT
The author thanks the diligent cooperation of the members of the Anti-seismic Structure Laboratory at the PUCP.
REFERENCES
[1] Gongkang Fu and Adil G. Moosa, "An optical approach to structural displacement measurement and its application," Journal of Engineering Mechanics, vol. 128, no. 5, pp. 511–520, 2002.
[2] D. Nastase, S. Chaudhuri, R. Chadwick, T. Hutchinson, K. Doerr, and F. Kuester, "Development and evaluation of a seismic monitoring system for building interiors - part I: Experiment design and results," IEEE Trans. on Instrumentation and Measurement, vol. 57, pp. 332–344, 2008.
[3] D. Nastase, S. Chaudhuri, R. Chadwick, T. Hutchinson, K. Doerr, and F. Kuester, "Development and evaluation of a seismic monitoring system for building interiors - part II: Image data analysis and results," IEEE Trans. on Instrumentation and Measurement, vol. 57, pp. 345–354, 2008.
[4] Satoshi Fujita, Osamu Furuya, and Tadashi Mikoshiba, "Research and development of measurement method for structural fracturing process in shake table tests using image processing technique," Journal of Pressure Vessel Technology, vol. 126, no. 1, pp. 115–121, 2004.
[5] T. Hutchinson, S. Ray Chaudhuri, F. Kuester, and S. Auduong, "Light-based motion tracking of equipment subjected to earthquake motions," J. Comp. in Civil Engineering, vol. 19, pp. 292–303, 2005.
[6] K. Kanda, Y. Miyamoto, A. Kondo, and M. Oshio, "Monitoring of earthquake induced motions and damage with optical motion tracking," Smart Materials and Structures, vol. 14, pp. 32–38, 2005.
[7] Jeng-Horng Chang, Kuo-Chin Fan, and Yang-Lang Chang, "Multi-modal gray-level histogram modeling and decomposition," Image Vision Comput., vol. 20, no. 3, pp. 203–216, 2002.
[8] Sanyo Corp., "Instruction manual VPC HD 1010," http://www.sanyo.com/.
[9] J. P. Havlicek, AM-FM Image Models, Ph.D. thesis, The University of Texas at Austin, 1996.
[10] A. C. Bovik, Handbook of Image and Video Processing, Academic Press, May 2000.
[11] M. S. Pattichis, C. S. Pattichis, M. Avraam, A. C. Bovik, and K. Kyriakou, "AM-FM texture segmentation in electron microscopic muscle imaging," IEEE J. Medical Imaging, vol. 19, no. 12, pp. 1253–1258, December 2000.
[12] P. Rodríguez, M. S. Pattichis, and M. B. Goens, "M-mode echocardiography image and video segmentation based on AM-FM demodulation techniques," in Int. Conf. of the IEEE Eng. in Medicine and Biology Society, Cancun, Mexico, September 2003, vol. 2, pp. 1176–1179.
[13] C. Christodoulou, C. S. Pattichis, Victor Murray, M. S. Pattichis, and A. Nicolaides, "AM-FM representations for the characterization of carotid plaque ultrasound images," in 4th European Conference of the International Federation for Medical and Biological Engineering, Springer Berlin Heidelberg, 2008, pp. 546–459.
[14] Paul Rodríguez, Fast and Accurate AM-FM Demodulation of Digital Images With Applications, Ph.D. thesis, University of New Mexico (UNM), Albuquerque, NM, USA, 2005.
[15] G. Girolami and D. Vakman, "Instantaneous frequency estimation and measurement: a quasi-local method," Measurement Science and Technology, vol. 13, pp. 909–917, June 2002.
[16] Zhengyou Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1330–1334, 2000.
[17] P. F. Sturm and S. J. Maybank, "On plane-based camera calibration: a general algorithm, singularities, applications," 1999, vol. 1.
[18] P. Rodríguez, "FFmpeg video for MEX (FFV4MEX)," http://sites.google.com/a/istec.net/prodrig/.
[19] "FFmpeg," http://ffmpeg.org/.
[20] Black Magic, "Intensity Pro," http://www.blackmagic-design.com/products/intensity/.
(a) Horizontal displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles, where "down" is to be understood as "left" and "up" as "right" in the cameras' perspective (see also Figures 6(a) through 6(f)).
(b) Vertical displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles, where "up" and "down" are in accordance with the cameras' perspective (see also Figures 6(a) through 6(f)).
(c) Zoom of the horizontal displacement (a) between seconds 21 through 30.
(d) Zoom of the vertical displacement (b) between seconds 21 through 30. The rocking movement of the top part (in blue) of the structure is clearly identifiable. See also 6(c)-6(f).
Fig. 5. Horizontal ((a) and (c)) and vertical ((b) and (d)) displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles. See also Figures 6(a) through 6(f).
(a)-(f) Frames at instants T1-T6, respectively (see Figures 5(a) and 5(b)).
Fig. 6. Frames from camera 2 (see Figure 4) at instants T1-T6 (see Figures 5(a) and 5(b)) are shown. Original frames have been cropped to focus the reader on the structure.
... Several laboratory-based experimental set-ups (e.g., [30,31]) have demonstrated the potential of image-based monitoring. Displacement measurements of high-rise buildings based on conventional imaging have been proposed by [32,33], the latter using high-speed linear cameras, while [34,35] proposed the dynamic monitoring of slender structures using various computer-vision techniques. ...
Article
Full-text available
The Multi-Parameter Wireless Sensing (MPwise) system is an innovative instrumental design that allows different sensor types to be combined with relatively high-performance computing and communications components. These units, which incorporate off-the-shelf components, can undertake complex information integration and processing tasks at the individual unit or node level (when used in a network), allowing the establishment of networks that are linked by advanced, robust and rapid communications routing and network topologies. The system (and its predecessors) was originally designed for earthquake risk mitigation, including earthquake early warning (EEW), rapid response actions, structural health monitoring, and site-effect characterization. For EEW, MPwise units are capable of on-site, decentralized, independent analysis of the recorded ground motion and based on this, may issue an appropriate warning, either by the unit itself or transmitted throughout a network by dedicated alarming procedures. The multi-sensor capabilities of the system allow it to be instrumented with standard strong- and weak-motion sensors, broadband sensors, MEMS (namely accelerometers), cameras, temperature and humidity sensors, and GNSS receivers. In this work, the MPwise hardware, software and communications schema are described, as well as an overview of its possible applications. While focusing on earthquake risk mitigation actions, the aim in the future is to expand its capabilities towards a more multi-hazard and risk mitigation role. Overall, MPwise offers considerable flexibility and has great potential in contributing to natural hazard risk mitigation.
Article
Full-text available
Optical motion tracking technologies using surveillance cameras have been applied to the measurement of earthquake induced motions and detection of seismic damage to interior elements to reduce risks in a building. We have developed methods for detecting structural collapse and overturning of interior elements. Shaking table tests were carried out to evaluate the optical system's ability to detect damage. The movements of two-storey frame structures and rectangular pieces of wood on the shaking table were monitored using conventional video cameras. Results from this exploratory study showed that optical measurements are promising in terms of capturing small to large deformations and identifying various types of seismic damage.
Chapter
Full-text available
Stroke is the third leading cause of death in the western world and a major cause of disability in adults. The objective of this work was to investigate the use of AM-FM representations for the characterization of carotid plaques ultrasound images for the identification of individuals with asymptomatic carotid stenosis at risk of stroke. To characterize the plaques using AM-FM features, we compute (i) the instantaneous amplitude, (ii) the instantaneous frequency magnitude and (iii) the instantaneous frequency angle in order to capture directional information. For each AM-FM feature, we compute the histograms over the plaque regions. The statistical K-nearest neighbour (KNN) classifier was implemented for the classification of the carotid plaques into symptomatic or asymptomatic using the leave-one-out methodology. Tests were carried out on a dataset of 274 carotid plaque ultrasound images (137 symptomatic + 137 asymptomatic), which showed that the AM-FM features performed slightly better than the traditional texture features and gave better results than simple histogram. Best results were obtained when a combination of the three AM-FM representations was used reaching a classification success rate of 71.5%.
Book
55% new material in the latest edition of this "must-have? for students and practitioners of image & video processing!This Handbook is intended to serve as the basic reference point on image and video processing, in the field, in the research laboratory, and in the classroom. Each chapter has been written by carefully selected, distinguished experts specializing in that topic and carefully reviewed by the Editor, Al Bovik, ensuring that the greatest depth of understanding be communicated to the reader. Coverage includes introductory, intermediate and advanced topics and as such, this book serves equally well as classroom textbook as reference resource. - Provides practicing engineers and students with a highly accessible resource for learning and using image/video processing theory and algorithms - Includes a new chapter on image processing education, which should prove invaluable for those developing or modifying their curricula - Covers the various image and video processing standards that exist and are emerging, driving today's explosive industry - Offers an understanding of what images are, how they are modeled, and gives an introduction to how they are perceived - Introduces the necessary, practical background to allow engineering students to acquire and process their own digital image or video data - Culminates with a diverse set of applications chapters, covered in sufficient depth to serve as extensible models to the reader's own potential applications About the Editor... Al Bovik is the Cullen Trust for Higher Education Endowed Professor at The University of Texas at Austin, where he is the Director of the Laboratory for Image and Video Engineering (LIVE). He has published over 400 technical articles in the general area of image and video processing and holds two U.S. patents. Dr. 
Bovik was Distinguished Lecturer of the IEEE Signal Processing Society (2000), received the IEEE Signal Processing Society Meritorious Service Award (1998), the IEEE Third Millennium Medal (2000), and twice was a two-time Honorable Mention winner of the international Pattern Recognition Society Award. He is a Fellow of the IEEE, was Editor-in-Chief, of the IEEE Transactions on Image Processing (1996-2002), has served on and continues to serve on many other professional boards and panels, and was the Founding General Chairman of the IEEE International Conference on Image Processing which was held in Austin, Texas in 1994.
Article
A number of optical devices are commercially available now for measurement. Some of them, such as laser devices and still and video cameras with high resolution, may be used effectively and efficiently for measuring structural displacement. This paper presents an approach to this type of application using a charge-coupled-device camera. It can acquire digitized images for low cost, to be used to identify structural displacement via digital signal processing. It is shown that this approach's resolution for point measurement is comparable with traditional sensors such as dial gages. Furthermore, it offers a new capability of displacement measurement for a large number of points on a structure, and it can provide spatially intensive displacement data. This kind of data may be used for structural damage detection and health monitoring, as suggested and demonstrated herein.
Article
Traditional sensors, such as accelerometers and displacement transducers, are widely used in laboratory and field experiments in earthquake engineering to measure the motions of both structural and nonstructural components. Such sensors, however, must be physically attached to the structure and require cumbersome cabling and configurations and substantial time for setup. For reduced-scale experiments, these conventional sensors may substantially alter the dynamic properties of the system by changing the mass, stiffness, and damping properties of the specimen. Moreover, it is very difficult with traditional sensors to capture the three-dimensional motions of light or oddly shaped components such as microscopes, computers, or other building contents. In this paper, the methodology of light-based motion. tracking is applied to the measurement of the three-dimensional motions of various types of equipment and building contents commonly found in biological and chemical science laboratories. The system is comprised of six high-speed, high-resolution charge-coupled-device (CCD) cameras outfitted with a cluster of red-light emitting diodes (LEDs). Retroreflective (passive) spherical markers discretely located in a scene are tracked intime and used to describe the behavior of various types of equipment and contents subjected to a range of earthquake motions. Results from this study show that the nonintrusive, light-based approach is extremely promising in terms of its ability to capture relative displacements in three orthogonal directions and complementary rotations.
The largest three-dimensional shake table is now being constructed in Hyogo prefecture, Japan, to elucidate the fracturing process of structures, buildings, and soils. However, it is difficult to measure the fracturing process of structures during a severe earthquake using conventional methods and equipment, because three-dimensional measurement of large dynamic displacements, in excess of the structure's elastic range, is the key to a solution and cannot be obtained with vibration pick-ups such as displacement transducers. This study undertakes R&D of a new measurement method to clarify the fracturing process of structures by applying a so-called motion-capture technique, which has mainly been studied for modeling human actions and motions. This paper describes the concept of the system, outlines the prototype image-processing system developed in the study, and presents results from a shake table test of a five-story steel-structure model carried out to investigate the achievable accuracy of the system.
This paper describes a method for estimating the instantaneous frequency and instantaneous amplitude of real signals. The method is based on engineering ideas and uses frequency separation with a filter similar to those found in radio devices. The method approximates the analytic signal, and the estimated amplitude and frequency are practically the same as those given by the analytic signal. The method is simple, fast, precise, robust to noise, easy to implement, and free of parasitic cross-terms, and its results compare very favourably with those of other methods for measuring frequency-modulated signals, with or without associated amplitude modulation. Examples are given, both of mathematical signals and of real-world signals (a bat echolocation signal), showing that the method can be used for precise analysis of a very large variety of signals.
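For orientation, the analytic-signal baseline that the method above approximates can be computed directly with a Hilbert transform. A minimal sketch on a synthetic amplitude-modulated tone (the signal parameters are arbitrary assumptions, and `scipy.signal.hilbert` stands in for the paper's filter-based approximation):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
carrier_hz = 50.0
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 3.0 * t)  # slow AM, always > 0
x = envelope * np.cos(2 * np.pi * carrier_hz * t)

z = hilbert(x)                                # analytic signal x + j*H{x}
inst_amp = np.abs(z)                          # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)

# Away from the record's edges both estimates match the true values.
mid = slice(100, 900)
print(float(np.max(np.abs(inst_amp[mid] - envelope[mid]))))  # small
print(float(np.median(inst_freq)))                           # ~50 Hz
```

Because the envelope varies far more slowly than the carrier, the analytic signal factors cleanly into amplitude and phase, which is exactly the regime where such estimators work well.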
In this paper, we present a novel multi-modal histogram thresholding method that requires no a priori knowledge of the number of clusters to be extracted. The proposed method combines regularization and statistical approaches. By converting the histogram thresholding problem into a Gaussian mixture density modeling problem, threshold values can be estimated precisely from the parameters of each contiguous cluster. Computational complexity is greatly reduced because our method does not employ conventional iterative parameter refinement; instead, an optimal parameter estimation interval is defined before the estimation procedure. This predefined estimation interval reduces computation time, whereas other histogram-decomposition-based methods search the entire feature space to locate an estimation interval for each candidate cluster. Experimental results with both simulated data and real images demonstrate the robustness of our method.
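A toy illustration of the predefined-interval idea: estimate each cluster's Gaussian parameters only inside a fixed interval (no iterative EM) and place the threshold at the crossing of the two fitted densities. All names, intervals, and the synthetic data below are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def bimodal_threshold(samples, lo_win, hi_win):
    """Two-cluster sketch: fit mean/std/weight of each Gaussian from
    samples inside a predefined interval, then threshold where the two
    fitted densities intersect between the means."""
    def fit(win):
        sel = samples[(samples >= win[0]) & (samples <= win[1])]
        return sel.mean(), sel.std(), len(sel) / len(samples)
    (m1, s1, w1), (m2, s2, w2) = fit(lo_win), fit(hi_win)

    def dens(x, m, s, w):                     # weighted normal pdf
        return w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    # Scan the gap between the two means for the density crossing.
    xs = np.linspace(m1, m2, 1000)
    return xs[np.argmin(np.abs(dens(xs, m1, s1, w1) - dens(xs, m2, s2, w2)))]

rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(60, 10, 5000),    # dark cluster
                          rng.normal(160, 12, 5000)])  # bright cluster
t = bimodal_threshold(samples, (30, 90), (130, 190))
print(float(t))   # falls between the two cluster means
```

With unequal spreads the crossing sits slightly off the midpoint, toward the tighter cluster, which is what a mixture-based threshold captures and a simple midpoint rule would miss.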
55% new material in the latest edition of this "must-have" for students and practitioners of image and video processing! This Handbook is intended to serve as the basic reference point on image and video processing, in the field, in the research laboratory, and in the classroom. Each chapter has been written by carefully selected, distinguished experts specializing in that topic and carefully reviewed by the Editor, Al Bovik, ensuring that the greatest depth of understanding is communicated to the reader. Coverage includes introductory, intermediate, and advanced topics; as such, the book serves equally well as a classroom textbook and a reference resource.
- Provides practicing engineers and students with a highly accessible resource for learning and using image/video processing theory and algorithms
- Includes a new chapter on image processing education, which should prove invaluable for those developing or modifying their curricula
- Covers the various image and video processing standards that exist and are emerging, driving today's explosive industry
- Offers an understanding of what images are and how they are modeled, and gives an introduction to how they are perceived
- Introduces the necessary, practical background to allow engineering students to acquire and process their own digital image or video data
- Culminates with a diverse set of applications chapters, covered in sufficient depth to serve as extensible models for the reader's own potential applications
About the Editor: Al Bovik is the Cullen Trust for Higher Education Endowed Professor at The University of Texas at Austin, where he directs the Laboratory for Image and Video Engineering (LIVE). He has published over 400 technical articles in the general area of image and video processing and holds two U.S. patents. Dr. Bovik was a Distinguished Lecturer of the IEEE Signal Processing Society (2000), received the IEEE Signal Processing Society Meritorious Service Award (1998) and the IEEE Third Millennium Medal (2000), and was a two-time Honorable Mention winner of the international Pattern Recognition Society Award. He is a Fellow of the IEEE, was Editor-in-Chief of the IEEE Transactions on Image Processing (1996-2002), has served and continues to serve on many other professional boards and panels, and was the Founding General Chairman of the IEEE International Conference on Image Processing, held in Austin, Texas, in 1994.
"July, 2005." Thesis (Ph. D.)--University of New Mexico, 2005. Includes bibliographical references (leaves 123-128).