A Novel Visual Perception Enhancement Algorithm
for High-speed Railway in the Low Light Condition
Guohao Lyu, Hua Huang, Hui Yin, Siwei Luo and Xinlan Jiang
Beijing Key Lab of Traffic Data Analysis and Mining
Beijing Jiaotong University, Beijing 100044, China
Email: lvguohao@bjtu.edu.cn
Abstract—With the rapid development of high-speed railway, railway safety has become extremely important. Video is a direct and effective means of monitoring the railway environment, but it is easily affected by weather conditions and ambient light. Therefore, security risks are hidden in the low light condition and are difficult to identify. In this paper, we propose a new visual perception enhancement algorithm (VPEA) based on the Illuminance-Reflectance Model for low light images; it can be used to produce a clear image from train-borne video in the low light condition. The experimental results show that VPEA achieves good practical results in image enhancement, and VPEA will be applied to railway safety checks and railway facility inspection in the future.
Keywords—Retinex; Low Light Image; Visual Perception Enhancement; High-speed Railway
I. INTRODUCTION
High speed railways have been growing rapidly all over the world in recent years. High speed trains run in closed environments fenced by guardrails for safety assurance, and any unexpected matter may lead to disastrous consequences, such as missing communication units and bolts on the track, broken fences, or unpredicted objects falling into the rail area or hanging on the wires above the train. Ensuring that high speed trains stay free from accidents is an extremely urgent task.
A moving camera installed in the front of a high-speed patrol train[1][2][3] is used for railway inspection tasks, as shown in Fig.1.
Fig. 1. The moving camera installed in the front of a high-speed patrol train
Such a camera is more economical, easier to implement, and more comprehensive in data acquisition, but its video is easily affected by weather conditions and ambient light. As a result, potential safety risks are hidden in the low light condition and difficult to identify, as shown in Fig.2.
In this paper, we propose a visual perception enhancement algorithm (VPEA) based on the Illuminance-Reflectance Model for low light images; it can be used to produce a clear image from the train-borne video in the low light condition.
(a) Image taken in the evening (b) Image taken in the tunnel
Fig. 2. The railway monitoring images in the low light condition
The rest of the paper is organized as follows. Related works on the Illuminance-Reflectance Model are described in Section 2. Section 3 formulates VPEA based on the Illuminance-Reflectance Model for low light images. Experimental results are demonstrated in Section 4. Section 5 concludes the paper and outlines future work.
II. RELATED WORKS ON ILLUMINANCE-REFLECTANCE MODEL
In low light conditions, the HVS (Human Visual System) is able to generate visual images of the environment or of objects. That means humans have the ability to perceive the surrounding environment and color no matter what the light condition is. It is difficult to pinpoint exactly which parts of the HVS cause this phenomenon. In 1971, Land et al. proposed the Retinex theory[4] in order to simulate the perception process of the HVS. The word Retinex is a compound of retina and cortex. Some scholars have applied this theory to image enhancement and obtained good results.
The enhancement algorithms based on the Retinex theory treat the image as the product of reflectance and illuminance. The reflectance corresponds to the essential attributes of the image, while the illuminance corresponds to outside influences on the image. These algorithms aim at estimating the reflectance in some way and then recovering the original characteristics of the image by removing the influence of the illuminance. Following the Retinex theory, Jobson et al.[5][6] proposed the center/surround Retinex algorithm to obtain the reflectance of the image. In the center/surround Retinex algorithm, the convolution between the original image and a low-pass filter is used to obtain the illuminance. The reflectance can then be obtained by division, or by subtraction in the
logarithm domain[5].
ICSP2014 Proceedings
978-1-4799-2186-7/14/$31.00 ©2014 IEEE
Much research has been done on the selection of the low-pass filter, such as the inverse square function, Gaussian function[5], [6], [7], [9], bilateral filter[10] and NL filter[11].
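As a concrete illustration of the center/surround scheme described above, the single-scale Retinex takes the difference, in the logarithm domain, between the image and a Gaussian-blurred surround. The sketch below is ours, not the implementation of [5]; the sigma, radius and eps values are assumptions for illustration:

```python
import numpy as np

def single_scale_retinex(i0, sigma=30.0, radius=90, eps=1.0):
    """SSR sketch: reflectance = log(image) - log(image * Gaussian surround)."""
    # Separable, unit-sum Gaussian kernel for the low-pass surround.
    ax = np.arange(-radius, radius + 1)
    g1 = np.exp(-ax**2 / (2 * sigma**2))
    g1 /= g1.sum()
    # Blur rows, then columns (separable 2D convolution).
    blurred = np.apply_along_axis(lambda m: np.convolve(m, g1, mode='same'), 1, i0)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, g1, mode='same'), 0, blurred)
    # Subtraction in the logarithm domain; eps guards log(0).
    return np.log(i0 + eps) - np.log(blurred + eps)
```

For a region of uniform intensity, the surround equals the image and the reflectance is zero, which is why SSR alone compresses flat areas and motivates the multi-scale variant (MSR) discussed next.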
Since a single filter, namely SSR (Single Scale Retinex)[5], cannot estimate the illumination accurately, Jobson proposed to use a weighted combination of multiple filters to estimate the illumination, namely MSR (Multi Scale Retinex)[6]. But MSR easily leads to the halo phenomenon. Meylan et al.[8] used an adaptive filtering method to estimate the illumination: they first detected the high-contrast edges of the image, and then changed the shape of the filter near the edges to reduce diffusion. In this way, the halo phenomenon can be eliminated more effectively. Tao et al. successively proposed the LDME algorithm (Luma Dependent Nonlinear Enhancement)[7] and the IRME algorithm (Illuminance-Reflectance Model for Enhancement)[9], both implemented on FPGA. Medioni et al. proposed an adaptive Retinex image enhancement algorithm based on active perception, using an NL (Nonlocal Means) filter[11][12], which makes the Retinex algorithm applicable to both underexposed and overexposed images. From the above discussion, center/surround Retinex algorithms can be divided into two types:
1. Keeping only the reflectance and removing the illuminance, such as SSR and MSR.
2. Processing the illuminance, and synthesizing the new image from the processed illuminance and the original reflectance, such as LDME and IRME.
In this paper, we propose a new image enhancement algorithm based on the Illuminance-Reflectance Model. It processes the illuminance to achieve dynamic range compression while retaining, or even enhancing, the key visual features. In addition, the proposed method only processes the luminance/intensity information of images, so incorrect color rendition can be avoided and the halo phenomenon is also eliminated.
III. VISUAL PERCEPTION ENHANCEMENT ALGORITHM FOR LOW LIGHT IMAGE
According to the illuminance-reflectance model mentioned
in the above section, a visual perception enhancement algo-
rithm for low light image called VPEA is proposed in this
section. The flow chart of VPEA is shown in Fig.3.
The details of VPEA are as follows:
A. Color transformation and illuminance estimation
In order to reduce the color cast of the enhancement result and to increase the processing speed, the color image is first transformed into a gray image using Eq(1):
i0 = 0.30·ir + 0.59·ig + 0.11·ib    (1)
where ir, ig and ib are the three primary color (RGB) channels of the original image, and i0 is the resulting gray image.
There is a lot of research[4]-[8][12] on how to accurately estimate the illuminance l; we choose a simple but effective method among them:
l(x, y) = i0(x, y) ∗ w(x, y)    (2)
where w(x, y) = P · exp(−(x² + y²)/σ²) is a Gaussian function and P is a normalization parameter which makes ∬ P · exp(−(x² + y²)/σ²) dx dy = 1, where σ controls the size of the Gaussian convolution kernel.
Fig. 3. The flow chart of VPEA
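The gray conversion of Eq(1) and the illuminance estimate of Eq(2) can be sketched with NumPy (an illustration under our own parameter choices, not the authors' Matlab code; the unit-sum normalization of the separable kernel plays the role of the parameter P):

```python
import numpy as np

def estimate_illuminance(rgb, sigma=15.0, radius=45):
    """Eq (1): gray conversion; Eq (2): Gaussian illuminance estimate."""
    # Eq (1): weighted sum of the R, G, B channels.
    i0 = 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    # Gaussian exp(-(x^2 + y^2)/sigma^2) is separable into two 1D factors;
    # normalizing the 1D factor to unit sum gives a unit-sum 2D kernel (P).
    ax = np.arange(-radius, radius + 1)
    g1 = np.exp(-ax**2 / sigma**2)
    g1 /= g1.sum()
    # Eq (2): convolve rows, then columns.
    l = np.apply_along_axis(lambda m: np.convolve(m, g1, mode='same'), 1, i0)
    l = np.apply_along_axis(lambda m: np.convolve(m, g1, mode='same'), 0, l)
    return i0, l
```

The separable form keeps the cost linear in the kernel radius, which matters for the large surrounds typical of Retinex-style illuminance estimates.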
B. Illuminance processing
For the illuminance, we use gamma correction to adjust the dynamic range, which is expressed as Eq(3):
L = (l / 255)^γ · 255    (3)
where l is the illuminance and L is the dynamic range adjustment result of the illuminance. γ is the exponential coefficient, obtained by the following equation:
γ = γl                                           if Im ≤ Th1
γ = (γh − γl) · (Im − Th1) / (Th2 − Th1) + γl    if Th1 < Im < Th2
γ = γh                                           if Im ≥ Th2    (4)
where Im is the mean of the intensity image i0. γ is a piecewise linear function with γl, γh, Th1 and Th2 as empirical parameters, as shown in Fig.4. Following Eq(4), γ = γl is applied to dark images and γ = γh to bright images, while for images in between γ is adaptively adjusted according to the mean of the image. A set of curve shapes of γ is provided in Fig.5.
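The adaptive gamma correction of Eqs(3)-(4) can be sketched as follows (the values of γl, γh, Th1 and Th2 are assumptions for illustration, since the paper describes them only as empirical parameters):

```python
import numpy as np

def adaptive_gamma(l, i0, gamma_l=0.5, gamma_h=1.0, th1=50.0, th2=150.0):
    """Gamma-correct the illuminance l (Eq 3), choosing gamma by Eq (4)."""
    im = i0.mean()                    # mean of the intensity image
    if im <= th1:                     # dark image -> small gamma, brightens
        gamma = gamma_l
    elif im >= th2:                   # bright image -> large gamma
        gamma = gamma_h
    else:                             # linear interpolation in between
        gamma = (gamma_h - gamma_l) * (im - th1) / (th2 - th1) + gamma_l
    return (l / 255.0) ** gamma * 255.0   # Eq (3)
```

With gamma_l < 1, a dark image (low mean) is raised toward mid-gray, while a bright image passes through almost unchanged.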
Although adjusting the dynamic range of the illuminance greatly improves the global visual effect, the local contrast of the illuminance is reduced because of the stretching or compressing effect of the gamma correction, so the visual quality of the image is degraded. In order to improve the image contrast, Eq(5) is proposed for contrast adjustment:
LE = L + (L − L̄) / (LMax − LMin) · L    (5)
Fig. 4. The piecewise linear function γ
Fig. 5. The different γfunctions for adjusting the dynamic range
where L̄ is the mean of the illuminance L after gamma correction; LMax and LMin are the maximum and minimum values of L; LE is the contrast-adjusted illuminance. The value of (L − L̄) / (LMax − LMin) decides the contrast adjustment and ranges from −1 to 1.
The final enhanced intensity image is given by Eq(6):
I = LE · r    (6)
where r = i0 / l is the reflectance of the illuminance-reflectance model.
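The contrast adjustment of Eq(5) and the recomposition of Eq(6) might look like this (a sketch; it assumes the reflectance r = i0 / l, and the small epsilon guards against division by zero, which the paper does not spell out):

```python
import numpy as np

def enhance_intensity(l, L, i0, eps=1e-6):
    """Contrast-adjust the gamma-corrected illuminance (Eq 5)
    and recombine it with the reflectance (Eq 6)."""
    # Eq (5): boost pixels above the mean, suppress pixels below it.
    LE = L + (L - L.mean()) / (L.max() - L.min() + eps) * L
    # Reflectance from the illuminance-reflectance model i0 = l * r.
    r = i0 / (l + eps)
    return LE * r                     # Eq (6)
```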
C. Color cast correction and color restoration
The enhanced intensity image needs to be transformed back from a gray image into a color image[9], as Eq(7) shows:
Ii = (I / i0) · ii,  i = R, G, B    (7)
where i0 is the gray image transformed from the original image by Eq(1), I is the enhanced intensity image, and ii is one primary color (RGB) channel of the original image.
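Eq(7) amounts to scaling each original RGB channel by the intensity gain (a sketch; the epsilon guard is our assumption, added to avoid dividing by zero in very dark pixels):

```python
import numpy as np

def restore_color(rgb, i0, I, eps=1e-6):
    """Eq (7): scale each original RGB channel by the gain I / i0."""
    gain = I / (i0 + eps)             # per-pixel intensity gain
    return rgb * gain[..., np.newaxis]  # broadcast the gain over R, G, B
```

Because all three channels share the same gain, hue ratios are preserved; the color cast correction below then handles any residual imbalance.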
Considering that the enhanced image easily exhibits color cast, we propose Eq(8) to correct the color cast of the enhanced image:
midi = (1 − Īi / Ī) · Ii + Ii,  i = R, G, B    (8)
where Īi is the mean of one primary color channel of the image I, and Ī is the mean of the image I, given by Eq(9):
Ī = (ĪR + ĪG + ĪB) / 3    (9)
In Eq(8), Īi / Ī measures the proportion of each primary color in the image. If (1 − Īi / Ī) is negative, the primary color is too large and the overall image is biased towards this primary color, so the color is reduced to midi; if (1 − Īi / Ī) is positive, the primary color is too small and the overall image is biased towards the other two primaries, so the color is increased to midi.
Considering that the color-restored image should satisfy the grey world hypothesis, we further adjust the color image with Eq(10), based on the result of Eq(8):
finali = midi · avgGray / avgi,  i = R, G, B    (10)
where avgi is the mean of one primary color channel of the image mid, and avgGray is the mean of the image mid, given by Eq(11):
avgGray = (avgR + avgG + avgB) / 3    (11)
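The color cast correction of Eqs(8)-(9) and the grey-world adjustment of Eqs(10)-(11) can be sketched together (a minimal illustration of the two steps as described; variable names are ours):

```python
import numpy as np

def correct_color_cast(I_rgb):
    """Eq (8)-(9): push each channel toward the global mean."""
    channel_means = I_rgb.reshape(-1, 3).mean(axis=0)   # I_bar_i per channel
    global_mean = channel_means.mean()                  # I_bar, Eq (9)
    weights = 1.0 - channel_means / global_mean         # (1 - I_bar_i / I_bar)
    return weights * I_rgb + I_rgb                      # Eq (8)

def gray_world_adjust(mid_rgb, eps=1e-6):
    """Eq (10)-(11): rescale channels to satisfy the grey world hypothesis."""
    avg = mid_rgb.reshape(-1, 3).mean(axis=0)           # avg_i per channel
    avg_gray = avg.mean()                               # Eq (11)
    return mid_rgb * (avg_gray / (avg + eps))           # Eq (10)
```

After the second step the three channel means coincide, which is exactly the grey world condition the text requires.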
IV. EXPERIMENT AND DISCUSSION
In this section, we compare VPEA with Histogram Equalization, MSRCR and IRME on the HSRI (high-speed railway inspection) image dataset in the low light condition (images collected by us). The HSRI dataset contains 2000 images captured in the evening or in tunnels. Histogram Equalization is performed using the basic function in Matlab. The commercial software PhotoFlair is used to implement MSRCR. IRME is performed using Matlab code written according to Reference [9]. In our experiments, a PC with a 2.13GHz dual-core CPU and 2GB RAM is used as the experimental platform, and VPEA in Matlab takes about 0.23s to process an 800×600 image.
A. Qualitative evaluation
Fig.6 presents some results of VPEA, Histogram Equalization, MSRCR and IRME on the HSRI dataset. The results of Histogram Equalization are the worst. The results of MSRCR exhibit many halo artifacts, especially in enhanced images taken in tunnels. IRME performs better than MSRCR, with a much more natural look. However, both MSRCR and IRME create incorrect colors in some areas. VPEA performs the best of the four enhancement methods, with a much more natural look, and its color adjustment method can effectively remove the color cast.
B. Quantitative evaluation
Evaluating an image enhancement method is usually difficult due to the subjectivity of human perception. In this paper, we evaluate VPEA with a quantitative method based on the visually optimal region[13]. In this method, the region of the 2D statistical space where the mean of an image lies in [100, 200] and the standard deviation of an image lies in [35, 80] is defined as the Visually Optimal Region (VOR). We evaluate the overall performance of the four methods based on the experimental results on the HSRI dataset, as shown in Table I. The values of mean (Mean) and standard deviation (Std) are averages over all images in the dataset, and the value of VOR is the proportion of enhanced images that fall in the VOR among all images in the dataset. All four methods can enhance the original images. The results of Histogram Equalization are the worst of the four enhancement algorithms. Compared with VPEA, MSRCR has a much larger standard deviation but fewer images fall in the VOR, because the excessive enhancement in MSRCR increases the standard deviation but makes the enhanced image less natural. IRME works better than MSRCR but worse than VPEA. This shows the advantage of the higher adaptability of our method: the parameters of IRME are constant and cannot change with different images, while VPEA adjusts its parameters adaptively according to the global characteristics of each image. Moreover, the color adjustment method in VPEA makes the colors of the enhanced image more natural and closer to the true colors. That is why all the visibility indicators of VPEA are better than those of Histogram Equalization, MSRCR and IRME.
(a) Original images (b) Histogram Equalization (c) MSRCR (d) IRME (e) VPEA
Fig. 6. The comparison of experimental results on the HSRI dataset
TABLE I. STATISTICAL RESULTS OF DIFFERENT ENHANCEMENT ALGORITHMS
HSRI | Original | Histogram Equalization | MSRCR | IRME  | VPEA
Mean | 21.3     | 58.2                   | 84.4  | 102.3 | 115.7
Std  | 33.5     | 46.4                   | 71.9  | 38.3  | 41.2
VOR  | 0%       | 10.3%                  | 15.4% | 64.6% | 72.4%
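The VOR criterion can be computed directly from image statistics (a sketch using the [100, 200] mean and [35, 80] standard deviation bounds given in the text):

```python
import numpy as np

def in_vor(img, mean_lo=100.0, mean_hi=200.0, std_lo=35.0, std_hi=80.0):
    """Check whether an image falls inside the Visually Optimal Region."""
    return (mean_lo <= img.mean() <= mean_hi) and (std_lo <= img.std() <= std_hi)

def vor_rate(images):
    """Proportion of images in a dataset that fall inside the VOR."""
    return sum(bool(in_vor(im)) for im in images) / len(images)
```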
V. CONCLUSION
Considering the influence of the low light condition, we propose a visual perception enhancement algorithm (VPEA) based on the illuminance-reflectance model. The experimental results demonstrate that VPEA has a good practical effect in image enhancement. In the future, we will focus on further improving the efficiency of the enhancement algorithm to realize real-time image processing.
ACKNOWLEDGMENT
This work is supported by the National Natural Science Foundation of China (61273364), the Fundamental Research Funds for the Central Universities (2014JBM037) and a Linfield Faculty Development Grant.
REFERENCES
[1] C. Alippi, E. Casagrande, F. Scotti, “Composite real-time processing for
track profile measurement,” IEEE Trans. Instrumentation and Measure-
ment, vol.49, no.3, pp.559-564, 2000.
[2] J. Lin, S. W. Luo, Q. Y. Li, "Real-time rail head surface defect detection: a geometrical approach," IEEE Int. Symp. Industrial Electronics, vol.15, no.8, pp.769-774, 2009.
[3] A. Rubaai, “A neural-net-based device for monitoring amtrak railroad
track system,” IEEE Trans. Industry Applications, vol.39, no.2, pp.374-
381, 2003.
[4] E. H. Land, J. J. McCann, “Lightness and retinex theory,” Journal of the
Optical society of America vol.61, no.1, pp.1-11, 1971.
[5] D. J. Jobson, Z.-u. Rahman, G. A. Woodell, “Properties and performance
of a center/surround retinex,” IEEE Trans. on Image Processing, vol.6,
no.3, pp.451-462, 1997.
[6] D. J. Jobson, Z.-u. Rahman, G. A. Woodell, “A multiscale retinex for
bridging the gap between color images and the human observation of
scenes,” IEEE Trans. on Image Processing, vol.6, no.7, pp.965-976, 1997.
[7] L. Tao, R. Tompkins, V. K. Asari, “An illuminance-reflectance model for
nonlinear enhancement of color images,” IEEE Conference on Computer
Vision and Pattern Recognition Workshops, IEEE, 2005, pp. 159-159.
[8] L. Meylan, S. Susstrunk, "High dynamic range image rendering with a retinex-based adaptive filter," IEEE Trans. on Image Processing, vol.15, no.9, pp.2820-2830, 2006.
[9] H. T. Ngo, M. Z. Zhang, L. Tao, V. K. Asari, “Design of a digital
architecture for real-time video, enhancement based on illuminance-
reflectance model,” IEEE Int. Midwest Symposium on Circuits and
Systems, Vol. 1, IEEE, 2006, pp. 286-290.
[10] S. Paris, P. Kornprobst, J. Tumblin, F. Durand, “A gentle introduction
to bilateral filtering and its applications,” ACM SIGGRAPH 2007, Vol.1,
ACM, 2007, pp. 458-462.
[11] A. Choudhury, G. Medioni, “Perceptually motivated automatic sharp-
ness enhancement using hierarchy of non-local means,” IEEE Int. Con-
ference on Computer Vision Workshops, IEEE, 2011, pp. 730-737.
[12] A. Choudhury, G. Medioni, “Perceptually motivated automatic color
contrast enhancement,” IEEE Int. Conference on Computer Vision Work-
shops , IEEE, 2009, pp. 1893-1900.
[13] D. J. Jobson, Z. u. Rahman, G. A.Woodell, “Statistics of visual
representation,” Int. Society for Optics and Photonics, SPIE, 2002, pp.
25-35.