Wide-Angle Lens Distortion Correction Using
Division Models
Miguel Alemán-Flores, Luis Alvarez, Luis Gomez, and Daniel Santana-Cedrés
CTIM (Centro de Tecnologías de la Imagen),
Universidad de Las Palmas de Gran Canaria, Spain
{maleman,lalvarez,lgomez,dsantana}@ctim.es
http://www.ctim.es
Abstract. In this paper we propose a new method to automatically correct wide-angle lens distortion from the distorted lines generated by the projection on the image of 3D straight lines. We have to deal with two major problems: on the one hand, wide-angle lenses produce a strong distortion, which makes the detection of distorted lines a particularly difficult task. On the other hand, the usual single-parameter polynomial lens distortion model is not able to manage such a strong distortion. We propose an extension of the Hough transform by adding a distortion parameter to detect the distorted lines, and division lens distortion models to manage wide-angle lens distortion. We present some experiments on synthetic and real images to show the ability of the proposed approach to automatically correct this type of distortion. A comparison with a state-of-the-art method is also included to show the benefits of our method.

Keywords: lens distortion, wide-angle lens, Hough transform, line detection
1 Introduction
Wide-angle lenses are especially well suited for some computer vision tasks, such as real-time tracking, surveillance, close-range photogrammetry, or even simple aesthetic purposes. The main advantage these lenses offer is that they provide a wide field of view of up to 180 degrees. However, the strong distortion produced by these lenses may cause severe problems, not only visually, but also for further processing in applications such as object detection, recognition, and classification.
To model the lens distortion, we consider radial distortion models given by the expression:

$$\begin{pmatrix} \hat{x} - x_c \\ \hat{y} - y_c \end{pmatrix} = L(r) \begin{pmatrix} x - x_c \\ y - y_c \end{pmatrix}, \qquad (1)$$

where $(x, y)$ is the original (distorted) point, $(\hat{x}, \hat{y})$ is the corrected (undistorted) point, $(x_c, y_c)$ is the center of the camera distortion model, $L(r)$ is the function which defines the shape of the distortion model, and $r = \sqrt{(x - x_c)^2 + (y - y_c)^2}$.
According to the choice of function L(r), there exist two widely accepted types
of lens distortion models: the polynomial model and the division model.
The polynomial model, or simple radial distortion model [10], is formulated as:

$$L(r) = 1 + k_1 r^2 + k_2 r^4 + \cdots, \qquad (2)$$

where the set $k = (k_1, \ldots, k_{N_k})^T$ contains the distortion parameters estimated from image measurements, usually by means of non-linear optimization techniques. The two-parameter model is the usual approach, due to its simplicity and accuracy [12], [1]. Alvarez, Gomez and Sendra [1] proposed an algebraic method suitable for correcting significant radial distortion which is highly efficient in terms of computational cost. An on-line demo of the implementation of this algebraic method can be found in [2].
Camera calibration is a topic of interest in Computer Vision which, in order to be efficient, requires including the distortion in the camera model. Most calibration techniques rely on the linear pinhole camera and use a calibration pattern to establish a point-to-point correspondence between 2D and 3D points (see a review on camera calibration in [14]). In these applications, the polynomial model with only one distortion parameter, $k_1$ (the one-parameter model), achieves an accuracy of around 0.1 pixels in image space using lenses exhibiting large distortion [7], [8]. However, [7] also indicates that, for cases of strong radial distortion, the one-parameter model is not recommended.
The division model was initially proposed by [13], but it has received special attention after the more recent research by Fitzgibbon [9]. It is formulated as:

$$L(r) = \frac{1}{1 + k_1 r^2 + k_2 r^4 + \cdots}. \qquad (3)$$

The main advantage of the division model is that it requires fewer terms than the polynomial model for the case of severe distortion. Therefore, the division model seems to be more adequate for wide-angle lenses (see a recent review on distortion models for wide-angle lenses in [11]). Additionally, when using only one distortion parameter, its inversion is simpler, since it requires finding the roots of a second-degree polynomial instead of a third-degree polynomial. In fact, a single-parameter version of the division model is normally used.
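To make the two families of models concrete, the following minimal Python sketch (our own illustration, not code from the paper; the function names L_poly, L_div, and correct_point are ours) evaluates $L(r)$ for both models and applies Eq. (1) with a one-parameter division model to correct a single point:

    import math

    def L_poly(r, ks):
        # Polynomial model, Eq. (2): L(r) = 1 + k1*r^2 + k2*r^4 + ...
        return 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))

    def L_div(r, ks):
        # Division model, Eq. (3): L(r) = 1 / (1 + k1*r^2 + k2*r^4 + ...)
        return 1.0 / (1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks)))

    def correct_point(x, y, xc, yc, L, ks):
        # Radial model, Eq. (1): the corrected point lies on the ray through
        # the distortion center (xc, yc), scaled by L(r).
        r = math.hypot(x - xc, y - yc)
        s = L(r, ks)
        return xc + s * (x - xc), yc + s * (y - yc)

    # One-parameter division model: a negative k1 gives L(r) > 1, which
    # expands the image, as correcting barrel distortion requires.
    print(correct_point(900.0, 100.0, 512.0, 341.5, L_div, [-2.0e-7]))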
For both models, $L(r)$ can be estimated by considering that the projections of 3D straight lines in the scene must be 2D straight lines in the image, and minimizing the distortion error, which is given by the sum of the squares of the distances from the points to the lines [7].
Once a lens distortion model has been selected, we must decide how to apply it. Some methods rely on the human-supervised identification of some known straight lines in one or more images [3], [4], [15]. As a consequence of the human intervention, these methods are robust, independent of the camera parameters, and require no calibration patterns. However, for the same reason, these methods are slow and tedious when dealing with large sets of images.
New approaches have recently appeared to eliminate human intervention. In [6] and [5], an automatic radial distortion estimation method is discussed. This method works on a single image, and no human intervention or special calibration pattern is required. The method applies the one-parameter Fitzgibbon division model to estimate the distortion from a set of automatically detected non-overlapping circular arcs within the image. The main limitation of the method is that each circular arc has to be a collection of contiguous points in the image and, therefore, the method fails if there are no such arcs.
In this paper, we propose a new unsupervised method which makes use of the one-parameter division model to correct, from a single image, the radial distortion caused by a wide-angle lens. We first automatically detect the distorted lines within the image by adapting the usual Hough transform to our problem. The adaptation consists of embedding the radial distortion parameter into the Hough parametric space to tackle the detection of the longest arcs (distorted lines) within the image. From the improved Hough transform, we obtain a collection of distorted lines and an initial value for the distortion parameter $k_1$. Next, we optimize this parameter by minimizing the distance of the corrected line points to straight lines.
2 A Hough Space Including a Division Lens Distortion Parameter
In order to correct the distortion, we need to estimate the magnitude and sign of the distortion parameter and, to this aim, we can rely on the information provided by line primitives. Line primitives are searched for in the edge image, which is computed using any edge detector. One of the most commonly used techniques to extract lines from an edge image is the Hough transform, which searches for the most reliable candidates within a certain space. This space is usually a two-dimensional space which considers the possible values for the orientation and the distance to the origin of the candidate lines. Each edge point votes for those lines which could contain the point, and the lines which receive the highest scores are considered the most reliable ones.

However, this technique does not consider the influence of the distortion on the alignment of the edge points, so that straight lines are split into different segments due to the effect of the distortion. For this reason, we propose to include a new dimension in the Hough space, namely the distortion parameter. For practical reasons, instead of considering the distortion parameter value itself in the Hough space, we make use of the percentage of correction obtained with that value, which is given by:

$$p = (\tilde{r}_{max} - r_{max})/r_{max}, \qquad (4)$$

where $r_{max}$ is the distance from the center of distortion to the furthest point in the original image, and $\tilde{r}_{max}$ is the same distance after applying the distortion model. This way, the parameter $p$ is easier to interpret than the distortion parameter itself. Another advantage of using $p$ as an extra parameter in the Hough space is that it does not depend on the image resolution.
Fig. 1. Values of the maximum in the voting space with respect to the percentage of correction for (a) the synthetic image in Fig. 2 and (b) the real image in Fig. 3, using the modified Hough transform and the division lens distortion model.
When we use single-parameter division models, the relation between the parameter $p$ and $k_1$ is straightforward and is given by the expression:

$$k_1 = \frac{-p}{(1 + p)\, r_{max}^2}. \qquad (5)$$
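As an illustration, the conversion between $p$ and $k_1$ in Eqs. (4) and (5) can be written as follows (a minimal sketch of ours, assuming the distortion center is at the image center; the sign convention is that a positive $p$, i.e. an expansion of the image, corresponds to a negative $k_1$):

    import math

    def k1_from_p(p, r_max):
        # Eq. (5): with r~ = r / (1 + k1*r^2), requiring r~max = (1 + p)*rmax
        # yields k1 = -p / ((1 + p) * rmax^2).
        return -p / ((1.0 + p) * r_max ** 2)

    def p_from_k1(k1, r_max):
        # Eq. (4): p = (r~max - rmax) / rmax, with r~max = rmax / (1 + k1*rmax^2).
        return 1.0 / (1.0 + k1 * r_max ** 2) - 1.0

    # For a 1024 x 683 image with the distortion center at the image center:
    r_max = math.hypot(512.0, 341.5)
    k1 = k1_from_p(0.2, r_max)           # 20% of correction, as in Fig. 2
    print(k1, p_from_k1(k1, r_max))      # round trip recovers p = 0.2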
To reduce the number of points which vote and the number of lines that each edge point votes for, we first estimate the magnitude and orientation of the edge at every edge point. Only those points where the magnitude of the gradient is higher than a certain threshold are considered. Afterward, we select, for every value of $p$ and every edge point, those lines which, after being corrected according to the distortion model associated with this value of $p$, are close enough to the point and present an orientation which is similar to the orientation of the edge at that point. Furthermore, the vote of a point for a line depends on how close they are, and is given by $v = 1/(1 + d)$, where $d$ is the distance from the point to the line.
In the Hough space, the different lines may have different orientations and distances to the origin. Nevertheless, they should all have the same value of the distortion parameter (i.e. the same value of $p$), since it is a single value for the whole image. This means that we must not search for the best candidates individually, but for the value of $p$ which concentrates the largest number of significant lines.
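A highly simplified sketch of this three-dimensional voting scheme is given below (our own Python/NumPy illustration, not the paper's implementation, which restricts the candidate lines per point and votes more selectively; the function names and the tolerance values are ours). For each candidate $p$, the edge points are corrected with the corresponding $k_1$ and vote in a standard $(\theta, \rho)$ accumulator, with orientation gating and the weight $v = 1/(1 + d)$:

    import numpy as np

    def undistort(pts, center, k1):
        # One-parameter division model, Eqs. (1) and (3).
        d = pts - center
        r2 = (d ** 2).sum(axis=1, keepdims=True)
        return center + d / (1.0 + k1 * r2)

    def extended_hough(edge_pts, grad_angles, shape, p_values,
                       n_theta=180, n_rho=400, ang_tol=0.1):
        # Returns the candidate p whose (theta, rho) accumulator reaches
        # the highest maximum (the curve shown in Fig. 1).
        h, w = shape
        center = np.array([w / 2.0, h / 2.0])
        r_max = np.hypot(*center)
        rho_max = np.hypot(w, h)
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        bin_w = 2.0 * rho_max / (n_rho - 1)
        scores = []
        for p in p_values:
            k1 = -p / ((1.0 + p) * r_max ** 2)           # Eq. (5)
            pts = undistort(edge_pts, center, k1)
            acc = np.zeros((n_theta, n_rho))
            for (x, y), g in zip(pts, grad_angles):
                for i, th in enumerate(thetas):
                    # The gradient at an edge point is roughly normal to
                    # its line, so its angle should be close to theta (mod pi).
                    diff = abs(th - g) % np.pi
                    if min(diff, np.pi - diff) > ang_tol:
                        continue
                    rho = x * np.cos(th) + y * np.sin(th)
                    j = (rho + rho_max) / bin_w
                    jr = int(round(min(max(j, 0.0), n_rho - 1.0)))
                    d = abs(j - jr) * bin_w              # point-line distance
                    acc[i, jr] += 1.0 / (1.0 + d)        # weighted vote, v = 1/(1+d)
            scores.append(acc.max())
        return p_values[int(np.argmax(scores))]

Scanning, for instance, p_values = np.linspace(-0.1, 0.5, 61) traces the kind of curve shown in Fig. 1, and its maximizer gives the initial estimate of $p$.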
Figure 1 illustrates how the maximum of the voting score varies within the
Hough space according to the percentage of correction determined by the dis-
tortion parameter.
Once we have searched for the best value of $p$ within the three-dimensional Hough space, we refine it to obtain a more accurate approximation. To this aim, using standard optimization techniques (a gradient descent method), we minimize the following error function:
$$E(p) = \sum_{j=1}^{N_l} \sum_{i=1}^{N_p(j)} \mathrm{dist}(x_{ji}, line_j)^2, \qquad (6)$$

where $N_l$ is the number of lines, $N_p(j)$ is the number of points of the $j$th line, and $x_{ji}$ are the points associated with $line_j$. This error measures how distant the points are from their respective lines, so that the lower this value, the better the matching.

Fig. 2. Lens distortion correction for a test image: (a) lines detected using the Bukhari-Dailey method, (b) lines detected using the proposed method, (c) undistorted image using the Bukhari-Dailey method, and (d) undistorted image using the proposed method.
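The refinement step can be sketched as follows (again a minimal illustration of ours, not the paper's code: the line groups come from the extended Hough detection, a total-least-squares residual realizes the point-line distances of Eq. (6), and a crude normalized-step descent stands in for the paper's gradient descent; step sizes are arbitrary):

    import numpy as np

    def undistort(pts, center, k1):
        # One-parameter division model, Eqs. (1) and (3).
        d = pts - center
        return center + d / (1.0 + k1 * (d ** 2).sum(axis=1, keepdims=True))

    def E(p, line_groups, center, r_max):
        # Eq. (6): for each detected line, the sum of squared orthogonal
        # distances of the corrected points to their best-fit straight line
        # equals the smallest eigenvalue of the scatter matrix of the
        # centered points.
        k1 = -p / ((1.0 + p) * r_max ** 2)               # Eq. (5)
        err = 0.0
        for g in line_groups:
            q = undistort(g, center, k1)
            q = q - q.mean(axis=0)
            err += np.linalg.eigvalsh(q.T @ q)[0]
        return err

    def refine_p(p0, line_groups, center, r_max, step=1e-3, n_iter=200):
        # Descend on E(p) with a numerical derivative, taking fixed-size
        # steps in the direction of -dE/dp (so the result oscillates within
        # one step of the minimum; good enough for a sketch).
        p, h = p0, 1e-4
        for _ in range(n_iter):
            g = (E(p + h, line_groups, center, r_max)
                 - E(p - h, line_groups, center, r_max)) / (2.0 * h)
            p -= step * np.sign(g)
        return p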
3 Experimental Results
We have tested our model on some images showing wide-angle lens distortion, and we have compared the results with those obtained using the Bukhari-Dailey method [5]. We have used the code available on F. Bukhari's web page¹.

¹ http://www.cs.ait.ac.th/vgl/faisal/downloads.html
Fig. 3. Lens distortion correction for a real image: (a) lines detected using the Bukhari-Dailey method, (b) lines detected using the proposed method, (c) undistorted image using the Bukhari-Dailey method, and (d) undistorted image using the proposed method.
Figure 2 (1024 × 683 pixels) presents the results for a synthetic image. It consists of a calibration pattern in which the radial distortion has been simulated using a division model. The magnitude of such distortion is 20% ($p = 0.2$). Figure 2(a) shows the arcs detected using the Bukhari-Dailey method, whereas the lines detected using the proposed method (modified Hough transform and division model) are shown in Fig. 2(b). We have represented each line using a different color in order to identify them. In both cases, from the detected arcs or distorted lines, the distortion is estimated and the images are corrected. Figure 2(c) illustrates the result using the Bukhari-Dailey method, whereas Fig. 2(d) presents the corrected image using the proposed method. As observed, the Bukhari-Dailey method splits those lines where points are not contiguous, while the proposed method is able to identify a single line from different disconnected segments (see, for instance, how the edges of the squares in the same row or column are not associated using the Bukhari-Dailey method, but are properly linked using our method). Since longer lines provide more useful information than shorter ones, this results in a better distortion estimation for the proposed method.
Table 1. Number of lines, number of points, CPU time and percentage of correction for Figs. 2 and 3 using the Bukhari-Dailey method and the proposed method

Figure                      Measure           Bukhari-Dailey  Our method
Figure 2 (synthetic image)  No. of arcs       306             24
                            No. of points     11,255          9,033
                            CPU time (sec.)   79.611          7.844
                            % correction      0               19.9555
Figure 3 (real image)       No. of arcs       22              22
                            No. of points     2,894           3,651
                            CPU time (sec.)   57.41           3.209
                            % correction      63.3116         49.9186
Figure 3 (640 × 425 pixels)² illustrates the same experiment on a real image with a strong distortion. Figure 3(a) shows the arcs detected using the Bukhari-Dailey method. As observed, when different segments of the same line are visible, this method is not able to associate them (see, for instance, the lower green line, which is not continued on the right side of the image), but the proposed method associates them into the same line (see Fig. 3(b)). In this case, the corrected image obtained using the proposed method is also better than the one obtained by means of the Bukhari-Dailey method (compare Fig. 3(c) and Fig. 3(d)).

² US Air Force, CC0: http://commons.wikimedia.org/wiki/File:Usno-amc.jpg
Table 1 shows some quantitative results. If we analyze the results for the calibration pattern, we can observe two important advantages of our method. First, the number of lines which have been identified is 24, which is exactly the number of lines within the image. Nevertheless, the Bukhari-Dailey method extracts a much higher number of lines, since each one of them has been split into many segments. Second, the percentage of correction obtained with our method is very close to the real value (20%). In this case, the Bukhari-Dailey method does not provide a good result (0% correction), probably because the obtained segments are too small to properly estimate the distortion model. Concerning the total number of points of the arcs obtained by both methods, the Bukhari-Dailey method obtains more points (11,255 points in all) than our method (9,033 points), probably due to the spurious arcs extracted by the Bukhari-Dailey method.

For the real image, both methods have identified the same number of lines, but those obtained by our method are longer (3,651 points in all) and have not been split. Regarding the computational cost, in the experiments presented, our method is about 10 times faster than the one proposed by Bukhari and Dailey.
4 Conclusions

In this paper we propose a new method to automatically correct wide-angle lens distortion. The main novelty of the paper is the combination of an improved 3D Hough space, which includes the distortion parameter to detect distorted lines, and the division distortion model, which is able to manage the strong distortion produced by wide-angle lenses. We present some experiments which show that the proposed method properly corrects the lens distortion in the case of wide-angle lenses and outperforms the results obtained in [5], especially in the case where the distorted lines are not contiguous arcs in the image.

Acknowledgement: This work has been partially supported by the MICINN project reference MTM2010-17615 (Ministry of Science and Innovation, Spain).
References

1. Alvarez, L., Gomez, L., Sendra, R.: An algebraic approach to lens distortion by line rectification. Journal of Mathematical Imaging and Vision 39(1), 36–50 (2008)
2. Alvarez, L., Gomez, L., Sendra, R.: Algebraic lens distortion model estimation. Image Processing On Line, http://www.ipol.im (2010)
3. Alvarez, L., Gomez, L., Sendra, R.: Accurate depth dependent lens distortion models: an application to planar view scenarios. Journal of Mathematical Imaging and Vision 39(1), 75–85 (2011)
4. Brown, D.: Close-range camera calibration. Photogrammetric Engineering 37(8), 855–866 (1971)
5. Bukhari, F., Dailey, M.: Automatic radial distortion estimation from a single image. Journal of Mathematical Imaging and Vision 45(1), 31–45 (2012)
6. Bukhari, F., Dailey, M.N.: Robust radial distortion from a single image. In: Bebis, G., Boyle, R.D., Parvin, B., Koracin, D., Chung, R., Hammoud, R.I., Hussain, M., Tan, K.H., Crawfis, R., Thalmann, D., Kao, D., Avila, L. (eds.) ISVC (2). Lecture Notes in Computer Science, vol. 6454, pp. 11–20. Springer (2010)
7. Devernay, F., Faugeras, O.: Straight lines have to be straight. Machine Vision and Applications 13(1), 14–24 (2001)
8. Faugeras, O., Toscani, G.: Structure from motion using the reconstruction and reprojection technique. In: Proc. IEEE Workshop on Computer Vision, pp. 345–348 (1987)
9. Fitzgibbon, A.W.: Simultaneous linear estimation of multiple view geometry and lens distortion. In: Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 125–132 (2001)
10. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press (2004)
11. Hughes, C., Glavin, M., Jones, E., Denny, P.: Review of geometric distortion compensation in fish-eye cameras. In: IET Irish Signals and Systems Conference (ISSC 2008), pp. 162–167. Galway, Ireland (2008)
12. Kang, S.: Radial distortion snakes. IEICE Transactions on Information and Systems, pp. 1603–1611 (2000)
13. Lenz, R.: Linsenfehlerkorrigierte Eichung von Halbleiterkameras mit Standardobjektiven für hochgenaue 3D-Messungen in Echtzeit. In: Paulus, E. (ed.) Mustererkennung 1987, Informatik-Fachberichte, vol. 149, pp. 212–216. Springer, Berlin Heidelberg (1987)
14. Salvi, J., Armangué, X., Batlle, J.: A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recognition 35(7), 1617–1635 (2002)
15. Wang, A., Qiu, T., Shao, L.: A simple method of radial distortion correction with centre of distortion estimation. Journal of Mathematical Imaging and Vision 35(3), 165–172 (2009)