Procedia CIRP 65 (2017) 105–109
Available online at www.sciencedirect.com
2212-8271 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the scientific committee of the 3rd CIRP Conference on BioManufacturing 2017
doi: 10.1016/j.procir.2017.04.036
3rd CIRP Conference on BioManufacturing
Image Processing for Autonomous Positioning
of Eye Surgery Robot in Micro-Cannulation
Takashi Tayama a, Yusuke Kurose a, Tatsuya Nitta a, Kanako Harada a,b,*, Yusei Someya c, Seiji Omata c, Fumihito Arai c,
Fumiyuki Araki d, Kiyoto Totsuka d, Takashi Ueta d, Yasuo Noda d, Muneyuki Takao d, Makoto Aihara d, Naohiko Sugita a,
Mamoru Mitsuishi a
a Faculty/School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
b Japan Science and Technology Agency, K’s Gobancho 7, Gobancho, Chiyoda-ku, Tokyo, 102-0076, Japan
c Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
d Department of Ophthalmology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
* Corresponding author. Tel.: +81-3-5841-6357; fax: +81-3-5841-6357. E-mail address: kanako@nml.t.u-tokyo.ac.jp
Abstract
Vitreoretinal surgery tasks are difficult even for expert surgeons. Therefore, an eye-surgery robot has been developed to assist surgeons in
performing such difficult tasks accurately and safely. In this paper, the autonomous positioning of a micropipette mounted on an eye-surgery
robot is proposed; specifically, the shadow of the micropipette is used for positioning in the depth direction. First, several microscopic images of
the micropipette and its shadow are obtained, and the images are manually segmented into three regions, namely, the micropipette, its shadow,
and the eye ground regions. Next, each pixel of the segmented regions is labeled, and labeled images are used as ground-truth data. Subsequently,
the Gaussian Mixture Model (GMM) is used by the eye surgery robot system to learn the sets of the microscope images and their corresponding
ground-truth data using the HSV color information as feature values. The GMM model is then used to estimate the regions of the micropipette
and its shadow in a real-time microscope image as well as their tip positions, which are utilized for the autonomous robotic position control. After
the planar positioning is performed using the visual servoing method, the micropipette is moved to approach the eye ground until the distance
between the tip of the micropipette and its shadow is either equal to or less than a predefined threshold. Thus, the robot could accurately approach
the eye ground and safely stop before contact. An autonomous positioning task is performed ten times in a simulated eye-surgery setup, and the
robot stops at an average height of 1.37 mm from a predefined target when the threshold is 1.4 mm. Further enhancement in the estimation
accuracy in the image processing would improve the positioning accuracy and safety.
© 2017 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the scientific committee of the 3rd CIRP Conference on BioManufacturing 2017.
Keywords: Eye Surgery, Robotics, Image Processing
1. Introduction
Extremely delicate tasks such as peeling of the 2.5-µm inner
limiting membrane (ILM) and cannulation of retinal blood
vessels that are approximately 100 µm in diameter are
performed in vitreoretinal surgery. Such tasks are difficult even
for expert surgeons. The authors of the present work have been
developing a master-slave eye surgery robot to assist surgeons
in performing the aforementioned tasks and have successfully
demonstrated the high-accuracy positioning of a micropipette
mounted on the robot [1]. In the system, the hand motion of the
operator measured with a master manipulator was scaled down
with a motion-scaling ratio of 1/40 using the master-slave
control. However, the accuracy of the tool positioning was
subject to the skills of the operator in the master-slave control.
Several groups have studied control automation of surgical
robots. The works in [2, 3, 4] address the automation of
surgical robots for general surgery. Sen et al. studied a needle
path planning method for autonomous skin suturing and
demonstrated multi-throw skin suturing using a phantom [2].
They developed a 3D-printed suture-needle angular positioner,
which was attached to the needle driver of a da Vinci surgical
system. The angular positioner facilitated the grasping of a
curved needle in the desired orientation. The needle grasped by
the positioner was placed in a constrained needle path using
predefined needle entry and exit points. Murali et al. proposed
the automation of simple surgical tasks by using the learning-
by-observation method [3]. Their learning-by-observation
process is based on the finite state machine. They demonstrated
debridement of 3D viscoelastic tissue phantoms as well as
pattern cutting of 2D orthotropic tissue phantoms. McKinly et
al. developed interchangeable surgical tools, which were
attached to the da Vinci needle driver of a da Vinci Research
Kit [5]. They demonstrated autonomous tumor resection in a
phantom experiment, including the detection of the tumor
location, exposing and extracting a simulated tumor, and fluid
injection to seal the simulated wound [4].
Regarding vitreoretinal surgery, Becker et al. studied vision-
based control of a handheld surgical micromanipulator using
virtual fixtures for cannulation of retinal blood vessels as well
as ILM peeling [6]. The same research group also studied semi-
automated intraocular laser surgery using virtual fixtures for
their handheld device [7]. These studies demonstrated
autonomous navigation of a handheld instrument, where the
position information was calculated using a three-dimensional
(3D) reconstruction of stereomicroscope images. However, the
surgeon observes the patient’s eye ground through a surgical
contact lens placed on the eye, the cornea, and the crystalline
lens. A liquid is injected to replace the vitreous body. Thus,
accurate stereo calibration is difficult in clinical applications.
To address this, we propose the automation of robotic tool
positioning that does not require stereo calibration.
Currently, we are working toward the complete automation
of a cannulation procedure. In a previous work, we proposed
autonomous positioning of our eye-surgery robot and validated
its performance in an experiment [8]. Image-based visual
servoing was used for planar positioning, and a subtle distortion
in the microscopic image caused by physical contact between the
eye ground and a micropipette tip was detected for the positioning
in the depth direction. Accurate detection was feasible;
however, a safer method was needed to prevent possible
damage to the eye ground due to physical contact. In a new
scenario in the present study, the eye surgery robot
automatically and accurately approaches a predefined target in
the eye ground and then stops to be switched to autonomous
injection, which is to be developed. This paper describes the
autonomous positioning of a micropipette in the depth direction
using its shadow. This method can be practical because surgeons
themselves use shadows to position instruments in the depth
direction in vitreoretinal surgery. Moreover, the appearance of
shadows is not affected by astigmatism.
This paper is structured as follows. Section 2 describes the
proposed autonomous control method. Section 3 describes an
experiment carried out to evaluate the proposed method.
Section 4 discusses the results of the experiment, and the
conclusion and future work are discussed in Section 5.
Fig. 1 Eye surgery robot system. Blue arrow: Master-slave mode flow. Red
arrow: Autonomous micro-cannulation mode flow.
2. Autonomous positioning method
The eye surgery robot reported in [1] is used for the present
study. A green micropipette with a tip diameter of
approximately 0.05 mm was mounted on it, and a surgical
drape was used to simulate the eye ground. Figure 1 shows the
control flows for the master-slave and autonomous positioning
modes.
In the autonomous positioning mode, several microscopic
images were transmitted to the image-processing PC, and the
color features of the micropipette, its shadow, and the
simulated eye ground were learned in advance. First, the real-
time microscopic image was processed, and the tip positions of
the micropipette and its shadow were estimated using the
learning data. Next, the current tip positions were sent to the
control PC. The micropipette was controlled to be placed at a
predefined target point in the microscopic view using the visual
servoing method. Subsequently, the robot approached the eye
ground until the distance between the estimated tips of the
micropipette and its shadow decreased to a predefined
threshold. The details of the algorithms are described in the
following subsections.
2.1. Estimation of the position of the micropipette tip and its
shadow in the microscopic image
Several instrument detection methods have been reported in [9, 10,
11], and accurate detection was demonstrated. In the present work,
a relatively simple method was employed; specifically, the
Gaussian Mixture Model (GMM) [12] was used to estimate the
shapes of the tips of the micropipette and its shadow in the
microscopic image. The GMM was chosen after a preliminary
experiment comparing it with the Support Vector Machine
(SVM), in which the GMM was better at detecting the shadow.
Figure 2 shows the overview of the estimation method. First,
several microscopic images showing both the micropipette and
its shadow were collected by slightly changing the
micropipette’s position and lighting conditions. Then, a ground-
truth image was created for each microscope image by manual
segmentation (performed by one of the authors), and its components
(i.e., the micropipette, shadow, and background) were labeled
accordingly. For easier
visualization, the instrument, the shadow, and the background
were colored blue, red, and green, respectively. The pairs of the
microscopic image and its corresponding ground truth data are
used for preoperative learning using the GMM. A feature
vector xi of the i-th pixel of each microscope image is labeled
as li, according to the label of the corresponding pixel of the
ground truth image, as given by (1) and (2). xi is the HSV (Hue,
Saturation, and Value) color data of the i-th pixel of each
microscopic image.

$\mathbf{x} = \{x_1, x_2, \ldots, x_k\}$  (1)

$\mathbf{l} = \{l_1, l_2, \ldots, l_k\}$  (2)

where $k$ is the number of pixels of the microscopic image. In
this study, $k = 954486$. A GMM model was generated using the
feature vectors $\mathbf{x}$ and the corresponding labels $\mathbf{l}$.
In the real-time estimation, the HSV color data are extracted
from each pixel of the real-time microscopic image, and the
system sets it as the feature vector $x_{\mathrm{input}}$. The estimated label
$\hat{l}$ of each pixel of the real-time microscopic image is generated
based on the GMM learning data. The GMM method can
predict a label based on the probability of belonging to each
model and has advantages in estimating the shadow region,
which has a vague border in the image.
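
To make the learning and per-pixel estimation concrete, the following sketch fits one Gaussian mixture per class (micropipette, shadow, eye ground) on HSV pixel values and labels each pixel of a new frame by the most probable class. This is a minimal illustration using scikit-learn and OpenCV under assumptions not stated in the paper: the number of mixture components, the use of class priors, and the helper names are placeholders, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

LABELS = {0: "eye ground", 1: "micropipette", 2: "shadow"}

def hsv_features(bgr_image):
    """Flatten an image into one HSV feature vector per pixel."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return hsv.reshape(-1, 3).astype(np.float64)

def train_per_class_gmms(images, label_maps, n_components=3):
    """Fit one GMM per class from (image, ground-truth label map) pairs."""
    feats = np.vstack([hsv_features(img) for img in images])
    labels = np.concatenate([lm.reshape(-1) for lm in label_maps])
    gmms, priors = {}, {}
    for c in LABELS:
        x_c = feats[labels == c]
        gmms[c] = GaussianMixture(n_components=n_components).fit(x_c)
        priors[c] = len(x_c) / len(feats)
    return gmms, priors

def classify_pixels(bgr_image, gmms, priors):
    """Return a per-pixel label map for a real-time microscope frame."""
    x = hsv_features(bgr_image)
    # Score each class by log-likelihood plus log prior, then take the argmax.
    scores = np.stack(
        [gmms[c].score_samples(x) + np.log(priors[c]) for c in sorted(gmms)],
        axis=1,
    )
    return scores.argmax(axis=1).reshape(bgr_image.shape[:2])
```

In this reading, "probability of belonging to each model" corresponds to the per-class likelihood score above; a soft score is what makes the vaguely bordered shadow region easier to capture than a hard-margin classifier would.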
After the estimation of the label of each pixel of the real-
time microscopic image using the GMM, the system generated a
binary image of the pixels estimated to belong to the
micropipette. The binary image contained estimation errors.
Therefore, morphological image processing [14] was
performed to estimate the region of the micropipette more
accurately (Fig. 3). In this study, the leftmost points of the
region’s contour were identified as the tip of the micropipette,
based on our previous work [13]. A binary image of the shadow
was generated as well, and the same process was performed to
extract its tip. It should be noted that the leftmost point of each
contour was used in this study because all of the collaborating
surgeons were right-handed; the rightmost point would be used
in a setup for left-handed operators.
Fig. 3 The process of tip extraction
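
As a concrete sketch of the post-processing just described (cf. Fig. 3), the snippet below cleans an estimated binary mask with morphological opening and closing and returns the leftmost (or rightmost) contour point as the tip. It assumes OpenCV 4.x; the kernel size and the choice of keeping only the largest contour are illustrative assumptions rather than reported parameters.

```python
import cv2
import numpy as np

def extract_tip(binary_mask, right_handed=True, kernel_size=5):
    """Clean a 0/255 binary mask and return the (x, y) tip of its main region."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening removes small false-positive specks; closing fills small holes.
    cleaned = cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)  # keep the largest region
    pts = contour.reshape(-1, 2)
    # Leftmost contour point for a right-handed setup, rightmost otherwise.
    idx = pts[:, 0].argmin() if right_handed else pts[:, 0].argmax()
    return tuple(pts[idx])
```

The same routine would be applied twice per frame, once to the micropipette mask and once to the shadow mask.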
2.2. Autonomous robotic positioning
We employed the visual servo control method [15] for the
planar motion. The microscopic image features are expressed
as:
$\dot{\mathbf{u}} = J\dot{\mathbf{r}}$  (3)

$J = \begin{pmatrix} \dfrac{\partial u_1}{\partial r_1} & \cdots & \dfrac{\partial u_1}{\partial r_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial u_n}{\partial r_1} & \cdots & \dfrac{\partial u_n}{\partial r_m} \end{pmatrix}$

where $\mathbf{u}$ represents the microscopic image features, $\mathbf{r}$
indicates the robotic control features, $J$ is the image Jacobian,
$m$ is the number of robotic control features, and $n$ is the number
of microscopic image features. Once the target position $\mathbf{u}_{\mathrm{target}}$
in the microscopic image is predefined, the target velocity of
the eye surgery robot is expressed as

$\dot{\mathbf{r}}_{\mathrm{target}} = \lambda J^{+}\mathbf{e} = \lambda J^{+}\left(\mathbf{u}_{\mathrm{target}} - \mathbf{u}_t\right)$  (4)

where $\lambda$ is the control gain, $\mathbf{e}$ is the difference between the
image feature of the target position $\mathbf{u}_{\mathrm{target}}$ and the image
feature $\mathbf{u}_t$ of the robotic position at time $t$, and $J^{+}$ denotes the
pseudo-inverse of the image Jacobian. In the present study, the
robotic motion was controlled in a plane and thus $m = 2$ and $n = 2$.
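
As an illustration, a discrete-time step of the planar visual-servoing law in (3)–(4) could look as follows. The 2×2 image Jacobian is assumed to be identified beforehand (e.g., from small calibration motions), and the gain and time step are placeholder values, not those used on the actual robot.

```python
import numpy as np

def visual_servo_step(u_t, u_target, J, gain=0.5, dt=0.02):
    """One planar visual-servoing update.

    u_t, u_target : current and target image features (2-vectors, in pixels)
    J             : 2x2 image Jacobian relating robot velocity to image-feature velocity
    Returns the commanded robot velocity and the incremental robot motion.
    """
    e = np.asarray(u_target, dtype=float) - np.asarray(u_t, dtype=float)
    r_dot = gain * np.linalg.pinv(J) @ e   # r_dot = lambda * J^+ * e
    return r_dot, r_dot * dt
```

In a control loop, this step would be repeated with the micropipette tip re-estimated from each new microscope frame until the image error falls below a pixel tolerance.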
The flowchart of the motion control in the depth direction is
described in Fig. 4. The micropipette is moved autonomously
and robotically with a constant velocity to approach the eye
ground. The system keeps moving the robot while calculating
the distance in the image between the estimated micropipette
tip and its shadow, and the robot stops when the calculated
distance is equal to or smaller than a predefined threshold.
The threshold can be nearly zero when the estimation of the tip
positions is highly accurate.
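
The depth-direction logic of Fig. 4 can be summarized as the loop sketched below; the helper callables for frame acquisition, tip estimation, and robot velocity commands are hypothetical placeholders standing in for the system's actual interfaces, and the default threshold and speed values are illustrative only.

```python
import numpy as np

def approach_eye_ground(get_frame, estimate_tips, command_z_velocity,
                        threshold_px=40, approach_speed=1.0):
    """Move toward the eye ground until the tip-shadow distance reaches the threshold.

    get_frame          : returns the current microscope image
    estimate_tips      : returns ((x, y) tip, (x, y) shadow tip) or None on failure
    command_z_velocity : sends a depth-direction velocity command to the robot
    """
    while True:
        tips = estimate_tips(get_frame())
        if tips is None:
            command_z_velocity(0.0)      # stop if the estimation fails
            return False
        tip, shadow_tip = tips
        distance = np.hypot(tip[0] - shadow_tip[0], tip[1] - shadow_tip[1])
        if distance <= threshold_px:     # close enough to the eye ground: stop
            command_z_velocity(0.0)
            return True
        command_z_velocity(approach_speed)  # keep approaching at constant velocity
```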
3. Experiment
3.1. Experiment I
3.1.1. Setup
We performed an experiment to evaluate, by five-fold cross-
validation, the learning data used in Experiment II. Five pairs
of microscope images and the corresponding ground-truth data
were prepared in advance while changing the position and
strength of the light source.

Fig. 2 The learning and estimation process of GMM
3.1.2. Result
Table 1 shows the results. The average score for each class is
higher than 90%. We considered that this was sufficient for
testing the idea of autonomous positioning of the eye surgical
robot using image processing.
Table 1. The scores for the five-fold cross-validation (%)

                 Data set #1   Data set #2   Data set #3   Data set #4   Data set #5   Ave.
Micropipette     99.9          100           99.0          100           99.9          99.8
Shadow           97.5          90.6          98.5          90.8          95.9          94.7
Eye ground       98.5          97.9          98.9          96.5          96.4          97.6
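
For reference, per-class scores of the kind reported in Table 1 could be computed as sketched below, assuming the five (image, ground truth) pairs and a per-class GMM classifier like the one in Section 2.1. Treating each pair as one fold and using a recall-style pixel score are assumptions about how the published numbers were defined.

```python
import numpy as np

def five_fold_scores(images, label_maps, train_fn, predict_fn, classes=(0, 1, 2)):
    """Leave-one-pair-out cross-validation over five (image, ground truth) pairs.

    train_fn(images, label_maps) -> trained model
    predict_fn(model, image)     -> predicted per-pixel label map
    Returns a (n_folds, n_classes) array of per-class pixel recalls.
    """
    scores = []
    for k in range(len(images)):
        train_imgs = [im for i, im in enumerate(images) if i != k]
        train_gt = [lm for i, lm in enumerate(label_maps) if i != k]
        model = train_fn(train_imgs, train_gt)
        pred = predict_fn(model, images[k])
        gt = label_maps[k]
        # Fraction of ground-truth pixels of each class that were predicted correctly.
        scores.append([np.mean(pred[gt == c] == c) for c in classes])
    return np.array(scores)
```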
3.2. Experiment II
3.2.1. Setup
We performed an experiment to evaluate the accuracy of the
proposed autonomous positioning method. Figure 5 illustrates
the experimental setup. A halogen light was used to illuminate
the micropipette to simulate the illumination in clinical
applications. A custom-made force sensor was placed under the
simulated eye ground to detect any contact by the tip of the
micropipette.
Four pairs of microscope images and the corresponding
ground-truth data of Experiment I were used as learning data
for GMM. A target point in the microscopic image was set by
clicking on a corresponding pixel.
First, the tip of the micropipette was positioned at the
predefined target point in the microscopic image using the
visual servoing method and then moved in the depth direction
at a velocity of 0.03 m/s. The tip of the micropipette and its
shadow were estimated using the proposed method, and the
robot autonomously stopped when the distance between the
tips was either equal to or smaller than a threshold. Thresholds
of 30 pixels (equivalent to 1.1 mm) and 40 pixels (equivalent
to 1.4 mm) were tested in the experiment. After the autonomous
positioning, the robot was moved further until contact of the
micropipette tip with the simulated eye ground was detected, and
the additional motion required for contact detection, which is
equivalent to the height of the micropipette tip above the simulated
eye ground when the autonomous positioning was complete, was
measured. A force of 5 mN or larger, detected by the force
sensor, was considered as contact, based on [16]. This
procedure was repeated ten times, and the position of the light
source and initial position of the micropipette were varied for
each trial. The process took about 0.058 s, including 0.026 s for
the image processing, using a single core of a 3.4 GHz processor
and 3.0 GB of RAM.
Fig. 5 Experimental setup
3.2.2. Result
Figure 6 shows microscopic images when the autonomous
positioning is complete under the threshold conditions of 1.1 mm
and 1.4 mm, respectively. For the threshold condition of 1.1
mm, the contact on the simulated eye ground was not detected
when the autonomous positioning was complete in nine out of
the ten trials, resulting in a success rate of 90%. For the
threshold of 1.4 mm, no contact was detected in any of the ten trials,
leading to a success rate of 100%. Table 2 shows the success
rates and the average heights of the micropipette tip from the
simulated eye ground when the autonomous positioning was
complete. The average height was 1.09 mm for the threshold of
1.1 mm, and 1.37 mm for the threshold of 1.4 mm.
Fig. 6 Sample microscopic images when the autonomous positioning is
complete. (a) threshold: 1.1 mm, (b) threshold: 1.4 mm. ○: Contact status
(green: no contact; red: in contact). Plus mark in green: Target point.
Cross mark in blue: Estimated tip of the micropipette. Cross mark in red:
Estimated tip of the micropipette's shadow.
Fig. 4 Flowchart of the automated positioning process
Table 2. The results of the experiment

                                          Threshold
                                          1.1 mm               1.4 mm
Success rate [%]                          90 (9/10 trials)     100 (10/10 trials)
Average height from the
simulated eye ground [mm]                 1.09 ± 0.22          1.37 ± 0.29
4. Discussion
A comparison of the microscope images for the successful and
failed autonomous positioning cases is shown in Fig. 7. The
angle between the micropipette and its shadow was small in the
failure case. A more robust tip estimation method is needed; for
example, the intersection of the centerlines of the estimated
micropipette and shadow regions could be used for estimating
the approach distance in the future.
As for the choice of the threshold on the distance between
the estimated positions of the micropipette tip and its shadow,
successful and safe positioning in the depth direction has a
higher priority than closer positioning to the target. The
threshold of 1.4 mm was considered better as it achieved a
100% success rate. The average height of 1.37 mm can be
further reduced by improving the tip estimation method.
The micropipette was colored, and the simulated eye ground
did not have any texture in this study. The setup will be
replaced by a more clinically realistic setup after the basic
performance of the autonomous positioning is improved.
Fig. 7 Examples of (a) a success case and (b) a failure case. ○: Contact status
(green: no contact; red: in contact). Green plus mark: Target point. Blue
cross mark: Estimated tip of the micropipette. Red cross mark:
Estimated tip of the micropipette's shadow.
5. Conclusion
We proposed a method of safe and autonomous positioning
of a micropipette using its shadow for autonomous cannulation
of retinal blood vessels. The experiment was performed to
evaluate the success rate and accuracy of the autonomous
positioning. The micropipette was successfully positioned with
a 100% success rate at an average height of 1.37 mm from the
simulated eye ground. Future work will include improving the
estimation of the positions of the micropipette tip and its
shadow and evaluating the method in a more clinically realistic
experiment setup.
Acknowledgements
This work was partially supported by the ImPACT Program
of Council for Science, Technology and Innovation (Cabinet
Office, Government of Japan) from the Japan Society for the
Promotion of Science (JSPS).
References
[1] Sakai T, Harada K, Tanaka S, Ueta T, Noda Y, Sugita N, Mitsuishi M. Design and development of a miniature parallel robot for eye surgery. 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 2014: p. 371-374.
[2] Sen S, Garg A, Gealy DV, McKinly S, Jen Y, Goldberg K. Automating multi-throw multilateral surgical suturing with a mechanical needle guide and sequential convex optimization. 2016 IEEE International Conference on Robotics and Automation (ICRA); 2016: p. 4178-4185.
[3] Murali A, Sen S, Kehoe B, Garg A, McFarland S, Patil S, Boyd WD, Lim S, Abbeel P, Goldberg K. Learning by observation for surgical subtasks: Multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms. 2015 IEEE International Conference on Robotics and Automation (ICRA); 2015: p. 1202-1209.
[4] McKinly S, Garg A, Sen S, David GV, McKinly JP, Jen Y, Guo M, Boyd D, Goldberg K. An interchangeable surgical instrument system with application to supervised automation of multilateral tumor resection. IEEE International Conference on Automation Science and Engineering (CASE); 2016: p. 821-826.
[5] Kazanzides P, Chen Z, Deguet A, Fischer GS, Taylor RH, DiMaio SP. An open-source research kit for the da Vinci Surgical System. 2014 IEEE International Conference on Robotics and Automation (ICRA); 2014: p. 6434-6439.
[6] Becker B, Voros S, Lobes Jr. L, Handa J, Hager G, Riviere C. Vision-based control of a handheld surgical micromanipulator with virtual fixtures. IEEE Trans Robot; 2013: vol. 29, no. 3, p. 674-683.
[7] Becker B, MacLachlan R, Lobes Jr. L, Riviere C. Semiautomated intraocular laser surgery using handheld instruments. Lasers in Surgery and Medicine. Phoenix; 2010: p. 264-273.
[8] Sakai T, Ono T, Murilo M, Tanaka S, Harada K, Noda Y, Ueta T, Arai F, Sugita N, Mitsuishi M. Autonomous 3-D positioning of surgical instrument concerning compatibility with the eye surgical procedure. The Robotics and Mechatronics Conference; Yokohama, 2016: 1A1-01b6.
[9] Richa R, Balicki M, Meisner E, Sznitman R, Taylor RH, Hager GD. Visual tracking of surgical tools for proximity detection in retinal surgery. The International Conference on Information Processing in Computer-Assisted Interventions (IPCAI); 2011: p. 55-66.
[10] Alsheakhali M, Yigitsoy M, Eslami A, Navab N. Surgical tool detection and tracking in retinal microsurgery. Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling; 2015.
[11] Rieke N, Tan D, Amat di San Filippo C, Tombari F, Alsheakhali M, Belagiannis V, Eslami A, Navab N. Real-time localization of articulated surgical instruments in retinal microsurgery. Medical Image Analysis; 2016: vol. 34, p. 82-100.
[12] Bishop C. Pattern Recognition and Machine Learning. Springer-Verlag New York; 2006.
[13] Kim J, Tayama T, Kurose Y, Marinho MM, Nitta T, Harada K, Mitsuishi M. Microscopic image processing for eye surgical robot automation. The 12th Asian Conference on Computer Aided Surgery; 2016: p. 142-144.
[14] Russ JC. The Image Processing Handbook, Sixth Edition; 2011.
[15] Hutchinson S, Hager GD, Corke P. A tutorial on visual servo control. IEEE Trans Robot Autom; 1996: vol. 12, no. 5, p. 651-670.
[16] Gonenc B, Taylor RH, Iordachita I, Gehlbach P, Handa J. Force-sensing microneedle for assisted retinal vein cannulation. IEEE SENSORS 2014; 2014: p. 698-701.