Procedia CIRP 65 ( 2017 ) 105 – 109
Available online at www.sciencedirect.com
2212-8271 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the scientific committee of the 3rd CIRP Conference on BioManufacturing 2017
doi: 10.1016/j.procir.2017.04.036
3rd CIRP Conference on BioManufacturing
Image Processing for Autonomous Positioning
of Eye Surgery Robot in Micro-Cannulation
Takashi Tayamaa, Yusuke Kurosea, Tatsuya Nittaa, Kanako Haradaa,b,*, Yusei Someyac, Seiji Omatac, Fumihito Araic,
Fumiyuki Arakid, Kiyoto Totsukad, Takashi Uetad, Yasuo Nodad, Muneyuki Takaod, Makoto Aiharad, Naohiko Sugitaa,
Mamoru Mitsuishia
aFaculty/School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
bJapan Science and Technology Agency, K's Gobancho 7, Gobancho, Chiyoda-ku, Tokyo, 102-0076, Japan
cGraduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
dDepartment of Ophthalmology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
* Corresponding author. Tel.: +81-3-5841-6357; fax: +81-3-5841-6357. E-mail address: kanako@nml.t.u-tokyo.ac.jp
Abstract
Vitreoretinal surgery tasks are difficult even for expert surgeons. Therefore, an eye-surgery robot has been developed to assist surgeons in
performing such difficult tasks accurately and safely. In this paper, the autonomous positioning of a micropipette mounted on an eye-surgery
robot is proposed; specifically, the shadow of the micropipette is used for positioning in the depth direction. First, several microscopic images of
the micropipette and its shadow are obtained, and the images are manually segmented into three regions, namely, the micropipette, its shadow,
and the eye ground regions. Next, each pixel of the segmented regions is labeled, and labeled images are used as ground-truth data. Subsequently,
the Gaussian Mixture Model (GMM) is used by the eye surgery robot system to learn the sets of the microscope images and their corresponding
ground-truth data using the HSV color information as feature values. The GMM model is then used to estimate the regions of the micropipette
and its shadow in a real-time microscope image as well as their tip positions, which are utilized for the autonomous robotic position control. After
the planar positioning is performed using the visual servoing method, the micropipette is moved to approach the eye ground until the distance
between the tip of the micropipette and its shadow is either equal to or less than a predefined threshold. Thus, the robot could accurately approach
the eye ground and safely stop before contact. An autonomous positioning task is performed ten times in a simulated eye-surgery setup, and the
robot stops at an average height of 1.37 mm from a predefined target when the threshold is 1.4 mm. Further enhancement in the estimation
accuracy in the image processing would improve the positioning accuracy and safety.
Keywords: Eye Surgery, Robotics, Image Processing
1. Introduction
Extremely delicate tasks such as peeling of the 2.5-μm inner
limiting membrane (ILM) and cannulation of retinal blood
vessels that are approximately 100 μm in diameter are
performed in vitreoretinal surgery. Such tasks are difficult even
for expert surgeons. The authors of the present work have been
developing a master-slave eye surgery robot to assist surgeons
in performing the aforementioned tasks and have successfully
demonstrated the high-accuracy positioning of a micropipette
mounted on the robot [1]. In the system, the hand motion of the
operator measured with a master manipulator was scaled down
with a motion-scaling ratio of 1/40 using the master-slave
control. However, the accuracy of the tool positioning was
subject to the skills of the operator in the master-slave control.
Several groups have studied control automation of surgical
robots. The works in [2, 3, 4] are about the automation of
surgical robots for general surgery. Sen et al. studied a needle
path planning method for autonomous skin-suturing and
demonstrated multi-throw skin-suturing using a phantom [2].
They developed a 3D-printed suture-needle angular positioner,
which attached to the needle driver of a da Vinci surgical
system. The angular positioner facilitated the grasping of a
curved needle in the desired orientation. The needle grasped by
the positioner was placed in a constrained needle path using
predefined needle entry and exit points. Murali et al. proposed
the automation of simple surgical tasks by using the learning-
by-observation method [3]. Their learning-by-observation
process is based on the finite state machine. They demonstrated
debridement of 3D viscoelastic tissue phantoms as well as
pattern cutting of 2D orthotropic tissue phantoms. McKinly et
al. developed interchangeable surgical tools, which were
attached to the needle driver of a da Vinci Research Kit [5].
They demonstrated autonomous tumor resection in a phantom
experiment, including detecting the tumor location, exposing
and extracting a simulated tumor, and injecting fluid to seal
the simulated wound [4].
Regarding vitreoretinal surgery, Becker et al. studied vision-
based control of a handheld surgical micromanipulator using
virtual fixtures for cannulation of retinal blood vessels as well
as ILM peeling [6]. The same research group also studied semi-
automated intraocular laser surgery using virtual fixtures for
their handheld device [7]. These studies demonstrated
autonomous navigation of a handheld instrument, where the
position information was calculated using a three-dimensional
(3D) reconstruction of stereomicroscope images. However, the
surgeon observes the patient's eye ground through a surgical
contact lens placed on the eye, the cornea, and the crystalline
lens, and a liquid is injected to replace the vitreous body; thus,
accurate stereo calibration is difficult in clinical applications.
To this end, we propose the automation of robotic tool
positioning that does not require stereo calibration.
Currently, we are working toward the complete automation
of a cannulation procedure. In a previous work, we proposed
autonomous positioning of our eye-surgery robot and validated
its performance in an experiment [8]. Image-based visual
servoing was used for planar positioning, and a subtle distortion
in the microscopic image by physical contact between the eye
ground and a micropipette tip was detected for the positioning
in the depth direction. Accurate detection was feasible;
however, a safer method was needed to prevent possible
damage to the eye ground due to physical contact. In a new
scenario in the present study, the eye surgery robot
automatically and accurately approaches a predefined target in
the eye ground and then stops to be switched to autonomous
injection, which is to be developed. This paper describes the
autonomous positioning of a micropipette in the depth direction
using its shadow. This method is practical, as surgeons
themselves use shadows to position instruments in the depth
direction in vitreoretinal surgery. Moreover, the appearance of
shadows is not affected by astigmatism.
This paper is structured as follows. Section 2 describes the
proposed autonomous control method. Section 3 describes an
experiment carried out to evaluate the proposed method.
Section 4 discusses the results of the experiment, and the
conclusion and future work are discussed in Section 5.
Fig. 1 Eye surgery robot system. Blue arrow: Master-slave mode flow. Red
arrow: Autonomous micro-cannulation mode flow.
2. Autonomous positioning method
The eye surgery robot reported in [1] is used for the present
study. A green micropipette with a tip diameter of
approximately 0.05 mm was mounted on it, and a surgical
drape was used to simulate the eye ground. Figure 1 shows the
control flows for the master-slave and autonomous positioning
modes.
In the autonomous positioning mode, several microscopic
images were transmitted to the image-processing PC, and the
color features of the micropipette, its shadow, and the
simulated eye ground were learned in advance. First, the real-
time microscopic image was processed, and the tip positions of
the micropipette and its shadow were estimated using the
learning data. Next, the current tip positions were sent to the
control PC. The micropipette was controlled to be placed at a
predefined target point in the microscopic view using the visual
servoing method. Subsequently, the robot approached the eye
ground until the distance between the estimated tips of the
micropipette and its shadow decreased to a predefined
threshold. The details of the algorithms are described in the
following subsections.
2.1. Estimation of the position of the micropipette tip and its
shadow in the microscopic image
Accurate instrument detection methods have been reported
in [9, 10, 11]. In the present work, a relatively simple method
was employed; specifically, the Gaussian Mixture Model
(GMM) [12] was used to estimate the shapes of the tips of the
micropipette and its shadow in the microscopic image. The
GMM was chosen after a preliminary experiment comparing it
with the Support Vector Machine (SVM), in which the GMM
was better at detecting the shadow.
Figure 2 shows an overview of the estimation method. First,
several microscopic images showing both the micropipette and
its shadow were collected while slightly changing the
micropipette's position and the lighting conditions. Then, a
ground-truth image, manually segmented by one of the authors
of the present work, was created for each microscope image,
and its components (i.e., the micropipette, shadow, and
background) were labeled accordingly. For easier
visualization, the instrument, the shadow, and the background
were colored blue, red, and green, respectively. The pairs of the
microscopic image and its corresponding ground truth data are
used for preoperative learning using the GMM. A feature
vector xi of the i-th pixel of each microscope image is labeled
as li, according to the label of the corresponding pixel of the
ground truth image, as given by (1) and (2). xi is the HSV (Hue,
Saturation, and Value) color data of the i-th pixel of each
microscopic image.
x = (x1, x2, …, xk)    (1)

l = (l1, l2, …, lk)    (2)

where k is the number of pixels of the microscopic image; in
this study, k = 954,486. A GMM model was generated using
the feature vectors x and the corresponding labels l.
In the real-time estimation, the HSV color data are extracted
from each pixel of the real-time microscopic image and set as
the feature vector x_input. The estimated label l̂ of each pixel
of the real-time microscopic image is generated based on the
GMM learning data. The GMM method can predict a label
based on the probability of belonging to each model and has an
advantage in estimating the shadow region, which has a vague
border in the image.
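The preoperative learning and per-pixel prediction described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn is available, and it uses synthetic HSV pixels as stand-ins for the labeled microscope data; one common way to use a GMM for supervised pixel labeling is to fit one mixture per class and give each new pixel the label of the most likely class model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical sketch: synthetic HSV samples (values in [0, 1]) stand in
# for the manually labeled microscope pixels of each class.
rng = np.random.default_rng(0)
train = {
    "micropipette": rng.normal([0.33, 0.80, 0.60], 0.05, (500, 3)),
    "shadow":       rng.normal([0.10, 0.20, 0.20], 0.05, (500, 3)),
    "eye ground":   rng.normal([0.05, 0.30, 0.80], 0.05, (500, 3)),
}

# Fit one GMM per class on its HSV training pixels.
models = {name: GaussianMixture(n_components=2, random_state=0).fit(x)
          for name, x in train.items()}

def classify(pixels):
    """Label each HSV pixel with the class whose GMM log-likelihood is highest."""
    names = list(models)
    scores = np.stack([models[n].score_samples(pixels) for n in names])
    return [names[i] for i in scores.argmax(axis=0)]
```

In the paper's setup, `train` would hold the k = 954,486 labeled pixels of each image pair rather than synthetic samples.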
After the label of each pixel of the real-time microscopic
image was estimated using the GMM, the system generated a
binary image from the pixels estimated to belong to the
micropipette. Because the binary image contained estimation
errors, morphological image processing [14] was performed to
estimate the region of the micropipette more accurately (Fig.
3). In this study, the leftmost point of the region's contour was
identified as the tip of the micropipette, based on our previous
work [13]. A binary image of the shadow
was generated as well, and the same process was performed to
extract its tip. It should be noted that the leftmost point of each
contour was used in this study because all of the collaborating
surgeons were right-handed; the rightmost point would be used
in a setup for left-handed operators.
Fig. 3 The process of tip extraction
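The cleanup-then-leftmost-point step can be illustrated with a small sketch, assuming SciPy's `ndimage` for the morphological opening; the mask and its dimensions are synthetic, not taken from the paper's images.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: remove small estimation errors with a morphological
# opening, then take the leftmost remaining pixel as the instrument tip.
mask = np.zeros((10, 20), dtype=bool)
mask[4:7, 8:20] = True     # simulated micropipette region
mask[1, 2] = True          # isolated false-positive pixel

# A 3x3 opening deletes the isolated pixel but keeps the 3-pixel-wide region.
cleaned = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))

ys, xs = np.nonzero(cleaned)
i = int(xs.argmin())            # index of the leftmost remaining pixel
tip = (int(ys[i]), int(xs[i]))  # (row, column) of the estimated tip
```

For a left-handed setup, `xs.argmax()` would select the rightmost point instead.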
2.2. Autonomous robotic positioning
We employed the visual servo control method [15] for the
planar motion. The microscopic image features are expressed
as:
u̇ = J ṙ    (3)

J = ( ∂u1/∂r1 ⋯ ∂u1/∂rm )
    (    ⋮     ⋱     ⋮   )
    ( ∂un/∂r1 ⋯ ∂un/∂rm )

where u represents the microscopic image features, r represents
the robotic control features, J is the image Jacobian, m is the
number of robotic control features, and n is the number of
microscopic image features. Once the target position u_target
in the microscopic image is predefined, the target velocity of
the eye surgery robot is expressed as

ṙ_target = λ J⁻¹ (u_target − u_t)    (4)

where λ is the control gain, e = u_target − u_t is the difference
between the image feature of the target position u_target and
the image feature u_t of the robotic position at time t. In the
present study, the robotic motion was controlled in a plane, and
thus m = 2 and n = 2.
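A minimal numerical sketch of this planar visual-servoing law follows, assuming a known, constant 2×2 image Jacobian; the Jacobian entries, gain, and positions are illustrative values, not calibrated ones.

```python
import numpy as np

# Constant 2x2 image Jacobian: image pixels moved per unit of robot motion.
J = np.array([[20.0, 0.0],
              [0.0, 20.0]])
lam = 0.5                           # control gain (lambda)

u_target = np.array([100.0, 80.0])  # predefined target position in the image [px]
u = np.array([40.0, 20.0])          # current estimated tip position [px]

for _ in range(50):
    e = u_target - u                     # image-space error
    r_dot = lam * np.linalg.solve(J, e)  # target robot velocity
    u = u + J @ r_dot                    # simulated image response to the motion
```

With a positive gain below 1, the image error shrinks geometrically each cycle, so the tip converges to the target without overshoot.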
The flowchart of the motion control in the depth direction is
shown in Fig. 4. The micropipette is moved autonomously with
a constant velocity to approach the eye ground. The system
keeps moving the robot while calculating the distance in the
image between the estimated micropipette tip and its shadow
tip, and the robot stops when the calculated distance is equal to
or smaller than a predefined threshold. The threshold can be set
to nearly zero when the estimation of the tip positions is highly
accurate.
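The stop condition above can be sketched as a simple control loop. The linear relation between tip height and tip-shadow image distance used below is a hypothetical stand-in for the real image measurements, and the image scale is taken from the pixel-to-millimeter equivalence reported in the experiment.

```python
# Depth-direction stop condition: descend at constant velocity and halt once
# the image distance between the estimated micropipette tip and its shadow
# tip is at or below the threshold.
PX_PER_MM = 30 / 1.1     # image scale implied by the experiment (30 px = 1.1 mm)
THRESHOLD_PX = 40        # threshold of 40 px, about 1.4 mm

def tip_shadow_distance_px(height_mm):
    # Assumed model: tip-shadow distance shrinks in proportion to tip height.
    return 2.0 * height_mm * PX_PER_MM

height_mm = 5.0          # initial height above the eye ground [mm]
step_mm = 0.01           # constant-velocity descent per control cycle [mm]
while tip_shadow_distance_px(height_mm) > THRESHOLD_PX:
    height_mm -= step_mm # keep moving the robot toward the eye ground
# the robot stops here, still above the eye ground
```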
3. Experiment
3.1. Experiment I
3.1.1. Setup
We performed an experiment to evaluate, with five-fold
cross-validation, the learning data used in Experiment II. Five
pairs of microscope images and the corresponding ground-truth
data were prepared in advance while changing the position and
intensity of the light source.
Fig. 2 The learning and estimation process of GMM
3.1.2. Result
Table 1 shows the results. The average score for each class is
higher than 90%, which we considered sufficient for testing the
idea of autonomous positioning of the eye-surgery robot using
image processing.
Table 1. The scores for the five-fold cross-validation (%)

              Set #1   Set #2   Set #3   Set #4   Set #5   Ave.
Micropipette   99.9    100      99.0     100      99.9     99.8
Shadow         97.5    90.6     98.5     90.8     95.9     94.7
Eye ground     98.5    97.9     98.9     96.5     96.4     97.6
3.2. Experiment II
3.2.1. Setup
We performed an experiment to evaluate the accuracy of the
proposed autonomous positioning method. Figure 5 illustrates
the experimental setup. A halogen light was used to illuminate
the micropipette to simulate the illumination in clinical
applications. A custom-made force sensor was placed under the
simulated eye ground to detect any contact by the tip of the
micropipette.
Four pairs of microscope images and the corresponding
ground-truth data of Experiment I were used as learning data
for GMM. A target point in the microscopic image was set by
clicking on a corresponding pixel.
First, the tip of the micropipette was positioned at the
predefined target point in the microscopic image using the
visual servoing method and then moved in the depth direction
at a velocity of 0.03 m/s. The tip of the micropipette and its
shadow were estimated using the proposed method, and the
robot autonomously stopped when the distance between the
tips was either equal to or smaller than a threshold. Thresholds
of 30 pixels (equivalent to 1.1 mm) and 40 pixels (equivalent
to 1.4 mm) were tested in the experiment. After the
autonomous positioning, the robot was moved further until
contact of the micropipette tip with the simulated eye ground
was detected, and the additional travel required for the contact
detection, which equals the height of the micropipette tip above
the simulated eye ground when the autonomous positioning
was complete, was measured. A force of 5 mN or larger,
detected by the force sensor, was considered as contact, based
on [16]. This procedure was repeated ten times, and the
position of the light source and the initial position of the
micropipette were varied for each trial. The process took about
0.058 s, including 0.026 s for the image processing, using a
single core of a 3.4-GHz processor and 3.0 GB of RAM.
Fig. 5 Experimental setup
3.2.2. Result
Figure 6 shows microscopic images captured when the
autonomous positioning was complete under the threshold
conditions of 1.1 mm and 1.4 mm, respectively. For the
threshold of 1.1 mm, no contact with the simulated eye ground
was detected upon completion of the autonomous positioning
in nine out of the ten trials, resulting in a success rate of 90%.
For the threshold of 1.4 mm, no contact was detected in any of
the ten trials, giving a success rate of 100%. Table 2 shows the
success rates and the average heights of the micropipette tip
above the simulated eye ground when the autonomous
positioning was complete. The average height was 1.09 mm for
the threshold of 1.1 mm, and 1.37 mm for the threshold of 1.4 mm.
Fig. 6 Sample microscopic images when the autonomous positioning is
complete. (a) Threshold: 1.1 mm, (b) threshold: 1.4 mm. ○: Contact status
(green: no contact, red: in contact). Green plus mark: target point. Blue
cross mark: estimated tip of the micropipette. Red cross mark: estimated tip
of the micropipette's shadow.
Fig. 4 Flowchart of the automated positioning process
Table 2. The results of the experiment

Threshold                                           1.1 mm            1.4 mm
Success rate [%]                                    90 (9/10 trials)  100 (10/10 trials)
Average height from the simulated eye ground [mm]   1.09 ± 0.22       1.37 ± 0.29
4. Discussion
A comparison of the microscope images for a successful and a
failed autonomous positioning case is shown in Fig. 7. The
angle between the micropipette and its shadow was small in the
failure case. A more robust tip estimation method is needed; for
example, the intersection of the centerlines of the estimated
micropipette and shadow regions could be used for estimating
the approach distance in the future.
As for the choice of the threshold on the distance between
the estimated positions of the micropipette tip and its shadow,
successful and safe positioning in the depth direction takes
priority over closer positioning on the target. The threshold of
1.4 mm was considered better, as it achieved a 100% success
rate. The average height of 1.37 mm can be further reduced by
improving the tip estimation method.
The micropipette was colored, and the simulated eye ground
did not have any texture in this study. The setup will be
replaced by a more clinically realistic setup after the basic
performance of the autonomous positioning is improved.
Fig. 7 Examples of (a) a success case and (b) a failure case. ○: Contact
status (green: no contact, red: in contact). Green plus mark: target point.
Blue cross mark: estimated tip of the micropipette. Red cross mark:
estimated tip of the micropipette's shadow.
5. Conclusion
We proposed a method of safe and autonomous positioning
of a micropipette using its shadow for autonomous cannulation
of retinal blood vessels. An experiment was performed to
evaluate the success rate and accuracy of the autonomous
positioning. The micropipette was successfully positioned with
a 100% success rate at an average height of 1.37 mm from the
simulated eye ground. Future work will include improving the
estimation of the position of the micropipette tips and their
shadow and evaluating the method in a more clinically realistic
experiment setup.
Acknowledgements
This work was partially supported by the ImPACT Program
of Council for Science, Technology and Innovation (Cabinet
Office, Government of Japan) from the Japan Society for the
Promotion of Science (JSPS).
References
[1] Sakai T, Harada K, Tanaka S, Ueta T, Noda Y, Sugita N, Mitsuishi M.
Design and development of a miniature parallel robot for eye surgery. 36th
Annual International Conference of the IEEE Engineering in Medicine and
Biology Society; 2014: p. 371-374.
[2] Sen S, Garg A, Gealy DV, McKinly S, Jen Y, Goldberg K. Automating
multi-throw multilateral surgical suturing with a mechanical needle guide
and sequential convex optimization. 2016 IEEE International Conference
on Robotics and Automation (ICRA); 2016: p. 4178-4185.
[3] Murali A, Sen S, Kehoe B, Garg A, McFarland S, Patil S, Boyd WD, Lim
S, Abbeel P, Goldberg K. Learning by observation for surgical subtasks:
multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms.
2015 IEEE International Conference on Robotics and Automation (ICRA);
2015: p. 1202-1209.
[4] McKinly S, Garg A, Sen S, David GV, McKinly JP, Jen Y, Guo M, Boyd
D, Goldberg K. An interchangeable surgical instrument system with
application to supervised automation of multilateral tumor resection. IEEE
International Conference on Automation Science and Engineering (CASE);
2016: p. 821-826.
[5] Kazanzides P, Chen Z, Deguet A, Fischer GS, Taylor RH, DiMaio SP. An
Open-Source Research Kit for the da Vinci Surgical System. 2014 IEEE
International Conference on Robotics and Automation; 2014: p. 6434-6439.
[6] Becker B, Voros S, Lobes Jr. L, Handa J, Hager G, Riviere C. Vision-Based
Control of a Handheld Surgical Micromanipulator with Virtual Fixtures.
IEEE Trans Robot; 2013: vol. 29, no. 3, p. 674-683.
[7] Becker B, MacLachlan R, Lobes Jr. L, Riviere C. Semiautomated
Intraocular Laser Surgery Using Handheld Instruments. Lasers in Surgery
and Medicine. Phoenix; 2010: p. 264-273.
[8] Sakai T, Ono T, Murilo M, Tanaka S, Harada K, Noda Y, Ueta T, Arai F,
Sugita N, Mitsuishi M. Autonomous 3-D positioning of surgical instrument
concerning compatibility with the eye surgical procedure. The Robotics
and Mechatronics Conference. Yokohama; 2016: 1A1-01b6.
[9] Richa R, Balicki M, Meisner E, Sznitman R, Taylor RH, Hager GD. Visual
Tracking of Surgical Tools for Proximity Detection in Retinal Surgery. The
International Conference on Information Processing in Computer-Assisted
Interventions (IPCAI); 2011: p. 55-66.
[10] Alsheakhali M, Yigitsoy M, Eslami A, Navab N. Surgical Tool Detection
and Tracking in Retinal Microsurgery. Medical Imaging 2015: Image-
Guided Procedures, Robotic Interventions, and Modeling; 2015.
[11] Rieke N, Tan D, Amat di San Filippo C, Tombari F, Alsheakhali M,
Belagiannis V, Eslami A, Navab N. Real-time localization of articulated
surgical instruments in retinal microsurgery. Medical Image Analysis;
2016: vol. 34, p. 82-100.
[12] Bishop C. Pattern Recognition and Machine Learning. Springer-Verlag
New York : 2006.
[13] Kim J, Tayama T, Kurose Y, Marinho MM, Nitta T, Harada K, Mitsuishi
M. Microscopic Image Processing for Eye Surgical Robot Automation. The
12th Asian Conference on Computer Aided Surgery; 2016: p. 142-144.
[14] Russ JC. The Image Processing Handbook, Sixth Edition; 2011.
[15] Hutchinson S, Hager GD, Corke P. A Tutorial on Visual Servo Control.
IEEE Trans Rob Autom; 1996: vol. 12, no. 5, p. 651-670.
[16] Gonenc B, Taylor RH, Iordachita I, Gehlbach P, Handa J. Force-sensing
microneedle for assisted retinal vein cannulation. IEEE SENSORS 2014;
2014: p. 698-701.