Fig. 2: (a) Direct ophthalmoscopy with Navarro’s schematic eye [10]. (b) Ophthalmoscopy with Navarro’s schematic eye with a vitrectomy lens [11]. (c) Indirect ophthalmoscopy with Navarro’s schematic eye with a condensing lens [12].

Source publication
Article
Full-text available
Future retinal therapies will be partially automated in order to increase the positioning accuracy of surgical tools. Proposed untethered microrobotic approaches that achieve this increased accuracy require localization information for their control. Since the environment of the human eye is externally observable, images can be used to localize the...

Contexts in source publication

Context 1
... ophthalmoscopy involves direct observation of the retina of the human eye by a clinician [22]. Based on [23], the field of view is approximately 10° [see Fig. 2(a) and Table I]. The image formed of the intraocular objects is always virtual (see Fig. 3, solid line), and capturing it requires an imaging system with a nearly infinite working distance. Such an imaging system will also have a large depth of field, and thus extracting depth information from focus will be insensitive to object ...
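The focus-insensitivity argument above can be made concrete with a minimal paraxial sketch in Python, using the standard thin-lens depth-of-field formulas; the focal length, f-number, and circle-of-confusion values below are illustrative assumptions, not parameters from the paper. The output shows the depth of field growing roughly quadratically with the focusing distance, so at near-infinite working distances focus carries almost no depth information.

# Paraxial depth-of-field sketch (illustrative values, not from the paper).
def hyperfocal(f_mm: float, N: float, c_mm: float) -> float:
    """Hyperfocal distance for focal length f, f-number N, circle of confusion c."""
    return f_mm * f_mm / (N * c_mm) + f_mm

def depth_of_field(s_mm: float, f_mm: float, N: float, c_mm: float) -> float:
    """Approximate total depth of field when focused at distance s."""
    H = hyperfocal(f_mm, N, c_mm)
    near = s_mm * (H - f_mm) / (H + s_mm - 2.0 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return far - near

for s in (100.0, 500.0, 2000.0):  # focusing distances in mm
    print(f"{s:6.0f} mm -> DOF {depth_of_field(s, f_mm=50.0, N=8.0, c_mm=0.01):8.2f} mm")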
Context 2
... lenses allow for the visualization of devices operating in the vitreous humor of phakic eyes (i.e., eyes with the natural crystalline lens intact) [22]. In Fig. 2(b), the vitrectomy lens S5.7010 from HUCO Vision SA [11] is shown; its optical parameters can be found in Table I. Vitrectomy lenses increase the field of view (up to 40°), attenuate the virtual images formed by the eye optics, and position them inside the eye. The virtual images are subsequently captured by an additional imaging ...
Context 3
... Due to vignetting, some rays always escape the eye and are not captured by the lens, thus limiting the maximum achievable field of view. State-of-the-art condensing lenses and their design considerations are discussed in [12]. From simulations of a system composed of Navarro's schematic eye equipped with a condensing lens [see Fig. 2(c) and Table I], the aerial image position versus the on-axis object position can be estimated (see Fig. 3, dash-dotted line). The results indicate that if the aerial image is directly captured by an imaging system with a shallow depth of field, both a high field of view and accurate focus-based localization can be achieved. Because ...
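For intuition about how the condensing lens maps on-axis object depth to aerial image position, here is a deliberately simplified paraxial sketch; the paper derives this curve by full raytracing on Navarro's schematic eye, whereas this sketch collapses the eye optics and the condensing lens into two thin lenses whose focal lengths and separation (f_eye, f_cond, gap) are illustrative assumptions.

def thin_lens_image(s_obj: float, f: float) -> float:
    """Image distance from a thin lens (1/s_obj + 1/s_img = 1/f).
    A negative result means a virtual image on the object side."""
    return 1.0 / (1.0 / f - 1.0 / s_obj)

def aerial_image_position(obj_from_eye_mm: float,
                          f_eye: float = 22.0,   # illustrative eye focal length (mm)
                          f_cond: float = 17.0,  # illustrative condensing-lens focal length (mm)
                          gap: float = 25.0) -> float:
    """Distance of the aerial image behind the condensing lens (paraxial sketch)."""
    # Stage 1: the eye optics image the intraocular object; objects closer
    # than the focal point yield a virtual image (negative distance).
    s1 = thin_lens_image(obj_from_eye_mm, f_eye)
    # Stage 2: that image acts as the object for the condensing lens.
    return thin_lens_image(gap - s1, f_cond)

for z in (5.0, 10.0, 15.0, 20.0):  # on-axis object positions inside the eye (mm)
    print(f"object at {z:4.1f} mm -> aerial image {aerial_image_position(z):5.1f} mm behind the lens")

Even in this crude model the mapping from object depth to aerial image position is monotone, which is what makes focus-based depth recovery possible once the aerial image is captured with a shallow depth of field.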
Context 4
... the simple paraxial models. We propose a method that is based on raytracing on an optical model of the human eye that can be constructed preoperatively. Methods to extract individual eye parameters are described in [24] and [25]. Recently, a method that creates personalized eye models from biometric measurements was proposed ... [Fig. 5 caption: Simulation of the isofocus surfaces and isopixel curves for the system of Fig. 2(c). The isofocus surfaces correspond to lens-to-sensor distances (d_ls), in uniform sensor steps of ∼1.27 mm; the isopixel curves correspond to pixel distances from the optical axis (d_op).]
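The (d_ls, d_op) parameterization in the caption suggests a simple localization recipe: pick the sensor position d_ls that brings the object into focus, read the pixel offset d_op, and intersect the corresponding isofocus surface and isopixel curve. The sketch below illustrates that inversion; the fits for the vertex depth, curvature, and pixel scaling are made-up placeholders, while the conic constant −0.175 is the value quoted later in the text.

import math

K = -0.175  # conic constant of the isofocus surfaces (value from the text)

def isofocus_vertex_and_curvature(d_ls: float):
    """Placeholder parameterization of an isofocus surface versus d_ls."""
    z0 = 2.0 * d_ls - 10.0          # hypothetical vertex depth fit (mm)
    c = 1.0 / (12.0 + 0.5 * d_ls)   # hypothetical curvature fit (1/mm)
    return z0, c

def localize(d_ls: float, d_op: float):
    """Map a focused sensor position d_ls and pixel offset d_op to a point (r, z)."""
    z0, c = isofocus_vertex_and_curvature(d_ls)
    r = 0.9 * d_op                  # hypothetical isopixel scaling (mm per mm)
    # Conic sagitta of the isofocus surface at radial distance r:
    z = z0 + c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + K) * c * c * r * r))
    return r, z

print(localize(d_ls=30.0, d_op=2.0))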
Context 5
... an experimental testbed, we use the model eye [9] from Gwb International, Ltd. This eye is equipped with a planoconvex lens that mimics the compound optical system of the human eye. The model eye contains no liquid, and thus, the lens ... [Fig. 12 caption: Simulation of the isofocus surfaces and isopixel curves for the system composed of the model eye and the condensing lens. The isofocus surfaces correspond to lens-to-sensor distances (d_ls), in uniform sensor steps of ∼0.7 mm; the isopixel curves correspond to pixel distances from the optical axis (d_op).]
Context 6
... imaging device consists of two components: a condensing lens that is kept at a constant position with respect to the eye, and a sensor that captures the aerial image directly and moves with respect to the lens to focus on objects throughout the eye. The condensing lens is a custom-made double conic-convex lens based on [12] (see Fig. 2(c) for parameters; the refractive index was changed to 1.531 owing to lack of the original material at Sumipro bv.). This lens causes a 0.78× magnification; thus, an object of 100 µm near the retina would create an image of 78 µm. The image is captured by a FireWire Basler A602f camera with a CMOS sensor (9.9 µm × 9.9 µm sensing ...
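A quick arithmetic check of the imaging chain, using only the numbers stated in the text:

magnification = 0.78   # condensing-lens system magnification (from the text)
object_um = 100.0      # object size near the retina, in µm (from the text)
pixel_um = 9.9         # Basler A602f pixel pitch, in µm (from the text)

image_um = magnification * object_um  # 78 µm, as stated above
print(f"image size: {image_um:.0f} µm ≈ {image_um / pixel_um:.1f} pixels across")

So a 100 µm object near the retina spans roughly eight pixels on the sensor.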
Context 7
... simulated isofocus surfaces and isopixel curves of the composite system are shown in Fig. 12, and their parameterization is shown in Fig. 13. The behavior of the parameters is similar to that displayed in Fig. 6. The conic constant of the isofocus surfaces is kept fixed at −0.175, which is the value we measured for the retina of the model ...
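For reference, a rotationally symmetric conic surface with apical curvature c and conic constant K is conventionally written via its sagitta (a textbook formula, not taken from the paper); K = −0.175 lies in (−1, 0) and therefore describes a prolate ellipsoid:

z(r) = \frac{c\, r^{2}}{1 + \sqrt{1 - (1 + K)\, c^{2} r^{2}}}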
Context 8
... we conclude that there exists an isofocus surface that corresponds to the retinal surface, and we consider it as the first surface. From Fig. 12, we see that the first isofocus surface does indeed roughly correspond to the retinal shape. As a result, calibration for the conic constant and the curvature is not needed. ...

Similar publications

Article
Full-text available
This paper proposes a multimodal approach for vessel segmentation of macular Optical Coherence Tomography (OCT) slices along with the fundus image. The method comprises two separate stages: the first is 2D segmentation of blood vessels in the curvelet domain, enhanced by taking advantage of vessel information in crossing OCT slices (named fe...
Article
Full-text available
Optic Disc (OD) localization is an important pre-processing step that significantly simplifies subsequent segmentation of the OD and other retinal structures. Current OD localization techniques suffer from impractically high computation times (a few minutes per image). In this work, we present a fast technique that requires less than a second to loca...

Citations

... The main advantages of this approach are its simple instrumentation, which usually comprises two digital cameras or a stereo camera, and its fast acquisition, at rates of up to 15 Hz. However, the resulting surface reconstructions are quite poor, with an accuracy of 1.02 ± 0.51 mm [9][10][11][12]. ...
Article
Full-text available
Purpose: Robotic systems have the potential to overcome inherent limitations of humans and offer substantial advantages to patients, including reduction in surgery time. Our group has undertaken the challenge of developing an autonomous wound closure system. One of the initial steps is to allow accurate assessment of wound skin topology and wound edge location. We present a vision-laser scanner that generates a 3D point cloud for 3D reconstruction of the wound's edge and topology. Methods: While the laser range sensor measures the Z coordinate, two encoders installed on the actuators of the gantry robot simultaneously provide precise X and Y coordinates. The 3D point cloud of the wound skin is generated by recording X, Y and Z while scanning over the wound skin surface. To reduce the scanning time, we exploit a supplementary laser LED that projects a regular laser spot onto the wound skin surface, providing an additional measurement point through an artificial neural network estimation approach. In the meantime, the point cloud of the wound edge can be extracted by detecting whether the laser spot is located on the wound edge in the image from the 2D camera. Results: The mean absolute error (MAE) and standard deviation (σ) of the wound edge are measured in the MeshLab environment. The MAE (σ) in X (tangent), Y (tangent), and Z (normal) are 0.32 (0.22) mm, 0.37 (0.34) mm, and 0.61 (0.29) mm, respectively. The experimental results demonstrate that the vision-laser scanner attains high accuracy in determining wound edge location along the tangent of the wound skin. Conclusion: A vision-laser scanner is developed for 3D reconstruction of the wound's edge and topology. Experimental tests on different wound models revealed the effectiveness of the vision-laser scanner. The proposed scanner can generate the 3D point cloud of the wound skin and its edge simultaneously, and thus significantly improve the accuracy of wound closure in clinical applications.
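A minimal sketch of the point-cloud assembly loop the Methods section describes, with hypothetical stand-ins (read_encoders, read_laser_range) for the encoder and laser-range read-outs; neither the scan pattern nor the surface model comes from the paper.

import math, random

def read_encoders(step: int, cols: int = 50):
    """Hypothetical encoder read-out for a raster scan over the surface (mm)."""
    return (step % cols) * 0.5, (step // cols) * 0.5

def read_laser_range(x: float, y: float) -> float:
    """Hypothetical laser range reading of a gently curved surface (mm)."""
    return 2.0 * math.sin(0.1 * x) + 0.01 * y + random.gauss(0.0, 0.05)

points = []
for step in range(2500):           # X, Y from the encoders, Z from the range sensor
    x, y = read_encoders(step)
    points.append((x, y, read_laser_range(x, y)))
print(len(points), points[0])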
... For instance, Richa et al. [20] used stereo images and detected the proximity based on the relative stereo disparity. Bergeles et al. [21], [22] presented a wide-angle localization method for microrobotic devices using an optical system. The team at Carnegie Mellon University [23], [24] proposed a retinal surface estimation method using projected beam patterns on the retinal surface. ...
Article
Full-text available
Vitreoretinal surgery is challenging even for expert surgeons owing to the delicate target tissues and the diminutive workspace in the retina. In addition to improved dexterity and accuracy, robot assistance allows for (partial) task automation. In this work, we propose a strategy to automate the motion of the light guide with respect to the surgical instrument. This automation allows the instrument’s shadow to always be inside the microscopic view, which is an important cue for the accurate positioning of the instrument in the retina. We show simulations and experiments demonstrating that the proposed strategy is effective in a 700-point grid in the retina of a surgical phantom. Furthermore, we integrated the proposed strategy with image processing and succeeded in positioning the surgical instrument’s tip in the retina, relying on only the robot’s geometric information and microscopic images.
... The optics of the eye and the wide-angle viewing systems used in retinal surgery introduce a more complex optical model than the standard camera model used here. Nevertheless, most properties, including eye radius and lens characteristics, can be estimated from biometric data such as routine optical coherence tomography (OCT) [21], while the wide-angle viewing system and surgical microscope can be calibrated separately before an operation. T_c,s, the transformation between the eyeball and the fundus camera, can also be estimated using registration of features on the iris. ...
Article
Full-text available
The advanced state of diabetic retinopathy, a leading cause of blindness, is treated by panretinal photocoagulation (PRP), a repetitive procedure performed by a surgeon using a handheld laser probe. In its place we propose a soft-robotic flexible probe precisely steered using magnetic fields generated by an external magnetic steering system. We develop a kinematic model for the PRP task and show that the process can be automated given image feedback of the retina through a fundus camera. We demonstrate the concept in a phantom of a human eye, achieving sufficiently high accuracy and faster speeds than human surgeons.
... However, it is not feasible to apply the same techniques in an intact eyeball, including the cornea, the lens, and the vitreous humor (or the saline solution that replaces the vitreous humor during vitrectomy), since the camera calibration and resulting surface reconstruction are prone to failure owing to considerable optical distortion and unreliable visual detection. Most calibration methods assume a classical perspective camera model in a single medium, such as air, but this assumption does not hold in a complex eye entailing the refraction of light (Bergeles et al., 2010). The optical path includes the cornea and lens; during surgery, it also includes saline, with which the eye is filled after vitrectomy, and a contact lens (or a binocular indirect ophthalmomicroscope lens) to provide a wide-angle view during the operation. ...
... This difficulty in intraocular surgery has led to the development of new 3D localization methods for controlling a microrobot inside the eye, taking the unique optical characteristics into account (Bergeles et al., 2010, 2012). Bergeles et al. (2010) introduced a focus-based method, accounting for the optics of the human eye in the imaging and localization of the microrobot with a single stationary camera. They adopted an optical model called the Navarro schematic eye, based on biometric data. ...
Article
This paper presents techniques for robot-aided intraocular surgery using monocular vision in order to overcome erroneous stereo reconstruction in an intact eye. We propose a new retinal surface estimation method based on a structured-light approach. A handheld robot known as the Micron enables automatic scanning of a laser probe, creating projected beam patterns on the retinal surface. Geometric analysis of the patterns then allows planar reconstruction of the surface. To realize automated surgery in an intact eye, monocular hybrid visual servoing is accomplished through a scheme that incorporates surface reconstruction and partitioned visual servoing. We investigate the sensitivity of the estimation method according to relevant parameters and also evaluate its performance in both dry and wet conditions. The approach is validated through experiments for automated laser photocoagulation in a realistic eye phantom in vitro. Finally, we present the first demonstration of automated intraocular laser surgery in porcine eyes ex vivo.
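The "planar reconstruction" step lends itself to a compact illustration: once the projected beam pattern has been triangulated into 3D points, a plane z = ax + by + c can be recovered by least squares. The sketch below (assuming NumPy is available) uses synthetic points; the paper's geometric analysis of the beam patterns is not reproduced here.

import numpy as np

# Synthetic points scattered about the plane z = 0.1x - 0.05y + 3, with noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-5.0, 5.0, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 3.0 + rng.normal(0.0, 0.02, 200)

# Least-squares fit of z = a*x + b*y + c.
A = np.column_stack([xy, np.ones(len(xy))])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"fitted plane: z = {a:.3f} x + {b:.3f} y + {c:.3f}")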
... The main reason for estimating only the 2D pixel positions of miniaturized agents is the significant technical difficulty of tracking the agents in 3D. Prior research on 3D tracking used multiple cameras [21,22] and depth-from-focus techniques to estimate the 3D position of the miniaturized agents [23,24]. Recently, a template-based hybrid visual tracking algorithm was presented to estimate the 3D posture of micro-objects in a scanning electron microscope [25]. ...
Article
Full-text available
Miniaturized grippers that possess an untethered structure are suitable for a wide range of tasks, ranging from micromanipulation and microassembly to minimally invasive surgical interventions. In order to robustly perform such tasks, it is critical to properly estimate their overall configuration. Previous studies on tracking and control of miniaturized agents estimated mainly their 2D pixel position, mostly using cameras and optical images as a feedback modality. This paper presents a novel solution to the problem of estimating and tracking the 3D position, orientation and configuration of the tips of submillimeter grippers from marker-less visual observations. We consider this as an optimization problem, which is solved using a variant of the Particle Swarm Optimization algorithm. The proposed approach has been implemented on a Graphics Processing Unit (GPU), which allows a user to track the submillimeter agents online. The proposed approach has been evaluated on several image sequences obtained from a camera and on B-mode ultrasound images obtained from an ultrasound probe. The sequences show the grippers moving, rotating, opening/closing and grasping biological material. Qualitative results obtained using both hydrogel (soft) and metallic (hard) grippers with different shapes and sizes, ranging from 750 µm to 4 mm (tip to tip), demonstrate the capability of the proposed method to track the agent in all the video sequences. Quantitative results obtained by processing synthetic data reveal a tracking position error of 25 ± 7 µm and an orientation error of 1.7 ± 1.3 degrees. We believe that the proposed technique can be applied to different stimuli-responsive miniaturized agents, allowing the user to estimate the full configuration of complex agents from visual marker-less observations.
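To unpack the optimization step, here is a generic particle swarm optimization (PSO) sketch, not the paper's GPU variant: each particle would be a candidate pose/configuration vector, and the cost would score the mismatch between a rendered gripper model and the observed image; the quadratic cost below is a stand-in.

import random

def pso(cost, dim, n_particles=30, iters=100, lo=-1.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimize cost over R^dim with a basic global-best particle swarm."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:            # update the particle's personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:           # and, if needed, the swarm's global best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in cost: squared distance to a hypothetical "true" 6-DOF pose.
true_pose = [0.1, -0.2, 0.3, 0.0, 0.05, -0.1]
best, err = pso(lambda p: sum((a - b) ** 2 for a, b in zip(p, true_pose)), dim=6)
print(best, err)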
... However, a full 3D representation is problematic because 3D estimation in the eye is challenging. Microscope calibration can be difficult [33], and modelling the lens of the eye to achieve intraocular localization is an area of active research [34]. It is possible to calculate the centerlines if necessary for robotic control, but it is no longer integral to the internal workings of the algorithm, which is faster than [25]. ...
Article
Background: Fast and accurate mapping and localization of the retinal vasculature is critical to increasing the effectiveness and clinical utility of robot-assisted intraocular microsurgery such as laser photocoagulation and retinal vessel cannulation. Methods: The proposed EyeSLAM algorithm delivers 30 Hz real-time simultaneous localization and mapping of the human retina and vasculature during intraocular surgery, combining fast vessel detection with 2D scan-matching techniques to build and localize a probabilistic map of the vasculature. Results: In the harsh imaging environment of retinal surgery with high magnification, quick shaky motions, textureless retina background, variable lighting and tool occlusion, EyeSLAM can map 75% of the vessels within two seconds of initialization and localize the retina in real time with a root mean squared (RMS) error of under 5.0 pixels (translation) and 1° (rotation). Conclusions: EyeSLAM robustly provides retinal maps and registration that enable intelligent surgical micromanipulators to aid surgeons in simulated retinal vessel tracing and photocoagulation tasks.
... This paper bases its results on Navarro's wide-field eye [8], which is an established biometric model explaining the eye's optical aberrations over a large (∼70°, measured from the eye's centre) field-of-view [see Fig. 1(a)]. The biometric model and aspheric condensing lens parameters, based on [9], are given in Table 1 for completeness. The double aspheric lenses listed in this paper belong to the family of lenses described in [10], which differ primarily in field-of-view and magnification. ...
Conference Paper
Full-text available
Ophthalmoscopes have yet to capitalise on novel low-cost miniature optomechatronics, which could disrupt ophthalmic monitoring in rural areas. This paper demonstrates a new design integrating modern components for ophthalmoscopy. Simulations show that the optical elements can be reduced to just two lenses: an aspheric ophthalmoscopic lens and a commodity liquid-lens, leading to a compact prototype. Circularly polarised transpupilary illumination, with limited use so far for ophthalmoscopy, suppresses reflections, while autofocusing preserves image sharpness. Experiments with a human-eye model and cadaver porcine eyes demonstrate our prototype’s clinical value and its potential for accessible imaging when cost is a limiting factor.
... Proposed methods usually rely on the measurement of displacement or deformation retrieved from an imaging sensor. In our context, to ensure efficient navigation control of a magnetic microrobot, its location is determined from medical imaging such as magnetic resonance imaging (MRI) [15] or digital microscopy [16], [17]. Hence, no additional sensing modalities are required, and the vision sensor is a priori able to provide the force feedback [14]. ...
Article
Full-text available
This paper presents a new vision-based force-sensing framework that characterizes the forces applied on a magnetic microrobot in an endovascular-like environment. In particular, unlike common approaches in optical microscopy where an orthographic projection model is used, we consider the weak-perspective model. The proposed vision-based force characterization retrieves the three-dimensional (3D) translational velocities and accelerations of a microrobot viewed through a digital microscope. Hence, thanks to the dynamic model, the external forces are estimated on-line. The framework was applied and validated for a magnetic microrobot navigating in a viscous flow. Experimental results in two different environments illustrate the efficiency of the proposed method.
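For readers unfamiliar with the distinction drawn here, a weak-perspective camera scales an orthographic projection by a single factor f/z0, where z0 is the mean depth of the object; a pure orthographic model drops that scaling entirely. A minimal sketch, with illustrative focal-length and depth values:

def weak_perspective(point_mm, f: float = 1000.0, z0: float = 50.0):
    """Project a 3D point under the weak-perspective model (illustrative f, z0)."""
    x, y, _z = point_mm      # the point's own depth is ignored...
    s = f / z0               # ...only the mean object depth z0 sets the scale
    return s * x, s * y

print(weak_perspective((0.5, -0.2, 49.0)))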
Conference Paper
In this paper, we aim to characterize and validate the dynamic model of a magnetic microrobot navigating in viscous flow. First, the controlled magnetic forces exerted on the magnetic microrobot were calibrated, validating the magnetic model. Second, the external forces were characterized on-line from digital microscope measurements. In particular, unlike common approaches with microscopes where an orthographic projection model is used, we propose to consider the weak-perspective model. Thus, the proposed vision-based force characterization allows us to retrieve the 3D translational velocities and accelerations of the magnetic microrobot viewed through a digital microscope. Experimental results in two different environments illustrate the efficiency of the proposed method.