On the left, a detailed view of the CAMBADA vision system. On the right, one of the robots.


Source publication
Article
Full-text available
Robotic soccer is nowadays a popular research domain in the area of multi-robot systems. In the context of RoboCup, the Middle Size League is one of the most challenging. This paper presents an efficient omnidirectional vision system for real-time object detection, developed for the robotic soccer team of the University of Aveiro, CAMBADA. The visi...

Contexts in source publication

Context 1
... CAMBADA robots [1] use a catadioptric vision system, often named an omnidirectional vision system, based on a digital video camera pointing at a hyperbolic mirror, as presented in Fig. 2. We are using a Point Grey Flea 2 FL2-08S2C digital camera with a 1/3" CCD Sony ICX204 that can deliver images up to 1024 × 768 pixels in several image formats, namely RGB, YUV 4:1:1, YUV 4:2:2 or YUV 4:4:4. The hyperbolic mirror was developed by IAIS Fraunhofer Gesellschaft (FhG-AiS). Although the mirror was designed for the ...
Context 2
... of interest that may contain circular objects. After finding these points, a validation procedure is used for choosing points containing a ball, according to our characterization. The voting procedure of the Hough transform is carried out in a parameter space. Object candidates are obtained as local maxima of a so-called Intensity Image (Fig. 20c), which is constructed by the Hough Transform block (Fig. ...
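A minimal sketch of this voting-and-peak-picking idea, assuming an already computed binary edge image, a single expected ball radius and an arbitrary peak threshold (none of these values come from the paper):

# Hedged sketch (not the authors' code): building an "Intensity Image" with a
# circular Hough transform and taking its local maxima as ball candidates.
import cv2
import numpy as np

def ball_candidates(edges: np.ndarray, expected_radius: int, peak_thresh: float):
    """edges: binary edge image (uint8). Returns candidate (x, y) centres."""
    h, w = edges.shape
    intensity = np.zeros((h, w), dtype=np.float32)     # Hough accumulator ("Intensity Image")

    ys, xs = np.nonzero(edges)
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for theta in angles:                               # each edge pixel votes for possible centres
        cx = (xs - expected_radius * np.cos(theta)).astype(int)
        cy = (ys - expected_radius * np.sin(theta)).astype(int)
        valid = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(intensity, (cy[valid], cx[valid]), 1.0)

    # local maxima above a threshold are the object candidates
    dilated = cv2.dilate(intensity, np.ones((5, 5), np.uint8))
    peaks = (intensity == dilated) & (intensity >= peak_thresh)
    return list(zip(*np.nonzero(peaks)[::-1]))          # (x, y) pairs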
Context 3
... to the special features of the Hough circular transform, a circular object in the Edges Image would produce an intense peak in the Intensity Image corresponding to the center of the object (as can be seen in Fig. 20c). On the contrary, a non-circular object would produce areas of low intensity in the Intensity Image. However, as the ball moves away, its edge circle size decreases. To solve this problem, information about the distance between the robot center and the ball is used to adjust the Hough transform. We use the inverse mapping of our ...
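A hedged illustration of the radius adjustment: the distance-to-radius samples below are invented for the example; the paper instead derives the relation from the inverse mapping of its catadioptric system.

# Hedged sketch: adjusting the expected Hough radius from the robot-to-ball distance.
import numpy as np

# assumed calibration samples: (distance to robot centre [m], ball radius [pixels])
_DIST_M    = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
_RADIUS_PX = np.array([38.0, 24.0, 14.0, 10.0, 8.0, 6.0])

def expected_radius(distance_m: float) -> int:
    """Interpolate the expected ball radius in pixels for a given distance."""
    r = np.interp(distance_m, _DIST_M, _RADIUS_PX)
    return max(3, int(round(r)))   # clamp so the Hough kernel never degenerates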
Context 4
... position of the detected ball is then sent to the Real-time Database, together with the information about the white lines and the obstacles, to be used afterwards by the high-level process responsible for the behaviors of the robots. Fig. 20 presents an example of the Morphological Processing Sub-System. As can be observed, the balls in the Edges Image (Fig. 20b) have almost circular contours. Fig. 20c shows the resulting image after applying the circular Hough transform. Notice that the centers of the balls present very high peaks when compared to the rest of the image. ...
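A small sketch of this final selection step, assuming the Intensity Image produced by the Hough Transform block and a hypothetical image_to_field conversion; the Real-time Database interface itself is not shown and is not part of this example:

# Hedged sketch: picking the detected ball as the strongest accumulator peak and
# converting it to field coordinates. image_to_field is an assumed callable, not
# the CAMBADA API.
import numpy as np

def select_ball(intensity: np.ndarray, image_to_field):
    """Return the field coordinates of the strongest Hough peak, or None."""
    peak = np.unravel_index(np.argmax(intensity), intensity.shape)
    if intensity[peak] <= 0:
        return None
    y, x = peak
    return image_to_field(x, y)    # e.g. inverse distance map of the mirror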
Context 5
... is then sent to the Real-time Database, together with the information about the white lines and the obstacles, to be used afterwards by the high-level process responsible for the behaviors of the robots. Fig. 20 presents an example of the Morphological Processing Sub-System. As can be observed, the balls in the Edges Image (Fig. 20b) have almost circular contours. Fig. 20c shows the resulting image after applying the circular Hough transform. Notice that the centers of the balls present very high peaks when compared to the rest of the image. The ball considered was the closest to the robot, due to the fact that it has the highest peak in the ...
Context 6
... together with the information about the white lines and the obstacles, to be used afterwards by the high-level process responsible for the behaviors of the robots. Fig. 20 presents an example of the Morphological Processing Sub-System. As can be observed, the balls in the Edges Image (Fig. 20b) have almost circular contours. Fig. 20c shows the resulting image after applying the circular Hough transform. Notice that the centers of the balls present very high peaks when compared to the rest of the image. The ball considered was the closest to the robot, due to the fact that it has the highest peak in the ...

Similar publications

Article
Full-text available
Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computatio...

Citations

... Both the gray and depth images determine the position of the tableware. First, the position and radius of the tableware are detected with the Hough circle transform [13][14][15][16][17][18][19], which makes it possible to check whether the coordinates detected in the gray image and the depth image are the same, in order to prevent erroneous detection. If they do not match, the process returns to the Hough circle transform in step 1 and performs the detection again at the next position of the tableware as the conveyor moves; otherwise, it moves on to the next step. ...
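A hedged sketch of this cross-check, using OpenCV's Hough circle transform on both images; the matching tolerance and Hough parameters are assumptions, not values from the cited work:

# Detect circles independently in the gray and depth images and accept a detection
# only when the two centres agree within a tolerance.
import cv2
import numpy as np

def cross_checked_circles(gray: np.ndarray, depth8: np.ndarray, tol_px: float = 10.0):
    """gray, depth8: 8-bit single-channel images. Returns matched (x, y, r) circles."""
    def hough(img):
        c = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                             param1=100, param2=40, minRadius=20, maxRadius=120)
        return [] if c is None else c[0]

    matched = []
    for (x, y, r) in hough(gray):
        for (xd, yd, rd) in hough(depth8):
            if np.hypot(x - xd, y - yd) <= tol_px:      # same tableware in both images
                matched.append((float(x), float(y), float(r)))
                break
    return matched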
Article
Full-text available
In this study, an automated tableware tidying-up robot system was developed to tidy up tableware in a self-service restaurant with a large amount of tableware. This study focused on sorting and collecting tableware placed on trays detected by an RGB-D camera. Leftover food was also treated with this robot system. The RGB-D camera efficiently detected the position and height of the tableware and whether there was leftover food or not by image processing. A parallel arm and robot hand mechanism was designed to realize the advantages of a low cost and high processing speed. Two types of rotation mechanisms were designed to realize the function of throwing away leftover food. The effectiveness of the camera detection system was verified through the experiments of tableware and leftover food detection. The effectiveness of the prototype robot and the rotation assist mechanism was verified through the experiments of grasping tableware, throwing away leftover food by two types of rotating mechanisms, collecting multiple tableware, and the sorting of overlapping tableware with multiple robots.
... Geometric localization using the Hough transform and identification through the blue and yellow goalpost lines has been proposed in Lima et al. [9]. Then, optimization of the error between the detected field lines and a virtual map was introduced in Neves et al. [10]. However, the first and second approaches cannot be used because the blue and yellow colors of the goalposts and the background have already been removed. ...
... Then, after the resampling process is complete, the position estimate is obtained from the weighted average of the previous particle hypotheses, as shown in equation (10). ...
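A minimal sketch of the estimation step described in this excerpt (the weighted mean of the particle hypotheses); the particle layout [x, y, theta] and variable names are assumptions:

import numpy as np

def estimate_pose(particles: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """particles: (N, 3) array of [x, y, theta]; weights: (N,) non-negative weights."""
    w = weights / np.sum(weights)
    x = np.sum(w * particles[:, 0])
    y = np.sum(w * particles[:, 1])
    # headings are averaged on the unit circle to avoid wrap-around errors
    theta = np.arctan2(np.sum(w * np.sin(particles[:, 2])),
                       np.sum(w * np.cos(particles[:, 2])))
    return np.array([x, y, theta])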
Article
Full-text available
"Dimana saya?" adalah pertanyaan utama, yang merupakan representasi lokalisasi atau penentuan psisi, dimana hal tersebut adalah permasalahan yang harus dijawab oleh robot sepak bola beroda. Deadreconing adalah metode paling populer yang digunakan dalam pergerakan robot beroda. Namun, kesalahan posisi yang meningkat adalah topik utama dari metode deadreconing. Selanjutnya dalam makalah ini diusulkan lokalisasi sepak bola beroda menggunakan filter partikel melalui Omnivision. Model sensor dan model gerak dari filter partikel juga dibahas, dimana model sensor diperoleh dari segmentasi dan ekstraksi ciri landmark lapangan sepak bola. Hasil eksperimen menunjukkan bahwa metode yang diusulkan memperkirakan posisi robot secara akurat dengan kesalahan 15%.
... For instance, a team can use robots of any color, making it harder to use this technique. Besides, the color segmentation approach needs to be re-calibrated for each slight environment variation, such as uneven illumination or field changes [10]. ...
Preprint
Full-text available
When producing a model for object detection in a specific context, the first obstacle is to have a dataset labeling the desired classes. In RoboCup, some leagues already have more than one dataset to train and evaluate a model. However, in the Small Size League (SSL), no such dataset is available yet. This paper presents an open-source dataset to be used as a benchmark for real-time object detection in SSL. This work also presents a pipeline to train, deploy, and evaluate Convolutional Neural Network (CNN) models on a low-power embedded system. This pipeline was used to evaluate the proposed dataset with state-of-the-art optimized models. On this dataset, MobileNet SSD v1 achieves 44.88% AP (68.81% AP50) at 94 Frames Per Second (FPS) while running on an SSL robot.
... Their method compared the contours obtained from rotary and radial scans. Another example is [9], where an omnidirectional camera was used and the images were further processed based on color features to distinguish the ball from other objects. All the approaches mentioned above, with both conventional and omnidirectional cameras, used handcrafted features. ...
Conference Paper
Full-text available
Kontes Robot Sepak Bola Indonesia (KRSBI) is an annual event in which contestants compete with their robot designs and engineering in the field of robot soccer. Each contestant tries to win the match by scoring against the opponent's goal. In order to score a goal, the robot needs to locate the ball. We employ an omnidirectional vision camera as a visual sensor for the robot to perceive the ball. We calibrate the streaming images from the camera in order to remove the mirror distortion. We deploy PeleeNet as our deep learning model for object detection. We fine-tune PeleeNet on a modified PASCAL VOC 2007-2012 dataset with an additional ball object class. Our experimental results show that PeleeNet has the potential to be deployed as a deep learning mobile platform for ball detection in KRSBI. It has a perfect combination of memory efficiency, speed and accuracy.
... Generally, the omnidirectional camera is used to acquire environmental information around an autonomous mobile robot so that it can accomplish self-localization. For the RoboCup competition, most teams design their vision system with an omnidirectional camera to recognize the robot's position from colored objects such as the goals and the white lines of the field [23][24][25][26][27][28]. However, the rules of the RoboCup Middle Size League changed in 2009 so that the goalposts and the crossbar are painted white and there is no specific color for the goals, to increase the difficulty of the challenge. ...
Article
Full-text available
In this paper, we propose a self-localization method for a soccer robot using an omnidirectional camera. Based on the projective geometry of the omnidirectional visual system, the image distortion of the original omnidirectional image can be completely corrected, so the robot can quickly localize itself on the playing field. First, we transform the distorted omnidirectional image into a distortion-free unwrapped image of the soccer field by projective geometry. The obtained image makes the subsequent field recognition and the self-localization of the robot more convenient and accurate. Then, by geometric invariants, the correspondence between the unwrapped image and the model of the playing field is constructed. Next, homography theory is applied to get the precise location and orientation of the robot. The simulation and experimental results show that the proposed method can quickly and accurately determine the position and azimuth of the soccer robot and the distance between two objects on the playing field.
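As a rough illustration of the unwrapping idea (not the paper's projective-geometry model), a simple polar-to-Cartesian remap of the omnidirectional image could look like the following; the mirror centre, radii and output size are assumed values:

# Remap the annulus around the mirror centre to a panoramic, distortion-reduced view.
import cv2
import numpy as np

def unwrap_omni(img: np.ndarray, cx: float, cy: float,
                r_in: float, r_out: float, out_w: int = 720, out_h: int = 180):
    """Map the annulus [r_in, r_out] around (cx, cy) to an out_h x out_w panorama."""
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, dtype=np.float32)
    radii = np.linspace(r_in, r_out, out_h, dtype=np.float32)
    theta_grid, r_grid = np.meshgrid(thetas, radii)            # (out_h, out_w)
    map_x = (cx + r_grid * np.cos(theta_grid)).astype(np.float32)
    map_y = (cy + r_grid * np.sin(theta_grid)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)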
... Their method compared the contours obtained from rotary and radial scans. Another example is [9], where an omnidirectional camera was used and the images were further processed based on color features to distinguish the ball from other objects. All the approaches mentioned above, with both conventional and omnidirectional cameras, used handcrafted features. ...
... Geometric localization using the Hough transform and identification through the blue and yellow goalpost lines has been proposed in Lima et al. [7]. Then, optimization of the error between the detected field lines and a virtual map was introduced in [8]. However, the first and second approaches cannot be applied, since the blue and yellow colors of the goalposts and the background have been removed. ...
... The resampling step, which redraws the particles at positions according to the latest weights, is described in equation (8). ...
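For reference, a standard low-variance (systematic) resampling routine of the kind referred to above might look like this; it is a generic particle-filter step, not necessarily the exact scheme of the cited paper:

import numpy as np

def systematic_resample(particles: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """particles: (N, d); weights: (N,) non-negative. Returns N resampled particles."""
    n = len(weights)
    w = weights / np.sum(weights)
    positions = (np.random.uniform() + np.arange(n)) / n   # one random offset, even spacing
    cumulative = np.cumsum(w)
    indexes = np.searchsorted(cumulative, positions)
    return particles[np.clip(indexes, 0, n - 1)].copy()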
... In the last decade, omnidirectional vision, or omni-vision systems, have become one of the most important parts of soccer robot systems. Omni-vision provides a 360-degree view of the robot's surroundings in a single image, which can be used for object detection [4], tracking [5], and localization [6][7]. ...
... Despite the wide-view advantage of the circular image, barrel distortion makes object detection and tracking more complicated. Various methods have been developed to correct and restore the image using several image-processing techniques [6], which makes the computation more complex. ...
Article
Full-text available
In the Indonesian wheeled soccer robot competition, one team consists of three robots, one of which is a goalkeeper. In the competition, the movement of the robots and the ball is very dynamic, so a method is needed to predict the movement of the ball so that the goalkeeper can anticipate it. In this research, the ball is detected by digital image processing, and a Particle Swarm Optimization-Neural Network (PSO-NN) is used as a calibration model for object distance through the omnidirectional camera. A polynomial curve interpolation approach is used to obtain an estimated model from the two-dimensional data of the detected objects. The results show that the distance conversion in object detection with the PSO-NN model obtains 0.13% percentage mean squared error (PMSE) and a 20% average prediction error.
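A hedged sketch of the curve-fitting idea: a plain least-squares polynomial fit mapping pixel distance to metric distance, with invented calibration pairs; the paper itself uses a PSO-trained neural network rather than this simple fit:

import numpy as np

# assumed calibration pairs: (pixel distance from image centre, real distance in metres)
pixel_dist = np.array([40, 80, 120, 160, 200, 240], dtype=float)
real_dist  = np.array([0.5, 1.0, 1.8, 2.9, 4.5, 6.8], dtype=float)

coeffs = np.polyfit(pixel_dist, real_dist, deg=3)        # cubic fit of the mapping

def pixels_to_metres(d_px: float) -> float:
    """Convert an image-plane distance (pixels) to an estimated field distance (m)."""
    return float(np.polyval(coeffs, d_px))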
... There are also robots that use an omnidirectional vision camera, which is more efficient for real-time detection. This system uses the camera to find the ball and the white lines [15]. Another approach that has been used is ball detection with a color segmentation method that can be changed and modified. ...
Conference Paper
Full-text available
Image processing provides many benefits in robotics technology. Applying the right algorithm will give appropriate results. This paper describes an algorithm for ball detection and tracking by a robot using image processing. Complex motion causes false detections of various objects; objects other than the orange ball can also produce false data and robot inaccuracy because of the large amount of surrounding noise. The approach to ball detection by the robot uses the contour method and the centroid (center point) of the ball as a distance measure, with the color intensity level used for ball detection. The camera rotation for ball detection and tracking is controlled from a laptop over a wireless network. Keywords: ball detection, ball tracking, centroid (center point), color intensity. I. INTRODUCTION. Ball detection and tracking in various sports has become a growing and challenging problem. There are many applications, including highlight extraction, the Hough Transform, and tactical analysis. Various detection techniques have been implemented around the world. For example, in cricket, ball detection is very important and has to consider area and color recognition. Likewise, in robot soccer, live monitoring of the ball is essential. Stereoscopic vision techniques, trajectory detection techniques, Kalman filter techniques, and moving-landmark multi-robot graph optimization (MMG-O) [1] are among the techniques followed for ball detection and tracking in sports. The basic idea of following and tracking lies in understanding the sport. The small size league in RoboCup soccer is a prime example of this kind [2], [4]. In this paper, the robot initially detects the ball through an internet protocol camera and then tracks it using the camera's tilt and pan rotation and the movement of direct current motors. The main research topics include highlight extraction, object tracking, the Hough Transform [5], contour detection, and Kalman-filter-based template matching [6], [7]. Tracking and detecting the ball live with a camera is a complicated problem because the radius and shape of the ball keep changing in every captured frame as the ball speed changes rapidly. The small radius of the ball, the influence of the surrounding environment, and the frame size are problems for its detection. To address the data association problem, object-based and nearest-object recognition approaches have been developed, including the strongest-object filter, the nearest-object filter, the separating-path filter method, probabilistic data association, the Viterbi Association method [8], trajectory-based ball detection and tracking algorithms [9], detection combining audio and visual information [10], multi-hypothesis tracking, and others. The system consists of the robot, a laptop to manage image acquisition and robot control, and an internet protocol camera that detects the ball by capturing images live and transferring them to the laptop through a wireless router. The router acts as the communication medium between the robot and the laptop. After receiving the live images, the laptop issues commands for the camera and the robot motors to move. The robot consists of 4 DC motors, an ESP8266 chip, an L298 H-Bridge motor driver to control the motors, and the internet protocol camera mounted on it. Power is supplied by a battery.
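A minimal sketch of the contour-and-centroid approach described above, assuming an HSV color range for the orange ball (the thresholds are illustrative, not from the paper):

# Segment the orange ball by color, take the largest contour, and use its centroid
# and enclosing-circle radius as the detection.
import cv2
import numpy as np

def detect_orange_ball(frame_bgr: np.ndarray):
    """Return ((cx, cy), radius) of the largest orange blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))   # assumed orange range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    ball = max(contours, key=cv2.contourArea)
    m = cv2.moments(ball)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]         # centroid
    (_, _), radius = cv2.minEnclosingCircle(ball)
    return (cx, cy), radius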
... Very recently, there have been some alternative methods, namely the use of deep Reinforcement Learning (RL), where a policy is created in order to maximize the rewards of an agent [9], [10]. On the other hand, omnidirectional cameras have been used in an increasing number of applications ranging from surveillance to robot navigation [11], [12], [13], [14], which require a ... [Figure caption from the citing paper: The input is an omnidirectional image with an initial state of the bounding box, represented in the world coordinate system. Using this information, a set of possible actions is applied in order to detect the pedestrian in the 3D environment.] ...
... This is why traditional feature extraction methods are not suited to these systems. Some methods exist for soccer robots using catadioptric systems [11]. However, they require knowledge of the object's shape and color. ...
... where θ are the optimal parameters of the model learned in (11). The obtained classification will provide the initial state position for the deep RL branch (see Sec. III-A), in testing. ...
Preprint
Full-text available
Pedestrian detection is one of the most explored topics in computer vision and robotics. The use of deep learning methods has allowed the development of new and highly competitive algorithms. Deep Reinforcement Learning has proved to be within the state of the art in terms of both detection in perspective cameras and robotics applications. However, for detection in omnidirectional cameras, the literature is still scarce, mostly because of their high levels of distortion. This paper presents a novel and efficient technique for robust pedestrian detection in omnidirectional images. The proposed method uses deep Reinforcement Learning that takes advantage of the distortion in the image. By considering the 3D bounding boxes and their distorted projections into the image, our method is able to provide the pedestrian's position in the world, in contrast to the image positions provided by most state-of-the-art methods for perspective cameras. Our method avoids the need for pre-processing steps to remove the distortion, which is computationally expensive. Beyond the novel solution, our method compares favorably with the state-of-the-art methodologies that do not consider the underlying distortion for the detection task.