Chapter

Person Following Robot with Vision-based and Sensor Fusion Tracking Algorithm


Abstract and Figures

The person following robot ApriAttenda™, equipped with a stereo-camera vision system and a laser range finder (LRF), is introduced. ApriAttenda™ has a vision-based tracking system and a vision-based motion control system, and performs person-following motion using the tracking information. In addition, ApriAttenda™ uses the LRF as a second sensor to improve tracking performance. The respective problems of the vision and LRF tracking systems are pointed out, and an improvement method based on the idea of a Vision-LRF sensor fusion system is proposed. One feature of this new system is that the fusion rate changes depending on the congestion information of the environment. Experimental results of applying these systems to ApriAttenda™ are reported, and the efficiency of the proposed method is confirmed by experiment. As discussed here, efforts to achieve an advanced application using sensors independently are subject to an unavoidable limit, so a system design integrating information from two or more types of sensor is required. Because vision data containing abundant information plays a key role in such a complex system, further development of the vision system is desirable.
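The chapter's fusion equations are not reproduced on this page, but the idea of a congestion-dependent fusion rate can be illustrated with a minimal sketch. The weighting law, the 0-to-1 congestion measure, and all function names below are assumptions for illustration, not the authors' formulation:

```python
def fuse_position(vision_pos, lrf_pos, congestion):
    """Blend vision and LRF target-position estimates.

    congestion: 0.0 (empty scene) .. 1.0 (crowded scene). In a crowded
    scene an LRF leg detector is easily confused by other people, so
    more weight is shifted toward the vision estimate. This linear
    weighting law is an illustrative assumption.
    """
    w_vision = 0.5 + 0.5 * congestion
    w_lrf = 1.0 - w_vision
    return tuple(w_vision * v + w_lrf * l
                 for v, l in zip(vision_pos, lrf_pos))
```

With full congestion the LRF estimate is ignored entirely; with an empty scene the two sensors are averaged equally.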
... One such technique is pattern recognition, which looks for specific shapes, colors, and movement characteristics in the images captured by the robot's cameras [15]. This process is complex, as the robot must identify the correct set of parameters to characterize a person, which can be challenging given the diversity of operating environments [16]. Moreover, some techniques, such as the use of histogram of oriented gradients (HOG) descriptors and support vector machine classifiers, are computationally expensive for real-time applications on small autonomous robots [17]. ...
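For context, the HOG descriptors mentioned above reduce to per-cell gradient-orientation histograms. The pure-Python toy below hints at why computing them over every sliding window is expensive: each pixel contributes a gradient computation and a histogram vote. Names and parameters are illustrative; production detectors such as OpenCV's `HOGDescriptor` add block normalization and heavy optimization.

```python
import math

def hog_cell(gray, x0, y0, size=8, bins=9):
    """Unsigned-gradient orientation histogram for one HOG cell.

    gray: 2-D list of intensities. Every interior pixel of the cell
    votes into one of `bins` orientation bins, weighted by gradient
    magnitude -- a toy version of the HOG building block.
    """
    hist = [0.0] * bins
    for y in range(y0 + 1, y0 + size - 1):
        for x in range(x0 + 1, x0 + size - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist
```

A vertical intensity edge, for example, produces votes concentrated in the 0-degree bin.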
Article
Full-text available
In the ever-expanding sphere of assistive robotics, the pressing need for advanced methods capable of accurately tracking individuals within unstructured indoor settings has been magnified. This research endeavours to devise a real-time visual tracking mechanism that encapsulates high-performance attributes while maintaining minimal computational requirements. Inspired by the neural processes of the human brain’s visual information handling, our innovative algorithm employs a pattern image, serving as an ephemeral memory, which facilitates the identification of motion within images. This tracking paradigm was subjected to rigorous testing on a Nao humanoid robot, demonstrating noteworthy outcomes in controlled laboratory conditions. The algorithm exhibited a remarkably low false detection rate, less than 4%, and target losses were recorded in merely 12% of instances, thus attesting to its successful operation. Moreover, the algorithm’s capacity to accurately estimate the direct distance to the target further substantiated its high efficacy. These compelling findings serve as a substantial contribution to assistive robotics. The proficient visual tracking methodology proposed herein holds the potential to markedly amplify the competencies of robots operating in dynamic, unstructured indoor settings, and set the foundation for a higher degree of complex interactive tasks.
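A minimal sketch of such an ephemeral "pattern image" is exponential frame averaging, where large deviations from the remembered pattern mark motion. The decay constant and function names below are assumptions for illustration, not the paper's actual algorithm:

```python
def update_pattern(pattern, frame, decay=0.8):
    """Maintain a decaying memory of past frames and flag motion.

    pattern, frame: 2-D lists of pixel intensities. The returned
    motion map is the absolute deviation of the new frame from the
    memory; the memory is then blended toward the new frame. The
    decay value 0.8 is an illustrative assumption.
    """
    motion = [[abs(f - p) for f, p in zip(fr, pr)]
              for fr, pr in zip(frame, pattern)]
    new_pattern = [[decay * p + (1 - decay) * f for f, p in zip(fr, pr)]
                   for fr, pr in zip(frame, pattern)]
    return new_pattern, motion
```

Static background pixels converge to the pattern and produce near-zero motion values; a moving person produces a high-deviation region.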
... An orthogonal but relevant aspect to classify DETRAFO approaches is the intended use of robots. Many previous works focus on a generic goal (e.g., [18,16,14,5,23,26,17]), while other examples are tailored to specific usages, such as office assistant [6], nurse assistant [9], social companion [21], video surveillance [20,15], and transportation [28]. Some existing algorithms require that a specific person be tracked within the field of perception of the robot using skeletal detection [9], face detection [18], color tag detection [14], or leg detection [26]. ...
Chapter
The operation of a telepresence robot as a service robot has gained wide attention in robotics. The recent COVID-19 pandemic has boosted its use for medical uses, allowing patients to interact while avoiding the risk of contagion. While telepresence robots are designed to have a human operator that controls them, their sensing and actuation abilities can be used to achieve higher levels of autonomy. One desirable ability, which takes advantage of the mobility of a telepresence robot, is to recognize people and the space in which they operate. With the ultimate objective to assist individuals in office spaces, we propose an approach for rendering a telepresence robot autonomous with real-time, indoor human detection and pose classification, with consequent chaperoning of the human. We validate the approach through a series of experiments involving an Ohmni Telepresence Robot, using a standard camera for vision and an additional Lidar sensor. The evaluation of the robot’s performance and comparison with the state of the art shows promise of the feasibility of using such robots as office assistants. Keywords: Service-Oriented Robotic Systems, Autonomous Navigation, ROS, Service Robots, Human-Robot Interaction
... Another technology to implement a "follow me" robot is a person following robot with vision-based and sensor fusion tracking algorithms [9]. Using this algorithm, the person following robot targets a person, measures the distance between the person and the robot, and directs the platform toward him/her using stereo vision processing and laser range finder (LRF) sensing data. ...
Article
The technology of autonomous "follow me" platforms for carrying and moving objects has gone through rapid advancement. Numerous follow me robots are available with various driving technologies, yet their cost is high. These robots are not user-friendly and therefore not very successful. In this research, a fully automated, economical, fast, efficient and smart "Follow Me" robot is designed. This robot has the ability to carry luggage or move objects from one place to another, which helps pregnant women and elderly people carry their things. The autonomous follow me robot has two working modes: the first is the default mode and the second is Bluetooth (remote) mode. In default mode, the user walks in front of the ultrasonic sensor and the robot follows until the user goes beyond its range. In Bluetooth mode, the user interacts with the robot through a mobile application, whose graphical user interface (GUI) provides control of the robot. This framework enables the user to communicate with the robot at various levels of control (left, right, forward, backward, and stop). The application interface is kept simple so that it can be used by a wide range of patients.
... To better understand existing mobile companions that could influence the transportation infrastructure of urban cities, best-practice use cases are given. Not only do automotive manufacturers promote advanced digital solutions for self-driving technologies; other technology and mobility service providers such as Piaggio Fast Forward [27], Microsoft [22] and Toshiba [23] have also developed mobile robotic vehicles that follow humans and assist with various services. Only a few are available for sale; the majority are prototypes for research purposes. ...
Chapter
Urban mobility is changing due to the emergence of new technologies like autonomously navigating robots. In the future, various transport operators and micro mobility services will be integrated in an increasingly complex mobility system, potentially realizing benefits such as a reduction of congestion, travel costs, and emissions. The field of personal robotic transport agents is projected to increasingly play a role in urban mobility, hence in this study, prospective target groups and corresponding user needs concerning human-following robots for smart urban mobility applications are investigated. Building on an extensive literature review, three focus groups with a total of 19 participants are conducted, utilizing scenario-based design and personas. Results show clearly definable user needs and potential technological requirements for mobile robots deployed in urban road environments. The two most mentioned potential applications were found in the fields of leisure applications and in healthcare for elderly people. Based on these focus group results, two personal automated driving robots which differ in function, operation and interaction were designed. The focus group-based results and derived requirements shed light on the importance of context-sensitivity of robot design.
... These robots are very good examples of human-machine interaction. For example, the robot ApriAttenda [1] has the ability to follow a person and can help care for the elderly or a child. ...
Conference Paper
Full-text available
This paper is about a brand new robot and all its development stages, from design to show time. As an undergraduate research project (the LAP program at Atilim University), the robot TozTorUs is the outcome of the intensive efforts of a team. With the sensors it is equipped with, it navigates autonomously in its environment while avoiding obstacles. It can understand your questions and answer them using Google’s speech technologies. Although it is not a humanoid robot, with eye and mouth simulator LED displays it is as friendly as a human. TozTorUs can also be controlled using a mobile phone. Apart from this, it is able to adjust its height with respect to the visitor’s, allowing it to make eye contact with the person. Although TozTorUs is designed for welcoming, it may also be employed for consulting, security and elderly assistance.
Chapter
The lack of caregivers in an aging society is a major social problem. Without assistance, many of the elderly and disabled are unable to perform daily tasks. One important daily activity is shopping in supermarkets. Pushing a shopping cart and moving it from shelf to shelf is tiring and laborious, especially for customers with certain disabilities or the elderly. To alleviate this problem, we develop a person following shopping support robot using a Kinect camera that can recognize customer shopping actions or activities. Our robot can follow within a certain distance behind the customer. Whenever our robot detects the customer performing a “hand in shelf” action in front of a shelf, it positions itself beside the customer with a shopping basket so that the customer can easily put his or her product in the basket. Afterwards, the robot again follows the customer from shelf to shelf until he or she is done shopping. We conduct our experiments in a real supermarket to evaluate its effectiveness.
Conference Paper
Full-text available
This paper presents an efficient person tracking algorithm for a vision-based mobile robot using two independently moving cameras, each of which is mounted on its own pan/tilt unit. Without calibrating these cameras, the goal of our proposed method is to estimate the distance to a target appearing in the image sequences captured by the cameras. The main contributions of our approach include: 1) establishing the correspondence between the control inputs to the pan/tilt units and the pixel displacement in the image plane without using the intrinsic parameters of the cameras; and 2) derivation of the distance information from the correspondence between the centers of mass of the segmented color blobs in the left and right images without stereo camera calibration. Our proposed approach has been successfully tested on a mobile robot for the task of person following in real environments.
Chapter
Full-text available
The ongoing development of life support robots is presented by introducing the newly developed sharp-eared robot ApriAlpha™ V3 and the person-following robot ApriAttenda™ from the viewpoints of human interfaces and mobile intelligence. In the future, by making full use of advanced network technology, home-use robots are expected to be at the core of home network systems, and the widespread adoption of robots in everyday life is expected to be greatly facilitated by improvements in their working environment. Following the concept of UDRob™, environmental design, including objects, should be considered from the perspectives of both robots and humans. To realize life support robots, it is important to demonstrate what the robot can do in terms of actual tasks. The authors believe intelligent robots are the next technology whose development will decisively change the way people live. Other important issues are standardization of the robot's interface and safety. The activities of RSi (Robot Service Initiative) contribute to a common interface for information services, such as weather forecasts or news, for service providers (Narita, 2005). Such an interface is also discussed at the OMG (Object Management Group) meetings (Kotoku, 2005), (Mizukawa, 2005). As for safety, the discussion has been ongoing since the Aichi Expo. These activities will be fruitful in the near future.
Conference Paper
Full-text available
We address the problem of detecting and tracking people with a mobile robot. The need for following a person with a mobile robot arises in many different service robotic applications. The main problems of this task are real-time constraints, a changing background, varying illumination conditions and the non-rigid shape of the person to be tracked. The presented system has been tested extensively on a mobile robot in our everyday office environment.
Article
Human tracking is a fundamental research issue for mobile robots, since the coexistence of humans and robots is expected in the near future. In this paper, we present a new method for real-time tracking of a human walking around a robot using a laser range-finder. The method converts range data in r-θ coordinates to a 2D image in x-y coordinates. Human tracking is then performed using block matching between templates, i.e. appearances of human legs, and the input range data. This view-based human tracking method has the advantage of simplicity over conventional methods, which extract local minima in the range data. In addition, the proposed tracking system employs a particle filter to robustly track the human in case of occlusions. Experimental results using a real robot demonstrate the usefulness of the proposed method.
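The r-θ to x-y conversion step described above can be sketched as follows. The grid resolution, grid size, and function names are illustrative choices, not the paper's parameters:

```python
import math

def scan_to_grid(ranges, angle_min, angle_inc, res=0.05, size=200):
    """Rasterize an LRF scan given in polar (r, theta) form into a
    2-D occupancy image in x-y coordinates, centered on the robot.

    ranges: list of range readings in meters; beam i has angle
    angle_min + i * angle_inc (radians). res is meters per cell.
    The resulting binary image could then feed block matching
    against leg templates, as the paper describes.
    """
    grid = [[0] * size for _ in range(size)]
    half = size // 2
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_inc
        x = half + round(r * math.cos(theta) / res)
        y = half + round(r * math.sin(theta) / res)
        if 0 <= x < size and 0 <= y < size:
            grid[y][x] = 1   # mark the range return as occupied
    return grid
```

A single return at 1 m straight ahead, for instance, marks exactly one cell 20 cells to the right of the grid center at 5 cm resolution.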
Article
Future service robots will need to keep track of the persons in their environment. A number of people tracking systems have been developed for mobile robots, but it is currently impossible to make objective comparisons of their performance. This paper presents a comprehensive, quantitative evaluation of a state-of-the-art people tracking system for a mobile robot in an office environment, for both single and multiple persons.
Conference Paper
We have developed the person following robot ApriAttenda™. This robot can accompany a person using vision-based target detection and avoid obstacles with ultrasonic sensors while following the person. The robot first identifies an individual with its image processing system by detecting the person's region and recognizing the registered color and texture of his/her clothes. Usually, a person following robot has to detect and recognize the specified person and calculate his/her position in a complicated real-life environment of fixed objects and moving people. Our newly developed algorithm allows the robot to extract a particular individual from a cluttered background, and to find and reconnect with the person if it loses visual contact. Tracking people with stereo vision was realized by systematizing visual and motion control with a robust algorithm that utilizes various characteristics of the image data. The developed algorithm uses several analyses to extract information on the distance to each feature point, the speed of the target, and the color and texture of clothes for stable tracking in many situations, including changes of view due to self-motion, shifts in lighting, and objects similar to the target. The person following robot ApriAttenda™ was exhibited at Aichi EXPO 2005, where its robust functions and smooth person following capability were successfully demonstrated.
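One simple way to model a "registered color of clothes" is a coarse color histogram compared by histogram intersection. This is a generic sketch of that idea, not necessarily the authors' actual feature set; all names and bin counts are illustrative:

```python
def color_hist(pixels, bins=8):
    """Normalized coarse RGB histogram of a clothing region.

    pixels: list of (r, g, b) tuples with 0..255 channels. Quantizing
    each channel into `bins` levels gives a compact appearance model
    that can be registered once and matched every frame.
    """
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = float(len(pixels)) or 1.0
    return [h / total for h in hist]

def hist_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions,
    0.0 for fully disjoint colors."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

A red-shirt model matched against itself scores 1.0, while a green region scores 0.0, which is the kind of discrimination needed to reconnect with a lost target.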
Conference Paper
Monitoring multiple moving objects from a robot in motion is an essential technology in robotic application areas including service and security for human daily life. To this end, a method to track multiple walking humans from a mobile robot "in motion" with a laser range finder (LRF) is investigated in this paper. Geometric characteristics of human legs are considered to detect their positions from the LRF data. The frequency and phase of the walking motion are extracted using a pendulum model of the angle between the two legs and an extended Kalman filter. The algorithm with the human walking model anticipates the positions of moving humans. The effectiveness of the proposed method is evaluated with experiments tracking multiple walking humans in an indoor environment. An experimental testbed consisting of a mobile robot "Yamabico" and an LRF is employed to track people. The experiments and analysis showed that multiple walking people are tracked well from a mobile robot with the proposed method.
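The pendulum model of the inter-leg angle can be sketched as a sinusoid whose amplitude, frequency, and phase the extended Kalman filter would estimate. The constant-velocity position anticipation and all parameter names below are illustrative assumptions, not the paper's equations:

```python
import math

def predict_leg_angle(amp, freq_hz, phase, t):
    """Pendulum-like model of the angle between two walking legs:
    theta(t) = amp * sin(2*pi*f*t + phase). In the paper this kind
    of model is the state equation an EKF fits to LRF leg detections;
    here it is evaluated directly with assumed parameters."""
    return amp * math.sin(2.0 * math.pi * freq_hz * t + phase)

def predict_step_position(x0, speed, t):
    """Constant-velocity anticipation of a walker's position along
    one axis, used to predict where the legs will appear next."""
    return x0 + speed * t
```

At a quarter of a 1 Hz gait cycle the leg angle reaches its peak amplitude, which is when the two legs are maximally separated in the scan.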
Conference Paper
The paper presents two different methods for mobile robot tracking and following of a fast-moving person in an outdoor unstructured and possibly dynamic environment. The robot is equipped with a laser range-finder and an omnidirectional camera. The first method is based on visual tracking only, and while it works well at slow speeds and under controlled conditions, its performance quickly degrades as conditions become more difficult. The second method, which uses the laser and the camera in conjunction for tracking, performs well in dynamic and cluttered outdoor environments as long as target occlusions and losses are temporary. Experimental results and analysis are presented for the second approach.
Conference Paper
In this paper, the information services of "ApriAlpha™", such as news reading, controlling home appliances, and a question-answer system for recipes using network technologies, are explained. ApriAlpha™ is a mobile home robot which can offer security and information services. The robot acts as a voice-controlled human interface, so you can use ApriAlpha™ anytime and anywhere, even when you are busy doing something. ApriAlpha's controller is built on Toshiba's open robot controller architecture (ORCA) to allow easy integration of technologies such as agents, UPnP™, and question-answer systems. The practicability of ApriAlpha™ has been expanded by cooperating with networked home appliances. Finally, the infrastructure improvement of the physical environment is described to bring more home robots into practical use.
Conference Paper
We have proposed the concept of a robotic information home appliance, corresponding to one category of home robots, and developed ApriAlpha, a concept model of this appliance. ApriAlpha is a wheeled, human-friendly home robot which controls advanced home appliances, standing between them and their users as a voice-controlled information terminal, and offers security and information services to users. We have integrated various robot technologies such as voice communication, image recognition, planning and motion control on ApriAlpha, and we have introduced a framework based on the distributed-object open robot controller architecture (ORCA), which we are currently developing in view of its easy extension and efficient development. This paper describes the proposed robotic information home appliance and its concept model, ApriAlpha. The functions of the developed robot are confirmed by performing several demonstrations, and the merit of applying the ORCA framework to the home robot controller is also confirmed through its development.