Article

Autonomous navigation and obstacle avoidance in smart robotic wheelchairs


Abstract

This review paper provides a comprehensive analysis of the advancements, challenges, and methodologies in autonomous navigation and obstacle avoidance for smart robotic wheelchairs. The integration of robotics and assistive technology has revolutionized mobility solutions for individuals with mobility impairments, enabling them to navigate complex environments independently. The paper examines the various sensor modalities, machine learning algorithms, and computer vision techniques employed for environment perception and obstacle recognition. It discusses path planning algorithms, motion control strategies, and decision-making processes for autonomous navigation. The review also addresses limitations, such as localization accuracy and dynamic environment modelling, while highlighting recent research advancements and suggesting future directions. Overall, this paper serves as a valuable resource for researchers and practitioners in the field of smart robotic wheelchairs, aiming to enhance mobility and quality of life for individuals with mobility impairments.


... However, when a user is onboard, this behavior is perceived as a sudden and unpredictable change in direction, potentially leading to uncomfortable and even dangerous situations, such as overturning. Consequently, significant modifications are necessary for traditional algorithms to address these issues, adding complexity to the solution [72][73][74][75][76][77]. With the increasing popularity and capabilities of deep learning methods, there has been a development of robot navigation methods utilizing neural networks. ...
Article
Full-text available
Driving a motorized wheelchair is not without risk and requires high cognitive effort to obtain good environmental perception. As a result, people with severe disabilities are at risk, potentially lowering their social engagement and thus affecting their overall well-being. To address this, we designed a cooperative driving system for obstacle avoidance based on a trained reinforcement learning (RL) algorithm. The system takes the desired direction and speed from the user via a joystick and the obstacle distribution from a LiDAR placed in front of the wheelchair. Considering both inputs, the system outputs a pair of forward and rotational speeds that ensure obstacle avoidance while being as close as possible to the user commands. We validated it through simulations and compared it with a vector field histogram (VFH). The preliminary results show that the RL algorithm does not disruptively alter the user intention, reduces the number of collisions, and provides better door passages than a VFH; furthermore, it can be integrated on an embedded device. However, it still suffers from higher jerkiness.
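The shared-control idea in this abstract — pass the user's joystick command through when space is free, and bend it only near obstacles — can be sketched without the trained RL policy. The blending rule, safety distance, and gains below are illustrative assumptions, not the paper's learned behavior:

```python
import math

def cooperative_command(user_v, user_w, ranges, angles, safe_dist=1.0):
    """Blend the user's joystick command (forward speed, turn rate) with a
    simple avoidance term from LiDAR ranges (one reading per angle, radians).
    A hypothetical stand-in for the paper's trained RL policy."""
    # Find the closest obstacle roughly in front of the wheelchair.
    dist, ang = min(
        ((r, a) for r, a in zip(ranges, angles) if abs(a) < math.pi / 2),
        key=lambda ra: ra[0],
        default=(float("inf"), 0.0),
    )
    if dist >= safe_dist:
        return user_v, user_w           # free space: pass the command through
    scale = max(dist / safe_dist, 0.0)  # slow down as the obstacle gets closer
    # Steer away from the obstacle's side while staying close to user intent.
    avoid_w = -math.copysign(1.0 - scale, ang if ang != 0 else 1.0)
    return user_v * scale, user_w + avoid_w
```

With no obstacle within the safety radius the user command is returned unchanged, which mirrors the "does not disruptively alter the user intention" property.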
Article
Full-text available
This research paper investigates the impact of wheelchair accessibility on the overall well-being and quality of life of individuals with mobility limitations. The paper highlights the importance of wheelchair accessibility as a key determinant of independence, social participation, and overall societal inclusion. By examining the barriers and challenges faced by wheelchair users in various environments, including public spaces, workplaces, and educational institutions, this study sheds light on the profound implications of limited accessibility. Furthermore, it explores the benefits and potential outcomes of improved wheelchair accessibility, such as increased opportunities for employment, education, and social engagement. The research emphasizes the need for proactive measures, including policy reforms, infrastructure modifications, and awareness campaigns, to enhance wheelchair accessibility and break down the physical and attitudinal barriers that hinder full participation. By recognizing wheelchair accessibility as a crucial component of a more inclusive society, this research contributes to the ongoing dialogue and advocacy efforts aimed at promoting equal rights and opportunities for individuals with mobility impairments.
Article
Full-text available
Assistive technology in rehabilitation programs is vital for people with vision impairments worldwide. The term "blind assistive technology" refers to mobility devices specifically designed to provide position, orientation and mobility assistance for visually impaired individuals during indoor and outdoor activities. The paper presents a comprehensive evaluation of 140 research articles published over the past 75 years (1946 to 2022). This research analyses the evolution of assistive technology aids in depth, in terms of the sensing technique followed and the algorithms employed for obstacle detection, localization, object recognition, depth estimation and scene understanding. It also covers the functional attributes of each aid, its feedback type, and the assistive solutions embedded in it. It evaluates the assistive aids for their usability index, portability, battery life, feedback type, and aesthetics. The survey findings reveal that optical and sonic sensor-based aids prioritize speed, weight, and battery life but lack major functionalities, achieving an average performance score of 62%. Stereo, monocular, SLAM, and 3-D point cloud-based aids excel in obstacle distance estimation and avoidance but require greater memory resources, with a lower performance score of 41%. Artificial intelligence and cloud-based aids offer comprehensive scene details but demand complex computational capabilities, achieving a performance score of 44%. However, the most suitable technologies for developing state-of-the-art solutions for blind individuals are the multisensor fusion-based and guide robot-based aids, which provide a majority of the essential assistive functions with a performance score of 51%. The study highlights possible challenges associated with implementing assistive technology aids, emphasizes the importance of user acceptability, and stresses the need for real-time evaluation of blind aids.
The paper lays a concrete foundation and direction for future development, emphasizing the critical challenges faced by blind users, including boarding trains, traveling on public transport, shopping in a supermarket, avoiding dynamic obstacles, and real-time understanding of the surrounding scene. Addressing these key concerns is crucial for the continued development and improvement of assistive technology aids for the visually impaired, leading to enhanced independence, mobility, and ultimately, a higher quality of life.
Article
Full-text available
This research paper provides a comprehensive review of methodologies for path planning and optimization of mobile robots. With the rapid development of robotics technology, path planning and optimization have become fundamental areas of research for achieving efficient and safe autonomous robot navigation. In this paper, we review the classic and state-of-the-art techniques of path planning and optimization, including artificial potential fields, A* algorithm, Dijkstra's algorithm, genetic algorithm, swarm intelligence, and machine learning-based methods. We analyze the strengths and weaknesses of each approach and discuss their application scenarios. Moreover, we identify the challenges and open problems in this field, such as dealing with dynamic environments and real-time constraints. This paper serves as a comprehensive reference for researchers and practitioners in the robotics community, providing insights into the latest trends and developments in path planning and optimization for mobile robots.
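Of the classic planners surveyed here, the A* algorithm is the usual starting point. A minimal sketch on a 4-connected occupancy grid with a Manhattan heuristic follows; the grid encoding (0 = free, 1 = blocked) and unit step cost are assumptions made for illustration:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (0 = free,
    1 = blocked) with a Manhattan heuristic. Returns the path as a
    list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), start)]          # priority queue ordered by f = g + h
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:                    # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cell] + 1       # unit cost per grid move
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

Dijkstra's algorithm is the special case with the heuristic set to zero, which makes the two easy to compare on the same grid.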
Article
Full-text available
The Low-Cost Voice Controlled Wheelchair with Raspberry Pi is an innovative assistive technology designed to improve the mobility and independence of people with disabilities. This research aims to develop a wheelchair system that can be operated using voice commands at an affordable price, making it accessible to a wider range of individuals with limited mobility. The device is built on the Raspberry Pi, a reasonably priced, credit-card-sized computer, and uses a simple yet efficient voice recognition technique to let users control the wheelchair with vocal commands. The system's hardware components include a Raspberry Pi, a microphone, and motor controllers. The software uses the Python programming language and open-source voice recognition technology to recognize voice commands, making it easy for users to navigate their environment independently. The system has been tested on a prototype and has shown promising results in terms of accuracy and reliability. The Low-Cost Voice Controlled Wheelchair with Raspberry Pi can give disabled persons new levels of mobility and independence, enhancing their quality of life and their capacity to carry out daily tasks.
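The control path described — recognized word in, motor command out — reduces to a lookup table with a safe default. The command names and duty-cycle values below are hypothetical, and the actual GPIO wiring and recognizer output format depend on the particular build:

```python
# Hypothetical command table: each recognized word maps to a (left, right)
# motor duty-cycle pair in [-1, 1]; real values depend on the motor drivers.
COMMANDS = {
    "go":    (0.6, 0.6),
    "back":  (-0.4, -0.4),
    "left":  (-0.3, 0.3),
    "right": (0.3, -0.3),
    "stop":  (0.0, 0.0),
}

def dispatch(transcript):
    """Map a recognizer transcript to motor duty cycles; unknown or noisy
    input falls back to a safe stop rather than an arbitrary motion."""
    return COMMANDS.get(transcript.strip().lower(), (0.0, 0.0))
```

Defaulting to a full stop on unrecognized input is the safety-critical design choice here: a misheard word should never move the chair.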
Article
Full-text available
This research study addresses the advantages and difficulties of Cloud Computing (CC) in Supply Chain Management (SCM). An overview of the current state of SCM and the difficulties businesses in this sector confront is presented at the beginning of the article. It then explores how cloud-based solutions can address these challenges, such as through the use of real-time data analytics, collaborative platforms, and intelligent automation. Additionally, the paper investigates the potential risks and challenges associated with cloud-based SCM, including data security and privacy concerns, vendor lock-in, and the need for robust disaster recovery plans. To provide a comprehensive understanding of the topic, the paper includes a case study that illustrates how a company successfully implemented cloud-based SCM solutions to improve their operations. The paper concludes by highlighting the key takeaways and insights from the research, and by identifying potential future directions for research in this field. Overall, this study delivers insightful information about the function of CC in SCM and offers useful suggestions for companies looking to use this technology to enhance their supply chain operations.
Article
Full-text available
In recent years, commercial and research interest in service robots working in everyday environments has grown. These devices are expected to move autonomously in crowded environments, maximizing not only movement efficiency and safety parameters, but also social acceptability. Extending traditional path planning modules with socially aware criteria, while maintaining fast algorithms capable of reacting to human behavior without causing discomfort, can be a complex challenge. Solving this challenge has involved the development of proactive systems that take into account cooperation (and not only interaction) with the people around them, the determined incorporation of approaches based on Deep Learning, or the recent fusion with skills coming from the field of human–robot interaction (speech, touch). This review analyzes approaches to socially aware navigation and classifies them according to the strategies followed by the robot to manage interaction (or cooperation) with humans.
Article
Full-text available
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet some key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are also reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with four to five cm accuracy can be achieved based on this pre-generated map, online Lidar scan matching, and a tightly fused inertial system.
Article
Full-text available
In this paper, we propose a modular navigation system that can be mounted on a regular powered wheelchair to assist disabled children and the elderly with autonomous mobility and shared-control features. The lack of independent mobility drastically affects an individual’s mental and physical health making them feel less self-reliant, especially children with Cerebral Palsy and limited cognitive skills. To address this problem, we propose a comparatively inexpensive and modular system that uses a stereo camera to perform tasks such as path planning, obstacle avoidance, and collision detection in environments with narrow corridors. We avoid any major changes to the hardware of the wheelchair for an easy installation by replacing wheel encoders with a stereo camera for visual odometry. An open source software package, the Real-Time Appearance Based Mapping package, running on top of the Robot Operating System (ROS) allows us to perform visual SLAM that allows mapping and localizing itself in the environment. The path planning is performed by the move base package provided by ROS, which quickly and efficiently computes the path trajectory for the wheelchair. In this work, we present the design and development of the system along with its significant functionalities. Further, we report experimental results from a Gazebo simulation and real-world scenarios to prove the effectiveness of our proposed system with a compact form factor and a single stereo camera.
Article
Full-text available
A wheelchair locomotion simulator (WCS) is an innovative solution to assess the biomechanical cost of wheelchairs (WC) accessibility in a controlled and safe virtual environment. In this context, this paper presents a haptic feedback control architecture based on a direct model reference adaptive control (MRAC) with intelligent tuning of its adaptation gains. The control objective is to follow the reference model velocity while producing the force feedback during the push phase, in order to faithfully recreate the dynamic behavior of the WC in a virtual environment. To accomplish this, a wheelchair ergometer model with friction is used to provide realistic navigation in the virtual environment (VE), by detecting and driving the wheelchair wheels. A two-wheeled vehicle model including the rolling resistance aspect is used to describe the wheelchair dynamic behavior. Since the controller adaptation gains are operated on the tracking error between the reference model and the simulator output, the WC model is also used as a reference model to specify the desired dynamics of the adaptive control system. For an optimal solution, an intelligent metaheuristic algorithm Elephant Herding Optimization (EHO) is employed to optimize the controller gain adaptation parameter to keep the tracking error as small as possible. Finally, the simulation results obtained show the effectiveness of the proposed control strategy.
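A direct MRAC of the kind described can be illustrated on a first-order plant: the adaptive gains are driven by the tracking error between the reference model and the plant output. This toy sketch uses a fixed adaptation gain `gamma` in place of the paper's EHO-tuned parameters, and the plant and model coefficients are invented for the example:

```python
def simulate_mrac(a=1.0, b=2.0, am=4.0, bm=4.0, gamma=5.0,
                  dt=0.001, steps=50000):
    """Direct MRAC for the scalar plant x' = a*x + b*u tracking the
    reference model xm' = -am*xm + bm*r, with the Lyapunov-based update
    th' = -gamma * e * (regressor) for known sign of b. A toy stand-in
    for the paper's ergometer model and EHO-tuned gain adaptation."""
    x = xm = 0.0
    th_r = th_x = 0.0        # adaptive feedforward / feedback gains
    r = 1.0                  # constant reference command
    for _ in range(steps):
        u = th_r * r + th_x * x
        e = x - xm           # model-following error drives adaptation
        th_r += -gamma * e * r * dt
        th_x += -gamma * e * x * dt
        x += (a * x + b * u) * dt       # forward-Euler plant step
        xm += (-am * xm + bm * r) * dt  # forward-Euler reference model step
    return x, xm
```

For these coefficients the ideal gains are th_r* = bm/b = 2 and th_x* = -(a + am)/b = -2.5; the Lyapunov argument guarantees the model-following error decays, which the simulation reproduces numerically.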
Article
Full-text available
An effective path-planning algorithm in three-dimensional (3D) environments, based on a geometric approach for redundant/hyper-redundant manipulators, is presented in this paper. The method works in real time within confined spaces cluttered with obstacles. Using potential fields in 3D, a middle path is generated for point robots. Beams are generated tangent to the path points, which constructs a basis for preparing a collision-free path for the manipulator. Then, employing a simple control strategy without interaction between the links, motion planning is achieved by advancing the end-effector of the manipulator through narrow terrain while keeping each link's joints on this path until the end-effector reaches the goal. The method is simple and robust, and it significantly increases the maneuvering ability of the manipulator in 3D environments compared to existing methods, as illustrated with examples.
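The potential-field step that generates the middle path for a point robot can be sketched as one gradient-descent move on an attractive-plus-repulsive field (the classic Khatib formulation, shown here in 2D for brevity; the gains and cutoff radius are arbitrary illustration values):

```python
import math

def potential_step(pos, goal, obstacles,
                   k_att=1.0, k_rep=0.5, rho0=1.0, step=0.1):
    """One gradient-descent step on an attractive/repulsive potential field.
    pos, goal, and obstacles are 2-D points; rho0 is the repulsion cutoff
    radius beyond which obstacles exert no force."""
    # Attractive force pulls linearly toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < rho0:                        # repulsion only inside cutoff
            mag = k_rep * (1.0 / d - 1.0 / rho0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0            # normalize to a fixed step
    return pos[0] + step * fx / norm, pos[1] + step * fy / norm
```

Iterating this step traces the middle path the abstract mentions; the 3D version simply adds a z component to each force term.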
Article
Full-text available
The development of light detection and ranging (lidar) technology began in the 1960s, following the invention of the laser, the central component of the system, which integrates laser scanning with an inertial measurement unit (IMU) and a Global Positioning System (GPS). Lidar technology is spreading to many different areas of application, from autonomous vehicles, for road detection and object recognition, to the maritime sector, including object detection for autonomous navigation, monitoring ocean ecosystems, mapping coastal areas, and other diverse applications. This paper presents lidar system technology and reviews its application in the modern road transportation and maritime sectors. Some of the better-known lidar systems for practical applications, on which current commercial models are based, are presented, and their advantages and disadvantages are described and analyzed. Moreover, current challenges and future trends of application are discussed. This paper also provides a systematic review of recent scientific research on the application of lidar system technology and the corresponding computational algorithms for data analysis, mainly focusing on deep learning algorithms, in the modern road transportation and maritime sectors, based on an extensive analysis of the available scientific literature.
Article
Full-text available
The real-time segmentation of sidewalk environments is critical to achieving autonomous navigation for robotic wheelchairs in urban territories. A robust and real-time video semantic segmentation offers an apt solution for advanced visual perception in such complex domains. The key to this proposition is to have a method with lightweight flow estimations and reliable feature extractions. We address this by selecting an approach based on recent trends in video segmentation. Although these approaches demonstrate efficient and cost-effective segmentation performance in cross-domain implementations, they require additional procedures to put their striking characteristics into practical use. We use our method for developing a visual perception technique to perform in urban sidewalk environments for the robotic wheelchair. We generate a collection of synthetic scenes in a blending target distribution to train and validate our approach. Experimental results show that our method improves prediction accuracy on our benchmark with tolerable loss of speed and without additional overhead. Overall, our technique serves as a reference to transfer and develop perception algorithms for any cross-domain visual perception applications with less downtime.
Article
Full-text available
In this paper, a new path tracker is proposed for autonomous robots by re-designing a classical obstacle avoidance algorithm, the "Follow the Gap Method" (FGM). Until now, the FGM has not been used to dynamically track a global plan of consecutive waypoints, yet this is a fundamental requirement for autonomous robots. To use the FGM as a dynamic tracker, the proposed methodology borrows the "Look Ahead Distance" (LAD) from geometric path tracking methods and adapts it to the local planner. The LAD is defined as the distance from the robot to the desired waypoint on the path to be tracked. In the proposed solution, a dynamic and optimized LAD for the local planner is defined, which is automatically adjusted by the robot velocity. The dynamic LAD function is optimized to increase tracking, avoidance, and comfort capabilities. This study is the first to use the FGM together with a global planner as part of a whole autonomous system. Another novelty of the paper is the optimization of the LAD: it is optimized by taking into account not only the tracking error but also the distance to obstacles and comfort metrics simultaneously, for the first time in the literature. The optimization is performed with various weight coefficients in the cost function. Three metrics are used to compare the effect of the weight coefficients on the optimization: the Root Mean Square (RMS) values of "Distance to Path", "Distance to Obstacle" and "Magnitude of Total Acceleration". For instance, according to the experiments, when the tracking coefficient is doubled in the optimization, the distance-to-path metric goes from 0.424 m to 1.23 m, indicating that the robot is tracking better. Similar effects on the other metrics are observed when the related coefficients are changed. Besides the simulations, real-world experiments are performed on a real autonomous wheelchair platform to show the real-time performance of the proposed approach.
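The core of the proposal — a look-ahead distance scheduled by robot velocity, used to select the waypoint handed to the local planner — can be sketched as follows. The linear gain and clamp values are placeholders, not the optimized parameters from the paper's cost function:

```python
import math

def look_ahead_distance(v, k=0.8, ld_min=0.5, ld_max=3.0):
    """Velocity-scheduled look-ahead distance: grows linearly with speed
    and is clamped to [ld_min, ld_max]. Gains are illustrative only."""
    return min(max(k * v, ld_min), ld_max)

def target_waypoint(pose, path, v):
    """Return the first waypoint on the global plan at least one
    look-ahead distance away from the robot pose (x, y)."""
    ld = look_ahead_distance(v)
    for wp in path:
        if math.hypot(wp[0] - pose[0], wp[1] - pose[1]) >= ld:
            return wp
    return path[-1]                 # near the end: aim at the final waypoint
```

The selected waypoint then plays the role of the "goal point" inside the FGM, which is what turns the avoidance algorithm into a dynamic tracker.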
Article
Full-text available
For autonomous navigation, three main functions are essential: finding the location, creating the map, and computing the optimum path. Since human operators must study the map and correlate it with aerial pictures to locate target locations, consistent simultaneous localization and mapping is a challenging task. Light detection and ranging (LiDAR) can create a 2-dimensional (2D) map and generate positional data for indoor areas, but it fails in the presence of dynamic objects. Global positioning system (GPS) data, offering precise location tracking in outdoor spaces, can compensate for this weakness of LiDAR. Therefore, we design a robot operating system (ROS) based vehicular system integrating GPS and LiDAR data. An Inertial Measurement Unit (IMU) is used to make an educated approximation for LiDAR registration. A Rao-Blackwellized particle filter (RBPF) based Gmapping algorithm has been adopted in the proposed system, using sensor data for navigation and mapping, where each particle maintains its own map of the surroundings. The computational complexity due to large particle populations in the RBPF is reduced using Gaussian-distribution-based convergence. The experiments are carried out in moderate-sized and large room environments, both with and without obstacles. The system generates a 2D map of unknown environments, minimizing the cumulative error due to the relative measurement of LiDAR data. The proposed system provides autonomous driving in an unfamiliar environment, increasing localization accuracy by solving the error accumulation problem in an unconstrained environment. The Gmapping-based implementation succeeded in generating maps accurately, with a trajectory error of about 0.094 cm.
Article
Full-text available
Smart wearable technologies such as fitness trackers are creating many new opportunities to improve the quality of life for everyone. It is usually impossible for visually impaired people to orientate themselves in large spaces and navigate an unfamiliar area without external assistance. The design space for assistive technologies for the visually impaired is complex, involving many design parameters including reliability, transparent object detection, handsfree operations, high-speed real-time operations, low battery usage, low computation and memory requirements, ensuring that it is lightweight, and price affordability. State-of-the-art visually impaired devices lack maturity, and they do not fully meet user satisfaction, thus more effort is required to bring innovation to this field. In this work, we develop a pair of smart glasses called LidSonic that uses machine learning, LiDAR, and ultrasonic sensors to identify obstacles. The LidSonic system comprises an Arduino Uno device located in the smart glasses and a smartphone app that communicates data using Bluetooth. Arduino collects data, manages the sensors on smart glasses, detects objects using simple data processing, and provides buzzer warnings to visually impaired users. The smartphone app receives data from Arduino, detects and identifies objects in the spatial environment, and provides verbal feedback about the object to the user. Compared to image processing-based glasses, LidSonic requires much less processing time and energy to classify objects using simple LiDAR data containing 45-integer readings. We provide a detailed description of the system hardware and software design, and its evaluation using nine machine learning algorithms. The data for the training and validation of machine learning models are collected from real spatial environments. We developed the complete LidSonic system using off-the-shelf inexpensive sensors and a microcontroller board costing less than USD 80. 
The intention is to provide a design of an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. This work is expected to open new directions for smart glasses design using open software tools and off-the-shelf hardware.
Article
Full-text available
Many wheelchair users depend on others to control the movement of their wheelchairs, which significantly affects their independence and quality of life. Smart wheelchairs offer a degree of self-dependence and the freedom to drive one's own vehicle. In this work, we designed and implemented a low-cost software and hardware method to steer a robotic wheelchair. Building on this method, we developed our own Android mobile app based on the Flutter framework. A convolutional neural network (CNN)-based network-in-network (NIN) structure integrated with a voice recognition model was also developed and configured to build the mobile app. The technique was implemented and configured using an offline Wi-Fi network hotspot between the software and hardware components. Five voice commands (yes, no, left, right, and stop) guided and controlled the wheelchair through the Raspberry Pi and DC motor drives. The overall system was evaluated based on a trained and validated English speech corpus of isolated words spoken by native Arabic speakers to assess the performance of the Android OS application. The maneuverability performance of indoor and outdoor navigation was also evaluated in terms of accuracy. The results indicated an accuracy of approximately 87.2% in predicting the five voice commands. Additionally, in the real-time performance test, the root-mean-square deviation (RMSD) values between the planned and actual nodes for indoor/outdoor maneuvering were 1.721 × 10−5 and 1.743 × 10−5, respectively.
Article
Full-text available
The decision to purchase the best available electric power wheelchair (EPWC) for a person with a disability in a low-resource context is very stressful, whether due to financial circumstances or the availability of medical solutions. The study's objective is to assess the EPWC options available on the market, focusing on a set of conflicting criteria. In this research, three multi-criteria decision-making (MCDM) approaches are used: the ENTROPY method for calculating the weights of the various parameters, and the COPRAS and EDAS methods for evaluating and ranking alternatives. COPRAS and EDAS are applied separately to rank the selected wheelchair models, and to check the robustness of the applied methods, a sensitivity analysis on the cost criterion is carried out. The results show that under both methods EPWC-1 is the top-priority model to buy, whereas among the ten alternatives EPWC-7 is the worst model under COPRAS and EPWC-10 the worst under EDAS.
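The ENTROPY weighting step the study uses for criteria weights is standard and compact: normalize each criterion column, compute its Shannon entropy, and turn the divergence from maximum entropy into a weight. A pure-Python sketch for a benefit-type decision matrix with positive entries (the matrix values below are illustrative, not the study's data):

```python
import math

def entropy_weights(matrix):
    """ENTROPY objective weighting for an m-alternatives x n-criteria
    decision matrix with positive entries. Criteria whose values vary
    more across alternatives receive larger weights."""
    m = len(matrix)       # number of alternatives
    n = len(matrix[0])    # number of criteria
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]                 # column normalization
        # Shannon entropy scaled to [0, 1] by log(m).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1.0 - e)                   # degree of divergence
    s = sum(divergence)
    return [d / s for d in divergence]
```

A criterion that is identical for every alternative carries no information, so its entropy is maximal and its weight goes to zero, as the test below shows.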
Article
Full-text available
One of the most challenging tasks for autonomous robots is avoiding unexpected obstacles during path-following operation. The follow the gap method (FGM) is one of the most popular obstacle avoidance algorithms; it recursively guides the robot to the goal state by considering the angle to the goal point and the distance to the closest obstacles. It selects the largest gap around the robot, where the gap angle is calculated from the vector to the midpoint of the largest gap. In this paper, a novel obstacle avoidance procedure is developed and applied to a real, fully autonomous wheelchair. The proposed algorithm improves the FGM's travel safety and brings a new solution to the obstacle avoidance task. In the proposed algorithm, the largest gap is selected based on gap width. Moreover, the avoidance angle (similar to the gap center angle of the FGM) is calculated considering the locus of points equidistant from the obstacles, which forms obstacle circles. Monte Carlo simulations are used to test the proposed algorithm, and according to the results, the new procedure guides the robot along safer trajectories than the classical FGM. The real experimental test results parallel the simulations and show the real-time performance of the proposed approach.
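The paper's key change to FGM — selecting the gap by width rather than by the classical angle rule — can be approximated on a planar scan: mark readings beyond a clearance threshold as free, find the longest free run, and head for its angular midpoint. The threshold and the midpoint heuristic below are simplifications of the paper's obstacle-circle construction:

```python
def widest_gap_heading(ranges, angles, clearance=1.5):
    """Pick the widest free gap in a planar scan (readings with range above
    `clearance` count as free) and return the heading of its midpoint.
    Returns None when no reading clears the threshold."""
    best = (0, None, None)                 # (gap length, start idx, end idx)
    start = None
    for i, r in enumerate(ranges):
        if r > clearance and start is None:
            start = i                      # a free run begins here
        if (r <= clearance or i == len(ranges) - 1) and start is not None:
            end = i if r > clearance else i - 1
            if end - start + 1 > best[0]:  # keep the widest run seen so far
                best = (end - start + 1, start, end)
            start = None
    if best[1] is None:
        return None
    return 0.5 * (angles[best[1]] + angles[best[2]])
```

In the full method this heading would still be blended with the goal direction; here only the width-based gap selection is shown.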
Article
Full-text available
Social robot navigation in public spaces, buildings or private houses is a difficult problem that is not well solved due to environmental constraints (buildings, static objects etc.), pedestrians and other mobile vehicles. Moreover, robots have to move in a human-aware manner—that is, robots have to navigate in such a way that people feel safe and comfortable. In this work, we present two navigation tasks, social robot navigation and robot accompaniment, which combine machine learning techniques with the Social Force Model (SFM) allowing human-aware social navigation. The robots in both approaches use data from different sensors to capture the environment knowledge as well as information from pedestrian motion. The two navigation tasks make use of the SFM, which is a general framework in which human motion behaviors can be expressed through a set of functions depending on the pedestrians’ relative and absolute positions and velocities. Additionally, in both social navigation tasks, the robot’s motion behavior is learned using machine learning techniques: in the first case using supervised deep learning techniques and, in the second case, using Reinforcement Learning (RL). The machine learning techniques are combined with the SFM to create navigation models that behave in a social manner when the robot is navigating in an environment with pedestrians or accompanying a person. The validation of the systems was performed with a large set of simulations and real-life experiments with a new humanoid robot denominated IVO and with an aerial robot. The experiments show that the combination of SFM and machine learning can solve human-aware robot navigation in complex dynamic environments.
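The Social Force Model underlying both navigation tasks combines a relaxation term toward the desired velocity with exponential repulsion from each pedestrian. A minimal sketch with Helbing-style parameters follows; the constants are illustrative, and the learned components the paper adds on top are omitted:

```python
import math

def social_force(pos, vel, goal, pedestrians, desired_speed=1.0,
                 tau=0.5, A=2.0, B=0.3, radius=0.4):
    """Resultant 2-D force on the robot under a basic Social Force Model:
    relaxation toward the desired velocity plus exponential repulsion
    from each pedestrian. All parameter values are illustrative."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    # Drive term: relax current velocity toward the goal-directed one.
    fx = (desired_speed * dx / d - vel[0]) / tau
    fy = (desired_speed * dy / d - vel[1]) / tau
    for px, py in pedestrians:
        ex, ey = pos[0] - px, pos[1] - py
        dist = math.hypot(ex, ey) or 1e-9
        # Social repulsion decays exponentially with separation distance.
        mag = A * math.exp((2 * radius - dist) / B)
        fx += mag * ex / dist
        fy += mag * ey / dist
    return fx, fy
```

The learning-based variants described in the abstract replace or retune these hand-set terms while keeping the same force-summation structure.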
Article
Full-text available
Urban environments, university campuses, and public and private buildings often present architectural barriers that prevent people with disabilities and special needs to move freely and independently. This paper presents a systematic mapping study of the scientific literature proposing devices, and software applications aimed at fostering accessible wayfinding and navigation in indoor and outdoor environments. We selected 111 out of 806 papers published in the period 2009–2020, and we analyzed them according to different dimensions: at first, we surveyed which solutions have been proposed to address the considered problem; then, we analyzed the selected papers according to five dimensions: context of use, target users, hardware/software technologies, type of data sources, and user role in system design and evaluation. Our findings highlight trends and gaps related to these dimensions. The paper finally presents a reflection on challenges and open issues that must be taken into consideration for the design of future accessible places and of related technologies and applications aimed at facilitating wayfinding and navigation.
Article
Full-text available
Disability is a disruption or limitation of a person’s body functions in carrying out daily activities. A person with physical disabilities needs an assistive device such as a wheelchair. The latest wheelchair development is the smart wheelchair. Smart wheelchairs require a control system that detects obstacles quickly, primarily to keep users safe. One particularly dangerous obstacle is descending stairs. Therefore, the researchers propose a descending-stairs detection system for smart wheelchairs. The proposed method uses the gray level co-occurrence matrix (GLCM) for feature extraction, learning vector quantization (LVQ) for classification, and sequential forward selection (SFS) for feature selection. Based on the simulation results, SFS selects two GLCM features, contrast and dissimilarity, which yield the best accuracy of 94.5%. This improves on the six-feature GLCM method with LVQ classification and no feature selection, which achieved 92.5% accuracy in offline testing. Accuracy decreased to 78.21% when detecting floors and 89.06% when detecting descending stairs in real-time system testing.
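The two GLCM features the abstract reports as most discriminative, contrast and dissimilarity, are straightforward to compute from a normalized co-occurrence matrix. Below is a minimal sketch for a single pixel offset; the function names and the quantization level are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img: 2-D integer array with values in [0, levels).
    """
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Sum of p(i,j) * (i-j)^2 — large for sharp gray-level transitions."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def dissimilarity(p):
    """Sum of p(i,j) * |i-j| — a linear analogue of contrast."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * np.abs(i - j)))
```

On a uniform image both features are zero; on a texture with abrupt transitions (e.g., a stair edge viewed from above) they grow, which is what makes them useful for the classifier.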
Article
Full-text available
With the emergence of COVID-19, mobile health applications have increasingly become crucial for contact tracing, information dissemination, and pandemic control in general. Apps warn users if they have been close to an infected person for a sufficient time and are therefore potentially at risk. The distance measurement accuracy heavily affects the estimated probability of being infected. Most of these applications use the electromagnetic field produced by Bluetooth Low Energy technology to estimate distance. Nevertheless, radio interference from numerous factors, such as crowding, obstacles, and user activity, can lead to wrong distance estimations and, in turn, to wrong decisions. Moreover, most social distancing criteria recognized worldwide prescribe different distances depending on the person's activity and the surrounding environment. In this study, to enhance the performance of COVID-19 tracking apps, a human activity classifier based on a convolutional deep neural network is provided. In particular, the raw data coming from the accelerometer sensor of a smartphone are arranged to form an image with several channels (HAR-Image), which serves as a fingerprint of the in-progress activity and can be used as an additional input by tracking applications. Experimental results obtained by analyzing real data have shown that HAR-Images are effective features for human activity recognition. Indeed, k-fold cross-validation results on a real dataset achieved an accuracy very close to 100%.
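The HAR-Image idea described above — arranging raw tri-axial accelerometer samples into a multi-channel image — can be illustrated with a few lines of array reshaping. This is only a plausible sketch of the arrangement (one channel per axis, row-major fill); the exact layout used in the paper may differ.

```python
import numpy as np

def har_image(accel, rows=16, cols=16):
    """Arrange a raw tri-axial accelerometer stream into a 3-channel
    'HAR-Image' of shape (rows, cols, 3), one channel per axis.

    accel: array of shape (n_samples, 3); needs rows*cols samples.
    The resulting image can be fed to a CNN like any RGB input.
    """
    window = accel[: rows * cols]
    # Each axis becomes one image channel, filled row-major in time order.
    return np.stack([window[:, k].reshape(rows, cols) for k in range(3)],
                    axis=-1)
```

A sliding window over the sensor stream would produce one such image per classification step, so the CNN sees a fixed-size "picture" of the recent motion.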
Article
Full-text available
In the 34 developed and 156 developing countries, there are ~132 million disabled people who need a wheelchair, constituting 1.86% of the world population. Moreover, millions of people suffer from diseases related to motor disabilities that cause an inability to produce controlled movement in any of the limbs or even the head. This paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, using an eye-controlled electric wheelchair. The system input is images of the user’s eye, which are processed to estimate the gaze direction, and the wheelchair is moved accordingly. To accomplish this, four user-specific methods were developed, implemented, and tested, all based on a benchmark database created by the authors. The first three techniques are automatic, employ correlation, and are variants of template matching, whereas the last one uses convolutional neural networks (CNNs). Different metrics were computed to quantitatively evaluate the performance of each algorithm in terms of accuracy and latency, and an overall comparison is presented. The CNN exhibited the best performance (99.3% classification accuracy) and was thus chosen as the gaze estimator that commands the wheelchair motion. The system was carefully evaluated on eight subjects, achieving 99% accuracy under changing illumination conditions, outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. The wheelchair control can bypass any decision made by the gaze estimator and immediately halt motion, with the help of an array of proximity sensors, if the measured distance falls below a well-defined safety margin. This work not only empowers immobile wheelchair users but also provides low-cost tools for organizations assisting them.
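The correlation-based template matching variants mentioned above can be sketched as follows: gaze direction is taken as the label of the stored template with the highest normalized cross-correlation against the current eye image. The interface (a dict of direction-labeled templates) is a hypothetical simplification of the authors' methods.

```python
import numpy as np

def gaze_direction(eye, templates):
    """Classify gaze by picking the template best correlated with the
    eye image.

    eye: 2-D grayscale array.
    templates: dict mapping direction name -> 2-D template of the same
    shape (a hypothetical interface, for illustration only).
    """
    def ncc(a, b):
        # Normalized cross-correlation: mean-center, then correlate
        # and divide by the energy so the score lies in [-1, 1].
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
        return float((a * b).sum() / denom)

    return max(templates, key=lambda k: ncc(eye, templates[k]))
```

Mean-centering makes the score robust to uniform brightness changes, which matters for the varying indoor/outdoor illumination the paper evaluates.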
Article
Full-text available
As one of the typical application-oriented solutions to robot autonomous navigation, visual simultaneous localization and mapping is essentially restricted to simplex environmental understanding based on the geometric features of images. By contrast, semantic simultaneous localization and mapping, characterized by high-level environmental perception, has opened the door to applying image semantics to efficiently estimate poses, detect loop closures, build 3D maps, and so on. This article presents a detailed review of recent advances in semantic simultaneous localization and mapping, mainly covering treatments in terms of perception, robustness, and accuracy. Specifically, the concept of the “semantic extractor” and the framework of “modern visual simultaneous localization and mapping” are initially presented. After stating the challenges associated with perception, robustness, and accuracy, we discuss some open problems from a macroscopic view and attempt to find answers. We argue that multiscaled map representation, object simultaneous localization and mapping systems, and deep neural network-based simultaneous localization and mapping pipeline design could be effective solutions to image-semantics-fused visual simultaneous localization and mapping.
Article
Full-text available
This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for people with disabilities and senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse the data to output the best information for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate or at least reduce the noise, and then use the data for the required tasks. We present a survey of current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology; stereo/depth cameras; and red-green-blue monocular (RGB) and time-of-flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors rather than a single sensor in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot and details the use of LiDAR and cameras to accomplish robot navigation.
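A basic instance of the sensor fusion this survey covers is inverse-variance (Kalman-style) fusion of two redundant range readings, e.g. LiDAR and a depth camera observing the same obstacle. The sketch below is illustrative; real pipelines fuse full state vectors with filters rather than single scalars.

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two scalar range readings.

    z1, z2: measured distances from two sensors (same units).
    var1, var2: their noise variances. The noisier sensor
    contributes less to the fused estimate, and the fused
    variance is always smaller than either input variance.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var
```

For example, fusing a LiDAR reading of 2.0 m (variance 0.01) with a depth-camera reading of 2.2 m (variance 0.04) yields an estimate pulled toward the more precise LiDAR value, with reduced uncertainty.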
Article
In several existing red–green–blue and depth (RGB-D) semantic segmentation algorithms, schemes are used to supplement contextual information through multilayer feature interactions. However, these approaches ignore the complementarity of contextual information and introduce noise that interferes with the segmentation process. To minimize noise interference during this process, we introduce a two-layer hop cascaded asymptotic network (THCANet) for robot-driving road-scene semantic segmentation in RGB-D images. To exploit the depth map and supervision to strengthen semantic segmentation, we propose an attention cross-fusion module for the interactive combination of RGB-D features through multimodality weighting. Notably, the information from the two modalities reduces noise during fusion. After fusing features from the RGB-D modalities, we also use a novel multiscale context module to fuse features at multiple scales and employ a jump cascade architecture between the modules to recover lost context information and suppress irrelevant noise. Moreover, multiple supervision is performed at different segmentation stages to improve accuracy. The proposed THCANet demonstrates the best performance on a robot-driving road dataset compared with similar methods, and its generalization ability is demonstrated on the NYU-Depth V2 dataset.
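The multimodality weighting underlying attention-style cross-fusion can be illustrated in miniature: score each modality per spatial location, softmax the scores, and blend the RGB and depth feature maps with the resulting weights. This toy version (mean activation as the score) only gestures at THCANet's learned attention, which uses trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_fuse(rgb_feat, depth_feat):
    """Toy multimodality weighting for two (H, W, C) feature maps.

    Each spatial location gets a per-modality score (here just the
    mean activation); the softmaxed scores weight the blend, so the
    more 'confident' modality dominates locally.
    """
    scores = np.stack([rgb_feat.mean(-1), depth_feat.mean(-1)], axis=-1)  # (H, W, 2)
    w = softmax(scores)                                                   # (H, W, 2)
    return w[..., :1] * rgb_feat + w[..., 1:] * depth_feat
```

Because the weights sum to one at every location, the fused map is a convex combination of the two inputs, which is what keeps one noisy modality from overwhelming the other.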
Article
Seamless positioning and navigation requires the integration of outdoor and indoor positioning systems. Until recently, these systems have mostly functioned in silos. Though GNSS has become a standalone system for outdoors, no unified positioning modality has been found for indoor environments, although Wi-Fi and Bluetooth signals are popular choices. Increased adoption of different machine learning techniques for indoor–outdoor context detection and localization can be witnessed in the recent literature. The difficulty of precise data annotation, the need for sensor fusion, and the effect of different hardware configurations pose critical challenges that affect the success of indoor–outdoor (IO) positioning systems. Wireless sensor-based techniques are explicitly programmed, so estimating locations dynamically becomes challenging. Machine learning and deep learning techniques can be used to overcome such situations and react appropriately by self-learning through experiences and actions without human intervention or reprogramming. Hence, the focus of this work is to present readers a comprehensive survey of the applicability of machine learning and deep learning to achieve seamless navigation. The paper systematically discusses the application perspectives, research challenges, and the framework of ML (mostly) and DL (a few) based positioning approaches. Comparisons against various parameters, such as the technology used, the procedure applied, the output metric, and the challenges, are presented along with experimental results on benchmark datasets. The paper contributes to bridging IO localization approaches with IO detection techniques so as to pave the way into the research domain for seamless positioning. Recent advances and, hence, possible future research directions in the context of IO localization have also been articulated.
Article
People with Severe Speech and Motor Impairment (SSMI) often find it difficult to manipulate physical objects due to spasticity, but are familiar with eye-pointing-based communication. This paper presents a novel eye-gaze-controlled augmented reality human-robot interface that maintains a safe distance between the robot and the operator. We used a bespoke appearance-based eye-gaze tracking algorithm and compared two different safe-distance maintenance algorithms. We undertook simulation studies followed by user trials involving end users. Users with SSMI could bring the robotic arm to any designated point within its working envelope in less than 3 minutes.
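Safe-distance maintenance of the kind described above is commonly reduced to proximity-scaled velocity commands: full speed far from the operator, a linear ramp-down inside a slow-down radius, and a hard stop inside a minimum radius. The thresholds below are illustrative assumptions, not either of the paper's compared algorithms.

```python
def safe_speed(target_speed, distance, stop_dist=0.3, slow_dist=1.0):
    """Scale a commanded speed by proximity to the operator.

    distance: measured robot-operator distance (meters).
    Returns 0 inside stop_dist, the full target_speed beyond
    slow_dist, and a linear ramp in between.
    """
    if distance <= stop_dist:
        return 0.0
    if distance >= slow_dist:
        return target_speed
    return target_speed * (distance - stop_dist) / (slow_dist - stop_dist)
```

The linear ramp avoids the abrupt stop-go behavior a single threshold would cause, which matters for user comfort in shared workspaces.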
Article
3D object detection is a critical part of environmental perception systems and one of the most fundamental tasks in understanding the 3D visual world, benefiting a series of downstream real-world applications. RGB-D images include object texture and semantic information, as well as depth information describing spatial geometry. Recently, numerous 3D object detection models for RGB-D images have been proposed with excellent performance, but summaries of this area are still absent. To stimulate future research, this paper provides a detailed analysis of current developments in 3D object detection methods for RGB-D images. It covers three major parts: background on 3D object detection, RGB-D data details, and comparative results of state-of-the-art methods on several publicly available datasets, with an emphasis on contributions, design ideas, and limitations, as well as insightful observations and inspiring future research directions.
Article
Purpose Shared autonomy has played a major role in assistive mobile robotics, as it has the potential to effectively balance user satisfaction and the smooth functioning of systems by adapting to each user’s needs and preferences. Many shared control paradigms have been developed over the years. Despite these advancements, however, shared control paradigms have not been widely adopted, as several integral aspects have not fully matured. The purpose of this paper is to discuss and review various aspects of shared control and the technologies leading up to the current advancements in shared control for assistive mobile robots. Methods A comprehensive review of the literature was conducted, following a dichotomy of studies from the pre-2000 and post-2000 periods, to cover both the early developments and the current state of the art in this domain. Results A systematic review of 135 research papers and 7 review papers selected from the literature was conducted. To facilitate the organization of the reviewed work, a 6-level ladder categorization was developed based on the extent of autonomy shared between the human and the robot in the use of assistive mobile robots. This taxonomy highlights the chronological improvements in the domain. Conclusion It was found that most prior studies have focused on basic functionalities, thus paving the way for research to now focus on the higher levels of the ladder taxonomy. It was concluded that further research in the domain must focus on ensuring safety in mobility and adaptability to varying environments.
• Implications for rehabilitation
• Shared autonomy in assistive mobile robots plays a vital role in effectively adapting to ensure safety while also considering user comfort.
• Users’ immediate desires should be considered in decision making to ensure that the users remain in control of the assistive robots.
• The current focus of research should be towards successful adaptation of the assistive mobile robots to varying environments to assure safety of the user.
Article
Fruit detection and localization are essential for future agronomic management of fruit crops, such as yield prediction, yield mapping, and automated harvesting. However, performing robust and efficient fruit detection and localization in an orchard is a challenging task under variable illumination, low resolution, and heavy occlusion by neighboring fruits, foliage, or branches. Therefore, research on fruit detection and localization that obtains richer information about the objects is essential. RGB-D (red, green, blue, depth) cameras are promising sensors and are widely used in fruit detection and localization, given that they provide depth and infrared information in addition to RGB information. After discussing the advantages and disadvantages of RGB-D cameras with different depth measurement principles and application fields, this paper reviews various types of RGB-D sensor systems and image processing methods used for fruit detection and localization in the field. Finally, major challenges for the successful application of RGB-D camera-based machine vision systems, and potential future directions for research and development in this area, are discussed.
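The localization step that RGB-D cameras enable amounts to back-projecting a detected object's pixel coordinates, together with its measured depth, into a 3-D camera-frame point via the pinhole model. A minimal sketch, with intrinsics `fx`, `fy`, `cx`, `cy` assumed known from calibration:

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into a 3-D
    camera-frame point using the pinhole camera model.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    depth is in whatever units the sensor reports (e.g. meters),
    and the returned (x, y, z) uses the same units.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps straight down the optical axis; pixels away from it fan out proportionally to depth, which is why depth noise degrades lateral localization of distant objects.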
Article
In this study, we present a path planning approach capable of generating a feasible trajectory for stable robotic wheelchair navigation in environments with slopes. Firstly, the environment is modeled by a lightweight navigation map, with which the proposed sampling-based path planning scheme with a modified extension function can generate a feasible path. The path is then further optimized by the proposed utility function, which accounts for human comfort and path cost. To improve the efficiency of searching for an optimal trajectory, we present an adaptive-weighting Gaussian Mixture Model (GMM) based sampling strategy, in which the weights of the GMM components are adjusted adaptively during planning. It is also worth noting that the proposed sampling-based planning paradigm can indicate unsafe regions in the navigation map, which forms a traversable map and further guarantees the safety of wheelchair robot navigation. The effectiveness and efficiency of the proposed path planning method are verified in both simulation and real-world experiments.
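The adaptive-weighting GMM sampling strategy can be sketched as drawing planner samples from a mixture and boosting the weights of components whose samples proved useful. The reweighting rule below is an illustrative assumption; the paper's actual adaptation law is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(means, covs, weights, n):
    """Draw n 2-D planner samples from a Gaussian mixture.

    means, covs: per-component means and covariance matrices.
    weights: component weights (normalized internally), which a
    planner can re-adjust between iterations.
    """
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    comps = rng.choice(len(means), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c])
                     for c in comps])

def reweight(weights, success_counts, lr=0.5):
    """Toy adaptation: boost components whose samples extended the
    path (success_counts), then renormalize."""
    w = np.asarray(weights, float) + lr * np.asarray(success_counts, float)
    return w / w.sum()
```

Over planning iterations this concentrates samples around promising regions (e.g. near the current best path) while keeping some probability mass elsewhere for exploration.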
Article
Purpose: Unmet needs for assistive technologies (ATs) exist and the need for ATs is growing owing to demographic changes worldwide. Little comprehensive research has examined equity of access to ATs in Canada. Our study elucidates perspectives of policymakers and stakeholders on challenges and solutions for enhancing equitable access to ATs to advance policy discussions. Methods: We conducted a qualitative interview study with a purposive sample of policymakers and stakeholders. Stakeholders were from non-profit organisations; private insurance companies; ageing or technology industries; and advocacy, consumer, and support groups. We used thematic analysis to develop themes that summarised and facilitated data interpretation. Results: We conducted 24 interviews involving 32 participants. We present three themes: (1) User experiences, detailing challenges experienced by AT system users; (2) System characteristics: Challenges and solutions, outlining governance, financial, and delivery arrangements that create challenges for accessing AT, as well as participants’ proposed solutions; and (3) Shifts in models and principles, for approaches that may foster equitable access to ATs. We consolidate results into a set of valued qualities of a system that can enhance equitable AT access, and relate results to relevant national and international activities. Conclusions: This is the most comprehensive study of Canadian policymaker and stakeholder views on AT access to date. Identified challenges and solutions point to opportunities for policy action and to support work to create a national vision for AT access that strengthens the potential for ATs to enable daily activity participation, independence, and societal inclusion of seniors and people with disabilities. • IMPLICATIONS FOR REHABILITATION • AT use supports daily activity participation, independence, and societal inclusion of seniors and people with disabilities. 
• There is an urgent need to ensure that those who need ATs have access to them, considering the benefits of their use, current unmet needs for ATs, and the anticipated demand for ATs because of the ageing population and increased prevalence of chronic disease and disability. • A comprehensive understanding of policymakers’ and stakeholders’ perspectives on challenges and potential solutions for enhancing equitable access to ATs is critical to support development of evidence- and values-informed policies. • Understanding challenges and solutions identified by diverse policymakers and stakeholders can lead to national and local opportunities for policy action and support work to create a national vision for enhancing equitable access to AT.
Article
In the era of industrialization and automation, safety is a critical factor that should be considered during the design and realization of each new system intended to operate in close collaboration with humans. Such systems include personal and professional service robots, which collaborate and interact with humans in diverse application environments. In this collaboration, human safety is an important factor in the wider field of human-robot interaction (HRI), since it facilitates harmonious coexistence. The paper at hand aims to systemize the recent literature by describing the required levels of safety during human-robot interaction, focusing on the core functions of collaborative robots when performing specific processes. It also covers existing methods for psychological safety during human-robot collaboration and their impact on robot behaviour, and discusses in depth the psychological parameters of robot incorporation in industrial and social environments. Based on the existing works on safety features that minimize the risk of HRI, the existing works are classified into five major categories, namely, Robot Perceptions for Safe HRI, Cognition-enabled robot control in HRI, Action Planning for safe navigation close to humans, Hardware safety features, and Societal and Psychological factors. Finally, the current study discusses the existing risk assessment techniques as methods to offer additional safety in robotic systems, presenting a holistic analysis of safety in contemporary robots, and proposes a roadmap for safety compliance features during the development of a robotic system.
Article
A key issue in brain-computer interfaces (BCIs) is detecting intentional control (IC) and non-intentional control (NC) states in an asynchronous manner. Further, for steady-state visual evoked potential (SSVEP) BCI systems, multiple states (sub-states) exist within the IC state. Existing recognition methods rely on a threshold technique, with which it is difficult to achieve high accuracy, i.e., a simultaneously high true positive rate and low false positive rate. To address this issue, we proposed a novel convolutional neural network (CNN) to detect IC and NC states in an SSVEP-BCI system for the first time. Specifically, the steady-state motion visual evoked potential (SSMVEP) paradigm, which has been shown to induce less visual discomfort, was chosen as the experimental paradigm. Two processing pipelines were proposed for the detection of IC and NC states. The first used a CNN as a multi-class classifier to discriminate among all the states in the IC and NC states (FFT-CNN). The second used a CNN to discriminate between IC and NC states and canonical correlation analysis (CCA) to perform classification within the IC state (FFT-CNN-CCA). We demonstrated that both pipelines achieved a significant increase in accuracy for low-performance healthy participants compared with traditional algorithms such as the CCA threshold. Further, the FFT-CNN-CCA pipeline achieved better performance than the FFT-CNN pipeline on stroke patients’ data. In summary, we showed that CNNs can be used for robust detection in an asynchronous SSMVEP-BCI, with great potential for out-of-lab BCI applications.
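The CCA stage of a pipeline like FFT-CNN-CCA follows the standard SSVEP recipe: correlate the multichannel EEG segment against sine/cosine reference sets at each candidate stimulus frequency and pick the frequency with the largest canonical correlation. A minimal numpy sketch; the frequencies, sampling rate, and harmonic count are illustrative choices, not the paper's settings.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X
    and Y (rows = samples), computed via QR + SVD."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(s[0])

def ssvep_frequency(eeg, freqs, fs, harmonics=2):
    """Pick the stimulus frequency whose sin/cos reference set is
    most correlated with the EEG segment.

    eeg: (n_samples, n_channels) array; fs: sampling rate in Hz.
    """
    t = np.arange(len(eeg)) / fs
    best, best_r = None, -1.0
    for f in freqs:
        # Reference set: sine and cosine at the fundamental and harmonics.
        ref = np.column_stack([fn(2 * np.pi * h * f * t)
                               for h in range(1, harmonics + 1)
                               for fn in (np.sin, np.cos)])
        r = max_canonical_corr(eeg, ref)
        if r > best_r:
            best, best_r = f, r
    return best
```

Thresholding `best_r` is the traditional IC/NC decision the abstract says is hard to tune, which is precisely the role the proposed CNN takes over.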
Article
Obstacle detection is an essential element in the development of intelligent transportation systems, so that accidents can be avoided. In this study, we propose a stereovision-based method for detecting obstacles in urban environments. The proposed method uses a deep stacked auto-encoder (DSA) model that combines greedy feature learning with dimensionality reduction capacity, and employs an unsupervised k-nearest neighbors (KNN) algorithm to accurately and reliably detect the presence of obstacles. We treat obstacle detection as an anomaly detection problem. We evaluated the proposed method using practical data from three publicly available datasets: the Malaga stereovision urban dataset (MSVUD), the Daimler urban segmentation dataset (DUSD), and the Bahnhof dataset. We also compared the efficiency of the DSA-KNN approach to deep belief network (DBN)-based clustering schemes. Results show that DSA-KNN is suitable for visually monitoring urban scenes.
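The unsupervised KNN stage of an anomaly-detection formulation like the one above can be sketched as distance-based scoring: embed obstacle-free training frames (in the paper, via DSA features), then score a test sample by its mean distance to its k nearest training samples, with large scores flagging obstacles. The sketch below works directly on feature vectors and is an illustrative simplification:

```python
import numpy as np

def knn_anomaly_scores(train, test, k=3):
    """Mean distance from each test point to its k nearest training
    (obstacle-free) samples.

    train: (n_train, d) feature array of normal samples.
    test:  (n_test, d) feature array to score.
    Higher scores indicate samples unlike anything seen in training,
    i.e. anomaly/obstacle candidates once thresholded.
    """
    # Pairwise Euclidean distances, shape (n_test, n_train).
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    nearest = np.sort(d, axis=1)[:, :k]
    return nearest.mean(axis=1)
```

A threshold on the score (chosen, e.g., from a validation set of normal scenes) then turns the continuous score into the obstacle/no-obstacle decision.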
Chaudhary, A. K., Gupta, V., Gaurav, K., Reddy, T. K., & Behera, L. (2023). EEG Control of a Robotic Wheelchair. In Vinjamuri, R. (Ed.), Human-Robot Interaction - Perspectives and Applications. IntechOpen. https://doi.org/10.5772/intechopen.110679.
Nawrat, Z., & Krawczyk, D. (2023). Robots in Medicine: Mobile Robots Versus Mobile Decision, Necessity Versus Possibility and Future Challenges. In Azar, A.T., Kasim Ibraheem, I., & Jaleel Humaidi, A. (Eds.), Mobile Robot: Motion Control and Path Planning (pp. 127-162). Studies in Computational Intelligence, vol. 1090. Cham: Springer.
Sahoo, S. K., & Choudhury, B. B. (2021). A Fuzzy AHP Approach to Evaluate the Strategic Design Criteria of a Smart Robotic Powered Wheelchair Prototype. In Udgata, S.K., Sethi, S., & Srirama, S.N. (Eds.), Intelligent Systems. Lecture Notes in Networks and Systems (Proceedings of ICMIB 2020) (pp. 451-464), Vol. 185. Singapore: Springer.
Sahoo, S. K., & Choudhury, B. B. (2021). A Fuzzy AHP Approach to Evaluate the Strategic Design Criteria of a Smart Robotic Powered Wheelchair Prototype. In Udgata, S.K., Sethi, S., Srirama, S.N. (eds.), Intelligent Systems. Lecture Notes in Networks and Systems (Proceedings of ICMIB 2020) (pp. 451-464), Vol. 185. Singapore: Springer.