Figure 1. World model of the robot based on signs: (a) a sign consists of a name, an image, a significance and a personal meaning; (b) a semiotic network includes both signs and semantic networks built on uniform components of the signs.


Contexts in source publication

Context 1
... entities perceived by an agent in the semiotic world model are represented as signs. Formally, each sign is described by an ordered set of four components (Figure 1): ...
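The excerpt above describes each sign as an ordered set of four components (name, image, significance, personal meaning), as shown in Figure 1. A minimal sketch of such a structure is given below; the Python dataclass, the container types chosen for each component and the toy "located_on" relation are illustrative assumptions, not the representation used in the source publication.

```python
from dataclasses import dataclass, field

@dataclass
class Sign:
    """A sign of the semiotic world model: an ordered set of four components."""
    name: str                                            # linguistic label of the entity
    image: dict = field(default_factory=dict)            # perceptual features of the entity
    significance: set = field(default_factory=set)       # general knowledge about the entity
    personal_meaning: set = field(default_factory=set)   # agent-specific experience with the entity

# A semiotic network links signs through relations defined on their uniform components,
# e.g. a spatial relation expressed on the "significance" level of two signs.
network = {
    ("cup", "table"): {"relation": "located_on", "level": "significance"},
}

cup = Sign(name="cup",
           image={"color": "red", "shape": "cylinder"},
           significance={"container", "graspable"},
           personal_meaning={"belongs_to_user"})
print(cup.name, network[("cup", "table")]["relation"])
```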

Citations

... In this context, autonomous map-building approaches have been proposed that aim to create and modify the environment map automatically by exploiting recent advances in computer vision. Readers may refer to SLAM techniques such as EKF-SLAM [24], UKF-SLAM [25], MCL-SLAM [26], and evolutionary-SLAM-based approaches [27]. ...
... nal wheelchair with automatic postural adjustment designed to satisfy the user's nursing needs and reduce the workload of nursing staff. The kinetic characteristic curve obtained through kinematics analysis and simulation indicated that the process of wheelchair backrest reclining is smooth, and the rationality of the mechanism design is verified. Karpov et al. (2019) proposed a software and hardware architecture for the robotic wheelchair and its multimodal user interface. This architecture supported several feedback types for the user, including voice messages, screen output, various light indications and tactile signals. (2019) proposed multifunctional capacity to direct the chair, by mean ...
Article
Full-text available
The objective of this study is to design and develop a mobility aid that performs three different functions within the space of a single wheelchair, serving patients cost-effectively. The focus was to develop an affordable product considering the poor economic status of the targeted users. A convertible multifunctional wheelchair was developed that can be used as a stretcher and as a regular wheelchair, and it has a pair of crutches to assist in walking. The design and stress-strain analysis of the product was done using SolidWorks 2019. Then, the product was manufactured using the simplest available manufacturing tools and materials. It was possible to fabricate a prototype of this multifunctional wheelchair at a low cost of BDT 14,250 (approximately $162). However, it is believed that in mass production the unit cost can be reduced by 40-50%. The fabricated prototype was lightweight, weighing approximately 16 kg, allowing it to be portable. The design specifications also ensure ergonomic comfort. It can be contended that this device can be a cost-effective solution for patients and public hospitals in Bangladesh and similar low-income countries.
... Recent technologies have been developed to aid those patients with several facilities that help them use electric wheelchairs independently or with minimal assistance. These vary from standard electrical wheelchairs [2], [3] to smart control systems with several interfaces that use sip-and-puff [4], voice [5]-[7], electroencephalogram (EEG)-based interfaces [8], [9], gaze [10], [11], eye and head motion [12]-[16] and facial expressions [17]. Furthermore, a gesture-based interface was developed for C7 quadriplegic patients, who have residual upper limb motion [18]. ...
Article
The robotic wheelchair is designed to allow independent movement for patients with severe musculoskeletal and neuromuscular disorders that interfere with normal wheelchair propulsion by hands. Controlling the movement of the wheelchair necessitates residual motion of the upper extremity, which is not available to patients with upper extremity disability, as in quadriplegic patients. Several research efforts have been proposed to help quadriplegic patients independently use the robotic wheelchair. However, the proposed solutions had many problems with accuracy, simplicity, and cost. The current work aims to build an accurate, simple, and low-cost control system for robotic wheelchairs using head movement. The proposed system is based on the use of a MEMS inertial measurement unit (IMU) to sense head movement gestures and translate the gestures into signals to control the wheelchair movements. A new lightweight algorithm for the detection of head gestures was proposed to improve the system's accuracy. The proposed system has been applied and tested on volunteers. The results revealed that the accuracy of the proposed method in recognizing head movement was 97% on average.
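The cited study maps IMU-sensed head gestures to wheelchair commands. As a rough illustration of this idea, the sketch below uses a simple threshold rule on pitch and roll angles; the threshold values, the gesture-to-command mapping and the function name are hypothetical and do not reproduce the lightweight algorithm proposed in that article.

```python
# Hypothetical thresholds in degrees; a real system would tune these per user
# and add filtering/debouncing of the raw IMU signal.
PITCH_THRESHOLD = 20.0   # nod forward/back
ROLL_THRESHOLD = 20.0    # tilt left/right

def classify_head_gesture(pitch_deg: float, roll_deg: float) -> str:
    """Map a single IMU orientation sample to a wheelchair command."""
    if pitch_deg < -PITCH_THRESHOLD:
        return "forward"
    if pitch_deg > PITCH_THRESHOLD:
        return "stop"
    if roll_deg < -ROLL_THRESHOLD:
        return "turn_left"
    if roll_deg > ROLL_THRESHOLD:
        return "turn_right"
    return "idle"

# Example stream of (pitch, roll) samples
samples = [(-25.0, 2.0), (3.0, 1.0), (5.0, 28.0)]
print([classify_head_gesture(p, r) for p, r in samples])  # ['forward', 'idle', 'turn_right']
```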
... The robotic wheelchair ( Figure 1) [31] was designed as both a prototype of an assistive device and an experimental platform for various research in the fields of steering vehicles, robotics and human-machine interfaces. The robotic wheelchair was based on a motorized wheelchair, the Ortonica Pulse 330. ...
... Moreover, in different conditions, the wheelchair traveled, on average, almost the same distance (1.95 ± 0.14 in Act, 1.97 ± 0.14 in Exp, 1.92 ± 0.16 in Rem). Direct comparison of estimates instead of differences has already been used by other authors who studied IB (e.g., [31,32]). For each participant, we determined the median of the estimates in every experimental condition for all five distance values. ...
Article
Full-text available
Sense of agency (SoA) refers to an individual’s awareness of their own actions. SoA studies seek to find objective indicators for the feeling of agency. These indicators, being related to the feeling of control, have practical application in vehicle design. However, they have not been investigated for actions related to the agent’s body movement inherent to steering a vehicle. In our study, participants operated a robotic wheelchair under three conditions: active control by a participant, direct control by the experimenter and remote control by the experimenter. In each trial, a participant drove the wheelchair until a sound signal occurred, after which they stopped the wheelchair and estimated the travelled distance. The subjective estimates were significantly greater when participants operated the wheelchair by themselves. This result contrasts with observations under static settings in previous studies. In an additional study on the electroencephalographic response to a sound presented at a random time after movement onset, the observed latencies in the N1 component implied that participants might have a higher sense of control when they drove the wheelchair. The proposed methodology might become useful to indirectly assess the degree of operator control of a vehicle, primarily in the field of rehabilitation technologies.
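The citation context above mentions computing, for each participant, the median distance estimate in every experimental condition for all five distance values. A small sketch of that aggregation step is given below; the column names and data are invented for illustration and are not the authors' analysis code.

```python
import pandas as pd

# Hypothetical trial log: one row per trial (participant, condition, true distance, estimate)
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2],
    "condition":   ["Act", "Act", "Exp", "Rem", "Act", "Exp"],
    "distance":    [1.5, 2.0, 1.5, 2.0, 1.5, 1.5],
    "estimate":    [1.7, 2.3, 1.4, 1.9, 1.6, 1.4],
})

# Median estimate per participant, condition and true distance value
medians = (
    trials.groupby(["participant", "condition", "distance"])["estimate"]
          .median()
          .reset_index()
)
print(medians)
```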
... The control was carried out through fuzzy logic and allows the exchange of wheels. At the same time, Karpov et al. [38] design a multimodal control system for a wheelchair that can be used by any user. The user can issue different input signals to move the chair, for example voice, BCI, manual signals, or ocular movement, which makes it a hybrid system in which the wheelchair is instrumented to control the movements and has a camera for the detection of ocular movement. ...
... The sensors included in this review present information about wheelchairs; however, one of the principal motives is to guarantee the safety of the user. To this end, refs. [6,10,11,14,16,18,20,23,32,33,36,38,40,42-44,50,58-60,64,67,71,74-77,81,87,88] include safety systems to avoid obstacles. This indicates that approximately 40% of the articles mention the use of algorithms with ultrasonic sensors, lasers, infrared, cameras, among others, in order to provide safety in the movement of wheelchairs and avoid collisions. ...
Article
Full-text available
Automatic wheelchairs have evolved in terms of instrumentation and control, solving the mobility problems of people with physical disabilities. This work is intended to establish the background of the instrumentation and control methods of automatic wheelchairs and prototypes, as well as a classification in each category. To this end, a search of specialised databases was carried out for articles published between 2012 and 2019. Out of these, 97 documents were selected based on the inclusion and exclusion criteria. The following categories were proposed for these articles: (a) wheelchair instrumentation and control methods, among which there are systems that implement micro-electromechanical sensors (MEMS), surface electromyography (sEMG), electrooculography (EOG), electroencephalography (EEG), and voice recognition systems; (b) wheelchair instrumentation, among which are found obstacle detection systems, artificial vision (image and video), as well as navigation systems (GPS and GSM). The results found in this review tend towards the use of EEG signals, head movements, voice commands, and algorithms to avoid obstacles. The most used techniques involve classic control and thresholding to move the wheelchair. In addition, the discussion was mainly based on the characteristics of the user and the types of control. To conclude, the articles exhibit the existing limitations and possible solutions in their designs, as well as informing the physically disabled community about the technological developments in this field.
Conference Paper
This article presents a multimodal software architecture, developed to incorporate the functionalities of an autonomous robotic system with social interactions. The architecture will be implemented in a wheelchair, making it intelligent and enabling more than one form of navigation, always considering the data obtained from the environment and people.
Chapter
Problem statement: It is important to be able to share various types of information between robot modules, including high-level data received from the user and low-level signals from the sensors. This paper describes an implementation of such a system that uses semantic networks as a generic representation providing interpretability of instructions and flexibility in parameterization. Purpose of research: development of a robotic control system based on standard libraries for logical processing that can execute complex commands, including high-level instructions derived from human speech that involve objects, spatial relations and clarifying information. Results: The proposed semiotic control system consists of a database that stores knowledge about the robot's environment and its behavior using the RDF data model, inference tools, a pipeline of filter modules and the corresponding interfaces. SOAR provides a standard representation of facts and rules. The external interface of the control system based on the semiotic model transforms natural language commands into semantic networks. It can be easily expanded to support other input signals, such as eye tracking or an electroencephalograph, due to the ability to include clarifying information as extra nodes of the intermediate semantic network. The developed system can execute both direct control commands, such as movement in a specified direction, and more complex procedures, like moving to a named object with a heuristic choice between alternatives and parameterization of the trajectory. Practical significance: Using natural language voice commands, intonation, and looking at the target is essential for effective operator interfaces with mobile platforms and has applications in many areas, including assistive and service robots. Using standard implementations and frameworks for logical processing makes the system more reliable, efficient and easier to understand.

Keywords: Semiotic model, Control system, Mobile robot, Voice interface, Cognitive architecture, Multimodal interface
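The abstract above describes a knowledge base built on the RDF data model, with natural-language commands converted into semantic networks. The sketch below, using rdflib, is a minimal illustration of that pattern: assert a few environment facts, represent a parsed movement command as extra nodes, and query for the navigation target. The namespace, predicates and entities are assumptions made for this example, not the ontology or code of the paper.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/robot#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Facts about the environment
g.add((EX.cup1, RDF.type, EX.Cup))
g.add((EX.cup1, EX.locatedOn, EX.table1))
g.add((EX.table1, RDF.type, EX.Table))

# A parsed command such as "go to the cup on the table", stored as extra nodes
g.add((EX.cmd1, RDF.type, EX.MoveCommand))
g.add((EX.cmd1, EX.target, EX.cup1))

# Query: find the command target and where it is located
query = """
SELECT ?target ?place WHERE {
    ?cmd a ex:MoveCommand ;
         ex:target ?target .
    ?target ex:locatedOn ?place .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(f"navigate to {row.target}, located on {row.place}")
```

Keeping both environment facts and command structure in one graph is what allows clarifying signals (gaze, extra speech) to be attached as additional nodes before the query is resolved, which is the flexibility the abstract emphasizes.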