Citation: Messaoudi, M.D.; Menelas, B.-A.J.; Mcheick, H. Integration of Smart Cane with Social Media: Design of a New Step Counter Algorithm for Cane. IoT 2024, 5, 168–186. https://doi.org/10.3390/iot5010009
Academic Editor: Amiya Nayak
Received: 14 January 2024; Revised: 24 February 2024; Accepted: 27 February 2024; Published: 14 March 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
IoT
Article
Integration of Smart Cane with Social Media: Design of a New
Step Counter Algorithm for Cane
Mohamed Dhiaeddine Messaoudi, Bob-Antoine J. Menelas * and Hamid Mcheick
Department of Computer Sciences and Mathematics, University of Quebec at Chicoutimi,
Chicoutimi, QC G7H 2B1, Canada; mohamed-dhiaeddine.messaoudi1@uqac.ca (M.D.M.);
hmcheick@uqac.ca (H.M.)
* Correspondence: bamenela@uqac.ca
Abstract: This research introduces an innovative smart cane architecture designed to empower
visually impaired individuals. Integrating advanced sensors and social media connectivity, the
smart cane enhances accessibility and encourages physical activity. Three meticulously developed
algorithms ensure accurate step counting, swing detection, and proximity measurement. The smart
cane’s architecture comprises the platform, communications, sensors, calculation, and user interface
layers, providing comprehensive assistance for visually impaired individuals. Hardware components
include an audio–tactile interaction module, input command module, microphone integration, local
storage, step count module, cloud integration, and rechargeable battery. Software v1.9.7 components
include Facebook Chat API integration, Python Facebook API integration, fbchat library integration,
and Speech Recognition library integration. Overall, the proposed smart cane offers a comprehensive
solution to enhance mobility, accessibility, and social engagement for visually impaired individuals.
This study represents a significant stride toward a more inclusive society, leveraging technology to
create meaningful impact in the lives of those with visual impairments. By fostering socialization and
independence, our smart cane not only improves mobility but also enhances the overall well-being of
the visually impaired community.
Keywords: IFTTT; JSON API; smart cane; visually challenged people (VCP); Facebook; social networks
1. Introduction
Humans have at their disposal several sensorimotor channels to perceive the environment. In this set, vision plays a very important role in accessing the environment around us, because 85% of the information about our surroundings is obtained through the eyes [1]. Blindness is the condition in which a person is unable to sense information conveyed through the vision channel. People who have little vision capability and depend on another sensory organ are also considered blind. Therefore, the visually challenged are people who have partial or total vision loss [2].
According to the World Health Organization (WHO) and the International Agency for the Prevention of Blindness (IAPB), around 285 million people are visually impaired in the world; of these, 39 million are blind [3]. Blind individuals face enormous challenges in their daily routine and must rely on other people to accomplish some of their daily tasks.
In addition, for displacement, they must use traditional blind sticks.
In this modern era where technology is everywhere and involved in almost every
daily task, there have also been some advancements in blind stick technology. Indeed,
researchers have developed blind sticks equipped with obstacle detection, GPS, and indoor
navigation. In this information age, social media plays a very important role in connecting
people around the world. To enable people with visual impairments to access these tech-
nologies, serval research initiatives have been undertaken. Companies such as Facebook
are trying to make sure that information, depicted in their sites, is accessible to all kinds
of users. Facebook plans to roll out AI-powered automatic alt-text to all screen readers. X
(formerly Twitter) already has AI-captioning for image mode. One understands that such
functionalities aim at assisting people with visual impairments in accessing social media
environments [4].
Empowering the visually impaired is not merely about enhancing accessibility; it is
about enriching lives and breaking barriers. Beyond the realm of technology, our initiative
strives to encourage individuals with visual impairments to embrace physical activity and
social interaction, essential facets of a fulfilling life. Resnick [5] underscores a critical issue:
blind children often face a lack of motivation and opportunities for physical activity, leading
to sedentary behavior and a sense of inadequacy. This trend continues into adulthood,
as Modell [6] and Jessup [7] corroborate, highlighting that individuals with disabilities, including visual impairments, often participate less in recreational activities, leading to
profound social isolation. Moreover, Folmer [8] sheds light on the alarming consequences
of limited physical activity among the visually impaired, which include delays in motor
development and an increased susceptibility to various medical conditions.
Our research and the innovative smart cane architecture we propose are not only
technological advancements but also beacons of empowerment. By seamlessly integrating
advanced sensors, social media connectivity, and novel algorithms, our smart cane not only
enhances mobility and accessibility, but also serves as a catalyst for encouraging physical
activity and facilitating socialization among the visually impaired. We firmly believe that
fostering a sense of independence and belonging in the visually impaired community is not
just a goal; it is a societal responsibility. With our pioneering method, we are dedicated to
linking the physical challenges faced by the visually impaired with the limitless potential
for an active and socially connected existence [9–12].
This study presents a cutting-edge smart cane design aimed at empowering indi-
viduals with visual impairments. By incorporating advanced sensors and social media
connectivity, the smart cane not only improves accessibility but also promotes physical
activity. The implementation of three carefully crafted algorithms ensures precise step
counting, swing detection, and proximity measurement. Section 2 discusses the related work conducted in this domain and critically evaluates it. The architecture of the proposed smart cane model and components is presented in Section 3. Section 4 presents the results of the performance of the three developed algorithms, followed by Section 5, which discusses these results. Finally, the main conclusions are summarized in Section 6.
2. Related Work
Social networks like Facebook and Twitter have become deeply embedded in modern life, enabling connection, communication, and community. The effects of social media on society are a well-studied phenomenon. However, for the millions of people worldwide with visual impairments, participating in these visual-centric platforms poses significant accessibility challenges that have historically excluded blind people from full usage and engagement [13].
By enabling people to communicate and share information, social media plays a critical role in strengthening the bonds between communities and spreading critical information. The value of social media varies among different user groups. Many previous studies have examined the engagement of different social groups with social media [14]. According to a study by the Pew Research Center, 43% of American Internet users older than 65 use online social networks today, and the main function of social media for seniors is to connect them to their families [15]. While discussing the integration of social media features within the smart cane for blind people, it is imperative to acknowledge the risk of worsening social isolation. While these technologies provide valuable communication opportunities, there is also a risk that individuals may come to rely only on virtual connections and interactions instead of face-to-face, real-life social engagement. In order to prevent an over-reliance on online social interaction, the smart cane was designed with a balanced approach. It enables the user to connect not only through social media platforms like Facebook, but also incorporates other messaging channels such as direct messaging. This ensures that individuals have various options to interact, minimizing unhealthy dependency on a single social media platform or mode of communication.
Morris et al. found that mothers' use of social media differed significantly before and after birth. It was found that different social groups embrace social media for distinct reasons, which affects the way they interact with it [16]. To enable blind people to live an independent life, researchers have developed many technologies; however, these devices are quite expensive, and common visually challenged people (VCP) cannot benefit from them. Our proposed device is focused on enabling these common VCP to live a normal life. The proposed model has many features that would enable them to interact with their environment independently [17]. Innovations in assistive
technologies are progressively dismantling barriers to enable fuller, more equitable social
media participation and autonomy for the blind and visually impaired.
Screen magnification software can enlarge and optimize displays for those with resid-
ual vision. However, individuals without functional vision must rely on text-to-speech
screen readers that vocalize onscreen text and labels. Screen readers such as VoiceOver for
iOS and TalkBack for Android are built into smartphones, allowing users to navigate apps
and hear menus, posts, messages, and more read aloud [18].
Refreshable braille displays can connect to phones, converting text into tactile braille
characters. Screen readers have significantly increased accessibility, though some functions
like photo descriptions remain limited [19]. Still, they establish a strong foundation for social media usage. In addition, dedicated apps tailored for blind people provide streamlined
social media access. Easy Social is one popular app aggregating Facebook, Twitter, LinkedIn,
and Instagram into a simplified interface with voiceover and customizable fonts/contrast.
Blind-friendly apps enable posting statuses, commenting, messaging, and listening to feeds
without visually parsing crowded layouts [20]. However, app development tends to trail
mainstream platforms. Discrepancies in features and delays in accessing new options
persist as a drawback, though steady progress continues.
VizWiz is a mobile application that enables blind people to take a picture of their
environment and ask questions about the picture, where the app will answer their questions
with screen reading software. In pilot testing, the answers were collected from the Amazon
Mechanical Turk service. Mechanical Turk is an online marketplace of human intelligence
tasks (HITs) that workers can complete for small amounts of money [21].
In 2009, a poll of 62 blind people by the American Foundation for the Blind revealed
that about half of the participants used Facebook, while a third used Twitter, and a quarter used LinkedIn and MySpace. Moreover, in a 2010 study, Wentz and Lazar found that
Facebook’s website was more difficult for blind users to navigate than Facebook’s mobile
phone application. The ease of access may affect the frequency of use [10]. Advanced technologies enable blind people to identify the visual content in pictures; these include
image recognition, crowd-powered systems, and tactile graphics. Further interaction with
visual objects is also possible, for example, through the use of technologies that enable
blind people to take better photos, and by enhancing the photo sharing experience with
audio augmentations. Lučić, Sedlar, and Delić (2011) tested a prototype of the computer
educational game Lugram for visually challenged children. They found that basic motor
skills were important for a blind user to play Lugram. Initially, the blind children needed
the help of sighted children, and afterward, they started playing on their own.
Research conducted by Ulrich (2011) led to the development of a cane that used robot technologies to assist blind people. It used ultrasonic sensors to detect obstacles, with an embedded computer processing the sensor data in a novel way. The steering action was accomplished by producing a noticeable force in the handle. Helal, Moore, and Ramachandran (2001) studied a wireless pedestrian navigation system for visually impaired people. This system is called Drishti; it boosts the moving capability of a blind person and allows them to navigate freely [11]. In this project, a new method was developed to enable blind people to use social media using a smart cane. The developed system will enable the user to use
to use social media using a smart cane. The developed system will enable the user to use
social media websites such as Facebook and Twitter. Jacob et al. conducted research on
screen readers such as JAWS (Job Access With Speech) [12] and NVDA (NonVisual Desktop Access) [12], along with VoiceOver for iOS devices. To gain access to social media
platforms, blind people significantly use these devices. Such tools provide text-to-speech
abilities and Braille output, enabling the users to interact with the content [12].
Braille displays have also been developed that are tactile devices and provide access
to digital content or the content that is displayed on social media platforms. This process is
helpful for a visually challenged person when the text is displayed in braille. These devices
are considered to be beneficial as they enhance the social media experience of a user by
providing a more tactile and interactive interface for visually impaired users. Research
on these devices has been conducted by Kim (2019) [22], where the authors developed a
braille device to make it easy for visually impaired people to interact with online social
media platforms.
Additionally, to post photos and videos, smart canes such as the WeWalk smart cane integrate cameras to recognize objects, faces, and text for audible identification [23].
Users can take photos by tapping the cane and share them on social sites. Computer vision
features will continue advancing, enabling more autonomous photo capturing. Limitations
remain with image esthetics and the inability to independently assess the composition
quality before sharing. Still, smart canes vastly widen participation. Additionally, linking
services like Siri and Alexa allow for hands-free social media use, from dictating posts to
asking for notifications to be read aloud [9]. Commands like “Hey Siri, post to Facebook”
streamline sharing by eliminating cumbersome typing. However, privacy risks arise with
always-listening devices, and glitchy transcription can garble posts. Human-like voice
assistants hold promise for managing increasingly natural conversational interactions.
TalkBack and VoiceOver are two text-to-speech software programs discussed by Folego [24]. Here, TalkBack can be used by Android users while VoiceOver is for iOS users. Both help in navigating social media apps by audibly describing the on-screen content, and voice commands are also supported. This makes it easy for a blind person to understand everything without requiring help from anyone.
Different social media platforms such as Facebook have introduced automatic alt text
features that use image recognition technology to generate descriptions of the photos in the
newsfeed of the user’s social profile. This feature provides visually impaired users with
more context when they must engage with the visual content on online social platforms.
In addition, another social media platform, for example, Twitter, also uses alt text for its
blind users to add alternative text descriptions to the images posted on social media or
in tweets, making visual content easily readable by individuals through screen readers.
This type of feature enables the users to provide descriptions for the images they share on
social media. Kuber et al. [25] conducted research on the way these platforms use such features and developed mobile screen readers for users who are
these platforms use such features and developed mobile screen readers for users who are
visually impaired.
Smith-Jackson et al. [26] conducted research in which they recommended the use of contoured shapes for improving grip, greater spacing among buttons to assist the “perception of targets”, and feedback to increase awareness of selections, to aid blind or physically disabled users of mobile phones.
Singh et al. [27] conducted research to help blind users use digital devices and innovations without another person's assistance. The device also assists people with hearing aids
tions without another person’s assistance. The device also assists people with hearing aids
and enables them to link to the digital world. The proposed framework is known as the
“Haptic encoded language framework (HELF)”, which makes use of haptic technology to
enable a blind person to write text digitally by making use of swiping gestures as well as
comprehend the text via vibrations.
Resnick [5] emphasized that blind children often lack motivation and opportunity for physical activity, resulting in sedentary behavior and feelings of inadequacy. Modell [6] and Jessup [7] further supported these findings, indicating that people with disabilities, including visual impairments, often participate less in recreational activities and experience social isolation. Folmer [8] highlighted that a lack of physical activity is a concern for individuals with visual impairments, leading to delays in motor development and an increased risk of medical conditions.
In the literature, a number of sensor-based approaches have been discussed aimed
at enhancing the participation of visually impaired people in different physical activities.
These approaches include a range of technologies, for example, wearable sensors, haptic
feedback systems, and auditory cues, which provide real-time feedback and assistance
during activities such as walking, running, and sports.
For instance, researchers have explored the incorporation of inertial measurement units (IMUs) into gadgets to track movement patterns and offer assistance to individuals with visual impairments while engaging in physical activities [28]. These devices are capable of identifying alterations in posture, walking style, and orientation, providing auditory or tactile cues to help users maintain technique and navigate around obstacles [28].
In addition, a haptic feedback system was also proposed by the authors of [29] to enhance the sensory perception of blind individuals during physical activities. Such systems use vibratory or tactile stimuli to convey information about the environment, such as the presence of nearby objects or changes in terrain, enabling users to navigate confidently and safely [29].
Moreover, developments in wearable technology as well as machine learning algorithms have enabled the creation of smooth navigation aids for visually impaired individuals. These systems utilize sensors to detect obstacles, map out surroundings, and provide personalized guidance to users during outdoor activities like hiking or urban navigation [30].
Researchers [31] highlighted recent advancements in assistive technologies for the
visually impaired, addressing challenges in mobility and daily life. With a focus on indoor
and outdoor solutions, the paper explores location and feedback methods, offering valuable
insights for the integration of smart cane technology.
The paper underscores the growing concern of visual impairment globally, with
approximately 1.3 billion affected individuals, a number projected to triple by 2050. Ad-
dressing the challenges faced by the visually impaired, the proposed “Smart Cane device”
leverages technological tools, specifically cloud computing and IoT wireless scanners, to en-
hance indoor navigation. In response to the limitations of traditional options such as white
canes and guide dogs, the Smart Cane aims to seamlessly facilitate the displacement of
visually impaired individuals, offering a novel solution for navigation and communication
with their environment [32].
In summary, a few studies [9–12] have indicated that individuals with visual impairments have limited engagement in physical activities, which can have negative effects
on their health and well-being. The proposed approach has various unique features in
comparison to existing solutions such as WeWalk. First, it is integrated with Facebook Chat
API, enabling the user to use direct messaging and social interactions on this platform,
thereby improving the accessibility for visually impaired people. Moreover, it also involves
step challenge functionality, which fosters healthy competition as well as community en-
gagement among the visually impaired individuals, and promotes a healthier lifestyle.
Moreover, the system also integrates Raspberry Pi 4, which increases the connectivity
and performance for smoother operation, ensuring a reliable user experience. Apart from
these, fbchat and Python Facebook API integration allow for effective communication with
Facebook servers, helping with seamless interaction for the users. Speech Recognition
Library integration is one of the most significant features of this device as it enables device
management through voice commands, improving accessibility. This proposed solution
fills the gap by combining health promotion, social interaction, and accessibility features
tailored for blind people. These features make the device innovative and distinct in the
domain of assistive technology for the visually impaired.
3. Smart Cane Architectural Model and Components
This research work proposes a smart cane integrated with technological advancements, incorporating advanced sensors, social media connectivity, and algorithms. It has the ability to enhance mobility and accessibility while serving as a catalyst to encourage physical activity and build socialization among blind people. The proposed approach
empowers the blind individual while increasing their social and physical activity. The
implementation of three carefully crafted algorithms ensures precise step counting, swing
detection, and proximity measurement.
The architecture of the smart cane was designed to provide a comprehensive set
of functionalities to assist visually impaired individuals in their daily lives. This archi-
tectural model comprises several distinct layers, each responsible for specific functions
and interactions.
3.1. Architectural Model
3.1.1. Platform Layer
The platform layer serves as the foundation of the architecture, providing basic hard-
ware and software resources required for the smart cane’s operation. This can include the
underlying hardware, operating system, device drivers, etc.
3.1.2. Communications Layer
The communications layer facilitates the exchange of information between the smart
cane and other devices or systems. This can include wireless communication via Bluetooth
or Wi-Fi, allowing the cane to connect to other devices.
3.1.3. Sensor Layer
This layer is equipped with sensors such as a camera, ultrasonic sensor, and accelerometer. These sensors collect data about the user's environment, helping the cane detect obstacles, count steps, and interact with the external world.
3.1.4. Calculation (Operations) Layer
The calculation layer processes the data collected by the sensors. It performs calculations
and operations on this data including obstacle detection from camera images, processing
ultrasonic signals to measure obstacle distances, and analyzing accelerometer movements.
3.1.5. User Interface Layer
This layer provides the interface between the smart cane and user. It includes compo-
nents such as speech synthesis to provide information to the user, speech recognition to
receive voice commands, and hand gesture sensors for more complex interactions. This
layer enables the user to communicate with the cane and receive feedback. Figure 1 shows the layered architectural model for the smart cane.
Each layer interacts with the others to allow the smart cane to function effectively and
intuitively. Data are collected by sensors, processed in the calculation layer, and results
are presented to the user through the user interface. The communication layer also allows
the cane to interact with other devices and services, providing an enriched and connected
user experience.
Figure 1. Layered architectural model for the smart cane.
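To make the data flow between the layers concrete, the following minimal Python sketch illustrates how a sensor reading might travel from the sensor layer through the calculation layer to the user interface layer. This is our illustration only, not the cane's actual firmware; all class names, method names, and threshold values are hypothetical.

# Illustrative sketch of the layered data flow; all names are hypothetical.

class SensorLayer:
    def read(self):
        # Would poll the camera, ultrasonic sensor, and accelerometer.
        return {"ultrasonic_cm": 120.0, "acceleration": (0.1, 0.0, 9.8)}

class CalculationLayer:
    def process(self, reading):
        # Would run obstacle detection, distance measurement, and step analysis.
        obstacle = reading["ultrasonic_cm"] < 100.0
        return {"obstacle_ahead": obstacle}

class UserInterfaceLayer:
    def present(self, result):
        # Would route the message to text-to-speech or the vibration motor.
        if result["obstacle_ahead"]:
            print("speak: obstacle ahead")

sensors, calc, ui = SensorLayer(), CalculationLayer(), UserInterfaceLayer()
ui.present(calc.process(sensors.read()))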
3.2. Hardware Components
3.2.1. Audio–Tactile Interaction Module
This module serves as the primary information delivery mechanism for the users of
the smart cane. It encompasses two main components:
Headphone:
This output device is crucial in facilitating the text-to-speech feature of the smart cane.
With the help of a text-to-speech engine, written messages and notifications from social
media are translated into auditory messages. The system of the smart cane is designed to
interpret the content of the messages and convert them into clear, audible speech, which is
then delivered through the headphone. This functionality ensures that users are informed
of any social media activities in real-time, enhancing their ability to respond promptly and
be actively engaged.
DC Motor Vibration:
This forms the tactile part of the interaction. The haptic feedback mechanism leverages
a DC motor that triggers vibrations whenever there are notifications such as incoming
messages, friend requests, or likes on a post on the user’s social media profiles. The
vibration intensity can be customized according to the user’s preference, ensuring comfort
and ease of use. This non-auditory alert system serves as an efficient and discreet method
of notification, reducing reliance on auditory signals alone.
3.2.2. Input Command Module
Users can interface with and operate the smart cane system using this module, which
consists of two essential parts:
High-sensitivity microphone:
This component allows spoken commands to be captured accurately. By speaking directly into the microphone, users may say commands like “send a message”, “like this post”, or “scroll through menus”. These vocal instructions are processed by the speech-to-text technology integrated into the smart cane, making it possible to operate the device without the use of hands or writing.
Gesture Sensor:
The gesture sensor detects hand gestures using optical or motion-detecting technologies and converts them into navigational commands. For instance, in the Messenger app, a swipe to the right may represent moving to the next conversation, while a swipe up could indicate scrolling up the feed. This gesture control system provides a tactile, intuitive way for users to interact with their social media accounts.
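As an illustration of this mapping, the dictionary below pairs detected gestures with Messenger navigation commands. This is a sketch; the gesture names and command strings are our assumptions, not the shipped firmware.

# Hypothetical gesture-to-command mapping for the Messenger interface.
GESTURE_COMMANDS = {
    "swipe_right": "open_next_conversation",
    "swipe_up": "scroll_feed_up",
}

def handle_gesture(gesture: str) -> str:
    # Fall back to a no-op for unrecognized gestures.
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(handle_gesture("swipe_right"))  # open_next_conversation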
3.2.3. Microphone Integration for Speech Recognition
A microphone for voice recognition was added to the smart cane system, which is
a considerable improvement. The gadget gains voice control capabilities through this
connection, enabling users to communicate with it verbally. The speech-to-text algorithm
operating on the Raspberry Pi 4 converts the user’s spoken commands into text after being
captured by the microphone. This feature makes it easy to navigate the system’s menus
and choices. Users may utilize voice commands to send messages, make menu selections,
create or accept challenges, administer groups, and carry out other tasks. Figure 2 shows the working of the smart cane interconnections.
Figure 2. Smart cane interconnections and working.
3.2.4. Local Storage
The smart cane has an internal digital storage system called local storage that allows
it to briefly preserve data before sending it to the cloud. This can include data on step
totals, user preferences, command history, and interaction logs. The local storage has
two purposes: it guarantees the device functions independently even without a constant
Internet connection and acts as a buffer for data storage when quick access to cloud storage
is not possible (due to connectivity problems or other reasons). This feature gives the smart
cane system the flexibility to operate efficiently in various circumstances. The personal data
of smart cane users are not sent to the cloud; the data sent to the cloud includes system
data and the daily step counts with the canes’ IDs.
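A minimal sketch of this store-and-forward behavior is shown below, assuming a JSON file as the local buffer; the upload callable and the file location are hypothetical, as the paper does not describe the firmware's storage format.

import json
import os

BUFFER_PATH = "/home/pi/cane_buffer.json"  # assumed location

def append_record(record: dict) -> None:
    """Buffer a record (e.g., a daily step total) locally."""
    records = []
    if os.path.exists(BUFFER_PATH):
        with open(BUFFER_PATH) as f:
            records = json.load(f)
    records.append(record)
    with open(BUFFER_PATH, "w") as f:
        json.dump(records, f)

def flush_to_cloud(upload) -> None:
    """Send buffered records to the cloud once connectivity returns."""
    if not os.path.exists(BUFFER_PATH):
        return
    with open(BUFFER_PATH) as f:
        records = json.load(f)
    if all(upload(r) for r in records):  # "upload" is a hypothetical callable
        os.remove(BUFFER_PATH)  # clear the buffer only after a successful sync

# Only system data and daily step counts with cane IDs are sent to the cloud.
append_record({"cane_id": "cane-01", "date": "2024-01-14", "steps": 4210})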
3.2.5. Step Count Module
The step count module is an inventive feature that measures physical activity. It uses
an accelerometer not only to tally the user’s steps, but also to detect the orientation and
acceleration of the cane. By analyzing variations in the acceleration data, the accelerometer
can determine the direction of movement, offering users real-time feedback about their ori-
entation. This feedback is essential for visually impaired individuals to navigate effectively,
ensuring that they maintain a straight path or adjust their direction as needed.
As the movement and speed of each step are detected, these parameters are trans-
formed into digital information, allowing for a comprehensive analysis of the user’s gait
and walking patterns. The integration of an accelerometer with a gyroscope further refines
this capability by providing a three-dimensional understanding of the cane’s position,
which is crucial for determining the user’s orientation in space.
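One standard way to fuse the two sensors for orientation is a complementary filter that blends the gyroscope's integrated rate with the accelerometer's gravity-based tilt estimate. The paper does not specify the fusion method, so the sketch below, including the weighting constant, is an assumption on our part.

import math

ALPHA = 0.98  # assumed weighting between gyroscope and accelerometer

def update_pitch(pitch_deg, gyro_rate_dps, ax, ay, az, dt):
    """Complementary filter: gyro for short-term change, accel for drift correction."""
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return ALPHA * (pitch_deg + gyro_rate_dps * dt) + (1 - ALPHA) * accel_pitch

pitch = 0.0
pitch = update_pitch(pitch, gyro_rate_dps=1.5, ax=0.2, ay=0.0, az=9.7, dt=0.01)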
The data collected are stored locally on the smart cane’s built-in storage and can
be synchronized with social media platforms for challenges or health tracking purposes.
This encourages users to stay active while also providing a valuable set of data that can
be used for navigation assistance. With this advanced feature, the smart cane not only
promotes physical activity, but also enhances the user’s orientation and safety, reinforcing
their confidence as they engage with their environment.
3.2.6. Cloud Integration
A crucial component of the smart cane system that handles data management is the
cloud integration. The system automatically uploads all locally saved information such
as the number of steps, logs of performed activities, and user preferences to secure cloud
storage at the end of each day. This not only assures the security of the data but also enables
data analysis for system upgrades and customized user experiences.
3.2.7. Battery
The smart cane system is powered by a rechargeable electrical power bank. This battery module offers a portable yet potent energy supply, providing the gadget with a long operating period. The power bank is easily recharged using a standard charging cable. Because of its large capacity, the smart cane can accommodate the power
needs of multiple modules including the accelerometer, gesture sensor, microphone, and
other parts for extended periods. Because of this, users can depend on the smart cane
throughout the day, increasing their independence and self-assurance while they utilize
social media and engage with others. We have also developed different modes to save
energy. For example, Eco mode can be enabled for smarter power usage and longer battery life. Offline mode is also helpful when there is no need to communicate with the cloud server, thus consuming less battery.
3.3. Software Components
3.3.1. Facebook Chat API Integration
The smart cane uses Facebook Chat API to communicate with Facebook’s messaging
platform directly. Through the smart cane, users can now send and receive messages,
view alerts, and carry out other Facebook-related actions. It provides a smooth, integrated
solution that increases accessibility for those with visual impairments on the most popular
social networking site.
The integration of a step counter algorithm allows the device to accurately count the
steps a user takes, fostering both a healthy lifestyle and social interaction through the
creation of challenges.
Main Menu:
This is the first layer of user interaction with the smart cane system. It includes:
Messages: This option gives users access to their Messenger inbox, allowing them to
listen to their messages through the headphones.
New Message: This feature enables users to compose and send a new message using
voice commands.
Open New Message: This functionality gives users the ability to open and listen to
new, unread messages.
Group Manager:
This is a feature that allows users to manage their group chats on Messenger. It
includes options to:
Add Group: Users can create a new group chat using voice commands.
Update Group: Allows users to make changes to existing group chats such as adding
or removing members or changing the group’s name.
Delete: This function enables users to remove a group chat from their list.
Challenge:
A unique feature of the smart cane system that enhances user engagement is the creation of step challenges. The smart cane system's challenge function is intended to promote
friendly competition and interpersonal engagement among users. It enables users who are
blind or visually impaired to take part in step challenges, fostering a healthy lifestyle and a
sense of community.
Create New Challenge:
1. The user who initiates a new step challenge acts as the administrator. They must provide the duration of the challenge (in days) and assign a name to it. As the administrator, they have the authority to add or remove participants, giving them control over the participants in the challenge.
2. Once the challenge is created, the system can automatically generate and send invitations to potential participants. This includes sending challenge requests to the top 10 active discussions in the user's Messenger. Additionally, the administrator can manually add or invite people who are not in the top 10, ensuring flexibility in participant selection.
3. Invited participants have the authority to accept or refuse the challenge. If they accept, they are automatically added to the challenge by the system, and they can begin contributing to the step count.
4. During the challenge, users can check the statistics such as who has the best score. This real-time tracking allows participants to see their progress and standings before the challenge finishes, adding excitement and motivation.
5. Participants in the challenge can communicate within a group chat, allowing them to motivate each other, share progress, and foster camaraderie. This social aspect enhances engagement and creates a supportive community around the challenge.
6. Every day at 11 p.m., an update of the daily step count is sent to all participants. At the end of the challenge, the final standings are shared, and the winners can be celebrated (a minimal data-structure sketch of this workflow follows the list).
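The sketch below models this workflow as a small data structure. It is our illustration; the field and method names are hypothetical and do not reflect the actual implementation.

from dataclasses import dataclass, field

@dataclass
class StepChallenge:
    """Hypothetical challenge record; names are our illustration."""
    name: str
    duration_days: int
    admin_id: str
    participants: dict = field(default_factory=dict)  # user_id -> total steps

    def invite(self, user_id: str, accepted: bool) -> None:
        # Invited users are added only if they accept (step 3 in the list).
        if accepted:
            self.participants[user_id] = 0

    def add_daily_steps(self, user_id: str, steps: int) -> None:
        self.participants[user_id] = self.participants.get(user_id, 0) + steps

    def standings(self):
        # Sorted best-score-first, as read out in the daily 11 p.m. update.
        return sorted(self.participants.items(), key=lambda kv: -kv[1])

challenge = StepChallenge("Spring Walk", duration_days=7, admin_id="alice")
challenge.invite("bob", accepted=True)
challenge.add_daily_steps("bob", 4200)
print(challenge.standings())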
The step count, captured using the accelerometer and step counter algorithm, is stored
locally on the smart cane and uploaded to the cloud whenever Internet connectivity is
available. The step count algorithm in the smart cane system is a sophisticated method that
accurately measures the physical activity of the user, specifically the number of steps taken.
It combines the use of an accelerometer and triangulation techniques to calculate both the
steps and the distance traveled.
The accelerometer is a sensor that measures the acceleration forces exerted on the
smart cane. These forces can be used to detect the motion and speed of each step. As
the user walks, the accelerometer detects the distinct movement patterns associated with
each step. By analyzing these patterns, the algorithm can accurately count the number of
steps taken.
In addition to step counting, the system calculates the distance traveled by using
triangulation techniques with Wi-Fi signals. The smart cane detects Wi-Fi signals from
known access points and calculates the distance between the cane and each access point. By
measuring the distances to multiple access points and knowing their locations, the system
can triangulate the user’s position. Repeating this process over time allows the system to
track the user’s movement and calculate the total distance traveled.
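The paper does not give the exact triangulation formulas, so the sketch below illustrates one common approach under assumed values: converting RSSI to distance with a log-distance path-loss model, then intersecting three access-point circles via the standard linearization (an exact 2x2 solve for three access points). The calibration constants and coordinates are assumptions.

import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model; tx_power and exponent are assumed calibration values."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Position from three known access points and estimated distances (2D)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations yields a linear system A [x, y]^T = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

d = [rssi_to_distance(r) for r in (-50, -60, -55)]
print(trilaterate((0, 0), d[0], (10, 0), d[1], (0, 10), d[2]))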
This integration of the step counter algorithm and challenge feature brings a novel and
engaging aspect to the smart cane system, allowing visually impaired users to participate
in a health-centric social activity. It not only promotes a healthy lifestyle, but also enhances
their social life, thus fostering a sense of community and camaraderie.
3.3.2. Raspberry Pi 4 Integration
The smart cane system incorporates a Raspberry Pi 4 single-board computer. Through this upgrade, the system gains extra memory options, dual-band wireless networking capabilities, and improved processor power. It enables the different parts including the accelerometer,
gesture sensor, and audio–tactile interaction module to operate more effectively and de-
pendably. The Raspberry Pi 4’s improved connectivity choices allow the device to execute
tasks such as uploading step counts, receiving messages from social networking platforms,
and other functions that call for Internet access without problems.
3.3.3. Python Facebook API Integration
The python-facebook-api is a robust library that simplifies the process of interacting with
Facebook’s Graph API. It allows the smart cane system to connect and interact directly
with Facebook’s servers. It is responsible for a number of functionalities including fetching
user messages, sending new messages, creating groups, and managing group chats. The
python-facebook-api provides an efficient and secure way to communicate with Facebook,
enhancing the system’s functionality and reliability.
3.3.4. fbchat Library Integration
Another significant library utilized in the smart cane system is fbchat. It is a client library for Facebook Messenger that enables direct communication between the device and the Messenger network. It can perform a wide range of tasks including sending and receiving messages, retrieving recent conversations, maintaining seen marks, typing indicators, and more. The foundation of the system's social media interaction capabilities is made up of the fbchat library and the python-facebook-api.
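For context, the snippet below shows the style of the classic fbchat Client API. This is a sketch based on the library's documented interface; the credentials and thread IDs are placeholders, and the unofficial library may break as Messenger changes.

from fbchat import Client
from fbchat.models import Message, ThreadType

# Placeholders; real credentials would come from the cane's secure storage.
client = Client("user@example.com", "password")

# Send a text message to a one-to-one conversation.
client.send(Message(text="Hello from the smart cane!"),
            thread_id="0000000000", thread_type=ThreadType.USER)

# Retrieve the last few messages from the same conversation.
for message in client.fetchThreadMessages(thread_id="0000000000", limit=5):
    print(message.text)

client.logout()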
3.3.5. Speech Recognition Library Integration
The smart cane system’s voice command feature is based on the SpeechRecognition
library. It is an effective technique for turning spoken words into written text. The
SpeechRecognition library converts spoken words into text when a user speaks into the
smart cane’s microphone. Raspberry Pi 4 then processes this text. The library is perfect
for the smart cane system since it supports several languages and has remarkable recogni-
tion accuracy. It offers a more accessible and intuitive user experience by enabling voice
commands to be used by users to manage the device.
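A minimal example of the SpeechRecognition workflow described above is given below. This is a sketch: the recognizer backend shown is Google's free web API, whereas the cane could use any engine the library supports.

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio)  # speech-to-text
    print("Heard:", command)
except sr.UnknownValueError:
    print("Could not understand the audio.")
except sr.RequestError as e:
    print("Recognition service unavailable:", e)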
The block diagram for the proposed smart cane is illustrated in Figure 3.
Figure 3. Proposed Smart Cane.
4. Results
In order to acquire the desired outcome from the proposed cane-stick device, three
algorithms were tested. The first algorithm was designed to measure the number of steps
taken by the user based on the data from the accelerometer. It sets some constants such as the minimum threshold for step detection (thresholdMin), a detection time window (timeWindow), and the size of the analysis window (windowSize). In this algorithm, the calculateAverage function is used to determine the average of the circular buffer comprising the accelerometer readings. The algorithm then enters a continuous loop in which it repeatedly reads data from the accelerometer and computes the magnitude; the value is stored in the circular buffer.
After some time, the average value in the buffer is calculated. If it is greater than the
minimum threshold, it increases the step count, showing that steps have been detected.
After this, the algorithm shifts the circular buffer by one position and the process continues.
The loop keeps running until all measurements are taken. Finally, the total count of detected steps is stored in the steps variable.
This algorithm was tested ten times, and the step counts produced by the algorithm were compared with the actual number of steps taken by the user. Table 1 shows the outcome achieved by implementing the first algorithm. It has been observed that the accuracy of this algorithm varies in different scenarios, sometimes overestimating and sometimes underestimating the original step count. Figure 4 shows the graphical representation of the Algorithm 1 implementation.
Algorithm 1 Step Counter Algorithm Using Accelerometer Data and Moving Average Filter.
// Define variables
const thresholdMin = 0.1 // Minimum threshold for detecting a step
const timeWindow = 100 // Time window for detection in milliseconds
const windowSize = 10 // Size of the analysis window
const buffer = array of size windowSize
int steps = 0
// Function to calculate the average of the buffer
function calculateAverage(buffer):
    sum = 0
    for each value in buffer:
        sum = sum + value
    return sum / windowSize
// Loop to read data from the accelerometer
while true:
    readAccelerometer() // Read accelerometer data
    accelerationNorm = norm(of the read data) // Calculate the norm of the acceleration
    // Add the acceleration norm to the buffer
    buffer[current time % windowSize] = accelerationNorm
    if current time >= timeWindow:
        average = calculateAverage(buffer)
        if average > thresholdMin:
            steps = steps + 1
        shift the buffer by one position
    wait(sampling interval) // Wait for some time between readings
// At the end of the measurement, the “steps” variable will contain the number of detected steps
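For concreteness, a direct Python rendering of this listing is sketched below. This is our translation: read_accelerometer is a stub in place of the cane's actual sensor driver, and the sampling interval is an assumed value. A production version would also remove the gravity component and debounce consecutive detections.

import math
import time

THRESHOLD_MIN = 0.1       # minimum average magnitude for a step
WINDOW_SIZE = 10          # size of the circular analysis window
SAMPLING_INTERVAL = 0.02  # seconds between readings (assumed)

def read_accelerometer():
    """Stub for the cane's accelerometer driver; returns (ax, ay, az)."""
    return (0.0, 0.0, 9.81)

def calculate_average(buffer):
    return sum(buffer) / len(buffer)

def count_steps(num_samples=500):
    buffer = [0.0] * WINDOW_SIZE
    steps = 0
    for t in range(num_samples):
        ax, ay, az = read_accelerometer()
        buffer[t % WINDOW_SIZE] = math.sqrt(ax * ax + ay * ay + az * az)
        # Only evaluate once the window has filled at least once.
        if t >= WINDOW_SIZE and calculate_average(buffer) > THRESHOLD_MIN:
            steps += 1
        time.sleep(SAMPLING_INTERVAL)
    return steps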
Figure 4. Graphical representation of the implementation of Algorithm 1.
Table 1. Results obtained from the implementation of Algorithm 1.
Test Number Cane-Calculated Step Count Real Step Count
1 1 5
2 2 5
3 4 10
4 3 10
5 6 15
6 6 15
7 8 20
8 11 20
9 14 25
10 14 25
The second algorithm was designed to observe as well as count the number of “swings” made by the cane using the lateral acceleration data acquired from the accelerometer. Initially, a few variables were initialized, similar to the first algorithm. Additionally, a buffer was used to store the lateral acceleration data.
The algorithm comprises a loop that constantly reads the data from the accelerometer, particularly focusing on left–right movement. These values are stored in the buffer array. Afterward, the average values are calculated. If the average value exceeds the minimum threshold, the swing counter is incremented, showing that a swing has been detected. Next, the buffer is shifted one position to accommodate new data. The loop continues until all of the swing values are measured. Figure 5 shows the graphical representation of Algorithm 2. Table 2 shows the values obtained by implementing the second algorithm. Here, one step = one swing.
Algorithm 2 Step Counter Algorithm Using Lateral Accelerometer Data.
// Define variables
const thresholdMin = 0.1 // Minimum threshold for detecting a swing
const timeWindow = 1000 // Time window for detection in milliseconds
const buffer = array of size timeWindow
int swings = 0
// Function to calculate the average of the buffer
function calculateAverage(buffer):
    sum = 0
    for each value in buffer:
        sum = sum + value
    return sum / timeWindow
// Loop to read data from the accelerometer
while true:
    readAccelerometer() // Read accelerometer data
    lateralAcceleration = acceleration in the lateral direction (left-right)
    // Add the lateral acceleration to the buffer
    buffer[current time % timeWindow] = lateralAcceleration
    if current time >= timeWindow:
        average = calculateAverage(buffer)
        if average > thresholdMin:
            swings = swings + 1
        shift the buffer by one position
    wait(sampling interval) // Wait for some time between readings
// At the end of the measurement, the “swings” variable will contain the number of detected swings
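Since Algorithm 2 differs from Algorithm 1 only in the input axis and window length, the fragment below sketches just those changes in Python. Treating a single axis as the lateral direction, and the stubbed sensor read, are our assumptions.

TIME_WINDOW = 1000  # buffer length from the listing (samples)

def read_lateral_acceleration():
    """Stub for the cane's accelerometer driver; returns the left-right axis only."""
    return 0.05  # placeholder reading

def count_swings(num_samples=5000, threshold_min=0.1):
    buffer = [0.0] * TIME_WINDOW
    swings = 0
    for t in range(num_samples):
        buffer[t % TIME_WINDOW] = read_lateral_acceleration()
        # Evaluate the window average once the buffer has filled.
        if t >= TIME_WINDOW and sum(buffer) / TIME_WINDOW > threshold_min:
            swings += 1
    return swings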
Figure 5. Graphical representation of the implementation of Algorithm 2.
Table 2. Results obtained from the implementation of Algorithm 2.
Test Number Cane-Calculated Step Count Real Step Count
1 6 5
2 5 5
3 12 10
4 15 10
5 16 15
6 19 15
7 22 20
8 20 20
9 26 25
10 28 25
From the second algorithm, it was observed that a greater number of swings were
counted than the original ones. This suggests that this algorithm is somewhat sensitive or
prone to overestimating the swings under certain conditions.
In the third algorithm, a combination of step detection using the accelerometer and proximity measurement was applied by making use of the Bluetooth and Wi-Fi RSSI signals. Similar to the previous algorithms, the minimal threshold was set for step detection, along with the initialization of the buffer to store the lateral acceleration. For the proximity measurement, the rssiThreshold was considered. The maxDistance as well as two counter variables were taken into consideration for counting the Wi-Fi and Bluetooth proximities.
To count the detected steps, the calculateAverage function was used on the values stored in the buffer. When the average lateral acceleration exceeded the threshold, the step counter was incremented, indicating that a step had been detected.
For the initial processes, the same steps as in the first two algorithms were followed; subsequently, the algorithm was designed to read the Bluetooth and Wi-Fi RSSI signal strengths, check whether they fell within the threshold and distance range, and increment the respective proximity counters. The system then waits for the sampling interval prior to repeating the process.
At the end of the calculation, counts are provided by the algorithm for the detected steps and the Wi-Fi and Bluetooth proximities in the variables steps, bluetoothDistances, and wifiDistances, respectively. Figure 6 shows the graphical representation of the implementation of Algorithm 3. Table 3 shows the values measured by the implementation of the third algorithm.
In our study, we conducted extensive testing of the three algorithms for step count calculation using accelerometer data. To ensure robustness and consistency, each algorithm was subjected to ten repetitions by each of the participants. The results are visually represented in the following graph, providing a clear comparison of the performance of these algorithms.
Algorithm 3 Step Counter Algorithm Using Accelerometer and RSSI Signals.
// Define variables for the accelerometer
const minThreshold = 0.1 // Minimum threshold to detect a step
const stepTimeWindow = 1000 // Time window for step detection in milliseconds
const stepBuffer = array of size stepTimeWindow
int steps = 0
// Define variables for distance measurement with RSSI
const rssiThreshold = 70 // RSSI threshold to consider proximity
const maxDistance = 10 // Maximum distance to consider proximity (in meters)
int bluetoothDistances = 0
int wifiDistances = 0
// Function to calculate the average of the buffer
function calculateAverage(buffer):
sum = 0
for each value in buffer:
sum = sum + value
return sum / stepTimeWindow
// Loop for reading accelerometer data
while true:
readAccelerometer() // Read accelerometer data
lateralAcceleration = acceleration in the lateral direction (left-right)
// Add lateral acceleration to the buffer
stepBuffer[current time % stepTimeWindow] = lateralAcceleration
if current time >= stepTimeWindow:
averageStep = calculateAverage(stepBuffer)
if averageStep > minThreshold:
steps = steps + 1
shift the stepBuffer by one position
// Read Bluetooth and WiFi RSSI
bluetoothSignalStrength = readBluetoothRSSI()
wifiSignalStrength = readWifiRSSI()
// Check for proximity based on RSSI
// Note: an RSSI value (in dBm) cannot be compared directly with maxDistance
// (in meters); it is first converted to an estimated distance. estimateDistance
// is a model-dependent helper; see the sketch after this listing.
if bluetoothSignalStrength >= rssiThreshold && estimateDistance(bluetoothSignalStrength) <= maxDistance:
bluetoothDistances = bluetoothDistances + 1
if wifiSignalStrength >= rssiThreshold && estimateDistance(wifiSignalStrength) <= maxDistance:
wifiDistances = wifiDistances + 1
wait(sampling interval) // Wait for some time between readings
// At the end of the measurement, the “steps”, “bluetoothDistances”, and “wifiDistances”
variables will contain the respective counts of detected steps, Bluetooth proximities, and
WiFi proximities.
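One point worth noting in this listing is that RSSI is a signal strength in dBm (typically a negative value), not a distance, so it must be converted to an estimated distance before it can be compared with maxDistance. A common conversion is the log-distance path-loss model, sketched below in Python; the measured power at 1 m, the path-loss exponent, and the -70 dBm threshold are environment-dependent assumptions rather than calibrated values from the smart cane.

MEASURED_POWER_DBM = -59.0  # assumed RSSI at 1 m; device and environment specific
PATH_LOSS_EXPONENT = 2.0    # assumed free-space exponent; indoors often 2.7-4.0

def estimate_distance(rssi_dbm):
    # Log-distance path-loss model: estimated distance grows as the signal weakens.
    return 10 ** ((MEASURED_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def is_proximate(rssi_dbm, rssi_threshold_dbm=-70.0, max_distance_m=10.0):
    # Count a proximity only if the signal is strong enough and near enough.
    return rssi_dbm >= rssi_threshold_dbm and estimate_distance(rssi_dbm) <= max_distance_m

# Example: with these constants, an RSSI of -65 dBm maps to roughly 2 m.
print(estimate_distance(-65.0))  # ~2.0
print(is_proximate(-65.0))       # True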
Figure 6. Graphical representation of the implementation of Algorithm 3.
Table 3. Results obtained from the implementation of Algorithm 3.
Test Number Cane-Calculated Step Count Real Step Count
1 5 5
2 5 5
3 11 10
4 10 10
5 14 15
6 17 15
7 22 20
8 21 20
9 26 25
10 25 25
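As a quick numerical check on the two result tables, the mean absolute error of the cane-calculated counts against the real counts can be computed directly from the values in Tables 2 and 3. The short Python sketch below does so and confirms that Algorithm 3 (0.8 counts of error on average) tracks the real count more closely than Algorithm 2 (1.9).

# Mean absolute error of cane-calculated vs. real counts (Tables 2 and 3).
def mean_absolute_error(calculated, real):
    return sum(abs(c - r) for c, r in zip(calculated, real)) / len(real)

algo2_calc = [6, 5, 12, 15, 16, 19, 22, 20, 26, 28]    # Table 2, cane-calculated
algo3_calc = [5, 5, 11, 10, 14, 17, 22, 21, 26, 25]    # Table 3, cane-calculated
real_counts = [5, 5, 10, 10, 15, 15, 20, 20, 25, 25]   # real counts, both tables

print(mean_absolute_error(algo2_calc, real_counts))  # 1.9
print(mean_absolute_error(algo3_calc, real_counts))  # 0.8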
5. Discussion
This research has presented three algorithms and implemented them in a smart cane to measure the steps of a visually impaired person. These algorithms measure not only the steps, but also the swings and proximities, making use of Bluetooth and Wi-Fi RSSI signals.
From the analysis of Algorithm 1, it was noted that the step counting mechanism was
influenced by the lateral movements of the cane, as visually impaired users often sweep
the cane left and right to detect obstacles. This motion could lead to an overestimation
or underestimation of the step count, as the algorithm did not adequately differentiate
between forward steps and lateral cane movements. Furthermore, Algorithm 1 lacked
validation for the user’s actual motion direction, whether they were moving forward,
backward, or to the side. This led to fluctuating accuracy rates under different walking
scenarios, highlighting the need for a more sophisticated algorithm that could discern the
intended direction of travel and discriminate between obstacle detection sweeps and the
actual steps taken.
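One way such discrimination could be achieved, offered here only as an illustrative sketch and not as part of Algorithm 1, is to compare the signal energy on the forward axis against the lateral axis within each window and count a step only when forward motion dominates:

# Illustrative axis-energy gate (not part of Algorithm 1): reject windows in
# which lateral cane sweeps dominate the forward walking motion.
def is_forward_step(forward_window, lateral_window, energy_threshold=0.1):
    forward_energy = sum(a * a for a in forward_window) / len(forward_window)
    lateral_energy = sum(a * a for a in lateral_window) / len(lateral_window)
    return forward_energy > energy_threshold and forward_energy > lateral_energy

# Example: a cane sweep (strong lateral, weak forward) is rejected.
print(is_forward_step([0.05] * 50, [0.6] * 50))  # False: dominated by the sweep
print(is_forward_step([0.5] * 50, [0.1] * 50))   # True: forward step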
In the second algorithm, which counted swings based on the swing detection phenomenon, the swing count was overestimated in comparison to the real count. This shows that the algorithm is sensitive to overestimating swings under particular conditions.
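A common mitigation for this kind of overcounting, noted here as a possible refinement rather than a feature of Algorithm 2, is a refractory period: once a swing has been counted, further detections are ignored for a short interval. A minimal Python sketch, assuming a 400 ms minimum gap between distinct swings:

MIN_SWING_GAP_MS = 400  # assumed minimum time between distinct swings

def count_swings(detection_times_ms):
    # Merge raw detections that fall within the refractory period.
    swings, last_counted = 0, None
    for t in sorted(detection_times_ms):
        if last_counted is None or t - last_counted >= MIN_SWING_GAP_MS:
            swings += 1
            last_counted = t
    return swings

# Example: five raw detections collapse into three distinct swings.
print(count_swings([0, 100, 500, 650, 1200]))  # prints 3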
Considering the third and last algorithm, which combined the step detection phenomenon with proximity measurement, it was observed that the steps calculated by implementing this algorithm closely matched the real step count. This result shows that detecting steps by making use of the accelerometer provided the most accurate results. In addition, the proximity measurements assisted in counting the Bluetooth and Wi-Fi proximities.
The results show the effectiveness of Algorithm 3 in accurately detecting steps and also highlight the potential of a smart cane to help visually impaired people in their daily lives by tracking steps as well as enabling social media interaction.
6. Conclusions
This research has not only presented the design of a smart cane aimed at improving
the social media experiences of the visually impaired, but also recognized the essential
role of technology in enhancing the personal safety of individuals as they navigate outside
their homes. While the original study focused on integrating blind individuals into the
digital age and improving independence through social media access, it is paramount to
underscore the smart cane’s contribution to personal security.
The multifaceted design of the smart cane encompasses audio–tactile interaction,
gesture detection, speech-to-text translation, and cloud connectivity through Bluetooth,
which collectively serve to create a safer navigation experience. The addition of proximity
sensors, GPS tracking, and emergency alert systems provides users with the confidence
to explore their surroundings securely. The software components, including the Facebook Chat API integration and the advanced step count algorithm, are complemented by the device's voice recognition capabilities, which not only enhance user interaction with social media but also bolster user safety by allowing hands-free operation and immediate access to assistance when needed.
Algorithm 3, in particular, demonstrated superior performance in step count accuracy,
which is integral to the safety features, as precise step and swing detection are crucial for
avoiding obstacles and hazards.
Future work including user evaluations with visually impaired individuals will not
only assess the smart cane’s usability and effectiveness in real-world scenarios, but will also
prioritize the evaluation of its safety features. Ensuring the practical usability of the smart
cane includes a thorough validation of its security and emergency response systems, which
are vital for the safety and well-being of its users. By emphasizing personal safety alongside
social media enhancement, the smart cane represents a holistic approach to supporting the
visually impaired in their quest for a more independent and secure lifestyle.
Author Contributions: Software, hardware, evaluation and data acquisition: M.D.M.; writing—original draft preparation and final version: M.D.M.; writing—review and editing: B.-A.J.M., H.M. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Natural Sciences and Engineering Research Council of
Canada (NSERC), grant number RGPIN-2019-07169.
Data Availability Statement: The original contributions presented in the study are included in the
article, further inquiries can be directed to the corresponding author.
Conflicts of Interest: The authors declare no conflicts of interest.