An Analysis of QR Code-Integrated Webpages for Enhancing Object
Literacy, Focusing on Health Literacy Applications.
Sebastian Farrant
Dept. of Computing at Goldsmiths
London, England
sfarr002@campus.goldsmiths.ac.uk
Abstract—Objects used in our day-to-day lives do not
incorporate a high enough standard of accessibility for those
with disabilities, especially those with vision and hearing
difficulties. QR code technology has the potential to help
individuals ascertain what certain objects are, their uses, and
their dangers. Most objects that contain identifiers and
instructions have them in the form of written text or images.
Certain design elements of these identifiers such as text size
and polarity can be negatively influenced by limiting factors of
the object such as its size and material, and consequently affect
people’s ability to understand them. An inability to correctly
comprehend an item’s instructions could result in difficulty
differentiating, identifying, and using certain objects and
products. For objects with health risks, this also becomes a
safety hazard.
Utilising relevant academic literature and following
appropriate user experience and user interface guidelines,
various forms of disability-friendly instruction accessible
through QR code technology will be tested on a control group.
The findings will help to identify the overall benefit of QR code
object-based guidance as an alternative to traditional forms of
information and instruction.
Keywords—accessibility, disability-friendly guidance, traditional forms of instruction, QR code technology.
I. INTRODUCTION
Many conventional objects with the capacity for interaction beyond the most basic of actions are ill-equipped to provide adequate instruction for those with visual or auditory impairments or other literacy-affecting disabilities. Such objects
can range from everyday household items, supermarket products,
furniture, or interaction points between modes of transport. Whilst
it is relatively well known that objects displaying text-based information are often insufficient for the visually impaired, they
can also be a source of frustration for those with auditory
impairments. A study conducted in 2015 [29] found that 48% of
participants with auditory impairments had inadequate health
literacy, a rate 6.9 times higher than those without. Health literacy
is the capability to make informed decisions about one’s health
after obtaining, reading, and comprehending personal medical
information. This study highlights that a considerable percentage of those with auditory impairments are unable to process and apply text-based information, or struggle to do so.
The material, size, or manner in which certain objects are used may make them unsuitable for displaying text or image-based information. This holds true for many fabric labels; one example is the sizing information on the underside of the tongue of a shoe. The text is usually small, often printed on a piece of fabric only centimetres wide that is prone to colour degradation and fraying, reducing the sharpness and polarity of the text. Alongside its size, text polarity and sharpness are essential components in determining readability [12]. Sizing is also not universal, with shoes often displaying metrics standard to the UK, US, and EU. For those who can identify numbers but have difficulty reading, or who struggle with both, this only adds to the confusion.
Replacing the label with a QR code that provides the relevant information in multiple accessible formats, tailored to the user's disability and linguistic background, could mitigate these problems. The inability to quickly understand
what an object is or how it should be used may slow down
otherwise swift interactions or tasks, causing frustration and a
reduction in productivity. Not being able to understand written
instructions at all may reduce one’s independence as one may
require assistance performing tasks that involve large amounts of
object interaction, such as grocery shopping.
One of the most commonly accessible forms of object identification is braille. Braille is a tactile writing system formed of raised dots that can be interpreted through touch. In book or paper format, it is highly effective for those with visual impairments, with accomplished users reading at up to half the speed of a visual reader [43]. It is often seen on products with cardboard packaging and on high-risk products like paracetamol. Like traditional text-based information, however, the size, material, and nature of the object determine the efficacy of braille print. Since it relies on tactile senses, objects with rough, bumpy, or otherwise uneven surfaces can compromise the clarity with which users distinguish the individual raised dots. Braille must also be printed at a certain minimum size, which limits the amount of information that can be displayed on smaller objects. There are also few braille teachers, and not enough schools teach the script to visually impaired students [11]. As a result, only around 7% of registered blind or partially sighted people use braille script [38]. Other scripts have been developed with improved learnability in mind, but their tactile nature means they will ultimately encounter the same limitations.
QR codes are easy to generate, can be created online for free, and can be printed using relatively accessible technology. They can be as small as two centimetres by two centimetres and still be recognised by most smartphone devices [7]. As such, they can be
sized down to fit onto most everyday objects. QR codes function as
a proxy for digital information relating to the item they are printed
on. Since the information being delivered through them is digital,
and not displayed on the object itself, the amount of information
that can be presented is not constrained by the physical attributes of
the object it is placed on. This also allows for the information to be
displayed in multiple formats otherwise unavailable to the user.
Unlike their physical counterparts, virtual representations of information can incorporate dynamic components proven to aid understanding, such as audio instruction, text resizing, font changes, polarity adjustments, and infographics [12], [30], [39].
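As an illustrative sketch (the format names and the mapping below are hypothetical, not part of any cited system), a QR-linked page could pick a default presentation format from a user's declared needs while still allowing manual override:

```python
# Hypothetical mapping from a user's declared needs to a default
# presentation format; a real page would let the user override this.
FORMAT_BY_NEED = {
    "blind": "audio",
    "partial_sight": "large_text",
    "deaf": "sign_language_video",
    "dyslexia": "audio",
}

def default_format(declared_needs: list[str]) -> str:
    """Return the first matching accessible format, else plain text."""
    for need in declared_needs:
        if need in FORMAT_BY_NEED:
            return FORMAT_BY_NEED[need]
    return "text"

print(default_format(["deaf"]))           # sign_language_video
print(default_format(["none_declared"]))  # text
```

The point of the sketch is that the choice of format lives on the web page, not on the object, so it can be changed without reprinting any label.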
Research Objectives
Following evaluation and usability guidelines tested on a relevant control group, this research aims to uncover the most effective forms of accessible instruction for those with visual or auditory impairments or literacy-affecting disabilities. The findings will be presented in a summary that aims to help identify the efficacy of QR code object assistance in comparison to traditional forms of instruction, alongside its feasibility for practical implementation.
II. LITERATURE REVIEW
Veispak & Ghesquière [45] evaluated the difficulties and problems arising when visually impaired children try to read and understand braille. They argue that some of these children experience a form of dyslexia, citing that the problems encountered go further than being a consequence of the script's inherent complexity,
and could stem from temporal information processing deficits.
These deficits provoke phonological disorders also seen in learners
with visual dyslexia. The logographic nature of braille also makes it harder to learn, and some children with such visual impairments ultimately fail to learn the script. The technological advancements in electronic aids reviewed in [17] are important in enabling object recognition beyond what was previously possible.
Vasco Lopes [44] describes how the ‘Internet of Things’
phenomenon will allow real-world objects to become uniquely
identifiable virtually at the application layer of a network stack.
This is possible using tagging systems such as QR codes, bar codes, and RFID chips and scanners. They note that a unified architecture is necessary to allow cross-compatibility and interaction between different systems. Regarding architectures for the disabled, the current taxonomy is limited, and existing methods must be studied to evaluate their suitability. They add that an Internet of Things architecture for those with disabilities needs to address nine aspects of usability: availability, performance, interoperability, scalability, evolution, security, resilience, easy-to-use interfaces, and user-friendly applications.
Within the context of QR codes, we can address how well they fit
these criteria. QR codes can be printed by anyone and do not
require any specialised technology to do so. They are cheap to
produce and not particularly resource-intensive. Any modern smartphone can read and access QR codes in seconds, provided it has a stable internet connection. The standardised QR code graphic is not constrained by any specific language, i.e., it is universally recognisable. Since they act as a proxy for web-based services, they should adapt alongside any new web-based assistive technology without compatibility problems. However, their relative security depends on the link the QR code provides: the physical tag and graphic could be replaced or altered.
Reference [6] studied the implementation of a QR code-based
application in a museum, aiming to increase the level of
accessibility of information accompanying certain exhibitions
within the museum as well as the independence of the user. A user-
centred design approach was taken which developed into three
design cycles. Utilising a target group of 68 people, the application
was designed to provide the desired information in text, photos,
and sign language videos. The pilot application was tested on 28
audibly impaired participants with two rounds of relevant data
collected within a one-month period of one another. Participants
were allowed to roam the exhibitions freely with the application
installed on their devices. No instructions were given aside from
what was provided in the application itself. The participants were
recorded, and at the end of each session asked to complete a short
questionnaire. The mean time (in seconds) it took for the
participants to obtain the relevant information for each exhibit was
recorded. There was a noticeable decrease in mean time between the first (115.61 s) and second (48.11 s) exhibits, reflecting a high level of learnability for the QR code-based application.
Users also displayed a reduction in time needed to access the
information for the first exhibit on their second round of data
collection, going from 115.61s to 51.43s, demonstrating a
proficient level of memorability regarding the use of the
application.
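Using the mean times reported above, the learnability and memorability gains can be expressed as simple percentage reductions in task time:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage reduction in mean task time."""
    return (before - after) / before * 100

# Learnability: first vs second exhibit within one session.
learnability = pct_reduction(115.61, 48.11)
# Memorability: first exhibit, first vs second session.
memorability = pct_reduction(115.61, 51.43)
print(f"learnability gain: {learnability:.1f}%")  # ≈ 58.4%
print(f"memorability gain: {memorability:.1f}%")  # ≈ 55.5%
```

Both reductions exceed half the original task time, which is the quantitative basis for the learnability and memorability claims above.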
Reference [27] analysed how QR codes and RFID chips can be
utilised to increase the independence of those with visual
impairments, using supermarket shopping as an example. Those
with visual impairments often have trouble navigating the shop
floor and identifying target products. This study highlights several functionalities of QR codes relevant to this research. Eyes-free product selection and browsing: tactile QR codes allow those with visual impairments to identify the available range of products before the purchasing process is initiated. Utilisation of existing devices: using technology already available to the user reduces implementation costs and increases accessibility. Minimal environment adjustments: requiring few changes to the shop floor combats reluctance to introduce this form of accessible shopping within stores, maximising the feasibility of its implementation. It is
noted that in the usability study, the embossed QR codes were
deemed to be favourable over standardised UPC barcodes owing to
their speed and reliability. Further benefits of QR codes were highlighted in [20], which used QR codes and text compression to convert plain text from books and documents into playable speech files. They found that documents can be read quickly by those with visual impairments using QR code technology, and that these codes can contain up to ten times more characters than traditional bar codes. Stickers, braille seals, and stereoscopic QR prints make it easier for the visually impaired to locate the codes. QR code text compression also increases the information transfer capacity.
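The text-compression idea can be sketched in a few lines. This is a minimal stdlib sketch, not the scheme used in [20]; it assumes the payload must be ASCII-safe, as when embedding data directly in a QR code's byte mode:

```python
import base64
import zlib

def pack_for_qr(text: str) -> str:
    """Compress text and encode it as ASCII so it fits in a QR payload."""
    compressed = zlib.compress(text.encode("utf-8"), level=9)
    return base64.b64encode(compressed).decode("ascii")

def unpack_from_qr(payload: str) -> str:
    """Reverse of pack_for_qr: decode and decompress back to text."""
    return zlib.decompress(base64.b64decode(payload)).decode("utf-8")

# Repetitive label-style text compresses very well.
label = ("4000 IU Vitamin D3 Tablets. Serving size: 1 tablet. "
         "Take one tablet daily with a full glass of water. ") * 20
packed = pack_for_qr(label)
assert unpack_from_qr(packed) == label
print(len(label), "->", len(packed))
```

For highly repetitive instructional text, the compressed payload is far smaller than the original, which is how compression raises the effective character capacity of a single code.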
Reference [40] outlines mutual dependence issues, failure to
accommodate a specific user group and inadequately tailored user
interface design as the fundamental problems encountered when
designing applications for people with disabilities. These factors
affect the success criteria (usability) of the application by
modulating the time, efficiency, and costs. Disabled people have
been neglected as a target user group for design due to low
commercial profit. Approximately 90% of those who suffer from
visual impairments live in low-income settings worldwide [1]. It is
imperative that systems developed to aid those with such
impairments can be accessible to the masses. Most mobile
applications developed have a UI designed for people with at least
partial sight. “One of the most difficult tasks faced by the visually
impaired is identification of things that are useful in their daily
living.” [1].
Reference [1] developed an Android application designed to aid the
visually impaired through several features: light recognition, colour
detection, object recognition and banknote recognition. Having a
modular application accessible from one device increases the
accessibility and simplicity of the system to the user. Rezaei et al.,
[40] positively acknowledge this concept: Prototype systems that
have been developed purely based on performing a singular task
neglect other UX aspects involved in the creation of well-designed
software such as mobility and entertainment. The overt focus on a
specific target group could cause problems when the applications'
usability aspects are made redundant due to secondary disabilities
affecting specific users. For instance, this interdependency problem could occur within a text-to-speech application: if developed for frail elderly users with visual impairments, it could not be used correctly if it required a certain level of physical interaction with the hands or limbs [40].
Reference [8] highlights the importance of a user-centred design
approach when creating systems for those with visual or auditory
impairments. In their initial development of a communication
board for those with auditory impairments, the context of use was
explored to understand the settings, environments, and
circumstances in which it would be implemented. Social, cultural,
and physical factors may affect the scope of its utilisation. The
fundamental user requirements are the foundations on which the
successful development and implementation of the proposed
software are built. Interviews were carried out to gain context into
what their needs and wants were regarding using such a system.
The semi-structured nature of the interview allowed a wider scope
to uncover the previously unknown nuances of the users’ problems.
The usability requirements were to be categorised into the
following: Effectiveness, Efficiency and Satisfaction. This process
helps quantify the ability and ease of users in their usage of the
system to fulfil its fundamental requirements. Jakob Nielsen [33]
categorises an application's usability into five criteria: Learnability,
Efficiency, Memorability, Errors, and Satisfaction. In the process of evaluating the system's solutions against the user requirements, questions were formatted to test these criteria, utilising a Likert scale for quantification. Utilising this iterative user-centred process
helped ensure user needs were met in the proposal of this system in
such a way as to execute “a perfect system to guarantee full
portrayal of the clients all through the product configuration
process” [8].
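As a sketch of how such Likert data can be quantified per Nielsen criterion (the responses below are hypothetical, not data from [8] or [33]):

```python
from statistics import mean

# Hypothetical 1-5 Likert responses grouped by Nielsen's five criteria.
responses = {
    "learnability": [4, 5, 4, 3],
    "efficiency":   [3, 4, 4, 4],
    "memorability": [5, 4, 4, 5],
    "errors":       [4, 3, 4, 4],   # higher = fewer / less severe errors
    "satisfaction": [4, 4, 5, 3],
}

# Mean score per criterion, listed weakest first.
scores = {criterion: mean(vals) for criterion, vals in responses.items()}
for criterion, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{criterion:<12} {score:.2f} / 5")
```

Ranking criteria by mean score makes it easy to see which of the five usability dimensions needs the most attention in the next design iteration.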
The potential benefits provided through QR code implementation
are only as effective as the accessibility of the interface itself [19].
In their development of a blind-friendly user interface, they acknowledge the multitude of accessibility features touchscreen devices contain, such as screen readers, multimodal interactions, haptic feedback, and gesture interaction. However, they outline the trade-offs of these features in terms of discoverability, layout consistency, navigational complexity, and cross-device interaction [19]. One of the main problems identified in locating items of interest within a touch-based UI is the lack of tactile sense inputs. Voice-based interaction “suffers from issues of ascent noisy feedback and poor performance resulting in cognitive overload” [19]. Gesture-based interactions such as swiping,
flicking, and rotating are also features that can increase
accessibility to those with visual impairments. Accessibility can be
increased through means of reducing the navigational complexity
of applications to allow users to memorise the ‘flow’ of interaction
required to perform specific tasks. This can be achieved through
layout consistency, interaction feedback, effectively designed error
handling and prevention and ease of bringing users ‘back to home’,
or their original starting point. The study of user experience in application interaction has shifted from evaluating the application itself towards a subjective approach that quantifies and analyses the emotional elements of individual exchanges. The design of an adaptable user interface
facilitates these needs by allowing users to customise interaction
methods based on specific needs and experiences. The resulting
user interface outperformed systems preceding it in standardised
usability tests through “logical organisation of actions”, “reduced
cognitive overload” and effectively implemented the use of haptic
feedback [19].
Whilst it seems that there is reasonable evidence for the
implementation of QR codes within this practical setting, they do
not come without their drawbacks. Reference [36] outlines the
inherent dangers that can be associated with widespread QR code
usage. Whilst the tags themselves aren’t dangerous, they could be
used with malicious intent and may lead to harmful websites. This practice of printing malicious codes over legitimate ones is known as attagging [36], and the widespread adoption of QR codes could encourage such malicious deployment, posing a particular threat to the user group of reference, who are more vulnerable than the average person. In 2011, Russian criminals used this scam to steal hundreds of users’ personal information and cash [36].
III. HYPOTHESIS, METHODOLOGY AND DATA COLLECTION
Webpage Development
The web page was designed using a QR code and webpage
generation tool provided by QR Video Solutions [37]. A
minimalistic design approach was taken, stripping the web page
functionality to only fundamental interactions to increase the site's
learnability [3],[18]. Adopting consistent fonts, button sizes, and navigational structure helped to keep a high level of consistency within the webpage, which assists users in navigating the website and maintains a level of trust [25]. The object used as the foundation of the test was a vitamin D bottle. Taking medication is a common human experience, and in the context of the health literacy issues discovered in deaf sign language users discussed in the literature review [29], it seemed an appropriate choice. The instructions written on the label were then converted into accessible formats on a web page. They read as follows: “4000 IU Vitamin D3 Tablets. Serving size: 1 tablet. Take one tablet daily with a full glass of water. Caution: Do not exceed stated recommended dose.” Miscellaneous details were excluded as they were deemed to provide no further insight into the functional usability aspects of the system at this testing stage.
Figure 1: Image of the webpage displaying information in various
formats.
The information formats included text (screen reader friendly),
video, audio, and sign language descriptions. The video was created in iMovie and had text that appeared in synchronisation with the audio, as shown in Figure 2. Black text against a white background
was used to ensure the highest level of polarity and not interfere
with any visual impairments involving colour blindness. Polarity is
an important design aspect in creating accessible content, as it can
have a strong effect on the user’s ability to comprehend text. Users
with visual impairments are more prone to having their literacy
affected by low polarity [24]. A print typeface was used for the font, as such typefaces are designed with accessibility in mind. The clear spacing between letters and words makes it easier for users to distinguish
them [23]. The audio description was generated using a free text-
to-speech tool [9] to ensure the audio was fluent and spoken at a
regular pace without any stuttering or mispronunciations. For the
sign language description, a third-party sign language application
was used [13] to generate a digital model of a person using sign
language as shown in Figure 3.
Figure 2: Image of video description shown on the webpage.
Figure 3: Image still of sign-language description shown on the
webpage.
Testing Components
The data in this section was collected against essential product requirements discovered within the background research section of this project. In this test, usability and user experience fall under a wide spectrum of performance outcomes as opposed to a single quantitative dataset. As the test aimed to gather experience-based qualitative data, a binary ‘right or wrong’ hypothesis was not conceived; rather, a set of user experience and usability requirements was deliberated and outlined as the essential functionality the system was to offer to provide an effective service to users. The results obtained from the study would then be evaluated against these initial presumptions and demands. The set of system usability and performance requirements is outlined below:
Fundamental usability should be key, as solutions with simple
interfaces and low navigational complexity are more intuitive and
easier to use [15]. The information provided should also be
coherent. The audio and video descriptions should be simple and
easy to understand for the user. These qualities are dependent upon
several factors: Speed, Quality and Language Used. The solution
should also have a high level of performance: Users of well-
designed systems should be able to access the web page
information and be able to utilise it efficiently and in a timely
manner [31]. The web page should also have a high level of
accessibility. Accessible web design should allow users with a
wide range of disabilities to make use of web page descriptions
[46]. Any inbuilt accessibility features from their smartphone or
browser should be tested to check their compatibility with the web
page. Fundamentally, all these characteristics tie in with the overall quality of the user experience, which should be optimised. The user's overall experience should be one of high satisfaction, invoking positive feelings towards the system [41].
Control Group
The test group was chosen to be anyone who could benefit from utilising the system. Users with any form of visual impairment that could affect their ability to read product descriptions written on certain objects were ideal candidates. Other conditions that affect
literacy were included, such as developmental dyslexia as well as
any speech and language processing disorders or developmental
disabilities. A diverse testing group was important to help gather
insight into usability problems specific to certain user groups [21].
To gather responses, the test was published amongst several online
communities on Reddit and Facebook that were based on blindness,
dyslexia, and general visual impairments. In a bid to gather more responses, family and friends were also encouraged to share the survey with any relevant people they knew. The test was made up of
two parts, a pre-test information gathering form and a ‘during’ /
post-test information and feedback response form. To try and keep
the test as close to an ‘in person’ scenario as possible, the QR code
was digitally implemented on a mock-up of a vitamin D bottle that
could be scanned the same way one would in real life.
Figure 4: QR-Code printed on vitamin bottle mock-up.
Methodology
A formal testing methodology was outlined prior to execution and involved a pilot test to ensure the process ran smoothly and effectively. By utilising a pilot test, one can make certain the test layout and methodology are suitable for the desired outcome [22].
To analyse the test data constructively, a pre-test information gathering survey was conducted to gather information on participants' disability type, previous experience with QR-code technology, tech literacy, and any specific issues encountered with existing methods for object identification. Such ‘pre-test’ questionnaires ensure that the participants being tested, and the subsequent information they impart, provide relevant information towards the hypothesis [2]. The test itself was task-based, with several simple tasks assigned to the user to carry out.
Tasked-based tests are useful as they provide more insight into the
‘real-life’ application of a system versus traditional interviews and
questionnaires [5]. These tasks were designed to utilise all the features available on the web page, e.g., using the audio and video descriptions (if applicable to the user) and utilising the browser's or smartphone's inbuilt accessibility features, such as a screen reader. The tasks are as follows:
1. Finding the QR code on the object (if applicable).
2. Scanning the QR code on the object (if applicable).
3. Accessing the QR code web page.
4. Locating and playing the video description.
5. Locating and playing the audio description.
6. Basic web page navigation; thinking aloud to review its layout and usability.
7. Evaluating (thinking aloud) the quality of the video and audio instructions based on intelligibility, clarity, etc.
8. Evaluating (thinking aloud) the performance of the system, using the criteria outlined in the requirements and test objectives section.
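As a sketch of how per-task results from such a session could be recorded (the participant ID, tasks, and timings below are hypothetical, not the study's raw data):

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    task: str
    completed: bool
    seconds: float

@dataclass
class Session:
    participant_id: int  # numeric ID keeps the participant anonymous
    results: list[TaskResult] = field(default_factory=list)

    def completion_rate(self) -> float:
        """Fraction of assigned tasks the participant completed."""
        done = sum(r.completed for r in self.results)
        return done / len(self.results)

# Hypothetical session for one anonymised participant.
s = Session(1)
s.results.append(TaskResult("Scan the QR code", True, 12.4))
s.results.append(TaskResult("Play the audio description", True, 8.1))
s.results.append(TaskResult("Play the video description", False, 30.0))
print(f"Participant {s.participant_id}: "
      f"{s.completion_rate():.0%} tasks completed")
```

Recording completion and timing per task, keyed only to a numeric ID, supports both the task-based analysis and the anonymisation described below.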
Seven complete responses from relevant individuals were recorded
from participants responding from blind/disabled online user
groups. Nine responses were recorded in total, but two did not
follow through with the feedback response form. The names and identifiers of participants partaking in this research have been, and will remain, anonymous; participants are instead identified using a numerical system. The information
collected from this research is all qualitative long-form data
comprising multifarious questions and responses. As such, the raw
data is too lengthy to be published in this article but will be
referenced in the data analysis.
Data Collection
The majority of the responses gathered were curt and could have benefited from extra detail; however, they still provide valuable insight. Finding willing participants who also fit the criteria for testing was a struggle, which is reflected in the response count (7). In hindsight, the use of two separate surveys was an inefficiency in the research process that led to a few unfinished responses that could have provided useful information. However, data from as few as five people can provide useful and adequate insight into the standard usability criteria [32] outlined in the essential requirements. In the next section, this data will be analysed in relation to the hypothesised requirements outlined prior to the test's conceptualisation and completion. Subsequently, an evaluative analysis of the system will be made, and frameworks, areas for improvement, and guidelines will be deliberated and proposed.
IV. RESEARCH DATA ANALYSIS
Seven participants were recruited for this study, and provided a
diverse sample of ages, disabilities, and genders. One participant
identified as non-binary, one as ‘other’, two as men and three as
women. The ages of the participants spanned from 22 to 43. The
disabilities of the participants included blindness, partial sight,
dyslexia, and ADHD. Some participants reported using assistive
technologies such as a screen reader. There was a healthy mix of
participants that had prior experience with QR-code technology
and those that didn’t. This spread was important as it indicated a diverse background of participants and allowed the collected data to be considerate of users with varying technological experience [14].
Most participants were able to complete all the designated tasks
with ease, but a minority had difficulty scanning the QR code and
traversing the site. User 1 highlighted their concern about being
able to locate the QR code as a person suffering from blindness. A
potential solution would be to increase the size of the QR code and possibly implement a tactile indicator on the physical object. Tactile indicators have proven effective for identification in other formats, such as braille [42]. Navigational
buttons could also be increased in size and made more prominent
on the page [21].
The results portrayed mixed reviews for the audio feedback: participants stated that the robotic voice was difficult to understand and that a real voice would have been preferred. Whilst the robotic voice was implemented to provide a guaranteed level of consistency and fluency within the audio, these results align with research [16] concluding that artificial voices are often harder to understand than real voices, especially for those with hearing difficulties. All but one of the participants stated that the webpage
loaded smoothly and allowed them to access the relevant
information in a timely manner. This is a promising result from the
study and an important aspect of developing a satisfactory level of
user experience; slow-loading webpages can cause annoyance and
put off users from interacting with the site [28].
All the participants responded stating that the webpage provided an
adequate level of accessibility that catered to their specific
disability or impairment, although some visual aspects were
recommended to be improved upon. User 7 stated that the audio
and video descriptions were useful, but the overall look of the site
felt dated. This remark is important as the visual appeal of a
website ties in strongly to its trustworthiness and credibility [26].
Under the context of the site providing advice for taking a
supplement that could be potentially harmful if misused, trust is an
especially important aspect of the webpage design in this instance.
The participants' views of their experience with the system varied. A
few indicated interest in, and willingness to, utilise a system of
this nature if it became widely implemented, whilst others questioned
its necessity relative to existing accessibility systems. User 1
stated that existing technology such as text-to-speech scanning
applications provides the same functionality with a greater level of
accessibility, without requiring users to have an internet connection
or traverse an unfamiliar website. The overall quality of the user
experience could be improved through more customisable features that
adapt the webpage to users' specific needs and requirements, such as
the ability to alter text size, contrast, and colour arrangement [10].
Additionally, the accessibility of the QR code itself needs to be
revised for real-life applications, potentially with tactile
indicators. The site's optimisation for different devices could also
be more thoroughly considered so that a user's quality of experience
is not limited by their device type [4].
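One lightweight way to implement such user-adjustable settings is to map stored preferences onto CSS custom properties that the stylesheet then references. The function below sketches this approach; the property names and the shape of the preferences object are illustrative assumptions.

```javascript
// Translate a user's accessibility preferences into CSS custom
// properties. Returning a plain object keeps the mapping testable;
// a browser would apply it to document.documentElement.style.
function preferencesToCssVars({ textScale = 1, highContrast = false } = {}) {
  return {
    "--base-font-size": `${Math.round(16 * textScale)}px`,
    "--fg-colour": highContrast ? "#000000" : "#333333",
    "--bg-colour": highContrast ? "#ffffff" : "#fafafa",
  };
}

// Applying the variables in a browser (sketch):
//   const vars = preferencesToCssVars({ textScale: 1.5, highContrast: true });
//   for (const [name, value] of Object.entries(vars)) {
//     document.documentElement.style.setProperty(name, value);
//   }
```

Because every rule in the stylesheet reads from the same variables, a single preference change propagates across the whole page without per-element scripting.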
A few participants provided useful insight into refining the
experience of the system, including modernising the website's design,
changing colour choices, and revising its overall aesthetic. Modern,
professional design elements are known to play a significant role in a
website's perceived credibility as well as the holistic experience of
the user [26]. It would be wise to revisit and further apply proven
frameworks and design principles in the development process, such as
consistency, user control and freedom, and error prevention [34]. User
1's insight calls attention to the importance of being aware of other
accessibility technology such as text-to-speech scanning applications.
Future iterations of this system could look to integrate such
technology to maximise accessibility and provide a greater user
experience.
Findings from this study have several implications for further
research into QR codes and accessible web design. It is imperative
that accessible solutions adapt alongside continuous advancements in
technology to ensure inclusivity for those with disabilities and
impairments. A standardised user-centred design approach should be
more rigorously adopted during future iterations of the system to
ensure the website meets the essential user requirements outlined;
this would mean ongoing user testing throughout the entire development
cycle and iterative alterations to the design based on user feedback
[35]. Further research into the long-term benefit of the system as a
whole could be achieved through longitudinal studies. In relation to
health literacy, a significant problem for those with disabilities
[29], such studies could measure participants' understanding of given
medicinal information, their adherence to healthcare instructions, and
their overall health outcomes. This could provide evidence for the
widespread adoption of systems like this to promote health literacy in
disabled patients.
Ultimately, the feedback from this study gave useful insight into the
system's accessibility and usability features and provides a solid
foundation to improve upon. By addressing the issues identified during
the study, systems such as this have the potential to become a widely
adopted solution for achieving object accessibility for those with
visual impairments and other disabilities. Further research should
focus on modernising and streamlining the system's aesthetic appeal,
optimising its functionality, and exploring the integration of
existing synergistic technologies for a seamless user experience. This
research aims to contribute to the objective of utilising technology
to ensure the highest level of ubiquitous accessibility for all,
regardless of disability or circumstance.
ACKNOWLEDGMENTS
I would like to express my sincere gratitude to Dr James
Ohene-Djan for his continued support and advice during the
development of this paper. I would also like to extend my
thanks to the whole team at QR Video Solutions for allowing
me to utilise their services and providing insight into any
queries I had.
REFERENCES
[1] Awad, M. et al. (2018) “Intelligent Eye: A mobile application for
assisting blind people,” 2018 IEEE Middle East and North
Africa Communications Conference (MENACOMM) [Preprint]. Available
at: https://doi.org/10.1109/menacomm.2018.8371005.
[2] Barnum, C. (2010) Usability Testing Essentials: Ready, set ... test!
Morgan Kaufmann Publishers Inc.
[3] Brooks, B. (2023) Thinking fast and slow in website design strategy,
HyperWeb. Available at: https://hyperweb.com.au/thinking-fast-and-slow-
in-website-design-strategy/ (Accessed: March 2023).
[4] Budiu, R. (2016) Mobile: Native apps, web apps, and hybrid apps.
Nielsen Norman Group. Available at:
https://www.nngroup.com/articles/mobile-native-apps/ (Accessed: April
2023).
[5] Cockton, G. and Woolrych, A. (2001) “Understanding inspection
methods: Lessons from an assessment of Heuristic Evaluation,” People and
Computers XV—Interaction without Frontiers, pp. 171–191. Available at:
https://doi.org/10.1007/978-1-4471-0353-0_11.
[6] Constantinou, V., Loizides, F. and Ioannou, A. (2016) “A personal tour
of cultural heritage for deaf museum visitors,” Digital Heritage. Progress in
Cultural Heritage: Documentation, Preservation, and Protection, pp. 214–
221. Available at: https://doi.org/10.1007/978-3-319-48974-2_24.
[7] Coull, H. (2022) The Ultimate QR code sizing guide - what size should
a QR code be?, Blinq. Available at: https://blinq.me/blog/what-size-should-
a-qr-code-be (Accessed: January 19, 2023).
[8] Dermawi, R., Tolle, H. and Aknuranda, I. (2018) “Design and usability
evaluation of Communication Board for deaf people with user-centered
design approach,” International Journal of Interactive Mobile Technologies
(iJIM), 12(2), p. 197. Available at: https://doi.org/10.3991/ijim.v12i2.8100.
[9] Free text to speech tool (n.d.) TextMagic. Available at:
https://freetools.textmagic.com/text-to-speech (Accessed: February 2023).
[10] Gajos, K.Z., Weld, D.S. and Wobbrock, J.O. (2010) “Automatically
generating personalized user interfaces with Supple,” Artificial
Intelligence, 174(12–13), pp. 910–950. Available at:
https://doi.org/10.1016/j.artint.2010.05.005.
[11] Gauber, B. (2010) Navigation, Braille Illiteracy is a Growing Problem
| Alliance for Equality of Blind Canadians. Available at:
http://www.blindcanadians.ca/publications/cbm/31/braille-illiteracy-
growing-problem (Accessed: January 18, 2023).
[12] Hall, R.H. and Hanna, P. (2007) “The impact of web page text-
background colour combinations on readability, retention, aesthetics and
behavioural intention,” Behaviour & Information Technology, 23(3),
pp. 183–195. Available at:
https://doi.org/10.1080/01449290410001669932.
[13] Hand talk app (2022) Hand Talk - Learn ASL today. Available at:
https://www.handtalk.me/en/app/ (Accessed: February 2023).
[14] Hassenzahl, M. and Tractinsky, N. (2006) “User experience – a
research agenda,” Behaviour & Information Technology, 25(2), pp. 91–97.
Available at: https://doi.org/10.1080/01449290500330331.
[15] Islam, M.R., Rashid, M.M. and Rahman, M.M. (2020) “Mobile
application accessibility for the visually impaired: A survey of
design issues,” International Journal of Human-Computer Interaction,
36(13), pp. 1189–1205.
[16] Istance, H., Vickers, S. and Hyrskykari, A. (2009) “Gaze-based
interaction with massively multiplayer on-line games,” CHI '09 Extended
Abstracts on Human Factors in Computing Systems [Preprint]. Available
at: https://doi.org/10.1145/1520340.1520670.
[17] Jabnoun, H., Benzarti, F. and Amiri, H. (2014) “Object Recognition
for Blind People Based on Features Extraction,” International Image
Processing, Applications and Systems Conference [Preprint]. Available at:
https://doi.org/10.1109/ipas.2014.7043293.
[18] Kahneman, D. (2011) Thinking, fast and slow. Farrar, Straus and
Giroux.
[19] Khan, A. and Khusro, S. (2019) “Blind-Friendly User Interfaces – a
pilot study on improving the accessibility of touchscreen interfaces,”
Multimedia Tools and Applications, 78(13), pp. 17495–17519. Available
at: https://doi.org/10.1007/s11042-018-7094-y.
[20] Kim, J.H. et al. (2018) “Compressed QR code-based Mobile Voice
Guidance Service for the visually disabled,” 2018 20th International
Conference on Advanced Communication Technology (ICACT) [Preprint].
Available at: https://doi.org/10.23919/icact.2018.8323779.
[21] Krug, S. (2006) Don't make me think!: A common sense approach to
web usability. Berkeley, California: New Riders Pub.
[22] Lancaster, G.A., Dodd, S. and Williamson, P.R. (2004) “Design and
analysis of pilot studies: Recommendations for good practice,” Journal of
Evaluation in Clinical Practice, 10(2), pp. 307–312. Available at:
https://doi.org/10.1111/j..2002.384.doc.x.
[23] Legge, G.E. and Bigelow, C.A. (2011) “Does print size matter for
reading? A review of findings from Vision Science and Typography,”
Journal of Vision, 11(5). Available at: https://doi.org/10.1167/11.5.8.
[24] Legge, G.E., Ross, J.A., Luebker, A. and LaMay, J.M. (1989)
“Psychophysics of reading. VIII. The Minnesota Low-Vision Reading
Test,” Optometry and Vision Science, 66(12), pp. 843–853. Available
at: https://doi.org/10.1097/00006324-198912000-00008.
[25] Lidwell, W., Butler, J. and Holden, K. (2010) Universal principles of
design: 125 ways to enhance usability, influence perception, increase
appeal, make better design decisions, and teach through design. Rockport.
[26] Lindgaard, G. et al. (2006) “Attention web designers: You have 50
milliseconds to make a good first impression!,” Behaviour &
Information Technology, 25(2), pp. 115–126. Available at:
https://doi.org/10.1080/01449290500330448.
[27] López-de-Ipiña, D., Lorido, T. and López, U. (2011) “Indoor
navigation and product recognition for Blind People Assisted Shopping,”
Ambient Assisted Living, pp. 33–40. Available at:
https://doi.org/10.1007/978-3-642-21303-8_5.
[28] Manhas, J. (2013) “A Study of Factors Affecting Websites Page
Loading Speed for Efficient Web Performance,” International Journal of
Computer Sciences and Engineering, 1(3).
[29] McKee, M.M. et al. (2015) “Assessing health literacy in Deaf
American Sign Language Users,” Journal of Health Communication,
20(sup2). Available at: https://doi.org/10.1080/10810730.2015.1066468.
[30] Morera-Vidal, F. (2022) “Infographics, a better medium than plain
text for increasing knowledge.,” grafica, 10(19). Available at:
https://doi.org/10.5565/rev/grafica.204.
[31] Nielsen, J. (2009) Usability engineering. Amsterdam: Morgan
Kaufmann.
[32] Nielsen, J. (2000) Why you only need to test with 5 users.
Nielsen Norman Group. Available at:
https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
(Accessed: March 2023).
[33] Nielsen, J. (2012) Usability 101: Introduction to usability, Nielsen
Norman Group. Available at: https://www.nngroup.com/articles/usability-
101-introduction-to-usability/ (Accessed: January 27, 2023).
[34] Nielsen, J. (2020) 10 usability heuristics for user interface
design. Nielsen Norman Group. Available at:
https://www.nngroup.com/articles/ten-usability-heuristics/ (Accessed:
April 2023).
[35] Norman, D.A. and Draper, S.W. (1987) “User Centred System Design
-New Perspectives on Human/Computer Interaction,” Journal of
Educational Computing Research 3(1) [Preprint]. Available at:
https://doi.org/10.1201/b15703.
[36] Petrova, K. et al. (2016) “QR codes advantages and dangers,”
Proceedings of the 13th International Joint Conference on e-Business and
Telecommunications [Preprint]. Available at:
https://doi.org/10.5220/0005993101120115.
[37] QR Video Solutions (n.d.). Available at:
https://www.qrvideosolutions.com/ (Accessed: January 2023).
[38] Reading and braille research (2022) RNIB. Available at:
https://www.rnib.org.uk/professionals/health-social-care-education-
professionals/knowledge-and-research-hub/research-archive/reading-and-
braille-research/ (Accessed: January 18, 2023).
[39] Rello, L. and Baeza-Yates, R. (2016) “The effect of font type on
screen readability by people with dyslexia,” ACM Transactions on
Accessible Computing, 8(4). Available at: https://doi.org/10.1145/2897736.
[40] Rezaei, Y.A., Heisenberg, G. and Heiden, W. (2014) “User interface
design for disabled people under the influence of time, efficiency and
costs,” HCI International 2014 - Posters’ Extended Abstracts, pp. 197–202.
Available at: https://doi.org/10.1007/978-3-319-07854-0_35.
[41] Sharp, H., Rogers, Y. and Preece, J. (2023) Interaction design:
Beyond human-computer interaction. Hoboken: John Wiley & Sons,
Inc.
[42] Silverman, A.M. and Bell, E.C. (2018) “The association between
braille reading history and well-being for blind adults,” Journal of
Blindness Innovation and Research, 8(1). Available at:
https://doi.org/10.5241/8-141.
[43] Stanfa, K. and Johnson, N. (2015) “Improving braille reading fluency:
The bridge to comprehension,” Journal of Blindness Innovation and
Research, 5(2). Available at: https://doi.org/10.5241/5-83.
[44] Vasco Lopes, N. (2020) “Internet of things feasibility for disabled
people,” Transactions on Emerging Telecommunications Technologies,
31(12). Available at: https://doi.org/10.1002/ett.3906.
[45] Veispak, A. and Ghesquière, P. (2010) “Could specific braille reading
difficulties result from developmental dyslexia?,” Journal of Visual
Impairment & Blindness, 104(4), pp. 228–238. Available at:
https://doi.org/10.1177/0145482x1010400406.
[46] “World Wide Web Consortium” (2018) The SAGE Encyclopedia of
the Internet [Preprint]. Available at:
https://doi.org/10.4135/9781473960367.n290.