
Design and Development of Smart Surgical Assistant Technologies: A Case Study for Translational Sciences

Are Amazon Alexa and Google Home limited to our bedrooms, or can they be used in hospitals? Do you envision a future where physicians work hand-in-hand with voice AI to revolutionize healthcare delivery? In the near future, clinical smart assistants will be able to automate many manual hospital tasks, and this will be only the beginning of the changes to come.

Voice AI is the future of physician–machine interaction, and this focus book provides invaluable insight into its next frontier. It begins with a brief history and current implementations of voice-activated assistants and illustrates why clinical voice AI is at its inflection point. Next, it describes how the authors built the world's first smart surgical assistant using an off-the-shelf smart home device, outlining the implementation process in the operating room. From quantitative metrics to surgeons' feedback, the authors discuss the feasibility of this technology in the surgical setting. The book then provides an in-depth development guideline for engineers and clinicians desiring to develop their own smart surgical assistants. Lastly, the authors delve into their experiences in translating voice AI into the clinical setting and reflect on the challenges and merits of this pursuit.

The world's first smart surgical assistant has not only reduced surgical time but eliminated major touch points in the operating room, resulting in positive, significant implications for patient outcomes and surgery costs. From clinicians eager for insight into the next digital health revolution to developers interested in building the next clinical voice AI, this book offers a comprehensive guide for both audiences.
Design and Development of Smart Surgical Assistant Technologies
A Case Study for Translational Sciences

Jeff J. H. Kim, Richard Um, Rajiv Iyer, Nicholas Theodore, and Amir Manbachi
First edition published 2023
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
© 2023 Jeff J.H. Kim, Richard Um, Rajiv Iyer, Nicholas Theodore, Amir Manbachi
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify it in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.
ISBN: 9781032168722 (hbk)
ISBN: 9781032181967 (pbk)
ISBN: 9781003253341 (ebk)
DOI: 10.1201/9781003253341
Typeset in Times
by Deanta Global Publishing Services, Chennai, India
We dedicate this book to our
families and our loved ones.
CONTENTS

Acknowledgments xi
Author Biographies xiii

1 Introduction to Voice-Activated Assistants 1
1.1 What Is Voice-Activated Technology? 1
1.2 System Architecture of Voice-Activated Technology 2
1.3 History 4
1.3.1 Humble Beginnings: The 1920s 4
1.3.1.1 Radio Rex 4
1.3.2 Significant Developments in Speech Recognition: 1950 to 1990 4
1.3.2.1 Digit Recognizers 4
1.3.2.2 DARPA 5
1.3.3 Modern Developments: 1990 to 2020 6
1.3.3.1 Development of Internet, Microprocessors, and Internet of Things Devices 6
1.3.3.2 Modern Voice-Activated Smart Assistant Technology 7
Further Reading 8
References 8

2 Commercial Voice-Activated Assistants 11
2.1 Reason for Adoption 11
2.1.1 Hands-Free Interaction 11
2.1.2 Voice Is the Most Natural Mode of Communication 12
2.1.3 Voice Is the Most Efficient Mode of Communication 12
2.2 Current Applications of Voice-Activated Assistants 13
2.2.1 Mobile Assistants 13
2.2.2 Smart Home Assistants 14
2.2.3 Commercial Assistants 15
2.3 The Future of Voice-Activated Assistants 15
2.3.1 Conversational AI 16
2.4 Challenges 16
2.4.1 Privacy 16
2.4.2 Public Perception 17
2.4.3 Exclusion of Certain Users 17
2.4.4 Introduction to Healthcare Applications 18
References 18

3 Development of Smart Surgical Assistants 21
3.1 Introduction 21
3.1.1 Similar Technology at Work 22
3.2 Methods 23
3.2.1 Identifying Problems to Tackle 23
3.2.1.1 Reduced Surgical Site Infection 23
3.2.1.2 Shorter Operative Time 24
3.2.1.3 More Efficient Allocation of Human Resources 25
3.2.2 Observations to Guide Our Proposed Solution 26
3.2.3 Design Requirements 27
3.2.3.1 System Design 27
3.2.4 Development of the System 30
3.2.5 Testing and Implementation 32
3.3 Results 33
3.4 Discussion 34
3.4.1 Merits 36
3.4.2 Future Outlook 38
References 40

4 Merits and Challenges of Translational Sciences 43
4.1 Introduction 43
4.1.1 Translational Science from the Engineering Perspective 46
4.2 Merits of Translational Science 48
4.2.1 Healthcare Innovations from a Solution-Oriented Approach 48
4.2.2 Drug Device and Discovery 50
4.2.3 Promotion of Multidisciplinary Collaboration 51
4.3 Challenges of Translational Science 52
4.3.1 Complex Processes and Extended Length of Time 52
4.3.2 Immature Technology and Techniques Inhibiting Translational Work 53
4.3.3 Challenges in Teamwork and Management 55
4.3.4 The Importance of Advanced Training 56
4.3.5 The Reality of Translational Research in the Private Sector and Academia 57
4.3.6 Challenges Associated with Engineering-Related Translational Research 59
References 60

5 Overcoming the Challenges of Translational Research 63
5.1 Expansion of Translational Science Education and Research 63
5.2 Increased Funding in Translational Research Apparatus and Methods 64
5.3 Continued Investment into Physician Scientists 65
References 66

Index 69
ACKNOWLEDGMENTS

The authors would like to acknowledge our invaluable team members, Jonathan Liu and Japesh Patel, for their part in engineering the smart surgical assistant, and the faculty from the Johns Hopkins School of Medicine Department of Neurosurgery, Department of Radiology, and Department of Biomedical Engineering. We would also like to thank Gina Lynn Adrales, M.D., M.P.H., Ivan George, and Nick Louloudis from MISTIC at Johns Hopkins University. We acknowledge Clare Sonntag for her edits. Last, but not least, we thank Carolina Antunes and Betsy Byers at CRC Press for this amazing opportunity and for bringing this book to reality.

Nicholas Theodore and Amir Manbachi acknowledge funding support from the Defense Advanced Research Projects Agency (DARPA), Award Contract #: N660012024075. In addition, Amir Manbachi acknowledges funding support from the Johns Hopkins Institute for Clinical and Translational Research (ICTR)'s Clinical Research Scholars Program (KL2), administered by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health (NIH).
AUTHOR BIOGRAPHIES

Jeff J.H. Kim is an MD/PhD student at the University of Illinois College of Medicine, Chicago, Illinois, leading developments in medical AI, health technology, and organ-on-chip microfluidics. During his master's program at Johns Hopkins University, Baltimore, Maryland, he spearheaded the development of a voice-activated surgical assistant and successfully implemented his pioneering work in the operating room, improving both patient safety and surgical efficiency. His work is published in the 2021 SPIE Medical Imaging Conference Proceedings. He also led the development of several innovative works in neurosurgery and cardiology, developing a voice-activated smart surgical bed and a cardiac patch that detects lethal arrhythmia. He obtained a dual bachelor's degree in Electrical Engineering and Neuroscience at Johns Hopkins University.

Richard Um is a graduate researcher at the Johns Hopkins University School of Medicine. His research focuses on translational neurosurgery with an emphasis on in-vitro benchtop models to test devices used in surgical procedures. With his extensive design and development skills, he created the first prototype of a working voice-controlled operating bed. He has further developed a smart hospital assistant that provides surgeons with complete control over equipment in the operating room, prioritizing patient safety and workflow efficiency. Richard Um received a bachelor's degree in Biomedical Engineering with a minor in Robotics from Johns Hopkins University, as well as a master's in Biomedical Sciences from Tufts University, Medford, Massachusetts.
Rajiv Iyer is a pediatric neurosurgeon who recently completed a neurosurgical residency at The Johns Hopkins Hospital and a pediatric neurosurgery fellowship at Primary Children's Hospital/University of Utah, Salt Lake City, Utah, and is currently completing an Advanced Pediatric Spinal Deformity Fellowship at Columbia University/Morgan Stanley Children's Hospital, New York City, New York, under the mentorship of Dr. Lawrence Lenke and Dr. Michael Vitale. Dr. Iyer will soon be joining the pediatric neurosurgery group at the University of Utah as Assistant Professor of Neurosurgery. There, his clinical focus will be on the treatment of complex pediatric spinal disorders, including spinal deformity, craniocervical junction disorders, and spinal column/spinal cord tumors. Thus far in his career, Dr. Iyer has been passionate about learning from experts in the field and utilizing the best possible techniques to care for his patients. He is enthusiastic to begin his academic neurosurgical career, where he hopes to deliver outstanding pediatric neurosurgical care while adopting new technology in and out of the operating room in an effort to improve patient outcomes and advance the field.
Nicholas Theodore is an American neurosurgeon and researcher at Johns Hopkins University School of Medicine. He is known for his work in spinal trauma, minimally invasive surgery, robotics, and personalized medicine. He is Director of the Neurosurgical Spine Program at Johns Hopkins, Co-Director of the Carnegie Center for Surgical Innovation, and Co-Director of the HEPIUS Neurosurgical Innovation Lab at Johns Hopkins.

In 2016 he became the second Donlin M. Long Professor of Neurosurgery at Johns Hopkins Hospital. Dr. Theodore also holds professorships in Orthopedics and Biomedical Engineering at Johns Hopkins. He is actively involved in the area of preventative medicine within neurosurgery. He has been associated with the ThinkFirst Foundation for several years, having served as the foundation's Medical Director and President. In 2017, Dr. Theodore was appointed to the National Football League's Head, Neck and Spine Committee, of which he became Chairman in 2018. In 2020, Michael J. Fox revealed in his memoir that Dr. Theodore performed a risky but successful surgery on him to remove an ependymoma in Fox's spinal cord.

In 2020, Dr. Theodore received a grant in the amount of $13.48 million from the Defense Advanced Research Projects Agency's (DARPA) Bridging the Gap Plus (BG+) program to fund research in new approaches to the treatment of spinal cord injury. With this grant, Dr. Theodore is co-leading the effort to treat patients with spinal cord injury by integrating injury stabilization, regenerative therapy, and functional restoration using targeted electrical and ultrasound modalities. As the principal investigators for the program, Dr. Theodore and Dr. Manbachi will coordinate teams at Johns Hopkins and its Applied Physics Laboratory, Columbia University, and Sonic Concepts.
Dr. Amir Manbachi is an Assistant Professor of Neurosurgery, Biomedical Engineering, Mechanical Engineering, and Electrical and Computer Engineering at Johns Hopkins University. He is the Co-Founder and Co-Director of the HEPIUS Neurosurgical Innovation Lab. His research interests include applications of sound and ultrasound to various neurosurgical procedures. These applications include imaging the spine and brain, detection of foreign body objects, remote ablation of brain tumors, monitoring of blood flow and tissue perfusion, and other upcoming applications such as neuromodulation and drug delivery. His pedagogical activities have included teaching engineering design, innovation, translation, and entrepreneurship, as well as close collaboration with clinical experts in Surgery and Radiology at Johns Hopkins.

Dr. Manbachi is an author of over 25 peer-reviewed journal articles, over 30 conference proceedings, over 10 invention disclosures/patent applications, and a book entitled Towards Ultrasound-Guided Spinal Fusion Surgery. He has mentored more than 150 students and has so far raised $15 million in funding, and his interdisciplinary research has been recognized by a number of awards, including the University of Toronto's 2015 Inventor of Year award, the Ontario Brain Institute 2013 fellowship, the Maryland Innovation Initiative, and the Johns Hopkins Institute for Clinical and Translational Research's Career Development Award.

Dr. Manbachi has extensive teaching experience, particularly in the fields of engineering design, medical imaging, and entrepreneurship (at both Hopkins and Toronto), for which he has received numerous awards: the University of Toronto's Teaching Excellence award (2014), the Johns Hopkins University career center's award nomination for students' "Career Champion" (2018), and the Johns Hopkins University Whiting School of Engineering's Robert B. Pond Sr. Excellence in Teaching Award (2018).
1
INTRODUCTION TO VOICE-ACTIVATED ASSISTANTS
When people think of artificial intelligence (AI), many envision virtual butlers capable of handling anything we ask of them. We can attribute this impression to the mass media; consider Jarvis from Marvel's Iron Man series. From manufacturing Iron Man's suits to helping him fight his enemies, Jarvis does it all under Tony Stark's orders. Although voice-activated smart assistants fail to encompass the vastness that is AI, they have undoubtedly played a vital role in familiarizing people with it.

Of all the AI applications available in the world today, voice-assistant technology has been one of the most pervasive and widespread. Today, over half of Americans interact with virtual assistants embedded in their smartphones, and many have one or more consumer smart home devices.1 However, this was not the norm even a decade ago. Hardware and software advancements have allowed for rapid growth and expansion of voice-activated assistant technology. Once only depicted in science-fiction novels, voice-activated assistants have evolved into a common household technology. What contributed to this rise? Where will the technology go from here?
1.1 WHAT IS VOICE-ACTIVATED TECHNOLOGY?

Before we move forward, we must set the operational definition of voice-activated technology. Here, voice-activated technology is any technology capable of executing pre-programmed tasks based on vocal input by a user. This technology goes a step beyond speech-recognition technology, as it can not only understand users' requests but can also deliver convenience by executing user-specified commands.

To be considered a voice-activated smart assistant, the technology must satisfy three main criteria. First, it must be able to capture and decode human speech. This is the human equivalent of comprehension. Second, the technology needs to carry out a plurality of tasks that offer convenience to the user. This is what makes them "smart assistants". Third, it requires a human–machine interface, which often takes the form of a smart device that facilitates the interaction between the AI and the user.
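The three criteria above can be made concrete with a minimal sketch in code. Everything here is illustrative: speech capture and decoding are stubbed out with text input, and the intent names and handler functions are invented for this example rather than drawn from any commercial assistant.

```python
# A minimal sketch of the three criteria of a voice-activated smart assistant:
# (1) capture and decode speech, (2) execute a plurality of tasks, and
# (3) expose a human-machine interface. Speech decoding is stubbed out with
# text input; the intents and handlers below are hypothetical.

def decode_speech(utterance: str) -> str:
    """Criterion 1: map raw input to a normalized intent phrase."""
    return utterance.strip().lower()

def turn_on_lights() -> str:
    return "lights on"

def report_time() -> str:
    from datetime import datetime
    return datetime.now().strftime("%H:%M")

# Criterion 2: a plurality of pre-programmed tasks, keyed by intent.
INTENT_HANDLERS = {
    "turn on the lights": turn_on_lights,
    "what time is it": report_time,
}

def assistant(utterance: str) -> str:
    """Criterion 3: the interface tying speech decoding to task execution."""
    intent = decode_speech(utterance)
    handler = INTENT_HANDLERS.get(intent)
    return handler() if handler else "Sorry, I don't understand."

print(assistant("Turn on the lights"))  # -> lights on
```

Real assistants replace the exact-match lookup with statistical intent classification, but the three-part shape (decode, dispatch, respond through an interface) is the same.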
1.2 SYSTEM ARCHITECTURE OF VOICE-ACTIVATED TECHNOLOGY

The proliferation of smart voice-activated assistant technology would not have been possible without the foundational bedrock that supports its existence. The most important support frameworks are the Internet of Things (IoT) and Natural Language Processing (NLP). IoT provides the physical assembly and the network access for smart voice-activated assistants. It also allows voice-activated technology to connect to other IoT devices, allowing functional flexibility. In 2018, the number of connected IoT devices reached 22 billion around the world, and that number is still growing.2 This feat is made possible by maturing technology in the areas of wireless connectivity, batteries, integrated circuits, and cloud computing. We will briefly dive into each of these components (Figure 1.1).
First and foremost, wireless technology, specifically Wi-Fi, low-energy Bluetooth, and Low Power Wide Area Network (LPWAN), established communication protocols that connect machine to machine (M2M), allowing for innovative and hassle-free device interconnectivity.3 Second, the development and expansion of rechargeable lithium-ion batteries revolutionized electronics development, as they allowed for compact and portable electronics like wearables and IoT devices. Next, the invention of the integrated circuit (IC) gave rise to the compact housing of transistors, leading to advanced microprocessors. The role of the IC is reviewed more extensively in Section 1.3.3.1. Lastly, cloud computing allowed IoT devices to take advantage of remote computer system resources like computing power and data, reducing on-board hardware requirements. Each of these elements contributed to a flexible, mobile, and compact electronic arrangement, which is deemed critical to the success of IoT devices.
The other crucial constituent of a smart voice-activated assistant is NLP. Today's NLP effort is made possible by advancements in speech recognition and machine learning beginning in the 20th century. For modern voice-activated assistants, however, it is the transition from statistical NLP to neural NLP, a neural network-dependent form of machine learning, that made the greatest contribution.4 The efforts in speech recognition are explored in greater detail below.
Voice-activated assistant technology would not have been possible if any one of these supporting frameworks were absent. And because these supporting frameworks have reached maturity, conditions for developing diverse smart voice-activated assistant applications are now the most favorable they have ever been. It is important to keep the system architecture in mind as we move forward and understand how these building blocks interact with one another; it will be useful when we dive into the engineering process of the smart surgical assistant.

Figure 1.1 A flowchart that illustrates the components and their respective key development milestones that allowed for the birth of voice-activated assistant technology. Simply put, voice-activated assistant technology is a convergence of Internet of Things (IoT) technology, which gives it a flexible framework, and Natural Language Processing (NLP), which allows it to comprehend and speak human language. IoT is comprised of four different components: wireless technology, battery, cloud computing, and integrated circuit. NLP is a combination of speech-recognition technology and machine learning.
1.3 HISTORY

Because major advancements in commercial voice-activated smart assistant technology occurred just in the past decade, it is easy to mislabel the technology as a recent development. However, this could not be farther from the truth. The effort to create voice-activated technology dates back as early as the 1920s. It is important to understand how voice-activated assistants have evolved in order to anticipate the future trajectory of their development. Looking at what has been done can give us an idea of where the technology can go from here.
1.3.1 Humble Beginnings: The 1920s

1.3.1.1 Radio Rex

The very first voice-activated technology was neither smart nor particularly useful, but it did spark joy among the masses in the early 1920s. Twenty years before the advent of the first computer, Elmwood Button Co. produced a toy called Radio Rex: a toy dog that crawled out of its home when its name, "Rex", was called. The mechanism of this toy, though simple, was quite clever. The acoustic energy in the word "Rex", specifically the vowel [eh], triggered a harmonic oscillator that released Rex from a current-energized magnet. The frequency detector in Radio Rex, however, had its shortcomings: it would respond to any word with energy around 500 Hz, and it had trouble detecting the vocal frequencies of children and women. Nevertheless, this quirky toy marked the beginning of using vocal frequency as part of speech recognition, guiding future developments5 (Figure 1.2).
1.3.2 Significant Developments in Speech Recognition: 1950 to 1990

1.3.2.1 Digit Recognizers

Fast forward 30 years to 1952. Bell Labs introduced Audrey, a digit recognizer that stood 6 feet tall and contained analog filters and circuitry. Despite its enormous size, Audrey could recognize just ten numbers, from 0 to 9. The operator would speak into a built-in telephone, and Audrey would match the speech sounds to pre-referenced electrical buses stored within an analog memory. A flashing light provided a visible output. The system had its shortcomings; for example, the analog reference memory had to be tailored to the operator, restricting the number of users. However, once paired with an operator, Audrey boasted a 97% accuracy rate. Its ability to recognize ten digits was enough to showcase the untapped potential of speech-recognition technology. Scientists, engineers, and the general public alike were fascinated by a non-living entity processing the complexity that is human speech. Audrey's greatest legacy is the scores of developments that followed to expand speech-recognition technology, laying the foundation for voice-activated smart assistants.6,7
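Audrey's speaker-dependent scheme, matching incoming speech sounds against pre-referenced patterns stored for one enrolled operator, is in modern terms nearest-neighbor template matching. The sketch below illustrates that idea only; the two-dimensional feature values are invented for this example and bear no relation to Audrey's actual analog measurements.

```python
# Nearest-neighbor matching against per-operator reference templates,
# in the spirit of Audrey's pre-referenced analog memory. The
# two-dimensional "feature" values below are invented for illustration.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical stored templates for one enrolled operator: digit -> features.
TEMPLATES = {
    "one":  (310.0, 2020.0),
    "two":  (450.0, 1030.0),
    "nine": (390.0, 1990.0),
}

def recognize(features):
    """Return the digit whose stored template lies closest to the input."""
    return min(TEMPLATES, key=lambda digit: euclidean(TEMPLATES[digit], features))

print(recognize((455.0, 1040.0)))  # -> two
```

The scheme's weakness is visible in the code: the templates belong to one speaker, so a different voice produces features far from every stored reference, just as Audrey had to be re-tailored to each operator.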
Figure 1.2 Photo of Radio Rex, the earliest known voice-activated electronic device. The dog toy crawls out of its house when its name, "Rex", is called out by the user. The specific frequency in the word "Rex" triggers a harmonic oscillator, which releases the dog figure from the current-energized magnet. Image used with permission of The Warden and Scholars of Winchester College.

1.3.2.2 DARPA

One key development that followed Audrey is the Speech Understanding Research (SUR) Program. Launched in the 1970s, the SUR program was funded by the Defense Advanced Research Projects Agency (DARPA): $15 million was dedicated to building a system that could understand 1,000 words, roughly the vocabulary of a three-year-old. The initial design goal was satisfied by the Harpy system, developed at Carnegie Mellon. Harpy understood over 90% of spoken sentences from a predetermined 1,000-word database. This was a significant jump from Audrey, and Harpy showed improvements in the spectrum and number of words recognized. From this DARPA program, scientists and engineers introduced a guideline to develop the next generation of speech-recognition systems. They envisioned scaling existing techniques, such as linear predictive coding, dynamic time warping, and hidden Markov models. The projects that followed the SUR program also expanded on applying neural networks for automatic speech recognition.3 We will not dive into these techniques, as such an analysis would deviate from the focus of this book. However, readers can refer to the further readings listed below if they are interested in learning more.
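Of the techniques named in that roadmap, dynamic time warping (DTW) is the easiest to show concretely: it aligns two utterances spoken at different rates by finding a minimum-cost warping path between them. In this illustrative sketch, short scalar sequences stand in for acoustic feature frames.

```python
def dtw_distance(a, b):
    """Minimum cumulative alignment cost between sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j]: best cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

slow = [1, 1, 2, 2, 3, 3]   # the same "word" spoken slowly...
fast = [1, 2, 3]            # ...and quickly: DTW aligns them at zero cost
other = [3, 2, 1]
assert dtw_distance(slow, fast) == 0
assert dtw_distance(slow, other) > 0
```

Two renditions of the same "word" at different speaking rates align at zero cost, while a genuinely different sequence does not; this tolerance to timing variation is what made DTW valuable for early word recognition.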
1.3.3 Modern Developments: 1990 to 2020

1.3.3.1 Development of Internet, Microprocessors, and Internet of Things Devices

The significant expansion of the internet and advanced microprocessor technology has enabled the rise of next-generation voice-activated assistants. Advancement in the microprocessor space enabled an exponential increase in processing power due to the rapid rise of transistor count.8 As mentioned previously, the root of this phenomenon was the rise of system-on-chip circuits, also known as ICs. Instead of utilizing motherboard-based PC architecture, which separates components based on functionality, integrated circuits allowed the consolidation of main and peripheral processing cores in a compact form. This had two main benefits that ultimately laid the groundwork for IoT technology.9 The first is the significant reduction in size and vast improvement in computing and battery performance. Second, the rise of the modern internet and short-range wireless technology demonstrated the value of IoT technology, as access to the internet and device interconnectivity expanded the scope of voice-activated smart assistant capabilities. Short-range wireless technology such as Bluetooth advanced inter-device communication, while faster data transfer protocols expanded the scope of control over sensors and actuators.10 This, in simple terms, gave arms and legs to the brain of the system, thereby facilitating smart functions such as controlling a
Introduction to Voice-Activated Assistants

Gold, Ben, Nelson Morgan, and Dan Ellis. Speech and audio signal processing: Processing and perception of speech and music. John Wiley & Sons, 2011.
O'Shaughnessy, Douglas. "Linear predictive coding." IEEE Potentials 7.1 (1988): 29–32.
Müller, Meinard. "Dynamic time warping." Information Retrieval for Music and Motion (2007): 69–84.
Varga, A. P., and Roger K. Moore. "Hidden Markov model decomposition of speech and noise." International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1990.
Kinsella, Bret. "Smart home ownership nearing 50% of U.S. adults with voice assistant control becoming more popular – New research." Voicebot.ai, 18 December 2020, https://voicebot.ai/2020/12/18/smart-home-ownership-nearing-50-of-u-s-adults-with-voice-assistant-control-becoming-more-popular-new-reserach/.
Vailshery, Lionel Sujay. "Internet of Things (IoT) – Statistics & facts." Statista.
Chen, Min, Jiafu Wan, and Fang Li. "Machine-to-machine communications: Architectures, standards and applications." KSII Transactions on Internet and Information Systems (TIIS) 6.2 (2012): 480–497.
Goldberg, Yoav. "A primer on neural network models for natural language processing." Journal of Artificial Intelligence Research 57 (2016): 345–420.
David, E. E., and O. G. Selfridge. "Eyes and ears for computers." Proceedings of the IRE 50.5 (1962): 1093–1101.
Moskvitch, Katia. "The machines that learned to listen." BBC Future 15 (2017).
Schaller, Robert R. "Moore's law: Past, present and future." IEEE Spectrum 34.6 (1997): 52–59.
Alioto, Massimo, ed. Enabling the Internet of Things: From integrated circuits to integrated systems. Springer, 2017.
Kocakulak, Mustafa, and Ismail Butun. "An overview of wireless sensor networks towards internet of things." 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2017.
Memeti, Suejb, and Sabri Pllana. "PAPA: A parallel programming assistant powered by IBM Watson cognitive computing technology." Journal of Computational Science 26 (2018): 275–284.
Commercial Voice-Activated Assistants

Olmstead, Kenneth. "Nearly half of Americans use digital voice assistants, mostly on their smartphones." Pew Research Center 12 (2017).
Ruan, Sherry, et al. "Comparing speech and keyboard text entry for short messages in two languages on touchscreen phones." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1.4 (2018): 1–23.
Gross, Doug. "Apple introduces Siri, Web freaks out." CNN.com (2011).
Rye, Dave. "My life at x10." X10 (USA) Inc., USA (1999).
Dojchinovski, Dimitri, Andrej Ilievski, and Marjan Gusev. "Interactive home healthcare system with integrated voice assistant." 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, 2019.
Terzopoulos, George, and Maya Satratzemi. "Voice assistants and artificial intelligence in education." Proceedings of the 9th Balkan Conference on Informatics. Association for Computing Machinery, 2019.
Hajare, Raju, et al. "Design and development of voice activated intelligent system for elderly and physically challenged." 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT). IEEE, 2016.
Balasuriya, Saminda Sundeepa, et al. "Use of voice activated interfaces by people with intellectual disability." Proceedings of the 30th Australian Conference on Computer-Human Interaction. Association for Computing Machinery, 2018.
Trzos, Michal, et al. "Voice control in a real flight deck environment." International Conference on Text, Speech, and Dialogue. Springer, 2018.
Stackpole, B. "Are virtual assistants headed to the plant floor?" AutomationWorld.com (2020).
Gillies, C. "Talk! With! Me! Digital assistants are on the rise." Transformationbeats.com (2017).
Longo, Francesco, Letizia Nicoletti, and Antonio Padovano. "Smart operators in industry 4.0: A human-centered approach to enhance operators' capabilities and competencies within the new smart factory context." Computers & Industrial Engineering 113 (2017): 144–159.
Golgowski, Nina. "This Burger King ad is trying to control your Google Home device." Huffpost, April 12 (2017): 7.
Yan, Qiben, et al. "SurfingAttack: Interactive hidden attack on voice assistants using ultrasonic guided waves." Network and Distributed Systems Security (NDSS) Symposium, 2020.
Sugawara, Takeshi, et al. "Light commands: Laser-based audio injection attacks on voice-controllable systems." 29th USENIX Security Symposium (USENIX Security 20), 2020.
Pfeifle, Anne. "Alexa, what should we do about privacy: Protecting privacy for users of voice-activated devices." Washington Law Review 93 (2018): 421.
Easwara Moorthy, Aarthi, and Kim-Phuong L. Vu. "Voice activated personal assistant: Acceptability of use in the public space." International Conference on Human Interface and the Management of Information. Springer, 2014.
Development of Smart Surgical Assistants
Wang, Pu, et al. “Development and validation of a deep-learning algorithm for the
detection of polyps during colonoscopy.” Nature Biomedical Engineering 2.10
(2018): 741–748.
Khuri, Shukri F., et al. “The National Veterans Administration surgical risk study: Risk
adjustment for the comparative assessment of the quality of surgical care.” Journal
of the American College of Surgeons 180.5 (1995): 519–531.
“Surgical site infections.” Johns Hopkins Medicine,
https://www.hopkinsmedicine.org/health/conditions-and-diseases/surgical-site-
infections.
Broex, E. C. J., et al. “Surgical site infections: How high are the costs?” Journal of
Hospital Infection 72.3 (2009): 193–201.
Berríos-Torres, Sandra I., et al. “Centers for Disease Control and Prevention
guideline for the prevention of surgical site infection, 2017.” JAMA Surgery 152.8
(2017): 784–791.
Bures, Sergio, et al. “Computer keyboards and faucet handles as reservoirs of
nosocomial pathogens in the intensive care unit.” American Journal of Infection
Control 28.6 (2000): 465–471.
Weber, David J., Deverick Anderson, and William A. Rutala. “The role of the
surface environment in healthcare-associated infections.” Current Opinion in
Infectious Diseases 26.4 (2013): 338–344.
Birnbach, David J., et al. “The use of a novel technology to study dynamics of
pathogen transmission in the operating room.” Anesthesia & Analgesia 120.4
(2015): 844–847.
Cheng, Hang, et al. “Prolonged operative duration is associated with complications:
A systematic review and meta-analysis.” Journal of Surgical Research 229 (2018):
134–144.
Valsangkar, Nakul, et al. “Operative time in esophagectomy: Does it affect
outcomes?” Surgery 164.4 (2018): 866–871.
Zdichavsky, Marty, et al. “Impact of risk factors for prolonged operative time in
laparoscopic cholecystectomy.” European Journal of Gastroenterology &
Hepatology 24.9 (2012): 1033–1038.
Jackson, Timothy D., et al. “Does speed matter? The impact of operative time on
outcome in laparoscopic surgery.” Surgical Endoscopy 25.7 (2011): 2288–2295.
Slack, P. S., et al. “The effect of operating time on surgeons' muscular fatigue.” The
Annals of The Royal College of Surgeons of England 90.8 (2008): 651–657.
Childers, Christopher P., and Melinda Maggard-Gibbons. “Understanding costs of
care in the operating room.” JAMA Surgery 153.4 (2018): e176233.
Juraschek, Stephen P., et al. “United States registered nurse workforce report card
and shortage forecast.” American Journal of Medical Quality 27.3 (2012): 241–249.
Andersson, Annette Erichsen, et al. “Traffic flow in the operating room: An
explorative and descriptive study on air quality during orthopedic trauma implant
surgery.” American Journal of Infection Control 40.8 (2012): 750–755.
Kim, Jeong Hun, et al. “Development of voice-controlled smart surgical bed.”
Frontiers in Biomedical Devices. Vol. 83549. American Society of Mechanical
Engineers, 2020.
Allaf, M. E., et al. “Laparoscopic visual field.” Surgical Endoscopy 12.12 (1998):
1415–1418.
Kim, Jeong Hun, et al. “Development of a smart hospital assistant: Integrating
artificial intelligence and a voice-user interface for improved surgical outcomes.”
Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and
Applications. Vol. 11601. International Society for Optics and Photonics, 2021.
Merits and Challenges of Translational Sciences
Waldman, Scott A., and Andre Terzic. “Clinical and translational science: From
bench-bedside to global village.” Clinical and Translational Science 3.5 (2010): 254.
DiMasi, Joseph A., Henry G. Grabowski, and Ronald W. Hansen. “Innovation in
the pharmaceutical industry: New estimates of R&D costs.” Journal of Health
Economics 47 (2016): 20–33.
Greatbatch, Wilson. The making of the pacemaker: Celebrating a lifesaving
invention. Prometheus Books, 2011.
Frankel, Richard I. “Centennial of Röntgen's discovery of x-rays.” Western Journal of
Medicine 164.6 (1996): 497.
Deisseroth, Karl. “Optogenetics.” Nature Methods 8.1 (2011): 26–29.
Boveda, Serge, Stéphane Garrigue, and Philippe Ritter. “The history of cardiac
pacemakers and defibrillators.” Dawn and evolution of cardiac procedures. Springer,
2013, 253–264.
Bennett, Joan W., and King-Thom Chung. “Alexander Fleming and the discovery
of penicillin.” Advances in Applied Microbiology 49 (2001): 163–184.
Austin, Christopher P. “Opportunities and challenges in translational science.”
Clinical and Translational Science 14.5 (2021): 1629–1647.
Johnson & Johnson. 2020 annual report. March 2021.
Roberts, J., S. Waddy, and P. Kaufmann. “Recruitment and retention monitoring:
Facilitating the mission of the National Institute of Neurological Disorders and Stroke
(NINDS).” Journal of Vascular and Interventional Neurology 5.1.5 (2012): 14.
Eisenberg, Mark J. The physician scientist's career guide. Springer Science &
Business Media, 2010.
Overcoming the Challenges of Translational Research
National Institutes of Health. “National Center for Advancing Translational Sciences.”
http://www.ncats.nih.gov.
“Translational science training and educational resources.” National Center for
Advancing Translational Sciences, U.S. Department of Health and Human Services,
https://ncats.nih.gov/training-education/resources.
Khetani, Salman R., and Sangeeta N. Bhatia. “Microscale culture of human liver
cells for drug development.” Nature Biotechnology 26.1 (2008): 120–126.
Dunn, Andrew K. “Laser speckle contrast imaging of cerebral blood flow.” Annals of
Biomedical Engineering 40.2 (2012): 367–377.
Kosik, R. O., et al. “Physician scientist training in the United States: A survey of the
current literature.” Evaluation & the Health Professions 39.1 (2016): 3–20.