Figure - available from: Nature Metabolism
Quality control algorithm to assess increasing basal insulin dosage
Quality control algorithm to assess increasing basal insulin dosage. User features and glycaemic outcomes are loaded by the algorithm and assessed against physician-informed metrics: nocturnal hypoglycaemia, near-hypoglycaemia episodes, subject time in target range, subject adherence, and insulin formulation-dependent requirements.
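The gate described in the caption can be illustrated with a minimal rule-based sketch. The function name, metric keys, and all thresholds below are illustrative assumptions, not the published algorithm's values:

```python
def qc_approve_dose_increase(metrics):
    """Rule-based quality-control gate for a proposed basal insulin dose
    increase. Returns (approved, reasons). All metric names and thresholds
    are illustrative placeholders, not the published algorithm's values."""
    reasons = []
    if metrics["nocturnal_hypo_events"] > 0:
        reasons.append("nocturnal hypoglycaemia detected")
    if metrics["near_hypo_episodes"] > 2:
        reasons.append("too many near-hypoglycaemia episodes")
    if metrics["time_in_range_pct"] < 50.0:
        reasons.append("time in range too low to trust a dose increase")
    if metrics["adherence_pct"] < 80.0:
        reasons.append("insufficient adherence data")
    return (len(reasons) == 0, reasons)

# One nocturnal hypo event is enough to block the increase in this sketch.
ok, why = qc_approve_dose_increase({
    "nocturnal_hypo_events": 1,
    "near_hypo_episodes": 0,
    "time_in_range_pct": 62.0,
    "adherence_pct": 95.0,
})
```

The point of such a gate is that every veto is attributable to a named clinical criterion, which keeps the recommendation auditable.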


Source publication
Article
Full-text available
Type 1 diabetes (T1D) is characterized by pancreatic beta cell dysfunction and insulin depletion. Over 40% of people with T1D manage their glucose through multiple injections of long-acting basal and short-acting bolus insulin, so-called multiple daily injections (MDI)1,2. Errors in dosing can lead to life-threatening hypoglycaemia events (<70 mg d...

Citations

... With little reason to trust the systems and their core algorithms, users and developers are also reluctant to incorporate experimental algorithms into the prediction loop, such as advanced ML-driven insulin delivery. Researchers have shown that ML can improve automated insulin delivery systems [64], [33], [63], [59], [65], [58], yet the lack of security and safety for these ML predictions keeps people from using them. If ML is going to be part of medical decisions, we need the security community to lead the way on how to do this safely. ...
... The reason for using this more traditional form of ML is sound: it provides determinism and has a physiological basis that people can use to vet its decisions. However, all of the recent advances in ML, like image classifiers and large language models, have come from deep neural networks, which researchers have shown also work well for predicting metabolic states for use in automated insulin delivery systems [64], [33], [63], [59], [65], [58]. Despite this promise, none of the current automated insulin delivery systems use more advanced deep neural networks, due to their black-box nature and the potentially dire consequences of mispredictions: an automated insulin delivery system's unchecked "hallucination" [1] could be lethal if it delivers an inappropriate insulin dose [20], [13]. ...
Preprint
Full-text available
Type 1 Diabetes (T1D) is a metabolic disorder in which an individual's pancreas stops producing insulin. To compensate, they inject synthetic insulin. Computer systems, called automated insulin delivery systems, exist that inject insulin automatically. However, insulin is a dangerous hormone: too much insulin can kill a person in a matter of hours, and too little insulin can kill them in a matter of days. In this paper, we take on the challenge of building a new trustworthy automated insulin delivery system, called GlucOS. In our design, we apply separation principles to keep our implementation simple, we use formal methods to prove the correctness of the most critical parts of the system, and we design novel security mechanisms and policies to withstand malicious components and attacks on the system. We report on real-world use by one individual over 6 months using GlucOS. Our data shows that, for this individual, our ML-based algorithm runs safely and manages their T1D effectively. We also run our system on 21 virtual humans in simulation and show that our security and safety mechanisms enable ML to improve their core T1D measures of metabolic health by 4.3% on average. Finally, we show that our security and safety mechanisms maintain recommended levels of control over T1D even in the face of active attacks that would otherwise have led to death. GlucOS is open source and our code is available on GitHub.
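GlucOS's formal methods and security policies are beyond a snippet, but the core idea of wrapping an ML-proposed dose in a simple, auditable safety envelope can be sketched as follows. The function, parameter names, and limits are illustrative assumptions, not GlucOS internals:

```python
def clamp_dose(ml_proposed_units, max_safe_units, insulin_on_board, iob_limit):
    """Safety envelope around an ML-proposed insulin dose: a small,
    deterministic check that overrides the model when limits are exceeded.
    All names and limits here are illustrative, not GlucOS internals."""
    if insulin_on_board >= iob_limit:
        return 0.0  # never stack insulin past the insulin-on-board limit
    # Reject negative proposals and cap at the maximum safe single dose.
    return min(max(ml_proposed_units, 0.0), max_safe_units)

assert clamp_dose(12.0, 5.0, 0.0, 3.0) == 5.0  # excessive proposal is capped
assert clamp_dose(2.0, 5.0, 3.5, 3.0) == 0.0   # IOB limit exceeded: no dose
```

Because the envelope is a few lines of branch-free arithmetic, it is the kind of component that formal verification can realistically cover, even when the ML model inside it cannot be verified.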
... 12 It can integrate data from wearable sensors to provide insulin, carbohydrate, and exercise recommendations throughout the day. 13 Although several systems have been developed to help people adjust their basal insulin, calculate mealtime insulin boluses, or recommend carbohydrates to maintain safe glucose throughout the day 12,14-17 , few tools exist that can provide exercise-specific treatment recommendations. 18 Clinical guidelines are often used to provide recommendations around exercise in people with T1D. ...
Article
Background Managing glucose levels during exercise is challenging for individuals with type 1 diabetes (T1D) since multiple factors including activity type, duration, intensity and other factors must be considered. Current decision support tools lack personalized recommendations and fail to distinguish between aerobic and resistance exercise. We propose an exercise-aware decision support system (exDSS) that uses digital twins to deliver personalized recommendations to help people with T1D maintain safe glucose levels (70-180 mg/dL) and avoid low glucose (<70 mg/dL) during and after exercise. Methods We evaluated exDSS using various exercise and meal scenarios recorded from a large, free-living study of aerobic and resistance exercise. The model inputs were heart rate, insulin, and meal data. Glucose responses were simulated during and after 30-minute exercise sessions (676 aerobic, 631 resistance) from 247 participants. Glucose outcomes were compared when participants followed exDSS recommendations, clinical guidelines, or did not modify behavior (no intervention). Results exDSS significantly improved mean time in range for aerobic (80.2% to 92.3%, P < .0001) and resistance (72.3% to 87.3%, P < .0001) exercises compared with no intervention, and versus clinical guidelines (aerobic: 82.2%, P < .0001; resistance: 80.3%, P < .0001). exDSS reduced time spent in low glucose for both exercise types compared with no intervention (aerobic: 15.1% to 5.1%, P < .0001; resistance: 18.2% to 6.6%, P < .0001) and was comparable with following clinical guidelines (aerobic: 4.5%, resistance: 8.1%, P = N.S.). Conclusions The exDSS tool significantly improved glucose outcomes during and after exercise versus following clinical guidelines and no intervention providing motivation for clinical evaluation of the exDSS system.
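The primary outcome above, time in range, is a standard CGM summary statistic: the percentage of readings inside the 70-180 mg/dL target band. A minimal computation looks like this (the function name and list-of-readings input are assumptions for illustration):

```python
def time_in_range(glucose_mg_dl, low=70, high=180):
    """Percentage of CGM readings within the target band [low, high] mg/dL."""
    if not glucose_mg_dl:
        return 0.0
    in_range = sum(low <= g <= high for g in glucose_mg_dl)
    return 100.0 * in_range / len(glucose_mg_dl)

readings = [65, 90, 110, 150, 190, 175]
tir = time_in_range(readings)  # 4 of 6 readings fall in [70, 180]
```

Time below range (<70 mg/dL), the study's hypoglycaemia outcome, is computed the same way with the band replaced by a one-sided threshold.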
... Artificial intelligence (AI) has been introduced to healthcare with the promise of assisting or automating tasks to reduce human workload. In publications, medical AI models have been reported to produce promising results in a variety of data-driven scenarios, including clinical decision support, medical image interpretation and risk prediction [1][2][3] . However, real-world implementation of medical AI interventions has so far been limited and the potential benefits not yet realised. ...
Article
Full-text available
The Consolidated Standards of Reporting Trials extension for Artificial Intelligence interventions (CONSORT-AI) was published in September 2020. Since its publication, several randomised controlled trials (RCTs) of AI interventions have been published but their completeness and transparency of reporting is unknown. This systematic review assesses the completeness of reporting of AI RCTs following publication of CONSORT-AI and provides a comprehensive summary of RCTs published in recent years. 65 RCTs were identified, mostly conducted in China (37%) and USA (18%). Median concordance with CONSORT-AI reporting was 90% (IQR 77–94%), although only 10 RCTs explicitly reported its use. Several items were consistently under-reported, including algorithm version, accessibility of the AI intervention or code, and references to a study protocol. Only 3 of 52 included journals explicitly endorsed or mandated CONSORT-AI. Despite a generally high concordance amongst recent AI RCTs, some AI-specific considerations remain systematically poorly reported. Further encouragement of CONSORT-AI adoption by journals and funders may enable more complete adoption of the full CONSORT-AI guidelines.
... Although AI systems have been researched for some time, recent advances in deep learning and neural networks have attracted considerable interest for their potential in healthcare applications. Examples of these applications are highly varied and include AI systems for screening and triage 15,16 , diagnosis [17][18][19][20] , prognosis 21,22 , decision support 23 and treatment recommendation 24 . However, in the most recent cases, the published evidence has consisted of early-stage in silico validation. ...
Article
The CONSORT 2010 statement provides minimum guidelines for reporting randomised clinical trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, it has been increasingly recognised that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component; it was developed in parallel with its companion statement for clinical trial protocols, SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon at a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions to be routinely reported in addition to the core CONSORT 2010 items.
CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including the instructions and skills required for its use, the setting in which the AI intervention is integrated, the handling of the AI intervention's input and output data, the human-AI interaction, and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials of AI interventions, and will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and the risk of bias in the reported outcomes.
... Although AI systems have been researched for some time, recent advances in deep learning and neural networks have attracted considerable interest for their potential in health applications. Examples of these applications are highly varied and include AI systems for screening and triage 7,8 , diagnosis [9][10][11][12] , prognosis 13,14 , decision support 15 and treatment recommendation 16 . However, in most recent cases, the bulk of the published evidence has consisted of early-stage in silico validation. ...
Article
The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, it has been increasingly recognised that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were consulted on by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon at a consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions; these new items should be routinely reported in addition to the core SPIRIT 2013 items.
SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including the instructions and skills required for its use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human-AI interaction, and the analysis of error cases. SPIRIT-AI will help promote transparency and completeness in clinical trial protocols for AI interventions. Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias of a future clinical trial.
... The expanding use of AI technologies in health care and medicine is gradually changing biomedical research and patient care (Ayers et al. 2023;Cho 2021;Danks and London 2017;Elul et al. 2021;Ho 2023;Lee et al. 2023;Ravi et al. 2016;Topol 2019;Yu et al. 2018), and promises to improve the efficiency and performance of the healthcare system (Rubeis 2022). There is great hope among developers and the health system's workforce that AI can improve patients' health by offering tools that can enhance clinical diagnosis and decision-making (Cutler 2023; Shiraishi et al. 2011;Tavanapong et al. 2022;Tyler et al. 2020;Villar et al. 2015); promote surgical precision and predictability (Hashimoto et al. 2018); support mental health (Rubeis 2022); free up clinicians to spend more time with their patients (Lim et al. 2020;Topol 2015); reduce human error (Diprose and Buist 2016;Lim et al. 2020); and lower health care cost (Akkus et al. 2021;Kooli and Al Muftah 2022). Implementing AI (especially machine learning) has shown some success in the early detection of Parkinson's disease (Belić et al. 2019). ...
Article
Full-text available
Artificial intelligence (AI) technologies in medicine are gradually changing biomedical research and patient care. High expectations and promises from novel AI applications aiming to positively impact society raise new ethical considerations for patients and caregivers who use these technologies. Based on a qualitative content analysis of semi-structured interviews and focus groups with healthcare professionals (HCPs), patients, and family members of patients with Parkinson’s Disease (PD), the present study investigates participant views on the comparative benefits and problems of using human versus AI predictive computer vision health monitoring, as well as participants’ ethical concerns regarding these technologies. Participants presumed that AI monitoring would enhance information sharing and treatment, but voiced concerns about data ownership, data protection, commercialization of patient data, and privacy at home. They highlighted that privacy issues at home and data security issues are often linked and should be investigated together. Findings may help technologists, HCPs, and policymakers determine how to incorporate stakeholders’ intersecting but divergent concerns into developing and implementing AI PD monitoring tools.
Article
Objective To determine if there was an association between intrapartum stillbirths and both traveled distance for delivery and delivery care accessibility, assessing periods before and during the COVID-19 pandemic. Methods This is a population-based cohort study. Included patients gave birth after the onset of labor; the primary outcome was intrapartum stillbirth. City of residence was classified according to the ratio between deliveries performed and total births among its residents; values lower than 0.1 indicated low delivery care accessibility. Travel distance was calculated using the Haversine formula. Education level, maternal age, and birth sex were included as covariates. In each period, relative risk was assessed by a generalized linear model with Poisson variance. Results There were 2 267 534 deliveries with birth occurring after the onset of labor. Most patients were between age 20 and 35 years, had between 8 and 11 years of education, and resided in cities with high delivery care accessibility. Low delivery care accessibility increased risk of intrapartum stillbirth in the pre-pandemic (relative risk [RR] 2.02; 95% CI [1.64, 2.47]; p < 0.01) and the pandemic period (RR 1.69; 95% CI [1.09, 2.55]; p = 0.015). This was independent of other risk-increasing factors, such as travel distance and fewer years of education. Conclusions Low delivery care accessibility is associated with the risk of intrapartum stillbirths, and accessibility reduced during the pandemic. Delivery of patients by family physicians and midwives, as well as official communication channels between primary care physicians and specialists, could improve patient healthcare-seeking behavior.
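The Haversine formula used above for travel distance is a standard great-circle computation; a straightforward implementation (function name is illustrative) is:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given as
    (latitude, longitude) in decimal degrees, via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude at the equator is roughly 111 km.
d = haversine_km(0.0, 0.0, 0.0, 1.0)
```

For city-to-city distances this straight-line estimate ignores road networks, which is an accepted simplification in accessibility studies of this kind.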
... For patients with T1D, Tyler et al. utilized k-nearest-neighbor methods to generate recommendations for optimal insulin dosing in the context of a quality control algorithm. 79 Pesl et al. used case-based reasoning in the ABC4D (short for Advanced Bolus Calculator for Diabetes) bolus calculator for meal-time dosing advice. 80 For patients with T2D, Bergenstal et al. demonstrated that the combination of automated insulin titration guidance with support from health-care professionals offers superior glycemic control compared with support from health-care professionals alone in a multi-center randomized controlled trial. ...
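The k-nearest-neighbor idea behind such dosing recommendations can be sketched simply: find the historical records whose features most resemble the current patient state and average the doses associated with them. The feature choice, record format, and k below are illustrative assumptions, not the published method:

```python
def knn_dose(query, history, k=3):
    """Recommend a dose by averaging the doses of the k historical records
    whose feature vectors are closest (Euclidean distance) to the query.
    Features, record format, and k are illustrative, not the published method."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec["features"], query))[:k]
    return sum(rec["dose"] for rec in nearest) / len(nearest)

# Hypothetical records: features are [mean glucose, HbA1c-like value].
history = [
    {"features": [110, 6.5], "dose": 18.0},
    {"features": [145, 7.2], "dose": 22.0},
    {"features": [150, 7.4], "dose": 23.0},
    {"features": [200, 8.9], "dose": 30.0},
]
rec = knn_dose([148, 7.3], history, k=3)  # averages the 3 closest records
```

A practical system would normalize features before computing distances so that no single feature dominates; that step is omitted here for brevity.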
... Multiple groups have shown good accuracy in forecasting future glucose values, but only when tested in silico, and thus provide no indication of their ability to actually reduce hypoglycaemia in vivo. [15][16][17][18] Decision support systems (DSSs) have demonstrated reductions in hypoglycaemia, but again only in silico 19 or with vastly complex inputs required of patients. 20 Those DSSs that have been evaluated in vivo mostly fail to demonstrate a reduction in hypoglycaemia. ...
Article
Full-text available
Background Children with hypoglycaemia disorders, such as congenital hyperinsulinism (CHI), are at constant risk of hypoglycaemia (low blood sugars) with the attendant risk of brain injury. Current approaches to hypoglycaemia detection and prevention vary from fingerprick glucose testing to the provision of continuous glucose monitoring (CGM) to machine learning (ML) driven glucose forecasting. Recent trends for ML have had limited success in preventing free-living hypoglycaemia, due to a focus on increasingly accurate glucose forecasts and a failure to acknowledge the human in the loop and the essential step of changing behaviour. The wealth of evidence from the fields of behaviour change and persuasive technology (PT) allows for the creation of a theory-informed and technologically considered approach. Objectives We aimed to create a PT that would overcome the identified barriers to hypoglycaemia prevention for those with CHI to focus on proactive prevention rather than commonly used reactive approaches. Methods We used the behaviour change technique taxonomy and persuasive systems design models to create HYPO-CHEAT (HYpoglycaemia-Prevention-thrOugh-Cgm-HEatmap-Assisted-Technology): a novel approach that presents aggregated CGM data in simple visualisations. The resultant ease of data interpretation is intended to facilitate behaviour change and subsequently reduce hypoglycaemia. Results HYPO-CHEAT was piloted in 10 patients with CHI over 12 weeks and successfully identified weekly patterns of hypoglycaemia. These patterns consistently correlated with identifiable behaviours and were translated into both a change in proximal fingerprick behaviour and ultimately, a significant reduction in aggregated hypoglycaemia from 7.1% to 5.4% with four out of five patients showing clinically meaningful reductions in hypoglycaemia. Conclusions We have provided pilot data of a new approach to hypoglycaemia prevention that focuses on proactive prevention and behaviour change. 
This approach is personalised for individual patients with CHI and is a first step in changing our approach to hypoglycaemia prevention in this group.
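The heatmap at the core of the HYPO-CHEAT approach is an aggregation of CGM readings into time-of-week cells. A minimal sketch of that aggregation follows; the input format and function name are assumptions for illustration, not the published implementation:

```python
from collections import defaultdict

def hypo_heatmap(readings, threshold=70):
    """Aggregate CGM readings into a (weekday, hour) -> % hypoglycaemia map,
    the kind of simple visualisation the HYPO-CHEAT approach is built on.
    Input: iterable of (weekday, hour, glucose_mg_dl) tuples; this data
    model is an assumption for illustration."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [hypo readings, total]
    for weekday, hour, glucose in readings:
        cell = counts[(weekday, hour)]
        cell[1] += 1
        if glucose < threshold:
            cell[0] += 1
    return {cell: 100.0 * hypo / total for cell, (hypo, total) in counts.items()}

data = [("Mon", 3, 62), ("Mon", 3, 75), ("Mon", 3, 58), ("Tue", 8, 95)]
heat = hypo_heatmap(data)  # Monday 03:00 shows a recurring hypo pattern
```

Presenting the percentages as a weekday-by-hour grid makes recurring patterns (for example, a cluster of early-morning lows) visible at a glance, which is the behaviour-change lever the paper describes.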
... Algorithms can convert raw data into meaningful information such that accuracy and personalization make the content meaningful and valuable, not only in supporting decision-making in complex environments (e.g., Haki et al., 2022) but also in facilitating behavior change (e.g., Tyler et al., 2020). Whereas positive behavioral outcomes cannot be expected if the user does not perceive a nudge's information as relevant and valuable (Čaić et al., 2019), a fit between the nudge and personal characteristics can lead to significant outcomes. ...
Conference Paper
Energy and water consumption are significant sources of greenhouse gas emissions. While facilitating conservation behaviors in private households can help to mitigate these emissions, the effects of such mitigations are often indirect and delayed. Presenting meaningful feedback about consumption can make the positive effects of conservation behaviors clear to those who undertake them. We propose a large-scale field experiment to increase energy and water conservation through algorithmic eco-nudges. We use smart metering data to provide transparency, social references, and information about the environmental effects of conservation behavior. The proposed research is planned as a longitudinal design over 8 weeks in the winter of 2022/23 in Germany. The findings are expected to contribute to scholarly research and practice on nudging, as well as to housing providers and policymakers who are interested in green nudging.