Fig 2 - uploaded by Olivier Chassany
Examples of reasons for certain types of errors and procedures for reducing them. 


Source publication
Article
Full-text available
The development of medicinal products is subject to quality standards aimed at guaranteeing that database contents accurately reflect the source documents. Paradoxically, these standards hardly address the quality of the source data itself. The objective of this work was to propose recommendations to improve data quality in three fields (pharmacovi...

Context in source publication

Context 1
... from the previously described analyses, the following recommendations were formulated according to a common principle (see figure 2). Some of these recommendations are original, whereas others are adapted from existing guidelines where their application needs reinforcing. ...

Similar publications

Article
Full-text available
Observed-to-expected (OE) analyses, together with data mining algorithms [1][2][3][4][5][6][7] and pharmacoepidemiological studies [8], are part of the quantitative pharmacovigilance toolkit for vaccines. While data mining algorithms generate hypotheses about potential safety concerns and pharmacoepidemiological studies test specific hypotheses or measur...

Citations

... A significant challenge in pharmacovigilance is to assess the likelihood that the medicinal product caused or contributed to the occurrence of the event experienced by the patient, i.e. causality assessment. In order to assess the possibility of a causal relationship between exposure to a medicinal product during pregnancy and an adverse pregnancy outcome, it is important that the underlying data (raw variables) and information (processing and interpretation of raw data), in individual case reports and case series, are both present and of high quality [7]. Therefore, in order to establish reliably whether there is a causal relationship between a medicinal product and a reported event, relevant information for the causality assessment needs to be present. ...
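The requirement described in this excerpt, that predefined information elements must be present before causality can be assessed, can be sketched as a simple presence check. The element names below are illustrative assumptions, not the actual fields used by the published tool:

```python
# Hypothetical information elements for a pregnancy pharmacovigilance case;
# the real tool predefines its own list of raw variables.
REQUIRED_ELEMENTS = [
    "medicinal_product", "exposure_window", "pregnancy_outcome",
    "gestational_age_at_exposure", "maternal_history",
]

def information_completeness(case: dict) -> float:
    """Return the fraction of predefined elements present (non-empty)."""
    if not REQUIRED_ELEMENTS:
        return 0.0
    present = sum(1 for e in REQUIRED_ELEMENTS if case.get(e))
    return present / len(REQUIRED_ELEMENTS)

# Example: a sparse report with only 2 of the 5 elements filled in
case = {"medicinal_product": "drug X", "pregnancy_outcome": "live birth"}
score = information_completeness(case)  # 2 of 5 elements present -> 0.4
```

A real assessment would also weight elements by relevance, as the validated tool described below does; this sketch only captures the presence/absence dimension.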
Article
Full-text available
To assess the causal relationship between a medicinal product and a reported event, relevant information needs to be present. Information elements for assessing cases of exposure to medicinal products during pregnancy were predefined and used in a new tool to assess the quality of information. However, the extent to which the presence or absence of these predefined information elements is associated with the overall clinical quality of these cases, as evaluated by pharmacovigilance experts, remains uncertain. We aimed to validate a novel method to assess the clinical quality of information in real-world pregnancy pharmacovigilance case reports. The clinical quality of case reports regarding medicinal product exposure and pregnancy-related outcomes was appraised from spontaneous reports, literature, Teratology Information Services (UK and Switzerland), The Dutch Pregnancy Drug Register, the Gilenya pregnancy registry and the Enhanced PV programme of Novartis. Assessment was done by means of the novel standardised tool, based on the presence and relevance of information, and by expert judgement. The novel tool was validated against the expert assessment as the gold standard, with performance expressed as the area under the receiver operating characteristic curve, after which sensitivity and specificity were calculated using cross-tabulations. Inter-rater variability was determined by means of weighted Cohen’s kappa. One hundred and eighty-six case reports were included. The clinical quality score as assessed by the novel method was divided into three categories, with cut-off values of 45% (poor to intermediate) and 65% (intermediate to excellent). Sensitivity was 0.93 and 0.96 for poor to intermediate and intermediate to excellent, respectively. Specificity was 0.52 and 0.73, respectively. Inter-rater variability was 0.65 (95% confidence interval 0.53–0.78) for the newly developed approach, and 0.40 (95% confidence interval 0.28–0.52) for the gold standard assessment.
The tool described in this study using the presence and relevance of elements of information is the first designed, validated and standardised method for the assessment of the quality of information of case reports in pregnancy pharmacovigilance data. This method confers less inter-rater variability compared with a quality assessment by experts of pregnancy-related pharmacovigilance data.
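The three-category scoring reported in this abstract (cut-offs at 45% and 65%) can be sketched as a simple threshold function. The function name and the handling of scores falling exactly on a cut-off are assumptions, since the abstract does not specify boundary behaviour:

```python
def categorize_quality(score_pct: float) -> str:
    """Map a clinical quality score (0-100%) onto the three categories
    reported in the abstract: cut-offs of 45% (poor to intermediate)
    and 65% (intermediate to excellent). Boundary handling is assumed."""
    if score_pct < 45.0:
        return "poor"
    if score_pct < 65.0:
        return "intermediate"
    return "excellent"
```

The reported sensitivities (0.93 and 0.96) and specificities (0.52 and 0.73) apply to these two cut-offs when compared against expert judgement as the gold standard.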
... Incomplete or inaccurate information can affect signal detection and risk assessment. Variations in data collection and reporting systems across different regions and healthcare settings can further compromise data quality [1]. 3. Signal Detection in Large Datasets: With the increasing volume of data in pharmacovigilance databases, identifying meaningful signals of potential risks becomes challenging. ...
Article
Full-text available
Pharmacovigilance plays a crucial role in ensuring drug safety and promoting patient well-being throughout the life cycle of medicinal products. However, this field faces several challenges, including underreporting of adverse events, data quality issues, and the complexity of signal detection in large datasets. To address these challenges and enhance drug safety monitoring, there is a growing interest in harnessing the potential of generative artificial intelligence (AI) techniques. This article explores the applications and implications of generative AI in pharmacovigilance. It provides an overview of popular generative models and their working principles, highlighting their ability to analyse drug databases, medical literature, and real-world data sources to identify drug-drug interactions, adverse events, and potential safety signals. Moreover, it emphasizes the importance of human validation and expert oversight in interpreting and acting on the insights generated by generative AI algorithms. The integration of generative AI with traditional pharmacovigilance methods creates a synergistic approach, combining the computational power of AI with human expertise. This integration can lead to improved signal detection, efficient case report generation, proactive risk assessment, and optimized resource allocation. Additionally, the article addresses challenges related to data quality, interpretability, and model validation in generative AI applications, emphasizing the need for standardized protocols and collaborative efforts among stakeholders. Overall, the potential of generative AI in pharmacovigilance is vast. By leveraging its capabilities, we can enhance drug safety monitoring, facilitate early detection of adverse events, and improve patient outcomes. However, it is crucial to address ethical considerations, ensure data privacy, and maintain human oversight to foster responsible and effective implementation of generative AI in pharmacovigilance practices.
... This tool is complex and requires specific training to avoid variability in coding processes; data interpretation in reports can cause problems. Therefore, some recommendations for data quality assurance can be applied concerning the investigator (certification, reinforcing professionalism in research, improving accessibility in pharmacovigilance reporting); the origin, transcription, and validation of the critical data; the data processing; and the reporting/publication processes [6]. ...
... The pitfalls of the secondary use of observational data to support research are well documented. [8][9][10][11][12][13] Typically, these data are collected for billing or diagnostic purposes rather than with research endpoints in mind. Von Lucadou et al. note that information in the electronic health record may not be as granular as data captured during the course of a clinical trial, and that time stamps of clinical events should be examined prior to inferring temporal relationships. ...
Article
Full-text available
Objective Advances in standardization of observational healthcare data have enabled methodological breakthroughs, rapid global collaboration, and generation of real-world evidence to improve patient outcomes. Standardizations in data structure, such as use of common data models, need to be coupled with standardized approaches for data quality assessment. To ensure confidence in real-world evidence generated from the analysis of real-world data, one must first have confidence in the data itself. Materials and Methods We describe the implementation of check types across a data quality framework of conformance, completeness, plausibility, with both verification and validation. We illustrate how data quality checks, paired with decision thresholds, can be configured to customize data quality reporting across a range of observational health data sources. We discuss how data quality reporting can become part of the overall real-world evidence generation and dissemination process to promote transparency and build confidence in the resulting output. Results The Data Quality Dashboard is an open-source R package that reports potential quality issues in an OMOP CDM instance through the systematic execution and summarization of over 3300 configurable data quality checks. Discussion Transparently communicating how well common data model-standardized databases adhere to a set of quality measures adds a crucial piece that is currently missing from observational research. Conclusion Assessing and improving the quality of our data will inherently improve the quality of the evidence we generate.
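The pattern this abstract describes, individual checks across conformance/completeness/plausibility categories, each paired with a decision threshold, can be sketched as follows. This is a Python illustration of the general approach, not the actual Data Quality Dashboard (which is an R package); the check names, record fields, and threshold values are assumptions:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class QualityCheck:
    name: str
    category: str                        # "conformance" | "completeness" | "plausibility"
    check: Callable[[list], float]       # returns % of rows violating the rule
    threshold_pct: float                 # decision threshold: FAIL if violations exceed this

def run_checks(rows: list, checks: List[QualityCheck]) -> List[Dict]:
    """Execute each check and summarize pass/fail against its threshold."""
    results = []
    for c in checks:
        violation_pct = c.check(rows)
        results.append({
            "check": c.name,
            "category": c.category,
            "violation_pct": violation_pct,
            "status": "FAIL" if violation_pct > c.threshold_pct else "PASS",
        })
    return results

# Illustrative plausibility check: birth years outside a plausible range
def implausible_birth_year(rows: list) -> float:
    bad = sum(1 for r in rows if not (1900 <= r.get("year_of_birth", 0) <= 2025))
    return 100.0 * bad / len(rows) if rows else 0.0

rows = [{"year_of_birth": 1985}, {"year_of_birth": 1870}, {"year_of_birth": 2001}]
report = run_checks(rows, [QualityCheck("plausible_birth_year", "plausibility",
                                        implausible_birth_year, 5.0)])
```

Here one of three records violates the rule (33.3% > the assumed 5% threshold), so the check fails; the real package systematically runs over 3,300 such configurable checks against an OMOP CDM instance.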
... Moreover, some studies may be challenged or stopped due to their low quality. These challenges were documented in a survey about the contribution of PASS to drug safety, conducted over a three-year period (2008 to 2010) [20,21]. Another limitation is the wide use of statistics and multiple testing, which may lead to associations between drug use and health outcomes that are anything but causal [22]. ...
Article
During the past few decades, it has been stated that a paradigm shift has occurred in the assessment and management of patient-related drug safety. Some of these changes have resulted in a significant increase in the importance of pharmacoepidemiology and its use in pharmacovigilance. For European member states, the Pharmacovigilance Risk Assessment Committee (PRAC) is responsible for assessing the protocols and results of imposed and non-imposed post-authorization safety studies (PASS). Between 2013 and 2017, the total number of PASS for the different products, including protocols and results, was 1062. The number of PASS protocols increased over time, except in 2017, when a 25% decrease was observed. PASS results, in contrast, steadily increased over the 5-year period. Between 2014 and 2017, about 29% (n = 137) of PRAC-reviewed protocols were imposed. The number of imposed PASS was almost constant over time, with a mean of 34.3 ± 7.6 imposed protocols per year and 3.5 ± 1.74 imposed results per year. The need for the implementation of PASS for pharmacovigilance regulatory activities is increasing. Nevertheless, conducting such studies remains difficult. © 2019 Société française de pharmacologie et de thérapeutique
... The quality of medical data in a PV context is of utmost importance [31]. Without ADR reports, PV systems cannot function, and the literature regularly emphasises the need for advancements with regard to the quality of such reports [32]. ...
Conference Paper
Full-text available
Pharmacovigilance (PV) is defined as the science and activities relating to drug safety surveillance, i.e. the detection, assessment, understanding and prevention of adverse effects or any other drug-related problem. Even though substantial progress has been made over the past several decades to improve the effectiveness and efficiency of PV systems, the literature suggests that the vast majority of PV systems are still burdened by similar challenges. Typical challenges often relate to aspects considered capable of facilitating improved PV, i.e. engaging the public, building collaborations and partnerships, incorporating informatics into PV systems, adopting a global (standardised) approach, and assessing the impact of efforts. Furthermore, researchers argue that these challenges are not new and have been, and still are, pivotal research objectives within PV studies. Advances in science and technology over the past couple of decades have seen technologies developed, and subsequently successfully employed, to address similar challenges in other industries. The impact of these technologies is described in the literature, proving their success. This paper argues the case for such technologies within PV systems, proposing a research agenda for identifying technologies that hold the potential to address the challenges faced by PV systems.
Article
Background: Issues concerning inadequate source data in clinical trials rank second among the most common findings by regulatory authorities. The increasing use of electronic clinical information systems by healthcare providers offers an opportunity to facilitate and improve the conduct of clinical trials and their source documentation. We report on a number of tools implemented in the clinical information system of a university hospital to support clinical research. Methods: In 2011/2012, a set of tools was developed in the clinical information system of the University Hospital Zurich to support clinical research, including (1) a trial registry for documenting metadata on the clinical trials conducted at the hospital, (2) a patient-trial-assignment tool to tag patients in the electronic medical charts as participants of specific trials, (3) medical record templates for the documentation of study visits and trial-related procedures, (4) online queries on trials and trial participants, (5) access to the electronic medical records for clinical monitors, (6) an alerting tool to notify of hospital admissions of trial participants, (7) queries to identify potentially eligible patients in the planning phase as trial feasibility checks and during the trial as recruitment support, and (8) order sets to facilitate the complete and accurate performance of study visit procedures. Results: The number of approximately 100 new registrations per year in the voluntary trial registry in the clinical information system now matches the numbers of the hospital's existing mandatory trial registry. Likewise, the yearly numbers of patients tagged as trial participants, as well as the use of the standardized trial record templates, increased to 2408 documented trial enrolments and 190 reports generated per month in the year 2013. Accounts for 32 clinical monitors were established in the first 2 years, monitoring a total of 49 trials in 16 clinical departments.
Fifteen months after adding the optional feature of hospital admission alerts for trial participants, 107 running trials had activated this option, including 48 of the 97 studies (49.5%) registered in the year 2013, generating approximately 85 alerts per month. Conclusions: The popularity of the presented tools in the clinical information system illustrates their potential to facilitate the conduct of clinical trials. The tools also allow for enhanced transparency regarding trials conducted at the hospital. Future studies on monitoring and inspection findings will have to evaluate their impact on quality and safety.