Table 1 - uploaded by Olle ten Cate
Steps in the admission of foreign medical graduates. 

Source publication
Article
A need was felt to improve the quality of admission and licensing procedures for international medical graduates in The Netherlands. A clinical skills assessment was designed as part of a new procedure to realize a high-stakes, fair, transparent, and time-limited path of admission for international medical graduates to the Dutch health care system...

Contexts in source publication

Context 1
... insufficiently qualified independent providers take a place in the health care system. By describing the design and development of our procedure, we do not pretend to set a standard, but rather intend to contribute to transparency and discussions in the field. ... Usually no assessment of the applicant's actual medical competence was carried out. Between 1999 and 2002 the number of IMGs entering The Netherlands and applying for recognition of their diplomas increased significantly, from 180 to 400 per year, and medical schools as well as the government were increasingly dissatisfied with the procedure (CIBG Brochure 2004). The time had come to develop a new procedure. This new procedure was to provide equal treatment for all medical graduates from outside the European Economic Area (EEA; the European Union (EU) member states plus Norway, Iceland, Switzerland and Liechtenstein), as within the EEA there is an open labour market and the assessment of ...
Context 2
... new Dutch assessment of medical competence of foreign medical graduates (DAMCFG) procedure consists of (a) a portfolio, in which education and work experience are explained, (b) tests of general skills necessary to work in a Dutch health care environment, including Dutch medical language proficiency, knowledge of the organization of Dutch health care and English reading proficiency, and (c) a series of assessments of medical competence, including a computer-based assessment of the knowledge of basic and clinical sciences and a hands-on assessment of clinical skills (Table 1). ...
Context 3
... first step in the assessment procedure, namely the general skills tests, was failed by 161 candidates, mainly because of insufficient mastery of the Dutch language, in terms of speaking, reading and writing. As a consequence, only 39 candidates took the assessment of medical competence (Step IIB, see Table 1) between April 2006 and October 2008. ...
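To make the stepwise, gated character of the procedure easier to follow, here is a minimal Python sketch of the sequence described in Contexts 2 and 3; the step names and data layout are illustrative assumptions of mine, not the actual DAMCFG implementation.

```python
from dataclasses import dataclass, field

# The three components quoted above: a portfolio, general skills tests
# (Step I) and assessments of medical competence (Step IIB).
STEPS = ["portfolio", "general_skills", "medical_competence"]

@dataclass
class Candidate:
    name: str
    results: dict = field(default_factory=dict)  # step -> passed (bool)

def next_step(candidate: Candidate):
    """Return the first step the candidate has not yet passed.

    The procedure is gated: a later step cannot be taken before the
    earlier ones are passed, which is why only 39 of 200 candidates
    reached the assessment of medical competence.
    """
    for step in STEPS:
        if not candidate.results.get(step, False):
            return step
    return None  # all steps passed

# A candidate who fails the general skills tests never reaches Step IIB.
c = Candidate("example", {"portfolio": True, "general_skills": False})
assert next_step(c) == "general_skills"
```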

Similar publications

Article
The purpose of this study was to investigate the predictive and construct validity of a high-stakes objective structured clinical examination (OSCE) used to select candidates for a 3-month clinical rotation to assess practice-readiness status. Analyses were undertaken to establish the reliability and validity of the OSCE. The generalizability coeff...
Article
Purpose: United States (US) and Canadian citizens attending medical school abroad often desire to return to the US for residency and therefore must pass US licensing exams. We describe a 2-day United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) preparation course for students in the Technion American Medical School (TEA...
Article
Context: Differential performance in postgraduate examinations between home medical graduates and those who qualified outside their country of practice is well recognised. This difference is especially marked in the practical component of the UK Membership of the Royal College of General Practitioners (MRCGP) examination. The potential causes of suc...
Article
We present the results of the medical knowledge test taken after completion of internship for Swedish medical authorization, from 2009 to the spring of 2015. A total of 7,613 tests were analyzed. Interns who graduated from Swedish universities failed in 2.7% to 3.8% of the test moments. Interns who graduated from countries within the European Union (E...

Citations

... Given the growing contribution of specialist international medical graduates (SIMGs) in many countries [1][2][3][4], and to Australia and New Zealand in particular [5,6], where it is estimated that between 30 and 40% of the current medical workforce received their medical degree overseas [7], assessing their performance in comparison to their nationally trained counterparts is important for ensuring consistent attainment of clinical skills [8,9] as well as interpersonal and communication skills [10,11]. SIMGs wishing to enter specialist areas may need to achieve certification as specified by specific boards and colleges, given differences in exposure to such specialties across countries [12,13]. ...
Article
Background: Representation of specialist international medical graduates (SIMGs) in specific specialties such as surgery can be expected to grow as doctor shortages are predicted in the context of additional care provision for aging populations and limited local supply. Many national medical boards and colleges provide pathways to medical registration and fellowship for SIMGs that may include examinations and short-term training. There is currently very little understanding of how SIMGs are perceived by colleagues and whether their performance is perceived to be comparable to that of locally trained medical specialists. It is also not known how SIMGs perceive their own capabilities in comparison to local specialists. The aim of this study is to explore the relationships between colleague feedback and self-evaluation in the specialist area of surgery, to identify possible methods for enhancing registration and follow-up training within the jurisdiction of Australia and New Zealand.

Methods: Feedback from 1728 colleagues on 96 SIMG surgeons and from 406 colleagues on 25 locally trained Fellow surgeons was collected, resulting in 2134 responses on 121 surgeons in total. Additionally, 98 SIMGs and 25 Fellows provided self-evaluation scores (123 in total). Questionnaire and data reliability were calculated before analysis of variance, principal component analysis and network analysis were performed to identify differences between colleague evaluations and self-evaluations by surgeon type.

Results: Colleagues rated both SIMGs and Fellows in the ‘very good’ to ‘excellent’ range. Fellows received a slightly, but statistically significantly, higher average score than SIMGs, especially in areas dealing with medical skills and expertise. However, SIMGs received higher scores where there was motivation to demonstrate working well with colleagues. Colleagues rated SIMGs along one dimension and Fellows along three, which can be identified as clinical management skills, interpersonal communication skills and self-management skills. On self-evaluation, both SIMGs and Fellows gave themselves significantly lower average scores than their colleagues did, with SIMGs giving themselves a statistically significantly higher score than Fellows.

Conclusions: Colleagues rate SIMGs and Fellows highly. The results of this study indicate that SIMGs tend to self-assess more highly but, according to colleagues, do not display the same level of differentiation between clinical management, interpersonal and self-management skills. Further research is required to confirm these provisional findings and the possible reasons for this lack of differentiation, if it exists. Depending on the outcome, support mechanisms can be explored that may lead to performance more comparable with that of locally trained graduates of Australia and New Zealand in these three dimensions.
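The statistical pipeline named in the Methods above (reliability checks, analysis of variance, principal component analysis) can be illustrated with a short Python sketch; the synthetic ratings and group means below are invented for illustration and bear no relation to the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic colleague ratings on a 1-5 scale; 10 questionnaire items per
# surgeon. Group sizes follow the abstract (96 SIMGs, 25 Fellows), but the
# scores themselves are assumptions.
simg = rng.normal(4.4, 0.4, size=(96, 10)).clip(1, 5)
fellow = rng.normal(4.6, 0.4, size=(25, 10)).clip(1, 5)

# One-way ANOVA on per-surgeon mean ratings, analogous to testing whether
# the two groups differ in average colleague score.
f_stat, p_val = f_oneway(simg.mean(axis=1), fellow.mean(axis=1))
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# PCA per group: with real data, counting the components needed to reach
# (say) 80% explained variance is one way to see the one-dimensional vs.
# three-dimensional structure the paper reports. Random noise like this
# synthetic data will not reproduce that finding.
for label, ratings in [("SIMG", simg), ("Fellow", fellow)]:
    pca = PCA().fit(ratings)
    n = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.8)) + 1
    print(f"{label}: {n} component(s) reach 80% explained variance")
```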
... 61 Others have suggested that the Swedish experience is not unique in Europe. 62 In relation to this form of NLE approach, and unlike the previous approaches described, it is important to note that there is a lack of readily available research from which conclusions can be drawn. ...
Article
Background: National licensing examinations (NLEs) are large-scale examinations usually taken by medical doctors close to the point of graduation from medical school. Where NLEs are used, success is usually required to obtain a license for full practice. Approaches to national licensing, and the evidence that supports their use, vary significantly across the globe. This paper aims to develop a typology of NLEs, based on candidacy, to explore the implications of different examination types for workforce planning.

Methods: A systematic review of the published literature and medical licensing body websites, an electronic survey of all medical licensing bodies in highly developed nations, and a survey of medical regulators.

Results: The evidence gleaned through this systematic review highlights four approaches to NLEs: where graduating medical students wishing to practice in their national jurisdiction must pass a national licensing exam before they are granted a license to practice; where all prospective doctors, whether from the national jurisdiction or international medical graduates, are required to pass a national licensing exam in order to practice within that jurisdiction; where international medical graduates are required to pass a licensing exam if their qualifications are not acknowledged to be comparable with those of students from the national jurisdiction; and where there are no NLEs in operation. This typology facilitates comparison across systems and highlights the implications of different licensing systems for workforce planning.

Conclusion: The issue of national licensing cannot be viewed in isolation from workforce planning; future research on the efficacy of national licensing systems to drive up standards should be integrated with research on the implications of such systems for the mobility of doctors across borders.
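As a reading aid, the four candidacy-based approaches listed in the Results can be written down as a small data structure; this Python sketch is my own schematic rendering of the typology, not code from the paper.

```python
from enum import Enum

class NLEType(Enum):
    """Four candidacy-based approaches to NLEs found in the review."""
    HOME_GRADUATES = "home graduates must pass to be licensed"
    ALL_PROSPECTIVE_DOCTORS = "home and international graduates must pass"
    IMGS_WITHOUT_COMPARABLE_QUALIFICATIONS = "only IMGs with non-comparable qualifications must pass"
    NO_NLE = "no national licensing examination in operation"

def must_sit_exam(nle_type: NLEType, is_img: bool, comparable: bool) -> bool:
    """Schematic gate deciding who must sit the exam under each approach."""
    if nle_type is NLEType.NO_NLE:
        return False
    if nle_type is NLEType.ALL_PROSPECTIVE_DOCTORS:
        return True
    if nle_type is NLEType.IMGS_WITHOUT_COMPARABLE_QUALIFICATIONS:
        return is_img and not comparable
    # HOME_GRADUATES: the exam targets graduates of the national jurisdiction.
    return not is_img

# Example: under the third approach, an IMG whose qualifications are not
# acknowledged as comparable must pass the exam.
assert must_sit_exam(NLEType.IMGS_WITHOUT_COMPARABLE_QUALIFICATIONS, True, False)
```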
... There is no doubt that there is now a high degree of sophistication in the testing and assessment involved in licensure exams, particularly the USMLE. Perhaps for this reason, the technical aspects of NLEs are well evidenced in the literature [30, 47, 48], assuring the examination from pedagogic and legal standpoints. However, claims that NLEs lead to improved patient safety [8, 38, 49], enhanced quality of care [50], and the identification of doctors likely to subsequently face disciplinary action [39] are less well evidenced and rely on correlations rather than causal evidence. ...
... In view of the significant part IMGs play in the physician workforce of many countries and the apparent difficulties they present to regulators, this is an area of research that needs to be better understood. The research that does exist (at least that which met our inclusion criteria) suggests that IMGs may, for a number of reasons, work in occupations that do not necessarily match their skills or qualifications [45, 47]. If this is so, and if licensure examinations are a contributory factor, then in a world where physician shortages exist it seems appropriate to understand this better. ...
Article
Background: To investigate the existing evidence base for the validity of large-scale licensing examinations, including their impact.

Methods: Systematic review against a validity framework, exploring Embase (Ovid Medline), Medline (EBSCO), PubMed, Wiley Online, ScienceDirect and PsycINFO from 2005 to April 2015. Papers were included when they discussed national or large regional (state-level) examinations for clinical professionals, linked to examinations early in careers or near the point of graduation, and where success was required to subsequently be able to practice. Using a standardized data extraction form, two independent reviewers extracted study characteristics, with the rest of the team resolving any disagreement. A validity framework developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education was used to evaluate each paper's evidence supporting or refuting the validity of national licensing examinations.

Results: 24 published articles provided evidence of validity across the five domains of the validity framework. Most papers (n = 22) provided evidence on national licensing examinations' relationships to other variables and their consequential validity. Overall, there was evidence that those who do well on earlier or on subsequent examinations also do well on national testing. There is a correlation between NLE performance and some patient outcomes and rates of complaints, but no causal link has been established.

Conclusions: The debate around licensure examinations is strong on opinion but weak on validity evidence. This is especially true of the wider claims that licensure examinations improve patient safety and practitioner competence.
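For orientation, the five domains of the AERA/APA/NCME validity framework referred to here are the standard five sources of validity evidence; the Python sketch below records only what this abstract itself reports, and the layout is my own assumption.

```python
# The five sources of validity evidence in the AERA/APA/NCME framework.
VALIDITY_DOMAINS = (
    "test content",
    "response processes",
    "internal structure",
    "relations to other variables",
    "consequences of testing",
)

# Tally of what the abstract reports: 24 papers overall, most of them
# (n = 22) contributing evidence on the last two domains; counts for the
# remaining domains are not given in the abstract, hence None.
papers_total = 24
evidence = {
    "relations to other variables": 22,
    "consequences of testing": 22,
    "test content": None,
    "response processes": None,
    "internal structure": None,
}
```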
... From 2009 to 2014, PA students near the end of their training (4 months before graduation) at the HAN University of Applied Sciences (Nijmegen) were invited through mailings to participate in this study. Data on MDs were obtained as part of another study [14], in which MDs were invited through advertisements at their University Medical Centers (UMC). The MDs had graduated recently, with a maximum of 3 months of clinical experience after graduation. ...
... Part of an OSCE can be the formulation of a differential diagnosis and a management plan. The OSCE used in this study was developed by the Radboud University Medical Center as part of a new procedure to realize a high-stakes, fair and transparent clinical skills examination [14]. ...
... We used an objective structured clinical examination that was developed at the Radboud University Medical Center in Nijmegen [14]. The reliability of the total OSCE used in this study is slightly higher than the reliability of OSCEs used in other studies [18]. ...
Article
Rationale, aims and objectives: The physician assistant (PA) is trained to perform clinical tasks traditionally performed by medical doctors (MDs). Previous research showed no difference in the level of clinical skills of PAs compared with MDs within a specific niche, that is, the specialty in which they are employed. However, MDs as well as PAs working within a specialty have to be able to recognize medical problems across the full scope of medicine. The objective is to examine PA students' level of general clinical skills across the breadth of clinical cases.

Method: A cross-sectional study was conducted. PA students and recently graduated MDs in the Netherlands were observed on their clinical skills by means of an objective structured clinical examination comprising five stations with common medical cases. The levels of mastery of history taking, physical examination, communication and clinical reasoning of PA students and MDs are described as means and standard deviations. Cohen's d was used to present effect sizes.

Results: PA students and MDs scored about equally on history taking (PA 5.8 ± 0.8 vs. MD 5.7 ± 0.7), physical examination (PA 4.8 ± 1.3 vs. MD 5.4 ± 0.8) and communication (PA 8.2 ± 0.8 vs. MD 8.6 ± 0.5) across the full scope of medicine. On the quality of the report, including the patient management plan, PA students scored a mean of 6.0 ± 0.6 and MDs 6.8 ± 0.6.

Conclusions: In this setting in the Netherlands, PA students and MDs score about equally in the appraisal of common cases in medical practice. The slightly lower scores of PA students on clinical reasoning across the full scope of clinical care may warrant attention from medical teams working with PAs and from PA training programmes.
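Because the abstract reports group means and standard deviations and states that Cohen's d was used, a quick worked check of the effect sizes may be helpful; the pooled-SD formula in this Python sketch assumes roughly equal group sizes, which is my assumption rather than something stated in the paper.

```python
import math

def cohens_d(mean1: float, sd1: float, mean2: float, sd2: float) -> float:
    """Cohen's d with a pooled SD, assuming approximately equal group sizes."""
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (mean1 - mean2) / pooled_sd

# Scores reported in the abstract (PA students vs. recently graduated MDs).
print(f"history taking:       d = {cohens_d(5.8, 0.8, 5.7, 0.7):.2f}")  # ~0.13, negligible
print(f"physical examination: d = {cohens_d(4.8, 1.3, 5.4, 0.8):.2f}")  # ~-0.56, moderate
print(f"communication:        d = {cohens_d(8.2, 0.8, 8.6, 0.5):.2f}")  # ~-0.60, moderate
print(f"report quality:       d = {cohens_d(6.0, 0.6, 6.8, 0.6):.2f}")  # ~-1.33, large
```

The largest effect is on report quality (the patient management plan), which matches the Conclusions' emphasis on PA students' clinical reasoning scores.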
... In this study, written information was provided for physical abnormalities that could not be simulated, whenever abnormal findings were supposed to be present on physical examination. It is customary to handle this in this way in OSCE examinations; 18 however, as a result it cannot be judged whether a participant would actually have identified such an abnormality correctly. ...
Article
To compare the clinical competencies of second-year anaesthesiology residents and physician assistants (PAs) in the preoperative anaesthesiology outpatient clinic. Comparative qualitative observational study. The two study groups were compared using 5 test stations representing 5 different cases of varying degrees of complexity with standardized patients. For each case, the patients and two anaesthesiologists assessed the performance of the PAs and the residents using a quantitative scoring system for 4 clinical skills relevant to the preoperative anaesthesiology outpatient clinic. These skills were history taking, physical examination, communication, and reporting. At each station, a score was calculated for each skill. The groups' scores were subsequently compared. 9 PAs and 11 residents completed the station tests. There were no significant differences between the two groups of participants. In this study in a preoperative anaesthesiology outpatient clinic, no difference in clinical competencies was found between PAs and second-year anaesthesiology residents.
... In 2009, Sonderen et al. published a study on the assessment procedure. They presented their experiences with the medical knowledge and skills test (phase 3) as good practice for other European countries: 'Out of 200 participants taking part in the assessment process, 161 (80%) failed, mainly because of insufficient mastery of the Dutch language.' 13 The authors noted that the clinical skills assessment was valid and reliable, but withheld judgment on the validity and reliability of the AKV test. Remarkably, Sonderen et al. apparently already had access in 2009 to numbers of participants in the assessment procedure that were not yet public and that deviated strongly from the data made available in 2011 by the Commissie Buitenslands Gediplomeerden Volksgezondheid (Jaarverslag CBGV 2010).6 ...
Book
The book contains an evaluation of a Dutch assessment procedure for international medical graduates which was introduced in December 2005. Since then, the influx of international medical graduates has diminished strongly. This book shows the numbers of applicants who failed and who succeeded. In many cases the reason for failure was language deficits. In the early years of the assessment procedure, bridging courses for IMGs were not available. This was the major reason for the decrease in the influx of IMGs into Dutch medical schools.
... Many IMGs gave up hope of ever passing the phase 2 exams. It was quite astonishing to read that Sonderen et al. (2009) presented the phase 3 exams as good practice for other European countries. Out of 200 participants entering the assessment procedure, 161 (80%) failed, "mainly because of insufficient mastery of the Dutch language". ...
Article
Discontinuation of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam and the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 2 Performance Evaluation (2-PE) raised questions about the ability of medical schools to ensure the clinical skills competence of graduating students. In February 2021, representatives from all Florida, United States, allopathic and osteopathic schools initiated a collaboration to address this critically important issue in the evolving landscape of medical education. A 5-point Likert scale survey of all members (n=18/20 individuals representing 10/10 institutions) reveals that initial interest in joining the collaboration was high among both individuals (mean 4.78, SD 0.43) and institutions (mean 4.69, SD 0.48). Most individuals (mean 4.78, SD 0.55) and institutions (mean 4.53, SD 0.72) are highly satisfied with their decision to join. Members most commonly cited a "desire to establish a shared assessment in place of Step 2 CS/2-PE" as their most important reason for joining. Experienced benefits of membership were ranked as follows: 1) networking, 2) shared resources for curriculum implementation, 3) scholarship, and 4) work towards a shared assessment in place of Step 2 CS/2-PE. Challenges of membership were ranked as follows: 1) logistics such as scheduling and technology, 2) agreement on common goals, 3) total time commitment, and 4) large group size. Members cited the "administration of a joint assessment pilot" as the highest priority for the coming year. Florida has successfully launched a regional consortium for the assessment of clinical skills competency, with high levels of member satisfaction, which may serve as a model for future regional consortia.
Article
National licensing examinations are typically large‐scale examinations taken early in a career or near the point of graduation, and, importantly, success is required to subsequently be able to practice. They are becoming increasingly popular as a method of quality assurance in the medical workforce, but debate about their contribution to patient safety and the improvement of healthcare outcomes continues. A systematic review of the national licensing examination literature demonstrates that there is disagreement between assessment experts about the strengths and challenges of licensing examinations. This is characterized by a trans‐Atlantic divide between the dominance of psychometric reliability assurance in North America and the wider interpretations of validity, to include consequences, in Europe. We conclude that the debate might benefit from refocusing to what a national licensing examination should assess: to achieve a balance between assessing a breadth of skills and the capacity for such skills in practice, and focusing less on reproducibility.
Article
Introduction: An extended clinical examination (ECE) was administered to 85 final-year medical students at the Radboud University Medical Centre in the Netherlands. The aim of the study was to determine the psychometric quality and the suitability of the ECE as a measurement tool to assess clinical proficiency in eight separate clinical skills.

Methods: Generalizability studies were conducted to determine the generalizability coefficient and the sources of variance of the ECE. An additional D-study was performed to estimate the generalizability coefficients for altered numbers of stations.

Results: The largest sources of variance were skill difficulties (36.18%), the general error term (26.76%) and the rank ordering of skill difficulties across the stations (21.89%). The generalizability coefficient of the entire ECE was above the 0.70 lower bound (G = 0.74). The D-studies showed that seven of the eight separate skills could yield sufficient G coefficients if the ECE were lengthened from 8 to 14 stations.

Discussion: The ECE proved to be a reliable clinical assessment that enables examinees to compose a clinical reasoning path from self-obtained data. The ECE can also be used as an assessment tool for separate clinical skills.
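The D-study extrapolation reported here follows the standard generalizability-theory relation G = σ²_p / (σ²_p + σ²_δ / n), where n is the number of stations. In the Python sketch below, the variance values are illustrative assumptions chosen only so that G ≈ 0.74 at 8 stations; they are not the study's actual variance components.

```python
def g_coefficient(universe_var: float, rel_error_var: float, n_stations: int) -> float:
    """Generalizability coefficient for a score averaged over n stations:
    G = sigma^2_p / (sigma^2_p + sigma^2_delta / n)."""
    return universe_var / (universe_var + rel_error_var / n_stations)

# Illustrative variance components (assumed, not from the paper), scaled so
# that G is about 0.74 with the original 8 stations.
universe_var = 1.0    # universe-score (person) variance
rel_error_var = 2.81  # relative error variance per station

for n in (8, 14):
    print(f"{n} stations: G = {g_coefficient(universe_var, rel_error_var, n):.2f}")
# 8 stations:  G = 0.74
# 14 stations: G = 0.83  (lengthening the ECE raises G, as the D-study showed)
```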