Research
Bias in published cost effectiveness studies: systematic review
Chaim M Bell, David R Urbach, Joel G Ray, Ahmed Bayoumi, Allison B Rosen, Dan Greenberg, Peter J Neumann
Abstract
Objective To investigate if published studies tend to report
favourable cost effectiveness ratios (below $20 000, $50 000,
and $100 000 per quality adjusted life year (QALY) gained) and
evaluate study characteristics associated with this phenomenon.
Design Systematic review.
Studies reviewed 494 English language studies measuring
health effects in QALYs published up to December 2001
identified using Medline, HealthSTAR, CancerLit, Current
Contents, and EconLit databases.
Main outcome measures Incremental cost effectiveness ratios
measured in dollars set to the year of publication.
Results Approximately half the reported incremental cost
effectiveness ratios (712 of 1433) were below $20 000/QALY.
Studies funded by industry were more likely to report cost
effectiveness ratios below $20 000/QALY (adjusted odds ratio
2.1, 95% confidence interval 1.3 to 3.3), $50 000/QALY (3.2, 1.8
to 5.7), and $100 000/QALY (3.3, 1.6 to 6.8). Studies of higher
methodological quality (adjusted odds ratio 0.58, 0.37 to 0.91)
and those conducted in Europe (0.59, 0.33 to 1.1) and the
United States (0.44, 0.26 to 0.76) rather than elsewhere were
less likely to report ratios below $20 000/QALY.
Conclusion Most published analyses report favourable
incremental cost effectiveness ratios. Studies funded by industry
were more likely to report ratios below the three thresholds.
Studies of higher methodological quality and those conducted
in Europe and the US rather than elsewhere were less likely to
report ratios below $20 000/QALY.
Introduction
Cost effectiveness analysis can help inform policy makers on
better ways to allocate limited resources.1–3 Some form of cost
effectiveness analysis is now required for health interventions to be
covered by many insurers.1 4 5
The quality adjusted life year (QALY)
is used to compare the effectiveness of a wide range of
interventions. Cost effectiveness analysis produces a numerical
ratio, the incremental cost effectiveness ratio, in dollars per QALY.
This ratio is used to express the difference in cost effectiveness
between new diagnostic tests or treatments and current ones.
Interpreting the results of cost effectiveness analysis can be
problematic, making it difficult to decide whether to adopt a
diagnostic test or treatment. The threshold for adoption is
thought to be somewhere between $20 000 (£11 300, €16 500)/
QALY and $100 000/QALY, with thresholds of $50-60 000/
QALY frequently proposed.6–9
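The arithmetic behind such a ratio is straightforward. As a minimal sketch (a hypothetical function with made-up values, not taken from the paper), the incremental cost effectiveness ratio divides the extra cost of a new intervention by the extra QALYs it yields, with the two one-sided cases labelled separately:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost effectiveness ratio in $/QALY, or a dominance label."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost < 0 and d_qaly > 0:
        return "cost saving"  # cheaper and more effective than the comparator
    if d_cost > 0 and d_qaly < 0:
        return "dominated"    # dearer and less effective than the comparator
    return d_cost / d_qaly
```

For example, a treatment costing $60 000 and yielding 3 QALYs, against a comparator costing $20 000 and yielding 1 QALY, gives $40 000/2 = $20 000 per QALY gained.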
Regardless of the true value of the willingness of society to
pay, studies of healthcare interventions would be expected to
report a wide range of incremental cost effectiveness ratios.
When published ratios cluster around a proposed threshold, bias
may exist, and health policies based on their values may be
flawed.
To describe the distribution of reported incremental cost
effectiveness ratios and characteristics of studies associated with
favourable ratios, we systematically reviewed cost effectiveness
studies in health care that used QALYs as an outcome measure.
We hypothesised that authors tend to report favourable
incremental cost effectiveness ratios, such as those below
$50 000 per QALY.
Methods
We conducted a systematic literature search of Medline,
HealthSTAR, CancerLit, Current Contents Connect (all editions),
and EconLit databases for all original cost effectiveness analyses
published in English between 1976 and 2001 that expressed
health outcomes in QALYs.10–13 Cost effectiveness analyses are
reported as dollars per QALY.1
We used a standard data
collection form, and two reviewers independently evaluated each
study and abstracted the data. Disagreements were resolved by
consensus. Details of the Tufts-NEMC CEA Registry (formerly
the Harvard School of Public Health CEA Registry) are available
online (http://tufts-nemc.org/cearegistry).
For each article, we documented the name of the journal, the
year of publication, the disease category, and the country where
the study was carried out. We used the Science Citation Index
database to assign an impact factor for the year before
publication to each journal. The sources of funding were
identified as “industry” (partial or complete funding by a
pharmaceutical or medical device company indicated in the
manuscript) or “non-industry”. Studies for which a funding source
was not listed were identified as “not specified”. We also assigned
a quality score to each article, ranging from 1 (low) to 7 (high),
based on the overall quality of the study methods, assumptions,
and reporting practices.10
Because cost effectiveness analyses often compare several
programmes and include scenarios specific to patient subgroups
or settings, each study may have contributed more than one cost
effectiveness ratio. All cost effectiveness ratios were converted to
US dollars at the exchange rate prevalent in the year of
publication.14 Because we wanted to test whether the ratios
targeted certain thresholds of the willingness of society to pay,
such as $50 000/QALY, we did not adjust the ratios to constant
dollars.
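This conversion step can be sketched as follows. The rates below are made-up placeholders, not the Federal Reserve Bank of St Louis monthly series the study actually used:

```python
# Hypothetical year-of-publication exchange rates (US$ per unit of
# foreign currency); illustrative values only.
USD_PER_UNIT = {
    ("GBP", 1999): 1.62,
    ("CAD", 1999): 0.67,
}

def to_usd(ratio, currency, year):
    """Convert a cost effectiveness ratio to US$ of its publication year.

    No adjustment to constant (inflation-corrected) dollars is made,
    matching the study's decision to test clustering at nominal thresholds.
    """
    if currency == "USD":
        return ratio
    return ratio * USD_PER_UNIT[(currency, year)]
```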
Statistical analysis
We analysed the distribution of all incremental cost effectiveness
ratios and of the smallest and largest ratios from each study. We
excluded nine ratios for which both the incremental cost and the
incremental QALYs were negative. Although such interventions
may be economically efficient, decision makers might not want
to adopt interventions associated with reduced health.15
Cite this article as: BMJ, doi:10.1136/bmj.38737.607558.80 (published 22 February 2006)
BMJ Online First bmj.com page 1 of 5
Generalised estimating equations were used to evaluate study
characteristics associated with incremental cost effectiveness
ratios below the threshold values of $20 000, $50 000, and
$100 000, as recommended previously.6 8 9 We used these
equations because they take into account the correlation of cost
effectiveness ratios derived from within the same study.16 We
estimated odds ratios for associations between study
characteristics and the presence of a favourable cost effectiveness
ratio. Adjusted odds ratios were estimated by fitting a
non-parsimonious model that included a priori predictor variables.
We used SAS statistical software version 8.2 for all analyses.
Two sided P values less than 0.05 were considered significant.
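The crude odds ratios reported in table 2 can be reproduced from 2×2 counts with a short standard-library sketch. The counts below are illustrative, not the study's data, and this crude calculation ignores within-study clustering; the adjusted ratios additionally require a GEE fit (e.g. SAS PROC GENMOD, as used here, or an equivalent routine):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf (log-scale) 95% confidence interval.

    2x2 table: a = exposed with event,   b = exposed without event,
               c = unexposed with event, d = unexposed without event.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: industry-funded ratios below vs at/above
# $20 000/QALY, against non-industry ratios.
or_, lo, hi = odds_ratio_ci(120, 55, 300, 305)
```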
Results
We screened more than 3300 study abstracts and identified 533
original cost-utility analyses. Thirty nine studies were excluded
because they did not report numerical incremental cost
effectiveness ratios. In total, 1433 cost effectiveness ratios were
reported in these 494 studies, with a median of 2.0 (interquartile
range 1-3) and a range of 1-20 ratios per study. Overall, 130
incremental cost effectiveness ratios (9%) were reported as cost
saving (they saved money and improved health simultaneously),
124 (9%) were dominated by their comparators (had worse
health outcomes and increased costs), and 1179 (82%) increased
costs but improved health outcomes.
Most studies were published in the 1990s (table 1). The citation
impact factor in the year before publication was available for
449 studies (91%). Cardiovascular and infectious disease
interventions were the most commonly studied. Most studies
were from the United States. About 18% were sponsored by
industry, almost half were sponsored by non-industry sources,
and sponsorship could not be determined in 34% of studies.
Figure 1 shows the frequency distribution of all 1433
incremental cost effectiveness ratios. The median (interquartile
range) ratio per QALY was $20 133 ($4520-74 400).
Approximately half of the ratios (712; 50%) were below
$20 000/QALY, two thirds (974; 68%) were below $50 000/QALY,
and more than three quarters (1129; 79%) were below
$100 000/QALY. When analysed according to study sponsorship,
median (range) ratios per QALY were $13 083 ($3600-33 000) for
those sponsored by industry and $27 400 ($4600-96 600) for
those with non-industry sponsors. The median (range) cost
effectiveness ratio per QALY for studies with unknown
sponsorship was $18 900 ($4960-64 300). Restricting the analysis
to the lowest and highest ratios reported by each study yielded
median ratios of $8784/QALY and $31 104/QALY (fig 2).
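Distribution summaries of this kind reduce to a median plus the share of ratios under each threshold. A standard-library sketch over made-up ratios (not the study's 1433 values) illustrates the calculation:

```python
from statistics import median

# Hypothetical $/QALY ratios pooled across studies (illustrative only)
ratios = [4520, 8784, 13083, 20133, 27400, 31104, 74400, 120000, 250000]

summary = {"median": median(ratios)}
for t in (20_000, 50_000, 100_000):
    # fraction of ratios falling strictly below each proposed threshold
    summary[f"below_{t}"] = sum(r < t for r in ratios) / len(ratios)
```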
Several study characteristics were associated with reporting
incremental cost effectiveness ratios below one or all three
thresholds (table 2). The more quoted journals with a citation
impact factor above 4 were less likely to publish ratios below
$20 000/QALY (crude odds ratio 0.60, 95% confidence interval
0.42 to 0.86) or $50 000/QALY (crude 0.56, 0.38 to 0.82) than
less quoted journals with a lower impact factor. However, this
finding was not significant within the multivariable model (table
2).
Studies funded by industry were more likely to report cost
effectiveness ratios less than $20 000/QALY (adjusted odds ratio
2.1, 1.3 to 3.3), $50 000/QALY (3.2, 1.8 to 5.7), or
$100 000/QALY (3.3, 1.6 to 6.8) than studies funded by
non-industry sources (table 2). Studies carried out in the US and
Europe were significantly less likely to find favourable
incremental cost effectiveness ratios than studies carried out
elsewhere. Studies with quality scores for methodology above 5.5
were significantly less likely to report ratios below $20 000/QALY
(0.48, 0.33 to 0.70) and $50 000/QALY (0.57, 0.39 to 0.83). Within the
multivariable model, the association with quality remained
significant only for cost effectiveness ratios below $20 000/
QALY (adjusted odds ratio 0.58, 0.37 to 0.91; table 2).
Discussion
About half of all cost effectiveness studies published over a 25
year period reported highly favourable incremental cost
Table 1 Characteristics of 494 cost-utility analyses of health interventions
published between 1976 and 2001
Study characteristic No. (%)
Publication year
1976-91 47 (9)
1992-6 125 (25)
1997-2001 322 (65)
Journal impact factor*
<2 157 (32)
2-4 137 (28)
>4 155 (31)
Not available 45 (9)
Disease category
Cardiovascular 110 (22)
Endocrine 30 (6)
Infectious 94 (19)
Musculoskeletal 21 (4)
Neoplastic 76 (15)
Neurological or psychiatric 43 (9)
Other 120 (24)
Sponsorship or funding source
Non-industry 240 (49)
Industry† 88 (18)
Not specified 166 (34)
Region of study
Europe 118 (24)
United States 306 (62)
Other‡ 70 (14)
Methodological quality§
1.0-4.0 214 (43)
4.5-5.0 159 (32)
5.5-7.0 121 (25)
*Impact factor for that journal in the year before publication of the study.
†Funding by a pharmaceutical or medical device manufacturer.
‡Canada 41 (59%), Australia 18 (26%), Japan 2 (3%), New Zealand 2 (3%), South Africa 2
(3%), other 5 (7%).
§Mean quality scores for the two reviewers.
Fig 1 Frequency distribution of 1433 incremental cost effectiveness ratios for
health interventions (x axis: incremental cost effectiveness ratio in $1000/QALY,
from cost saving through >200 to dominated; thresholds of $20 000, $50 000, and
$100 000/QALY marked)
effectiveness ratios of less than $20 000/QALY. More than half
of the highest ratios reported by each study were below
$50 000/QALY. In multivariable analysis, location of the study,
methodological quality, and sponsorship were associated with a
favourable cost effectiveness ratio. Studies sponsored by industry
were more than twice as likely as studies sponsored by
non-industry sources to report ratios below $20 000/QALY and
over three times more likely to report ratios below $50 000/
QALY or $100 000/QALY. Studies sponsored by industry were
also more likely to be of lower methodological quality and to be
published in journals with lower impact factors.
Few studies have described the distribution of reported
incremental cost effectiveness ratios.12 17 We reviewed many
studies, but we restricted our analysis to studies that measured
health outcomes with QALYs. Although the QALY measure
remains controversial, it has been endorsed by authoritative
bodies, and it potentially allows cost effectiveness analysis to
assess both allocative and technical efficiency.1 18 19
It would be interesting to
examine whether the use of alternative measures of health
outcome results in more or less favourable cost effectiveness
ratios. A limitation of our analysis is that some cost effectiveness
analyses used in decision making may not have been published.5
However, analyses of cost effectiveness assessments submitted to
the National Institute for Health and Clinical Excellence (NICE)
in the United Kingdom found that cost effectiveness ratios
submitted by manufacturers were significantly lower than
analyses of identical technologies performed by assessors from an
academic centre.20
Publication bias
We found relatively few published incremental cost effectiveness
ratios between $50 000/QALY and $100 000/QALY; many
were below $20 000/QALY and some were above $100 000/
QALY. This indicates that cost effectiveness analyses tend to
report “positive” or “negative” results but not intermediate
results. There are three possible explanations for these findings.
Firstly, they may reflect the true distribution of cost effectiveness
Fig 2 Frequency distribution of lowest (brown) and highest (white) incremental
cost effectiveness ratios in each study (x axis: incremental cost effectiveness
ratio in $1000/QALY, from cost saving through >200 to dominated; thresholds of
$20 000, $50 000, and $100 000/QALY marked)
Table 2 Characteristics of studies associated with favourable incremental cost effectiveness ratios according to three threshold values. Values are odds ratios
(95% confidence intervals)
Study characteristic
Crude OR (95% CI) Adjusted OR (95% CI)*
<$20 000/QALY <$50 000/QALY <$100 000/QALY <$20 000/QALY <$50 000/QALY <$100 000/QALY
Publication year
1976-91 1.6 (0.98 to 2.7) 1.4 (0.80 to 2.4) 1.2 (0.67 to 2.3) 1.6 (0.96 to 2.7) 1.3 (0.76 to 2.3) 1.2 (0.61 to 2.2)
1992-6 1.3 (0.94 to 1.9) 1.4 (0.93 to 2.3) 1.1 (0.68 to 1.6) 1.3 (0.87 to 1.8) 1.3 (0.87 to 1.9) 1.0 (0.64 to 1.6)
1997-2001 1.0 1.0 1.0 1.0 1.0 1.0
Journal impact factor†
<2 1.0 1.0 1.0 1.0 1.0 1.0
2-4 0.62 (0.42 to 0.91) 0.62 (0.41 to 0.94) 0.59 (0.38 to 0.94) 0.75 (0.50 to 1.1) 0.82 (0.53 to 1.3) 0.77 (0.47 to 1.2)
>4 0.60 (0.42 to 0.86) 0.56 (0.38 to 0.82) 0.83 (0.53 to 1.3) 0.95 (0.63 to 1.4) 0.81 (0.52 to 1.3) 1.1 (0.66 to 1.9)
Disease category
Cardiovascular 1.0 1.0 1.0 1.0 1.0 1.0
Endocrine 1.3 (0.68 to 2.6) 1.2 (0.58 to 2.5) 1.3 (0.58 to 3.0) 1.2 (0.56 to 2.4) 1.1 (0.52 to 2.3) 1.2 (0.53 to 2.7)
Infectious 1.1 (0.66 to 1.7) 0.79 (0.48 to 1.3) 0.74 (0.43 to 1.3) 1.0 (0.64 to 1.7) 0.75 (0.44 to 1.3) 0.71 (0.39 to 1.3)
Musculoskeletal 1.4 (0.60 to 3.3) 1.3 (0.51 to 3.1) 1.4 (0.50 to 3.7) 1.1 (0.43 to 2.7) 0.89 (0.34 to 2.3) 1.1 (0.37 to 3.1)
Neoplastic 0.91 (0.56 to 1.5) 0.79 (0.46 to 1.3) 0.77 (0.42 to 1.4) 0.78 (0.47 to 1.3) 0.64 (0.37 to 1.1) 0.69 (0.36 to 1.3)
Neurological/psychiatric 0.76 (0.40 to 1.5) 0.78 (0.40 to 1.5) 0.66 (0.31 to 1.4) 0.75 (0.39 to 1.4) 0.70 (0.34 to 1.4) 0.61 (0.27 to 1.4)
Other 1.2 (0.75 to 1.8) 0.67 (0.42 to 1.1) 0.52 (0.31 to 0.88) 1.0 (0.63 to 1.6) 0.53 (0.31 to 0.88) 0.49 (0.27 to 0.86)
Study funding source‡
Non-industry 1.0 1.0 1.0 1.0 1.0 1.0
Industry 2.2 (1.4 to 3.4) 3.5 (2.0 to 6.1) 3.4 (1.6 to 7.0) 2.1 (1.3 to 3.3) 3.2 (1.8 to 5.7) 3.3 (1.6 to 6.8)
Not specified 1.3 (0.95 to 1.9) 1.5 (1.1 to 2.2) 1.4 (0.93 to 2.1) 1.3 (0.89 to 1.8) 1.5 (1.0 to 2.1) 1.5 (0.97 to 2.2)
Region of study
Europe 0.50 (0.28 to 0.89) 0.43 (0.21 to 0.87) 0.46 (0.21 to 1.0) 0.59 (0.33 to 1.1) 0.42 (0.21 to 0.86) 0.43 (0.19 to 0.96)
United States 0.35 (0.21 to 0.57) 0.29 (0.16 to 0.55) 0.33 (0.16 to 0.66) 0.44 (0.26 to 0.76) 0.35 (0.18 to 0.67) 0.33 (0.16 to 0.68)
Other§ 1.0 1.0 1.0 1.0 1.0 1.0
Methodological quality¶
1.0-4.0 1.0 1.0 1.0 1.0 1.0 1.0
4.5-5.0 0.92 (0.64 to 1.3) 0.95 (0.64 to 1.4) 0.96 (0.62 to 1.5) 1.0 (0.70 to 1.5) 1.1 (0.70 to 1.6) 1.0 (0.63 to 1.6)
5.5-7.0 0.48 (0.33 to 0.70) 0.57 (0.39 to 0.83) 0.82 (0.52 to 1.3) 0.58 (0.37 to 0.91) 0.72 (0.45 to 1.2) 0.90 (0.51 to 1.6)
QALY, quality adjusted life year.
*Adjusted for all other study characteristics.
†Impact factor in the year before publication.
‡Funding by a pharmaceutical or medical device manufacturer.
§Canada 41 (59%), Australia 18 (26%), Japan 2 (3%), New Zealand 2 (3%), South Africa 2 (3%), other 5 (7%).
¶Mean score from two reviewers.
ratios for healthcare interventions. Perhaps interventions that
manufacturers deem to be economically unattractive are not
brought to market.21 Secondly, analysts may not be interested in
studying interventions with mid-range cost effectiveness ratios, or
some journals may not want to publish such studies. Thirdly,
some cost effectiveness analyses may be modelled to yield
favourable ratios, or studies with unfavourable ratios may be
suppressed. These findings do not tell us whether authors (and
their sponsors) fail to report undesirable cost effectiveness results
while reporting those that are positive. It is unclear whether
publication bias occurs at a conscious or unconscious level. In any
case, our results support concerns about the presence of
significant and persistent bias in both the conduct and reporting
of cost effectiveness analyses.22 23 It could be argued that all cost
effectiveness analyses should be registered before they start, as
for randomised clinical trials, but this may be unrealistic given
the way they are currently conducted.24
If bias is an important explanation for our findings, our
results indicate that the “target” cost effectiveness ratio for a
healthcare intervention is between $20 000/QALY and
$50 000/QALY. The true willingness of society to pay for an
extra QALY is unknown, although some have tried to derive this
willingness from economic principles, or to infer it from policy
level decisions.25–27 In the US, the threshold most often used is
$50 000/QALY, based on Medicare’s coverage of dialysis,
although the criteria for selecting this value have been
criticised.8 26 Furthermore, interventions may be adopted despite
having an unfavourable ratio if other concerns such as disease
burden and health equity are dealt with.25
Recent attempts to standardise the conduct and reporting of
economic analyses and modelling studies may help prevent the
manipulation of studies.1 18 28–30 Such guidelines offer the
producers of cost effectiveness analyses, reviewers, readers, and
journal editors a framework against which they can assess
analyses, although the complexity of models and publication bias
remain important issues.5 8 22 Electronic publishing could
enhance transparency in modelling by making technical
appendices available and improving the presentation of results.3 23
Furthermore, distribution of the underlying decision analysis
models to the public should be considered.
Industry sponsorship and the location of the study do not
fully explain the tendency for incremental cost effectiveness
ratios to fall below desirable thresholds. Fewer than 20% of
studies reviewed were funded by industry, and another 33% did not
specify their source of funding. Moreover, the median ratio for
studies not sponsored by industry was well below the
$50 000/QALY threshold. The lower ratios found in US studies
cannot be fully accounted for by foreign exchange rates and
inflation because the US results were similar to those found in
European studies. The methodological quality of studies is
another factor that should be explored in future research.
Journal editors and reviewers can help reduce publication
bias.23 31–33 Potential conflicts of interest of study sponsors and
authors need to be scrutinised.23 34–36 One approach is to restrict
the publication of cost effectiveness analyses funded by industry
if at least one author has direct financial ties with the sponsoring
company, as adopted by the New England Journal of Medicine in
1994.37 Journal editors may show bias by publishing studies with
positive results but not studies with negative results, although this
practice may not be common.33 38–41 One study proposes that
differences between economic analyses cannot be explained by
selective submission or editorial selection bias alone and
probably reflect a more fundamental difference in the studies.42
Conclusions
More rigour and openness are needed in the discipline of health
economics before decision makers and the public can be
confident that cost effectiveness analyses are conducted and
published in an unbiased manner. These considerations are a
prerequisite for using these analyses to compare health
management strategies. A heightened awareness of the limitations
of cost effectiveness analyses and potentially influential factors
may help users to interpret the conclusions of these analyses.
The paper was presented in abstract form at the Fifth International
Congress on Peer Review and Biomedical Publication in Chicago, IL, 16-18
September 2005.
Contributors: All authors contributed to the conception and design of the
project, revised the article critically for important intellectual content, and
provided final approval of the final version. CMB, DRU, JGR, and AB
analysed and interpreted the data and helped to draft the article. CMB had
access to all of the data and takes responsibility for the integrity of the data
and the accuracy of the data analysis. CMB and PJN are guarantors.
Funding: Agency for Health Care Research and Quality (RO1 HS10919).
CMB and JGR are recipients of a phase 2 clinician scientist award and a new
investigator award, both from the Canadian Institutes of Health Research.
DRU holds a career scientist award from the Ontario Ministry of Health.
Competing interests: None declared.
1 Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-effectiveness in health and medicine.
New York: Oxford University Press, 1996.
2 Granata AV, Hillman AL. Competing practice guidelines: using cost-effectiveness
analysis to make optimal decisions. Ann Intern Med 1998;128:56-63.
3 Pignone M, Saha S, Hoerger T, Lohr KN, Teutsch S, Mandelblatt J. Challenges in
systematic reviews of economic analyses. Ann Intern Med 2005;142:1073-9.
4 Laupacis A. Incorporating economic evaluations into decision-making: the Ontario
experience. Med Care 2005;43(suppl 7):15-9.
5 Hill SR, Mitchell AS, Henry DA. Problems with the interpretation of
pharmacoeconomic analyses: a review of submissions to the Australian pharmaceutical
benefits scheme. JAMA 2000;283:2116-21.
6 Laupacis A, Feeny D, Detsky AS, Tugwell PX. How attractive does a new technology
have to be to warrant adoption and utilization? Tentative guidelines for using clinical
and economic evaluations. CMAJ 1992;146:473-81.
7 Owens DK. Interpretation of cost-effectiveness analyses. J Gen Intern Med
1998;13:716-7.
8 Evans C, Tavakoli M, Crawford B. Use of quality adjusted life years and life years gained
as benchmarks in economic evaluations: a critical appraisal. Health Care Manage Sci
2004;7:43-9.
9 Eichler HG, Kong SX, Gerth WC, Mavros P, Jonsson B. Use of cost-effectiveness
analysis in health-care resource allocation decision-making: how are cost-effectiveness
thresholds expected to emerge? Value Health 2004;7:518-28.
10 Neumann PJ, Stone PW, Chapman RH, Sandberg EA, Bell CM. The quality of
reporting in published cost-utility analyses, 1976-1997. Ann Intern Med 2000;132:964-72.
11 Neumann PJ, Greenberg D, Olchanski NV, Stone PW, Rosen AB. Growth and quality of
the cost utility literature, 1976-2001. Value Health 2005;8:3-9.
12 Chapman RH, Stone PW, Sandberg EA, Bell C, Neumann PJ. A comprehensive league
table of cost-utility ratios and a sub-table of “panel-worthy” studies. Med Decis Making
2000;20:451-67.
What is already known on this topic
Cost effectiveness analysis is widely used to inform policy
makers about the efficient allocation of resources
Various thresholds for cost effectiveness ratios have been
proposed to identify good value, but the distribution of
published ratios with respect to these thresholds has not
been investigated
What this study adds
Two thirds of published cost effectiveness ratios were below
$50 000 per quality adjusted life year (QALY) and only 21%
were above $100 000/QALY
Published cost effectiveness analyses are of limited use in
identifying health interventions that do not meet popular
standards of “cost effectiveness”
Research
page4of5 BMJ Online First bmj.com
13 Greenberg D, Rosen AB, Olchanski NV, Stone PW, Nadai J, Neumann PJ. Delays in
publication of cost utility analyses conducted alongside clinical trials: registry analysis.
BMJ 2004;328:1536-7.
14 Monthly exchange rates series. Federal Reserve Bank of St Louis. http://
research.stlouisfed.org/fred2/categories/15 (accessed 9 Feb 2006).
15 O’Brien BJ, Gertsen K, Willan AR, Faulkner LA. Is there a kink in consumers’
threshold value for cost-effectiveness in health care? Health Econ 2002;11:175-80.
16 Zeger SL, Liang KY, Albert PS. Models for longitudinal data: a generalized estimating
equation approach. Biometrics 1988;44:1049-60.
17 Tengs TO, Adams ME, Pliskin JS, Safran DG, Siegel JE, Weinstein MC, et al.
Five-hundred life-saving interventions and their cost-effectiveness. Risk Anal
1995;15:369-90.
18 Canadian Coordinating Office for Health Technology Assessment. Common drug review
submission guidelines. Ottawa: Canadian Coordinating Office for Health Technology
Assessment (CCOHTA), 2005. www.ccohta.ca/CDR/cdr_pdf/cdr_submiss_guide.pdf
(accessed 9 Feb 2006).
19 Oliver A, Healey A, Donaldson C. Choosing the method to match the perspective:
economic assessment and its implications for health-services efficiency. Lancet
2002;359:1771-4.
20 Miners AH, Garau M, Fidan D, Fischer AJ. Comparing estimates of cost effectiveness
submitted to the National Institute for Clinical Excellence (NICE) by different
organisations: retrospective study. BMJ 2005;330:65.
21 Neumann PJ, Sandberg EA, Bell CM, Stone PW, Chapman RH. Are pharmaceuticals
cost-effective? A review of the evidence. Health Aff (Millwood) 2000;19:92-109.
22 Freemantle N, Mason J. Publication bias in clinical trials and economic analyses.
Pharmacoeconomics 1997;12:10-6.
23 Hillman AL, Eisenberg JM, Pauly MV, Bloom BS, Glick H, Kinosian B, et al. Avoiding
bias in the conduct and reporting of cost-effectiveness research sponsored by
pharmaceutical companies. N Engl J Med 1991;324:1362-5.
24 Rennie D. Peer review in Prague. JAMA 1998;280:214-5.
25 Devlin N, Parkin D. Does NICE have a cost-effectiveness threshold and what other
factors influence its decisions? A binary choice analysis. Health Econ 2004;13:437-52.
26 Hirth RA, Chernew ME, Miller E, Fendrick AM, Weissert WG. Willingness to pay for a
quality-adjusted life year: in search of a standard. Med Decis Making 2000;20:332-42.
27 Garber AM, Phelps CE. Economic foundations of cost-effectiveness analysis. J Health
Econ 1997;16:1-31.
28 Sculpher M, Fenwick E, Claxton K. Assessing quality in decision analytic
cost-effectiveness models. A suggested framework and example of application.
Pharmacoeconomics 2000;17:461-77.
29 Philips Z, Ginnelly L, Sculpher M, Claxton K, Golder S, Riemsma R, et al. Review of
guidelines for good practice in decision-analytic modelling in health technology
assessment. Health Technol Assess 2004;8:1-172.
30 Weinstein MC, O’Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, et al.
Principles of good practice for decision analytic modeling in health-care evaluation:
report of the ISPOR task force on good research practices - modeling studies. Value
Health 2003;6:9-17.
31 Sharp DW. What can and should be done to reduce publication bias? The perspective
of an editor. JAMA 1990;263:1390-1.
32 Chalmers TC, Frank CS, Reitman D. Minimizing the three stages of publication bias.
JAMA 1990;263:1392-5.
33 Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA
1990;263:1385-9.
34 Haivas I, Schroter S, Waechter F, Smith R. Editors’ declaration of their own conflicts of
interest. Can Med Assoc J 2004;171:475-6.
35 Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and
research outcome and quality: systematic review. BMJ 2003;326:1167-70.
36 Friedberg M, Saffran B, Stinson TJ, Nelson W, Bennett CL. Evaluation of conflict of
interest in economic analyses of new drugs used in oncology. JAMA 1999;282:1453-7.
37 Kassirer JP, Angell M. The journal’s policy on cost-effectiveness analyses. N Engl J Med
1994;331:669-70.
38 Ray JG. Judging the judges: the role of journal editors. QJM 2002;95:769-74.
39 Murray MD, Birt JA, Manatunga AK, Darnell JC. Medication compliance in elderly
outpatients using twice-daily dosing and unit-of-use packaging. Ann Pharmacother
1993;27(5):616-21.
40 Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H, Jr. Publication bias and clinical
trials. Control Clin Trials 1987;8:343-53.
41 Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, et al. Publication
bias in editorial decision making. JAMA 2002;287:2825-8.
42 Baker CB, Johnsrud MT, Crismon ML, Rosenheck RA, Woods SW. Quantitative
analysis of sponsorship bias in economic studies of antidepressants. Br J Psychiatry
2003;183:498-506.
(Accepted 22 December 2005)
doi 10.1136/bmj.38737.607558.80
St Michael’s Hospital, Toronto, Ontario, Canada M5B 1W8
Chaim M Bell assistant professor of medicine and health policy, management, and
evaluation
Joel G Ray assistant professor of medicine and health policy, management, and evaluation
Ahmed Bayoumi assistant professor of medicine and health policy, management, and
evaluation
University Health Network, Toronto
David R Urbach assistant professor of medicine and health policy, management, and
evaluation
University of Michigan, Ann Arbor, MI, USA
Allison B Rosen assistant professor of medicine
Health Systems Management, Ben-Gurion University of the Negev, Beersheba,
Israel
Dan Greenberg senior lecturer
Center for the Evaluation of Value and Risk in Health, Institute for Clinical
Research and Health Policy Studies, Tufts University School of Medicine, Boston,
USA
Peter J Neumann director
Correspondence to: C M Bell, St Michael’s Hospital, Toronto, Ontario, Canada
M5B 1W8 bellc@smh.toronto.on.ca
BMJ Online First, bmj.com