Campbell Systematic Reviews. 2021;17:e1181. wileyonlinelibrary.com/journal/cl2
DOI: 10.1002/cl2.1181
SYSTEMATIC REVIEWS
Interventions to promote technology adoption in firms: A systematic review

David Alfaro-Serrano¹ | Tanay Balantrapu² | Ritam Chaurey³ | Ana Goicoechea² | Eric Verhoogen⁴

¹Cornerstone Research, New York, New York, USA
²World Bank Group, Washington, District of Columbia, USA
³School of Advanced International Studies (SAIS), Johns Hopkins University, Washington, District of Columbia, USA
⁴Department of Economics and School of International and Public Affairs, Columbia University, New York, New York, USA

Correspondence
Ana Goicoechea, World Bank Group, 1818 H St NW, Washington, DC 20433, USA.
Email: agoicoechea@worldbank.org

Funding information
Investment Climate Advisory Services; United States Agency for International Development

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
© 2021 The Authors. Campbell Systematic Reviews published by John Wiley & Sons Ltd on behalf of The Campbell Collaboration
Abstract
Background: The adoption of improved technologies is generally associated with better
economic performance and development. Despite its desirable effects, the process of
technology adoption can be quite slow and market failures and other frictions may
impede adoption. Interventions in market processes may be necessary to promote the
adoption of beneficial technologies. This review systematically identifies and sum-
marizes the evidence on the effects of interventions that shape the incentives of firms
to adopt new technologies. Following Foster and Rosenzweig, technology is defined as
“the relationship between inputs and outputs,” and technology adoption as “the use of
new mappings between input and outputs and the corresponding allocations of inputs
that exploit the new mappings.” The review focuses on studies that include direct
evidence on technology adoption, broadly defined, as an outcome. The term interven-
tion refers broadly to sources of exogenous variation that shape firms' incentives to
adopt new technologies, including public policies, interventions carried out by private
institutions (such as NGOs), experimental manipulations implemented by academic
researchers trying to understand technology adoption, and natural experiments.
Objective: The objective of this review is to answer the following research questions:
1. To what extent do interventions affect technology adoption in firms?
2. To what extent does technology adoption affect profits, employment, pro-
ductivity, and yields?
3. Are these effects heterogeneous across sectors, firm size, countries, workers'
skill level, or workers' gender?
Selection Criteria: To be included, papers had to meet the inclusion criteria de-
scribed in detail in Section 3.1 which is grouped into four categories: (1) Partici-
pants, (2) Interventions, (3) Methodology, and (4) Outcomes.
Regarding participants, our focus was on firms, and we omitted studies at the
country or region level. In terms of interventions, we included studies that analyzed
a source of exogenous variation in incentives for firms to adopt new technologies
and estimated their effects. Thus, we left out studies that only looked at correlates
of technology adoption, without a credible strategy to establish causality, and only
included studies that used experimental or quasi-experimental methods. Regarding
outcomes, papers were included only if they estimated effects of interventions
(broadly defined) on technology adoption, although we also considered other firm
outcomes as secondary outcomes in studies that reported them.
Search Methods: The first step in selecting the studies to be included in the sys-
tematic review was to identify a set of candidate papers. This set included both
published and unpublished studies. To look for candidate papers, we implemented
an electronic search and, in a subsequent step, a manual search.
The electronic search involved running a keyword search on the most commonly
used databases for published and unpublished academic studies in the broad topic
area. The words and their Boolean combinations were carefully chosen (more de-
tails in Section 3.2). The selected papers were initially screened on title and abstract.
If papers passed this screen, they were screened on full text. Those studies that met
the stated criteria were then selected for analysis.
The manual search component involved asking for references from experts and
searching references cited by papers selected through the electronic search. These
additional papers were screened based on title and abstract and the remaining were
screened on full text. If they met the criteria they were added to the list of selected
studies.
Data Collection and Analysis: For the selected studies, the relevant estimates of
effects and their associated standard errors (SEs) were entered into an Excel
spreadsheet along with other related information such as sample size, variable type,
and duration for flow variables. Other information such as authors, year of pub-
lication, and country and/or region where the study was implemented was also
included in the spreadsheet.
Once the data were entered for each of the selected studies, the information on
sample size, effect size and SE of the effect size was used to compute the stan-
dardized effect size for each study to make the results comparable across studies.
For those studies for which relevant data were not reported, we contacted the
authors by email and incorporated the information they provided. Forest plots were
then generated and within-study pooled average treatment effects were computed
by outcome variable.
In addition, an assessment of reporting on potential biases was conducted including
(1) reporting on key aspects of selection bias and confounding, (2) reporting on
spillovers of interventions to comparison groups, (3) reporting of SEs, and (4) re-
porting on Hawthorne effects and the collection of retrospective data.
Results: The electronic and manual searches resulted in 42,462 candidate papers.
Of these, 80 studies were ultimately selected for the review after screenings to
apply the selection criteria. Relevant data were extracted for analysis from these 80
studies. Overall, 1108 regression coefficients across various interventions and
outcomes were included in the analysis, representing a total of 4,762,755 firms.
Even though the search methods included both high-income and developing
countries, only 1 of the 80 studies included in the analysis was in a high-income
country, while the remaining 79 were in developing countries.
We discuss the results in two parts, looking at firms in manufacturing and services
separately from firms (i.e., farms) in agriculture. In each case, we consider both
technology adoption and other firm outcomes.
Authors' Conclusions: Overall, our results suggest that some interventions led to
positive impacts on technology adoption among firms across manufacturing, ser-
vices, and agriculture sectors, but given the wide variation in the time periods,
contexts, and study methodologies, the results are hard to generalize. The effects of
these interventions on other firm performance measures such as farm yields, firm
profits, productivity, and employment were mixed.
Policymakers must be careful in interpreting these results as a given intervention
may not work equally well across contexts and may need to be adjusted to each
specific regional context. There is great need for more research on the barriers to
technology adoption by firms in developing countries and interventions that may
help alleviate these obstacles. One major implication for researchers from our re-
view is that there is a need to carefully measure technology adoption.
PLAIN LANGUAGE SUMMARY
Interventions to promote technology adoption in firms: limited
evidence of positive effects
Some interventions lead to an increase in technology adoption
among firms across manufacturing, services, and agriculture sectors, but
these effects are context-specific, as well as intervention-specific. The
effects of these interventions on other firm performance measures such
as farm yields, firm profits, productivity, and employment are mixed.
What is this review about?
This review summarizes the evidence on the effects of inter-
ventions that may affect technology adoption, such as providing
firms with training or grants, or a change in a trade policy. Inter-
ventions can be carried out by governments, private institutions
(such as NGOs), or researchers trying to understand technology
adoption, or they can occur as natural experiments.
What is the aim of this review?
This Campbell systematic review seeks to answer three ques-
tions: To what extent do interventions affect technology adoption in
firms? To what extent does technology adoption affect profits,
employment, productivity, and yields? Are these effects hetero-
geneous across sectors, firm size, countries, workers' skill level, or
workers' gender?
What studies are included?
Included studies had to analyze firms, and examine the effects of
an intervention. Studies at country or regional level were omitted.
Interventions were broadly defined, including the direct provi-
sion of funding for technological adoption (Direct Financial Support),
support to pay for the cost of the adoption projects without directly
providing funding (Indirect Financial Support), nonpecuniary
interventions (Other Direct Support), and rules, policies, and char-
acteristics of the environment that affect agents' incentives (Reg-
ulations and Standards).
Studies had to assess the causal effects of interventions with
experimental and quasi-experimental methods, excluding those that
look at correlations. And they had to have technology adoption as
the primary outcome of interest. This therefore excluded studies that
do not present a measure of technology adoption.
Overall, 80 studies were included in the review, 79 analyzing
effects of technology adoption in developing countries and one in a
high-income country. These studies analyzed the effects on tech-
nology adoption in 4,762,755 firms.
What are the main findings of this review?
Some interventions lead to positive impacts on technology adoption
among firms, but these effects are context-specific and intervention-
specific. In manufacturing and services, 19 of the 33 studies analyzed find
positive and statistically significant standardized effects on technology
adoption. In agriculture, 20 of the 47 studies analyzed find positive and
statistically significant standardized effect sizes.
Most studies focused on analyzing the effects of “Other Direct
Support,” which includes nonpecuniary interventions such as exten-
sion services, training, consulting, and SMS reminders. Overall, one
group of interventions cannot be said to lead to a higher impact than
others. Furthermore, the effects of these interventions on other firm
performance measures such as farm yields, firm profits, productivity,
and employment were mixed.
Due to the wide range of interventions and outcomes used
across the studies analyzed, it is not possible to assess whether ef-
fects are similar across groups, or to calculate an average treatment
effect across studies.
ALFAROSERRANO ET AL.
|
3of36
What do the findings of this review mean?
A statistically insignificant finding for a type of intervention in a
particular context does not mean that all interventions of that type
are not worthy of consideration. Policymakers should pay attention
to how programmes can be improved and better tailored to parti-
cular environments, to achieve better outcomes.
Areas of future research could include both an understanding of
barriers to technology adoption, and interventions that lead to in-
creased adoption through removal of those barriers. This should
analyze interventions that are less studied. For example, “Indirect
Financial Support” (including interventions such as access to credit
and incentive payments), as well as “Regulations and Standards,” have
been less studied for their effects on technology adoption.
Studies should also provide all the information required to
compute standardized mean differences. They should also improve
reporting on heterogeneous effects and the Hawthorne effect (in
which the monitoring process itself affects behavior).
How up-to-date is this review?
The review authors searched for studies between 2000
and 2020.
1|BACKGROUND
1.1 |The policy issue
The adoption of improved technologies is generally associated with
better economic performance and development. Governments and
development agencies have incorporated the promotion of firms'
competitiveness into their priorities (IADB, 2016; World Bank, 2017)
and recognized that the adoption of modern technologies is one of its
drivers. Two reasons lie behind this interest in competitiveness in
general and technological upgrading in particular: first, the ex-
pectation that technological upgrading by firms will deliver benefits
like higher productivity, more jobs, better wages and better working
conditions at the micro level, and higher growth at the aggregate
level; and second, the idea that the slowness, or even absence, of the
process of technology adoption is due in part to market failures due
to externalities, imperfect information, and coordination problems
that call for public intervention. Furthermore, these market failures
are often more salient in developing countries, leading to lower rates
of new technology adoption, and consequently slower growth rates.
History provides examples of major increases in well-being
broadly defined (including output, output per worker, size, and pro-
ductivity) associated with the widespread adoption of improved
productive methods. Cirera et al. (2017) point to the so-called three
industrial revolutions in manufacturing: the introduction of steam-
powered machines, the adoption of electricity-powered production
methods, and the use of information technologies to automate
manufacturing. In agriculture, the green revolution, which involved
the adoption of new technologies, including high-yielding varieties,
and the increased use of fertilizers and pesticides, had a comparable
impact. The key aspect in these revolutionary transformations was
not the mere invention of new technologies, but their widespread
adoption by firms.
Despite its desirable effects, the process of technology adoption
can be quite slow (Geroski, 2000; Rosenberg, 1972) and market
failures and other frictions may impede adoption (Foster &
Rosenzweig, 2010; Verhoogen, 2020). Such market failures and
frictions impeding technology adoption may also be different for
agricultural firms compared to nonagricultural firms. For example,
information frictions may be more significant for nonagricultural
firms than for agricultural firms. This may in part be because agri-
cultural firms can observe technology adoption by neighboring firms,
but this is often not possible for nonagricultural firms. Furthermore,
nonagricultural firms seldom share information on new beneficial
technology with competing firms. Thus, interventions in market
processes may be necessary to promote the adoption of beneficial
technologies.
The lack of technology adoption in cases in which the potential
gains are clear has been observed in specific industries like textiles
(Bloom et al., 2013) and soccer-ball production (Atkin et al., 2017).
Foster et al. (2008) show that there is large heterogeneity in pro-
ductivity even within narrowly defined industries, which may be the
consequence of the lack of technology adoption by the firms in the
left tail of the distribution.
To help inform governments' actions, this systematic review
aggregates the existing evidence on interventions that induce tech-
nology adoption and their implications for firm outcomes.
1.2 |Potential channels of effects
We think of the effects of interventions as occurring in two stages. In
the first stage, interventions may induce the adoption of better
technologies in the directly affected firms, provided that the
technologies are indeed better (after considering adoption costs) and
that, after providing the treatment, firms are able to notice that
advantage. There are three broad categories of reasons for low
technology adoption: (i) internal to the firm, (ii) on the input side, and
(iii) on the output side (Verhoogen, 2020). First, there may be low
adoption of new technologies because firms may not be profit
maximizing, may lack information, or may not have the capability to
apply the information in practice. These are factors internal to the
firm. Second, on the input side, newer technology often requires
highly skilled workers and highquality inputs, which may be scarce
and expensive. Finally, on the output side, demand conditions may
affect the incentives of firms to adopt (or not) new technologies. For
example, richer customers may have preferences for higherquality
goods, which may require the application of new technologies.
In a second stage, the adoption of a better technology by treated
firms may lead to an increase in total output, output per unit of input,
unit cost, firm-level wages, employment, total factor productivity,
firm survival, and/or exports. Productivity, understood as the inverse
of cost, is expected to increase because the reduction in (quality
adjusted) cost is presumably what makes the new technology
attractive to the firm. Output is expected to increase if a firm can
increase its market share based on the reduction in costs. As a con-
sequence of the expansion in output, firm-level employment is also
generally expected to increase. Wages (although not necessarily
wages relative to other input prices) may also go up after adoption
due to the rise in marginal labor productivity. In general, we expect a
new technology to lead to an increase in inputs in production.
However, we note that it is theoretically possible that some tech-
nological innovations may be input-saving; that is, they may be
labor-, capital-, or land-saving. In fact, because of the possibility that
a new technology may be input-substituting, studies often report
multiple firm performance indicators such as profits, and pro-
ductivity along with changes in inputs (labor, capital, land).
Figure 1 presents the theory of change of the interventions of
interest. The solid boxes represent the intervention and outcomes
and the dashed boxes represent the assumptions. As stated before,
this review is not devoted to a single intervention or technology
adoption outcome. Instead, it includes several interventions inducing
technology adoption in firms (measured in several ways). Despite
these differences, it is useful to provide a general theory of change
describing the way in which the interventions, intermediate out-
comes, and final outcomes relate.
1.3 |Why it is important to do the review
There is a need to aggregate evidence across multiple studies con-
cerning similar economic phenomena to translate research into
policy (Allcott, 2015; Banerjee et al., 2015; Dehejia, 2015;
Meager, 2019). A systematic review allows one to survey the lit-
erature on a given economic phenomenon, identify the knowledge
gaps, and compare effects across contexts, implementers and time,
facilitating the integration of research into policy.
Systematic reviews about technology adoption are mostly fo-
cused on agricultural firms (Fuglie et al., 2019; Obayelu et al., 2017;
Silva et al., 2015; Waddington et al., 2014). There has been little
effort to do the same for other economic sectors, despite the recent
publication of highprofile papers on technology adoption in manu-
facturing (e.g., Atkin et al., 2017; Bloom et al., 2013). Consequently,
existing summaries of evidence for nonagricultural firms are almost
all nonsystematic literature reviews. Our review addresses this
knowledge gap by summarizing the existing evidence on technology
adoption in nonagricultural as well as agricultural firms.
A review close to ours is Piza et al. (2016). The authors sys-
tematically review the evidence about the impact of business
support services for small and medium enterprises (SMEs) in low- and
middle-income countries. Our review complements their findings by
focusing specifically on technology-adoption outcomes for a broader
set of interventions and a broader set of firms and countries. In-
cluding a broader set of firms is important because, even though
SMEs are particularly relevant for outcomes like employment, larger
firms are key for other important outcomes like exports. Considering
also the experience of high-income countries¹ is warranted because
active policies to promote technology adoption are widespread
around the globe, not only in the developing world. The experience
of these countries is relevant for policy making everywhere. There
are also aspects of the technology-adoption process, for instance
knowledge spillovers across firms (Hausmann & Rodrik, 2003), that
carry distinctive implications for the design of policy but that argu-
ably are not as salient in the context of other pro-business inter-
ventions, such as programs to increase formalization and access to
working capital.
¹We use the World Bank's World Development Indicators classification of countries into low, lower middle, upper middle and high. We collectively refer to all countries not in the high-income group as developing.

FIGURE 1 Potential channels of effects

Existing reviews on technology adoption in nonagricultural firms
that do not follow the Campbell guidelines for systematic reviews
include Herbert-Copley (1990), Keller (2004), Oliveira and Martins
(2011), and Foster and Rosenzweig (2010). Herbert-Copley (1990)
reviews case studies of technical change in manufacturing firms in
the 1980s in Latin America. The study assesses the role of the nature
of technology, market structure, government policy, firm character-
istics and the location of the international technological frontier on
the level of technology adoption. Keller (2004) focuses at an ag-
gregate level and on diffusion across countries. Coming from the
information systems literature, Oliveira and Martins (2011) compile
studies on the adoption of information technology at firm level. Their
review does not attempt to compare the results across studies. Close
in spirit to the current review, Foster and Rosenzweig (2010) review
microeconomic studies of the barriers to technology adoption in low
income countries. Among the factors they find to be important are
financial and nonfinancial returns to adoption, individual learning and
social learning, technological externalities, scale economies, school-
ing, credit constraints, risk and incomplete insurance, and departures
from behavioral rules implied by simple models of rationality. The
current review differs in the systematic search to select studies of
interest, and in its coverage of more recent work.
2|OBJECTIVE
The primary objective of this review is to assess the extent to which
interventions affect technology adoption in firms. The secondary objec-
tives are to assess to what extent technology adoption affects other firm
outcomes and whether these effects differ across certain groups.
In particular, the review aims to answer the following research
questions:
1. To what extent do interventions affect technology adoption in
firms?
2. To what extent does technology adoption affect profits, em-
ployment, productivity, and yields?
3. Are these effects heterogeneous across sectors, firm size, coun-
tries, workers' skill level, or workers' gender?
Question 1 refers to the immediate impact of interventions to
promote technology adoption. Question 2 explores the subsequent
impact of technology adoption on other economic outcomes.
Question 3 explores heterogeneous effects across relevant groups.
3|METHODS
3.1 |Criteria for considering studies for this
review
After conducting predefined manual and electronic searches which
used keywords and imposed restrictions in terms of years and lan-
guage covered, we applied screening criteria to identify studies to be
included in the review. These criteria were grouped into four cate-
gories described in detail below and in the published review protocol
(Verhoogen et al., 2018).
3.1.1 |Types of participants considered
To be included in this review, studies must have firms as the unit of
analysis. The term “firm” refers to productive units of any size, in-
cluding single-person businesses and farmers or farms. This focus
leaves out studies of technology adoption at the country or region
level.
This review does not impose any restriction regarding the level
of development of the country in which the intervention takes place.
It includes studies in both high-income and developing countries.
3.1.2 |Types of interventions considered
This review focuses on interventions that induce adoption of a new
technology by a firm. Following Foster and Rosenzweig (2010), we
define technology as “the relationship between inputs and outputs,”
and technology adoption as “the use of new mappings between input
and outputs and the corresponding allocations of inputs that exploit
the new mappings.” This definition of technology adoption is broad,
and includes the overall production plan that firms implement as well
as changes in specific practices.
The term intervention refers broadly to sources of exogenous
variation that shape firms' incentives to adopt new technologies,
including public interventions, interventions carried out by private
institutions (such as NGOs), experimental manipulations deliberately
induced by academic researchers trying to understand technology
adoption, and natural experiments. For instance, Bloom et al. (2016)
study how the inclusion of China into a trade agreement allowing
Chinese imports into Europe impacts technology adoption among
firms. In this case, we consider the inclusion in the trade agreement
as an intervention.
Although this review is not focused on a particular type of policy,
it is possible to provide a broad classification of the interventions we
expect to find. Here we follow the classification presented by Cirera
and Maloney (2017):
Direct financial support. These are interventions that directly
provide funding for technological adoption. This assistance could
take many forms, including access to credit, insurance, subsidies,
in-kind transfers, or cash towards the take-up of technology.
There are studies that explore the impact of access to credit to
firms (Giné & Dean, 2009), access to insurance (Karlan
et al., 2014), in-kind provision of equipment (Atkin et al., 2017; de
Mel et al., 2012), and/or cash (Chudnovsky et al., 2011).
Indirect financial support. These are interventions that help
businesses to pay for the cost of the adoption projects without
directly providing the resources. Loan guarantees (Tan, 2009),
minimum support price (Abate et al., 2018), and changing policy
on the supply chain to reduce the price of inputs (Ogunniyi
et al., 2017) are examples of interventions in this category.
Other direct support. These are nonpecuniary interventions, in-
cluding policies implemented by governments such as technology
extension services, and experimental interventions such as the
direct provision of management consultancy services (Bloom
et al., 2013; Bruhn et al., 2018), awareness interventions (Beaman
et al., 2014), provision of information (Ali & Rahut, 2013; Beaman
et al., 2018; BenYishay et al., 2016; Kondylis et al., 2017), and
training (Feder et al., 2003; Field et al., 2010; Karlan &
Valdivia, 2011).
Regulation and standards. These are rules, policies, and char-
acteristics of the environment that affect agents' incentives, for
example a trade policy that induces technology adoption (Bloom
et al., 2016), or a positive demand shock to technology adoption
induced by a government regulation (Crouzet et al., 2019;
Higgins, 2019).
While the categories above each represent a broad set of in-
terventions, studies may combine interventions across the cate-
gories. For instance, a government policy that helps subsidize take
up of a new input and provides training can be classified under both
direct financial supportand indirect financial support(e.g.,
Ogunniyi et al., 2017). Similarly, a study that includes provision of
loans, as well as training to microfinance clients (e.g., Gine &
Mansuri, 2014) can be classified under “direct financial support” and
“other direct support.”
It is important to note that the focus of this review is the effect
of the interventions on technology adoption by existing firms. In this
sense, the interventions considered are different from interventions
to promote entrepreneurship. We expect that most of the studies
are about the adoption of technologies that are new to the firm, but
not to the world.
Papers that do not focus on an intervention (defined broadly as
above) are not included in this review. For example, the seminal
papers by Conley and Udry (2010) and Suri (2011), while
important and influential for the broader literature, do not focus on
an explicit intervention (defined as above) and have therefore been
excluded.
3.1.3 |Methodologies considered
The review includes studies that take the firm as the unit of analysis
and employ an identification strategy explicitly aiming to estimate
causal effects on technology adoption. Experimental and quasi-
experimental methods of causal inference attempt to disentangle the
effect of the intervention of interest on a given outcome from the
effect of omitted variables that also affect the outcome and correlate
with the intervention. If such omitted variables are present, their
influence on the outcome will be confounded with the influence of
the intervention and the analyst could end up attributing to the
treatment what is really the result of the omitted variable.
Randomized control trials (RCTs) are commonly perceived to be
the gold standard in conducting causal inference. But RCTs are costly
to run and often infeasible. In addition to experimental studies, we
include papers using quasi-experimental methods, including
instrumental variables, regression-discontinuity designs, difference-
in-differences, and propensity-score matching.
We generally require that the comparison group is a set of firms
that received no treatment. Two sorts of exceptions to this rule are
considered acceptable. One is when different groups receive exo-
genously different “doses,” or amounts, of treatment, but no group
receives zero treatment. The other is when the researchers provide
compensation to the comparison group to replicate the non-
intervention situation or to isolate some aspect of the treatment. In
these cases, it is still possible to estimate causal effects cleanly, and
hence they are considered in the review.
3.1.4 |Types of outcome measures considered
The primary outcome of interest for the review is technology
adoption. Measures of adoption can take the form of dichotomous
variables (that, e.g., take value one when a technology is adopted
and zero otherwise) or a count of the number of new practices
adopted. Because the primary objective of this review is to examine
the effects of interventions on technology adoption, we exclude
papers that do not consider a technology adoption outcome.
The secondary outcomes are other firm outcomes that may be
affected by technology adoption. These are variables in the second
step of the causal chain. In particular, we consider measures of
output per unit of input, unit cost, wages, employment, output, total
factor productivity, and profits. Regarding measurement, we require
that (i) output and output per worker are measured in physical or
monetary units; (ii) wages are measured in monetary units; and (iii)
employment is measured in hours or days or number of employees.
Total factor productivity is treated as a dimensionless variable.
Papers analyzing effects on secondary outcomes are included
only if they also present estimates for the primary outcome (tech-
nology adoption).
3.2 |Search methods for identification of studies
The first step in selecting studies for the review was to identify a set
of candidate papers. This set includes both published and un-
published studies. To look for candidate papers, we implemented an
electronic search and, in a subsequent step, a manual search.
3.2.1 |Electronic search
The electronic search included bibliographic databases and specia-
lized journals. We also looked for studies on the websites of key
organizations to ensure good coverage of unpublished literature.
Bibliographic databases:
1. Academic Search Complete (through EBSCO Host).
2. Business Source Complete (through EBSCO Host).
3. EconLit (through EBSCO Host).
4. Ideas Repec (through EBSCO Discovery).
5. PAIS Index (through ProQuest).
6. ProQuest Dissertations & Theses Global.
Websites of key organizations:
1. 3ie database of impact evaluations.
2. African Development Bank.
3. Agricultural Technology Adoption Initiative (ATAI).
4. American Economic Association RCT Registry.
5. Asian Development Bank.
6. InterAmerican Development Bank.
7. Organisation for Economic Co-operation and Development (OECD).
8. UK Department for International Development (DFID).
9. US Agency for International Development (USAID).
10. World Bank Group.
Our electronic search used Boolean operators to express the
following requirements:
1. The studies should include terms related to the outcomes of in-
terest for the study in the title or abstract.
2. The studies should include terms related to the methodology
employed in the title, the abstract, or the main text.
3. The papers should be dated in the year 2000 or later. Similar to
Piza et al. (2016), we impose this limit because we focus on stu-
dies using impactevaluation techniques, which have been widely
adopted in development economics since that time. Even though
it is possible that some papers on technology adoption were
produced before this year, we expect that few address en-
dogeneity issues in the way required to be included in this review.
To keep the electronic search as broad as possible, we did not
differentiate between our primary outcomes (technology adoption)
or secondary outcomes (other firm outcomes). The keyword searches
were piloted multiple times on at least two different databases to
ensure the keywords were able to identify the right set of papers.
The pilots were assessed based on whether some of the manually
selected seminal papers would show up when conducting keyword
searches. The final list of key words was selected once we had found
the combination to be working on at least two different databases.
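To make this pilot criterion concrete, the short sketch below illustrates the kind of check involved. It is not the review's actual code, and the seed titles and function name are hypothetical placeholders: a candidate keyword combination passes if it retrieves the manually selected seminal papers from a given database.

# Illustrative pilot check (hypothetical seed list, not the review's code):
# a keyword combination "works" if it retrieves the pre-selected seminal papers.
seed_titles = {
    "seminal paper on management practices and productivity",
    "seminal paper on technology upgrading in manufacturing",
}

def seed_recall(retrieved_titles, seeds=seed_titles):
    """Share of seed papers appearing among the titles returned by a search."""
    retrieved = {t.strip().lower() for t in retrieved_titles}
    return sum(title in retrieved for title in seeds) / len(seeds)

# A keyword combination would be retained once seed_recall is high
# (e.g., 1.0) on at least two different databases.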
In each electronic database, we first imposed filters for publication
date on or after 2000 and English language. The language filter was
mostly due to resource constraints and the year filter to the fact that
methodologies analyzing causal effects of exogenous variations were not
popular before 2000. Then, following EBSCO host's search syntax, the
following Boolean expression was implemented:
(TI(outcomes) OR AB(outcomes)) AND (TI(methods) OR AB
(methods) OR TX(methods))
The exact syntax used in EBSCO host, ProQuest, and Ideas Re-
pec through EBSCO Discovery is:
(TI(((technolog* OR manag* OR innovat* OR practice*) W4 (adopt* OR diffus* OR chang* OR alter*)) OR productivity OR "output per" OR (unit* W4 cost)) OR AB(((technolog* OR manag* OR innovat* OR practice*) W4 (adopt* OR diffus* OR chang* OR alter*)) OR productivity OR "output per" OR (unit* W4 cost)))
AND
(TI(("quasi experiment*" or quasi-experiment* or "random* control* trial*" or "random* trial*" or RCT or (random* N3 allocat*) or match* or "propensity score" or PSM or "regression discontinuity" or "discontinuous design" or RDD or "difference in difference*" or difference-in-difference* or "diff in diff" or "case control" or cohort or "propensity weighted" or propensity-weighted or "interrupted time series" or "Control* evaluation" or "Control treatment" or "instrumental variable*" or heckman or IV or ((quantitative or "comparison group*" or counterfactual or "counter factual" or counter-factual or experiment*) N3 (design or study or analysis))))
OR AB(("quasi experiment*" or quasi-experiment* or "random* control* trial*" or "random* trial*" or RCT or (random* N3 allocat*) or match* or "propensity score" or PSM or "regression discontinuity" or "discontinuous design" or RDD or "difference in difference*" or difference-in-difference* or "diff in diff" or "case control" or cohort or "propensity weighted" or propensity-weighted or "interrupted time series" or "Control* evaluation" or "Control treatment" or "instrumental variable*" or heckman or IV or ((quantitative or "comparison group*" or counterfactual or "counter factual" or counter-factual or experiment*) N3 (design or study or analysis))))
OR TX(("quasi experiment*" or quasi-experiment* or "random* control* trial*" or "random* trial*" or RCT or (random* N3 allocat*) or match* or "propensity score" or PSM or "regression discontinuity" or "discontinuous design" or RDD or "difference in difference*" or difference-in-difference* or "diff in diff" or "case control" or cohort or "propensity weighted" or propensity-weighted or "interrupted time series" or "Control* evaluation" or "Control treatment" or "instrumental variable*" or heckman or IV or ((quantitative or "comparison group*" or counterfactual or "counter factual" or counter-factual or experiment*) N3 (design or study or analysis)))))
“W4” is a search syntax that stands for “within 4 words of each
other.” Similarly, “N3” stands for “near 3 words of each other.”
For other journal websites the syntax was adapted as ap-
propriate. The electronic searches were run on May 2nd and
3rd, 2018.
3.2.2 |Manual search
In addition to the electronic search, we conducted a manual search
implemented between May 2018 and June 2020. It entailed the
following three procedures, also explained in detail in Appendix A:
Recommendations: We sought recommendations of additional
references from experts and practitioners.
Backward-citation search: for papers that met the criteria and
were selected initially, we conducted a backwardcitation search
in Google Scholar and all papers that cited the initially selected
papers were screened.
Reference search: for the initially selected papers, we screened
the papers that they cited.
3.3 |Data collection and analysis
3.3.1 |Screening procedure
The output of the electronic and manual searches was a very broad
set of candidate papers (42,462), including many articles that were
not relevant for our review. We used a two-step screening procedure
to identify papers that met the criteria for the review (see Figure 2):
Step 1 – Title and abstract screening:
All candidate papers were screened based on their title and
abstract. Candidate papers were grouped in batches of 350. Initially,
two batches were double-screened, with two reviewers screening
each paper. Each reviewer applied the criteria independently and
once they completed a batch of 350, their assessments were com-
pared and discussed before they continued to the second batch. In
the discussion, areas of disagreement were resolved.
The objective of the double-screening procedure was to promote
learning and increase consistency across reviewers. To move from double
to single screening, we required that discrepancies between the two
reviewers decline from the first batch to the second and occur in
fewer than 5% of screened papers. After two batches (700 papers) were
double-screened in this way, this rule was met by all reviewers and each
reviewer continued the title and abstract screening with single screening.
Therefore, the remaining papers were screened only once.
Step 2 – Full-text screening:
Papers that passed the title and abstract screening were
screened again based on the full article text. This time, to ensure
consistency and learning across reviewers, we implemented the
double-coding procedure for the first 20 papers coded by each re-
viewer, grouped in two batches of 10.
In each of the screening stages, the reviewer had to determine
whether a paper should be excluded or not based on the four ele-
ments of the criteria developed in Section 3.1, as follows:
1. Exclude based on participants. The study was excluded if the unit
of analysis was not firms.
2. Exclude based on intervention. The study was excluded if it did
not mention explicitly that the intervention had the potential of
inducing technology adoption by a firm.
3. Exclude based on methodology. The study was excluded if it did
not use one of the following methodologies: RCT, instrumental
variables, regression discontinuity, difference-in-differences,
propensity score matching or synthetic control.
4. Exclude based on outcomes. The study was excluded if it did not
report the effect of the intervention on technology adoption. At
this point, we did not impose restrictions regarding the secondary
outcomes.
Papers that passed both screenings were selected for the data ex-
traction phase. The two-step screening procedure did not exclude articles
based on whether the reported estimates were usable in the review
or not.
Papers found manually in relevant websites as well as papers
recommended by experts also went through the two-step screening
procedure.
Once a paper was selected for data extraction, the backward
citation and reference searches described in Section 3.2 were conducted.
3.3.2 |Data extraction and management
Using the variables for data extraction presented in Appendix B, relevant
information was entered in an Excel file for papers that passed the
screening. This file included a coding sheet with basic characteristics at
the study level such as citation, authors, year of publication, country, and
methodology, among others. In addition, the file included a data analysis
sheet with information at the regression level, including impact estimates,
number of observations, and SEs.
A double-entry procedure was carried out for a subset of the
data from the selected papers (such as the impact estimates) to
ensure the quality of the data entry. This was done for a randomly
selected sample of 33% of the papers.
3.3.3 |Measures of treatment effect
Comparing different pieces of evidence about an intervention re-
quires comparable effectsize estimates. These comparable measures
are usually not directly provided in the original study and need to be
computed by the reviewer based on the information in the paper.
FIGURE 2 Screening procedure

Different studies often measure outcomes in different units. For
example, an outcome measure such as employment, may be expressed
as number of workers, number of worker hours, number of hours per
day, and so forth. This creates problems of comparisons across and
within studies. Moreover, studies often use multiple indicators to ex-
press changes in relevant outcomes, exacerbating problems of com-
parison. As indicated in Borenstein et al. (2009), when the outcomes are
not directly measured in a common and meaningful scale, one could use
the standardized mean difference (SMD). This measure re-expresses
the impact measure of a study relative to the outcome variability ob-
served in that study. A positive value of the SMD indicates a positive
impact of the intervention on the outcome.
In several studies in our setting we were not able to find the
breakdown of the sample between treatment and control groups. For
this reason, as well as to ensure standardization across studies, we
assume Nt = Nc = N/2. We then use the formula for SMD as provided
in Waddington et al. (2019, p. 26):
$$\mathrm{SMD}=\frac{2\hat{\beta}}{SE\sqrt{N}},$$
$$SE_{\mathrm{SMD}}=\sqrt{\frac{4}{N}+\frac{\mathrm{SMD}^{2}}{2N}},$$
where $\hat{\beta}$ is the treatment effect, SE is the standard error, and N is the sample size of the study.
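As an illustration of this conversion, the sketch below (not the review's actual code; the function name is ours) applies the equal-split assumption Nt = Nc = N/2 stated above, together with the standard large-sample approximation for the SE of the SMD.

import math

def standardized_effect(beta_hat, se_beta, n):
    """Convert a regression coefficient into a standardized mean difference,
    assuming an equal treatment/control split (Nt = Nc = N/2)."""
    smd = 2 * beta_hat / (se_beta * math.sqrt(n))      # SMD = 2*beta_hat / (SE * sqrt(N))
    se_smd = math.sqrt(4 / n + smd ** 2 / (2 * n))     # large-sample SE approximation
    return smd, se_smd

# Example: a coefficient of 0.08 with SE 0.03 estimated on 400 firms
smd, se_smd = standardized_effect(0.08, 0.03, 400)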
3.3.4 |Unit of analysis issues
As discussed earlier, we focus on papers that take the firm as the unit of
analysis. However, the unit of analysis might not coincide with the level
of experimental or quasiexperimental variation. For instance, a study
might report estimates of firm-level impacts of a technology adoption
program that was assigned to the firms in selected districts. In this case,
the treatment assignment status varies at the district level instead of the
firm level. Studies like this should consider the intracluster correlation
when computing the SEs of their effect estimates. Fortunately, this
practice has been increasingly adopted in social sciences with the use of
cluster-robust SEs. We did not have any studies that did not consider the
intracluster correlation while computing the SEs.
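A minimal sketch of this practice with statsmodels is shown below; the file name and the variables adoption, treated, and district are hypothetical stand-ins for a technology-adoption outcome, a treatment indicator, and the level at which treatment was assigned.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data: one row per firm, treatment assigned by district.
df = pd.read_csv("firms.csv")  # columns: adoption, treated, district (illustrative)

# OLS of the adoption indicator on treatment status, with standard errors
# clustered at the level of treatment assignment (the district).
model = smf.ols("adoption ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["district"]}
)
print(model.summary())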
3.3.5 |Dealing with missing data
When relevant information was not reported (such as the pooled standard
deviations or sample sizes), we contacted the authors of the studies to
ask for it. However, in nine cases we did not hear back or were unable to
contact the authors. In such cases, we did not include the study in the
analysis, as we were unable to compute the SMD or the SE.
3.3.6 |Assessing reporting biases
We assessed reporting biases with a modified version of the 3ie risk
of bias tool (Hombrados & Waddington, 2012), which is presented in
detail in Appendix C. The tool covers four dimensions: (1) Reporting
on key aspects of selection bias and confounding, (2) Reporting on
spillovers of intervention to comparison groups, (3) Reporting of SEs,
and (4) Reporting on Hawthorne effect and baseline data. If a study
uses various methodologies, the assessments under selection bias
and confounding are based on the most rigorous methodology.
For each of these dimensions, a set of considerations were de-
fined. The question whether the study addresses each consideration
can receive one of three answers: “reported,” “not reported,” or “not
applicable.”
3.3.7 |Data synthesis
For the studies under consideration, we extracted available regres-
sion coefficients. In the context of multivariate regressions we ex-
tracted only the regression coefficients relevant to the outcomes we
consider. All the effect sizes from the selected papers were con-
verted into a standardized effect using the formula discussed above
in the section on “Measures of treatment effect.” To avoid treating
different results from the same paper as independent results, we
computed an average standardized effect for all results, as well as
the average SEs for these standardized effects within a paper for the
same variable (Waddington et al., 2014).
The standardized effects and their respective SEs were com-
puted manually. We generated forest plots to present the manu-
ally computed standardized effect sizes using the Stata command
metan. The effect size as well as the confidence intervals for these
outcomes are presented using a forest plot. It must be noted that
we do not attempt to provide a summary effect across studies,
because selected studies included several interventions and sev-
eral definitions of technology adoption outcomes. Hence, we are
not concerned with the computation of the summary effect, or the
SE of the summary effect, commonly applied in other meta-
analyses/meta-regressions.
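The within-study pooling step can be sketched as follows (column names and values are illustrative, not taken from the review's coding sheet): each study-by-outcome group is collapsed to the simple average of its standardized effects and of their SEs, and these averages are what the forest plots then display.

import pandas as pd

# One row per extracted regression coefficient, already converted to an SMD.
effects = pd.DataFrame({
    "study":   ["A", "A", "A", "B", "B"],
    "outcome": ["adoption", "adoption", "profits", "adoption", "adoption"],
    "smd":     [0.12, 0.20, 0.05, 0.30, 0.10],
    "se_smd":  [0.06, 0.08, 0.04, 0.10, 0.09],
})

# Simple (unweighted) within-study average per outcome variable.
pooled = (effects
          .groupby(["study", "outcome"], as_index=False)
          .agg(smd=("smd", "mean"), se_smd=("se_smd", "mean")))
print(pooled)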
3.3.8 |Deviation from protocol
The review has been executed to follow the steps described in the
protocol (Verhoogen et al., 2018), with the following deviations:
The protocol did not describe some of the manual searches con-
ducted (reference search and citation search). These are now
described in detail in Appendix A.
The protocol included the following research questions to be
answered by the review:
1. To what extent do particular interventions affect technology
adoption in firms?
2. Is this effect heterogeneous across sectors, firm size, countries
or owner's gender?
3. To what extent does technology adoption affect total output,
output per unit of input, unit cost, firm-level wages, employ-
ment, total-factor productivity, exports, and survival?
4. Are these effects heterogeneous across sectors, firm size,
countries, workers' skill level, or workers' gender?
Few of the studies analyzed explore the effects on total
output, output per unit of input, unit cost, firm-level wages,
exports, and survival as listed in Question 3 in the protocol.
Therefore, it was not possible to analyze effects on these
secondary outcomes. The research question was revised in
this version to reflect the secondary outcomes analyzed
(profits, employment, productivity, and yields).
A double-entry procedure for a subset of papers was conducted during data
extraction, as explained in Section 3.3 under “Data extraction and
management.”
We modified the tool for assessing risk of biases. We piloted the
procedure described in the protocol and found that many studies
received “unclear” classifications in the assessments. To avoid
generating assessments that we felt might be misleading, we
modified the tool to make it less subjective. In addition, we in-
troduced the classifications “Not applicable” and “Not reported”
instead of “Unclear.” The updated tool is reported in Appendix C
and the assessments in Appendix D.
Given that several measures of technology and several types of
interventions were used in the selected papers, we decided that
averages of effect sizes across studies would not be meaningful
and we do not present them. We do present average effect sizes
within studies as appropriate.
4|RESULTS
4.1 |Description of studies
4.1.1 |Included studies
After applying the search and screening procedures described in the
previous sections, 80 studies were selected for the analysis
(Figure 3). These studies were conducted in 45 countries, covering all
regions of the world including Africa, South Asia, Latin America,
Eastern and Western Europe, and East Asia (Table 1) between 2000
and 2019 (Figure 4). While a majority of the studies (79) included
were from developing countries, there was one from a high-income
country, as can be seen (Table 2). The data fields extracted from the
included studies can be found in Appendix B.

FIGURE 3 Number of included and excluded papers. Full-text articles can be excluded for multiple reasons. For instance, the same study
can be excluded for not meeting criteria in terms of participants and methodology. N refers to the number of studies
In total, we estimated the standardized effect sizes for 1,108 regression coefficients from the selected studies. Figure 5 presents
the number of regression coefficients by category of intervention.
Given that a study can present multiple results for different inter-
ventions, we present the following analysis by regression coefficients
and not by study. For instance, if a given study presents a regression
coefficient for grant provision on technology adoption and another
regression coefficient for training on technology adoption, we pre-
sent them separately for each intervention. Most regressions (684) exclusively assess the effects of "other direct support," which includes the provision of training, management consulting services, technology extension services, and informational interventions. The second-largest category of interventions assessed exclusively is "direct financial support" (171 regressions), which includes grants, loans, insurance, and subsidies, among others. There were very few regressions assessing the role of "regulations and standards" or "indirect financial support."
Some studies assessed the impact of a combination of interventions and were therefore classified in more than one intervention category. For instance, a program analyzing the impact of providing firm owners with both training and contract orders (e.g., Hardy & McCasland, 2018) would fall under "direct financial support" and "other direct support." Similarly, a program providing loans and training to microfinance clients (e.g., Gine & Mansuri, 2014) would also fall under "direct financial support" and "other direct support."
The outcome variables presented in the selected studies can be
broadly categorized into technology-adoption outcomes and other firm outcomes. Based on our selection criteria, all included studies reported at least one technology-adoption outcome. There are 26 studies that have only reported a technology-adoption outcome and
no other firm outcomes. The remaining studies present more than
one other firm outcome. Therefore, we have more regression coef-
ficients reported on other firm outcomes than on technology
adoption outcomes.
In Figure 6, we present a breakdown of the types of technology
adoption outcomes used in selected studies. In total, there were 555
regression coefficients measuring the effects of interventions on
technology adoption. Since there are too many coefficients to list
them individually, we grouped similar ones into broader categories.
The aim is to give the reader an overview of the types of variables that have been considered by the studies in our selection. Most of these coefficients analyzed the impact of an intervention on the adoption of a new input or a new technique of production. A small number of regression coefficients analyzed the impact on the development of a new product or changes in marketing.
Figure 7 presents a breakdown of the other firm outcomes that
we consider in our analysis (secondary outcomes). Overall, we find
553 regression coefficients measuring the effects of interventions on
other firm outcomes among selected studies. Given the large number
of coefficients found, a broad categorization was used to provide an
overview to the reader. The categories with the largest number of regression coefficients were financial performance, output, employees, and productivity.

TABLE 1 Number of studies by region

Region           No. of studies
Africa           41
East Asia        7
Europe           3
Latin America    13
South Asia       16
Total            80

FIGURE 4 Number of studies in the analysis by year
These figures showcase the wide variety of settings, types of
interventions, and outcome variables considered in this analysis. The
wide range of interventions and outcome variables used does not
lend itself to a meta-regression type of analysis, which is often found in other systematic reviews. Therefore, we do not present a meta-regression as part of this review.
Also for this reason, we were unable to answer the third ques-
tion stated in the objectives. Given the broad range of interventions
and outcomes, an analysis of heterogeneous effects was not possible.
In addition, analyzing heterogeneous effects was also hard because
only a few studies reported information on variables such as gender
and workers' skills.
4.1.2 | Excluded studies
There were two rounds of exclusions as described in Figure 3. In the
first round, 40,894 studies were excluded based on title and abstract
out of 42,564 candidate papers. In the second round, another 1,581 studies were excluded based on full-text screening. Exclusions in both rounds were based on the four-element criteria discussed in Section 3.1: intervention, participants, outcomes, and methodology.
For instance, studies were excluded based on the intervention cri-
terion if they did not analyze the effect of credibly exogenous variation in
the incentives for firms to adopt a new technology. Thus, we excluded studies that only focused on correlates of technology adoption without an identification strategy to establish causality. Studies were excluded based on participants if they did not analyze effects at the firm level. For example, the study by Hjort and Poulsen (2019) analyzes the impact of adoption at the regional level and was not included in our analysis. Studies were excluded based on outcomes if they did not have a technology adoption outcome variable, for example McKenzie (2017) and Das et al. (2013). Lastly, studies were excluded based on methodology if they did not implement one of the methodologies for causal inference prespecified in our protocol; for example, Conley and Udry (2010) was excluded for this reason.

TABLE 2 Number of studies by income category

Income category        No. of studies
High income            1
Low income             21
Lower middle income    36
Upper middle income    22
Total                  80

FIGURE 5 Number of regression coefficients by intervention categories. This figure plots the number of regression coefficients that fit into different intervention categories. Categories are described in detail in Section 3.1. DFS or "Direct Financial Support" includes the direct provision of funding for technological adoption, for example, access to credit and cash transfers. IFS or "Indirect Financial Support" includes support to pay for the cost of the adoption projects without directly providing funding, for example, fiscal incentives and credit assurances. ODS or "Other Direct Support" includes nonpecuniary interventions, for example, technology extension services and provision of management consultancy and training. RS or "Regulations and Standards" includes rules, policies, and characteristics of the environment that affect agents' incentives, for example, the level of competition or the characteristics of contracts.

FIGURE 6 Number of regression coefficients for technology adoption outcomes. This figure plots the number of regression coefficients that are related to the technology adoption outcomes under a broad categorization of technology adoption measures. "Other" includes diversified business activities and spillover adoption.

FIGURE 7 Number of regressions for secondary outcomes. This figure plots the number of regression coefficients that are related to other firm outcomes under a broad categorization of secondary outcomes. Other firm outcomes (or secondary outcomes) are variables that are affected by technology adoption, such as financial performance, productivity, and employment, among others. "Other" includes HR index and sought government assistance.
In addition, of the 89 selected studies that met the criteria, 9 did not report the sample size and/or SEs, and we were unable to obtain this information from their authors. These were also excluded from the analysis.
4.2 | Assessment of reporting biases
We assessed the quality of the evidence in terms of the complete-
ness of reporting in four categories: (1) reporting on key aspects of
selection bias and confounding, (2) reporting on spillovers of inter-
ventions to comparison groups, (3) reporting on SEs, and (4) re-
porting on Hawthorne effect and collection of retrospective data.
The tool followed in the assessments is presented in Appendix C and
the results of the assessment in Appendix D.
Most studies report on key aspects of selection bias and con-
founding, regardless of the methodology used, as presented in
Table 3. RCTs typically report on considerations regarding random
assignment, baseline characteristics, and attrition. However, there
is room for improvement when it comes to reporting on hetero-
geneous effects. Of the 39 studies reporting heterogeneous effects,
24 do not report evidence that the strategy for choosing groups
was planned or announced before the randomization. Among the
30 studies not reporting on the strategy for choosing some groups,
only nine report on potential threats to validity of causal claims due
to specification searching in the analysis of heterogeneous impacts.
Also, there were six studies that report evidence that the strategy
for choosing groups was planned or announced before the rando-
mization for some of the groups used to analyze heterogeneous
effects but not for others.
Most studies also report on considerations related to spillovers
of the interventions to comparison groups and SEs. However, of
47 papers collecting data in the context of the study, only 13 discuss
whether the monitoring process itself affected behavior (Hawthorne
effect).
4.3 | Synthesis of results
This section presents the standardized effect sizes for all the selected papers. The results are presented using forest plots to allow for visual comparison. The forest plots show the SMD by study and the associated confidence interval.²

² The metan command in Stata is used to generate the forest plots. The average pooled effect is directly computed by the command; the average pooled effect here refers to a within-study mean.
The Intervention Category column indicates the type of intervention as described in Section 3.1. DFS or "Direct Financial Support" includes the direct provision of funding for technological adoption, for example access to credit and cash transfers. IFS or "Indirect Financial Support" includes support to pay for the cost of the adoption projects without directly providing funding, for example fiscal incentives and credit assurances. ODS or "Other Direct Support" includes nonpecuniary interventions, for example technology extension services and provision of management consultancy and training. RS or "Regulations and Standards" includes rules, policies, and characteristics of the environment that affect agents' incentives, for example the level of competition or the characteristics of contracts.
To follow the logic presented under Section 1.2 "Potential Channels of Effects" and address our research questions, we first synthesize our findings on the impact of interventions on technology-adoption outcomes and then focus on the consequent effects on other firm outcomes.
There is wide variation in the types of interventions examined by the included studies. These interventions included training, consulting, information provision, provision of equipment, and access to credit, to name just a few. There was also a wide range of technology adoption outcomes. Given this variety, we decided it was not appropriate to conduct an analysis of heterogeneous effects and therefore could not achieve the third objective of the review. In addition, the selected papers rarely provided sufficient information for us to address heterogeneity.
However, we analyze the effects of interventions separately for agri-
cultural firms and for firms in the manufacturing and services (non-
agricultural) sector. This is because the factors impeding technology
adoption for agricultural and nonagricultural firms are somewhat differ-
ent. Information barriers may be more significant for nonagricultural
firms relative to agricultural firms. For example, in contrast to agricultural
firms, those in the nonagricultural sector often cannot learn about new
technologies by observing neighbors. Moreover, nonagricultural firms
seldom have any incentive to share information on new technologies
with competing firms.
4.3.1 | Effects of interventions on technology adoption (research question 1)
Manufacturing and services
We analyze 33 studies that focus on technology adoption in manu-
facturing and services. Figure 8 shows the forest plots along with the
standardized effect sizes for the associated studies. Overall, there is
mixed evidence on the impact of various interventions at the firm
level on technology adoption. Of the 33 studies, 19 find positive and
statistically significant standardized effects on technology adoption.
The other 14 studies do not find statistically significant effects.
TABLE 3 Completeness of reporting assessment (number of studies)
Completeness of reporting (number of papers)
R NR NA Total
(1) Reporting on key aspects of selection bias and confounding
RCT (1) The study reports that the assignment was done at random, or describes a random component as
part of the assignment procedure.
62 0 0 62
(2a) If the study shows heterogeneous effects at the group level, it reports some evidence of the
strategy for choosing groups before the randomization.
15 24 23 62
(2b) If the study shows heterogeneous effects at the group level but no evidence of the strategy for
choosing groups before the randomization, it reports potential threats to validity of causal claims
based on the analysis of heterogeneous impacts.
9 21 32 62
(3) The study reports baseline characteristics of the treatment and control groups, and/or that overall
they are statistically similar.
61 1 0 62
(4) If there is some difference in baseline characteristics between treated and control groups, the study
reports that this difference is accounted for.
35 0 27 62
(5) If there is attrition, the study reports that the loss of sample units can be considered random and/or reports intention-to-treat estimates.
39 0 0 39
RDD (1) The study reports that the allocation is based on a threshold in a continuous variable. 0 0 0 0
(2) The study reports that the participants cannot manipulate the assignment variable. 0 0 0 0
(3) The study reports that the estimation is robust to different choices of bandwidth. 0 0 0 0
(4) The study reports that baseline characteristics are overall continuous around the assignment
threshold and/or if some baseline characteristic show a discontinuity around the assignment
threshold, the study reports that this difference is controlled for when estimating the treatment
effects.
0 0 0 0
DiD (1) The study reports that the outcome variables' trends are parallel before the introduction of the
treatment. This could be in the form of a table [some regression-based test] or graphs.
7 0 0 7
PSM or stat control (1) The study describes the control method used and it is based on baseline characteristics that are
relevant to explain participation.
9 0 0 9
(2) If the study reports baseline characteristics that are not used for control, it reports that these are
overall statistically similar in different groups.
1 2 6 9
IV (1) The paper acknowledges and addresses potential threats to validity (e.g., violation of the exclusion restriction).
2 0 0 2
(2) The study reports the first-stage regression F-statistic. 2 0 0 2
Synthetic control (1) The study reports that the synthetic control matches the characteristics of the treated units before
the treatment.
0 0 0 0
(2) The study reports robustness to the weights used to measure the difference between the characteristics of the treated unit and the synthetic control.
0 0 0 0
(2) Reporting on spillover of intervention to comparison groups
(1) The study reports whether there are likely to be spillovers to comparison group. 49 19 12 80
(2) If the study reports that there are likely to be spillovers to the comparison group, then it accounts
for these in the analysis.
28 4 48 80
(3) Reporting of standard errors
(1) If the study reports that observations might be independent, this is taken into account by computing heteroskedasticity-robust SEs. In all other cases, clustered SEs have been reported.
68 12 0 80
(4) Reporting on Hawthorne effect and collection of retrospective data
(1) If the data are collected in the context of the study, the study reports that the monitoring process
did not affect behavior (Hawthorne effect).
13 34 33 80
(2) If baseline information was collected retrospectively, the study reports this. 13 0 67 80
Abbreviations: NA, not applicable; NR, not reported; R, reported.
We briefly highlight below the interventions that led to statistically significant increases in technology adoption. Note, however, that a similar intervention in a different context may not have resulted in statistically significant increases in technology adoption.
A wide set of interventions is found to have positive effects on technology adoption. These include ICT provision in the form of monitoring and tracking devices in Liberia and Kenya (de Rochambeau, 2017; Kelley et al., 2018), consulting on management and production practices (Bloom et al., 2013; Cruz et al., 2018; Iacovone et al., 2019), and management and business training including accounting, marketing, and finance (Anderson-Macdonald et al., 2018; de Mel et al., 2012; Drexler et al., 2014; Higuchi et al., 2019; Mano et al., 2012; Nakasone & Torero, 2014). Campos et al. (2017) find that a psychology-based personal-initiative training approach that teaches a proactive mindset and focuses on entrepreneurial behaviors in Togo had a positive effect on adoption. There is also evidence that increased import competition from China increased measures of technical change across firms in 12 countries (Bloom et al., 2016). However, several other studies that provided business training or management consulting to firms do not find a statistically significant standardized effect (Bruhn & Zia, 2013; Field et al., 2010; Gine & Mansuri, 2014; Karlan & Valdivia, 2011; Valdivia, 2015). Cai and Szeidl (2018) randomized Chinese firms into small groups whose managers held monthly meetings for 1 year and find positive effects of these business networks on firms' management scores.
Direct provision of new techniques of production also led to higher take-up by firms when they were combined with other interventions. Atkin et al. (2017) provide a new cutting technology to soccer-ball firms in Pakistan and find low take-up. However, a subsequent small incentive payment to key employees resulted in positive take-up of the technology. Hardy and McCasland (2018) randomly seed training in a newly developed weaving technique, and technique-specific, time-limited, one-time contracts, among garment-making firms in Ghana. They find that random contract offers increase both learning by potential adopters and sharing by incumbent adopters.
FIGURE 8 Forest plot of the effects of firm-level interventions in manufacturing and services on technology adoption. ES is the SMD and
95% CI is the associated confidence interval. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS,
regulations and standards; SMD, standardized mean difference.
Various national policy interventions have also been effective in increasing technology adoption among firms. For example, Chudnovsky et al. (2011) study the Non-Reimbursable Funds program of the Argentinean Technological Fund (FONTAR) and find positive effects. Crouzet et al. (2019) study the effects of the 2016 Indian demonetization episode, which led to a large but temporary decline in the availability of cash, and find a persistent increase in the growth rate of the user base of a new payment technology. Higgins (2019) studies the rollout of debit cards in Mexico between 2009 and 2012 and finds that small retailers adopt point-of-sale (POS) terminals to accept card payments, which leads other consumers to adopt cards.
Agriculture
We analyze 47 studies that assess technology adoption as an outcome
variable in the agricultural sector. Figure 9 shows the forest plots along
with the standardized effect sizes for the associated studies. Similar to
the results for manufacturing and services, there is mixed evidence on
the impact of various interventions on technology adoption in agriculture.
Of the 47 studies, 20 studies find positive and statistically significant
standardized effect sizes, and are briefly highlighted below. The other
27 studies do not find statistically significant effects.
For the agricultural sector, several studies focusing on financial
support (either direct or indirect) find positive impacts on technology
adoption. These interventions include, for example, agricultural microcredit in Bangladesh (Hossain et al., 2016), vouchers for fertilizer and improved seeds in Mozambique (Carter et al., 2014), addition of input subsidies to an agricultural extension program in the Democratic Republic of Congo (Leuveld et al., 2018), an NGO-run agricultural input subsidy and
extension program in Uganda (Fishman et al., 2017), random provision of
fertilizer to rice farmers in Mali (Beaman et al., 2013), nonreimbursable
vouchers to partially finance the total cost of improved pastures tech-
nology in the Dominican Republic (Aramburu et al., 2019), and access to
credit to invest in floating net aquaculture after being relocated due to a
reservoir construction project in Indonesia (Miyata & Sawada, 2007).
FIGURE 9 Forest plot of the effects of firm-level interventions in agriculture on technology adoption. ES is the SMD and 95% CI is the
associated confidence interval. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS, regulations and
standards; SMD, standardized mean difference
Studies analyzing the provision of insurance also found positive
effects on adoption. For example, Karlan et al. (2014) randomly as-
sign farmers in northern Ghana to receive cash grants, opportunities
to purchase rainfall index insurance, or a combination of the two and
find that insurance leads to significantly larger agricultural invest-
ment and riskier production choices. Freudenreich and Mußhoff
(2018) find that bundling hybrid seeds with an insurance scheme
increases adoption in Mexico. However, many similar interventions
in other contexts do not lead to increased adoption. For example,
Giné and Yang (2009) find little evidence that randomized offers of
credit and weather insurance in Malawi increased adoption.
Several studies also find positive effects on adoption as a result of learning through agricultural extension services (Rotondi et al., 2015), social networks (BenYishay & Mobarak, 2018; BenYishay et al., 2016; Kondylis et al., 2017; Tjernström, 2017), ICT including sending text messages (Fafchamps & Minten, 2012; Kiiza & Pederson, 2012; Larochelle et al., 2019), and joining value chains (Biggeri et al., 2018). However, there is substantial heterogeneity in learning through social networks. BenYishay et al. (2016) find that other farmers are less willing to learn from female communicators about a new technology. BenYishay and Mobarak (2018) find that the social identity of the communicator influences others' learning and adoption, and that farmers appear most convinced by communicators who share a group identity with them, or who face comparable agricultural conditions. Tjernström (2017) studies the randomized introduction of a hybrid maize seed in rural Kenyan villages, which induces experimental variation in the information available to farmers through their social networks, and finds that learning from others depends on the soil quality of the farmer.
4.3.2 | Effects of interventions on other firm outcomes (research question 2)
Having discussed the effects of various interventions on technology adoption for firms, we next focus on the second research question on the effects of interventions on other firm outcomes: profits, employment, productivity, and yields. In the plots below, we discuss the effects of
interventions on other firm outcomes and indicate whether the study
also finds a positive effect on technology adoption outcomes.
Manufacturing and services
In manufacturing and services, 15 studies report profits as an outcome variable, and we present the corresponding forest plot in Figure 10. Of these, three studies find a positive standardized effect size on profits. Anderson-Macdonald et al. (2018) measure the impact of marketing and finance skills training for small firms in South Africa; Campos et al. (2017) measure the impact of psychology-based personal initiative training in Togo; and Higgins (2019) studies a shock to credit card adoption in Mexico and finds an increase in profits for corner stores that adopt POS terminals to accept card payments. It is important to note that these three studies also find a positive effect on technology adoption.

FIGURE 10 Forest plot of the effects of firm-level interventions on profits. ES is the SMD and 95% CI is the associated confidence interval. (+) indicates the study reports a positive significant technology adoption outcome. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS, regulations and standards; SMD, standardized mean difference.
The other 12 studies that report effects on profits find statisti-
cally insignificant standardized effect sizes. Some also involved
business training (Bruhn & Zia, 2013; Gine & Mansuri, 2014) and five
find positive effects of interventions on technology adoption.
Regarding employment, 13 studies report on the impact of an
intervention on the number of employees as an outcome variable.
These are presented in Figure 11. Of the 13 studies, 12 report small
and statistically insignificant impacts. Seven of the studies report a
positive and statistically significant standardized effect size on technology-adoption outcomes. The only study that reports a positive impact on the number of employees (although no significant effect on technology adoption) is Tan (2009), which looks at the effects of an SME support program in Chile.
In Figure 12, we show the six studies that report results on the
effects of interventions on productivity. The studies that have reported
on total factor productivity, output per worker, and change in value
added were included under this category. None of the papers find a
statistically significant standardized effect size on firm productivity at the
95% level, although three report a positive and statistically significant
standardized effect size on technology adoption outcomes. Bloom et al.
(2013) finds the largest effect size in our selected sample of studies, but
the standardized effect size is significant only at the 90% level. Programs that provide training (Valdivia, 2015), learning from peers (Cai & Szeidl, 2018), and other SME support (Tan, 2009) have point estimates (of
standardized effect sizes) in the expected direction but are not statisti-
cally significant.
Agriculture
Out of the sample of selected studies, 18 report effects on yield as an outcome variable. The forest plot is presented in Figure 13. Of the 18 studies, 8 find a statistically significant and positive standardized effect size of interventions on yields, and half of these (4 of 8) also find a positive effect on technology adoption outcomes. Another
9 studies find insignificant effects of interventions on yields; 4 of
these find positive effects on technology adoption. One study reports
a statistically significant and negative standardized effect size on
yields (Fischer & Qaim, 2012).
There is mixed evidence that direct financial support to farmers
has an effect on yields. On the one hand, direct provision of fertili-
zers (Beaman et al., 2013), grants to farmers (Beaman et al., 2015),
and subsidized agricultural inputs (Ogunniyi et al., 2017) have posi-
tive standardized effect sizes on yields. However, other similar in-
terventions do not result in higher yields: for example, an agricultural input subsidy and extension program in Uganda (Fishman et al., 2017), provision of cash grants and rainfall index insurance in Ghana (Karlan et al., 2014), or agricultural microcredit in Bangladesh (Hossain et al., 2016).
Other interventions that led to higher yields include farmer field
schools in Pakistan that provided skills on integrated pest management
(Ali & Muhammad, 2012), the marketing assistance component of Ethiopia's Agricultural Transformation Agency (Abate et al., 2018), NGO-led services to help farmers adopt and market export crops in Kenya (Ashraf et al., 2009), ICT-based market information through FM radio stations (Kiiza & Pederson, 2012), and SMS messages with agricultural advice for farmers in Kenya (Casaburi et al., 2019; Van Campenhout et al., 2018).

FIGURE 11 Forest plot of the effects of firm-level interventions on the number of employees. ES is the SMD and 95% CI is the associated confidence interval. (+) indicates the study reports a positive significant technology adoption outcome. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS, regulations and standards; SMD, standardized mean difference.
5 | DISCUSSION

5.1 | Overall completeness and applicability of evidence
We reviewed evidence in 80 studies on the impact of interventions
on technology adoption and other firm outcomes. All studies include
an identification strategy explicitly aiming to estimate causal effects
on technology adoption and address threats to internal validity.
Although the studies consider a broad range of interventions, those
related to the provision of "Other Direct Support" and "Direct Financial Support" are more common. Relatively few papers studied "Indirect Financial Support" or "Regulations and Standards" interventions. Regarding outcomes, all selected studies (80) examine the effects on technology-adoption outcomes as this was embedded in the
eligibility criteria. However, only 15 studies estimate effects on
profits, 13 on employment, and 6 on productivity.
Due to external-validity limitations, results in one context can-
not necessarily be generalized to another. Indeed, in some cases
interventions found to have positive impact in one context did not
have a positive impact in other contexts. Adding to the complexity of
the results is the wide range in types of interventions. Coming up
with a consistent taxonomy of outcome variables is challenging in
itself. Given the wide variety of interventions and outcomes, we have
opted not to conduct heterogeneity analysis of the results. With all
of these caveats in mind, we ask the reader to consider the results
presented in the review as suggestive rather than definitive.
5.2 | Quality of the evidence
Overall, 62 of the 80 papers selected use an RCT, which is the most rigorous methodology to assess threats to validity. The rest use quasi-experimental methods. Regarding reporting biases, most studies followed best practice when it comes to reporting on considerations related to selection bias and confounding factors, spillovers of the inter-
ventions to the comparison groups, and SEs. However, there is room
for improvement when it comes to reporting on heterogeneous ef-
fects and Hawthorne effects.
5.3 | Limitations and potential biases in the review process
While we have tried to mitigate many of the potential biases that
could affect the results of our study, there were certain aspects of
the process that were determined by the constraints of human
resources and time. We would like to acknowledge that these could be a potential source of concern.

FIGURE 12 Forest plot of the effects of firm-level interventions on productivity. ES is the SMD and 95% CI is the associated confidence interval. (+) indicates the study reports a positive significant technology adoption outcome. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS, regulations and standards; SMD, standardized mean difference.
1. Lack of double coding and double screening for all studies: Re-
source constraints dictated that we were only able to ensure
double screening for the initial batch of 700 papers per person
before each screener went on to screen independently. Similarly,
given the small number of studies selected, data extraction was
completed by one person. This may be a potential source of bias.
2. Limit to studies in English: Due to resource constraints, we con-
fined our electronic and manual search to only the English lan-
guage. This could potentially be another source of bias.
3. Exclusion of studies: In 46 of the 89 studies, we had to approach
the authors either for the missing information or for clarifications.
We did not hear back from the author(s) in the case of nine
studies.
4. Generalizing across contexts: The purpose of the study was to compile the impact-evaluation literature on technology adoption in a systematic manner to identify knowledge gaps as well as areas where evidence is strong. One of the challenges of a study of this scope and magnitude lies in the broad variety of contexts, interventions, and outcomes to consider and compile. Given the wide variety, the potential to overlook nuances becomes all the more likely. For this reason, we recommend that the reader take these results as indicative and not conclusive evidence.
5.4 | Agreements and disagreements with other studies or reviews
Unlike existing systematic reviews about technology adoption that
are mostly focused on agricultural firms (Fuglie et al., 2019; Obayelu
et al., 2017; Silva et al., 2015; Waddington et al., 2014), this review
includes studies from other sectors such as manufacturing (e.g., Atkin
et al., 2017; Bloom et al., 2013). It also complements Piza et al.
(2016) by including a wider set of interventions but focusing solely
on technology adoption. Just as in those studies, we do find inter-
ventions that have led to an increase in technology adoption. How-
ever, other context-specific factors seem to matter in how much firms benefit from technology adoption in terms of other firm outcomes. On the whole, we find the literature in this area to be still in a
nascent stage. There is a need to conduct replications of similar
programs across different contexts to have stronger evidence on
successful interventions.
FIGURE 13 Forest plot of the effects of firm-level interventions on yields. ES is the SMD and 95% CI is the associated confidence interval.
(+) indicates the study reports a positive significant technology adoption outcome. DFS, direct financial support; IFS, indirect financial support;
ODS, other direct support; RS, regulations and standards; SMD, standardized mean difference
6 | AUTHORS' CONCLUSIONS

6.1 | Implications for practice and policy
Technology adoption is associated with better economic performance and
is considered an important policy goal. Policymakers in many countries
have incorporated the promotion of firms' competitiveness into their
priorities. However, it is unclear what sorts of interventions work in in-
centivizing firms to adopt new technologies and improve their perfor-
mance. This review has taken a first step toward filling this knowledge gap
by examining the existing evidence on a wide range of interventions that
affected firms' incentives for technology adoption in the manufacturing,
services, and agriculture sectors. We have also reviewed the evidence on
firm performance from these interventions. We have assessed interven-
tions implemented in 45 countries, covering Africa, South Asia, Latin
America, Eastern Europe, and East Asia. Since the interventions and outcome variables considered in the analysis differed considerably across studies, we have not presented a meta-regression/meta-analysis as part of
this review. We have, however, presented standardized effect sizes for
the various interventions and outcome variables.
We have studied interventions that affect firms' incentives for
technology adoption (e.g., adoption of a new input, new technique of
production, improved record keeping and analysis, new equipment
adoption, and increased knowledge). For the agricultural sector, we have
found that studies focusing on financial support (either direct or
indirect), including agricultural microcredit, fertilizer and improved seeds vouchers, input subsidies, provision of fertilizer, vouchers to finance improved technology, and access to credit and insurance, often find posi-
tive impacts on technology adoption. However, many similar financial
support interventions in other contexts do not lead to increased adoption.
In several cases, other direct support, including agricultural extension
services, exposure to peer firms through social networks, ICT, and value
chains led to increased technology adoption.
In manufacturing and services, studies have found that a wide set of
interventions led to positive effects on technology adoption depending on
the setting. In several cases, interventions such as direct provision of new
techniques, consulting on management and production practices, man-
agement and business training including accounting, marketing, finance,
and psychology-based personal initiative training increased technology
adoption. However, in other contexts, studies that provided business
training or management consulting to firms did not find a statistically
significant standardized effect on technology adoption.
We have also studied the effects of interventions in our sample of
papers on other firm performance measures. We find that studies that
report a positive effect on technology-adoption outcomes do not ne-
cessarily find a corresponding positive effect on other firm outcomes.
Overall, in the agricultural sector, there is mixed evidence that direct
financial support to farmers has an effect on yields. Other interventions
that have led to higher yields include farmer field schools, marketing
assistance, NGO-led services to help farmers adopt and market export crops, ICT-based market information, and SMS messages with agricultural
advice for farmers. For firms in manufacturing and services, most of the
studies we reviewed find statistically insignificant effects on profits, pro-
ductivity, and employment.
Taken together, our results suggest that while various interventions
can generate positive impacts on technology adoption among firms, these
effects tend to be context-specific. We have found that the effects on
farm yields, firm profits, productivity, and employment also tend to be
mixed. Therefore, overall, we are unable to say that one group of inter-
ventions led to a higher impact and can be favored over others. Policymakers must be careful in interpreting these results, as the same intervention cannot be assumed to work equally well across contexts and needs to be adjusted to each specific regional context. That said, policy-
makers need to try out different types of interventions to see what works
in their setting. We also emphasize that a statistically insignificant finding
for a type of intervention in a particular context does not mean that all
interventions of that type are unworthy of consideration. Rather, atten-
tion should be paid to how programs can be improved and better tailored
to particular environments to achieve better outcomes.
6.2 | Implications for research
The results of this review strongly point to the need for additional
research to understand: (i) what sorts of interventions work in in-
ducing firms to adopt new technologies, and (ii) the effects of tech-
nology adoption on other measures of firm performance. We have
found that the effects of interventions can vary widely across con-
texts. There is a need to conduct more replications of similar pro-
grams across different contexts and to examine closely the processes
of implementation in different studies. This will help policymakers
understand which interventions work and why.
The three primary recommendations for future research based
on this review are:
1. Identify intervention areas that are less studied. While some interventions classified as "Other Direct Support" and "Direct Financial Support," such as consulting, training, grants, and subsidies, have received significant attention from researchers, the impact of "Indirect Financial Support," including interventions such as access to credit and incentive payments, as well as "Regulations and Standards," has been less studied in the context of technology adoption.
2. It would be helpful for researchers to provide all the necessary
information required to compute SMDs. We had to contact the authors of 46 of the 89 selected studies for either clarifications or information to
compute SMDs. Reporting on the results with all the appropriate
information can go a long way in reducing the effort required to
collate the information in a meaningful way.
3. Based on the assessment of reporting biases, researchers could
improve reporting on heterogeneous effects by including some
evidence of the strategy for choosing groups before the rando-
mization or potential threats to validity of causal claims based on
the analysis of heterogeneous impacts. Researchers can also im-
prove reporting on Hawthorne effects by reporting whether the
monitoring process may affect behavior.
Although this review only focused on interventions that induce
firms to adopt new technology, researchers also need to focus on the
reasons behind low technology adoption by firms in developing coun-
tries. As a broad categorization, reasons for low adoption may be (i)
internal to the firm, (ii) on the input side, and (iii) on the output side
(Verhoogen, 2020). Considering factors internal to the firm, there may
be low adoption of new technologies because firms may not be profit
maximizing, may lack information, or may not have the capability to
apply the information in practice. There is a need for more research
analyzing interventions that alleviate these concerns such as the effects
of increasing competition, or the effects of learning through various
channels, for instance through interactions with peers, customers, sup-
pliers, employees, or consultants. On the input side, newer technology
often requires highly skilled workers and highquality inputs, which may
be scarce and expensive. Similarly, on the output side, demand condi-
tions may affect the incentives of firms to adopt (or not) new technol-
ogies. Often, customers in richer countries have preferences for higher
quality goods, which may require the application of new technologies. In
this respect, there is a need for increased research on the effects of
exporting or participation in global value chains on technology adoption.
Understanding barriers to technology adoption will also help re-
searchers design interventions that help alleviate them, thereby leading
to increased adoption.
Overall, we believe that fruitful areas of future research would
include both an understanding of barriers to technology adoption and
interventions that lead to increased adoption through removal of those
barriers.
INFORMATION ABOUT THIS REVIEW
Review authors
Lead review author:
The lead author is the person who develops and coordinates the
review team, discusses and assigns roles for individual members of the
review team, liaises with the editorial base and takes responsibility for
the ongoing updates of the review.
Name: Ana Goicoechea
Title: Senior Economist
Affiliation: World Bank Group
Address: 1818 H Street NW
City, State, Province or County: Washington DC
Postal Code: 20433
Country: USA
Phone: +12024589781
Email: agoicoechea@worldbank.org
Coauthors (in alphabetical order):
Name: David Alfaro-Serrano
Title: PhD Candidate
Affiliation: Columbia University
Address: 420 W. 118th St., MC 3308
City, State, Province or County: New York, NY
Postal Code: 10027
Country: USA
Email: da2628@columbia.edu
Name: Tanay Balantrapu
Title: Research Analyst
Affiliation: World Bank Group
Address: 1818 H Street NW
City, State, Province or County: Washington DC
Postal Code: 20433
Country: USA
Email: tbalantrapu@ifc.org
Name: Ritam Chaurey
Title: Assistant Professor
Affiliation: Johns Hopkins University, School of Advanced International Studies (SAIS)
Address: 1717 Massachusetts Avenue NW
City, State, Province or County: Washington D.C.
Postal Code: 20036
Country: USA
Email: rchaurey@jhu.edu
Name: Eric Verhoogen
Title: Professor
Affiliation: Economics and SIPA, Columbia
University
Address: 420 W. 118th St., Room 1022
City, State, Province or County: New York, NY
Postal Code: 10027
Country: USA
Email: eric.verhoogen@columbia.edu
Research Assistants (in alphabetical order):
Name: Snigdha Dewal
Title: Research Assistant
Affiliation:
Address:
City, State, Province or County: Washington DC
Postal Code: 2009
Country: USA
Name: Wentian Jiang
Title: Research Assistant
Affiliation: World Bank Group
Address: 1818 H Street NW
City, State, Province or County: Washington DC
Postal Code: 20433
Country: USA
Name: Tommy Jungyul Kim
Title: Research Analyst
Affiliation: World Bank Group
Address: 1818 H Street NW
City, State, Province or County: Washington DC
Postal Code: 20433
Country: USA
Name: Jincy Wilson
Title: Research Assistant
Affiliation: World Bank Group
Address: 1818 H Street NW
City, State, Province or County: Washington DC
Postal Code: 20433
Country: USA
Roles and responsibilities
All authors contributed to the writing and revising of this review. Ritam
Chaurey, Ana Goicoechea, and Eric Verhoogen provided content and
methodological expertise. Tanay Balantrapu led the information re-
trieval and statistical analysis. David Alfaro-Serrano led the
development of the search strategy and protocol. Advice on in-
formation retrieval was kindly offered by John Eyers.
In particular, roles and responsibilities were distributed as fol-
lows (names listed in alphabetical order):
Content: All coauthors.
Systematic review methods: All coauthors.
Statistical analysis: Tanay Balantrapu and Ritam Chaurey.
Information retrieval: Tanay Balantrapu, Snigdha Dewal, Wentian
Jiang, Tommy Kim, and Jincy Wilson.
SOURCES OF SUPPORT
We thank Hugh Waddington of 3ie, and Caio Piza of the World Bank
Group, for guidance and support. The study has been funded by the Competitiveness Policy Evaluation Lab of the World Bank Group, with contributions from the Facility for Investment Climate Advisory Services (FIAS) and the United States Agency for International Development (USAID).
DECLARATIONS OF INTEREST
There are no conflicts of interest identified at the time of the review.
PLANS FOR UPDATING THE REVIEW
Any of the coauthors may consider updating the review in 2 years' time or when there have been considerable advances in the literature.
AUTHOR DECLARATION
Authors' responsibilities
By completing this form, you accept responsibility for maintaining the
review in light of new evidence, comments and criticisms, and other
developments, and updating the review at least once every
5 years, or, if requested, transferring responsibility for maintaining the
review to others as agreed with the Coordinating Group. If an update is
not submitted according to agreed plans, or if we are unable to contact
you for an extended period, the relevant Coordinating Group has the
right to propose the update to alternative authors.
Publication in the Campbell Library
The Campbell Collaboration places no restrictions on publication of the
findings of a Campbell systematic review in a more abbreviated form as a
journal article either before or after the publication of the monograph
version in Campbell Systematic Reviews. Some journals, however, have
restrictions that preclude publication of findings that have been, or will be,
reported elsewhere, and authors considering publication in such a journal
should be aware of possible conflict with publication of the monograph
version in Campbell Systematic Reviews. Publication in a journal after
publication or in press status in Campbell Systematic Reviews should ac-
knowledge the Campbell version and include a citation to it. Note that
systematic reviews published in Campbell Systematic Reviews and co-registered with the Cochrane Collaboration may have additional requirements or restrictions for co-publication. Review authors accept responsibility for meeting any co-publication requirements.
REFERENCES
References to included studies
Abate, G. T., Bernard, T., de Brauw, A., & Minot, N. (2018). The impact of the use of new technologies on farmers' wheat yield in Ethiopia: Evidence from a randomized control trial. Agricultural Economics, 49, 409–421. https://doi.org/10.1111/agec.12425
Alem, Y., & Broussard, N. H. (2013). Do safety nets promote technology adoption? Panel data evidence from rural Ethiopia. Working Papers 556, University of Gothenburg, Department of Economics.
Ali, A., & Rahut, D. B. (2013). Impact of agricultural extension services on technology adoption and crops yield: Empirical evidence from Pakistan. Asian Journal of Agriculture and Rural Development, 3(11), 801–812.
Ali, A., & Muhammad, S. (2012). Impact of farmer field schools on adoption of integrated pest management practices among cotton farmers in Pakistan. Journal of the Asia Pacific Economy, 17(3), 498–513.
Ambler, K., de Brauw, A., & Godlonton, S. (2017). Cash transfers and management advice for agriculture: Evidence from Senegal. IFPRI Discussion Papers 1659, International Food Policy Research Institute (IFPRI).
Anderson-Macdonald, S., Chandy, R., & Zia, B. (2018). Pathways to profits: The impact of marketing vs. finance skills on business performance. Management Science, 64, 5559–5583. https://doi.org/10.1287/mnsc.2017.2920
Aramburu, J., Garone, L. F., Maffioli, A., Salazar, L., & Lopez, C. A. (2019). Direct and spillover effects of agricultural technology adoption programs: Experimental evidence from the Dominican Republic. IDB Working Paper Series No. IDB-WP-971.
Ashraf, N., Giné, X., & Karlan, D. (2009). Finding missing markets (and a disturbing epilogue): Evidence from an export crop adoption and marketing intervention in Kenya. American Journal of Agricultural Economics, 91(4), 973–990.
Atkin, D., Chaudhry, A., Chaudry, S., Khandelwal, A. K., & Verhoogen, E. (2017). Organizational barriers to technology adoption: Evidence from soccer-ball producers in Pakistan. The Quarterly Journal of Economics, 132(3), 1101–1164. https://doi.org/10.1093/qje/qjx010
Beaman, L. A., Magruder, J., & Robinson, J. (2014). Minding small change among small firms in Kenya. Journal of Development Economics, 108(C), 69–86.
Beaman, L. A., BenYishay, A., Magruder, J., & Mobarak, A. M. (2018). Can network theory-based targeting increase technology adoption? Working Paper No. w24912, National Bureau of Economic Research, Inc.
Beaman, L., Karlan, D., Thuysbaert, B., & Udry, C. (2013). Profitability of fertilizer: Experimental evidence from female rice farmers in Mali. American Economic Review, 103(3), 381–386.
Beaman, L., Karlan, D., Thuysbaert, B., & Udry, C. (2015). Selection into credit markets: Evidence from agriculture in Mali. Working Paper, Northwestern University, Evanston, IL, USA.
BenYishay, A., & Mobarak, A. M. (2018). Social learning and incentives for experimentation and communication. The Review of Economic Studies, 86(3), 976–1009.
BenYishay, A., Jones, M. R., Kondylis, F., & Mobarak, A. M. (2016). Are gender differences in performance innate or socially mediated? Policy Research Working Paper Series 7689, The World Bank.
Berge, L. I. O., Bjorvatn, K., & Tungodden, B. (2014). Human and financial capital for microenterprise development: Evidence from a field and lab experiment. Management Science, 61(4), 707–722.
Bernard, T., De Janvry, A., Mbaye, S., & Sadoulet, E. (2016). Product market reforms and technology adoption by Senegalese onion producers. Working Paper Series, qt9wj41042, Department of Agricultural & Resource Economics, UC Berkeley.
Biggeri, M., Burchi, F., Ciani, F., & Herrmann, R. (2018). Linking small-scale farmers to the durum wheat value chain in Ethiopia: Assessing the effects on production and wellbeing. Food Policy, 79(C), 77–91.
Blair, R., Fortson, K., Lee, J., & Rangarajan, A. (2013, August). Should foreign aid fund agricultural training? Evidence from Armenia. Mathematica Policy Research Working Paper, 19.
Bloom, N., Draca, M., & Van Reenen, J. (2016). Trade induced technical change? The impact of Chinese imports on innovation, IT and productivity. The Review of Economic Studies, 83(1), 87–117.
Bloom, N., Eifert, B., Mahajan, A., McKenzie, D., & Roberts, J. (2013). Does management matter? Evidence from India. The Quarterly Journal of Economics, 128(1), 1–51.
Brooks, W., Donovan, K. M., & Johnson, T. (2016). The dynamics of interfirm skill transmission among Kenyan microenterprises. Working Paper, Helen Kellogg Institute for International Studies.
Bruhn, M., Karlan, D., & Schoar, A. (2018). The impact of consulting services on small and medium enterprises: Evidence from a randomized trial in Mexico. Journal of Political Economy, 126(2), 635–687.
Bruhn, M., & Zia, B. (2013). Stimulating managerial capital in emerging markets: The impact of business training for young entrepreneurs. Journal of Development Effectiveness, 5(2), 232–266.
Cai, J., & Szeidl, A. (2018). Interfirm relationships and business performance. The Quarterly Journal of Economics, 133(3), 1229–1282.
Campos, F., Frese, M., Goldstein, M., Iacovone, L., Johnson, H. C., McKenzie, D., & Mensmann, M. (2017). Teaching personal initiative beats traditional training in boosting small business in West Africa. Science, 357(6357), 1287–1290.
Carter, M. R., Laajaj, R., & Yang, D. (2014). Subsidies and the persistence of technology adoption: Field experimental evidence from Mozambique. National Bureau of Economic Research, No. w20465.
Casaburi, L., Kremer, M., Mullainathan, S., & Ramrattan, R. (2019). Harnessing ICT to increase agricultural production: Evidence from Kenya. Working Paper, Harvard University.
Chudnovsky, D., López, A., Rossi, M., & Ubfal, D. (2011). Evaluating a program of public funding of private innovation activities: An econometric study of FONTAR in Argentina. IDB Publications (Working Papers), 2829, Inter-American Development Bank.
Cole, S. A., & Fernando, A. (2016). Mobile'izing agricultural advice: Technology adoption, diffusion and sustainability. Harvard Business School Finance Working Paper No. 13-047.
Crouzet, N., Gupta, A., & Mezzanotti, F. (2019). Shocks and technology adoption: Evidence from electronic payment systems. Working Paper, Northwestern University.
Cruz, M., Bussolo, M., & Iacovone, L. (2018). Organizing knowledge to compete. Journal of International Economics, 111(C), 1–20.
Dalton, P., Pamuk, H., van Soest, D., Ramrattan, R., & Uras, B. (2018). Payment technology adoption by SMEs: Experimental evidence from Kenya's mobile money. DFID Working Paper, Tilburg University.
Dalton, P., Zia, B., Rüschenpöhler, J., & Uras, B. (2018). Learning business practices from peers: Experimental evidence from small-scale retailers in an emerging market. DFID Working Paper, Tilburg University.
de Mel, S., McKenzie, D., & Woodruff, C. (2012). Business training and
female enterprise startup, growth, and dynamics: Experimental evidence
from Sri Lanka. Policy Research Working Papers, The World Bank.
https://doi.org/10.1596/1813-9450-6145
de Rochambeau, G. (2017). Monitoring and intrinsic motivation: Evidence
from Liberia's trucking firms. Working Paper, Columbia University.
Drexler, A., Fischer, G., & Schoar, A. (2014). Keeping it simple: Financial literacy and rules of thumb. American Economic Journal: Applied Economics, 6(2), 1–31.
Duflo, E., Kremer, M., & Robinson, J. (2011). Nudging farmers to use fertilizer: Theory and experimental evidence from Kenya. American Economic Review, 101(6), 2350–2390.
Emerick, K., de Janvry, A., Sadoulet, E., & Dar, M. H. (2016a). Technological innovations, downside risk, and the modernization of agriculture. American Economic Review, 106(6), 1537–1561.
Emerick, K., de Janvry, A., Sadoulet, E., & Dar, M. (2016b). Identifying early adopters, enhancing learning, and the diffusion of agricultural technology. Public Documents, World Bank, 3, 3315.
Fafchamps, M., & Minten, B. (2012). Impact of SMS-based agricultural information on Indian farmers. The World Bank Economic Review, 26(3), 383–414.
Feder, G., Murgai, R., & Quizon, J. (2003). Sending farmers back to school:
The impact of farmer field schools in Indonesia. Policy Research
Working Paper No. 3022, The World Bank.
Field, E., Jayachandran, S., & Pande, R. (2010). Do traditional institutions
constrain female entrepreneurship? A field experiment on business
training in India. American Economic Review,100(2), 125129.
Fischer, E., & Qaim, M. (2012). Linking smallholders to markets:
Determinants and impacts of farmer collective action in Kenya.
World Development,40(6), 12551268.
Fishman,R.,Smith,S.C.,Bobić, V., & Sulaiman, M. (2017). How sustainable are
benefits from extension for smallholder farmers? Evidence from a randomized
phaseout of the BRAC program in Uganda. Working Paper No.
10641, IZA.
Francesconi,G.N.,&Heerink,N.(2010).Ethiopian agricultural cooperatives in
an era of global commodity exchange: Does organisational form matter?
Journal of African Economies,20(1), 153177.
Freudenreich, H., & Mußhoff, O. (2018). Insurance for technology adoption:
An experimental evaluation of schemes and subsidies with maize
farmers in Mexico. Journal of Agricultural Economics,69(1), 96120.
Gine, X., & Mansuri, G. (2014). Money or ideas? A field experiment on
constraints to entrepreneurship in rural Pakistan. World Bank Policy
Research Working Paper No. 6959. Available from SSRN: https://
ssrn.com/abstract=2461015
Giné, X., & Dean, Y. (2009). Insurance, credit, and technology adoption:
Field experimental evidence from Malawi. Journal of Development
Economics,89(1), 111.
Hardy, M., & McCasland, J. (2018). It takes two: Experimental evidence on
the determinants of technology diffusion. Working Paper, University of
British Columbia.
Hanna, R., Mullainathan, S., & Schwartzstein, J. (2014). Learning through
noticing: Theory and evidence from a field experiment. The Quarterly
Journal of Economics,129(3), 13111353.
Higgins, S. (2019). Financial technology adoption and retailer competition.
Working Paper, Northwestern University.
Higuchi, Y., Mhede, E. P., & Sonobe, T. (2019). Shortand mediumrun
impacts of management training: An experiment in Tanzania. World
development,114, 220236.
Hossain,M.,Malek,M.A.,Hossain,M.A.,Reza,M.H.,&Ahmed,M.S.(2016).
Impact assessment of credit program for the tenant farmers in Bangladesh:
Evidence from a field experiment. Working Paper No. CIRJEF1025.
CIRJE, Faculty of Economics, University of Tokyo.
Iacovone, L., Maloney, W. F., & Mckenzie, D. J. (2019). Improving
management with individual and groupbased consulting: Results
from a randomiz ed experiment in Colombia. Policy Research
Working Papers, The World Bank. https://doi.org/10.1596/1813-
9450-8854
Islam, A., Ushchev, P., Zenou, Y., & Zhang, X. (2018). The value of information in
technology adoption: Theory and evidence from Bangladesh.CEPR
Discussion Paper No. DP13419, Centre for Economic Policy and
Research.
Islam, M. (2014). Can a ruleofthumb tool improve fertilizer management?
Experimental evidence from Bangladesh. Working Paper, Harvard
University, Cambridge, MA (2014).
Jack, B. K., Oliva, P., Severen, C., Walker, E., & Bell, S. (2015). Technology
adoption under uncertainty: Takeup and subsequent investment in Zambia.
NBER Working Papers, 21414, National Bureau of Economic
Research, Inc.
Karlan, D., & Valdivia, M. (2011). Teaching entrepreneurship: Impact of
business training on microfinance clients and institutions. Review of
Economics and Statistics,93(2), 510527.
Karlan, D., Osei, R., OseiAkoto, I., & Udry, C. (2014). Agricultural
decisions after relaxing credit and risk constraints. The Quarterly
Journal of Economics,129(2), 597652.
Karlan, D., Knight, R., & Udry, C. (2015). Consulting and capital
experiments with microenterprise tailors in Ghana. Journal of
Economic Behavior & Organization,118, 281302.
Kelley, E. M., Lane, D., & Schonholzer, D. (2018). The impact of monitoring
technologies on contracts and employee behaviour: Experimental
evidence from Kenya's transit industry. Working Paper, Corpus ID:
199524660, Semantic Scholar.
Kiiza, B., & Pederson, G. (2012). ICTbased market information and
adoption of agricultural seed technologies: Insights from Uganda.
Telecommunications Policy,36(4), 253259.
Kondylis, F., Valerie, M., & Zhu, J. (2017). Seeing is believing? Evidence
from an extension network experiment. Journal of Development
Economics,125(C), 120.
Larochelle, C., Alwang, J., Travis, E., Barrera, V. H., & Dominguez Andrade, J. M.
(2019). Did you really get the message? Using text reminders to
stimulate adoption of agricultural technologies. The Journal of
Development Studies,55(4), 548564.
Leuveld, K., Nillesen, E., Pieters, J., Ross, M., Voors, M., & Wang Soone, E.
(2018). Agricultural extension and input subsidies to reduce food insecurity.
Evidence from a field experiment in the Congo. MERIT Working Papers
009, United Nations UniversityMaastricht Economic and Social
Research Institute on Innovation and Technology (MERIT).
Maertens, A., Michelson, H., & Nourani, V. (2018) How do farmers learn from
extension services? Evidence from Malawi. Working Paper. Available at
SSRN at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3321171
Magnan, N., Spielman, D. J., Lybbert, T. J., & Gulati, K. (2013). Leveling with
friends: Social networks and indian farmers demand for agricultural
custom hire services. IFPRI Discussion Papers 1302, International
Food Policy Research Institute (IFPRI).
Mano, Y., Iddrisu, A., Yoshino, Y., & Sonobe, T. (2012). How can micro and
small enterprises in subSaharan Africa become more productive?
The impacts of experimental basic managerial training. World
Development,40(3), 458468.
Miyata, S., & Sawada, Y. (2007). Learning, risk and credit in households'
new technology investments: The case of aquaculture in rural
Indonesia. Working Paper. International Food Policy Research
Institute (IFPRI).
Nakasone, E., & Torero, M. (2014). Soap operas for female micro
entrepreneur training. MPRA Paper 61302, University Library of
Munich, Germany.
Ogunniyi, A., Oluseyi, O. K., Adeyemi, O., Kabir, S. K., & Phillips, F. (2017).
Scaling up agricultural innovation for inclusive livelihood and
productivity outcomes in SubSaharan. Africa: The case of Nigeria.
African Development Review,29(S2), 121134.
Praneetvatakul, S., & Waibel, H. (2006). Impact assessment of farmer field
school using a multiperiod panel data model. Annual Meeting, August
1218, 2006, Queensland, Australia 25499, International
Association of Agricultural Economists.
Rotondi, V., Bonan, J., & Pareglio, S. (2015). Extension services, production and welfare: Evidence from a field experiment in Ethiopia. University of Milan-Bicocca Department of Economics, Management and Statistics Working Paper No. 31. Available from SSRN: https://ssrn.com/abstract=2678961 or https://doi.org/10.2139/ssrn.2678961
Shikuku, K. M. (2019). Information exchange links, knowledge exposure, and adoption of agricultural technologies in northern Uganda. World Development, 115, 94–106.
Sonobe, T., Suzuki, A., Otsuka, K., & Nam, V. H. (2011). KAIZEN for managerial skills improvement in small and medium enterprises: An impact evaluation study in a knitwear cluster in Vietnam. Working Papers 29, Development and Policies Research Center (DEPOCEN).
Tan, H. (2009). Evaluating SME support programs in Chile using panel firm data. World Bank Policy Research Working Paper Series No. 5082.
Tjernström, E. (2017). Learning from others in heterogeneous environments. ATAI Working Paper, University of Wisconsin, Madison.
Valdivia, M. (2015). Business training plus for female entrepreneurship? Short- and medium-term experimental evidence from Peru. Journal of Development Economics, 113, 33–51.
Van Campenhout, B., Spielman, D. J., & Lecoutere, E. (2018). Information and communication technologies (ICTs) to provide agricultural advice to smallholder farmers: Experimental evidence from Uganda (IFPRI Discussion Paper 1778). International Food Policy Research Institute.
Additional references
Allcott, H. (2015). Site selection bias in program evaluation. The Quarterly Journal of Economics, 130(3), 1117–1165.
Banerjee, A. V., Duflo, E., Glennerster, R., & Kinnan, C. (2015). The miracle of microfinance? Evidence from a randomized evaluation. American Economic Journal: Applied Economics, 7(1), 22–53.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470743386
Cirera, X., & Maloney, W. F. (2017). The innovation paradox: Developing country capabilities and the unrealized promise of technological catch-up. World Bank Group Publication.
Cirera, X., Cruz, M., Beisswenger, S., & Schueler, G. (2017). Technology adoption in developing countries in the age of industry 4.0. Unpublished manuscript, World Bank Group.
Conley, T. G., & Udry, C. R. (2010). Learning about a new technology: Pineapple in Ghana. American Economic Review, 100(1), 35–69.
Das, S., Krishna, K., Lychagin, S., & Somanathan, R. (2013). Back on the rails: Competition and productivity in state-owned industry. American Economic Journal: Applied Economics, 5(1), 136–162.
Dehejia, R. (2015). Experimental and non-experimental methods in development economics: A porous dialectic. Journal of Globalization and Development, 6(1), 47–69.
Foster, A., & Rosenzweig, M. (2010). Microeconomics of technology adoption. Annual Review of Economics, 2(1), 395–424.
Foster, L., Haltiwanger, J., & Syverson, C. (2008). Reallocation, firm turnover, and efficiency: Selection on productivity or profitability? American Economic Review, 98(1), 394–425.
Fuglie, K., Gautam, M., Goyal, A., & Maloney, W. F. (2019). Harvesting prosperity: Technology and productivity growth in agriculture. World Bank. https://openknowledge.worldbank.org/handle/10986/32350
Geroski, P. A. (2000). Models of technology diffusion. Research Policy, 29(4–5), 603–625.
Hausmann, R., & Rodrik, D. (2003). Economic development as self-discovery. Journal of Development Economics, 72(2), 603–633.
Herbert-Copley, B. (1990). Technical change in Latin American manufacturing firms: Review and synthesis. World Development, 18(11), 1457–1469.
Hjort, J., & Poulsen, J. (2019). The arrival of fast internet and employment in Africa. American Economic Review, 109(3), 1032–1079.
Hombrados, J., & Waddington, H. (2012). A tool to assess risk of bias in experimental and quasi-experimental research. Mimeo, 3ie.
IADB (2016). La política de innovación en América Latina y el Caribe: Nuevos caminos. Banco Interamericano de Desarrollo. https://publications.iadb.org/handle/11319/7705
Keller, W. (2004). International technology diffusion. Journal of Economic Literature, 42(3), 752–782.
McKenzie, D. (2017). Identifying and spurring high-growth entrepreneurship: Experimental evidence from a business plan competition. American Economic Review, 107(8), 2278–2307.
Meager, R. (2019). Understanding the average impact of microcredit expansions: A Bayesian hierarchical analysis of seven randomized experiments. American Economic Journal: Applied Economics, 11(1), 57–91.
Obayelu, A., Ajayi, O., Oluwalana, E., & Ogunmola, O. (2017). What does literature say about the determinants of adoption of agricultural technologies by smallholders farmers? Agricultural Research & Technology Open Access Journal, 6(1), 555676.
Oliveira, T., & Martins, M. F. (2011). Literature review of information technology adoption models at firm level. The Electronic Journal Information Systems Evaluation, 14(1), 110–121.
Piza, C., Cravo, T. A., Taylor, L., Gonzalez, L., Musse, I., Furtado, I., Sierra, A. C., & Abdelnour, S. (2016). The impacts of business support services for small and medium enterprises on firm performance in low- and middle-income countries. Campbell Systematic Reviews, The Campbell Collaboration, 12, 1–167.
Rosenberg, N. (1972). Factors affecting the diffusion of technology. Explorations in Economic History, 10(1), 3–33.
Silva, L., Fava, R., & Dias, A. (2015). Adoption of precision agriculture technologies by farmers: A systematic literature review and proposition of an integrated conceptual framework. Unpublished, IFAMA World Conference 2015.
Suri, T. (2011). Selection and comparative advantage in technology adoption. Econometrica, 79(1), 159–209.
Verhoogen, E., Alfaro-Serrano, D., Balantrapu, T., & Goicoechea, A. (2018). Protocol: Interventions to promote technology adoption in firms: A systematic review. Campbell Systematic Reviews, 14(1), 1–30.
Verhoogen, E. (2020). Firm-level upgrading in developing countries. CDEP-CGEG Working Paper No. 83, Columbia University.
Waddington, H., Snilstveit, B., Hombrados, J. G., Vojtkova, M., Anderson, J., & White, H. (2014). Farmer field schools for improving farming practices and farmer outcomes in low- and middle-income countries: A systematic review. Campbell Systematic Reviews, The Campbell Collaboration, 10(1), i–335. https://doi.org/10.4073/CSR.2014.6
Waddington, H., Sonnenfeld, A., Finetti, J., Gaarder, M., John, D., & Stevenson, J. (2019). Citizen engagement in public services in low- and middle-income countries: A mixed-methods systematic review of participation, inclusion, transparency and accountability (PITA) initiatives. Campbell Systematic Reviews, 15(1–2), e1025.
World Bank. (2017). Expanding market opportunities & enabling private initiative for dynamic economies. Trade and Competitiveness Global Practice, World Bank Group.
How to cite this article: Alfaro-Serrano, D., Balantrapu, T., Chaurey, R., Goicoechea, A., & Verhoogen, E. (2021). Interventions to promote technology adoption in firms: A systematic review. Campbell Systematic Reviews, 17, e1181. https://doi.org/10.1002/cl2.1181
APPENDIX A: MANUAL SEARCHES
In addition to the electronic search, we conducted a manual search implemented between May 2018 and June 2020. It entailed the following three procedures:
1. Search of websites of organizations working on the topic
2. Review of the references of each of the papers included after full text screening
3. Review of citations of the included papers
The three procedures were executed as presented in Figure A1. Once step 5 was completed, steps 2–5 were repeated to conduct reference and citation searches on additional relevant papers until no new relevant papers were found. For each procedure, the screening procedure outlined in Figure 2 of the protocol was used.
1. Website search
The website search involved screening 2258 papers, and 32 of them were included based on full-text screening. Ten organizations were selected when the protocol was originally conceived. The search process for the websites imitated the database search procedure as much as possible, but the search strategy had to be tailored to each website. For example, some websites only allowed a keyword search while others allowed for an advanced search. The search method for each website is detailed in Table A1 below.
2. Reference search
For each of the included papers, the references cited were first screened by title and abstract and then by full text, using the criteria described in Section 3.1. Abstracts and full texts were accessed through Google Scholar and the World Bank/IMF electronic library.
3. Citation search
Google Scholar was the primary tool used for the citation search. For each paper, a search within the citation results was conducted using the keywords "technology adoption". Given that complex Boolean searches do not function in Google Scholar and that the search could not be restricted to the title or abstract, "technology adoption" was used as the search term. This search term prioritized papers using the phrase "technology adoption", but it also yielded results that use the terms "technology" or "adoption" prominently. The citation search relied on the Google Scholar algorithm to prioritize relevant results, and the search was cut off once relevant papers stopped appearing. For example, suppose that a certain paper had 1000 citations and searching "technology adoption" within the citations yielded 300 results. After a few pages, the results would become less and less relevant, and the screening stopped once the screener judged that no subsequent results were likely to be included.
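This stopping rule can be expressed compactly in code. The following Python sketch is purely illustrative: the records list, the is_relevant judgment, and the consecutive-miss threshold are hypothetical stand-ins for the ordered Google Scholar results, the screener's assessment, and the informal cut-off used in practice.

# Illustrative sketch of the citation-screening cut-off rule described above.
# `records` and `is_relevant` are hypothetical stand-ins, not part of the review's tooling.
def screen_citations(records, is_relevant, max_consecutive_misses=20):
    """Screen citing papers in engine order, stopping after a long run
    of results judged irrelevant, as in the manual procedure above."""
    kept = []
    misses = 0
    for record in records:
        if is_relevant(record):
            kept.append(record)
            misses = 0  # any relevant hit resets the run
        else:
            misses += 1
            if misses >= max_consecutive_misses:
                break  # cut off the search once relevant papers stop appearing
    return kept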
FIGURE A1 Manual search procedures
TABLE A1 Search methods by website
1. OECD. Boolean: from (Abstract contains 'technolog* OR manag* OR innovat* OR practice*') AND from (contains 'en') AND from (Abstract contains '(adopt* OR diffus* OR chang* OR alter*)') AND from (IGO collection contains 'OECD'), with type(s) subtype/journal OR subtype/article OR subtype/workingpaperseries OR subtype/workingpaper, published between 2003 and 2018. Approach: abstract search using advanced search in OECD iLibrary. Results: 574.
2. 3ie. Boolean: (technolog* OR manag* OR innovat* OR practice*) AND (adopt* OR diffus* OR chang* OR alter*). Approach: keyword search in the Impact Evaluation Repository. Results: 193.
3. AfDB website search. Boolean: (technolog* OR manag* OR innovat* OR practice*) AND (adopt* OR diffus* OR chang* OR alter*). Approach: keyword search in its website search bar. Results: 43.
4. AfDB publications search. Boolean: technology. Approach: keyword search in the working paper series, under publications. Results: 278.
5. ATAI. Boolean: none. Approach: no search; manual screening of titles under research publications. Results: 30.
6. AEA RCT registry. Boolean: four separate searches: technolog*, manag*, innov*, and practice*. Approach: four separate searches under advanced search, abstract field. Results: 639.
7. ADB. Boolean: (technolog* OR manag* OR innovat* OR practice*) AND (adopt* OR diffus* OR chang* OR alter*). Approach: keyword search after selecting "Papers and Briefs". Results: 96.
8. IADB. Boolean: four separate searches: technolog*, manag*, innov*, and practice*. Approach: four separate searches under working papers and co-publications. Results: 337.
9. DFID. Boolean: (technolog* OR manag* OR innovat* OR practice*) AND (adopt* OR diffus* OR chang* OR alter*). Approach: keyword search under publications. Results: 20.
10. USAID. Boolean: (technolog* OR manag* OR innovat* OR practice*) AND (adopt* OR diffus* OR chang* OR alter*). Approach: search in the Development Experience Clearinghouse, in title + language English + document type (journal article, conference proceedings/papers). Results: 48.
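The Boolean combination used in most rows of Table A1 amounts to requiring at least one technology-related stem and at least one adoption-related stem in the searched text. A minimal Python sketch of that test, for illustration only (the stems come from the table; the function itself is not part of the review's search tooling):

import re

# Stems from Table A1; the trailing * wildcards become open-ended word matches.
TECH_TERMS = re.compile(r"\b(technolog|manag|innovat|practice)\w*", re.IGNORECASE)
ADOPT_TERMS = re.compile(r"\b(adopt|diffus|chang|alter)\w*", re.IGNORECASE)

def matches_search(title, abstract=""):
    """(technolog* OR manag* OR innovat* OR practice*) AND
    (adopt* OR diffus* OR chang* OR alter*), applied to title and abstract."""
    text = f"{title} {abstract}"
    return bool(TECH_TERMS.search(text)) and bool(ADOPT_TERMS.search(text))

# Example: matches_search("Nudging farmers to use fertilizer",
#                         "evidence on technology adoption") returns True.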
APPENDIX B: ARTICLE CODING AND DATA EXTRACTION
TABLE B1 Fields included in article coding and data extraction
Publication details
Author(s)
Year of publication
Full citation
Publication type: journal article, working paper, book
Author(s) affiliation(s)
Funding sources for the research
Context of intervention or program
Region: EAP, ECA, LAC, MNA, SAR, SSA
Country
Declared goal of the intervention
If it is a deliberate government intervention, ministry in charge
If it is a deliberate intervention, agency in charge of
implementation
Where did the intervention take place?
Period considered in the study
Location: urban, rural
Funding source(s) for the intervention
Total cost of the intervention (US dollars)
Population in the study
If the intervention targets a specific type of unit, definition of target
Unit of analysis in the study
Average annual output of the productive units
Average annual sales of the productive units
Average employment of the productive units
Sector(s) of operation
Fraction of the productive units owned, led, or operated by
female individuals.
Average fraction of high-skill workers in units of production
Average fraction of female workers in units of production
Ownership composition of productive units
Intervention details
Name of the intervention (if any) and brief description
Does it include an indirect financial support component?
Does it include a direct financial support component?
Does it include other direct support component?
Does it include a component affecting regulation and standards?
Study design
Methodology
Description of sample selection procedure, including units and
sample size for treatment, exposed, and comparison groups
If reported separately, treatment group sample size
If reported separately, comparison group sample size
If reported separately, indirectly exposed group sample size
Total sample size
If reported separately, attrition in treatment group
If reported separately, attrition in comparison group
If reported separately, attrition in indirectly treated group
Overall attrition
Frequency of outcome data collection
Total number of data collection periods including baseline
Baseline period
Intervention periods
Follow up periods
Duration of intervention
Spillovers
Contamination
Data for analysis
Treatment variable (indicate units or dichotomous variable values)
Outcome variable (If reported for different groups, for example, average wages for male and female employees, consider them
as different outcomes. Indicate units or dichotomous variable
values)
Technology adoption outcome? (yes/no)
Number of observations
N (total sample size)
Nc (comparison group sample size)
Nt (treatment group sample size)
MeanC_Pre: Pre-intervention comparison group mean
MeanC_Post: Post-intervention comparison group mean
sdC_Pre: Pre-intervention comparison group standard deviation
sdC_Post: Post-intervention comparison group standard deviation
MeanT_Pre: Pre-intervention treatment group mean
MeanT_Post: Post-intervention treatment group mean
sdT_Pre: Pre-intervention treatment group standard deviation
sdT_Post: Post-intervention treatment group standard deviation
sdpooled_pre: Pooled standard deviation between the treatment and control groups with equal weights, pre-intervention
Treatment effect estimated: ITT, ATET, ATE, LATE
Model (fixed effects, random effects, etc.)
Effect estimate (difference in means)
Effect estimate standard error
Effect estimate t-statistic
SMD (standardized mean difference across treatment and
control groups computed using the treatment effect size and
standard deviation)
SE(SMD) (standard error of the standardized mean difference)
R-squared
F test
RR (relative risk ratio)
SE(RR) (standard error of the relative risk ratio)
P value (p value of the treatment effect)
Likelihood ratio
Log likelihood ratio
Chi-squared
I-squared
Random effects
If dichotomous outcome: number of "successes" in control group
If dichotomous outcome: number of "successes" in treatment group
If dichotomous outcome: number of "failures" in control group
If dichotomous outcome: number of "failures" in treatment group
Effect size adjustment: adjusted or unadjusted (i.e., using
multivariate analysis) treatment effect. Where adjusted analysis,
list additional covariates included in model.
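To illustrate how the effect-size fields above relate to the extracted group statistics, the following Python sketch applies the standard formulas in Borenstein et al. (2009); the function names and arguments simply mirror the Table B1 fields and are illustrative rather than the code used for the review.

import math

def smd_and_se(mean_t_post, mean_c_post, sd_pooled, n_t, n_c):
    """Standardized mean difference (Cohen's d) and its standard error.
    A small-sample (Hedges' g) correction multiplies both by
    J = 1 - 3 / (4 * (n_t + n_c - 2) - 1)."""
    d = (mean_t_post - mean_c_post) / sd_pooled
    se_d = math.sqrt((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return d, se_d

def rr_and_se_log(successes_t, n_t, successes_c, n_c):
    """Relative risk ratio and the standard error of its logarithm,
    computed from the dichotomous-outcome counts listed above."""
    rr = (successes_t / n_t) / (successes_c / n_c)
    se_log_rr = math.sqrt(1 / successes_t - 1 / n_t + 1 / successes_c - 1 / n_c)
    return rr, se_log_rr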
APPENDIX C: TOOL FOR ASSESSMENT OF REPORTING BIASES
We assessed reporting biases with a modified version of the 3ie risk of bias tool (Hombrados & Waddington, 2012). The tool covers four dimensions: (1) reporting on key aspects of selection bias and confounding, (2) reporting on spillovers of the intervention to comparison groups, (3) reporting on SEs, and (4) reporting on Hawthorne effects and baseline data. For each of these dimensions, a set of considerations was defined, as presented in Table C1. The question whether the study addresses each consideration can receive one of three answers: "reported," "not reported," or "not applicable."
TABLE C1 Tool for assessment of reporting biases
(1) Reporting on key aspects of selection bias and confounding
RCT (1) The study reports that the assignment was done at random or describes a random component as part of the assignment
procedure.
(2a) If the study shows heterogeneous effects at the group level, it reports some evidence of the strategy for choosing groups
before the randomization.
(2b) If the study shows heterogeneous effects at the group level but no evidence of the strategy for choosing groups before
the randomization, it reports potential threats to validity of causal claims based on the analysis of heterogeneous impacts.
(3) The study reports baseline characteristics of the treatment and control groups, and/or that overall they are statistically
similar
(4) If there is some difference in baseline characteristics between treated and control groups, the study reports that this
difference is accounted for.
(5) If there is attrition, the study reports that the loss of sample units can be considered random and/or intention-to-treat estimates.
RDD (1) The study reports that the allocation is based on a threshold in a continuous variable.
(2) The study reports that the participants cannot manipulate the assignment variable.
(3) The study reports that the estimation is robust to different choices of bandwidth.
(4) The study reports that baseline characteristics are overall continuous around the assignment threshold and/or if some
baseline characteristic show a discontinuity around the assignment threshold, the study reports that this difference is
controlled for when estimating the treatment effects.
DiD (1) The study reports that the outcome variables' trends are parallel before the introduction of the treatment. This could be in the form of a table (e.g., a regression-based test) or graphs.
PSM or stat control (1) The study describes the control method used and it is based on baseline characteristics that are relevant to explain
participation.
(2) If the study reports baseline characteristics that are not used for control, it reports that these are overall statistically
similar in different groups.
IV (1) The paper acknowledges and addresses potential threats to validity (e.g., violation of exclusion restriction).
(2) The study reports the firststage regression Fstatistic.
Synthetic control (1) The study reports that the synthetic control matches the characteristics of the treated units before the treatment.
(2) The study reports robustness to the weights used to measure the difference between the characteristics of the treated unit and the synthetic control.
(2) Reporting on spillover of intervention to comparison groups
(1) The study reports whether there are likely to be spillovers to comparison group.
(2) If the study reports that there are likely to be spillovers to the comparison group, then it accounts for these in the
analysis.
(3) Reporting of standard errors (SE)
(1) If the study reports that observations can be considered independent, this is taken into account by computing heteroskedasticity-robust SEs. In all other cases, clustered SEs have been reported.
(4) Reporting on Hawthorne effects and collection of retrospective data
(1) If the data are collected in the context of the study, the study reports that the monitoring process did not affect behavior
(Hawthorne effects).
(2) If baseline information was collected retrospectively, the study reports this.
Abbreviations: DiD, difference-in-differences; IV, instrumental variables; PSM, propensity score matching; RCT, randomized controlled trial; RDD, regression discontinuity design.
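As a small illustration of how the tool's answers can be summarized, the Python sketch below (not part of the review's own coding) computes, for one study, the share of applicable items that were reported; Appendix D lists the underlying answers study by study.

def share_reported(answers):
    """`answers` maps item labels (e.g., 'RCT item 1') to 'R', 'NR', or 'NA'.
    Returns the fraction of applicable items (R or NR) that were reported."""
    applicable = [a for a in answers.values() if a in ("R", "NR")]
    if not applicable:
        return None
    return sum(a == "R" for a in applicable) / len(applicable)

# Example: share_reported({"RCT item 1": "R", "RCT item 3": "NR", "IV item 1": "NA"})
# returns 0.5.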
APPENDIX D: RESULTS OF ASSESSMENT OF REPORTING BIASES
TABLE D1 Assessment of reporting biases
Columns, grouped by dimension: (1) reporting on key aspects of selection bias and confounding: RCT items 1, 2a, 2b, 3, 4, 5; DiD item 1; PSM items 1, 2; IV items 1, 2; (2) spillovers: items 1, 2; (3) SE: item 1; (4) Hawthorne and data: items 1, 2. Each row gives one study's answers (short citation) across the items applicable to it, in that column order.
Abate et al. (2018) R R R R R NA RRRNR NA
Alem and Broussard (2013) R R NA NA NA R NA NA
Ali and Muhammad (2012)R NA NRNRNRNA NA
Ali and Rahut (2013)RNR NANANRNANA
Ambler et al. (2017) R NA NA R R R R R R NA NA
Aramburu et al. (2019) R R R R NA NR RRRNR NA
Ashraf et al. (2009) R NR NR R R R R NA R NR NA
Atkin et al. (2017) R NA NA R R R R R R NA NA
Beaman et al. (2013) R NA NA R R R NR NR R NR NA
Beaman et al. (2014) R NA NA R R R R R R NA NA
Beaman et al. (2015)
(Malawi)
RNRNRRRNA RRRNRNA
Beaman et al. (2015) (Mali) R R NA R NA R R R R R NA
BenYishay et al. (2016) R NA NA R R R R NA R R NA
BenYishay and
Mobarak (2018)
R NR NR R R R NA NA R R NA
Berge et al. (2014) R NR NR R NA R R NA R NR NA
Bernard et al. (2016) R NA NA R NA NA R R R NA NA
Biggeri et al. (2018) R NA R NA NR NA R
Blair et al. (2013) R NA NA R NA NA R NA R NR R
Bloom et al. (2013) R NA NA R R R R R R R NA
Bloom et al. (2016)R R RRRNA NA
Brooks et al. (2016) R NA NA R R R R NA R NA NA
Bruhn and Zia (2013) R R R R NA R NR NA R NR NA
Bruhn et al. (2018) R R NA R R R NR NA R NR NA
Cai and Szeidl (2018) R R R R NA R NA NA R R NA
Van Campenhout
et al. (2018)
R NR NR R NA R R NR NR NR NA
Campos et al. (2017) R R R R NA R NR NA R R NA
Carter et al. (2014) R NA NA R R R R R R NA R
Casaburi et al. (2019) R R NA R R R RRRNA NA
Chudnovsky et al. (2011) R NR NA R NA R
Cole and Fernando (2016) R NR NR R NA R R R R NR NA
Crouzet et al. (2019) R NA NA R NA NA
Cruz et al. (2018) R R NA NA R NA NA
Dalton, Zia, et al. (2018)
(Indonesia)
R R R R NA R R NA R NR NA
Dalton, Pamuk, et al. (2018)
(Kenya)
R NA NA R R R NR NA R NR NA
de Rochambeau (2017) R NR NR R NA NA NR NA R R NA
Drexler et al. (2014) R R NA R R R RRRR NA
Duflo et al. (2011) R NA NA R R R R NA R R NA
Emerick et al. (2016a) (AER) R NA NA R NA R R R R NA R
Emerick et al. (2016b) (WB) R NA NA R R NA NR NA R NR NA
Tjernström (2017) R NR NR R R NA R R R NR R
Fafchamps and
Minten (2012)
R NR NR R NA R RRRNA NA
Feder et al. (2003)R NA NA R NA NA
Field et al. (2010) R NR NR R NA R NR NR R NR NA
Fischer and Qaim (2012)R R RRRNA NA
Fishman et al. (2017) R NR NR R NA R R NA R NA NA
Francesconi and
Heerink (2010)
R NR R NA NR NA NA
Freudenreich and
Mußhoff (2018)
RNRNRRRNA NRNANRNRR
Giné and Yang (2009) R NR NR R R NA R NA R NR NA
Gine and Mansuri (2014) R R NA R NA R NR NA R NR NA
Hardy and
McCasland (2018)
RNANARRNA RRRNRNA
Hanna et al. (2014) R NR R R R NA NA NA R R NA
Higuchi et al. (2019) R NA NA R R R R R R NR NA
Tan (2009) R R NA NR NA NA
Hossain et al. (2016) R NR NR R R R R NA R NA NA
Iacovone et al. (2019) R NA NA R R R R NA R NR NA
Islam et al. (2018) R NR NR R NA NA R R R NA NA
Jack et al. (2015) R NA NA R NA R R NA R NA NA
Shikuku (2019) R NA NA NA R NR NA
Karlan and Valdivia (2011) R NR NR R NA R NR NA R NR NA
Karlan et al. (2014) R NR NR R R NR NR NA R R R
Karlan et al. (2015) R R NA R NA R R NA R R NA
Kelley et al. (2018) R NA NA R NA NA NR NA R NR NA
Kiiza and Pederson (2012) R NA NA NA R NA NA
Kondylis et al. (2017) R NR R R R R RRRNR R
Larochelle et al. (2019) R NA NA R R R R NA R NA NA
Leuveld et al. (2018) R NR R R R R RRRNA NA
Macdonald et al. (2018) R R NA R R R R NA R NR NA
Maertens et al. (2018) R NR NR NR NA NR NR NA R NR NA
Magnan et al. (2013) R NR NR R R NA R R NR NR NA
Islam, M. (2014) R NR NR R R R R R R NR R
Mano et al. (2012) R NR NR R NA R R R R R NA
Valdivia (2015) R R NA R NA R NR NA R NR NA
de Mel et al. (2012) R NA NA R NA R R NA R NR NA
Miyata and Sawada (2007) NA NA NR NA R
Nakasone and Torero (2014) R NA NA R R NR NR NA R NR R
Ogunniyi et al. (2017)RNA NANANRNANA
Praneetvatakul and
Waibel (2006)
R R NA NR NA NA
Rotondi et al. (2015) R R NA R R R RRRNA R
Higgins (2019)R RRRNA NA
Sonobe et al. (2011) R NA NA R NA NA NR NA NR NR NA
Abbreviations: NA, not applicable; NR, not reported; R, reported.