CHAPTER 10
HOW ORGANIZATIONS MANAGE
CROWDS: DEFINE, BROADCAST,
ATTRACT, AND SELECT
Linus Dahlander, Lars Bo Jeppesen and
Henning Piezunka
ABSTRACT
Crowdsourcing – a form of collaboration across organizational boundaries –
provides access to knowledge beyond an organization’s local knowledge base.
Integrating work on organization theory and innovation, the authors first
develop a framework that characterizes crowdsourcing as a sequential
process, through which organizations (1) define the task they wish to have
completed; (2) broadcast to a pool of potential contributors; (3) attract a
crowd of contributors; and (4) select among the inputs they receive. For each
of these phases, the authors identify the key decisions organizations make, pro-
vide a basic explanation for each decision, discuss the trade-offs organizations
face when choosing among decision alternatives, and explore how organizations
may resolve these trade-offs. Using this decision-centric approach, the authors
continue by showing that there are fundamental interdependencies in the
process that make the coordination of crowdsourcing challenging.
Keywords: Inter-organizational collaboration; crowdsourcing;
innovation; interdependence; search; organization
Many organizational theorists have argued that specialization is inevitable to deal
with growing complexity inside organizations. The traditional coping mechanism
Managing Inter-organizational Collaborations: Process Views
Research in the Sociology of Organizations, Volume 64, 239–270
Copyright © 2019 by Emerald Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0733-558X/doi:10.1108/S0733-558X20190000064016
has been to create new subunits or functions within organizations, where people
share the same language, to reduce this complexity (Levinthal & March, 1993).
This often creates rifts between increasingly specialized units that have difficulty
communicating with one another, making it important to solve coordination
problems between such specialized units. But, the challenge often goes beyond
efcient organization within a rm – one also has to organize across organiza-
tions (Sydow & Windeler, 1998). According to Hayek (1945), knowledge is widely
distributed across society, which implies that all relevant knowledge cannot be
found in one single organization. In fact, developing new complex products and
services often involves spanning normative, technical, geographical, and organ-
izational boundaries to identify knowledge that can be recombined (Santos &
Eisenhardt, 2005; Schreyögg & Sydow, 2011). Motivated by these observations, a
great deal of research on inter-organizational collaborations has emerged in the
last decades (Powell, Koput, & Smith-Doerr, 1996; Sydow, Schüßler, & Müller-
Seitz, 2016). We bring attention to a new particular form of collaboration that
transcends organizational boundaries – collaborations with crowds (Afuah &
Tucci, 2012; Felin, Lakhani, & Tushman, 2017; Ghezzi, Gabelloni, Martini, &
Natalicchio, 2018; Jeppesen & Lakhani, 2010; Puranam, Alexy, & Reitzig, 2011).
A key feature of working with crowds is that participants self-select into the col-
laboration without a central hierarchy assigning people to tasks and collaboration
partners. This approach marks a difference from other forms of collaborations
spanning boundaries that entail matching on both ends (Sydow et al., 2016).
Crowdsourcing is dened here as inviting an undened group of contributors to
self-select to work on tasks (for a comparison for different types of crowdsourcing,
see Ghezzi et al., 2018; Majchrzak & Malhotra, 2013). Contributors can be both
individuals, as is most often the case, as well as teams or even whole organiza-
tions. Governments and companies have long used crowdsourcing as a well of
ideas in order to advance such diverse issues as industrializing land, controlling
infectious diseases, and mass-producing and conserving food. For example, in
1714, the British government offered the Longitude Prize to elicit ideas on how
to solve one of the most pressing scientific problems of the time: determining
longitude at sea. Similarly, in 1869, Napoleon III used the Margarine Prize to
overcome the era’s butter shortage. In the last two decades, an increasing number
of organizations have begun to engage in crowdsourcing to solve more contem-
porary issues (Brunt, Lerner, & Nicholas, 2012). This recent widespread adoption
of crowdsourcing is rooted in (1) the constant need for innovation which prompts
organizations to search for knowledge beyond their boundaries; (2) the emer-
gence of the Internet, which has expanded the potential reach of crowdsourcing;
and (3) the decline in information and computation costs, which has facilitated
sophisticated problem-solving and innovation at the individual level (Baldwin &
von Hippel, 2011; Faraj, von Krogh, Monteiro, & Lakhani, 2016; von Hippel, 2005;
von Hippel & von Krogh, 2003).
Crowdsourcing constitutes a new form of organizing, which differs from con-
ventional forms in terms of task division, task allocation, information provision,
and reward distribution (Puranam et al., 2011). Inspired by the behavioral theory
of the rm who views decisions as the key unit for understanding organizations
How Organizations Manage Crowds 241
(Cyert & March, 1963), we develop a process model of crowdsourcing focusing on
the key decisions organizations face. We separate crowdsourcing into four phases,
during which organizations (1) define the inputs they are seeking; (2) broadcast
tasks to a pool of potential contributors; (3) attract a crowd of contributors; and
(4) select among the inputs they receive. For each of these phases, we identify
the key decisions organizations make, provide a basic explanation for each deci-
sion, and discuss the trade-offs organizations face when choosing among decision
alternatives. Although the basic model is sequential – the simplest form of
interdependence in a process (Berends, Reymen, Stultiëns, & Peutz, 2011) – we also
elaborate on how decisions are interdependent with other decisions, as illustrated in
Fig. 10.1. These interdependencies mean that organizing crowdsourcing
can be challenging. An overview of the different stages and the interdependencies
can be found in the Appendix. The interdependencies are elaborated in the right-hand
column of Appendix Table 10.A1, which summarizes the major decisions.
DEFINE
Denition of the desired input is crucial as it is one of few pieces of informa-
tion the crowd receives from an organization initially. Once broadcasted, input
denitions are difcult to change. The process of dening tasks for crowdsourc-
ing and, therefore, differs from the ongoing dialogue that typically takes place
Fig. 10.1: Illustrating the Stages and the Interdependencies. Note: Fig. 10.1
illustrates the main four stages in the top. More importantly, it illustrates the main
decisions associated with each stage, and how those are interdependent on other
decisions. The key insight is that decisions taken earlier have implications in later
stages, and that key to outcomes of crowdsourcing is to a large extent resting on
taking these interdependencies into account.
242 LINUS DAHLANDER ET AL.
when ideas are developed internally (Baer, Dirks, & Nickerson, 2013). We suggest
that the process of dening a task requires an organization to make three major
decisions: (1) the type of knowledge that is sought; (2) the degree to which the
task is decomposed; and (3) the degree to which the desired input is specied.
Type of Knowledge: Problem- or Solution-related
Denition and Explanation
An organization has to decide whether it is seeking solution- or problem-related
knowledge (von Hippel, 1988). Scholars of crowdsourcing have studied how
organizations crowdsource problem-related knowledge (Bayus, 2013; Dahlander
& Piezunka, 2014) as well as how they crowdsource solution-related knowledge
(Boudreau & Lakhani, 2012; Jeppesen & Lakhani, 2010). Organizations seeking
solution-related knowledge ask the crowd for solutions to specific problems.
For example, Colgate-Palmolive turned to crowdsourcing to find a solution to
“get fluoride powder into toothpaste tubes without it dispersing into the
atmosphere” (Ideaconnection, 2009). Experts within the organization had invested
significant resources to find a chemistry-based solution but had ultimately failed.
When the organization turned to crowdsourcing, however, the problem caught
the eye of a retired physicist, who was able to immediately identify a solution.
Because of his background in a different domain of expertise, the physicist
understood that he could coax the fluoride into the tube by grounding the tube
and applying a positive charge to the fluoride (Boudreau & Lakhani, 2009).
By contrast, organizations seeking problem-related knowledge ask the crowd
about the types of problems they should solve. For example, during its “My
Starbucks Idea” promotion, Starbucks asked its customers for ideas for new
flavors and products and to identify any problems Starbucks could address.
Trade-off
Crowdsourcing solution-related knowledge fosters organizations’ efficiency, since
it improves how a known problem is solved. As such, this approach can help
organizations find new solutions to existing problems by attracting large and
diverse crowds representing a variety of perspectives (Jeppesen & Lakhani, 2010).
This can be done in a cost-efficient manner as the crowd pursues multiple
possible solutions, and the organization only implements the most promising ones
(Boudreau, Lacetera, & Lakhani, 2011). The potential downside of using crowd-
sourcing to gain solution-related knowledge is that organizations draw attention
to their problems. Unless the call for solutions is abstracted or obfuscated, the
public availability of such information may deter potential customers and grant
competitors insights into an organization’s weaknesses. Crowdsourcing also
requires organizations to adjust internally. For example, soliciting solutions from
external sources necessitates a change in the role of internal R&D departments.
Internal developers and researchers stop being problem-solvers and become,
instead, managers and stewards of the contributors in the crowd (Boudreau &
Lakhani, 2009; Lifshitz-Assaf, 2018). Thus, while crowdsourced solutions are
likely to increase efficiency, the process of gathering these solutions imposes
certain requirements that management may not be able to readily meet.
Crowdsourcing problem-related knowledge fosters an organization’s effectiveness,
since it can increase the organization’s chances of finding relevant problems.
Organizations may have access to novel technologies (or solutions), but lack
insight into the problems for which these technologies are relevant (Catalini &
Tucker, 2016; Gruber, MacMillan, & Thompson, 2008; Shane, 2000). Crowds can
help to identify profitable applications for such technologies. However, engaging
a crowd in the search for problem-related knowledge is not without
challenges. Crowds may point out problems organizations wish to neither prioritize
nor discuss publicly. For example, crowds may emphasize the need to improve
customer service, lower prices, or become more sustainable. On such issues, the
interests of organizations and crowds (which might comprise, in part, the organi-
zation’s customers) might be misaligned, potentially resulting in public disagree-
ments between organizations and their primary stakeholders (Garud, Jain, &
Kumaraswamy, 2002). To innovate, organizations need to match knowledge about
problems (i.e., unsolved needs in the marketplace) with solutions (von Hippel,
1986; von Hippel & von Krogh, 2015). Organizations can potentially combine
the two alternatives sequentially (e.g., by first crowdsourcing problem-related and
then solution-related knowledge).
Specicity of Task Denition
Denition and Explanation
Organizations often struggle to determine the optimal level of specificity when
defining a crowdsourcing task (Fernandes & Simon, 1999). The specificity with
which a task is defined outlines (and potentially constrains) the solution space.
For example, consider Netflix’s search for ways to improve the effectiveness
of its movie recommendation system. Though, in reality, Netflix provided the
crowd with very few details, it could alternatively have chosen to specify that it
was only seeking specific types of solutions (e.g., solutions building on machine
learning).
Trade-off
Over specication can lead to unnecessary and potentially detrimental con-
straints (Erat & Krishnan, 2012). In the classic example of nding longitude at
sea, Isaac Newton, who was a member of the longitude board that served as the
evaluation committee, imposed limitations on the problem formulation that, in
hindsight, were unnecessary (Sobel, 1995). Specically, he stated that a relevant
solution based on principles other than those of astronomy was unthinkable.
As history reveals, he was mistaken, and the problem was eventually solved using
the principles of clockwork. In a study of the InnoCentive problem-solving plat-
form, Jeppesen and Lakhani (2010) demonstrated that marginality in problem-
solving is related to success in crowdsourcing. For instance, in one case, the
optimal solution for a problem in toxicology originated in the field of protein
crystallography. Hence, to accommodate the impossibility of predicting the
possible origins of a solution, it is important not to impose any more restrictions
(e.g., through terminologies, jargon, or framings that reflect the approach(es)
and/or bias(es) of a specific field or fields) on a task than necessary.
Organizations can also under-specify tasks. If an organization fails to sufficiently
specify its desired inputs, the crowd’s contributions might be unusable
(Baer et al., 2013). For example, imagine an engine manufacturer seeking a
solution for its cylinder design. If the crowd’s input fails to meet certain criteria
(e.g., size, weight, heat), it might be useless. Defining such specifications requires
significant upfront organizational effort. For example, the Progressive X-Prize,
which sought to produce an energy-efficient car that could pass U.S. road safety
specifications, had more than 50 pages of documentation and rules. Such efforts
may, however, be necessary to ensure that the crowd focuses on relevant and
feasible search domains.
If organizations seek more radical input and/or path-breaking solutions, they
may be better served by opting for less specification. Since many organizations
engage in crowdsourcing to access distant knowledge of which they are not aware
(Afuah & Tucci, 2012; Jeppesen & Lakhani, 2010), organizations’ specifications
may unintentionally exclude valuable input. By providing only minimal
specifications, organizations ensure the deployability of the crowd’s contribution,
while maximizing freedom within these specifications. Such minimal specifications
allow crowd contributors to redefine and interpret a problem through the lenses
of their own expertise and local knowledge (which are likely to be distant from
the organization) (Winter, Cattani, & Dorsch, 2007). Avoiding unnecessary
specifications can increase the number of people willing to engage in a task. In sum,
while a lower degree of specification is likely to increase the share of unusable
input, it increases the chances of sourcing extremely valuable input (Boudreau
et al., 2011).
Decomposition: Aggregated or Decomposed Tasks
Denition and Explanation
When organizations reach out to crowds, they decide on the degree to which
they wish to keep tasks aggregated or decompose them into chunks (Baumann
& Siggelkow, 2013; Ethiraj & Levinthal, 2004; Rivkin & Siggelkow, 2003).
For example, DARPA’s Grand Challenge on autonomous driving sought a com-
plete vehicle; however, DARPA could alternatively have decomposed the task and
crowdsourced the various components of a self-driving car (e.g., chassis, power-
train, scanner, software). Determining how to divide tasks and account for inter-
dependencies has been shown to be a crucial problem in organizational design
and product architecture (Ethiraj & Levinthal, 2004). Crowdsourcing aggravates
this problem, since the coordination mechanisms that exist within organizational
boundaries (Kotha, George, & Srikanth, 2013; Kretschmer & Puranam, 2008;
Srikanth & Puranam, 2014) as well as for inter-organizational relationships
(Sydow et al., 2016) are not available in crowdsourcing. While, internally, organi-
zations can switch between aggregated and decomposed tasks (Siggelkow &
Levinthal, 2003), such switching is almost impossible when interacting with a
crowd composed of external contributors.
Trade-offs
Aggregated tasks require that crowd contributors consider the interdependen-
cies among different task components to achieve a global maximum. For exam-
ple, addressing the DARPA Grand Challenge on autonomous driving required
considering the complex interdependencies between the various components of
a self-driving car. When the task is decomposed, these interdependencies can-
not be considered (Baumann & Siggelkow, 2013; Ethiraj & Levinthal, 2004;
Rivkin & Siggelkow, 2003). The challenge of an aggregated task, however, is
that a single individual, or even a single organization, is unlikely to have all
the knowledge required to solve it. Thus, aggregated tasks make it difficult for
individuals to achieve and deliver relevant results (Sieg, Wallin, & Von Krogh,
2010). Individuals who have relevant but insufficient knowledge would first need
to find other individuals with complementary knowledge, with whom they would
then need to organize. Then, the groups of individuals would face the challenge
of coordinating the creative process to develop novel ideas (Harrison & Rouse,
2014). For this reason, unless specific coordination mechanisms exist to foster
collaboration among participants, aggregation may exclude individuals and con-
strain participation to existing organizations. Indeed, in the aggregated DARPA
challenge, contributors were mostly existing private companies and university
labs. Implicitly constraining participation to organizations in this way may also
constrain creativity, which is often fostered by individual ideation early in the
search process (Girotra, Terwiesch, & Ulrich, 2010).
One of the important advantages of a decomposed task is that task completion
requires only a specic set of knowledge. Such tasks, therefore, are more acces-
sible for single individuals to comprehend and solve. As a result, organizations
that choose to decompose their crowdsourcing tasks lower their barriers to entry,
allowing a broader range of contributors to engage. Since the various decom-
posed tasks will be addressed by different individuals with specialized knowledge,
the input quality for each of these subtasks is likely to be higher (Moreland &
Argote, 2003). Effective decomposition can, however, be challenging, since the
organization might not yet understand the different task components or how they
relate (Ethiraj & Levinthal, 2004). Thus, while strong decomposition can result
in optimal local solutions, it may prevent the crowd from finding a globally
optimal solution. For example, imagine if DARPA had crowdsourced the best laser
for scanning the environment and, separately, the best software for interpreting
laser-based images; it would have been challenging for DARPA to then combine
these components into a self-driving car, and the result would not necessarily have
been optimal.
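The local-versus-global trade-off in this thought experiment can be expressed as a stylized sketch, in the spirit of the NK-landscape arguments the chapter cites (Ethiraj & Levinthal, 2004). The payoff numbers below are hypothetical, chosen only to illustrate the mechanism: when two components interact, separate solvers who each optimize their own component in isolation can converge on a design that is locally stable but globally inferior.

```python
# Stylized illustration of decomposed vs. aggregated search on an
# interdependent task. Payoffs are hypothetical, for illustration only.

# Joint performance of two design choices a, b in {0, 1}.
# The interaction means neither component can be judged alone.
PERFORMANCE = {
    (0, 0): 0.50,  # status quo
    (1, 0): 0.40,  # component A changed alone: worse
    (0, 1): 0.40,  # component B changed alone: worse
    (1, 1): 1.00,  # both changed together: global optimum
}

def decomposed_search(start=(0, 0)):
    """Each solver improves their own component, holding the other fixed."""
    a, b = start
    # Solver A keeps a change only if it helps given b's current value.
    if PERFORMANCE[(1 - a, b)] > PERFORMANCE[(a, b)]:
        a = 1 - a
    # Solver B does the same given a's (possibly updated) value.
    if PERFORMANCE[(a, 1 - b)] > PERFORMANCE[(a, b)]:
        b = 1 - b
    return (a, b)

def aggregated_search():
    """A single solver evaluates whole designs and picks the best one."""
    return max(PERFORMANCE, key=PERFORMANCE.get)

print(decomposed_search())  # stays at (0, 0): each lone change looks worse
print(aggregated_search())  # (1, 1): the global optimum
```

Decomposed search gets stuck at the status quo because every single-component change lowers performance; only an aggregated evaluation of whole designs reaches the superior configuration.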
The optimal level of aggregation depends on the types of actors that organizations
are able to attract. For example, organizations like DARPA, Google, and
Netflix have succeeded in attracting and enabling (temporary) organizations to
engage around various problems. Their ability to attract organizations is likely
due, in part, to their brands and financial resources. Organizations without
such resources may be unable to attract organizations and, thus, may need to
decompose tasks in order to sufficiently lower the entry barriers for individual
contributors to participate. Establishing a community where individuals work on
separate parts of a decomposed problem, but take interdependencies into account,
is challenging and rare (Snow, Fjeldstad, Lettl, & Miles, 2011).
BROADCAST
Once a task is dened, the next stage of crowdsourcing is to broadcast it to a
crowd: that is, to make the task known to individuals who might self-select to
solve the task. As organizations seek to make tasks known to a crowd, they face
multiple decisions: (1) whether to broadcast the crowdsourcing initiative via an
intermediary or use a crowd potentially at their disposal; (2) how big a crowd
to seek; and (3) whether to conduct an open call or invite crowd contributors
selectively.
Channel: Soliciting Knowledge Via Intermediaries or Through Own Initiative
Denition and Explanation
Organizations seeking to engage in crowdsourcing often decide whether to broad-
cast their tasks themselves or whether to use intermediaries (Lopez-Vega, Tell, &
Vanhaverbeke, 2016). Many historical examples of crowdsourcing projects were
designed and run by the organizations in which the tasks originated. For example,
consider the British Government’s Longitude Act, the DARPA tournament on
autonomous driving, Dell’s IdeaStorm, or Budweiser’s search for new beverages.
Alternatively, organizations can solicit help from one of the many intermedi-
aries established in recent years to support crowdsourcing by helping organiza-
tions connect with potential crowds.
Trade-offs
When organizations run their own crowdsourcing, they tend to rely on crowds
of individuals who already know and have a relationship with the organization
(Jeppesen & Frederiksen, 2006). However, it is difcult for an organization to
attract individuals with whom they do not yet have a relationship, which, in turn,
prevents them from achieving sufficient “distance” in their search (Afuah & Tucci,
2012; Jeppesen & Frederiksen, 2006). This is problematic, since an organization’s
knowledge and the knowledge of the individuals with whom the organization
already has relationships are likely to overlap. These individuals are likely to oper-
ate in the same domain as the organization; thus, though their contributions are
often more feasible and immediately applicable, they also tend to be less novel
(Franke, Poetz, & Schreier, 2014). As a result, the inputs an organization gathers
from individuals with whom it already has relationships are unlikely to lead to
path-breaking innovations (Burgelman, 2002).
Another option for organizations is to use intermediaries to broadcast their tasks.
Since intermediaries support various organizations in their crowdsourcing, they
invest in building pools of individuals who are interested in and skilled at solving
tasks. Since these pools of individuals tend to expand as intermediaries collabo-
rate with various organizations – and, as a result, can offer more and different
tasks – they can often represent a great diversity of expertise. Such diversity is
crucial to nd novel solutions (Boudreau et al., 2011), but would be difcult for
an organization to access without the help of an intermediary. Intermediaries
also offer advice about how to organize crowdsourcing. Since the individuals con-
tacted via an intermediary have no direct link to the organization, they are less
likely to be motivated to contribute by a feeling of affiliation (compared to when
organizations recruit crowds themselves).
We suggest that a reliance on intermediaries is more likely to be advantageous
when organizations are in need of more radical approaches or when they lack
established or accessible crowds of users (Verona, Prandelli, & Sawhney, 2006).
If, however, an organization seeks to strengthen its relationship with an existing
group of customers or to seek the input of people who are familiar with its brand
and perhaps derive additional marketing benefits from the exercise, it is likely to
manage its crowdsourcing directly (Poetz & Schreier, 2012). For example, a
company like Starbucks might use crowdsourcing to show its openness to
customer suggestions.
Crowd Size: Small or Large
Denition and Explanation
A key decision organizations face is whether to seek a large crowd or a smaller
one. Of course, since individuals self-select, organizations are not necessarily
capable of choosing how many people will participate; thus, it may be impossible
to choose an “optimal” number of contributors. That said, organizations can
decide whether to seek a large crowd or a smaller one and plan accordingly.
Trade-offs
Larger crowds allow organizations to tap into larger pools of knowledge.
One of the goals of crowdsourcing is to tap into distant knowledge. It is, how-
ever, unclear ex ante who holds relevant knowledge (Afuah & Tucci, 2012). By
increasing the size of the crowd, an organization increases its chances of iden-
tifying a suitable input (Boudreau et al., 2017; Lakhani, Jeppesen, Lohse, &
Panetta, 2007; Terwiesch & Xu, 2008). A larger crowd increases the chance of
finding an extreme solution (Baer et al., 2010; Boudreau et al., 2011). Franke,
Lettl, Roiser, and Tuertscher (2014) argued that the quality of ideas is largely
random – and, as a result, that the success of crowdsourcing depends on the
number of people attracted. They conducted a field experiment in which more
than 1,000 individuals developed ideas for smartphone apps and found that the
effect of randomness trumped the effects of all other variables, thus emphasiz-
ing the importance of a large crowd in allowing an organization to choose from
a broad array of alternatives. However, some scholars have pointed out that the
relationship between the number of searchers and the breadth of search is sub-
linear (Erat & Krishnan, 2012). A large crowd increases competition, potentially
decreasing the incentive for any given individual to exert effort and reducing
creativity on the level of the individual contributor (Baer et al., 2010; Boudreau
et al., 2011). A larger crowd might, therefore, decrease the average quality of
the input.
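The extreme-value logic behind large crowds can be sketched in a short simulation. The setup is an idealized assumption, not taken from the chapter: idea quality is drawn i.i.d. from a standard normal distribution. The best of n draws then improves as n grows, but with diminishing marginal gains, which is consistent with the sub-linear relationship between crowd size and search breadth noted above.

```python
# Sketch of the extreme-value argument for large crowds.
# Idealized assumption (not from the chapter): idea quality is i.i.d.
# standard normal, so the best idea is the maximum of n random draws.
import random
import statistics

def expected_best(crowd_size, trials=1000, seed=7):
    """Average quality of the best idea from a crowd of a given size."""
    rng = random.Random(seed)
    return statistics.mean(
        max(rng.gauss(0, 1) for _ in range(crowd_size))
        for _ in range(trials)
    )

for n in (1, 10, 100, 1000):
    print(n, round(expected_best(n), 2))
```

Each tenfold increase in crowd size raises the expected best idea, but by less than the previous tenfold increase did: the maximum of n standard-normal draws grows only on the order of the square root of log n.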
Some research underscores the advantages of a small crowd. Fullerton and
McAfee (1999) studied the optimal design of research tournaments and suggested
that the optimal number of contributors may be as few as two. By limiting the
number of crowd contributors, an organization increases the probability of any
single participant winning and, thus, incentivizes each individual to invest more
effort in addressing the task (Boudreau, 2012; Taylor, 1995). Recent research on
the effect of competition, however, illustrates that competition mostly deters
less skilled individuals (Boudreau, Lakhani, & Menietti, 2014). A smaller crowd
increases each individual’s chances of winning the competition yet reduces other
kinds of rewards contributors derive from contributing (Lakhani & von Hippel,
2003). A reduction in the number of people, while increasing each individual’s
chances of winning, reduces these social benefits and, thus, the contributors’ will-
ingness to engage (Zhang & Zhu, 2011).
A potential way to tackle this challenge is to separate crowdsourcing into
multiple rounds, such that the first round involves a large pool of crowd
contributors making relatively small investments and the second round comprises
only the most promising crowd contributors. The low investment required for
the first round encourages a large number of people to participate but prevents
people from being deterred by potential competition. One potential problem with
such an approach is that it is not always possible to recognize the precursors of
extremely valuable solutions in the first round (Boudreau et al., 2011; Levinthal &
Posen, 2007). Since the size of the crowd affects contributors’ expectations about
their possible reward, the decision concerning crowd size requires considering the
crowdsourcing award structure. As organizations decide on the size of the crowd
they seek to attract – and, thus, the amount of input they wish to gather – they
have to simultaneously consider whether they have the capacity to select among
the inputs they receive. If the amount of input an organization receives exceeds its
cognitive capacity, the organization is more likely to overlook relevant knowledge
(Piezunka & Dahlander, 2015).
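The screening risk in such a two-round design can likewise be sketched. The setup below is hypothetical and not drawn from the chapter: true idea quality is standard normal, and the first round observes quality only through a noisy signal. The noisier that signal and the tighter the shortlist, the more often the truly best idea is eliminated before round two.

```python
# Sketch of the two-round screening risk. Hypothetical setup: true idea
# quality is standard normal; round one observes quality plus noise and
# shortlists the ideas with the highest observed signal.
import random

def best_idea_survives(n_ideas=100, shortlist=10, noise=1.0,
                       trials=2000, seed=3):
    """Share of trials in which the truly best idea makes the shortlist."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        quality = [rng.gauss(0, 1) for _ in range(n_ideas)]
        signal = [q + rng.gauss(0, noise) for q in quality]
        best = max(range(n_ideas), key=quality.__getitem__)
        shortlisted = sorted(range(n_ideas), key=signal.__getitem__,
                             reverse=True)[:shortlist]
        hits += best in shortlisted
    return hits / trials

print(best_idea_survives(noise=0.5))  # mild noise: best idea usually survives
print(best_idea_survives(noise=2.0))  # heavy noise: it is often screened out
```

This makes concrete why low-cost first rounds are a double-edged design: they widen participation, but the coarse evaluation they permit can discard exactly the extreme-value solutions the organization hoped a large crowd would produce.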
Invitation: Broadcasting Via Private or Public Call
Denition and Explanation
Organizations have to decide whether to conduct a private call, in which partici-
pation is invitation-only and exclusive to a selected group, or an open call, which
addresses anyone. An organization could issue invitations constraining participa-
tion to an exclusive group of individuals (e.g., customers of the organization,
scientists in a certain discipline). Alternatively, an organization could issue an
open call addressing anyone.
Trade-offs
People who participate in crowdsourcing are partly motivated by attention and
status (Johnson, 2002). An exclusive invitation may, therefore, increase people’s
tendency to participate. Private calls also offer the opportunity to be more selective
with respect to the type of people the organization invites to participate. This
may allow an organization to attract individuals who are interested in interacting
with one another, thus creating a community that engages in fertile exchanges,
rather than a crowd of independent individuals. Beyond leading to higher qual-
ity inputs, a private call reduces the volume of low-quality inputs, thus facilitat-
ing the later ltering of suggestions. Issuing exclusive invitations also reduces the
problem of publicizing private information. The problem is, of course, that an
organization does not know ex ante who has the most relevant insight. By con-
straining crowdsourcing to a specic set of people, an organization reduces the
chances of serendipity.
By contrast, an open call allows an organization to broadcast its task to peo-
ple it does not know. Foregoing exclusivity in this way may increase the scope of
the crowd’s diversity and lead to extreme-value solutions (Boudreau et al., 2011;
Terwiesch & Xu, 2008). An open call, thus, increases the chances of “unexpected”
solutions originating from types of solvers the initiator may never have expected
(Jeppesen & Lakhani, 2010). For example, the self-educated carpenter and clock-
maker John Harrison, who eventually solved the longitude problem, would
hardly have been included in a private call (Cattani, Ferriani, & Lanza, 2017).
As demonstrated by the research on “marginal men” in science (see Chubin,
1976, for a review), it is often individuals at the periphery of a given area of
activity who have the greatest potential to contribute innovative ideas (Cattani &
Ferriani, 2008; Jeppesen & Lakhani, 2010). The challenge of broadcasting
to everyone is, of course, that potential contributors may not feel particularly
addressed by the call and may, therefore, lack interest in engaging. Beyond the risk
of failing to attract anyone, by engaging in an open call, an organization gives up
any control over the composition of the crowd. As a result, the crowd may ulti-
mately comprise individuals whose interests are not aligned with the agenda of
the organization. Imagine a political party soliciting suggestions via an open call,
but gathering suggestions primarily from members of a competing political party.
An additional challenge of open calls is that they require organizations to reveal
their tasks publicly to an undefined crowd – a move that is often not in the organi-
zation’s best interest, particularly if the task is sensitive or critical.
An interesting underexplored area is to examine whether and how organiza-
tions can combine these approaches sequentially or in parallel, given that the open
call and invitation-only formats reach different kinds of individuals and produce
different outcomes. For example, an organization could conduct an open call, but
also target those individuals it believes to hold relevant knowledge. Organizations
may also offer different targeted individuals different conditions of participation
(e.g., to reflect different opportunity costs of participation). One consideration in
this case would obviously be managing other potential participants’ perceptions
of fairness, that is, a process that treats some preferentially may have negative
consequences on overall participation.
ATTRACT
An organization’s success in crowdsourcing depends on the crowd contributors’
motivation, which, in turn, may depend on the incentives offered to activate
these crowd contributors and attract solutions. Crowd contributors carry costs
related to the time and effort of participating, as well as potential costs related to
access to necessary tools and equipment. We identify three key decisions that are
crucial for attracting entries: (1) the decision of whether to provide pecuniary
incentives; (2) the use of flat or steep reward structures; and (3) the regulation
of ownership.
Incentive Type: Pecuniary or Non-pecuniary
Denition and Explanation
Organizations can decide to offer monetary or non-pecuniary incentives. Monetary
incentives vary in both value and type. An illustrative example is the case of
Threadless, a crowd-based t-shirt organization (Lakhani & Kanji, 2008). Threadless
conducts contests in which contributors create t-shirt designs – a relatively time-
consuming activity. The prize for a winning design is $2,500, and the probability
of winning is approximately 0.6%, implying an expected payoff of $15 per submit-
ted design – a very low hourly wage. Alternatively, rather than offering winners
a xed prize, it is possible to allow the crowd to claim a percentage of the nal
value of their solutions. For example, when it seeks crowdsourced designs for new
toys, LEGO Ideas uses a royalty scheme in which the winning crowd contributor(s)
obtain a 1% royalty of the sales.
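The Threadless arithmetic above can be verified directly. A minimal sketch (our own illustration; the function name is ours, and the figures are those cited in the text):

```python
# Expected payoff of entering a winner-take-all contest:
# prize value times the probability of winning.

def expected_payoff(prize: float, win_probability: float) -> float:
    """Expected monetary return of a single contest submission."""
    return prize * win_probability

# Threadless figures cited in the text: a $2,500 prize and a ~0.6% win rate
# imply roughly $15 in expected value per submitted design.
print(expected_payoff(prize=2_500, win_probability=0.006))
```

For the LEGO Ideas royalty scheme, the analogue would be 1% of uncertain future sales rather than a fixed prize, so the expectation depends on a sales forecast rather than a win probability alone.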
Another way to attract crowd contributors is to rely on non-monetary incen-
tives. Studies of people’s motivation to engage in innovative efforts have found
that people are often motivated by the challenge, fun, and learning inherent
in completing crowdsourcing tasks (Lakhani & von Hippel, 2003; Lakhani &
Wolf, 2005). Füller, Bartl, Ernst, and Mühlbacher (2006) found that individu-
als who are motivated intrinsically rather than extrinsically are particularly
helpful, as they provide a higher number of substantial contributions. Frey,
Lüthje, and Haag (2011) suggested that enjoyment and fun are among the
strongest motivators for engaging in crowdsourcing, though contributors may
also seek social interactions as part of their engagement (Zhang & Zhu, 2011).
Finally, individuals may use crowdsourcing as a way to show off their skills
and, in turn, build their reputations, either as its own reward or as a long-term
investment to enhance career prospects. For example, in the Good Judgment
Project, a crowd-based forecasting project initiated by Philip Tetlock, the top
2% of forecasters are recognized by the organization. Building contributors’
reputations can also have long-term (pecuniary) benefits. In their study of
open source development, Lerner and Tirole (2002) argued that software
developers – many of them unpaid – produce code to enhance their career pros-
pects. It is possible to tailor different non-pecuniary incentives by, for example,
providing more interesting tasks; however, the difficulty stems from the fact that
the potential contributors are undefined and unknown ex ante (Majchrzak &
Malhotra, 2013).
Trade-off
The case for monetary incentives is evident. Crowd contributors invest time,
effort, and often external costs (e.g., for tools to complete a task), and pecuniary
incentives motivate individuals to make such investments (Gallus & Frey, 2016).
The size and types of such incentives allow organizations to differentiate them-
selves in the competition for crowd contributors’ participation. For example,
when Netflix used crowdsourcing to find an algorithm to make better movie rec-
ommendations, it offered $1 million USD in prize money to attract qualified indi-
viduals. While pecuniary incentives frequently attract and motivate individuals,
such incentives can have the adverse effect of crowding out individuals’ intrinsic
motivation in the long run (Frey & Oberholzer-Gee, 1997), which may be a rel-
evant concern when repeated participation is needed.
Tasks that require a substantial investment on the part of the participating
individuals are likely to require pecuniary incentives. For example, as part of the
Google Lunar X-prize, despite the fascinating nature of the task and Google’s
strong brand, Google also offers substantial financial incentives. Intrinsic motiva-
tion can be sustained in the presence of an extrinsic reward (e.g., a prize) as long
as the prize is positioned, not as the focus of the exercise, but as an additional
reward (i.e., “icing on the cake”) experienced ex post (Amabile, 1993). For exam-
ple, since Google’s lunar project does not translate directly into a commercial
application, contributors may be less likely to expect money. If, however, Google
were to seek ideas on how to improve its advertising display – which would trans-
late into direct monetary benets for the company – contributors are more likely
to expect monetary rewards. People’s willingness to engage without monetary
incentives may also depend on whether they gain public exposure to an audience
(Boudreau & Jeppesen, 2015).
Allocation: Flat or Steep Award Structures
Denition and Explanation
Organizations that engage in crowdsourcing face a decision whether to allocate
their rewards (or recognition) through a flat structure, in which many
contributors receive money and recognition, or a steep structure, in which money and
recognition go only to a fortunate few.
Trade-offs
When a at reward structure is used, a large proportion of the crowd is com-
pensated, producing less effort per participant for a given prize than if the prize
were concentrated on only a few. While flat reward structures seek to foster broad
participation, such structures incentivize less effort and fewer contributions from
participants. Contributors to such crowdsourcing projects are unlikely to exert
substantial efforts or to make investments (e.g., buying tools) that would allow
them to provide more effective solutions. A danger is that contributors' participation
becomes so negligible that it is of little value for the organization.
When a steep reward structure is used, very few contributors – and potentially
only one – are rewarded. Such reward structures create incentives for extreme
performance (Rosen, 1981). While most crowd contributors do not win when the
reward structure is steep, research on both organizations and crowds shows that
such reward structures can attract and motivate numerous people to contribute
(Boudreau et al., 2011). In other words, steep rewards serve as “beacons” that
attract large crowds. For example, the attention Netflix attracted for offering
$1 million USD for the best solution to its problem was likely far greater than it
would have been if Netflix had offered $1,000 USD for the 1,000 best ideas. These
insights into reward structures obviously also pertain to non-monetary extrinsic
rewards, such as reputation. The steeper the reward structure, the smaller the pool
of winners, and the more difficult it is to win the prize, the higher the reputational
benets for those individuals who do succeed. However, one substantial downside
of extreme awards is that they may reduce the likelihood of people choosing to
participate, because the likelihood of winning is low or even near zero. People
participating are also less likely to collaborate and share information because
they are fiercely competing for the big prize.
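The Netflix comparison above can be made concrete with a back-of-the-envelope sketch (our own illustration: the crowd size is hypothetical, and winners are assumed to be drawn uniformly at random, ignoring skill):

```python
# Compare a steep structure (one $1M prize) with a flat structure
# (1,000 prizes of $1,000) that pays out the same total amount.

def expected_reward(prizes: list[float], n_participants: int) -> float:
    """Expected reward per participant if winners were drawn uniformly at random."""
    return sum(prizes) / n_participants

n = 20_000  # hypothetical crowd size
steep = expected_reward([1_000_000], n)
flat = expected_reward([1_000] * 1_000, n)

# The expected value per participant is identical ($50)...
assert steep == flat == 50.0
# ...but the odds of winning anything differ sharply:
# 1 in 20,000 under the steep structure versus 1 in 20 under the flat one.
```

The sketch shows that what distinguishes a steep prize is not its expected value but its concentration: the "beacon" effect discussed above.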
Recent research studying the relationship between flat and steep award structures
illustrates how the resolution is contingent on the types of input organizations seek
(Terwiesch & Yi, 2008). While at reward structures incentivize minimal input
by numerous people, steep award structures incentivize extreme performances.
For example, when Google seeks to classify information it cannot classify auto-
matically (e.g., identifying street signs in pictures), it requires lots of low-level
participation. For such endeavors, organizations need to establish procedures
that ensure that participants’ contributions satisfy some minimal threshold. By
contrast, when Netflix sought the best algorithm for movie recommendations, it
required extreme performance. Steep reward structures are more suitable when
the problem is well defined and there is a clear selection criterion to which entries
can be compared (Simon, 1972; Terwiesch & Yi, 2008). In such cases, the choice
of a winner is clearer and less subject to challenges from near-winners. These
considerations illustrate how the optimal resolution for a given crowdsourcing effort is
likely to depend on the types of inputs sought and require adjustments to other
organizational decisions.
IP Ownership: Crowd Contributors Versus Organizations
Denition and Explanation
Organizations have to decide who will own the intellectual property (IP) created
via crowdsourcing. This decision concerns not only the contribution that ulti-
mately wins, but also any contributions that are not selected (Scotchmer, 2004).
Organizations either receive all of the IP rights or allow crowd contributors to
retain these rights (or some combination of the two). Organizations’ freedom to
dene IP ownership is limited by their regulatory environment and the regula-
tions invoked by any used platforms. Even in the absence of regulation – that is,
when an organization can freely dene who owns the IP – the optimal choice is
not clear and comes with an important trade-off.
Trade-off
An organization can choose to keep all of the rights to submitted ideas. The
advantage of this approach is obviously that the organization is free to implement
ideas as it wishes. However, a lack of ownership or IP protection affects crowd
contributors’ incentives to participate or disclose their input (Giorcelli & Moser,
2016). When would-be contributors sense the benefits offered by the contest do
not match the (potentially high) value of their ideas, they may be reluctant to share
them. In fact, some of the entrepreneurship that can be observed among product
and service users (Agarwal & Shah, 2014; Shah & Tripsas, 2007) might stem, in
part, from organizations' failure to provide sufficient rewards for crowd
contributors' contributions. Ding and Wolfstetter (2011, p. 665) point out that this leads
to adverse selection, since contributors “may withhold innovations that are worth
considerably more than the prize, so that only the lemons, that is, the inferior inno-
vations, are submitted.” History reveals several examples in which people have been
motivated to work on problems raised by crowdsourcing initiatives, but have then
chosen to keep their ideas private. Ding and Wolfstetter (2011, p. 665) write that
the inventor John Wesley Hyatt was encouraged to develop a new substance after he saw an
advertisement by Phelan & Collander offering $10,000 to the person who invented a usable
substitute for ivory in billiard balls. Hyatt eventually succeeded by inventing celluloid, which
seemed to be a perfect substitute for ivory, but finally decided to patent his innovation instead
of submitting it to the tournament and collecting the prize.
Such evidence suggests that crowd contributors may be motivated to solve
organizations’ challenges, but then redirect their work to competing organizations.
Alternatively, an organization could leave IP rights with contributing indi-
viduals. Such a clear provision of rights creates incentives for contributors, and
gives the crowd a defense against misappropriation by the organization.
People are more likely to share their ideas with an organization – and to do so
at an earlier stage – if their IP rights are secure (Katila & Mang, 2003; Luo,
2014). Granting IP rights to the crowd, however, renders the implementation of
ideas generated via crowdsourcing more difficult and costly. Moreover, giving the
crowd IP rights might result in conflicts (Bauer, Franke, & Tuertscher, 2016). For
example, a contributor might claim that an idea is novel even if the organization
has worked on it before.
Resolving ambiguities about IP ownership in crowdsourcing is an important
consideration for organizations, since ambiguity can lead to lower participation
rates, lower quality, and potential conflicts with the crowd. By specifying ex ante
how the ownership of outcomes will be assigned, organizations can avoid down-
stream problems concerning how and to whom rewards will be allocated.
SELECT
Once crowd contributors have submitted their inputs, the organization must eval-
uate and select among them. This process is challenging, since the organization
may face a vast number of entries among which it must allocate a limited amount
of attention (Koput, 1997; Laursen & Salter, 2006; Piezunka & Dahlander, 2015).
Thus, since organizations strive to select the best entries, they face three decisions:
(1) whether to use a metric scale or a judgment call; (2) whether to involve the
crowd; and (3) whether to use a sequential process.
Evaluation Criteria: Metric Scale or Judgment Call
Denition and Explanation
There are two primary ways to evaluate crowdsourcing outcomes: pre-established
metrics and judgment calls. In cases using judgment calls, particular invention
types are not specified in advance (i.e., the winning entry is decided ex post)
(Moser & Nicholas, 2013). In these cases, judges are allowed to “know it when
they see it” (Scotchmer, 2004, p. 40). By contrast, when metrics are used, the
evaluation criteria are formalized by standards developed ex ante, which stipulate
a goal against which entries are evaluated to determine a winner.
Trade-offs
Judgment calls increase an organization's flexibility to choose unconventional
solutions (Jeppesen & Lakhani, 2010). However, relying purely on judgment calls
can be challenging: Each entry must be evaluated at great length, thus increasing
the organization’s selection burden and exposing the selection process to mana-
gerial biases. This burden can grow large. For instance, during the Deepwater
Horizon crisis, BP reached out to the crowd for ideas on how to tackle the catas-
trophe. More than 100,000 people offered suggestions, and more than 100 experts
were assigned to sift through them (Goldenberg, 2011). The attentional bur-
den can have detrimental consequences (March & Simon, 1958; Simon, 1971).
For example, Piezunka and Dahlander (2015) showed that an overabundance of
ideas in crowdsourcing increases the likelihood that an organization will overlook
ideas representing distant knowledge (i.e., the very type of knowledge organizations
hope to nd via crowdsourcing in most cases).
Alternatively, organizations can rely on metrics that eliminate the effort required
to choose among contributions. When clearly communicated, metrics prevent
ambiguity in the selection process and ensure a sense of fairness. The use of
metrics can also allow crowd contributors to compare themselves to one another in
real time, fostering learning and motivation. Chen, Harper, Konstan, and Xin Li (2010)
showed in a field experiment involving movie recommendations that providing
information about the median activity of a reference group vastly increases the
contributions of less active participants. However, determining the right metric
can be difficult and costly (Terwiesch & Yi, 2008). For a metrics-based evaluation
to work, the conditions need to be specified in advance. However, this is inherently
difcult when an organization is exploring unfamiliar terrain, and it can also be
expensive, meaning that an organization involved in a metrics-based evaluation
may incur costs before it even knows whether it will attract any entries. Introducing
metrics-based criteria can unintentionally set specifications that constrain the
solution space.
A deeper appreciation for interdependencies with other decisions is needed.
For instance, the sourcing of problem-related knowledge does not allow for
the application of a metric scale. For example, Starbucks can hardly use metrics
to determine which of the new beverages suggested by customers has the most
potential. The decision between using metric scale and using a judgment call is
related to the decision concerning the specificity of task definition, since setting
metrics often implicitly sets specifics – and, thus, potentially constrains the
solution space. The decision between metric scales and judgment calls depends on
the decisions related to crowd size and the related expected volume of contribu-
tions, since the marginal costs of using judgment calls are far higher than those
of using metrics.
Crowd Filtering: Involving the Crowd in Selection or Not
Denition and Explanation
If an organization engages in judgment calls (e.g., due to an inability to rely on
automated, metrics-based selection), it can either select entries on its own or rely on the
crowd for evaluation (Poetz & Schreier, 2012). In these instances, it can ask con-
tributors to evaluate the ideas submitted by others (e.g., via votes or comments).
Trade-offs
Involving a crowd in finding the best solution is attractive because the process of
evaluating crowdsourcing input may exceed an organization's attention capacity,
and crowd involvement can prevent information overload (O'Reilly, 1980).
Engaging the crowd in the
evaluation also has the advantage that external contributors may actually be bet-
ter assessors of ideas than internal managers (Berg, 2016). Given their external
perspective, crowd contributors are optimally positioned to evaluate ideas, and
may become future customers (Mollick & Nanda, 2016). For example, when
LEGO involves crowds in evaluating designs on its LEGO Ideas platform, the
crowd evaluators’ preferences are highly relevant, since the crowd members in this
case are also often customers and users of the products. Crowd contributors are
also likely to perceive the evaluations of crowd evaluators as fair – an outcome
that is crucial for ensuring future engagement (Franke, Keinz, & Klausberger,
2013). Research shows that involving the crowd in selection is positively related to
subsequent product demand (Fuchs, Prandelli, & Schreier, 2010). These advan-
tages notwithstanding, involving the crowd in the evaluation process inher-
ently involves the possibility that the crowd will favor suggestions that are not in
line with the organization’s best interest. For example, a crowd composed of an
organization’s customers might favor lower prices or suggest solutions that are
not feasible for the company to make. Furthermore, when crowd contributors
participate in the selection process, they may be overly critical of others’ sugges-
tions in order to increase the chances of having their own suggestions selected.
In such cases, the contributors’ interactions may be dominated by harsh criti-
cisms, rather than constructive dialogue. This increases the danger of crucial sug-
gestions going unrecognized or being voted out for social reasons, rather than
merit. A crucial consideration when deciding whether to involve the crowd in
selection is the possibility that a crowd of existing mainstream users will tend to
vote along the lines of products with which they are already familiar and have use
experience (von Hippel, 1986).
The alternative use of internal evaluation committees allows organizations to
maintain control of the selection process. Organizations that evaluate solutions
internally limit their chances of losing authority and/or being asked to select con-
tributions that are not in their best interest. Thus, evaluating solutions internally
allows organizations to remain in the “driver’s seat.” Involving organizations in
the evaluation also ensures that they will engage with the crowd: a critical driver
of participation among crowd contributors (Dahlander & Piezunka, 2014). The
downside of conducting completely internal evaluations is that the organizations
must shoulder the full effort of the evaluation process, and miss out on utilizing
the selective capability of the crowd.
Involving the crowd in selecting the best solution(s) may be most appro-
priate when the crowd’s opinion is highly correlated with future demand. For
example, in the case of ideation challenges, in which the crowd comprises users
and objective quality cannot be established (due to selection being a matter of
taste), involving the crowd in choosing the best solution(s) is fruitful. Crowds,
thus, play a crucial role when no metric can be established. However, when the
organization has relevant in-house knowledge of how various components
fit together, an in-house selection of the best solution is more appropriate.
Transparency with regard to selection may be important for subsequent rounds
of crowdsourcing, when the crowd may grow critical of an organization’s evalu-
ation approach. A hybrid solution is possible where the crowd engages in indica-
tive pre-evaluations, and internal experts subsequently evaluate the screened
set of solutions. For example, LEGO Ideas relies on votes to pre-select designs
generated by crowd contributors. Once a suggested design crosses a certain
threshold in terms of votes, LEGO evaluates it internally and considers it for
selection.
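The hybrid process described above can be sketched as a simple threshold filter (our own illustration; the function, threshold, and vote counts are hypothetical, not LEGO's actual rules):

```python
# Crowd pre-filtering: only designs whose vote count crosses a threshold
# are passed on to internal expert evaluation.

def crowd_prefilter(votes: dict[str, int], threshold: int) -> list[str]:
    """Return the designs that reach the vote threshold, for internal review."""
    return sorted(design for design, count in votes.items() if count >= threshold)

votes = {"castle": 12_400, "spaceship": 9_800, "teapot": 310}
shortlist = crowd_prefilter(votes, threshold=10_000)
print(shortlist)  # ['castle']
```

The design choice is that the crowd bears the bulk of the evaluation burden, while the organization retains final authority over the (much smaller) shortlist.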
Sequential: Stage-gate or One-off Model
Denition and Explanation
Organizations face a decision whether to use one-off models or sequential
(i.e., stage gate) models. In a one-off model, the crowd provides input from which
an eventual selection is made. In a sequential model, by contrast, the contribu-
tion/selection process involves multiple rounds (Salter, Ter Wal, Criscuolo, & Alexy,
2015). For example, first, a crowd may provide input; then, a first selection may
be conducted (either via the crowd or internally by the organization); next, con-
tributors may be asked to provide a new set of inputs, from which a new round of
winners are selected; and so on.
Trade-offs
A sequential approach has the advantage of motivating contributors and allowing
organizations to provide conditions appropriate for different stages. Organizations
can further motivate contributors who have been selected in a certain stage by
providing them with feedback instrumental for their attempts to reach further
stages. For example, once a certain stage has been reached, organizations provide
tools or financial means, as when Google provided financial support to
contributors who passed a certain stage of its lunar project. A stage-gate approach
then reduces wasted effort, since contributors with little chance of winning are
prevented from making unnecessary investments. Finally, a sequential approach
creates opportunities for feedback and dialogue between the organization and
the crowd. For instance, contributors can learn vicariously by observing success-
ful contributors and reusing some of their ideas before working independently
in subsequent stages. The challenge with a stage-gate approach, however, is that
it can result in myopic selection (Levinthal & Posen, 2007), that is, approaches
which are promising but whose full potential requires time and development may
be ltered out too early.
Alternatively, in a one-off model, contributors have to develop fully fledged
solutions before being evaluated. This model saves the organization time upfront,
since the complexity of developing different sequential stages implies an inher-
ent cost, whereas the one-off model only requires the organization to evaluate
solutions at the very end. In a one-off model, however, organizations lack a good
avenue to provide feedback to individual crowd contributors. The one-off model
thus increases crowd contributors' chances of finding unorthodox solutions, and
also increases their chances of going “down the rabbit hole” – that is, wasting
effort on ideas with little chance of growing into promising contributions.
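The stage-gate logic can be sketched as a repeated filtering loop (our own illustration; it simplifies by assuming entry scores are fixed across rounds, whereas in practice contributors refine their entries between stages):

```python
# Stage-gate selection: each round, only the top-scoring fraction of entries
# advances, so contributors with little chance of winning stop investing early.

def stage_gate(entries: dict[str, float], rounds: int, keep_fraction: float) -> list[str]:
    """Run several selection rounds, keeping the best-scoring fraction each time."""
    survivors = dict(entries)
    for _ in range(rounds):
        cutoff = max(1, int(len(survivors) * keep_fraction))
        ranked = sorted(survivors, key=survivors.get, reverse=True)
        survivors = {name: survivors[name] for name in ranked[:cutoff]}
    return sorted(survivors)

scores = {"a": 0.9, "b": 0.7, "c": 0.5, "d": 0.3}
print(stage_gate(scores, rounds=2, keep_fraction=0.5))  # ['a']
```

A one-off model corresponds to a single round over fully developed entries; the myopia risk noted above arises because an entry filtered out in an early round never gets the chance to improve.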
In the sourcing of problem-related knowledge, it is possible that no stages
are necessary, since the knowledge does not require further development on the
part of the crowd contributors. Such processes can be organized as one-off models
or even as an ongoing effort without any stages (Piezunka & Dahlander, 2015).
Of course, stages may be helpful for assessing the criticality of these problems
(see Table 10.1).
CONCLUSION
It has long been appreciated that the locus of innovation spans organizational
boundaries and is embedded in inter-firm networks (Powell et al., 1996; Sydow
et al., 2016). However, managing the relationship with the crowd is qualitatively
different than managing other kinds of inter-organizational collaborations. We
took a decision-centric approach to organizing crowdsourcing, building upon the
tradition of the behavioral theory of the firm, which argues that decisions should
be the key unit of analysis when studying organizational forms (Gavetti, Greve,
Levinthal, & Ocasio, 2012; Gavetti, Levinthal, & Ocasio, 2007). This research
tradition has called for “an explicit emphasis on the actual process of decision
making as its basic research commitment” (Cyert & March, 1963, p. 19). When
Cyert and March (1963, p. 205) discussed how “we cannot assume that a rational
manager can treat the organization as a simple instrument in his dealings with
the external world,” they sparked a plethora of research on bounded rational
managers and the bases they use to make decisions. In the 1960s, when this stream
Table 10.1. Applying the Framework for Making Decisions to (Well-known) Cases of Crowdsourcing.

Colgate-Palmolive on Innocentive. Via the platform Innocentive, Colgate-Palmolive sought a solution for how to inject fluoride into a toothpaste tube without dispersing the fluoride into the air.
  Define – Task type: Solution-related; Task definition: Narrow; Decomposition: Decomposed.
  Broadcast – Channel: Intermediary; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Pecuniary; Allocation: Steep; Ownership defined: Ex ante.
  Select – Evaluation criteria: Judgment call; Crowd filtering: No; Model: One-off.

Dell IdeaStorm. Dell uses the IdeaStorm platform to gauge which ideas are most important and relevant to the public.
  Define – Task type: Problem-related; Task definition: Broad; Decomposition: Aggregated.
  Broadcast – Channel: Own; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Non-pecuniary; Allocation: Flat; Ownership defined: Ex post.
  Select – Evaluation criteria: Judgment call; Crowd filtering: Yes; Model: Sequential.

Marblar. Marblar sought commercial solutions to patents emerging from universities. The company paid cash prizes for ideas, but then struggled to bring selected ideas to fruition.
  Define – Task type: Problem-related; Task definition: Broad; Decomposition: Aggregated.
  Broadcast – Channel: Own; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Pecuniary; Allocation: Steep; Ownership defined: Ex post.
  Select – Evaluation criteria: Judgment call; Crowd filtering: No; Model: One-off.

WSUP and Unilever on OpenIDEO. Via the platform OpenIDEO (which uses crowdsourcing to address grand societal challenges), Water and Sanitation for the Urban Poor (WSUP) and Unilever sought inputs to develop a complete in-home toilet and collection solution for low-income urban families in Ghana.
  Define – Task type: Solution-related; Task definition: Broad; Decomposition: Aggregated.
  Broadcast – Channel: Intermediary; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Both; Allocation: Flat; Ownership defined: Ex post.
  Select – Evaluation criteria: Judgment call; Crowd filtering: Yes; Model: Sequential.

Google Lunar XPRIZE. The Google Lunar XPRIZE awards large prizes to encourage the development of low-cost methods of robotic space exploration.
  Define – Task type: Solution-related; Task definition: Broad; Decomposition: Aggregated.
  Broadcast – Channel: Own; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Pecuniary; Allocation: Steep; Ownership defined: Ex ante.
  Select – Evaluation criteria: Metric; Crowd filtering: No; Model: Sequential.

Danfoss. Danfoss organizes a yearly internal crowdsourcing event during which it asks its 20,000 employees for ideas for radical innovation.
  Define – Task type: Problem-related; Task definition: Broad; Decomposition: Decomposed.
  Broadcast – Channel: Own, later intermediary; Invitation: By invitation; Crowd size: Large.
  Attract – Incentive type: Non-pecuniary; Allocation: Flat; Ownership defined: Ex post.
  Select – Evaluation criteria: Judgment call; Crowd filtering: No; Model: One-off.

SAP. SAP crowdsources from its developer community for troubleshooting, blogging, and answering user questions. Originally, developers earned points toward T-shirts and memorabilia. In 2008, SAP offered charitable donations.
  Define – Task type: Solution-related; Task definition: Broad; Decomposition: Decomposed.
  Broadcast – Channel: Own; Invitation: By invitation; Crowd size: Large.
  Attract – Incentive type: Non-pecuniary; Allocation: Flat; Ownership defined: Ex post.
  Select – Evaluation criteria: Judgment call; Crowd filtering: No; Model: One-off.

Toyota Idea for Good. Toyota asked the public to submit ideas on how to reuse Toyota technology to benefit society in a non-automotive capacity.
  Define – Task type: Problem-related; Task definition: Broad; Decomposition: Aggregated.
  Broadcast – Channel: Own; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Non-pecuniary; Allocation: Flat; Ownership defined: Ex post.
  Select – Evaluation criteria: Judgment call; Crowd filtering: No; Model: One-off.

NASA on Topcoder. NASA asked coders on the Topcoder platform to submit algorithms for improving the positioning of solar panels on the International Space Station (ISS).
  Define – Task type: Solution-related; Task definition: Narrow; Decomposition: Decomposed.
  Broadcast – Channel: Intermediary; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Both; Allocation: Steep; Ownership defined: Ex ante.
  Select – Evaluation criteria: Metric; Crowd filtering: Yes; Model: One-off.

Netflix. Netflix sought ways to improve its movie recommendation algorithm.
  Define – Task type: Solution-related; Task definition: Narrow; Decomposition: Decomposed.
  Broadcast – Channel: Own; Invitation: Open call; Crowd size: Large.
  Attract – Incentive type: Pecuniary; Allocation: Both; Ownership defined: Ex ante.
  Select – Evaluation criteria: Metric; Crowd filtering: No; Model: One-off.

Thermomix with Hyve. Vorwerk asked members of its own Thermomix Rezeptwelt Community for a better understanding of consumers' cooking journeys and a deeper comprehension of users' needs during the recipe search phase when using a Thermomix.
  Define – Task type: Solution-related; Task definition: Broad; Decomposition: Aggregated.
  Broadcast – Channel: Intermediary; Invitation: By invitation; Crowd size: Small.
  Attract – Incentive type: Non-pecuniary; Allocation: Flat; Ownership defined: Ex ante.
  Select – Evaluation criteria: Judgment call; Crowd filtering: No; Model: One-off.
of research emerged, the types of decisions managers made to maneuver within
their external environments were qualitatively different. An implication of our
framework is that it facilitates a better understanding of the multifaceted nature
of crowdsourcing that, in part, has replaced these conventional ways of collabo-
rating with external environments.
Our reasoning is centered on the decisions organizations make to organize
crowds. We focus on contributors that could be individuals or even organizations.
It is empirically less frequent that whole organizations engage in crowdsourcing,
but it does happen as we note, especially in cases when complexity is high and it is
too difcult for a single individual to engage. In addition, some of the employees
of organizations may engage as part of their work, or even unknowingly for the
organization that employs them (see Dahlander and Wallin, 2006, for how this
plays out in open source software). This raises a larger question of how a relation-
ship with an individual can be a precursor for an inter-organizational relationship
(Powell et al., 1996; Sydow et al., 2016). For instance, Sydow et al. (2016) note
how Bayer’s Grants4Targets can be used for a company to nd scientists around
the world that can result in linkage with a university. Long-term collaborations
between organizations are thus a possible outcome of crowdsourcing.
As Fig. 10.1 shows, the model of crowdsourcing we illustrate entails both
sequential and reciprocal interdependence between different decisions. While a
sequential interdependence could be managed through planning and schedul-
ing, a more complicated reciprocal interdependence has to be managed through
constant information sharing and mutual adjustments. This makes the situation
more complicated than it appears at first glance, as decisions that are interde-
pendent cannot be made in isolation (Levinthal, 1997; Puranam, Raveendran, &
Knudsen, 2012; Rivkin & Siggelkow, 2003). To complicate it even further, people
in the crowd may not respond the way the organization has intended and informa-
tion sharing is difcult to achieve in practice.
ACKNOWLEDGMENTS
All authors contributed equally. We are grateful for comments and suggestions from
editors Jörg Sydow and Hans Berends as well as Kevin Boudreau, Henrich Greve,
Karim Lakhani, Woody Powell, Phanish Puranam, and Ammon Salter, as well as
seminar participants at the Open and User Innovation Conference in Brighton
2013, the AOM Conference in Philadelphia 2014, the Crowdsourcing Workshop at
INSEAD 2016, the Vinnova conference on prize competitions in Stockholm, and
the INSEAD Entrepreneurship Forum 2016. The usual disclaimer applies.
REFERENCES
Afuah, A., & Tucci, C. (2012). Crowdsourcing as solution to distant search. Academy of Management
Review, 37(3), 355–375.
Agarwal, R., & Shah, S. K. (2014). Knowledge sources of entrepreneurship: Firm formation by
academic, user and employee innovators. Research Policy, 43(7), 1109–1133. doi:http://dx.doi.
org/10.1016/j.respol.2014.04.012
Amabile, T. M. (1993). Motivational synergy: Toward new conceptualizations of intrinsic and extrinsic
motivation in the workplace. Human Resource Management Review, 3(3), 185–201.
Baer, M., Dirks, K. T., & Nickerson, J. A. (2013). Microfoundations of strategic problem formulation.
Strategic Management Journal, 34(2), 197–214. doi:10.1002/smj.2004
Baer, M., Leenders, R. T. A., Oldham, G. R., & Vadera, A. K. (2010). Win or lose the battle for
creativity: The power and perils of intergroup competition. Academy of Management Journal,
53(4), 827–845.
Baldwin, C., & von Hippel, E. (2011). Modeling a paradigm shift: From producer innovation to user
and open collaborative innovation. Organization Science, 22(6), 1399–1417. doi:10.1287/
orsc.1100.0618
Bauer, J., Franke, N., & Tuertscher, P. (2016). Intellectual property norms in online communities:
How user-organized intellectual property regulation supports innovation. Information Systems
Research, 27(4), 724–750. doi:10.1287/isre.2016.0649
Baumann, O., & Siggelkow, N. (2013). Dealing with complexity: Integrated vs. chunky search processes.
Organization Science, 24(1), 116–132. doi:10.1287/orsc.1110.0729
Bayus, B. L. (2013). Crowdsourcing new product ideas over time: An analysis of the Dell IdeaStorm
community. Management Science, 59(1), 226–244. doi:10.1287/mnsc.1120.1599
Berends, H., Reymen, I., Stultiëns, R. G. L., & Peutz, M. (2011). External designers in product
design processes of small manufacturing firms. Design Studies, 32(1), 86–108. doi:https://
doi.org/10.1016/j.destud.2010.06.001
Berg, J. M. (2016). Balancing on the creative highwire: Forecasting the success of novel ideas in organi-
zations. Administrative Science Quarterly, 61(3), 433–468. doi:10.1177/0001839216642211
Boudreau, K. (2012). Let a thousand flowers bloom? Growing an applications software platform and
the rate and direction of innovation. Organization Science, 23(5), 1409–1427.
Boudreau, K. J., Brady, T., Ganguli, I., Gaule, P., Guinan, E., Hollenberg, T., & Lakhani, K. R. (2017).
A eld experiment on search costs and the formation of scientic collaborations. Review of
Economics and Statistics, 99(4), 565–576.
Boudreau, K. J., & Jeppesen, L. B. (2015). Unpaid crowd complementors: The platform network effect
mirage. Strategic Management Journal, 36(12), 1761–1777.
Boudreau, K. J., Lacetera, N., & Lakhani, K. R. (2011). Incentives and problem uncertainty in innova-
tion contests: An empirical analysis. Management Science, 57(5), 843–863.
Boudreau, K. J., & Lakhani, K. R. (2009). How to manage outside innovation. MIT Sloan Management
Review, 50(4), 69–76.
Boudreau, K. J., & Lakhani, K. R. (2012). High incentives, sorting on skills - or just a “taste” for com-
petition? Field experimental evidence from an algorithm design contest. Harvard Business School
Technology & Operations Mgt. Unit Working Paper No. 11–107. Cambridge, MA: Harvard
Business School.
Boudreau, K. J., Lakhani, K. R., & Menietti, M. (2014). Performance responses to competition across
skill-levels in rank order tournaments: Field evidence and implications for tournament design.
RAND Journal of Economics, 47(1), 140–165.
Brunt, L., Lerner, J., & Nicholas, T. (2012). Inducement prizes and innovation. The Journal of Industrial
Economics, 60(4), 657–696. doi:10.1111/joie.12002
Burgelman, R. A. (2002). Strategy as vector and the inertia of coevolutionary lock-in. Administrative
Science Quarterly, 47, 325–357.
Catalini, C., & Tucker, C. (2016). Seeding the S-curve? The role of early adopters in diffusion. NBER
Working Paper No. 22596. Cambridge, MA: NBER.
Cattani, G., & Ferriani, S. (2008). A core/periphery perspective on individual creative performance:
Social networks and cinematic achievements in the Hollywood film industry. Organization
Science, 19(6), 824–844. doi:10.1287/orsc.1070.0350
Cattani, G., Ferriani, S., & Lanza, A. (2017). Deconstructing the outsider puzzle: The legitimation
journey of novelty. Organization Science, 28(6), 965–992. doi:10.1287/orsc.2017.1161
Chen, Y., Harper, F. M., Konstan, J., & Xin Li, S. (2010). Social comparisons and contributions to
online communities: A eld experiment on MovieLens. The American Economic Review, 100(4),
1358–1398. doi:10.1257/aer.100.4.1358
Chubin, D. E. (1976). State of the field: The conceptualization of scientific specialties. Sociological
Quarterly, 17(4), 448–476. doi:10.1111/j.1533-8525.1976.tb01715.x
Cyert, R. M., & March, J. G. (1963). A behavioral theory of the firm. Malden, MA: Blackwell.
Dahlander, L., & Piezunka, H. (2014). Open to suggestions: How organizations elicit suggestions
through proactive and reactive attention. Research Policy, 43(5), 812–827.
Dahlander, L., & Wallin, M. W. (2006). A man on the inside: Unlocking communities as complementary
assets. Research Policy, 35(8), 1243–1259.
Ding, W., & Wolfstetter, E. G. (2011). Prizes and lemons: Procurement of innovation under
imperfect commitment. The RAND Journal of Economics, 42(4), 664–680. doi:10.1111/
j.1756-2171.2011.00149.x
Erat, S., & Krishnan, V. (2012). Managing delegated search over design spaces. Management Science,
58(3), 606–623. doi:10.1287/mnsc.1110.1418
Ethiraj, S. K., & Levinthal, D. A. (2004). Modularity and innovation in complex systems. Management
Science, 50(2), 159–173.
Faraj, S., von Krogh, G., Monteiro, E., & Lakhani, K. R. (2016). Special section introduction: Online community as space for knowledge flows. Information Systems Research, 27(4), 668–684.
doi:10.1287/isre.2016.0682
Felin, T., Lakhani, K. R., & Tushman, M. L. (2017). Firms, crowds, and innovation. Strategic
Organization, 15(2), 119–140. doi:10.1177/1476127017706610
Fernandes, R., & Simon, H. (1999). A study of how individuals solve complex and ill-structured prob-
lems. Policy Sciences, 32(3), 225–245. doi:10.1023/A:1004668303848
Franke, N., Keinz, P., & Klausberger, K. (2013). “Does this sound like a fair deal?”: Antecedents and
consequences of fairness expectations in the individual’s decision to participate in firm innova-
tion. Organization Science, 24(5), 1495–1516. doi:10.1287/orsc.1120.0794
Franke, N., Lettl, C., Roiser, S., & Tuertscher, P. (2014, January 1). “Does God play dice?” Randomness vs. deterministic explanations of crowdsourcing success. Paper presented at the Academy of Management Conference.
Franke, N., Poetz, M. K., & Schreier, M. (2014). Integrating problem solvers from analogous markets
in new product ideation. Management Science, 60(4), 1063–1081. doi:10.1287/mnsc.2013.1805
Frey, B. S., & Oberholzer-Gee, F. (1997). The cost of price incentives: An empirical analysis of motivation
crowding-out. American Economic Review, 87(4), 746–755.
Frey, K., Lüthje, C., & Haag, S. (2011). Whom should firms attract to open innovation platforms? The
role of knowledge diversity and motivation. Long Range Planning, 44(5–6), 397–420. doi:http://
dx.doi.org/10.1016/j.lrp.2011.09.006
Fuchs, C., Prandelli, E., & Schreier, M. (2010). The psychological effects of empowerment strategies
on consumers’ product demand. Journal of Marketing, 74(1), 65–79. doi:10.1509/jmkg.74.1.65
Füller, J., Bartl, M., Ernst, H., & Mühlbacher, H. (2006). Community based innovation: How to inte-
grate members of virtual communities into new product development. Electronic Commerce
Research, 6(1), 57–73. doi:10.1007/s10660-006-5988-7
Fullerton, R. L., & McAfee, R. P. (1999). Auctioning entry into tournaments. Journal of Political
Economy, 107(3), 573–605. doi:10.1086/250072
Gallus, J., & Frey, B. S. (2016). Awards: A strategic management perspective. Strategic Management
Journal, 37(8), 1699–1714. doi:10.1002/smj.2415
Garud, R., Jain, S., & Kumaraswamy, A. (2002). Institutional entrepreneurship in the sponsorship
of common technological standards: The case of Sun Microsystems and Java. Academy of
Management Journal, 45, 196–214.
Gavetti, G., Greve, H. R., Levinthal, D. A., & Ocasio, W. (2012). The behavioral theory of the firm:
Assessment and prospects. Academy of Management Annals, 6(1), 1–40.
Gavetti, G., Levinthal, D., & Ocasio, W. (2007). Neo-Carnegie: The Carnegie school’s past, present, and
reconstructing for the future. Organization Science, 18(3), 523–536.
Ghezzi, A., Gabelloni, D., Martini, A., & Natalicchio, A. (2018). Crowdsourcing: A review and
suggestions for future research. International Journal of Management Reviews, 20(2), 343–363.
doi:10.1111/ijmr.12135
Giorcelli, M., & Moser, P. (2016). Copyrights and creativity: Evidence from Italian operas. Cambridge,
MA: NBER.
Girotra, K., Terwiesch, C., & Ulrich, K. T. (2010). Idea generation and the quality of the best idea.
Management Science, 56(4), 591–605. doi:10.1287/mnsc.1090.1144
Goldenberg, S. (2011). BP’s oil spill crowdsourcing exercise: ‘A lot of effort for little result’. The
Guardian. Retrieved from http://www.theguardian.com/environment/2011/jul/12/bp-deepwater-
horizon-oil-spill-crowdsourcing
Gruber, M., MacMillan, I., & Thompson, J. (2008). Look before you leap: Market opportunity identification in emerging technology firms. Management Science, 54(9), 1652–1665.
Harrison, S. H., & Rouse, E. D. (2014). Let’s dance! Elastic coordination in creative group work:
A qualitative study of modern dancers. Academy of Management Journal, 57(5), 1256–1283.
doi:10.5465/amj.2012.0343
Hayek, F. A. (1945). The use of knowledge in society. American Economic Review, 35(4), 519–530.
Ideaconnection. (2009, December 4). Method to get fluoride powder into toothpaste tubes. Retrieved from
https://www.ideaconnection.com/open-innovation-success/Method-to-Get-Fluoride-Powder-
into-Toothpaste-Tubes-00057.html
Jeppesen, L. B., & Frederiksen, L. (2006). Why do users contribute to firm-hosted user communities?
The case of computer-controlled music instruments. Organization Science, 17(1), 45–66.
Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and problem-solving effectiveness in broadcast
search. Organization Science, 21(5), 1016–1033.
Johnson, J. P. (2002). Open source software: Private provision of a public good. Journal of Economics &
Management Strategy, 11(4), 637–662. doi:10.1111/j.1430-9134.2002.00637.x
Katila, R., & Mang, P. Y. (2003). Exploiting technological opportunities: The timing of collaborations.
Research Policy, 32, 317–332.
Koput, K. W. (1997). A chaotic model of innovative search: Some answers, many questions. Organization
Science, 8(5), 528–542. doi:10.1287/orsc.8.5.528
Kotha, R., George, G., & Srikanth, K. (2013). Bridging the mutual knowledge gap: Coordination and
the commercialization of university science. Academy of Management Journal, 56(2), 498–524.
doi:10.5465/amj.2010.0948
Kretschmer, T., & Puranam, P. (2008). Integration through incentives within differentiated organizations.
Organization Science, 19(6), 860–875.
Lakhani, K., & Wolf, R. (2005). Why hackers do what they do: Understanding motivation and effort in free/open source software projects. In J. Feller, B. Fitzgerald, S. Hissam, & K. Lakhani (Eds.), Perspectives on free and open source software (pp. 3–23). Cambridge, MA: MIT Press.
Lakhani, K. R., Jeppesen, L. B., Lohse, P. A., & Panetta, J. A. (2007). The value of openness in scientific
problem solving. Harvard Business School Working Paper No. 07–050. Retrieved from http://
www.hbs.edu/faculty/Publication%20Files/07-050.pdf
Lakhani, K. R., & Kanji, Z. (2008). Threadless: The business of community. Harvard Business School
Multimedia/Video Case, 608–707.
Lakhani, K. R., & Von Hippel, E. (2003). How open source software works: “Free” user-to-user assistance.
Research Policy, 32(6), 923–943.
Laursen, K., & Salter, A. (2006). Open for innovation: The role of openness in explaining innovation
performance among U.K. manufacturing firms. Strategic Management Journal, 27(2), 131–150.
doi:10.1002/smj.507
Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial
Economics, 50(2), 197–234.
Levinthal, D., & Posen, H. E. (2007). Myopia of selection: Does organizational adaptation limit the
efcacy of population selection? Administrative Science Quarterly, 52(4), 586–620.
Levinthal, D. A. (1997). Adaptation on rugged landscapes. Management Science, 43(7), 934–950.
Levinthal, D. A., & March, J. G. (1993). The myopia of learning. Strategic Management Journal, 14(S2), 95–112.
Lifshitz-Assaf, H. (2018). Dismantling knowledge boundaries at NASA: The critical role of
professional identity in open innovation. Administrative Science Quarterly, 63(4), 746–782.
doi:10.1177/0001839217747876
Lopez-Vega, H., Tell, F., & Vanhaverbeke, W. (2016). Where and how to search? Search paths in open
innovation. Research Policy, 45(1), 125–136. doi:https://doi.org/10.1016/j.respol.2015.08.003
Luo, H. (2014). When to sell your idea: Theory and evidence from the movie industry. Management
Science, 60(12), 3067–3086.
Majchrzak, A., & Malhotra, A. (2013). Towards an information systems perspective and research agenda
on crowdsourcing for innovation. The Journal of Strategic Information Systems, 22(4), 257–268.
March, J. G., & Simon, H. A. (1958). Organizations. New York, NY: John Wiley & Sons.
Mollick, E., & Nanda, R. (2016). Wisdom or madness? Comparing crowds with expert evaluation in
funding the arts. Management Science, 62(6), 1533–1553. doi:10.1287/mnsc.2015.2207
Moreland, R. L., & Argote, L. (2003). Transactive memory in dynamic organizations. In R. S. Peterson
& E. A. Mannix (Eds.), Leading and managing people in the dynamic organization (pp. 135–162).
Mahwah, NJ: Lawrence Erlbaum Associates.
Moser, P., & Nicholas, T. (2013). Prizes, publicity and patents: Non-monetary awards as a mechanism
to encourage innovation. The Journal of Industrial Economics, 61(3), 763–788. doi:10.1111/
joie.12030
O’Reilly, C. A., III. (1980). Individuals and information overload in organizations: Is more necessarily
better? Academy of Management Journal, 23(4), 684–696.
Piezunka, H., & Dahlander, L. (2015). Distant search, narrow attention: How crowding alters
organizations’ ltering of suggestions in crowdsourcing. Academy of Management Journal, 58(3),
856–880. doi:10.5465/amj.2012.0458
Poetz, M. K., & Schreier, M. (2012). The value of crowdsourcing: Can users really compete with pro-
fessionals in generating new product ideas? Journal of Product Innovation Management, 29(2),
245–256. doi:10.1111/j.1540-5885.2011.00893.x
Powell, W. W., Koput, K. W., & Smith-Doer, L. (1996). Inter-organizational collaboration and the locus
of innovation: Networks of learning in biotechnology. Administrative Science Quarterly, 41(1),
116–145.
Puranam, P., Alexy, O., & Reitzig, M. (2011). What’s ‘new’ about new forms of organizing? Academy of
Management Review, 39(2), 162–180.
Puranam, P., Raveendran, M., & Knudsen, T. (2012). Organization design: The epistemic interdepend-
ence perspective. Academy of Management Review, 37(3), 419–440.
Rivkin, J. W., & Siggelkow, N. (2003). Balancing search and stability: Interdependencies among
elements of organizational design. Management Science, 49(3), 290–311.
Rosen, S. (1981). The economics of superstars. American Economic Review, 71(5), 845–858.
doi:10.2307/1803469
Salter, A., ter Wal, A. L. J., Criscuolo, P., & Alexy, O. (2015). Open for ideation: Individual-level open-
ness and idea generation in R&D. Journal of Product Innovation Management, 32(4), 488–504.
doi:10.1111/jpim.12214
Santos, F. M., & Eisenhardt, K. M. (2005). Organizational boundaries and theories of organization.
Organization Science, 16, 491–508.
Schreyögg, G., & Sydow, J. (2011). Organizational path dependence: A process view. Organization
Studies, 32(3), 321–335. doi:10.1177/0170840610397481
Scotchmer, S. (2004). Innovation and incentives. Cambridge, MA: MIT Press.
Shah, S., & Tripsas, M. (2007). The accidental entrepreneur: The emergent and collective process of
user entrepreneurship. Strategic Entrepreneurship Journal, 1(1), 123–140.
Shane, S. (2000). Prior knowledge and the discovery of entrepreneurial opportunities. Organization
Science, 11(4), 448–469.
Sieg, J. H., Wallin, M. W., & Von Krogh, G. (2010). Managerial challenges in open innovation: A study
of innovation intermediation in the chemical industry. R&D Management, 40(3), 281–291.
doi:10.1111/j.1467-9310.2010.00596.x
Siggelkow, N., & Levinthal, D. A. (2003). Temporarily divide to conquer: Centralized, decentralized, and
reintegrated organizational approaches to exploration and adaptation. Organization Science,
14(6), 650–669.
Simon, H. A. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.),
Computers, communications, and the public interest (pp. 37–72). Baltimore, MD: The Johns Hopkins
Press.
Simon, H. A. (1972). Theories of bounded rationality. Decision and organization, 1, 161–176.
Snow, C. C., Fjeldstad, Ø. D., Lettl, C., & Miles, R. E. (2011). Organizing continuous product develop-
ment and commercialization: The collaborative community of rms model. Journal of Product
Innovation Management, 28(1), 3–16. doi:10.1111/j.1540-5885.2010.00777.x
Sobel, D. (1995). Longitude: The true story of a lone genius who solved the greatest scientific problem of
his time. New York, NY: Walker & Company.
Srikanth, K., & Puranam, P. (2014). The firm as a coordination system: Evidence from software
services offshoring. Organization Science, 25(4), 1253–1271. doi:10.1287/orsc.2013.0886
Sydow, J., Schüßler, E., & Müller-Seitz, G. (2016). Managing inter-organizational relations: Debates and
cases. London: Palgrave Macmillan.
Sydow, J., & Windeler, A. (1998). Organizing and evaluating interfirm networks: A structurationist
perspective on network processes and effectiveness. Organization Science, 9(3), 265–284.
doi:10.1287/orsc.9.3.265
Taylor, C. R. (1995). Digging for golden carrots: An analysis of research tournaments. American
Economic Review, 85(4), 872–890. doi:10.2307/2118237
Terwiesch, C., & Yi, X. (2008). Innovation contests, open innovation, and multiagent problem solving.
Management Science, 54(9), 1529–1543.
Verona, G., Prandelli, E., & Sawhney, M. (2006). Innovation and virtual environments: Towards virtual
knowledge brokers. Organization Studies, 27(6), 765–788. doi:10.1177/0170840606061073
von Hippel, E. (1986). Lead users: A source of novel product concepts. Management Science, 32(7),
791–805.
von Hippel, E. (1988). The sources of innovation. New York, NY: Oxford University Press.
von Hippel, E. (2005). Democratizing innovation. Cambridge, MA: The MIT Press.
von Hippel, E., & von Krogh, G. (2003). Open source software and the “private-collective” innovation
model: Issues for organization science. Organization Science, 14(2), 209–223.
von Hippel, E., & von Krogh, G. (2015). Crossroads: Identifying viable “need–solution pairs”: Problem
solving without problem formulation. Organization Science, 27(1), 207–221. doi:10.1287/
orsc.2015.1023
Winter, S. G., Cattani, G., & Dorsch, A. (2007). The value of moderate obsession: Insights from a
new model of organizational search. Organization Science, 18(3), 403–419. doi:10.1287/
orsc.1070.0273
Zhang, X., & Zhu, F. (2011). Group size and incentives to contribute: A natural experiment at Chinese
Wikipedia. The American Economic Review, 101(4), 1601–1615. doi:10.1257/aer.101.4.1601
APPENDIX: TABLE 10.A1. OVERVIEW OF THE STAGES AND KEY DECISIONS.

DEFINE

Task type: Problem- or solution-related knowledge.
Definition: Using the crowd to learn about problems and needs for which the organization might develop solutions, or using the crowd to find solutions to problems the organization faces.
Trade-off considerations: Gaining access to problem-related knowledge increases the organization's ability to address relevant problems and, thus, the organization's effectiveness. Gathering such knowledge may allow the organization to better satisfy current customers, discover new markets for existing technologies, or identify unsatisfied market needs. Gaining access to solution-related knowledge allows the organization to increase its efficiency by identifying solutions that are either more cost-efficient or of higher quality.
Interdependencies with other decisions: Evaluation criteria: Problem-related knowledge is less specified and increases the difficulty of setting clear evaluation criteria. Incentive type: Problem-related knowledge makes the use of pecuniary incentives difficult, given the absence of clear evaluation criteria.

Specificity of task definition: Narrow or broad.
Definition: Highly specifying tasks to target narrowly defined solution spaces or minimally specifying tasks to solicit a broader array of potential solutions.
Trade-off considerations: Narrow definitions ensure that incoming solutions can be applied to the underlying problem. Broad definitions engage large crowds from diverse fields of expertise.
Interdependencies with other decisions: Crowd size: The narrower the problem definition is, the more difficult it becomes to attract a sufficiently large crowd, since crowd contributors are more likely to lack necessary knowledge. Channel: The organization is less likely to have access to the specific crowd if the task definition is too narrow. Allocation: The broader the definition is, the harder it is to establish clear evaluation criteria. Organizations launching with broad definitions seek global maxima; such definitions might foster steep award structures.

Decomposition: Aggregated or decomposed tasks.
Definition: Aggregating tasks or decomposing them into smaller sub-components.
Trade-off considerations: Aggregated problems allow for holistic solutions that consider the interdependencies among decisions. While this is feasible, searching for such solutions takes a long time, and individual crowd contributors are less likely to have all the knowledge necessary to address all parts of the problem. Decomposed problems allow for the engagement of specialized crowd contributors, and multiple crowd contributors working in parallel may achieve fast progress. However, the solutions might not address interdependencies among individual problems.
Interdependencies with other decisions: Evaluation criteria: Solutions to decomposed problems are easier to evaluate using pre-existing metrics.

BROADCAST

Channel: Soliciting knowledge via intermediaries or through own initiatives.
Definition: Using an established intermediary to connect to a crowd or launching initiatives through the organization's own channels.
Trade-off considerations: Engaging intermediaries increases the organization's potential to reach large crowds. Professional crowd contributors who participate in many crowdsourcing initiatives might build relationships with intermediaries, thus decreasing search costs. Crowd contributors might be more inclined to engage if they perceive intermediaries to be more credible in protecting entrants. Engaging intermediaries might also accelerate the process, particularly when growing a crowd is not an option or would take too long. Forgoing an intermediary requires the organization to draw on its own community of stakeholders. Own crowds with prior experience with the organization's existing products or services may be more predictable and useful when incremental innovation is needed. The members of such crowds might also be more motivated to engage on behalf of an organization with which they already have a rapport.
Interdependencies with other decisions: Incentive type: Launching via an intermediary with an open approach to solver identities tends to offer successful crowd contributors wide opportunities for peer recognition, which may potentially compensate for a lack of pecuniary incentives. Evaluation criteria: Given well-known Not-Invented-Here resistance, convincing internal developers to accept problem knowledge (in the selection stage) may be easier than convincing them to accept (and select) solution-related knowledge to "their" problems. Task type and specificity: Engaging an intermediary often means gaining access to a crowd, but it can also mean gaining advice on solution definition, attraction, and selection. Following an own approach requires an organization to have sufficient internal knowledge and resources to run the process.

Invitation: Soliciting via open call or via invitation.
Definition: Using an open call to anyone willing to engage or using an invitation-only format to target a selected group who could self-select into the tasks.
Trade-off considerations: Open calls are more likely to create crowd diversity, at the cost of making proposed solutions more difficult to evaluate. They also increase the difficulty of protecting ideas, since all information becomes publicly available. Furthermore, open calls introduce uncertainty about the upper bounds of the crowd size. Note that, even for open calls, the organization might still invite certain individuals. Invitation-only crowds might increase motivation due to the perceived exclusivity. Such calls also make it possible to target specific crowd contributors (e.g., researchers in a particular domain).
Interdependencies with other decisions: Evaluation criteria: Open calls increase evaluation costs due to the diversity and number of solutions. Incentive type and allocation: Invitation-only calls have more predictable outcomes, reducing evaluation costs.

Crowd size: Small or large.
Definition: Targeting a small potential crowd or a large potential crowd.
Trade-off considerations: Large crowds involve high levels of competition, thus decreasing the incentive for any individual to exert effort. However, large crowds might also spark extreme-value solutions from unexpected crowd contributors, meaning that crowd contributors might underestimate their competition. Finally, large crowds increase the potential for diversity at the cost of greater evaluation costs.
Interdependencies with other decisions: Invitation: The mode of invitation influences the crowd size, with open calls producing larger crowds and invitation-only calls producing smaller crowds.

ATTRACT

Incentive type: Pecuniary or non-pecuniary.
Definition: Using pecuniary rewards (e.g., prizes) or non-pecuniary rewards (e.g., career opportunities, intellectual challenges, feedback).
Trade-off considerations: Pecuniary incentives are relatively easy to implement but could potentially crowd out non-pecuniary sources of motivation. Non-pecuniary incentives may be too weak to motivate contributors to reach the desired outcome. However, projects appealing to non-pecuniary motivations (e.g., social problems) may not require financial incentives to attract participation.
Resolution and interdependence with other decisions: Evaluation criteria: Using pecuniary incentives increases the importance of clear evaluation criteria. Crowd involvement: Using non-pecuniary incentives can produce more "unfinished" solutions that are difficult to select among, even in the presence of clear evaluation criteria. Thus, organizations that use non-pecuniary incentives often use the crowd to reduce evaluation costs.

Allocation: Flat or steep reward structure.
Definition: Using a flat reward structure, in which awards and attention are fairly evenly distributed across the crowd, or a steep reward structure, in which a few individuals receive disproportionate amounts of rewards or attention.
Trade-off considerations: A flat reward approach may be most appropriate when several solutions are needed. A steep (or single) reward structure may be most beneficial when the objective is to find an extreme-value solution. Steep reward structures can reduce crowd contributors' tendency to publicly share information.

Ownership ambiguity: Ex ante definition of ownership.
Definition: Defining, prior to the crowdsourcing, what will happen to the ownership of successful (and unsuccessful) entries.
Trade-off considerations: Defining ownership ex ante is only possible when the problem can be specified or narrowly defined. Clearly defining ex ante ownership of solutions that are not picked prevents "solution stealing" and is central to attracting solutions. (Consider the Arrow paradox: can the seeking organization use ideas and solutions that are not rewarded?) Furthermore, defining ownership ex ante makes it more difficult for crowd contributors to reuse and recombine other crowd contributors' solutions with their own.
Resolution and interdependence with other decisions: Crowd size: Clearer ownership positively affects the ability to attract a large crowd. Incentive: The size of the prize must compensate for the loss entailed in cases where ownership is transferred.

SELECT

Evaluation criteria: Metric or judgment call.
Definition: Using pre-defined metrics to evaluate entries or using a judgment call made by a group of judges.
Trade-off considerations: Metrics reduce evaluation costs but may exclude novel entries that depart from what is known. Metrics also reduce the organization's discretion to select unanticipated entries that are slightly worse than the best alternative.
Resolution and interdependence with other decisions: Incentive type: The use of pecuniary incentives increases the need for clear criteria. Task type: Judgment calls increase discretion but elevate the risk of "off criteria" selection and crowd outrage.

Crowd involvement: Crowd filtering or not.
Definition: Using crowd filtering, in which crowd contributors vote entries up or down, or not.
Trade-off considerations: Crowd filtering reduces the selection burden by aggregating the preferences of the crowd; however, the crowd's preferences are not necessarily the same as those of the seeking organization's customers. Crowd filtering can also lead to herd behavior, which may cause small initial differences to escalate, and complacency, in which the seeking organization may follow the recommendations of the crowd without carefully considering all alternatives or the fit of the chosen solution with the underlying issue. Finally, crowd filtering can lead to potential conflicts with the organization when interests are misaligned.
Resolution and interdependence with other decisions: Task type: When solutions are directly applicable to crowd needs (e.g., when consumer solutions are being voted on by consumers), the crowd may have better (need-based) solution knowledge than the organization. Invitation: The outcome of a vote by contributors should be directly relevant to voters to motivate them to participate.

Sequential: One-off or stage-gate (sequential) model.
Definition: Adopting a single one-off selection approach or using a stage-gate approach, in which selection decisions are made sequentially.
Trade-off considerations: Sequential selections increase the possibility of using different selection heuristics at the cost of increased complexity.
Resolution and interdependence with other decisions: Aggregation and task type: The cost of definition may increase if problems need to be redefined for each selection stage. Incentive type: The chances of winning increase for "finalists," hence strengthening incentives to participate.
... The three identified practices-small rewards, task complexity, and an autonomy-supportive cue-add new knowledge to the phenomenon-driven crowdsourcing literature. For instance, regarding the specifications of the task, the nature and wording of the call formulations, our findings on the positive influence of task complexity and autonomy-supportive linguistic cues on intrinsic motivation shift the attention from the cognitive aspects-which have been central in prior research-to the motivational ones [82]. ...
Article
Full-text available
Existing crowdsourcing research largely agrees that intrinsic motivation is essential for users' intention to submit ideas to company-hosted crowdsourcing initiatives. However, enhancing intrinsic motivation is particularly difficult in crowdsourcing settings, given the limited potential for personal exchange with others. Therefore, identifying effective interventions to stimulate intrinsic motivation is an important gap. We draw on research in analogous contexts characterized by the absence of significant others (e.g., creative artwork, sports, and self-directed learning). Using the self-determination theory as a theoretical foundation, we theorize that organizers can use monetary incentives (offering small rewards) and non-monetary rewards (increasing task complexity and using autonomy-supportive linguistic cues) to stimulate intrinsic motivation. In three lab-in-the-field experiments, we test our predictions. Quite counterintuitively, we find that small rewards (rather than no or large rewards) are an effective mechanism to intrinsically motivate users and increase their intention to submit their ideas to company-hosted idea crowdsourcing contests. Also, our findings reveal that increasing rather than lowering task complexity and using non-controlling rather than controlling linguistic cues can stimulate intrinsic motivation and submission intention. Our paper sheds first light on interventions stimulating intrinsic motivation in idea crowdsourcing. More generally, it also adds to the discussion of the small rewards effect.
Article
Full-text available
Crowdsourcing has evolved as an organizational approach to distributed problem solving and innovation. As contests are embedded in online communities and evaluation rights are assigned to the crowd, community members face a tension: They find themselves exposed to both competitive motives to win the contest prize and collaborative participation motives in the community. The competitive motive suggests they may evaluate rivals strategically according to their self-interest, the collaborative motive suggests they may evaluate their peers truthfully according to mutual interest. Using field data from Threadless on 38 million peer evaluations of more than 150,000 submissions across 75,000 individuals over 10 years and two natural experiments to rule out alternative explanations, we answer the question of how community members resolve this tension. We show that as their skill level increases, they become increasingly competitive and shift from using self-promotion to sabotaging their closest competitors. However, we also find signs of collaborative behavior when high-skilled members show leniency toward those community members who do not directly threaten their chance of winning. We explain how the individual-level use of strategic evaluations translates into important organizational-level outcomes by affecting the community structure through individuals’ long-term participation. Although low-skill targets of sabotage are less likely to participate in future contests, high-skill targets are more likely. This suggests a feedback loop between competitive evaluation behavior and future participation. These findings have important implications for the literature on crowdsourcing design, and the evolution and sustainability of crowdsourcing communities. Funding: This work was supported by the National Science Foundation [Grant IIS-1514283] and the U.S. Office of Naval Research [Grant N00014-17-1-2542]. 
Supplemental Material: The online appendix is available at https://doi.org/10.1287/orsc.2021.15163 .
Article
This research article addresses the current status and emerging trends in the literature at the intersection of crowdsourcing and innovation. While separate reviews exist for crowdsourcing and innovation individually, a comprehensive literature review specific to their intersection is lacking. Therefore, this study conducts a bibliometric meta-analysis of the extensive body of literature in the field of innovation and crowdsourcing, aiming to fill this research gap. The analysis encompasses all articles in Elsevier's Scopus database that incorporate relevant terms in their titles, abstracts, or keywords, resulting in a sample of 180 articles. VosViewer and Bibliometrix package in R are employed for the analysis. The analysis reveals three key research clusters, including the role of crowdsourcing in fostering organizational innovation, crowdsourcing-based social capital formation and its impact on organizational value creation, and the role of crowdsourcing platforms in facilitating engagement between the crowd and the organization. Adopting a holistic perspective, this research contributes new insights into the interconnections between crowdsourcing and innovation research fields. Additionally, the analysis provides clarity on research content, evolutionary context, and reveals emerging research trends in this domain.
Chapter
This Handbook is the first reference created for the large, diverse, and growing field of Open Innovation. Four editors, 75 reviewers, and 136 contributors collaboratively developed 57 handbook chapters. These present the current state of the growing body of work in Open Innovation from leading scholars and researchers in the field. It brings together 48 chapters on academic aspects of the theory, and 9 chapters that show Open Innovation in practice in different industries. The empirical, conceptual, and practical insights of the Handbook highlight the importance of strengthening practice-inspired research, and purposeful knowledge exchanges between individuals, organizations, and ecosystems. The Handbook is a great place to start learning about Open Innovation for beginners and an authoritative reference for those with more experience.
Chapter
This Handbook is the first reference created for the large, diverse, and growing field of Open Innovation. Four editors, 75 reviewers, and 136 contributors collaboratively developed 57 handbook chapters. These present the current state of the growing body of work in Open Innovation from leading scholars and researchers in the field. It brings together 48 chapters on academic aspects of the theory, and 9 chapters that show Open Innovation in practice in different industries. The empirical, conceptual, and practical insights of the Handbook highlight the importance of strengthening practice-inspired research, and purposeful knowledge exchanges between individuals, organizations, and ecosystems. The Handbook is a great place to start learning about Open Innovation for beginners and an authoritative reference for those with more experience.
Chapter
This Handbook is the first reference created for the large, diverse, and growing field of Open Innovation. Four editors, 75 reviewers, and 136 contributors collaboratively developed 57 handbook chapters. These present the current state of the growing body of work in Open Innovation from leading scholars and researchers in the field. It brings together 48 chapters on academic aspects of the theory, and 9 chapters that show Open Innovation in practice in different industries. The empirical, conceptual, and practical insights of the Handbook highlight the importance of strengthening practice-inspired research, and purposeful knowledge exchanges between individuals, organizations, and ecosystems. The Handbook is a great place to start learning about Open Innovation for beginners and an authoritative reference for those with more experience.
Article
Full-text available
Using a longitudinal in-depth field study at NASA, I investigate how the open, or peer-production, innovation model affects R&D professionals, their work, and the locus of innovation. R&D professionals are known for keeping their knowledge work within clearly defined boundaries, protecting it from individuals outside those boundaries, and rejecting meritorious innovation that is created outside disciplinary boundaries. The open innovation model challenges these boundaries and opens the knowledge work to be conducted by anyone who chooses to contribute. At NASA, the open model led to a scientific breakthrough at unprecedented speed using unusually limited resources; yet it challenged not only the knowledge-work boundaries but also the professional identity of the R&D professionals. This led to divergent reactions from R&D professionals, as adopting the open model required them to go through a multifaceted transformation. Only R&D professionals who underwent identity refocusing work dismantled their boundaries, truly adopting the knowledge from outside and sharing their internal knowledge. Others who did not go through that identity work failed to incorporate the solutions the open model produced. Adopting open innovation without a change in R&D professionals’ identity resulted in no real change in the R&D process. This paper reveals how such processes unfold and illustrates the critical role of professional identity work in changing knowledge-work boundaries and shifting the locus of innovation.
Article
Full-text available
The proposition that outsiders often are crucial carriers of novelty into an established institutional field has received wide empirical support. But an equally compelling proposition points to the following puzzle: the very same conditions that enhance outsiders’ ability to make novel contributions also hinder their ability to carry them out. We seek to address this puzzle by examining the contextual circumstances that affect the legitimation of novelty originating from a noncertified outsider that challenged the status quo in an established institutional field. Our research case material is John Harrison’s introduction of a new mechanical method for measuring longitude at sea—the marine chronometer— which challenged the dominant astronomical approach.We find that whether an outsider’s new offer gains or is denied legitimacy is influenced by (1) the outsider’s agency to further a new offer, (2) the existence of multiple audiences with different dispositions toward this offer, and (3) the occurrence of an exogenous jolt that helps create a more receptive social space. We organize these insights into a multilevel conceptual framework that builds on previouswork but attributes a more decisive role to the interplay between endogenous and exogenous variables in shaping a field’s shifting receptiveness to novelty. The framework exposes the interdependencies between the micro-, meso-, and macro-level processes that jointly affect an outsider’s efforts to introduce novelty into an existing field.
Article
Full-text available
The proposition that outsiders often are crucial carriers of novelty into an established institutional field has received wide empirical support. But an equally compelling proposition points to the following puzzle: the very same conditions that enhance outsiders' ability to make novel contributions also hinder their ability to carry them out. We seek to address this puzzle by examining the contextual circumstances that affect the legitimation of novelty originating from a non-certified outsider that challenged the status quo in an established institutional field. Our research case material is John Harrison's introduction of a new mechanical method for measuring longitude at sea – the marine chronometer – which challenged the dominant astronomical approach. We find that whether an outsider's new offer gains or is denied legitimacy is influenced by (1) the outsider's agency to further a new offer, (2) the existence of multiple audiences with different dispositions towards this offer, and (3) the occurrence of an exogenous jolt that helps create a more receptive social space. We organize these insights into a multilevel conceptual framework that builds on previous work but attributes a more decisive role to the interplay between endogenous and exogenous variables in shaping a field's shifting receptiveness to novelty. The framework exposes the mutually constitutive relationships between the micro-, meso-, and macro-level processes that jointly affect an outsider's efforts to introduce novelty into an existing field.
Article
Full-text available
The purpose of this article is to suggest a (preliminary) taxonomy and research agenda for the topic of “firms, crowds, and innovation” and to provide an introduction to the associated special issue. We specifically discuss how various crowd-related phenomena and practices—for example, crowdsourcing, crowdfunding, user innovation, and peer production—relate to theories of the firm, with particular attention on “sociality” in firms and markets. We first briefly review extant theories of the firm and then discuss three theoretical aspects of sociality related to crowds in the context of strategy, organizations, and innovation: (1) the functions of sociality (sociality as extension of rationality, sociality as sensing and signaling, sociality as matching and identity), (2) the forms of sociality (independent/aggregate and interacting/emergent forms of sociality), and (3) the failures of sociality (misattribution and misapplication). We conclude with an outline of future research directions and introduce the special issue papers and essays.
Article
Full-text available
We present the results of a field experiment conducted at Harvard Medical School to understand the extent to which search costs affect matching among scientific collaborators. We generated exogenous variation in search costs for pairs of potential collaborators by randomly assigning individuals to 90-minute structured information-sharing sessions as part of a grant funding opportunity. We estimate that the treatment increases the probability of grant co-application of a given pair of researchers by 75%. The findings suggest that matching between scientists is subject to considerable friction, even in the case of geographically proximate scientists working in the same institutional context. © 2017 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.
Article
This paper exploits variation in the adoption of copyright laws within Italy – as a result of variation in the timing of Napoleon’s military victories – to examine the effects of copyrights on creativity. To measure variation creative output, we use new data on 2,598 operas that premiered across eight states within Italy between 1770 and 1900. These data indicate that the adoption of copyrights led to a significant increase in the number of new operas premiered per state and year. We find that the number of high-quality operas also increased – measured both by their contemporary popularity and by the longevity of operas. By comparison, evidence for a significant effect of copyright extensions is limited. Our analysis of alternative mechanisms for this increase reveals a substantial shift in composer migration in response to copyrights. Consistent with agglomeration externalities, we also find that cities with a better pre-existing infrastructure of performance spaces benefitted more copyright laws.
Article
The problem of designing, coordinating and managing complex systems is central to the management and organizations literature. Recent writings have emphasized the important role of modularity in enhancing the adaptability of such complex systems. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the virtues of parallelism that modularity offers and the destabilizing effects of overly refined modularization. In addition, high levels of integration can lead to modest levels of search and a premature fixation on inferior designs. The model captures some key aspects of technological evolution as a joint process of autonomous firm level innovation and the interaction of systems and modules in the marketplace. We discuss the implications of these arguments for product and organization design.
Article
In October 2014, all 4,494 undergraduates at the Massachusetts Institute of Technology were offered access to Bitcoin, a decentralized digital currency. As a unique feature of the experiment, students who would generally adopt first were placed in a situation where many of their peers received access to the technology before them, and they then had to decide whether to continue to invest in this digital currency or exit. Our results suggest that when natural early adopters are delayed relative to their peers, they are more likely to reject the technology. We present further evidence that this appears to be driven by identity, in that the effect occurs in situations where natural early adopters' delay relative to others is most visible, and in settings where the natural early adopters would have been somewhat unique in their tech-savvy status. We then show not only that natural early adopters are more likely to reject the technology if they are delayed, but that this rejection generates spillovers on adoption by their peers who are not natural early adopters. This suggests that small changes in the initial availability of a technology have a lasting effect on its potential: Seeding a technology while ignoring early adopters' needs for distinctiveness is counterproductive.