HOW ORGANIZATIONS MANAGE CROWDS:
DEFINE, BROADCAST, ATTRACT AND SELECT
Forthcoming in
Research in the Sociology of Organizations:
“Managing Inter-organizational Collaborations: Process Views”
Edited by Jörg Sydow and Hans Berends
LINUS DAHLANDER
ESMT Berlin
LARS BO JEPPESEN
Copenhagen Business School
HENNING PIEZUNKA
INSEAD
ABSTRACT: Crowdsourcing – a form of collaboration across organizational boundaries –
provides access to knowledge beyond an organization’s local knowledge base. Integrating work on
organization theory and innovation, we first develop a framework that characterizes crowdsourcing
into a main sequential process, through which organizations (1) define the task they wish to have
completed; (2) broadcast to a pool of potential contributors; (3) attract a crowd of contributors; and
(4) select among the inputs they receive. For each of these phases, we identify the key decisions
organizations make, provide a basic explanation for each decision, discuss the trade-offs
organizations face when choosing among decision alternatives, and explore how organizations may
resolve these trade-offs. Using this decision-centric approach, we continue by showing that there are
fundamental interdependencies in the process that make the coordination of crowdsourcing
challenging.
KEYWORDS: inter-organizational collaboration, crowdsourcing, innovation, interdependence,
search, organization
ACKNOWLEDGMENTS: All authors contributed equally. We are grateful for comments and
suggestions from the Editors Jörg Sydow and Hans Berends as well as Kevin Boudreau, Henrich
Greve, Karim Lakhani, Woody Powell, Phanish Puranam, and Ammon Salter, as well as seminar
participants at the Open and User Innovation Conference in Brighton 2013, the AOM Conference
in Philadelphia 2014, the Crowdsourcing Workshop at INSEAD 2016, the Vinnova conference on
prize competitions in Stockholm, and the INSEAD Entrepreneurship Forum 2016. The usual
disclaimer applies.
Many organizational theorists have argued that specialization is inevitable for dealing with
growing complexity inside organizations. The traditional coping mechanism has been to create new
subunits or functions within organizations, where people share the same language, to reduce this
complexity (Levinthal & March, 1993). This often creates rifts between increasingly specialized units
that have difficulty communicating with one another, making it important to resolve coordination
problems between such specialized units. But the challenge often goes beyond efficient organization
within a firm; one also has to organize across organizations (Sydow & Windeler, 1998). According to
Hayek (1945), knowledge is widely distributed across society, which implies that all relevant
knowledge cannot be found in one single organization. In fact, developing new complex products
and services often involves spanning normative, technical, geographical and organizational
boundaries to identify knowledge that can be recombined (Santos & Eisenhardt, 2005; Schreyögg &
Sydow, 2011). Motivated by these observations, a great deal of research on inter-organizational
collaborations has emerged in the last decades (Powell, Koput, & Smith-Doer, 1996; Sydow,
Schüßler, & Müller-Seitz, 2016). We bring attention to a particular new form of collaboration that
transcends organizational boundaries – collaborations with crowds (Afuah & Tucci, 2012; Felin,
Lakhani, & Tushman, 2017; Ghezzi, Gabelloni, Martini, & Natalicchio, 2018; Jeppesen & Lakhani,
2010; Puranam, Alexy, & Reitzig, 2011). A key feature of working with crowds is that participants
self-select into the collaboration without a central hierarchy assigning people to tasks and
collaboration partners. This approach marks a difference from other forms of collaborations
spanning boundaries that entail matching on both ends (Sydow et al., 2016). Crowdsourcing is
defined here as inviting an undefined group of contributors to self-select to work on tasks (for a comparison of
different types of crowdsourcing, see Ghezzi et al., 2018; Majchrzak & Malhotra, 2013). Contributors
can be individuals, as is most often the case, but also teams or even whole organizations.
Governments and companies have long used crowdsourcing as a wellspring of ideas in order to advance
such diverse issues as industrializing land, controlling infectious diseases, and mass-producing and
conserving food. For example, in 1714, the British government offered the Longitude Prize to elicit
ideas on how to solve one of the most pressing scientific problems of the time: determining
longitude at sea. Similarly, in 1869, Napoleon III used the Margarine Prize to overcome the era’s
butter shortage. In the last two decades, an increasing number of organizations have begun to
engage in crowdsourcing to solve more contemporary issues (Brunt, Lerner, & Nicholas, 2012). This
recent widespread adoption of crowdsourcing is rooted in (1) the constant need for innovation
which prompts organizations to search for knowledge beyond their boundaries; (2) the emergence
of the Internet, which has expanded the potential reach of crowdsourcing; and (3) the decline in
information and computation costs, which has facilitated sophisticated problem solving and
innovation at the individual level (Baldwin & von Hippel, 2011; Faraj, Krogh, Monteiro, & Lakhani,
2016; von Hippel, 2005; von Hippel & von Krogh, 2003).
Crowdsourcing constitutes a new form of organizing, which differs from conventional forms
in terms of task division, task allocation, information provision, and reward distribution (Puranam et
al., 2011). Inspired by the behavioral theory of the firm, which views decisions as the key unit for
understanding organizations (Cyert & March, 1963), we develop a process model of crowdsourcing
focusing on the key decisions organizations face. We separate crowdsourcing into four phases,
during which organizations (1) define the inputs they are seeking; (2) broadcast tasks to a pool of
potential contributors; (3) attract a crowd of contributors; and (4) select among the inputs they
receive. For each of these phases, we identify the key decisions organizations make, provide a basic
explanation for each decision, and discuss the trade-offs organizations face when choosing among
decision alternatives. Although the basic model is sequential, the simplest form of
interdependence in a process (Berends, Reymen, Stultiëns, & Peutz, 2011), we also elaborate on how
decisions are interdependent on other decisions, as illustrated in Figure 1. These interdependencies
make organizing crowdsourcing challenging. An overview of the different
stages and the interdependencies can be found in Appendix 1. The interdependencies are elaborated
in the right-hand column of the table summarizing the major decisions.
--- INSERT FIGURE 1 ABOUT HERE ---
DEFINE
Definition of the desired input is crucial, as it is one of the few pieces of information the crowd receives
from an organization initially. Once broadcast, input definitions are difficult to change. The
process of defining tasks for crowdsourcing, therefore, differs from the ongoing dialogue that
typically takes place when ideas are developed internally (Baer, Dirks, & Nickerson, 2013). We
suggest that the process of defining a task requires an organization to make three major decisions:
(1) the type of knowledge that is sought, (2) the degree to which the task is decomposed, and (3) the
degree to which the desired input is specified.
Type of knowledge: problem- or solution-related
Definition and explanation. An organization has to decide whether it is seeking solution- or
problem-related knowledge (von Hippel, 1988). Scholars of crowdsourcing have studied how
organizations crowdsource problem-related knowledge (Bayus, 2013; Dahlander & Piezunka, 2014)
as well as how they crowdsource solution-related knowledge (Boudreau & Lakhani, 2012; Jeppesen
& Lakhani, 2010). Organizations seeking solution-related knowledge ask the crowd for solutions to
specific problems. For example, Colgate-Palmolive turned to crowdsourcing to find a solution to
“get fluoride powder into toothpaste tubes without it dispersing into the atmosphere”
(Ideaconnection, 2009). Experts within the organization had invested significant resources to find a
chemistry-based solution but had ultimately failed. When the organization turned to crowdsourcing,
however, the problem caught the eye of a retired physicist, who was able to immediately identify a
solution: Because of his background in a different domain of expertise, the physicist understood that
he could coax the fluoride into the tube by grounding the tube and applying a positive charge to the
fluoride (Boudreau & Lakhani, 2009). By contrast, organizations seeking problem-related knowledge
ask the crowd about the types of problems they should solve. For example, during its “My Starbucks
Idea” promotion, Starbucks asked its customers for ideas for new flavors and products and to
identify any problems Starbucks could address.
Trade-off. Crowdsourcing solution-related knowledge fosters organizations’ efficiency, since it
improves how a known problem is solved. As such, this approach can help organizations find new
solutions to existing problems by attracting large and diverse crowds representing a variety of
perspectives (Jeppesen & Lakhani, 2010). This can be done in a cost-efficient manner as the crowd
pursues multiple possible solutions, and the organization only implements the most promising ones
(Boudreau, Lacetera, & Lakhani, 2011). The potential downside of using crowdsourcing to gain
solution-related knowledge is that organizations draw attention to their problems. Unless the call for
solutions is abstracted or obfuscated, the public availability of such information may deter potential
customers and grant competitors insights into an organization’s weaknesses. Crowdsourcing also
requires organizations to adjust internally. For example, soliciting solutions from external sources
necessitates a change in the role of internal R&D departments: Internal developers and researchers
stop being problem-solvers and become, instead, managers and stewards of the contributors in the
crowd (Boudreau & Lakhani, 2009; Lifshitz-Assaf, 2018). Thus, while crowdsourced solutions are
likely to increase efficiency, the process of gathering these solutions imposes certain requirements
that management may not be able to readily meet.
Crowdsourcing problem-related knowledge fosters an organization’s effectiveness since it can
increase the organization’s chances of finding relevant problems. Organizations may have access to
novel technologies (or solutions) but lack insight into the problems for which these technologies
are relevant (Catalini & Tucker, 2016; Gruber, MacMillan, & Thompson, 2008; Shane, 2000).
Crowds can help to identify profitable applications for such technologies. However, engaging a
crowd in the search for problem-related knowledge is not without challenges. Crowds may point out
problems organizations wish to neither prioritize nor discuss publicly. For example, crowds may
emphasize the need to improve customer service, lower prices, or become more sustainable. On
such issues, the interests of organizations and crowds (which might comprise, in part, the
organization’s customers) might be misaligned, potentially resulting in public disagreements between
organizations and their primary stakeholders (Garud, Jain, & Kumaraswamy, 2002). To innovate,
organizations need to match knowledge about problems (i.e., unsolved needs in the marketplace)
with solutions (von Hippel, 1986; von Hippel & von Krogh, 2015). Organizations can potentially
combine the two alternatives sequentially (e.g., by first crowdsourcing problem-related and then
solution-related knowledge).
Specificity of task definition
Definition and explanation. Organizations often struggle to determine the optimal level of
specificity when defining a crowdsourcing task (Fernandes & Simon, 1999). The specificity with
which a task is defined outlines (and potentially constrains) the solution space. For example,
consider Netflix’s search for ways to improve the effectiveness of its movie recommendation
system. Though, in reality, Netflix provided the crowd with very few details, it could alternatively
have chosen to specify that it was only seeking specific types of solutions (e.g., solutions building on
machine learning).
Trade-off. Overspecification can lead to unnecessary and potentially detrimental constraints
(Erat & Krishnan, 2012). In the classic example of finding longitude at sea, Isaac Newton, who was
a member of the longitude board that served as the evaluation committee, imposed limitations on
the problem formulation that, in hindsight, were unnecessary (Sobel, 1995). Specifically, he stated
that a relevant solution based on principles other than those of astronomy was unthinkable. As
history reveals, he was mistaken, and the problem was eventually solved using the principles of
clockwork. In a study of the InnoCentive problem-solving platform, Jeppesen and Lakhani (2010)
demonstrated that marginality in problem solving is related to success in crowdsourcing. For
instance, in one case, the optimal solution for a problem in toxicology originated in the field of
protein crystallography. Hence, given the impossibility of predicting the possible
origins of a solution, it is important not to impose any more restrictions (e.g., through terminologies,
jargon, or framings that reflect the approach(es) and/or bias(es) of a specific field or fields) on a task
than necessary.
Organizations can also underspecify tasks. If an organization fails to sufficiently specify its
desired inputs, the crowd’s contributions might be unusable (Baer et al., 2013). For example, imagine
an engine manufacturer seeking a solution for its cylinder design. If the crowd’s input fails to meet
certain criteria (e.g., size, weight, heat), it might be useless. Defining such specifications requires
significant upfront organizational effort. For example, the Progressive X-prize, which sought to
produce an energy-efficient car that could pass U.S. road safety specifications, had more than 50
pages of documentation and rules. Such efforts may, however, be necessary to ensure that the crowd
focuses on relevant and feasible search domains.
If organizations seek more radical input and/or path-breaking solutions, they may be better
served by opting for less specification. Since many organizations engage in crowdsourcing to access
distant knowledge of which they are not aware (Afuah & Tucci, 2012; Jeppesen & Lakhani, 2010),
organizations’ specifications may unintentionally exclude valuable input. By providing only minimal
specifications, organizations ensure the deployability of the crowd’s contribution, while maximizing
freedom within these specifications. Such minimal specifications allow crowd contributors to
redefine and interpret a problem through the lenses of their own expertise and local knowledge
(which are likely to be distant from the organization) (Winter, Cattani, & Dorsch, 2007). Avoiding
unnecessary specifications can increase the number of people willing to engage in a task. In sum,
while a lower degree of specification is likely to increase the share of unusable input, it increases the
chances of sourcing extremely valuable input (Boudreau et al., 2011).
Decomposition: aggregated or decomposed tasks
Definition and explanation. When organizations reach out to crowds, they decide on the degree
to which they wish to keep tasks aggregated or decompose them into chunks (Baumann &
Siggelkow, 2013; Ethiraj & Levinthal, 2004; Rivkin & Siggelkow, 2003). For example, DARPA’s
Grand Challenge on autonomous driving sought a complete vehicle; however, DARPA could
alternatively have decomposed the task and crowdsourced the various components of a self-driving
car (e.g., chassis, powertrain, scanner, software). Determining how to divide tasks and account for
interdependencies has been shown to be a crucial problem in organizational design and product
architecture (Ethiraj & Levinthal, 2004). Crowdsourcing aggravates this problem, since the
coordination mechanisms that exist within organizational boundaries (Kotha, George, & Srikanth,
2013; Kretschmer & Puranam, 2008; Srikanth & Puranam, 2014) as well as for inter-organizational
relationships (Sydow et al., 2016) are not available in crowdsourcing. While, internally, organizations
can switch between aggregated and decomposed tasks (Siggelkow & Levinthal, 2003), such switching
is almost impossible when interacting with a crowd composed of external contributors.
Trade-offs. Aggregated tasks require that crowd contributors consider the
interdependencies among different task components to achieve a global maximum. For example,
addressing the DARPA Grand Challenge on autonomous driving required considering the complex
interdependencies between the various components of a self-driving car. When the task is
decomposed, these interdependencies cannot be considered (Baumann & Siggelkow, 2013; Ethiraj &
Levinthal, 2004; Rivkin & Siggelkow, 2003). The challenge of an aggregated task, however, is that a
single individual and even a single organization is unlikely to have all the knowledge required to
solve it. Thus, aggregated tasks make it difficult for individuals to achieve and deliver relevant results
(Sieg, Wallin, & Von Krogh, 2010). Individuals who have relevant but insufficient knowledge would
first need to find other individuals with complementary knowledge, with whom they would then
need to organize. Then, the groups of individuals would face the challenge of coordinating the
creative process to develop novel ideas (Harrison & Rouse, 2014). For this reason, unless specific
coordination mechanisms exist to foster collaboration among participants, aggregation may exclude
individuals and constrain participation to existing organizations. Indeed, in the aggregated DARPA
challenge, contributors were mostly existing private companies and university labs. Implicitly
constraining participation to organizations in this way may also constrain creativity, which is often
fostered by individual ideation early in the search process (Girotra, Terwiesch, & Ulrich, 2010).
One of the important advantages of a decomposed task is that task completion requires only
a specific set of knowledge. Such tasks, therefore, are more accessible for single individuals to
comprehend and solve. As a result, organizations that choose to decompose their crowdsourcing
tasks lower their barriers to entry, allowing a broader range of contributors to engage. Since the
various decomposed tasks will be addressed by different individuals with specialized knowledge, the
input quality for each of these subtasks is likely to be higher (Moreland & Argote, 2003). Effective
decomposition can, however, be challenging, since the organization might not yet understand the
different task components or how they relate (Ethiraj & Levinthal, 2004). Thus, while strong
decomposition can result in optimal local solutions, it may prevent the crowd from finding a globally
optimal solution. For example, imagine if DARPA had crowdsourced the best laser for scanning the
environment and, separately, the best software for interpreting laser-based images; it would have
been challenging for DARPA to then combine these components into a self-driving car, and the
result would not necessarily have been optimal.
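The local-versus-global tension in this DARPA thought experiment can be sketched with a toy simulation in the spirit of the NK-style models cited in this section (Ethiraj & Levinthal, 2004). The two-component payoff function below is an illustrative assumption, not an empirical claim: option 2 for each component looks inferior when evaluated in isolation, but the pair (2, 2) carries a large synergy that only aggregated search can find.

```python
import itertools

# Toy payoff for a two-component design (e.g., a scanner and the
# software interpreting its output) with interdependence: each option 2
# looks inferior when the other component is held at a default, but the
# combination (2, 2) has a large synergy.
def payoff(laser, software):
    base = [3, 2, 1][laser] + [3, 2, 1][software]
    synergy = 6 if (laser, software) == (2, 2) else 0
    return base + synergy

# Decomposed search: optimize each component in isolation,
# holding the other fixed at a default choice (0).
best_laser = max(range(3), key=lambda l: payoff(l, 0))
best_software = max(range(3), key=lambda s: payoff(0, s))
decomposed_value = payoff(best_laser, best_software)

# Aggregated search: evaluate all combinations jointly.
joint_value = max(payoff(l, s)
                  for l, s in itertools.product(range(3), repeat=2))

print(decomposed_value, joint_value)  # prints 6 8
```

Component-wise optimization settles at a payoff of 6, while joint evaluation finds the synergy and reaches 8; the gap is exactly the interdependency that decomposed search cannot see.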
The optimal level of aggregation depends on the types of actors that organizations are
able to attract. For example, organizations like DARPA, Google, and Netflix have succeeded in
attracting and enabling (temporary) organizations to engage around various problems. Their ability to
attract organizations is likely due, in part, to their brands and financial resources. Organizations
without such resources may be unable to attract organizations and, thus, may need to decompose
tasks in order to sufficiently lower the entry barriers for individual contributors to participate.
Establishing a community where individuals work on separate parts of a decomposed problem, but
take interdependencies into account, is challenging and rare (Snow, Fjeldstad, Lettl, & Miles, 2011).
BROADCAST
Once a task is defined, the next stage of crowdsourcing is to broadcast it to a crowd: that is, to make
the task known to individuals who might self-select to solve the task. As organizations seek to make
tasks known to a crowd, they face multiple decisions: (1) whether to broadcast the crowdsourcing
initiative via an intermediary or use a crowd potentially at their disposal, (2) how big a crowd to seek,
and (3) whether to conduct an open call or to invite crowd contributors selectively.
Channel: soliciting knowledge via intermediaries or through own initiative
Definition and explanation. Organizations seeking to engage in crowdsourcing often decide
whether to broadcast their tasks themselves or whether to use intermediaries (Lopez-Vega, Tell, &
Vanhaverbeke, 2016). Many historical examples of crowdsourcing projects were designed and run by
the organizations in which the tasks originated. For example, consider the British Government’s
Longitude Act, the DARPA tournament on autonomous driving, Dell’s IdeaStorm, or Budweiser’s
search for new beverages. Alternatively, organizations can solicit help from one of the many
intermediaries established in recent years to support crowdsourcing by helping organizations
connect with potential crowds.
Trade-offs. When organizations run their own crowdsourcing, they tend to rely on crowds
of individuals who already know and have a relationship with the organization (Jeppesen &
Frederiksen, 2006). However, it is difficult for an organization to attract individuals with whom it
does not yet have a relationship, which, in turn, prevents it from achieving sufficient “distance” in
its search (Afuah & Tucci, 2012; Jeppesen & Frederiksen, 2006). This is problematic, since an
organization’s knowledge and the knowledge of the individuals with whom the organization already
has relationships are likely to overlap. These individuals are likely to operate in the same domain as
the organization; thus, though their contributions are often more feasible and immediately
applicable, they also tend to be less novel (Franke, Poetz, & Schreier, 2014). As a result, the inputs
an organization gathers from individuals with whom it already has relationships are unlikely to lead
to path-breaking innovations (Burgelman, 2002).
Another option for organizations is to use intermediaries to broadcast their tasks. Since
intermediaries support various organizations in their crowdsourcing, they invest in building pools of
individuals who are interested in and skilled at solving tasks. Since these pools of individuals tend to
expand as intermediaries collaborate with various organizations—and, as a result, can offer more
and different tasks—they can often represent a great diversity of expertise. Such diversity is crucial
to find novel solutions (Boudreau et al., 2011), but would be difficult for an organization to access
without the help of an intermediary. Intermediaries also offer advice about how to organize
crowdsourcing. Since the individuals contacted via an intermediary have no direct link to the
organization, they are less likely to be motivated to contribute by a feeling of affiliation (compared to
when organizations recruit crowds themselves).
We suggest that relying on intermediaries is more likely to be advantageous when
organizations need more radical input or lack established or readily accessible crowds of
users (Verona, Prandelli, & Sawhney, 2006). If, however, an
organization seeks to strengthen its relationship with an existing group of customers or to seek the
input of people who are familiar with its brand and perhaps derive additional marketing benefits
from the exercise, it is likely to manage its crowdsourcing directly (Poetz & Schreier, 2012). For
example, a company like Starbucks might use crowdsourcing to show its openness to customer
suggestions.
Crowd size: small or large
Definition and explanation. A key decision organizations face is whether to seek a large crowd or
a smaller one. Of course, since individuals self-select, organizations are not necessarily capable of
choosing how many people will participate; thus, it may be impossible to choose an “optimal”
number of contributors. That said, organizations can decide whether to seek a large crowd or a
smaller one and plan accordingly.
Trade-offs. Larger crowds allow organizations to tap into larger pools of knowledge. One
of the goals of crowdsourcing is to tap into distant knowledge. It is, however, unclear ex ante who
holds relevant knowledge (Afuah & Tucci, 2012). By increasing the size of the crowd, an
organization increases its chances of identifying a suitable input (Boudreau et al., 2017; Lakhani,
Jeppesen, Lohse, & Panetta, 2007; Terwiesch & Yi, 2008). A larger crowd increases the chance of
finding an extreme solution (Baer, Leenders, Oldham, & Vadera, 2010; Boudreau et al., 2011).
Franke, Lettl, Roiser, and Tuertscher (2014) argued that the quality of ideas is largely random—and,
as a result, that the success of crowdsourcing depends on the number of people attracted. They
conducted a field experiment in which more than 1000 individuals developed ideas for smartphone
apps and found that the effect of randomness trumped the effects of all other variables, thus
emphasizing the importance of a large crowd in allowing an organization to choose from a broad
array of alternatives. However, some scholars have pointed out that the relationship between the
number of searchers and the breadth of search is sublinear (Erat & Krishnan, 2012). A large crowd
increases competition, potentially decreasing the incentive for any given individual to exert effort
and reducing creativity on the level of the individual contributor (Baer et al., 2010; Boudreau et al.,
2011). A larger crowd might, therefore, decrease the average quality of the input.
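The order-statistics logic behind these claims can be sketched in a short simulation (an illustrative model, not from the chapter, assuming each contribution's quality is an independent standard-normal draw): the average contribution stays put as the crowd grows, but the expected best contribution keeps rising, and it rises sublinearly.

```python
import random

random.seed(42)

def expected_best(crowd_size, trials=2000):
    """Average quality of the best contribution when `crowd_size`
    contributors each submit one input of random quality."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0, 1) for _ in range(crowd_size))
    return total / trials

# The mean contribution quality stays at 0 regardless of crowd size,
# but the expected maximum grows (sublinearly) with the crowd.
for n in (2, 10, 100, 1000):
    print(n, round(expected_best(n), 2))
```

Under these assumptions the expected maximum climbs from roughly 0.6 with two contributors to above 3 with a thousand, while each additional contributor adds less than the last, a pattern consistent with the sublinear returns to search noted above.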
Some research underscores the advantages of a small crowd. Fullerton and McAfee (1999)
studied the optimal design of research tournaments and suggested that the optimal number of
contributors may be as few as two. By limiting the number of crowd contributors, an organization
increases the probability of any single participant winning and, thus, incentivizes each individual to
invest more effort in addressing the task (Boudreau, 2012; Taylor, 1995). Recent research on the
effect of competition, however, illustrates that competition mostly deters less skilled individuals
(Boudreau, Lakhani, & Menietti, 2014). A smaller crowd increases each individual’s chances of
winning the competition yet reduces other kinds of rewards contributors derive from contributing
(Lakhani & Von Hippel, 2003). A reduction in the number of people, while increasing each
individual’s chances of winning, reduces these social benefits and, thus, the contributors’ willingness
to engage (Zhang & Zhu, 2011).
A potential way to tackle this challenge is to
separate crowdsourcing into multiple rounds,
such that the first round involves a large pool of crowd contributors making relatively small
investments and the second round comprises only the most promising crowd contributors. The low
investment required for the first round encourages a large number of people to participate but
prevents people from being deterred by potential competition. One potential problem with such an
approach is that it is not always possible to recognize the precursors of extremely valuable solutions
in the first round (Boudreau et al., 2011; Levinthal & Posen, 2007). Since the size of the crowd
affects contributors’ expectations about their possible reward, the decision concerning crowd size
requires considering the crowdsourcing award structure. As organizations decide on the size of the
crowd they seek to attract (and, thus, the amount of input they wish to gather), they have to
simultaneously consider whether they have the capacity to select among the inputs they receive. If
the amount of input an organization receives exceeds its cognitive capacity, the organization is more
likely to overlook relevant knowledge (Piezunka & Dahlander, 2015).
Invitation: broadcasting via private or public call
Definition and explanation. Organizations have to decide whether to conduct a private call, in
which participation is invitation-only and exclusive to a selected group, or an open call, which
addresses anyone. An organization could issue invitations constraining participation to an exclusive
group of individuals (e.g., customers of the organization, scientists in a certain discipline).
Alternatively, an organization could issue an open call addressing anyone.
Trade-offs. People who participate in crowdsourcing are partly motivated by attention and
status (Johnson, 2002). An exclusive invitation may, therefore, increase people’s tendency to
participate. Private calls also offer the opportunity to be more selective with respect to the type of
people the organization invites to participate. This may allow an organization to attract individuals
who are interested in interacting with one another, thus creating a community that engages in fertile
exchanges, rather than a crowd of independent individuals. Beyond leading to higher quality inputs,
a private call reduces the volume of low-quality inputs, thus facilitating the later filtering of
suggestions. Issuing exclusive invitations also reduces the problem of publicizing private
information. The problem is, of course, that an organization does not know ex ante who has the
most relevant insight. By constraining crowdsourcing to a specific set of people, an organization
reduces the chances of serendipity.
By contrast, an open call allows an organization to broadcast its task to people it does not
know. Foregoing exclusivity in this way may increase the scope of the crowd’s diversity and lead to
extreme-value solutions (Boudreau et al., 2011; Terwiesch & Yi, 2008). An open call, thus, increases
the chances of “unexpected” solutions originating from types of solvers the initiator may never have
expected (Jeppesen & Lakhani, 2010). For example, the self-educated carpenter and clockmaker
John Harrison, who eventually solved the longitude problem, would hardly have been included in a
private call (Cattani, Ferriani, & Lanza, 2017). As demonstrated by the research on “marginal men”
in science (see Chubin (1976) for a review), it is often individuals at the periphery of a given area of
activity who have the greatest potential to contribute innovative ideas (Cattani & Ferriani, 2008;
Jeppesen & Lakhani, 2010). The challenge of broadcasting to everyone is, of course, that potential
contributors may not feel particularly addressed by the call and may, therefore, lack interest to
engage. Beyond the risk of failing to attract anyone, by engaging in an open call, an organization
gives up any control over the composition of the crowd. As a result, the crowd may ultimately
comprise individuals whose interests are not aligned with the agenda of the organization. Imagine a
political party soliciting suggestions via an open call, but gathering suggestions primarily from
members of a competing political party. An additional challenge of open calls is that they require
organizations to reveal their tasks publicly to an undefined crowd—a move that is often not in the
organization’s best interest, particularly if the task is sensitive or critical.
An interesting underexplored area is to examine whether and how organizations can
combine these approaches sequentially or in parallel, given that the open-call and invitation-only
formats reach different kinds of individuals and produce different outcomes. For example, an
organization could conduct an open call, but also target those individuals it believes to hold relevant
knowledge. Organizations may also offer different targeted individuals different conditions of
participation (e.g., to reflect different opportunity costs of participation). One consideration in this case would obviously be managing other potential participants’ perceptions of fairness, since a process that treats some contributors preferentially may have negative consequences for overall participation.
ATTRACT
An organization’s success in crowdsourcing depends on the crowd contributors’ motivation, which,
in turn, may depend on the incentives offered to activate these crowd contributors and attract
solutions. Crowd contributors carry costs related to the time and effort of participating, as well as
potential costs related to access to necessary tools and equipment. We identify three key decisions
that are crucial for attracting entries: (1) the decision of whether to provide pecuniary incentives, (2)
the use of flat or steep reward structures, and (3) the regulation of ownership.
Incentive type: pecuniary or non-pecuniary
Definition and explanation. Organizations can decide to offer monetary or non-pecuniary
incentives. Monetary incentives vary in both value and type. An illustrative example is the case of
Threadless, a crowd-based t-shirt organization (Lakhani & Kanji, 2008). Threadless conducts
contests in which contributors create t-shirt designs—a relatively time-consuming activity. The prize
for a winning design is $2,500, and the probability of winning is approximately 0.6 per cent, implying an expected payoff of $15 per submitted design and, given the time required, a very low hourly wage. Alternatively, rather than
offering winners a fixed prize, it is possible to allow the crowd to claim a percentage of the final
value of their solutions. For example, when it seeks crowdsourced designs for new toys, Lego Ideas
uses a royalty scheme in which the winning crowd contributor(s) obtain a one percent royalty of the
sales.
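The expected-payoff arithmetic above can be verified with a short back-of-the-envelope sketch. The prize and win probability are the figures reported in the text; the hours-per-design figure is a purely hypothetical assumption used only to illustrate the implied hourly wage:

```python
# Back-of-the-envelope expected payoff for a Threadless-style contest.
prize = 2500.0   # prize for a winning design (USD), as reported above
p_win = 0.006    # ~0.6 per cent probability of winning

expected_payoff = prize * p_win
print(f"Expected payoff per submitted design: ${expected_payoff:.2f}")

# Hypothetical assumption: a design takes 20 hours to create.
hours_per_design = 20
print(f"Implied hourly wage: ${expected_payoff / hours_per_design:.2f}")
```

Even before dividing by hours, the $15 expected value makes clear that pecuniary incentives alone cannot explain participation at this scale, which motivates the discussion of non-monetary incentives below.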
Another way to attract crowd contributors is to rely on non-monetary incentives. Studies of
people’s motivation to engage in innovative efforts have found that people are often motivated
by the challenge, fun, and learning inherent in completing crowdsourcing tasks (Lakhani & Wolf,
2005; Lakhani & Von Hippel, 2003). Füller, Bartl, Ernst, and Mühlbacher (2006) found that
individuals who are motivated intrinsically rather than extrinsically are particularly helpful, as they
provide a higher number of substantial contributions. Frey, Lüthje, and Haag (2011) suggested that
enjoyment and fun are among the strongest motivators for engaging in crowdsourcing, though
contributors may also seek social interactions as part of their engagement (Zhang & Zhu, 2011).
Finally, individuals may use crowdsourcing as a way to show off their skills and, in turn, build their
reputations, either as its own reward or as a long-term investment to enhance career prospects. For
example, in the Good Judgment Project, a crowd-based forecasting project initiated by Philip
Tetlock, the top 2 percent of forecasters are recognized by the organization. Building contributors’
reputations can also have long-term (pecuniary) benefits. In their study of open source development,
Lerner and Tirole (2002) argued that software developers—many of them unpaid—produce code to
enhance their career prospects. It is possible to tailor different non-pecuniary incentives by, for
example, providing more interesting tasks; however, the difficulty stems from the fact that the
potential contributors are undefined and unknown ex ante (Majchrzak & Malhotra, 2013).
Trade-off. The case for monetary incentives is evident. Crowd contributors invest time,
effort, and often external costs (e.g., for tools to complete a task), and pecuniary incentives motivate
individuals to make such investments (Gallus & Frey, 2016). The size and types of such incentives
allow organizations to differentiate themselves in the competition for crowd contributors’
participation. For example, when Netflix used crowdsourcing to find an algorithm to make better
movie recommendations, it offered one million USD in prize money to attract qualified individuals.
While pecuniary incentives frequently attract and motivate individuals, such incentives can have the
adverse effect of crowding out individuals’ intrinsic motivation in the long run (Frey & Oberholzer-
Gee, 1997), which may be a relevant concern when repeated participation is needed.
Tasks that require a substantial investment on the part of the participating individuals are
likely to require pecuniary incentives. For example, despite the fascinating nature of the task and Google’s strong brand, the Google Lunar X-Prize also offers substantial financial
incentives. Intrinsic motivation can be sustained in the presence of an extrinsic reward (e.g., a prize)
as long as the prize is positioned, not as the focus of the exercise, but as an additional reward (i.e.,
“icing on the cake”) experienced ex post (Amabile, 1993). For example, since Google’s lunar project
does not translate directly into a commercial application, contributors may be less likely to expect
money. If, however, Google were to seek ideas on how to improve its advertising display—which
would translate into direct monetary benefits for the company—contributors are more likely to
expect monetary rewards. People’s willingness to engage without monetary incentives may also
depend on whether they gain public exposure to an audience (Boudreau & Jeppesen, 2015).
Allocation: flat or steep award structures
Definition and explanation. Organizations that engage in crowdsourcing face a decision whether
to allocate their rewards (or recognition) through a flat structure, in which many contributors receive
money and recognition, or a steep structure, in which money and recognition go only to a fortunate
few.
Trade-offs. When a flat reward structure is used, a large proportion of the crowd is
compensated, producing less effort per participant for a given prize than if the prize were
concentrated on only a few. While flat reward structures seek to foster broad participation, such
structures incentivize less effort and fewer contributions from participants. Contributors to such
crowdsourcing projects are unlikely to exert substantial efforts or to make investments (e.g., buying
tools) that would allow them to provide more effective solutions. A danger is that contributors’ participation becomes so negligible that it is of little value for the organization.
When a steep reward structure is used, very few contributors, and potentially only one, are
rewarded. Such reward structures create incentives for extreme performance (Rosen, 1981). While
most crowd contributors do not win when the reward structure is steep, research on both
organizations and crowds shows that such reward structures can attract and motivate numerous
people to contribute (Boudreau et al., 2011). In other words, steep rewards serve as “beacons” that
attract large crowds. For example, the attention Netflix attracted for offering one million USD for
the best solution to its problem was likely far greater than it would have been if Netflix had offered
1,000 USD for the 1,000 best ideas. These insights into reward structures obviously also pertain to
non-monetary extrinsic rewards, such as reputation. The steeper the reward structure, the smaller the
pool of winners, and the more difficult it is to win the prize, the higher the reputational benefits for
those individuals who do succeed. However, one substantial downside of extreme awards is that
they may reduce the likelihood of people choosing to participate, because the chance of winning
is low or even near zero. People participating are also less likely to collaborate and to share
information because they are fiercely competing for the big prize.
Recent research on the relationship between flat and steep award structures illustrates how the resolution of this trade-off is contingent on the types of input organizations seek (Terwiesch & Xu, 2008). While flat
reward structures incentivize minimal input by numerous people, steep award structures incentivize
extreme performances. For example, when Google seeks to classify information it cannot classify
automatically (e.g., identifying street signs in pictures), it requires lots of low-level participation. For
such endeavors, organizations need to establish procedures that ensure that participants’
contributions satisfy some minimal threshold. By contrast, when Netflix sought the best algorithm
for movie recommendations, it required extreme performance. Steep reward structures are more
suitable when the problem is well-defined and there is a clear selection criterion to which entries can
be compared (Simon, 1972; Terwiesch & Xu, 2008). In such cases, the choice of a winner is clearer
and less subject to challenges from near-winners. These considerations illustrate how the optimal
resolution for a given crowdsourcing is likely to depend on the types of inputs sought and require
adjustments to other organizational decisions.
IP ownership: crowd contributors vs. organizations
Definition and explanation. Organizations have to decide who will own the intellectual property
(IP) created via crowdsourcing. This decision concerns not only the contribution that ultimately
wins, but also any contributions that are not selected (Scotchmer, 2004). Organizations either receive
all of the IP rights or allow crowd contributors to retain these rights (or some combination of the
two). Organizations’ freedom to define IP ownership is limited by their regulatory environment and
the regulations invoked by any used platforms. Even in the absence of regulationthat is, when an
organization can freely define who owns the IP—the optimal choice is not clear and comes with an
important trade-off.
Trade-off. An organization can choose to keep all of the rights to submitted ideas. The
advantage of this approach is obviously that the organization is free to implement ideas as it wishes.
However, a lack of ownership or IP protection affects crowd contributors’ incentives to participate
or disclose their input (Giorcelli & Moser, 2016). When would-be contributors sense that the benefits
offered by the contest do not match the (potentially high) value of their ideas, they may be reluctant
to share them. In fact, some of the entrepreneurship that can be observed among product and
service users (Agarwal & Shah, 2014; Shah & Tripsas, 2007) might stem, in part, from organizations’ failure to provide sufficient rewards for crowd contributors’ contributions. Ding and Wolfstetter (2011: 665) point out that this leads to adverse selection, since contributors “may withhold
innovations that are worth considerably more than the prize, so that only the lemons, that is, the
inferior innovations, are submitted.” History reveals several examples in which people have been
motivated to work on problems raised by crowdsourcing initiatives, but have then chosen to keep
their ideas private. Ding and Wolfstetter (2011: 665) write that
“the inventor John Wesley Hyatt was encouraged to develop a new substance after he saw
an advertisement by Phelan & Collander offering $10,000 to the person who invented a
usable substitute for ivory in billiard balls. Hyatt eventually succeeded by inventing celluloid,
which seemed to be a perfect substitute for ivory, but finally decided to patent his
innovation instead of submitting it to the tournament and collecting the prize.”
Such evidence suggests that crowd contributors may be motivated to solve organizations’ challenges,
but then redirect their work to competing organizations.
Alternatively, an organization could leave IP rights with contributing individuals. Such a clear
provision of rights creates incentives for contributors, and gives the crowd defense mechanisms
from misappropriation by the organization. People are more likely to share their ideas with an
organizationand to do so at an earlier stageif their IP rights are secure (Katila & Mang, 2003;
Luo, 2014). Granting IP rights to the crowd, however, renders the implementation of ideas
generated via crowdsourcing more difficult and costly. Moreover, giving the crowd IP rights might
result in conflicts (Bauer, Franke, & Tuertscher, 2016). For example, a contributor might claim that
an idea is novel even if the organization has worked on it before.
Resolving ambiguities about IP ownership in crowdsourcing is an important consideration
for organizations, since ambiguity can lead to lower participation rates, lower quality, and potential
conflicts with the crowd. By specifying ex ante how the ownership of outcomes will be assigned,
organizations can avoid downstream problems concerning how and to whom rewards will be
allocated.
SELECT
Once crowd contributors have submitted their inputs, the organization must evaluate and select
among them. This process is challenging, since the organization may face a vast number of entries
among which it must allocate a limited amount of attention (Koput, 1997; Laursen & Salter, 2006;
Piezunka & Dahlander, 2015). In striving to select the best entries, organizations face
three decisions: (1) whether to use a metric scale or a judgment call, (2) whether to involve the
crowd, and (3) whether to use a sequential process.
Evaluation criteria: metric scale or judgment call
Definition and explanation. There are two primary ways to evaluate crowdsourcing outcomes:
pre-established metrics and judgment calls. In cases using judgment calls, particular invention types
are not specified in advance (i.e., the winning entry is decided ex post) (Moser & Nicholas, 2013). In
these cases, judges are allowed to “know it when they see it” (Scotchmer, 2004, p. 40). By contrast,
when metrics are used, the evaluation criteria are formalized by standards developed ex ante, which
stipulate a goal against which entries are evaluated to determine a winner.
Trade-offs. Judgment calls increase an organization’s flexibility to choose unconventional
solutions (Jeppesen & Lakhani, 2010). However, relying purely on judgement calls can be
challenging: Each entry must be evaluated at great length, thus increasing the organization’s selection
burden and exposing the selection process to managerial biases. This burden can grow large. For
instance, during the Deepwater Horizon crisis, BP reached out to the crowd for ideas on how to
tackle the catastrophe. More than 100,000 people offered suggestions, and more than 100 experts
were assigned to sift through them (Goldenberg, 2011). The attentional burden can have detrimental
consequences (March & Simon, 1958; Simon, 1971). For example, Piezunka and Dahlander (2015)
showed that an overabundance of ideas in crowdsourcing increases the likelihood that an
organization will overlook ideas representing distant knowledge (i.e., the very type of knowledge
organizations hope to find via crowdsourcing in most cases).
Alternatively, organizations can rely on metrics that eliminate the effort required to choose
among contributions. When clearly communicated, metrics prevent ambiguity in the selection process and ensure a sense of fairness. The use of metrics can also allow crowd contributors to compare themselves to one another in real time, fostering learning and motivation. Chen, Harper,
Konstan, and Xin Li (2010) showed in a field experiment involving movie recommendations that
providing information about the median activity of a reference group vastly increases the
contributions of less active participants. However, determining the right metric can be difficult and
costly (Terwiesch & Xu, 2008). For a metrics-based evaluation to work, the conditions need to be
specified in advance. However, this is inherently difficult when an organization is exploring
unfamiliar terrain, and it can also be expensive, meaning that an organization involved in a metrics-
based evaluation may incur costs before it even knows whether it will attract any entries. Introducing
metrics-based criteria can unintentionally set specifications that constrain the solution space.
A deeper appreciation of interdependencies with other decisions is needed. For instance, the sourcing of problem-related knowledge does not allow for the application of a metric scale. For
example, Starbucks can hardly use metrics to determine which of the new beverages suggested by
customers has the most potential. The decision between using metric scale and using a judgment call
is related to the decision concerning the specificity of task definition, since setting metrics often
implicitly sets specifics—and, thus, potentially constrains the solution space. The decision between
metric scales and judgment calls depends on the decisions related to crowd size and the related
expected volume of contributions, since the marginal costs of using judgment calls are far higher
than those of using metrics.
Crowd filtering: involving the crowd in selection or not
Definition and explanation. If an organization engages in judgment calls (e.g., due to an inability to rely on automated, metrics-based selection), it can either select entries on its own or rely on the crowd
for evaluation (Poetz & Schreier, 2012). In these instances, it can ask contributors to evaluate the
ideas submitted by others (e.g., via votes or comments).
Trade-offs. Involving a crowd in finding the best solution is attractive because the process of evaluating crowdsourcing input may exceed an organization’s attention capacity; delegating evaluation to the crowd helps prevent information overload (O'Reilly III, 1980). Engaging the crowd in the evaluation also has the
advantage that external contributors may actually be better assessors of ideas than internal managers
(Berg, 2016). Given their external perspective, crowd contributors are optimally positioned to
evaluate ideas, and may become future customers (Mollick & Nanda, 2016). For example, when
Lego involves crowds in evaluating designs on its Lego Ideas platform, the crowd evaluators’ preferences are highly relevant, since the crowd members in this case are also often customers and
users of the products. Crowd contributors are also likely to perceive the evaluations of crowd
evaluators as fair, an outcome that is crucial for ensuring future engagement (Franke, Keinz, &
Klausberger, 2013). Research shows that involving the crowd in selection is positively related to
subsequent product demand (Fuchs, Prandelli, & Schreier, 2010). These advantages notwithstanding, involving the crowd in the evaluation process inherently involves the possibility that
the crowd will favor suggestions that are not in line with the organization’s best interest. For
example, a crowd composed of an organization’s customers might favor lower prices or suggest
solutions that are not feasible for the company to make. Furthermore, when crowd contributors
participate in the selection process, they may be overly critical of others’ suggestions in order to
increase the chances of having their own suggestions selected. In such cases, the contributors’
interactions may be dominated by harsh criticisms, rather than constructive dialogue. This increases
the danger of crucial suggestions going unrecognized or being voted out for social reasons, rather
than merit. A crucial consideration when deciding whether to involve the crowd in selection is the
possibility that a crowd of existing mainstream users will tend to vote along the lines of products
with which they are already familiar and have use experience (von Hippel, 1986).
The alternative use of internal evaluation committees allows organizations to maintain
control of the selection process. Organizations that evaluate solutions internally limit their chances
of losing authority and/or being asked to select contributions that are not in their best interest.
Thus, evaluating solutions internally allows organizations to remain in the “driver’s seat.” Involving
organizations in the evaluation also ensures that they will engage with the crowd: a critical driver of
participation among crowd contributors (Dahlander & Piezunka, 2014). The downside of
conducting completely internal evaluations is that the organizations must shoulder the full effort of
the evaluation process, and miss out on utilizing the selective capability of the crowd.
Involving the crowd in selecting the best solution(s) may be most appropriate when the
crowd’s opinion is highly correlated with future demand. For example, in the case of ideation
challenges, in which the crowd comprises users and objective quality cannot be established (due to
selection being a matter of taste), involving the crowd in choosing the best solution(s) is fruitful.
Crowds, thus, play a crucial role when no metric can be established. However, when the
organization has relevant in-house knowledge of how various components fit together, an in-house
selection of the best solution is more appropriate. Transparency with regard to selection may be
important for subsequent rounds of crowdsourcing, when the crowd may grow critical of an
organization’s evaluation approach. A hybrid solution is possible where the crowd engages in
indicative pre-evaluations, and internal experts subsequently evaluate the screened set of solutions.
For example, Lego Ideas relies on votes to pre-select designs generated by crowd contributors. Once
a suggested design crosses a certain threshold in terms of votes, Lego evaluates it internally and
considers it for selection.
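The hybrid process just described can be sketched as a simple two-stage filter. The vote threshold and the submissions below are illustrative assumptions rather than official Lego figures, and the internal scoring function is a placeholder for the committee’s judgment call:

```python
# Hypothetical sketch of hybrid selection: the crowd pre-filters by votes,
# then internal experts evaluate the screened set.
VOTE_THRESHOLD = 10_000  # illustrative threshold, not an official figure

submissions = [
    {"id": "castle", "votes": 12_400},
    {"id": "rover", "votes": 9_800},
    {"id": "ship", "votes": 15_100},
]

# Stage 1: crowd filtering - only designs above the threshold advance.
shortlist = [s for s in submissions if s["votes"] >= VOTE_THRESHOLD]

# Stage 2: internal review - experts score the shortlist on criteria the
# crowd cannot assess (feasibility, cost, fit with the product line).
def internal_score(submission):
    # placeholder for the committee's judgment call
    return 1.0

selected = max(shortlist, key=internal_score)
print([s["id"] for s in shortlist])  # ['castle', 'ship']
```

The design choice here mirrors the trade-off in the text: the cheap, scalable crowd vote handles volume, while the expensive judgment call is applied only to the small screened set.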
Sequential: stage gate or one-off model
Definition and explanation. Organizations face a decision whether to use one-off models or
sequential (i.e., stage gate) models. In a one-off model, the crowd provides input from which an
eventual selection is made. In a sequential model, by contrast, the contribution/selection process
involves multiple rounds (Salter, Wal, Criscuolo, & Alexy, 2015). For example, first, a crowd may
provide input; then, a first selection may be conducted (either via the crowd or internally by the
organization); next, contributors may be asked to provide a new set of inputs, from which a new
round of winners are selected; and so on.
Trade-offs. A sequential approach has the advantage of motivating contributors and
allowing organizations to provide conditions appropriate for different stages. Organizations can
further motivate contributors who have been selected in a certain stage by providing them with
feedback instrumental for their attempts to reach further stages. For example, once a certain stage has been reached, organizations can provide tools or financial means, as when Google provided financial
support to contributors who passed a certain stage of its lunar project. A stage gate approach then
reduces wasted effort, since contributors with little chance of winning are prevented from making
unnecessary investments. Finally, a sequential approach creates opportunities for feedback and
dialogue between the organization and the crowd. For instance, contributors can learn vicariously by
observing successful contributors and reusing some of their ideas before working independently in
subsequent stages. The challenge with a stage gate approach, however, is that it can result in myopic
selection (Levinthal & Posen, 2007), meaning that approaches that are promising but whose full
potential requires time and development may be filtered out too early.
Alternatively, in a one-off model, contributors have to develop fully-fledged solutions before
being evaluated. This model saves the organization time upfront, since the complexity of developing
different sequential stages implies an inherent cost, whereas the one-off model only requires the
organization to evaluate solutions at the very end. In a one-off model, however, organizations lack a
good avenue to provide feedback to individual crowd contributors. The one-off model thus
increases crowd contributors’ chances of finding unorthodox solutions, but it also increases their
chances of going “down the rabbit hole”that is, wasting effort on ideas with little chance of
growing into promising contributions.
In the sourcing of problem-related knowledge, it is possible that no stages are necessary,
since the knowledge does not require further development on the part of the crowd contributors.
Such processes can be organized as one-off models or even as ongoing efforts without any stages
(Piezunka & Dahlander, 2015). Of course, stages may be helpful for assessing the criticality of these
problems.
--- INSERT TABLE 1 ABOUT HERE ---
CONCLUSION
It has long been appreciated that the locus of innovation spans organizational boundaries and is
embedded in inter-firm networks (Powell et al., 1996; Sydow et al., 2016). However, managing the
relationship with the crowd is qualitatively different than managing other kinds of inter-
organizational collaborations. We took a decision-centric approach to organizing crowdsourcing,
building upon the tradition of the behavioral theory of the firm, which argues that decisions should
be the key unit of analysis when studying organizational forms (Gavetti, Greve, Levinthal, & Ocasio,
2012; Gavetti, Levinthal, & Ocasio, 2007). This research tradition has called for “an explicit
emphasis on the actual process of decision making as its basic research commitment” (Cyert & March, 1963: 19). When Cyert and March (1963: 205) discussed how “we cannot assume that a
rational manager can treat the organization as a simple instrument in his dealings with the external
world,” they sparked a plethora of research on bounded rational managers and the bases they use to
make decisions. In the 1960s, when this stream of research emerged, the types of decisions managers
made to maneuver within their external environments were qualitatively different. An implication of
our framework is that it facilitates a better understanding of the multi-faceted nature of
crowdsourcing that, in part, has replaced these conventional ways of collaborating with external
environments.
Our reasoning is centered on the decisions organizations make to organize crowds. We focus
on contributors that could be individuals or even organizations. It is empirically less frequent for whole organizations to engage in crowdsourcing but, as we note, it does happen, especially in cases where complexity is high and the task is too difficult for a single individual to tackle. In addition, some of
the employees of organizations may engage as part of their work, or even unknowingly for the
organization that employs them (see Dahlander and Wallin (2006) for how this plays out in open
source software). This raises a larger question of how a relationship with an individual can be a precursor to an inter-organizational relationship (Powell et al., 1996; Sydow et al., 2016). For instance, Sydow et al. (2016) note how Bayer’s Grants4Targets can be used by a company to find scientists around the world, which can result in a linkage with a university. Long-term collaborations between organizations are thus a possible outcome of crowdsourcing.
As Figure 1 shows, the model of crowdsourcing we illustrate entails both sequential and
reciprocal interdependence between different decisions. While a sequential interdependence could
be managed through planning and scheduling, a more complicated reciprocal interdependence has to
be managed through constant information sharing and mutual adjustments. This makes the situation
more complicated than it appears at first glance, as decisions that are interdependent cannot be
made in isolation (Levinthal, 1997; Puranam, Raveendran, & Knudsen, 2012; Rivkin & Siggelkow,
2003). To complicate matters even further, people in the crowd may not respond in the way the organization intended, and information sharing is difficult to achieve in practice.
REFERENCES:
Afuah, A., & Tucci, C. (2012). Crowdsourcing as solution to distant search. Academy of Management
Review, 37(3), 355-375.
Agarwal, R., & Shah, S. K. (2014). Knowledge sources of entrepreneurship: Firm formation by
academic, user and employee innovators. Research Policy, 43(7), 1109-1133.
doi:10.1016/j.respol.2014.04.012
Amabile, T. M. (1993). Motivational synergy: Toward new conceptualizations of intrinsic and
extrinsic motivation in the workplace. Human Resource Management Review, 3(3), 185-201.
Baer, M., Dirks, K. T., & Nickerson, J. A. (2013). Microfoundations of strategic problem
formulation. Strategic Management Journal, 34(2), 197-214. doi:10.1002/smj.2004
Baer, M., Leenders, R. T. A., Oldham, G. R., & Vadera, A. K. (2010). Win or lose the battle for
creativity: The power and perils of intergroup competition. Academy of Management Journal,
53(4), 827-845.
Baldwin, C., & von Hippel, E. (2011). Modeling a paradigm shift: From producer innovation to user
and open collaborative innovation. Organization Science, 22(6), 1399-1417.
doi:10.1287/orsc.1100.0618
Bauer, J., Franke, N., & Tuertscher, P. (2016). Intellectual property norms in online communities:
How user-organized intellectual property regulation supports innovation. Information Systems
Research, 27(4), 724-750. doi:10.1287/isre.2016.0649
Baumann, O., & Siggelkow, N. (2013). Dealing with complexity: Integrated vs. chunky search
processes. Organization Science, 24(1), 116-132. doi:10.1287/orsc.1110.0729
Bayus, B. L. (2013). Crowdsourcing new product ideas over time: An analysis of the Dell IdeaStorm
community. Management Science, 59(1), 226-244. doi:10.1287/mnsc.1120.1599
Berends, H., Reymen, I., Stultiëns, R. G. L., & Peutz, M. (2011). External designers in product
design processes of small manufacturing firms. Design Studies, 32(1), 86-108.
doi:10.1016/j.destud.2010.06.001
Berg, J. M. (2016). Balancing on the creative highwire: Forecasting the success of novel ideas in
organizations. Administrative Science Quarterly, 61(3), 433-468. doi:10.1177/0001839216642211
Boudreau, K. (2012). Let a thousand flowers bloom? Growing an applications software platform and
the rate and direction of innovation. Organization Science, 23(5), 1409-1427.
Boudreau, K. J., Brady, T., Ganguli, I., Gaule, P., Guinan, E., Hollenberg, T., & Lakhani, K. R.
(2017). A field experiment on search costs and the formation of scientific collaborations.
Review of Economics and Statistics, 99(4), 565-576.
Boudreau, K. J., & Jeppesen, L. B. (2015). Unpaid crowd complementors: The platform network
effect mirage. Strategic Management Journal, 36(12), 1761-1777.
Boudreau, K. J., Lacetera, N., & Lakhani, K. R. (2011). Incentives and problem uncertainty in
innovation contests: An empirical analysis. Management Science, 57(5), 843-863.
Boudreau, K. J., & Lakhani, K. R. (2009). How to manage outside innovation. MIT Sloan Management
Review, 50(4), 69-76.
Boudreau, K. J., & Lakhani, K. R. (2012). High incentives, sorting on skills - or just a “taste” for
competition? Field experimental evidence from an algorithm design contest. Harvard Business
School Technology & Operations Mgt. Unit Working Paper No. 11-107.
Boudreau, K. J., Lakhani, K. R., & Menietti, M. (2014). Performance responses to competition
across skill-levels in rank order tournaments: Field evidence and implications for tournament
design. RAND Journal of Economics, 47(1), 140-165.
Brunt, L., Lerner, J., & Nicholas, T. (2012). Inducement prizes and innovation. The Journal of
Industrial Economics, 60(4), 657-696. doi:10.1111/joie.12002
Burgelman, R. A. (2002). Strategy as vector and the inertia of coevolutionary lock-in. Administrative
Science Quarterly, 47, 325-357.
Catalini, C., & Tucker, C. (2016). Seeding the s-curve? The role of early adopters in diffusion. NBER
Working Paper No. 22596.
Cattani, G., & Ferriani, S. (2008). A core/periphery perspective on individual creative performance:
Social networks and cinematic achievements in the Hollywood film industry. Organization
Science, 19(6), 824-844. doi:10.1287/orsc.1070.0350
Cattani, G., Ferriani, S., & Lanza, A. (2017). Deconstructing the outsider puzzle: The legitimation
journey of novelty. Organization Science, 28(6), 965-992. doi:10.1287/orsc.2017.1161
Chen, Y., Harper, F. M., Konstan, J., & Xin Li, S. (2010). Social comparisons and contributions to
online communities: A field experiment on MovieLens. The American Economic Review, 100(4),
1358-1398. doi:10.1257/aer.100.4.1358
Chubin, D. E. (1976). State of the field: The conceptualization of scientific specialties. Sociological
Quarterly, 17(4), 448-476. doi:10.1111/j.1533-8525.1976.tb01715.x
Cyert, R. M., & March, J. G. (1963). A behavioral theory of the firm. Malden, MA: Blackwell.
Dahlander, L., & Piezunka, H. (2014). Open to suggestions: How organizations elicit suggestions
through proactive and reactive attention. Research Policy, 43(5), 812-827.
Dahlander, L., & Wallin, M. W. (2006). A man on the inside: Unlocking communities as
complementary assets. Research Policy, 35(8), 1243-1259.
Ding, W., & Wolfstetter, E. G. (2011). Prizes and lemons: Procurement of innovation under
imperfect commitment. The RAND Journal of Economics, 42(4), 664-680. doi:10.1111/j.1756-
2171.2011.00149.x
Erat, S., & Krishnan, V. (2012). Managing delegated search over design spaces. Management Science,
58(3), 606-623. doi:10.1287/mnsc.1110.1418
Ethiraj, S. K., & Levinthal, D. A. (2004). Modularity and innovation in complex systems. Management
Science, 50(2), 159-173.
Faraj, S., Krogh, G. v., Monteiro, E., & Lakhani, K. R. (2016). Special section introduction—Online
community as space for knowledge flows. Information Systems Research, 27(4), 668-684.
doi:10.1287/isre.2016.0682
Felin, T., Lakhani, K. R., & Tushman, M. L. (2017). Firms, crowds, and innovation. Strategic
Organization, 15(2), 119-140. doi:10.1177/1476127017706610
Fernandes, R., & Simon, H. (1999). A study of how individuals solve complex and ill-structured
problems. Policy Sciences, 32(3), 225-245. doi:10.1023/A:1004668303848
Franke, N., Keinz, P., & Klausberger, K. (2013). "Does this sound like a fair deal?": Antecedents and
consequences of fairness expectations in the individual's decision to participate in firm
innovation. Organization Science, 24(5), 1495-1516. doi:10.1287/orsc.1120.0794
Franke, N., Lettl, C., Roiser, S., & Tuertscher, P. (2014, January 1, 2014). “Does god play dice?”
Randomness vs. deterministic explanations of crowdsourcing success. Paper presented at the the
Academy of Management Conference.
Franke, N., Poetz, M. K., & Schreier, M. (2014). Integrating problem solvers from analogous
markets in new product ideation. Management Science, 60(4), 1063-1081.
doi:10.1287/mnsc.2013.1805
Frey, B. S., & Oberholzer-Gee, F. (1997). The cost of price incentives: An empirical analysis of
motivation crowding-out. American Economic Review, 87(4), 746-755.
Frey, K., Lüthje, C., & Haag, S. (2011). Whom should firms attract to open innovation platforms?
The role of knowledge diversity and motivation. Long Range Planning, 44(5-6), 397-420.
doi:10.1016/j.lrp.2011.09.006
Fuchs, C., Prandelli, E., & Schreier, M. (2010). The psychological effects of empowerment strategies
on consumers' product demand. Journal of Marketing, 74(1), 65-79. doi:10.1509/jmkg.74.1.65
Füller, J., Bartl, M., Ernst, H., & Mühlbacher, H. (2006). Community based innovation: How to
integrate members of virtual communities into new product development. Electronic Commerce
Research, 6(1), 57-73. doi:10.1007/s10660-006-5988-7
Fullerton, R. L., & McAfee, R. P. (1999). Auctioning entry into tournaments. Journal of Political
Economy, 107(3), 573-605. doi:10.1086/250072
Gallus, J., & Frey, B. S. (2016). Awards: A strategic management perspective. Strategic Management
Journal, 37(8), 1699-1714. doi:10.1002/smj.2415
Garud, R., Jain, S., & Kumaraswamy, A. (2002). Institutional entrepreneurship in the sponsorship of
common technological standards: The case of Sun Microsystems and Java. Academy of
Management Journal, 45, 196-214.
Gavetti, G., Greve, H. R., Levinthal, D. A., & Ocasio, W. (2012). The behavioral theory of the firm:
Assessment and prospects. Academy of Management Annals, 6(1), 1-40.
Gavetti, G., Levinthal, D., & Ocasio, W. (2007). Neo-Carnegie: The Carnegie school’s past, present,
and reconstructing for the future. Organization Science, 18(3), 523-536.
Ghezzi, A., Gabelloni, D., Martini, A., & Natalicchio, A. (2018). Crowdsourcing: A review and
suggestions for future research. International Journal of Management Reviews, 20(2), 343-363.
doi:10.1111/ijmr.12135
Giorcelli, M., & Moser, P. (2016). Copyrights and creativity: Evidence from Italian operas.
Girotra, K., Terwiesch, C., & Ulrich, K. T. (2010). Idea generation and the quality of the best idea.
Management Science, 56(4), 591-605. doi:10.1287/mnsc.1090.1144
Goldenberg, S. (2011). BP's oil spill crowdsourcing exercise: 'A lot of effort for little result'. The
Guardian. Retrieved from http://www.theguardian.com/environment/2011/jul/12/bp-
deepwater-horizon-oil-spill-crowdsourcing
Gruber, M., MacMillan, I., & Thompson, J. (2008). Look before you leap: Market opportunity
identification in emerging technology firms. Management Science, 54(9), 1652-1665.
Harrison, S. H., & Rouse, E. D. (2014). Let's dance! Elastic coordination in creative group work: A
qualitative study of modern dancers. Academy of Management Journal, 57(5), 1256-1283.
doi:10.5465/amj.2012.0343
Hayek, F. A. (1945). The use of knowledge in society. American Economic Review, 35(4), 519-530.
Ideaconnection. (2009, Dec 04). Method to get fluoride powder into toothpaste tubes. Retrieved from
https://www.ideaconnection.com/open-innovation-success/Method-to-Get-Fluoride-Powder-into-Toothpaste-Tubes-00057.html
Jeppesen, L. B., & Frederiksen, L. (2006). Why do users contribute to firm-hosted user
communities? The case of computer-controlled music instruments. Organization Science, 17(1),
45-66.
Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and problem-solving effectiveness in broadcast
search. Organization Science, 21(5), 1016-1033.
Johnson, J. P. (2002). Open source software: Private provision of a public good. Journal of Economics
& Management Strategy, 11(4), 637-662. doi:10.1111/j.1430-9134.2002.00637.x
Katila, R., & Mang, P. Y. (2003). Exploiting technological opportunities: The timing of
collaborations. Research Policy, 32, 317-332.
Koput, K. W. (1997). A chaotic model of innovative search: Some answers, many questions.
Organization Science, 8(5), 528-542. doi:10.1287/orsc.8.5.528
Kotha, R., George, G., & Srikanth, K. (2013). Bridging the mutual knowledge gap: Coordination and
the commercialization of university science. Academy of Management Journal, 56(2), 498-524.
doi:10.5465/amj.2010.0948
Kretschmer, T., & Puranam, P. (2008). Integration through incentives within differentiated
organizations. Organization Science, 19(6), 860-875.
Lakhani, K., & Wolf, R. (2005). Perspectives on free and open source software. In J. Feller, B.
Fitzgerald, S. Hissam, & K. Lakhani (Eds.), Perspectives on Free and Open Source Software (pp. 3-
23): MIT Press.
Lakhani, K. R., Jeppesen, L. B., Lohse, P. A., & Panetta, J. A. (2007). The value of openness in scientific
problem solving. Harvard Business School Working Paper 07-050. Retrieved from
http://www.hbs.edu/faculty/Publication%20Files/07-050.pdf
Lakhani, K. R., & Kanji, Z. (2008). Threadless: The business of community. Harvard Business School
Multimedia/Video Case, 608-707.
Lakhani, K. R., & Von Hippel, E. (2003). How open source software works: “Free” user-to-user
assistance. Research Policy, 32(6), 923-943.
Laursen, K., & Salter, A. (2006). Open for innovation: The role of openness in explaining
innovation performance among U.K. manufacturing firms. Strategic Management Journal, 27(2),
131-150. doi:10.1002/smj.507
Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial
Economics, 50(2), 197-234.
Levinthal, D., & Posen, H. E. (2007). Myopia of selection: Does organizational adaptation limit the
efficacy of population selection? Administrative Science Quarterly, 52(4), 586-620.
Levinthal, D. A. (1997). Adaptation on rugged landscapes. Management Science, 43(7), 934-950.
Levinthal, D. A., & March, J. G. (1993). The myopia of learning. Strategic Management Journal, 14(S2),
95-112.
Lifshitz-Assaf, H. (2018). Dismantling knowledge boundaries at NASA: The critical role of
professional identity in open innovation. Administrative Science Quarterly, Forthcoming.
doi:10.1177/0001839217747876
Lopez-Vega, H., Tell, F., & Vanhaverbeke, W. (2016). Where and how to search? Search paths in
open innovation. Research Policy, 45(1), 125-136.
doi:10.1016/j.respol.2015.08.003
Luo, H. (2014). When to sell your idea: Theory and evidence from the movie industry. Management
Science, 60(12), 3067-3086.
Majchrzak, A., & Malhotra, A. (2013). Towards an information systems perspective and research
agenda on crowdsourcing for innovation. The Journal of Strategic Information Systems, 22(4), 257-
268.
March, J. G., & Simon, H. A. (1958). Organizations. New York, NY: John Wiley & Sons.
Mollick, E., & Nanda, R. (2016). Wisdom or madness? Comparing crowds with expert evaluation in
funding the arts. Management Science, 62(6), 1533-1553. doi:10.1287/mnsc.2015.2207
Moreland, R. L., & Argote, L. (2003). Transactive memory in dynamic organizations. In R. S.
Peterson & E. A. Mannix (Eds.), Leading and managing people in the dynamic organization (pp. 135-
162). Mahwah, NJ: Lawrence Erlbaum Associates.
Moser, P., & Nicholas, T. (2013). Prizes, publicity and patents: Non-monetary awards as a
mechanism to encourage innovation. The Journal of Industrial Economics, 61(3), 763-788.
doi:10.1111/joie.12030
O'Reilly III, C. A. (1980). Individuals and information overload in organizations: Is more necessarily
better? Academy of Management Journal, 684-696.
Piezunka, H., & Dahlander, L. (2015). Distant search, narrow attention: How crowding alters
organizations’ filtering of suggestions in crowdsourcing. Academy of Management Journal, 58(3),
856-880. doi:10.5465/amj.2012.0458
Poetz, M. K., & Schreier, M. (2012). The value of crowdsourcing: Can users really compete with
professionals in generating new product ideas? Journal of Product Innovation Management, 29(2),
245-256. doi:10.1111/j.1540-5885.2011.00893.x
Powell, W. W., Koput, K. W., & Smith-Doerr, L. (1996). Interorganizational collaboration and the
locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly,
41(1), 116-145.
Puranam, P., Alexy, O., & Reitzig, M. (2014). What’s ‘new’ about new forms of organizing? Academy
of Management Review, 39(2), 162-180.
Puranam, P., Raveendran, M., & Knudsen, T. (2012). Organization design: The epistemic
interdependence perspective. Academy of Management Review, 37(3), 419-440.
Rivkin, J. W., & Siggelkow, N. (2003). Balancing search and stability: Interdependencies among
elements of organizational design. Management Science, 49(3), 290-311.
Rosen, S. (1981). The economics of superstars. American Economic Review, 71(5), 845-858.
doi:10.2307/1803469
Salter, A., ter Wal, A. L. J., Criscuolo, P., & Alexy, O. (2015). Open for ideation: Individual-level
openness and idea generation in R&D. Journal of Product Innovation Management, 32(4), 488-504.
doi:10.1111/jpim.12214
Santos, F. M., & Eisenhardt, K. M. (2005). Organizational boundaries and theories of organization.
Organization Science, 16, 491-508.
Schreyögg, G., & Sydow, J. (2011). Organizational path dependence: A process view. Organization
Studies, 32(3), 321-335. doi:10.1177/0170840610397481
Scotchmer, S. (2004). Innovation and incentives. Cambridge, MA, United States of America: MIT press.
Shah, S., & Tripsas, M. (2007). The accidental entrepreneur: The emergent and collective process of
user entrepreneurship. Strategic Entrepreneurship Journal, 1(1), 123-140.
Shane, S. (2000). Prior knowledge and the discovery of entrepreneurial opportunities. Organization
Science, 11(4), 448-469.
Sieg, J. H., Wallin, M. W., & Von Krogh, G. (2010). Managerial challenges in open innovation: A
study of innovation intermediation in the chemical industry. R&D Management, 40(3), 281-
291. doi:10.1111/j.1467-9310.2010.00596.x
Siggelkow, N., & Levinthal, D. A. (2003). Temporarily divide to conquer: Centralized, decentralized,
and reintegrated organizational approaches to exploration and adaptation. Organization Science,
14(6), 650-669.
Simon, H. A. (1971). Designing organizations for an information-rich world. In M. Greenberger
(Ed.), Computers, Communications, and the Public Interest (pp. 37-72). Baltimore, MD: The Johns
Hopkins Press.
Simon, H. A. (1972). Theories of bounded rationality. Decision and organization, 1, 161-176.
Snow, C. C., Fjeldstad, Ø. D., Lettl, C., & Miles, R. E. (2011). Organizing continuous product
development and commercialization: The collaborative community of firms model. Journal of
Product Innovation Management, 28(1), 3-16. doi:10.1111/j.1540-5885.2010.00777.x
Sobel, D. (1995). Longitude: The true story of a lone genius who solved the greatest scientific problem of his time.
New York: Walker & Company.
Srikanth, K., & Puranam, P. (2014). The firm as a coordination system: Evidence from software
services offshoring. Organization Science, 25(4), 1253-1271. doi:10.1287/orsc.2013.0886
Sydow, J., Schüßler, E., & Müller-Seitz, G. (2016). Managing inter-organizational relations: Debates and
cases: Palgrave Macmillan.
Sydow, J., & Windeler, A. (1998). Organizing and evaluating interfirm networks: A structurationist
perspective on network processes and effectiveness. Organization Science, 9(3), 265-284.
doi:10.1287/orsc.9.3.265
Taylor, C. R. (1995). Digging for golden carrots: An analysis of research tournaments. American
Economic Review, 85(4), 872-890. doi:10.2307/2118237
Terwiesch, C., & Yi, X. (2008). Innovation contests, open innovation, and multiagent problem
solving. Management Science, 54(9), 1529-1543.
Verona, G., Prandelli, E., & Sawhney, M. (2006). Innovation and virtual environments: Towards
virtual knowledge brokers. Organization Studies, 27(6), 765-788.
doi:10.1177/0170840606061073
von Hippel, E. (1986). Lead users: A source of novel product concepts. Management Science, 32(7),
791-805.
von Hippel, E. (1988). The sources of innovation. New York, NY: Oxford University Press.
von Hippel, E. (2005). Democratizing innovation. Cambridge, MA: The MIT Press.
von Hippel, E., & von Krogh, G. (2003). Open source software and the "private-collective"
innovation model: Issues for organization science. Organization Science, 14(2), 209-223.
von Hippel, E., & von Krogh, G. (2015). Crossroads — Identifying viable “need–solution pairs”:
Problem solving without problem formulation. Organization Science, 27(1), 207-221.
doi:10.1287/orsc.2015.1023
Winter, S. G., Cattani, G., & Dorsch, A. (2007). The value of moderate obsession: Insights from a
new model of organizational search. Organization Science, 18(3), 403-419.
doi:10.1287/orsc.1070.0273
Zhang, X., & Zhu, F. (2011). Group size and incentives to contribute: A natural experiment at
Chinese Wikipedia. The American Economic Review, 101(4), 1601-1615.
doi:10.1257/aer.101.4.1601
LIST OF TABLES
TABLE 1: Applying the framework for making decisions to (well-known) cases of crowdsourcing
Cases:

SAP: SAP crowdsources from its developer community for troubleshooting, blogging, and answering user questions. Originally, developers earned points toward T-shirts and memorabilia. In 2008, SAP offered charitable donations.

NASA on Topcoder: NASA asked coders on the Topcoder platform to submit algorithms for improving the positioning of solar panels on the ISS space station.

DEFINE
- Task type (problem- or solution-related): SAP: Solution-related. NASA: Solution-related.
- Specificity of task definition (narrow or broad): SAP: Broad. NASA: Narrow.
- Decomposition (aggregated or decomposed tasks): SAP: Decomposed. NASA: Decomposed.

BROADCAST
- Channel (soliciting knowledge via intermediaries or through own initiatives): SAP: Own. NASA: Intermediary.
- Invitation (soliciting via open call or via invitation): SAP: By invitation. NASA: Open call.
- Crowd size (small or large): SAP: Large. NASA: Large.

ATTRACT
- Incentive type (pecuniary or non-pecuniary): SAP: Non-pecuniary. NASA: Both.
- Allocation (flat or steep reward structure): SAP: Flat. NASA: Steep.
- Ownership ambiguity (ex ante or ex post definition of ownership): SAP: Ex post. NASA: Ex ante.

SELECT
- Evaluation criteria (metric or judgment call): SAP: Judgment call. NASA: Metric.
- Crowd filtering (crowd filtering or not): SAP: No. NASA: Yes.
- Sequential (one-off or stage gate model): SAP: One-off. NASA: One-off.
FIGURE 1: Illustrating the stages and the interdependencies
Note: Figure 1 shows the four main stages at the top. More importantly, it illustrates the main decisions
associated with each stage and how those decisions are interdependent. The key insight is that decisions
taken earlier have implications in later stages, and that the outcomes of crowdsourcing rest to a large
extent on taking these interdependencies into account.
APPENDIX 1: Overview of the stages and key decisions

DEFINE

Task type: Problem- or solution-related knowledge
Definition: Using the crowd to learn about problems and needs for which the organization might develop solutions, or using the crowd to find solutions to problems the organization faces.
Trade-off considerations:
- Gaining access to problem-related knowledge increases the organization’s ability to address relevant problems and, thus, the organization’s effectiveness. Gathering such knowledge may allow the organization to better satisfy current customers, discover new markets for existing technologies, or identify unsatisfied market needs.
- Gaining access to solution-related knowledge allows the organization to increase its efficiency by identifying solutions that are either more cost-efficient or of higher quality.
Interdependencies with other decisions:
- Evaluation criteria: Problem-related knowledge is less specified and increases the difficulty of setting clear evaluation criteria.
- Incentive type: Problem-related knowledge makes the use of pecuniary incentives difficult, given the absence of clear evaluation criteria.

Specificity of task definition: Narrow or broad
Definition: Highly specifying tasks to target narrowly defined solution spaces or minimally specifying tasks to solicit a broader array of potential solutions.
Trade-off considerations:
- Narrow definitions ensure that incoming solutions can be applied to the underlying problem.
- Broad definitions engage large crowds from diverse fields of expertise.
Interdependencies with other decisions:
- Crowd size: The narrower the problem definition, the more difficult it becomes to attract a sufficiently large crowd, since crowd contributors are more likely to lack the necessary knowledge.
- Channel: The organization is less likely to have access to the specific crowd if the task definition is too narrow.
- Allocation: The broader the definition, the harder it is to establish clear evaluation criteria. Organizations launching with broad definitions seek global maxima; such definitions might foster steep award structures.

Decomposition: Aggregated or decomposed tasks
Definition: Aggregating tasks or decomposing them into smaller sub-components.
Trade-off considerations:
- Aggregated problems allow for holistic solutions that consider the interdependencies among decisions. While this is feasible, searching for such solutions takes a long time, and individual crowd contributors are less likely to have all the knowledge necessary to address all parts of the problem.
- Decomposed problems allow for the engagement of specialized crowd contributors, and multiple crowd contributors working in parallel may achieve fast progress. However, the solutions might not address interdependencies among individual problems.
Interdependencies with other decisions:
- Evaluation criteria: Solutions to decomposed problems are easier to evaluate using pre-existing metrics.
BROADCAST

Channel: Soliciting knowledge via intermediaries or through own initiatives
Definition: Using an established intermediary to connect to a crowd or launching initiatives through the organization’s own channels.
Trade-off considerations:
- Engaging intermediaries increases the organization’s potential to reach large crowds. Professional crowd contributors who participate in many crowdsourcing initiatives might build relationships with intermediaries, thus decreasing search costs. Crowd contributors might be more inclined to engage if they perceive intermediaries to be more credible in protecting entrants. Engaging intermediaries might also accelerate the process, particularly when growing a crowd is not an option or would take too long.
- Forgoing an intermediary requires the organization to draw on its own community of stakeholders. Own crowds with prior experience with existing products or services may be more predictable and useful when incremental innovation is needed. The members of such crowds might also be more motivated to engage on behalf of an organization with which they already have a rapport.
Interdependencies with other decisions:
- Incentive type: Launching via an intermediary with an open approach to solver identities tends to offer successful crowd contributors wide opportunities for peer recognition, which may potentially compensate for a lack of pecuniary incentives.
- Evaluation criteria: Given well-known resistance and the Not-Invented-Here syndrome, convincing internal developers to accept problem knowledge (in the selection stage) may be easier than convincing them to accept (and select) solution-related knowledge to “their” problems.
- Task type and specificity: Engaging an intermediary often means gaining access to a crowd, but it can also mean gaining advice on solution definition, attraction, and selection. Following an own approach requires an organization to have sufficient internal knowledge and resources to run the process.

Invitation: Soliciting via open call or via invitation
Definition: Using an open call to anyone willing to engage or using an invitation-only format to target a selected group who could self-select into the tasks.
Trade-off considerations:
- Open calls are more likely to create crowd diversity, at the cost of making proposed solutions more difficult to evaluate. They also increase the difficulty of protecting ideas, since all information becomes publicly available. Furthermore, open calls introduce uncertainty about the upper bounds of the crowd size. Note that, even for open calls, the organization might still invite certain individuals.
- Invitation-only crowds might increase motivation due to the perceived exclusivity. Such calls also make it possible to target specific crowd contributors (e.g., researchers in a particular domain).
Interdependencies with other decisions:
- Evaluation criteria: Open calls increase evaluation costs due to the diversity and number of solutions.
- Incentive type and allocation: Invitation-only calls have more predictable outcomes, reducing evaluation costs.

Crowd size: Small or large
Definition: Targeting a small potential crowd or a large potential crowd.
Trade-off considerations:
- Large crowds involve high levels of competition, thus decreasing the incentive for any individual to exert effort. However, large crowds might also spark extreme-value solutions from unexpected crowd contributors, meaning that crowd contributors might underestimate their competition. Finally, large crowds increase the potential for diversity at the cost of greater evaluation costs.
Interdependencies with other decisions:
- Invitation: The mode of invitation influences the crowd size, with open calls producing larger crowds and invitation-only calls producing smaller crowds.
ATTRACT

Incentive type: Pecuniary or non-pecuniary
Definition: Using pecuniary rewards (e.g., prizes) or non-pecuniary rewards (e.g., career opportunities, intellectual challenges, feedback).
Trade-off considerations:
- Pecuniary incentives are relatively easy to implement but could potentially crowd out non-pecuniary sources of motivation.
- Non-pecuniary incentives may be too weak to motivate contributors to reach the desired outcome. However, projects appealing to non-pecuniary motivations (e.g., social problems) may not require financial incentives to attract participation.
Interdependencies with other decisions:
- Evaluation criteria: Using pecuniary incentives increases the importance of clear evaluation criteria.
- Crowd involvement: Using non-pecuniary incentives can produce more “unfinished” solutions that are difficult to select among, even in the presence of clear evaluation criteria. Thus, organizations that use non-pecuniary incentives often use the crowd to reduce evaluation costs.

Allocation: Flat or steep reward structure
Definition: Using a flat reward structure, in which awards and attention are fairly evenly distributed across the crowd, or a steep reward structure, in which a few individuals receive disproportionate amounts of rewards or attention.
Trade-off considerations:
- A flat reward approach may be most appropriate when several solutions are needed.
- A steep (or single) reward structure may be most beneficial when the objective is to find an extreme-value solution. Steep reward structures can reduce crowd contributors’ tendency to publicly share information.

Ownership ambiguity: Ex ante or ex post definition of ownership
Definition: Defining, prior to the crowdsourcing, what will happen to the ownership of successful (and unsuccessful) entries.
Trade-off considerations:
- Defining ownership ex ante is only possible when the problem can be specified or narrowly defined. Clearly defining ex ante ownership of solutions that are not picked prevents “solution stealing” and is central to attracting solutions. (Consider the Arrow paradox: can the seeking organization use ideas and solutions that are not rewarded?) Furthermore, defining ownership ex ante makes it more difficult for crowd contributors to reuse and recombine other crowd contributors’ solutions with their own.
Interdependencies with other decisions:
- Crowd size: Clearer ownership positively affects the ability to attract a large crowd.
- Incentive: The size of the prize compensates for the loss entailed in cases where ownership is transferred.
SELECT

Evaluation criteria: Metric or judgment call
Definition: Using pre-defined metrics to evaluate entries or relying on a judgment call made by a group of judges.
Trade-off considerations:
- Metrics reduce evaluation costs but may exclude novel entries that depart from what is known. Metrics also reduce the organization’s discretion to select unanticipated entries that are slightly worse than the best alternative.
Interdependencies with other decisions:
- Incentive type: The use of pecuniary incentives increases the need for clear criteria.
- Task type: Judgment calls increase discretion but elevate the risk of “off criteria” selection and crowd outrage.

Crowd involvement: Crowd filtering or not
Definition: Using crowd filtering, in which crowd contributors vote entries up or down, or not.
Trade-off considerations:
- Crowd filtering reduces the selection burden by aggregating the preferences of the crowd; however, the crowd’s preferences are not necessarily the same as those of the seeking organization’s customers. Crowd filtering can also lead to herd behavior, which may cause small initial differences to escalate, and to complacency, in which the seeking organization follows the recommendations of the crowd without carefully considering all alternatives or the fit of the chosen solution with the underlying issue. Finally, crowd filtering can lead to potential conflicts with the organization when interests are misaligned.
Interdependencies with other decisions:
- Task type: When solutions are directly applicable to crowd needs (e.g., when consumer solutions are being voted on by consumers), the crowd may have better (need-based) solution knowledge than the organization.
- Invitation: The outcome of a vote by contributors should be directly relevant to voters to motivate them to participate.

Sequential: One-off or stage gate (sequential) model
Definition: Adopting a single one-off selection approach or using a stage gate approach, in which selection decisions are made sequentially.
Trade-off considerations:
- Sequential selections increase the possibility of using different selection heuristics at the cost of increased complexity.
Interdependencies with other decisions:
- Aggregation and task type: The cost of definition may increase if problems need to be redefined for each selection stage.
- Incentive type: The chances of winning increase for “finalists,” hence strengthening incentives to participate.
... Crowdsourcing is a widely-used open innovation practice in which firms (called "seekers") publicly broadcast internal problems in the form of challenges to which individuals external to the firm (called "crowds" or "solvers") are invited to offer solutions (see, e.g., Chesbrough, 2003;Dahlander et al., 2019;Howe, 2006;Lakhani & Panetta, 2007;Majchrzak & Malhotra, 2020;Tucci et al., 2018). This approach has shown its promise as a fast and reliable form of distant search for implementable and novel solutions on an innovation landscape defined by the firm (Afuah & Tucci, 2012;Poetz & Schreier, 2012): By specifying what kind of problems the crowd may work on, the firm essentially constrains the crowd's search to a delineable focus area within which the crowd searches toward predefined goals (Simon, 1973), such as improving technological performance along known dimensions or finding a new market (or product) for an existing technology (or market) complementary to firms' existing activities (Antorini et al., 2012;. ...
... Extant research points out that without clear a priori criteria defining what makes for a good solution (i.e., clear performance indicators, thresholds, and boundary conditions about what kind of technology may or may not be included), firms may often end up unable to identify whether any of the ideas submitted solves the original problem, let alone implement those ideas (e.g., Afuah & Tucci, 2012;Alexy et al., 2012;Felin & Zenger, 2014Piezunka & Dahlander, 2015;Pollok et al., 2019;Tucci et al., 2018;Wallin et al., 2018). Accordingly, theorizing suggests that if firms want to draw on crowds, they are well advised to constrain the problem definition by sufficiently specifying and modularizing the problem, so that solvers with matching knowledge can identify their fit to the problem and feel motivated to contribute (e.g., Afuah & Tucci, 2012;Benkler, 2002;Dahlander et al., 2019;Lakhani & Panetta, 2007;Lee et al., 2022;MacCormack et al., 2006). ...
... By constraining the problem as such, the firm can reduce its transaction costs for coordinating the crowd and evaluating solutions. At the same time, however, the firm forfeits opportunities to benefit from other, potentially more valuable ideas that are incompatible with how it has framed the problem, and which hence are excluded by the definition of the landscape (Dahlander et al., 2019). Accordingly, most applications of crowdsourcing have happened in contexts such as improving existing technology or products (e.g., Bayus, 2013), identifying a new product for an existing market (e.g., Poetz & Schreier, 2012), or exploring opportunities to redeploy an existing product or technology in a new-to-the-firm market (e.g., Bjelland & Wood, 2008). ...
Article
Full-text available
Research Summary Theories of crowdsourced search suggest that firms should limit the search space from which solutions to the problem may be drawn by constraining the problem definition. In turn, problems that are not or cannot be constrained should be tackled through other means of innovation. We propose that unconstrained problems can be crowdsourced, but firms need to govern the crowds differently. Specifically, we hypothesize that firms should govern crowds for solving unconstrained problems by instructing them not just to solve the problem but also to help (re)define the problem by offering their problem frames and integrating others' frames. We find evidence for this interaction hypothesis in a field study of over a thousand participants in 20 different crowdsourcing events with interventions for the different governance approaches. Managerial Summary Unconstrained innovation problems, which require finding a new product and a new market at the same time, are thought to be difficult to solve via crowdsourcing. We propose and test a governance approach for problem‐finding, that is, (re)defining the firm's original problem statement by instructing crowds to make their problem frames explicit (by posting them) and to integrate others’ problem frames into their solution ideas. In doing so, we provide guidance for firms hoping to use crowdsourcing for both unconstrained and constrained problems. For constrained problems, as widely known, firms should govern for problem‐solving only; for unconstrained problems, they should govern for problem‐finding and problem‐solving. Both forms of governance are “light‐touch,” requiring only minimal intervention in the form of instructions for the crowd at the beginning of the crowdsourcing event.
... The fundamental principle revolves around an open call through which firms outsource the task of idea generation and problem-solving to a broader crowd of distributed individuals (Afuah and Tucci, 2012; Natalicchio et al., 2014; Dahlander et al., 2019). While much research attention has been placed on external crowdsourcing, recent studies highlight that the principles of crowdsourcing can also be applied within organizational boundaries to overcome information silos and tap into the creative potential of employees (Stieger et al., 2012; Malhotra et al., 2017; Pohlisch, 2020). ...
... Overall, much research has focused on the benefits of crowdsourcing, pointing also to the potential of combining internal and external crowdsourcing to support innovation. Recent studies, however, highlight that implementing and managing crowdsourcing platforms is a challenging task for firms (Blohm et al., 2018; Dahlander et al., 2019). In order to leverage the full potential of these platforms, careful management of the crowdsourcing process is crucial. ...
... When investigating the implementation and management of crowdsourcing from the firm perspective, recent studies distinguish between different stages of this process. These stages relate to the initiation of crowdsourcing, the attraction of needed ideas and solutions, and the evaluation and assimilation of generated ideas (Sieg et al., 2010; Luttgens et al., 2014; Blohm et al., 2018; Dahlander et al., 2019). Within these stages, firms need to make key decisions on a number of relevant dimensions. ...
Article
It is increasingly common for firms to gather ideas and solutions through the use of crowdsourcing. However, limited focus is placed on understanding how firms can manage different crowdsourcing platforms to involve both employees and external crowds in innovation. The tendency of crowdsourcing research to focus on successful cases does not allow us to unpack the difficulties that implementing these platforms creates for less experienced firms. This paper presents an exploratory study of a large firm experimenting with both internal and external crowdsourcing. Based on data collected through interviews and secondary sources, we unveil the challenges experienced by the firm when implementing both platforms. We discuss how the implementation of both platforms impacted the work of R&D employees, as they were required to assume new roles and responsibilities related to crowdsourcing. Finally, we present how the firm attempted to address these issues. Implications for innovation research and practice are discussed.
... Consequently, intermediaries adapt their search depending on the required discourse. Dahlander et al. (2019) recommend delegated search when the potential solution is highly accessible to solvers because the required knowledge set can be articulated, while Felin and Zenger (2014) propose direct search when the solution space is unclear because the interaction of knowledge sets cannot be well expressed. ...
... Mediator variable: Level of concentrating on a search strategy. To measure the mediating effect of search, the authors asked for the level of activities that indicate the intermediary's concentration on the two different search strategies (Dahlander et al., 2019; Felin & Zenger, 2014). The applied scales range from hardly ever to very frequent (see appendix for exact items). ...
Article
Full-text available
Intermediaries are an inherent part of value creation in open innovation. They connect organisations seeking external solutions for an innovation-related problem (seekers) with potential solution providers (solvers). To bridge between the innovation problem and external knowledge sources, intermediaries deploy different search strategies. This study compares the cost of using two prevalent approaches: direct versus delegated search. Direct search corresponds to the conventional understanding of search by screening a pre-identified set of solution providers that the intermediary has identified as potentially relevant contributors. Delegated search comprises more indirect approaches such as problem broadcasting or crowdsourcing. Here, the innovation problem is distributed to a large external network of potential solvers, allowing even unobvious outsiders to contribute to its solution. An empirical study of 53 open innovation intermediaries indicates that delegated search outperforms direct search in terms of effectiveness. The lower overall effort for intermediation in delegated search mainly arises from decoupling the effort to coordinate the search process by shifting it towards the solution provider.
Article
This paper responds to the lack of research investigating procedural differences in open innovation collaboration. Based upon a sample of intermediaries, we analyze the costs of different setups for open collaboration. Our focus is on the search mechanism that initiates the collaboration. Differentiating direct search, i.e., actively scanning information sources for relevant knowledge, from indirect search, i.e., revealing an innovation problem to a potential pool of contributors, we find that variations in search behavior lead to different coordination costs. Our analysis allows us to derive implications for the theory and management of open innovation.
... Crowdsourcing ideas (Afuah and Tucci, 2012), as in the Dell Ideastorm community (Bayus, 2013), and crowdsourcing contests, where firms define a problem and allow the entire population (or a screened subset of it) to submit solutions (Boudreau, Lacetera, and Lakhani, 2011; Piezunka and Dahlander, 2015), are other types of managed ecosystems. In these cases, organizations collaborate with external communities outside organizational boundaries and manage interdependencies as they coordinate the crowdsourcing process (Dahlander, Jeppesen, and Piezunka, 2018). In user innovation (Franke and Shah, 2003; O'Mahony, 2003), where firms rely on lead users to help generate solutions to innovation problems, we also see managed ecosystems. ...
Article
With the growing complexity of social and environmental issues, there has been a blossoming of hackathons and open innovation challenges. This push to accelerate innovation embraces a perspective of time as clock time—conceived as objective, linear, measurable, and therefore, rather easy to compress. Such a view of time conflicts with the emergent nature of idea generation and the indeterminate process that leads to social impact, which both rely on event time. Drawing on a 40-month ethnographic study of OpenIDEO, an open social innovation platform, I examine how, in designing open innovation challenges, the OpenIDEO team interwove clock time and event time in order to foster idea generation and support social impact. Through inductive analysis, I identify three practices—mapping, stretching, and squeezing time—enacted by the OpenIDEO team to “make time” and thus, continuously engage participants and sponsors in the challenges as well as to allow participants to implement their ideas. My findings demonstrate how organizations can intentionally use time to nurture collaborative innovation and yield sustainable social impact. My study questions the traditional interpretation of clock time as the foundation of all temporalities as it shows how temporal work can be grounded within event time. Funding: This work was supported by the National Science Foundation [NSF VOSS Grant 1122381].
Article
Full-text available
Crowdsourcing—asking an undefined group of external contributors to work on tasks—allows organizations to tap into the expertise of people around the world. Crowdsourcing is known to increase innovation and loyalty to brands, but many organizations struggle to leverage its potential, as our research shows. Most often this is because organizations fail to properly plan for all the different stages of crowd engagement. In this paper, we use several examples to explain these challenges and offer advice for how organizations can overcome them.
Article
Full-text available
Crowds can be very effective, but that is not always the case. To render the usage of crowds effective, several factors need to be aligned: crowd composition, the right question at the right time, and the right analytic method applied to the responses. Specific skills are mandatory to tap into the creativity of a crowd, harness it effectively, and transform it into offers that markets value. The "DBAS" framework is recommended to successfully implement a crowd project. It consists of four stages, and in each stage some key questions need to be addressed. Each decision along the DBAS pathway matters, and how you navigate each stage can either reinforce or undermine decisions made at the other stages. The right degree of innovativeness, listening to contributors, and informing participants openly about the fate of rejected ideas are key success factors that require special attention. To continually improve the odds of success, crowdsourcing is best treated as a continual iterative churn.
Article
Full-text available
When organizations crowdsource ideas, they select only a small share of the ideas that contributors submit for implementation. If a contributor submits an idea to an organization for the first time (i.e., is a newcomer), and the organization does not select the idea, this may negatively affect the newcomer’s relationship with the organization and willingness to submit ideas to the organization in future. We suggest that organizations can increase newcomers’ willingness to submit further ideas by providing a thus far understudied form of feedback: rejections. Though counterintuitive, we suggest that rejections encourage newcomers to bond with an organization. Rejections signal contributors that an organization is interested in receiving their ideas and developing relationships with them. To test our theory, we examine the crowdsourcing of 70,159 organizations that received ideas from 1,336,154 contributors. Using text analysis, we examine differences in how rejections are written to disentangle the mechanisms through which rejections affect contributors’ willingness to continue to interact with an organization. We find that receiving a rejection positively impacts newcomers’ willingness to submit ideas in future. This effect is stronger if the rejection includes an explanation and is particularly pronounced if the explanation matches the original idea in terms of linguistic style.
Article
Full-text available
The proposition that outsiders often are crucial carriers of novelty into an established institutional field has received wide empirical support. But an equally compelling proposition points to the following puzzle: the very same conditions that enhance outsiders' ability to make novel contributions also hinder their ability to carry them out. We seek to address this puzzle by examining the contextual circumstances that affect the legitimation of novelty originating from a non-certified outsider that challenged the status quo in an established institutional field. Our research case material is John Harrison's introduction of a new mechanical method for measuring longitude at sea – the marine chronometer – which challenged the dominant astronomical approach. We find that whether an outsider's new offer gains or is denied legitimacy is influenced by (1) the outsider's agency to further a new offer, (2) the existence of multiple audiences with different dispositions towards this offer, and (3) the occurrence of an exogenous jolt that helps create a more receptive social space. We organize these insights into a multilevel conceptual framework that builds on previous work but attributes a more decisive role to the interplay between endogenous and exogenous variables in shaping a field's shifting receptiveness to novelty. The framework exposes the mutually constitutive relationships between the micro-, meso-, and macro-level processes that jointly affect an outsider's efforts to introduce novelty into an existing field.
Article
Full-text available
The purpose of this article is to suggest a (preliminary) taxonomy and research agenda for the topic of “firms, crowds, and innovation” and to provide an introduction to the associated special issue. We specifically discuss how various crowd-related phenomena and practices—for example, crowdsourcing, crowdfunding, user innovation, and peer production—relate to theories of the firm, with particular attention on “sociality” in firms and markets. We first briefly review extant theories of the firm and then discuss three theoretical aspects of sociality related to crowds in the context of strategy, organizations, and innovation: (1) the functions of sociality (sociality as extension of rationality, sociality as sensing and signaling, sociality as matching and identity), (2) the forms of sociality (independent/aggregate and interacting/emergent forms of sociality), and (3) the failures of sociality (misattribution and misapplication). We conclude with an outline of future research directions and introduce the special issue papers and essays.
Article
Full-text available
Online communities frequently create significant economic and relational value for community participants and beyond. It is widely accepted that the underlying source of such value is the collective flow of knowledge among community participants. We distinguish the conditions for flows of tacit and explicit knowledge in online communities and advance an unconventional theoretical conjecture: Online communities give rise to tacit knowledge flows between participants. The crucial condition for these flows is not the advent of novel, digital technology as often portrayed in the literature, but instead the technology's domestication by humanity and the sociality it affords. This conjecture holds profound implications for theory and research in the study of management and organization, as well as their relation to information technology.
Article
This paper exploits variation in the adoption of copyright laws within Italy – as a result of variation in the timing of Napoleon’s military victories – to examine the effects of copyrights on creativity. To measure variation in creative output, we use new data on 2,598 operas that premiered across eight states within Italy between 1770 and 1900. These data indicate that the adoption of copyrights led to a significant increase in the number of new operas premiered per state and year. We find that the number of high-quality operas also increased – measured both by their contemporary popularity and by the longevity of operas. By comparison, evidence for a significant effect of copyright extensions is limited. Our analysis of alternative mechanisms for this increase reveals a substantial shift in composer migration in response to copyrights. Consistent with agglomeration externalities, we also find that cities with a better pre-existing infrastructure of performance spaces benefitted more from copyright laws.
Article
The problem of designing, coordinating and managing complex systems is central to the management and organizations literature. Recent writings have emphasized the important role of modularity in enhancing the adaptability of such complex systems. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the virtues of parallelism that modularity offers and the destabilizing effects of overly refined modularization. In addition, high levels of integration can lead to modest levels of search and a premature fixation on inferior designs. The model captures some key aspects of technological evolution as a joint process of autonomous firm level innovation and the interaction of systems and modules in the marketplace. We discuss the implications of these arguments for product and organization design.
Article
In October 2014, all 4,494 undergraduates at the Massachusetts Institute of Technology were offered access to Bitcoin, a decentralized digital currency. As a unique feature of the experiment, students who would generally adopt first were placed in a situation where many of their peers received access to the technology before them, and they then had to decide whether to continue to invest in this digital currency or exit. Our results suggest that when natural early adopters are delayed relative to their peers, they are more likely to reject the technology. We present further evidence that this appears to be driven by identity, in that the effect occurs in situations where natural early adopters' delay relative to others is most visible, and in settings where the natural early adopters would have been somewhat unique in their tech-savvy status. We then show not only that natural early adopters are more likely to reject the technology if they are delayed, but that this rejection generates spillovers on adoption by their peers who are not natural early adopters. This suggests that small changes in the initial availability of a technology have a lasting effect on its potential: Seeding a technology while ignoring early adopters' needs for distinctiveness is counterproductive.
Article
Currently, two models of innovation are prevalent in organization science. The “private investment” model assumes returns to the innovator result from private goods and efficient regimes of intellectual property protection. The “collective action” model assumes that under conditions of market failure, innovators collaborate in order to produce a public good. The phenomenon of open source software development shows that users program to solve their own as well as shared technical problems, and freely reveal their innovations without appropriating private returns from selling the software. In this paper, we propose that open source software development is an exemplar of a compound “private-collective” model of innovation that contains elements of both the private investment and the collective action models and can offer society the “best of both worlds” under many conditions. We describe a new set of research questions this model raises for scholars in organization science. We offer some details regarding the types of data available for open source projects in order to ease access for researchers who are unfamiliar with these, and also offer some advice on conducting empirical studies on open source software development processes.