Choosing the Rules for Formal Standardization
Joseph Farrell
University of California, Berkeley
This version: January 1996
Abstract. Formal standardization – explicit agreement on compatibility
standards – has important advantages over de facto standardization, but is
marred by severe delays. I explore the tradeoffs between speed and the
quality of the outcome in a private-information model of the war of attrition
and alternative mechanisms, and show that the war of attrition can be
excessively slow. I discuss strategies to reduce delay, including changes in
intellectual property policy and in voting rules, early beginnings to
standardization efforts, and the use of options.
Even supposedly backward-compatible software isn’t always, and while the body of the paper below is
(newly) pdf’d from the 1996 file, this title page and the references have had to be re-processed (March
2002); I am doing this in part because I hope soon to rescue this paper from my own “severe delays.”
Compatibility standards often are developed through a process of explicit consensus.
When participants have little vested interest in particular outcomes, the process will be
straightforward: participants working together to find the best technical solution. If
anything, there might be a free-rider problem, as development of the standard could be a
public good. But participants often do have strong vested interests, and while this helps
overcome the free-rider problem, it can make it hard to reach consensus, as each participant
holds out for agreement on its preferred standard.
Farrell and Saloner (1988) modeled such disagreement in consensus standard-setting
using a complete-information war-of-attrition model. Their analysis predicted that con-
sensus standardization is more likely to achieve coordination on a standard than is a de
facto standards race, but that (on average) it is slow: the equilibrium delays may dissipate
a large fraction of the potential gain from the process. This is essentially a bargaining
problem and a bargaining inefficiency.
As the modern literature on bargaining suggests, it is useful to make explicit the pri-
vate information that drives bargaining behavior and bargaining inefficiencies. In this
paper, accordingly, I develop an incomplete-information war-of-attrition model to assess
the performance of consensus standardization. I find that the predicted delays are often
long enough to make the process perform very poorly, even on a somewhat optimistic view
of its merits. Standards organizations' policies that reduce vested interest may help in reducing delays; in particular, intellectual-property policies akin to compulsory licensing are likely to speed the process and do not necessarily reduce the incentives to innovate. On the other hand, some policies intended to reduce delays, but not acting on vested interest, may be stymied by the fundamental bargaining incentives.

* I thank the National Science Foundation and the Berkeley Committee on Research for research support. I thank seminar participants at Berkeley, Davis, UCLA, Santa Barbara, Lisbon, Barcelona, LSE, USC, Calgary, Vancouver, TPRC, Harvard, NBER, Aix-en-Provence, OECD, and Oxford; and especially James Dana, Glenn Ellison, Barry Nalebuff, Eric Rasmusen, Pierre Regibeau, Michael Whinston, and Charles Wilson for helpful comments. I also thank the members of ANSI's X3 Strategic Planning Committee, especially its former and current Chairs, S. P. Oksala and C. Cargill, for helpful comments, although they do not necessarily agree with my approach and conclusions. My views on standard-setting have evolved in part through work with Garth Saloner and Carl Shapiro, although they are not responsible for my statements here. I thank Chris Simpson and Anthony Raeburn of the IEC, and Christian Favre of the ISO, for interviews. Comments are welcome: Internet farrell@econ.berkeley.edu, phone (510) 642-9854 or fax (510) 642-6615.
1. The Formal Standards Process: Description and Delays
Standards-developing organizations (SDOs) try to replace the bandwagon de facto stan-
dards process with an orderly explicit search for consensus. This process mingles technical
discussion and political negotiation, in contrast to the race for market share and the battle
for users' expectations that typify the de facto standards process.[1]

The active participants are "volunteers" willing to spend substantial time and travel money.[2]
Some commentators, such as Weiss and Toyofuku (1993), have stressed the resulting incentives for free-riding. In particular, users, whose interests are typically more diffuse than vendors', are often thought to be badly underrepresented.[3]

However, when firms have vested interests in particular solutions, participation may be less of a problem but agreement may be hard to reach. Kolodziej (1988) writes, "Politics can become especially entangled when vendors already have a vested interest in the technology being brought to the standards table," and quotes a member of an IEEE standards committee as saying that "when there is 'silicon', or component products, already available in the market, it will always cause problems in standards work."[4]
Even absent physical installed base, vendors may have proprietary complementary technologies; if there is firm-specific learning by doing, each firm will have a cost advantage in producing to its "own" standard even if the standard itself is made public; and generally firms with different strategic foci may prefer different standards.

[1] For a fuller description see for instance Cargill (1989).

[2] The direct pecuniary costs of formal standards activities for information-technology firms have been estimated at around 1% of revenues: Swann (1990, page 472) quotes estimates of 0.5%, 1%, and 2.5%. Participation in a single committee may cost $250,000 annually, according to an estimate by Professor Michael Spring, quoted in Datamation, September 1, 1989, page 64.

[3] At one point in 1988, all participants in the IEEE 802.3 work group represented companies with vested product interests. The chairman remarked, "We [the IEEE] don't have any organizational policies that bar individual users from attending and participating in these standards meetings. There are just not enough users who want to attend." See "Users cry for standards but don't get involved," Computerworld, May 4, 1988.

[4] Gabel (1987, section 5) suggests that X/OPEN selected UNIX as a standard operating system for mainframe computers precisely because it was equally unfamiliar to all the members of X/OPEN.
In principle, one might imagine a broad range of decision mechanisms for a standards
organization, designed to take into account parties' information, their participation con-
straints, and the need to produce standards that will attract widespread support (most
consensus standards are voluntary). It seems, however, that standards organizations recoil
from any element of compulsion, whether ex ante or ex post. There are of course reasons for this, notably antitrust concerns,[5] but here I focus on its implication: although standards can be adopted over some opposition, the institutions almost desperately seek "consensus," defined by the American National Standards Institute (ANSI) as the finding that:
"substantial agreement has been reached by directly and materially affected interest categories. Substantial agreement means much more than a simple majority, but not necessarily unanimity. Consensus requires that all views and objections be considered, and that a concerted effort be made toward their resolution."[6]
ANSI (1987) requires more specifically that a standard obtain at least a two-thirds majority of those voting and a majority of the membership (section A8.3), and that unresolved negative views and objections be formally addressed (section A9.1, part 11). Similarly, the International Organization for Standardization (ISO) specifies that "the decision to register a committee draft as a draft International Standard shall be taken on the basis of the consensus principle." It defines consensus as[7]

"general agreement, characterized by the absence of sustained opposition to substantial issues by any important part of the concerned interests and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments."
[5] Even some who doubt that there is a serious threat of anticompetitive behavior agree that there is a serious threat of such allegations, and that the fear of (even baseless) suits constrains the process. See Anton and Yao (1995) for a general discussion of antitrust and standardization, and Federal Trade Commission (1983) for some antitrust concerns related to standard-setting, although mostly related to quality rather than compatibility standards. Cargill (1989, p. 106), discussing the widespread Accredited Organization method of standards development, notes that "the potential use of standards to protect the status quo is a constant worry... There is a great temptation to standardize only upon things that are familiar to the majority of the members." See ANSI (1987) or Rockwell (1990) on the need for careful and time-consuming procedures when there is actual or potential conflict over a standard.

[6] ANSI (1987) section 1.3.3.

[7] IEC/ISO Directives, 1989, part 1, section 2.4.3. IEC is the International Electrotechnical Commission.
As a result, although consensus need not imply unanimity, the parties most directly con-
cerned can often block or at least delay the adoption of a standard they dislike. In the
case I consider, where two or more parties want there to be a standard but differ on their
preferred standards, this turns the coordination game into a war of attrition.
Delays in Formal Standardization
Viewing formal standardization as a war of attrition suggests that we should expect delays
in reaching consensus. Indeed, formal consensus standardization is often very slow, despite
efforts to hasten the process. Cargill (1989, page 114) reports that reaching a standard takes "an average of four years to complete; much more, if [it is] controversial". Kolodziej (1988) estimates four to five years as an average. In 1981 the chairman of the IEEE Standards Board cited seven years as an average delay for an IEEE standard (Lee, 1981). At the International Organization for Standardization (ISO), the average elapsed time in developing standards during 1987–1991 ranged from six years (1991) to well over seven (1988). In its 1991 Annual Report the International Electrotechnical Commission (IEC) set a goal of reducing the standards development cycle to "a maximum of five years, and a mean time of very much less" (page 2), from a previous figure of 87 months (page 6); but its 1994 Annual Report (page 4) reported a mean development time of just over five years.
Participants and observers complain fiercely about these delays. For example, in a National Research Council survey of responses to the question "What issues in the US standards developing process must be resolved?", the first point listed was that "the adoption process is too slow."[8]

[8] National Research Council (1990), page 21.
2. Outline of the Paper
In section 3, I develop a simple model of a two-player private-information war of attrition,
in which each player's type is the quality of the technology that it proposes as a standard.
A player's proposal is adopted when the other "concedes," and each player's payoff at that time depends on the quality of the proposal adopted and on who "won." I show that in the symmetric equilibrium the better proposal wins. In section 4, I assess the performance of this symmetric equilibrium, taking account of its selection of the better proposal and of its delays. In a simple example, I find conditions for this equilibrium to outperform immediate
random choice, which can be viewed either as a metaphor for a faster, less careful process,
or as the process when there is a predetermined dominant standard-setter. In section 5, I
use this analysis to discuss attempts to reduce delays. The model predicts that policies that
reduce vested interest, or make it less powerful, will likely reduce delays, but that those
that do not will be relatively unsuccessful. In section 6, I address the concern that reducing
"vested interest," particularly through the kind of intellectual-property policy that many standards organizations adopt, might reduce incentives to develop the proposals on which the model is based. I show that the countervailing effect of faster adoption may outweigh the obvious adverse effect, so weaker intellectual property protection may actually increase incentives to innovate. In section 7, I address the concern that some interested parties, notably users, are underrepresented because their interests are too diffuse to induce them to participate. I ask how their ex ante preferences are likely to differ from active
participants' (mostly vendors'). Section 8 concludes.
3. A Model
Two proponents have developed systems, which may differ in "quality" (which is private information), and must agree on one as a standard. Each player would like a high-quality standard adopted, but also would like its own system adopted as the standard. Specifically, each player i = 1, 2 has a privately known quality q_i, and can concede at any time, ending the game. That is, each chooses a concession time t_i; if t_1 ≤ t_2 then player 1 concedes at time t_1, and as of that date it gets a payoff of L q_2, while player 2 gets W q_2. Here, L > 0 and W > 0 measure the "loser's" and "winner's" shares of total surplus q; we normalize when convenient so that L + W = 1. The players share a discount rate r, and flow payoffs are zero until agreement is reached (that is, I assume that the market waits until there is a standard).
If we had L = W then both players would want the player with lower quality to
concede, and a sensible model would predict rapid and efficient agreement in this case. When W > L there is "vested interest," and we assume this henceforth. Thus each player would rather its rival concede, even if its rival has a somewhat higher quality. This creates a private-information war of attrition, in which each player's strategy is the time at which it will concede if the other has not already done so. We consider a symmetric model, and focus on the symmetric perfect Bayesian equilibrium. Thus we look for a concession-time strategy t(·), such that for a player of type q it is optimal to concede at time t(q) if its rival has not yet conceded and if it believes its rival is also using the strategy t(·).[9]

[9] In general, this might be a mixed strategy, but it will not be in the examples we consider.
I assume that there are no side payments: this is my understanding from a number of conversations with standards officials and participants. Side payments would likely also raise antitrust concerns. For simplicity, I also assume that agreement must take the form of one party conceding or agreeing to the other's proposal: in general there is no obvious channel for compromise, although it may sometimes be possible.
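The payoff structure is simple enough to state in a few lines of code. The sketch below is my own illustration, not part of the original analysis; the parameter values W = 0.6, L = 0.4, and r = 0.05 are arbitrary choices satisfying the normalization L + W = 1 and the vested-interest condition W > L.

```python
import math

def payoffs(q1, q2, t1, t2, W=0.6, L=0.4, r=0.05):
    """Discounted payoffs (u1, u2) in the two-player war of attrition.

    The player who concedes first "loses": the rival's system is adopted at
    the earlier concession time, the winner gets W times its own quality, the
    loser gets L times the winner's quality, and both payoffs are discounted
    back to time 0 at rate r. Ties are resolved here by letting player 2 win,
    an arbitrary convention matching the t1 <= t2 case in the text.
    """
    disc = math.exp(-r * min(t1, t2))
    if t1 <= t2:   # player 1 concedes; player 2's system (quality q2) is adopted
        return L * q2 * disc, W * q2 * disc
    else:          # player 2 concedes; player 1's system is adopted
        return W * q1 * disc, L * q1 * disc

# With W > L, each player prefers that its rival concede, even against a
# somewhat better rival system: the source of the war of attrition.
print(payoffs(q1=1.0, q2=1.2, t1=0.5, t2=3.0))
```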
A Screening Property
In Propositions 1 and 2 I show that in the symmetric perfect Bayesian equilibrium of the
war of attrition, the better system wins.
Proposition 1. Every rationalizable Bayesian strategy is weakly increasing: lower quality
types concede before higher quality types.
Proof. Consider any two possible types of a player (say player 1), q_L and q_H, with q_L < q_H. Suppose that q_L puts positive probability weight on conceding at time t_L, and q_H puts positive weight on conceding at time t_H. We will show that t_L ≤ t_H.
Let Γ be player 1's (perceived) distribution function of t_2, the time when player 2 concedes (if player 1 does not previously do so). Let E[q|t] ≡ E[q_2 | t_2 = t] be player 1's expected value of player 2's quality q_2 given that player 2 concedes at t. Then, because type q_L is willing to concede at t_L, we have:
\[
W q_L \int_0^{t_L} e^{-rt}\, d\Gamma(t) + L \int_{t_L}^{\infty} E[q|t]\, e^{-rt}\, d\Gamma(t) \;\ge\; W q_L \int_0^{t_H} e^{-rt}\, d\Gamma(t) + L \int_{t_H}^{\infty} E[q|t]\, e^{-rt}\, d\Gamma(t).
\]
Similarly, because type q_H is willing to concede at time t_H,
\[
W q_H \int_0^{t_L} e^{-rt}\, d\Gamma(t) + L \int_{t_L}^{\infty} E[q|t]\, e^{-rt}\, d\Gamma(t) \;\le\; W q_H \int_0^{t_H} e^{-rt}\, d\Gamma(t) + L \int_{t_H}^{\infty} E[q|t]\, e^{-rt}\, d\Gamma(t).
\]
Subtracting the first of these inequalities from the second we get
\[
W (q_H - q_L) \int_0^{t_L} e^{-rt}\, d\Gamma(t) \;\le\; W (q_H - q_L) \int_0^{t_H} e^{-rt}\, d\Gamma(t). \tag{1}
\]
If t_L > t_H, (1) would imply that there is zero probability of player 2 conceding between t_H and t_L. But then no type of player 1 should concede at t_L > t_H (it would be better to concede sooner, at t_H), contradicting our assumption that q_L can optimally concede at t_L. Thus indeed t_L ≤ t_H.
Proposition 1 has an important performance implication. In the symmetric equilibrium
of the war of attrition, the "winner" is the player with the later concession time, and by
Proposition 1 this is the player with the higher quality proposal. Thus, in this model, the
better system will eventually be chosen. We formalize this as a proposition and provide a
formal argument:
Proposition 2. In the symmetric equilibrium of the war of attrition, each player's strategy
implies a continuous and gap-free distribution of concession time. Consequently, the winner
is the higher-quality system.
Proof. Suppose that each player's strategy t_i(·) had an atom at t: that is, there is positive probability of concession exactly at t. Let q̄(t) be the supremum of the set of types who can optimally concede at t. By continuity of the payoff function, type q̄(t) also can optimally concede at t. But by waiting until just after t, type q̄(t) would increase its probability of "winning," and could not reduce the quality of the system that emerges. Hence its payoff would increase. This contradicts the statement that it can optimally concede at t. Thus the distribution of concession time cannot have atoms.
To see that the distribution of concession time cannot have gaps, suppose that no concessions take place in the interval (t_L, t_H). Then the lowest type that (in the hypothetical equilibrium) can optimally concede at t_H would do better to concede at t_L: it would not reduce its probability of winning nor could it lower the quality of the system adopted, but it would get its payoff sooner. Therefore there are no gaps in the distribution of concession time.
Consequently, if q is continuously distributed, the symmetric equilibrium strategy is one-to-one; thus the war of attrition selects the higher-quality system because concession time t is strictly increasing in quality q.

If the distribution of q has (say) an atom at q̂, then that type will randomize its concession time (creating a concession-time distribution without atoms). By Proposition 1, this randomization will be on an interval [t_L, t_H], and types below q̂ will concede at or before t_L and types above q̂ will concede at or after t_H. Consequently, atoms in the q distribution will not lead with positive probability to the choice of an inferior system. This establishes the Proposition.
This sorting or discrimination property is well known (for instance, it is in Bliss and Nalebuff, 1984). Below, I explore whether the war of attrition buys this sorting at too high a price in delay. Here, I pause for two comments.

First, in general the war of attrition selects for willingness to wait. In my simple model, differences in willingness to wait are caused solely by differences in quality, so that selection is ideal. This clearly overstates the case. In particular, as Katz and Shapiro (1985) emphasized, small firms often are keener to have a standard than are large firms, so the war of attrition may favor large firms' proposals as well as high-quality proposals.
Second, Proposition 2 concerns the symmetric equilibrium. There is also an asymmetric equilibrium in which player 1 never concedes and player 2 concedes immediately,[10] as well as the mirror-image equilibrium. In ex ante symmetric situations, the symmetric equilibrium seems a natural model. However, if one player is (generally perceived as) the standard-setter, an asymmetric equilibrium may be the right description. Historically, IBM may have played this role in the computer industry, and AT&T in US telecommunications.

[10] This must be supported by out-of-equilibrium beliefs of the same form: if player 2 does not immediately concede, player 1 must expect that it will do so at the next instant.
Properties of Equilibrium for Continuous Distributions
Suppose that quality q is distributed independently for each firm with the continuous distribution function F(·). Define
\[
G(q) \equiv E[y \mid y > q] = [1 - F(q)]^{-1} \int_q^{\infty} y\, dF(y).
\]
Observe that
\[
G'(q) = h(q)\,[G(q) - q], \tag{2}
\]
where of course h denotes the hazard rate,
\[
h(q) \equiv \frac{F'(q)}{1 - F(q)}.
\]
Now consider the problem facing a player of type q who, according to the equilibrium, is meant to concede at time t(q). Once that time is reached, conceding yields an expected payoff of L G(q). Holding out a short time dt longer yields an expected payoff of
\[
h(q)\,q'(t)\,dt\; W q + [1 - h(q)\,q'(t)\,dt]\, e^{-r\,dt}\, L\, G(q + q'(t)\,dt),
\]
where the function q(·) tells us what type is meant to be conceding at any instant (it is the inverse function of the t(·) function). Hence (suppressing arguments for brevity)
\[
0 = h q' W q + L G' q' - h q' L G - r L G.
\]
Using (2) this becomes
\[
0 = [W - L]\, h q q' - r L G. \tag{3}
\]
We can separate (3) as
\[
r\, dt = v\, \frac{q\, h(q)}{G(q)}\, dq, \tag{4}
\]
where v ≡ (W − L)/L ≥ 0.[11] This differential equation, together with the boundary condition that q = q_min at t = 0, defines the symmetric Bayesian equilibrium of the private-information war of attrition.

[11] Thus, with our normalization L + W = 1, we have W = (v + 1)/(v + 2) and L = 1/(v + 2).
To solve this, define
\[
K(q) \equiv [1 - F(q)]\, G(q) \equiv \int_q^{\infty} y\, dF(y)
\]
and note that K(q_min) = μ, where μ is the mean of q. Then the solution to (4) with our boundary condition is
\[
r t = v \log \mu - v \log K(q),
\]
or in terms of the time value of delay until q concedes,
\[
\delta(q) \equiv e^{-r t(q)} = \left[ \frac{K(q)}{\mu} \right]^{v}. \tag{5}
\]
From (5) we see that r has disappeared: it affects the time to agreement but not the payoffs given both players' types. Indeed, we see that:

Proposition 3. Performance is independent of r. Delays are increasing in v. When v approaches zero (no vested interest), so do delays, and the performance of the symmetric equilibrium of the war of attrition approaches the first-best.
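As a numerical sanity check on this derivation (my own, not part of the original paper), the separable equation (4) can be integrated directly and compared with the closed form (5). The sketch below does this for the exponential quality distribution used in the examples of section 4; the parameter values are arbitrary.

```python
# Integrate (4), r dt = v * q h(q)/G(q) dq, with t(q_min) = 0, and compare
# e^{-r t(q)} with the closed form (5), delta(q) = (K(q)/mu)^v.
# Exponential example: density f(q) = exp(b - q) for q >= b, so the hazard
# rate is h(q) = 1, G(q) = q + 1, K(q) = exp(b - q)(q + 1), and mu = b + 1.
import numpy as np
from scipy.integrate import quad

b, v, r = 0.3, 1.5, 0.1                      # arbitrary test values
mu = b + 1.0
K = lambda q: np.exp(b - q) * (q + 1.0)

def t_of_q(q):
    """Concession time of type q from integrating equation (4)."""
    integral, _ = quad(lambda y: y / (y + 1.0), b, q)
    return v * integral / r

for q in [0.5, 1.0, 2.0, 4.0]:
    delta_ode = np.exp(-r * t_of_q(q))
    delta_closed = (K(q) / mu) ** v           # equation (5)
    print(q, round(delta_ode, 6), round(delta_closed, 6))   # should agree
```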
4. Performance
In this section I analyze the ex ante performance (from the players' point of view) of the war of attrition. As a benchmark, I use the alternative robust mechanism of an immediate "random choice." The comparisons of the war of attrition and of "random choice" can be read in two ways. First, they may suggest whether or not it would be wise to move towards a faster process, even at the expense of some loss in the quality of outcome. Second, they let us compare the efficiency of setting standards through a symmetric consensus process against that of having an established standard-setter. These interpretations will differ when we consider incentives to improve q below, but for now they are equivalent: in each case, we get an immediate standard with expected quality μ.
Write u(q) for the (interim) expected payoff to a player of type q in the symmetric equilibrium. From the envelope theorem, u'(q) is equal to the gain from an increase in q holding concession strategy fixed; thus we have
\[
u'(q) = W \int_{q_{\min}}^{q} \delta(y)\, dF(y) = W \int_{q_{\min}}^{q} \left[ \frac{K(y)}{\mu} \right]^{v} dF(y). \tag{6}
\]
We can combine (6) with the bottom boundary condition u(q_min) = Lμ to get
\[
u(q) = L\mu + W \int_{q_{\min}}^{q} \int_{q_{\min}}^{z} \left[ \frac{K(y)}{\mu} \right]^{v} dF(y)\, dz. \tag{7}
\]
An alternative formulation can be written down simply by accounting for the outcome as a function of the rival's type, say t:
\[
u(q) = W q \int_{q_{\min}}^{q} \delta(t)\, dF(t) + \delta(q)\, L \int_{q}^{\infty} t\, dF(t),
\]
or
\[
\mu^{v} u(q) = W q \int_{q_{\min}}^{q} K(t)^{v}\, dF(t) + L\, K(q)^{v+1}. \tag{8}
\]
Unfortunately, I have been unable to use (7) or (8) to derive general properties of
performance beyond Proposition 3. So we turn to an example.
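Although (7) and (8) resist general analysis, they are easy to evaluate numerically and should agree with each other. The sketch below is my own check, with arbitrary parameters and the exponential quality distribution used in the examples that follow; it computes u(q) both ways.

```python
# Cross-check the two formulations (7) and (8) of the interim payoff u(q),
# using the exponential quality distribution f(q) = exp(b - q), q >= b,
# and the normalization W = (v+1)/(v+2), L = 1/(v+2).
import numpy as np
from scipy.integrate import quad

b, v = 0.3, 1.5                              # arbitrary test values
W, L = (v + 1.0) / (v + 2.0), 1.0 / (v + 2.0)
mu = b + 1.0
f = lambda y: np.exp(b - y)                  # density of q
K = lambda y: np.exp(b - y) * (y + 1.0)      # K(y) = integral_y^inf s dF(s)
delta = lambda y: (K(y) / mu) ** v           # equation (5)

def u_eq7(q):                                # equation (7): double integral
    inner = lambda z: quad(lambda y: delta(y) * f(y), b, z)[0]
    return L * mu + W * quad(inner, b, q)[0]

def u_eq8(q):                                # equation (8)
    term = quad(lambda t: K(t) ** v * f(t), b, q)[0]
    return (W * q * term + L * K(q) ** (v + 1.0)) / mu ** v

for q in [b, 1.0, 2.0, 4.0]:
    print(q, round(u_eq7(q), 6), round(u_eq8(q), 6))   # the two should agree
```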
An Example
The most tractable example I have found is based on the distribution function F(x) ≡ 1 − x^{-(1+a)} for x ≥ 1 (where a > 2). This distribution has a mean of (a + 1)/(a − 2); I think of a more as an indicator of dispersion than of mean, however, because the entire model is transparent to multiplication by a constant factor.[12] We have then[13]
\[
K(q) = \frac{a+1}{a-2}\, q^{-(a-2)}.
\]
Hence
\[
\delta(q) = q^{(2-a)v} \qquad (q \ge 1),
\]
and
\[
u'(q) = \frac{v+1}{v+2}\; \frac{a+1}{1+(a-2)(v+1)} \left[ 1 - q^{-[1+(a-2)(v+1)]} \right]. \tag{9}
\]

[12] That is, we can consider the generalized model F(x) ≡ 1 − (x/k)^{-(1+a)} for x ≥ k; here, the mean is k(a + 1)/(a − 2) but the proportional dispersion is the same as for k = 1, i.e., is determined solely by a.

[13] The calculations reported here and below are straightforward but tedious; I did or confirmed them using Mathematica.
Integrating and using the boundary condition at q_min = 1, which is
\[
u(1) = L\mu = \frac{1}{v+2}\; \frac{a+1}{a-2},
\]
we get
\[
u(q) = \frac{(1+a)\, q \left[ q^{\,1+2v-a-av} + (a-2)(v+1) \right]}{(a-2)\,(v+2)\,\bigl(1+(a-2)(v+1)\bigr)}. \tag{10}
\]
The lowest type (q = 1)'s expected payoff is Lμ, and that of a very high type, q, is (asymptotically) equal to W q times the expected value of δ(q), which is (1 + a)/[1 + (a − 2)(v + 1)].

Taking the expectation of (10) yields the ex ante expected payoff (per player), which we use as a measure of performance for the participants:
\[
Eu(q) = \frac{(1+a)^2}{(a-2)\,[1+(a-2)(v+2)]}. \tag{11}
\]
Comparisons
In the first-best, the expected per-player payoff is
\[
\int_{q_{\min}}^{\infty} q\, F(q)\, dF(q) = \frac{(a+1)^2}{(a-2)(2a-1)}.
\]
Under random choice it is Eu = μ/2 = (a + 1)/[2(a − 2)].

Thus the war of attrition achieves a fraction
\[
\frac{2a-1}{1+(a-2)(v+2)}
\]
of the first-best payoff. Perhaps more interestingly, it achieves a fraction
\[
\frac{2(a+1)}{1+(a-2)(v+2)}
\]
of the payoff achieved under random choice. This latter fraction is greater than 1 if and only if (a − 2)v < 5.
Proposition 4. The ex ante payoffs to participants from the war of attrition exceed those from random choice if and only if v < 5/(a − 2).

Because selection for quality is valuable ex ante, some vested interest is tolerable, but too much will more than dissipate the gains from selection. A rapid but rough choice (random choice), or a pre-determined standard-setter, is superior for participants if v is large or if a is large. Large v leads to long delays (Proposition 3); a more concentrated distribution of quality (large a) reduces the value of the selection effect. Interestingly, one
might expect the most resistance to a predetermined standard-setter precisely when v is
large.
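To see how these comparisons behave quantitatively, the reported fractions can simply be tabulated over a and v. The grid below is my own illustration; the last two columns check that the "beats random choice" region coincides with Proposition 4's condition v < 5/(a − 2).

```python
# Tabulate, for the example of this section, the war of attrition's payoff as
# a fraction of the first-best and of random choice, using the expressions
# reported in the text. The parameter grid is arbitrary.
def frac_of_first_best(a, v):
    return (2 * a - 1) / (1 + (a - 2) * (v + 2))

def frac_of_random_choice(a, v):
    return 2 * (a + 1) / (1 + (a - 2) * (v + 2))

for a in [3.0, 5.0, 10.0]:
    for v in [0.5, 1.0, 5.0]:
        beats_rc = frac_of_random_choice(a, v) > 1
        print(a, v,
              round(frac_of_first_best(a, v), 3),
              round(frac_of_random_choice(a, v), 3),
              beats_rc, v < 5 / (a - 2))    # last two columns agree (Prop. 4)
```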
Other Examples
Although the example above is the only one I have found tractable in general, it is possible
to solve others for the simple case v = 1. I report the relative performance of the war of
attrition and of random choice, and for a benchmark also that of the first best.
Uniform Distribution. Let q be uniformly distributed on [m − 1/2, m + 1/2]. Then (lengthy but straightforward) calculations along the lines above show that the expected per-player payoff in the war of attrition is
\[
\frac{1}{12} + \frac{1}{120\, m} + \frac{m}{3}.
\]
In this example the first best yields
\[
\frac{1}{12} + \frac{m}{2},
\]
while random choice of course yields each player m/2. Thus the war of attrition outperforms random choice if and only if 1 + 10m − 20m² > 0, or approximately m < .59. Recalling that the range of q is [m − 1/2, m + 1/2], I interpret this to mean that if the uniform distribution is appropriate, the war of attrition will seldom outperform random choice.
Exponential Distribution. Let q be exponentially distributed, with density e^{b−q} for q ≥ b. Then calculation shows that the war of attrition gives per-player payoffs of
\[
\frac{17 + 24b + 9b^2}{27(1+b)}.
\]
The first-best payoff per player is
\[
\frac{3}{4} + \frac{b}{2},
\]
and the random-choice payoff per player is (1 + b)/2. Thus, the war of attrition achieves a fraction
\[
\frac{4(17 + 24b + 9b^2)}{27(1+b)(3+2b)}
\]
of the gains from the first-best; this is decreasing in b (which one can interpret as an inverse measure of relative dispersion of q), and ranges from about .84 when b is very small to 2/3 when b is very large. The ratio of the payoff in the war of attrition to that under random choice is
\[
\frac{2(17 + 24b + 9b^2)}{27(1+b)^2},
\]
which exceeds 1 if and only if b < (12√2 − 6)/18 ≈ .61.
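These closed forms are easy to confirm by simulation. With symmetric players, L + W = 1, and v = 1, the ex ante per-player payoff is (1/2) E[q_max δ(q_min)], where δ(q) = K(q)/μ. The sketch below (my own check, with arbitrary m and b) estimates this by Monte Carlo for the uniform and exponential cases and compares it with the formulas above.

```python
# Monte Carlo check of the v = 1 closed forms for the uniform and exponential
# examples: per-player payoff = 0.5 * E[ q_max * delta(q_min) ], delta = K/mu.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Uniform on [m - 1/2, m + 1/2]: K(q) = ((m + 1/2)^2 - q^2)/2, mu = m.
m = 0.8
q = rng.uniform(m - 0.5, m + 0.5, size=(n, 2))
qmin, qmax = q.min(axis=1), q.max(axis=1)
delta = ((m + 0.5) ** 2 - qmin ** 2) / (2.0 * m)
print(0.5 * np.mean(qmax * delta), 1/12 + 1/(120 * m) + m/3)

# Exponential with density exp(b - q), q >= b: K(q) = exp(b - q)(q + 1), mu = b + 1.
b = 0.3
q = b + rng.exponential(1.0, size=(n, 2))
qmin, qmax = q.min(axis=1), q.max(axis=1)
delta = np.exp(b - qmin) * (qmin + 1.0) / (b + 1.0)
print(0.5 * np.mean(qmax * delta), (17 + 24*b + 9*b**2) / (27 * (1 + b)))
```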
5. Policies to Reduce Delays in War of Attrition
Standards organizations are very concerned about delays, and try to reduce them.[14] Our model suggests that delays will shrink if v is reduced, and that given the war-of-attrition structure, the distribution of q, and v, other strategies intended to reduce delays may be ineffective. In this section I describe the application of this idea to a number of policies meant to reduce delay.
Meeting More Often
Committees charged with developing consensus on a standard typically meet only periodically.[15] A natural initiative toward reducing delays is meeting more often. Presumably, this provides more time to work out technical issues. But to the extent that, as in the model, the work is largely bargaining, the time to agreement is determined by screening constraints, and if the flow costs of delay are primarily the loss of benefits from a new market, as in the model, then meeting more often is unlikely to cut delays.[16] Indeed, the model above assumes perpetual meetings, but the delays persist.

[14] Besen and Farrell (1991) describe competitive pressures that may be part of the motive for these efforts.

[15] For example, in T1, the ANSI-accredited organization for telecommunications standards, "technical subcommittees" "ordinarily meet four times each year" (T1 Manual, June 1994, paragraph 4.1.5.1.)
Meeting more often to develop consensus therefore seems likely to be of limited value. Of course, once consensus is developed, more frequent plenary or official meetings to finish the process can help. For example, until 1989 the ITU would formally approve standards only at plenary meetings held only every four years. As a result of reforms intended to reduce delays, it is now said to be on a "perpetual standards-creation basis", so that standards can be approved at any time.[17]

As one might expect from this reasoning, it appears that SDOs have had much more success reducing delays in "final processing" than in reaching consensus. For example, in its 1994 Annual Report (page 4), the IEC noted that "Time for the fundamental part of standards production – preparatory and technical development stages ... has remained substantially the same, while time for the latter stages of approval and publication ... has been brought down by more than 60%."
Standardizing Early to Reduce Vested Interest
Vested interests are growing all the time as installed bases grow or proprietary knowledge develops. As when one sets off on a commute just before rush-hour, every delay in starting means a bigger delay in finishing. Thus, some observers urge standardization "in advance of the market," before vested interests grow strong. For example:[18]
"Brian Livingston, connectivity specialist for GE Consulting Services and chairman of the micro managers' 486 standardization committee, believes that to do any good, committees must be organized before the camps have formed around competing standards. 'It's much easier to establish a standard before the market has [formed] than to go back and get a number of competing vendors to agree upon one. That's why we formed the 486 committee before the chip was even released.'

"Once vendors have brought competing products to market, ... there's often no hope of a clear standard emerging. One such futile effort is the attempt by manufacturers of competing RISC chips to reach a standard."

[16] Of course, if meeting more often raises the direct flow costs of participation and if that is a significant part of the total costs of disagreement, then the time to agreement will indeed fall, although the total costs imposed on participants by delay may not. This would best be analyzed in a model in which costs of continuing disagreement are not simply the delay in the benefits of an agreement; David and Monroe (1994b) discuss such a model. Since nonparticipants too are hurt by delays, such a change may be socially valuable. But the effect seems unlikely to be large, since direct costs are presumably small compared to foregone revenues from an important market. It is also worth noting that increasing the flow costs of participation will presumably drive out some marginal participants, leaving only those with the strongest vested interests.

[17] See e.g., B. Crockett, "ITU takes steps to speed up standards process," Network World, 9 October 1989, page 30, and "CCITT chief plots changes to speed standards efforts," Network World, 25 December 1989, page 19. For some discussion see Besen and Farrell (1991).

[18] "Users fear Standards Groups Act as Vehicles for Vendors' Interests," Infoworld, 12/5/88, page 1,8. See also a quote from the chair of the T1 committee, in Dorros (1990), and Dorros' agreement (page 38).
In general standardizing in advance seems likely to reduce vested interest, and hence reduce delay, but to reduce both the opportunities for product development and the reliability of screening for quality. It could thus be viewed as a move "towards" random choice, with the benefits and costs of such a move discussed above. It is not clear how to put this formally in the model, however.
Voting Rules to Reduce Power of Vested Interest
Given the extent of vested interest, one can try to reduce its power to delay agreement.
One change in this direction is to weaken the consensus principle.
While the consensus principle does not exactly confer veto power on each interested party (as in my model), it at least enables important participants to hold up an agreement if they hope to extract something they prefer. In the IEC, for example, even minor players have sometimes been able to hold up agreement.[19] In response, some standards bodies have introduced provisions defining the (super)majority required to approve a draft standard. Besen (1989) describes how the European Telecommunications Standards Institute (ETSI), formed in 1988, replaced the consensus principle by a system of weighted voting (when consensus "cannot be achieved") that allows adoption of a standard on a 71-percent weighted majority. Similarly, Besen (1990) discusses voting arrangements in the Telecommunications Technology Committee (TTC), a Japanese telecommunications standards body. The ISO allows for draft international standards approved by a two-thirds majority (of "P-members") provided that "every attempt shall be made to resolve negative votes."[20] The ITU has a similar voting provision; see, e.g., Besen and Farrell (1991). And the European Communities' Green Paper (1990, paragraph 18) describes how "... in 1987, the internal regulations of CEN/CENELEC were revised at the request of the Commission to permit the adoption and obligatory transposition of European standards by weighted majority vote."

[19] Author's interview with Anthony M. Raeburn, Secretaire General, IEC, Geneva, May 1992.
The two-player model above is inadequate to analyze the effects of changes in voting rules, but one would expect that requiring "less" consensus would reduce delays, as David and Monroe (1994a) have argued in a three-player complete-information model. They also suggest that the screening effect (the best quality emerges) would be weakened by relaxing the definition of consensus.

[20] IEC/ISO Directives, part 1, section 2.4.3.
Incomplete Standardization to Sidestep Conflict
In practice, formal standards do not always ensure compatibility: not every two "conforming" products are interoperable. The reason is that, partly in response to the uncertainty about feasibility, cost, and demand that results from standardizing early, but also as another strategy to reduce the delays due to vested interest, a standard will often include incompatible "options". For instance, Wagner (1990) describes how the IEEE was expected "to abandon efforts to choose between Motif and Open Look" and to "standardize on the common elements of the two rival GUI and windowing systems, then develop a guide for programmers to write applications that can easily migrate from one GUI to another." Sirbu and Zwimpfer (1985) describe how incompatible options were included in the X.25 packet switching standard in order to avoid intractable problems of vested interest (since more than one system already had an installed base). Similarly, Kolodziej (1988) relates that an impasse in negotiating a PC modem standard (V.42) in the CCITT was overcome by deciding "to put both protocols into the standard. On the surface, that might seem like a cop-out ..."

The result of such a decision is often called a "model": an incompletely-specified standard, or a menu of choices. Two products, both of which conform to the standard, may be incompatible if they reflect different choices from the menu. A set of choices is
sometimes called a profile, or a strict standard or functional standard. These profiles are typically developed outside the original standards organization: by user groups, by governments, or by other standards organizations. Sometimes, as in ISO, the original organization then "certifies" profiles – an ironic twist.[21]

One might think that including incompatible options vitiates the standardization effort. That would be too harsh a conclusion. Although a model does not ensure compatibility, it greatly helps in achieving it. The market, or other organizations, can more easily choose a profile within a model than choose a standard from scratch: not all possible (or even proposed) options need be included, and many uncontroversial issues may be standardized.[22] And, if the market respects the model, it is often easier, cheaper, and more effective to patch together compatibility through converters within a model than it would be if competing technologies were not constrained by a (nonstrict) standard.[23] For example, since there are economies of scale in providing converters, a full set of converters is more likely to be offered the smaller the variety of standards permitted within the model, and having no model is like having an infinitely permissive model. Thus, a model is an important partial solution to the compatibility problem.
[21] ISO has recognized a number of profile developers and formed "Feeders' Forums" to coordinate their activities and to propose profiles for special recognition by ISO itself. See for instance "The standards deluge: a sound foundation or a tower of Babel?" Data Communications, September 1988, especially pages 163–164.

[22] Even in the controversial case of high-definition television (HDTV), I understand that many parameters of a standard have been internationally agreed. Although the remaining conflicts will probably be enough to ensure that receivers cannot be freely traded (this may have been the intention of some countries), the areas of agreement are large enough to make converting and trading programming much easier than it might have been. Indeed, a proposal to construct "open architecture receivers" that would be consistent with a restricted but considerable range of different possible diffusion standards was taken seriously (although finally rejected).

[23] See for instance Wagner (1990).

Intellectual Property Rules to Reduce Vested Interest

Many standards bodies have intellectual-property rules specifying that if a proprietary technology is essential in complying with a standard, the owner must agree to license it
liberally. For example, ANSI rules require that any patented technology used in a proposed standard be licensed either "without compensation" or "under reasonable terms and conditions that are demonstrably free of any unfair discrimination."[24] Similarly, the ISO's Directives require that if a standard is prepared "in terms which include the use of a patented item," then the patent-holder must promise to "negotiate licences under patent and like rights with applicants throughout the world on reasonable terms and conditions."[25]

Such licensing requirements will reduce the winner's payoff ex post, and increase the loser's; they therefore reduce v, and our model indicates that this will reduce delays. The obvious economic concern is whether these rules reduce the incentive to develop a technology in the first place; I address this question next.
6. Effects of v on Incentives to Improve Proposals

Does reducing v reduce the winner's rewards and thus reduce the incentives to produce a good system? Equation (6), which gives a formula for u'(q), tells us something about a firm's incentive to improve (the distribution of) the quality of its proposals. Assuming that the firm's rival cannot observe the firm's quality improvements, it will continue to play the same concession strategy, and (6) offers a point-specific estimate of the gain from improving quality. This can be combined with the marginal effect on the distribution of q from some effort variable.[26]
Perhaps more intuitively, effort that shifts q up by dq, whatever the realization of the random q, will have value E[u'(q)] dq. Thus we take E[u'(q)] as a measure of the incentive to improve quality. In our example, this incentive is
\[
I \equiv E[u'(q)] = \frac{(1+a)^2\,(1+v)}{(a-1)\,(2+v)\,[2+(a-2)(v+2)]}.
\]

[24] Appendix I, "ANSI's Patent Policy", in ANSI (1987). ANSI does not mention copyright protection, presumably since historically copyright did not protect things that might be needed for compliance to a standard. Recently, however, this has begun to change, since much software is protected by copyright, and since compatibility at the user interface often requires that a "look and feel" be imitated. See for instance Menell (1987).

[25] International Organization for Standardization, Directives, 1989, Part 2 ("Methodology for the Development of International Standards"), Appendix A.
Higher v raises W (the winner captures a greater fraction of the gains when its system is finally adopted), but this event is delayed longer. In fact,
\[
\frac{dI}{dv} = \frac{(a+1)^2}{a-1}\; \frac{2 - v(a-2)(v+2)}{(v+2)^2\,[-2+2a-2v+av]^2}.
\]
Thus increasing v increases the mean incentive to improve q (i.e., dI/dv > 0) if and only if v(v + 2) < 2/(a − 2): that is, if v is small enough. We also note that when a is large (a concentrated distribution of quality), increasing v is unlikely to increase quality incentives.

Proposition 5. Increasing vested interest in the war of attrition increases incentives to improve q only if vested interest is small enough, and if the distribution of quality is diffuse enough (a relatively small).

I conclude that, to the extent one can judge from a special model, imposing an intellectual-property rule such as the standards organizations use need not reduce participants' incentives to innovate.

Relative Incentives with Random Choice

Consider the corresponding quality incentive facing a predetermined standard-setter, i.e., a firm that knows in advance that its proposal will be adopted. Its payoff is of course simply W q, so its incentive to increase q slightly is just W = (v + 1)/(v + 2). The ratio of this to the incentive I for each firm in the war of attrition (derived above) is
\[
\frac{I_{RC}}{I_{WOA}} = \frac{(a-1)\,[2+(a-2)(v+2)]}{(a+1)^2}. \tag{12}
\]
For small v, each firm has stronger incentives to innovate under the war of attrition than has a predetermined standard-setter if and only if a is small enough (a < 6 approximately). For large v the incentives are stronger for a predetermined standard-setter.

[26] Thus, suppose that player 1 believes that its rival, player 2, has a distribution of q as given above, and that player 2 will play a concession strategy as if it believes that player 1 also has this distribution. Suppose however that player 1 chooses an effort variable e that affects the distribution of its quality q, which it (but not player 2) will observe before the war of attrition takes place. Then the gross incentive to increase e is given by the change in its expected payoff,
\[
\int_{q_{\min}}^{\infty} u(q)\, dF(q;e) = \Bigl[\, u(q)\,F(q;e) \,\Bigr]_{q_{\min}}^{\infty} - \int_{q_{\min}}^{\infty} F(q;e)\, u'(q)\, dq,
\]
and the first term on the right is independent of e provided that the support of q does not change; hence the gross incentive to increase e is
\[
-\int_{q_{\min}}^{\infty} \frac{\partial F(q;e)}{\partial e}\, u'(q)\, dq.
\]
Our other interpretation of "random choice," as a very rough-and-ready choice between systems once they are presented, gives each participant a quality incentive of W/2. Thus each one's incentive, relative to what it would be in the war of attrition, is half of (12). For small v, incentives are greater under the war of attrition (for all a); for large v, they are greater under random choice.

Proposition 6. For large v, incentives to improve quality are greater under random choice than in the war of attrition. For small v, the war of attrition provides more incentives than does a symmetric random choice; and if in addition a is small enough (the quality distribution is spread out), the war of attrition provides more incentives than a predetermined standard-setter faces.
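The incentive comparisons of this section are easy to explore numerically. The sketch below is my own illustration for the section 4 example: it evaluates I, flags whether v is below the threshold at which dI/dv changes sign, and reports the ratio (12) of a predetermined standard-setter's incentive to the war-of-attrition incentive.

```python
# Evaluate the section 6 incentive expressions for the section 4 example.
def I_woa(a, v):
    """Mean incentive E[u'(q)] in the war of attrition (text expression)."""
    return (1 + a) ** 2 * (1 + v) / ((a - 1) * (2 + v) * (2 + (a - 2) * (v + 2)))

def ratio_rc_over_woa(a, v):
    """Equation (12): predetermined standard-setter's incentive W over I."""
    W = (v + 1.0) / (v + 2.0)
    return W / I_woa(a, v)

a = 4.0
for v in [0.1, 0.5, 1.0, 3.0]:
    rising = v * (v + 2) < 2.0 / (a - 2)    # condition for dI/dv > 0
    print(v, round(I_woa(a, v), 4), rising, round(ratio_rc_over_woa(a, v), 3))
```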
7. Non-Participants' Interests
Observers of formal standardization often fear that the interests of users, who typically
do not themselves participate in the standards process, may therefore be poorly served. In
this section I use the modeling framework above to think about how users' and participants'
interests may differ, and thus where we might be concerned about representation.
In very broad terms, the problem may not be very severe, provided that active par-
ticipants indeed want a standard. Nonparticipants (users) typically want a standard too.
Users want the standard promptly; so, in this case, do vendors. Users want a high-quality standard; so, presumably, do vendors. Thus participants and nonparticipants have similar lists of objectives. However, the tradeoffs among these objectives may differ considerably between users and vendors. In our notation, participants' payoffs in aggregate are equal to (W + L) q e^{-rt}; nonparticipants' might be represented as (1 − W − L) f(q) e^{-rt}, where we remove the normalization W + L ≡ 1 used in discussing participants' incentives above, and where the function f(·) might be nonlinear. Thus the two groups' incentives differ in several possible ways:
Direct Costs
The most obvious asymmetry is the direct costs of the process. The fact that participants
bear these costs and nonparticipants do not leads to several possible disagreements. First,
there may be a public-good or free-rider problem: if participants' rents are too small,
nobody may want to bear the costs of participating, even though a standard would be
desirable. Second, if working faster has a greater °ow cost, the model suggests that par-
ticipants should be roughly indi®erent to this (since it is screening that determines time to
agreement), while nonparticipants will urge more speed. Third, standardizing in advance
may increase direct costs, because they are borne earlier, and they are often borne when
delay might reveal that there will be no market or that a completely di®erent approach
is appropriate; this suggests that if active participants set the rules, they will be biased
against working in advance.
In my model I have assumed that the cost of delay is primarily the delay in getting the
bene¯ts of a standard. If direct costs are important, a slightly di®erent modeling approach
is appropriate, including such costs; David and Monroe (1994b) give such an approach.
Rent-Shifting
Participants want both W and L to be large, although they also care about the balance
between them; nonparticipants (at least once the systems are developed) would prefer both
W and L to be small, as well as wanting v to be small (i.e., W ≈ L).

How might participants design or influence the process so as to increase their joint
rents W + L? They might set substantial license fees for the standard or for technology
embodied in the standard. They might perhaps favor a technology that (given its quality)
has a demand structure that lets an oligopolistic industry extract a relatively large fraction
of the social surplus. They might use the standards process to exclude rent-destroying new
technology.
I suspect that such actions, if jointly undertaken in order to increase joint rents, would
be regarded in the United States as antitrust violations; in any case, if I am correct that
the standards community generally is very apprehensive about antitrust, they would have
some motive to steer clear of such acts. Nevertheless, to the extent that active participants
choose the rules, some vigilance in this regard is appropriate.
Value of Additional Quality
If f(·) does not take the form f(q) ≡ f q, i.e., if it is not linear (through the origin), then participants and nonparticipants have different tradeoffs between quality and speed of the process.[27] For instance, if f(·) is less steep than this, it means that nonparticipants want some standard but do not care very much about its quality, relative to participants' preferences. This is an incidence question: where do the incremental rents from higher quality accrue, and does this differ according to the level of quality? The answer will depend on the details of user tastes and of competition among vendors.

Closely allied to this question is the tradeoff between speed and completeness of a standard. If an incomplete standard (or "model") can be adopted relatively easily and therefore quickly, is it worth the extra delay in order to make the standard more complete, and thus either ensure full interoperability or at least reduce the cost and increase the quality of converters? Again, this is an incidence question: who bears the costs of converters? For instance, if the costs of converters fall primarily on users while the costs of delays are shared between vendors and users, we would suspect that the vendors are inclined to set the rules in such a way as to over-use converters and under-use standardization, relative to the overall efficient solution. The side that has higher proportional losses is less ready to accept speedy incomplete standardization.[28]
[27] This will also be true, of course, if they have different discount rates, but this seems less interesting.

[28] Some surprising results can arise in such an analysis. For example, in the special duopoly model of Farrell and Saloner (1992), less-efficient converters hurt firms' profits but actually (because of oligopoly price effects) help consumers; thus, if a stricter model makes more efficient converters possible, firms are inclined to try too hard to reach agreement on a strict standard. Cheaper converters, on the other hand, help both firms and consumers; the question therefore becomes whose surplus is more dramatically affected by cheaper converters. That is, which is larger: the elasticity of consumer surplus with respect to converter cost, or the same elasticity for profits? Calculations from equations (15) and (16) of Farrell-Saloner show that the comparison is ambiguous: so, if a stricter model makes cheaper converters possible, it is ambiguous whether firms go too far or not far enough.

8. Conclusion

In evaluating rules for formal standardization, both discrimination and speed count. Voluntarism and a concern for consensus lead to the war of attrition, which discriminates well in our simple model but is slow. Moreover, a more realistic model might darken the rosy conclusion of Proposition 2: willingness to wait may be imperfectly correlated with quality.
For instance, as Katz and Shapiro (1985) showed, the desire for standards is likely to vary
with market share; thus those most willing to concede may be the small firms rather than (as in my model) those with mediocre systems.

In cases where vested interest is important, it might be more efficient to stress speed even at the expense of screening for quality, perhaps somehow moving towards a rapid but less careful choice. Tentative calculations suggest that effects on development incentives might even be favorable and would probably not be disastrous.

Where vested interest is important and where the quality of proposals is likely to be similar, it may also be more efficient to have a predetermined standard-setter. However, the benefits of this institution are asymmetrically distributed: the standard-setter gains disproportionately.
REFERENCES
American National Standards Institute (ANSI), "Procedures for the Development and Coordination of
American National Standards," approved by ANSI Board of Directors, 9/9/1987.
American National Standards Institute (ANSI), "Guide for Accelerating the Development of American
National Standards," leaflet, n.d.
American National Standards Institute (ANSI), "Guidelines for Intellectual.........
Arthur, W. Brian, "Competing Technologies, Increasing Returns, and Lock-In by Historical Small
Events," Economic Journal 99, March 1989, 116-131.
Berg, John L., and Harald Schumny, An Analysis of the Information Technology Standardization
Process. Amsterdam: North-Holland, 1990.
Besen, Stanley, "European Telecommunications Standards Setting: A Preliminary Analysis of the
European Telecommunications Standards Institute", Telecommunications Policy, 1991.
Besen, Stanley, and Joseph Farrell, "The Role of the ITU in Telecommunications Standards-Setting:
Pre-Eminence, Impotence, or Rubber Stamp," Telecommunications Policy, August 1991.
Besen, Stanley, and Leland Johnson, "Compatibility Standards, Competition and Innovation in the
Broadcasting Industry," RAND report R-3453-NSF (1986).
Besen, Stanley, and Garth Saloner, "The Economics of Telecommunications Standards," in Changing
the Rules: Technological Change, International Competition, and Regulation in
Telecommunications. R. Crandall and K. Flamm, eds. Washington: The Brookings Institution,
1989.
Bliss, Christopher, and Barry Nalebuff, "Dragon Slaying and Ballroom Dancing: The Private Supply of
a Public Good," Journal of Public Economics 25 (1984), 1-12.
Bolton, Patrick, and Joseph Farrell, "Decentralization, Duplication, and Delay," Journal of Political
Economy, 1990.
Cargill, Carl, Information Technology Standardization. Digital Press, 1989.
Commission of the European Communities, "Action for faster technological integration in Europe"
("Green Paper"), Official Journal of the European Communities, 28 January 1991.
Commission of the European Communities, "Standardization in the European Economy (Follow-up to
the Commission Green Paper ...)", COM(91) 521 final, mimeo, Brussels: Office for Official
Publications of the European Communities, 16 December 1991. Catalogue number CB-CO-
91-580-EN-C.
Crane, Rhonda, The Politics of International Standards: France and the Color TV War. Ablex, 1979.
David, Paul, "Clio and the Economics of QWERTY," American Economic Review (1985).
David, Paul, and Shane Greenstein, "The Economics of Compatibility Standards: An Introduction to
Recent Research," Economics of Innovation and New Technology 1 (1990), 3-42.
Dorros, Irwin, "Can Standards Help Industry in the United States to Remain Competitive in the
International Marketplace?", in National Research Council (1990).
Economides, Nicholas, "Variable Compatibility without Network Externalities," mimeo, New York
University, 1989.
Ergas, Henry, "Information Technology Standards: The Issues." [Special Report] Telecommunications,
September 1986, pp 127-133.
Farrell, Joseph, "Standardization and Intellectual Property," Jurimetrics Journal, 1989.
Farrell, Joseph, and Garth Saloner, "Standardization, Compatibility and Innovation," Rand Journal of
Economics 16 (1985) 70-83.
Farrell, Joseph, and Garth Saloner, "Compatibility and Installed Base: Innovation, Product
Preannouncement and Predation," American Economic Review 77 (December 1986b).
Farrell, Joseph, and Garth Saloner, "Coordination Through Committees and Markets," Rand Journal of
Economics, Summer 1988.
Farrell, Joseph, and Garth Saloner, "Converters, Compatibility, and the Control of Interfaces," Journal
of Industrial Economics 40, March 1992.
Farrell, Joseph, and Carl Shapiro, "Standard-Setting in High-Definition Television," Brookings Papers
on Economic Activity: Microeconomics, 1992, 1-93.
Federal Trade Commission, "Standards and Certification", Final Staff Report (April) and Report of the
Presiding Officer on Proposed Trade Regulation Rule (June). 1983.
Fudenberg, Drew, and Jean Tirole, "A Theory of Exit in Duopoly", Econometrica, 54, 1986, 943-960.
Gabel, H. Landis, "Open Standards in the European Computer Industry: The Case of X/OPEN," in
Gabel, ed., Product Standardization and Competitive Strategy, North-Holland, 1987.
Gantz, John, "Standards: What they are, what they aren't," Networking Management, May 1989.
Ghemawat, Pankaj, and Barry Nalebuff, "Exit", Rand Journal of Economics, 16 (1985) 184-194.
Harris, Charon J., "Advanced Television and the Federal Communications Commission," Federal
Communications Law Journal, 1992.
International Electrotechnical Commission and International Organization for Standardization, Directives (2
volumes). Geneva.
International Electrotechnical Commission, Annual Report, 1991. Geneva.
Katz, Michael, and Carl Shapiro, "Network Externalities, Competition, and Compatibility," American Economic
Review 75 (1985), 424-440.
Katz, Michael, and Carl Shapiro, "Technology Adoption in the Presence of Network Externalities," Journal of
Political Economy 94 (1986a), 822-841.
Katz, Michael, and Carl Shapiro, "Product Compatibility Choice in a Market with Technological
Progress," Oxford Economic Papers 38 (1986b) 146-165.
Katz, Michael, and Carl Shapiro, "Product Introduction with Network Externalities," Journal of
Industrial Economics, 40 (1992) 55-84.
Kolodziej, Stan, "Egos, Infighting, and Politics: Standards Process Bogged Down," Computerworld,
September 7, 1988, 17-22.
Lee, John A.N., "Response to the Federal Trade Commission's Proposed Ruling on Standards and
Certification," Communications of the ACM, 24, June 1981.
Levy, J.D., Diffusion of Technology and Patterns of International Trade: The Case of Television
Receivers. PhD thesis, Yale University, 1981.
Liebowitz, S.J., and S.E. Margolis, "The Fable of the Keys," Journal of Law and Economics, 1990.
Menell, Peter, "Tailoring Legal Protection for Computer Software," Stanford Law Review, 39 (1987)
1329-1372.
Myerson, Roger, and Mark Satterthwaite, "Efficient Mechanisms for Bilateral Trading," Journal of
Economic Theory, 29 (1983) 265-281.
National Research Council, Crossroads of Information Technology Standards. Washington: National
Academy Press, 1990.
Rockwell, William H., "Liability Concerning Voluntary Standards Activities in the US," Address at the
EDI Letters of the Law Seminar, Dallas, Feb. 15, 1990.
Saloner, Garth, "Economic Issues in Computer Interface Standardization," Economics of Innovation
and New Technology, 1 (1990) 135-156.
Samuelson, Pamela, "CONTU Revisited: The case against copyright protection for computer
programs," Duke Law Journal, 1984, 663-769.
Sirbu, Marvin, and Laurence Zwimpfer, "Standards Setting for Computer Communication: The Case
of X.25," IEEE Communications Magazine, 23, March 1985, 35-45.
Swann, G.M.P., "Standards and the Growth of a Computer Network," in Berg and Schumny, op.cit.,
1990.
Thompson, G., "Intercompany Technical Standardization in the Early American Automobile Industry,"
Journal of Economic History, Winter 1954.
U.S. Congress, Office of Technology Assessment, Global Standards: Building Blocks for the Future.
March 1992.
Wagner, Mitch, "IEEE Hopes to Revive GUI Drive," UNIX Today!, June 25, 1990, page 48.
Weiss, Martin, and Marvin Sirbu, "Technological Choice in Voluntary Standards Committees: An
Empirical Analysis," Economics of Innovation and New Technology 1 (1990) 111-134.
Weiss, Martin, and R. T. Toyofuku, "Free Ridership in the Standards-Setting Process: The Case of
10BaseT," Technical Report, University of Pittsburgh, 1993.