
An Empirical Guide to Hiring Assistant Professors in Economics

Abstract

We study the research productivity of new graduates of top Ph.D. programs in economics. We find that class rank is as important as departmental rank as a predictor of future research productivity. For example, the best graduate from UIUC or Toronto in a given year will have roughly the same number of American Economic Review (AER) equivalent publications at year six after graduation as the number three graduate from Berkeley, U. Penn or Yale. We also find that the research productivity of graduates drops off very quickly with class rank at all departments. For example, even at Harvard, the median graduate has only 0.04 AER-equivalent papers at year six, an untenurable record at almost any department. These results provide guidance on how much weight to give to place of graduation relative to class standing when hiring new assistant professors. They also suggest that even the top departments are not doing a very good job of training students to be successful research economists, except for those at the top of their class.
Vanderbilt University Department of Economics Working Papers, VUECON-13-00009
1. Introduction
Top departments in economics are able to choose their new assistant professors from among the top
graduates of other top departments. At lesser departments, there is always a debate about whether it is
better to hire lower ranked graduates from top departments, or the best graduates from lower ranked
departments. Surely the worst Ph.D. out of Harvard or Chicago in a given year should be avoided, but
what about the tenth best? On the other hand, even if we believe the recommendations claiming that a job
candidate out of Ohio State or Duke is the best student they have produced in five years, is this enough to
make it likely that we would be able to tenure the candidate in six years?
The main purpose of this paper is to present some data to help guide recruiting committees in balancing
the rank of the department from which a candidate graduates and the candidate's rank within his
graduating class. The general message is that class rank matters much more than we might have guessed.
The top graduates of programs in the 10 to 30 range often are quite successful in establishing a tenurable
record by the end of their sixth year. On the other hand, the data suggest that not only should one avoid
the worst graduate out of Harvard or Chicago, but also the median and even higher ranked candidates
depending upon what one's department sees as a tenure research record.
These data also show that the research productivity of new Ph.D.'s from even top departments drops off
very rapidly with class rank. To the extent that the mission of top graduate programs is to turn their
students into the next generation of research economists,1 we are largely failing except for the top 15-25%
of each graduating class. Given the high quality of applicants and the intense competition to gain
admission to top 10 to 30 programs, one has to wonder why the great majority of these promising young
students ultimately do not seem to benefit from the training they receive. We conclude the paper with
more discussion of why this might be.
2. Data
This study follows up on Conley, Crucini, Driskill, and Önder (2013) in which we examined recent trends
in publication rates for young scholars. To carry out this analysis, we constructed a panel dataset
consisting of two parts: a census of Ph.D. recipients from academic institutions in the US and Canada who
received their economics Ph.D.’s between 1986 and 2000 and a complete record of the journal
publications of these individuals for the years 1985 to 2006 in EconLit listed journals. Pooling all years,
the panel contains 14,271 economics Ph.D.’s and 368,672 peer-reviewed papers. We refer the reader to
Conley et al. (2013) for more details regarding the nature and origin of these data.
Of course, raw counts of publications are imperfect measures of the research productivity of individual
scholars because of the variation in the quality of those publications. We therefore use journal quality
1 Siegfried and Stock (1999) point out that economics Ph.D. programs lack "product differentiation" in the sense that most
of these programs can be claimed to be designed primarily to train researchers and lack a niche in meeting the expectations
of prospective Ph.D.'s who will take jobs in business or industry.
indexes from Kalaitzidakis, Mamuneas and Stengos (2003) to convert each raw publication into a number
of American Economic Review (AER) equivalent papers. We also discount this by the number of
coauthors on a given paper. Thus, to be more precise, if a graduate in our sample publishes a paper with C
coauthors, containing a proportion P as many pages as the average AER article, in a journal with a quality
index of Q relative to the AER, then the graduate is credited with PQ/C AER-equivalent publications.
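The PQ/C conversion above can be sketched in a few lines of code. The page counts and quality index in the example are hypothetical illustrations, not values taken from the Kalaitzidakis, Mamuneas and Stengos (2003) index.

```python
# Sketch of the AER-equivalence conversion described in the text.
# Example inputs are hypothetical, not actual journal index values.

def aer_equivalent(pages, avg_aer_pages, quality_index, n_coauthors):
    """Credit for one paper: PQ/C, where P is length relative to the
    average AER article, Q is the journal quality index relative to
    the AER, and C is the number of coauthors."""
    p = pages / avg_aer_pages
    return p * quality_index / n_coauthors

# A 20-page paper with one coauthor in a journal rated at half the AER,
# assuming the average AER article runs 25 pages:
print(round(aer_equivalent(20, 25, 0.5, 2), 2))  # 0.2
```

Note how coauthorship halves the credit here; a solo-authored paper in the same journal would earn 0.4.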
Finally, we focus our attention on graduates of the top 30 ranked departments. We use a department
ranking developed by Coupe (2003) based on faculty research productivity to choose the top 30 group.
Which departments are "top 30" is open to debate, of course, and regardless of how the ranking is
established, departments would have moved in and out of this group over the fifteen year interval we
study. Given this, it would be better to think of our “top 30” departments as representative of “top
departments” in general. The non-top 30 departments we use for comparison are a set of 30 Ph.D.
granting departments not in the top group.
3. Results
One of the major findings of Conley, et al. (2013) was that research productivity dropped off very quickly
with the top 1% of publishing research economists across the whole sample producing 13% of the AER
equivalent papers, and the top 20% producing 80%. This leaves unanswered exactly who these most
productive scholars are. Does this group contain only graduates of top programs or does it include many
graduates from lesser departments? Are most graduates of top programs likely to become one of these
highly productive scholars, or will most join the other 80% who produce comparatively little research? To
address this, we took each top 30 department, combined all their graduates into a single sample, and
looked at their total research productivity at the end of the sixth year after graduation. We did the same
for graduates of non-top 30 departments as one combined group.
Table 1 shows the number of AER equivalent publications that appear on the CV's of graduates of
each department at the end of their sixth year after graduation by productivity percentile. For example,
the Harvard graduates in the 95th percentile of research productivity relative to their classmates
published the equivalent of 2.36 or more AER papers in this period.
It should be noted that this table identifies the ex-post top graduates as determined by actual measured
productivity which may not necessarily be the same as the ex-ante top graduates as rated by the faculties
of their home departments as they went to the job market. Unfortunately, we have no way of ascertaining
such ex-ante rankings. While it would be interesting to know whether or not students fulfilled the
expectations of their supervisors, our data do not allow us to explore this question. However, we would
expect that our colleagues make their best, though somewhat noisy, estimates of who the best students
they are sending to market in a given year are, and that recruiting committees make similar guesses and
judgments. Thus, while the winners are unlikely to be perfectly identified ex-ante, hiring the person that
you guess is the third best graduate of MIT this year should give something close to the outcome in the
table below, at least in expectation.
Table 1. Number of AER-Equivalent Publications of Graduating Classes from 1986 to 2000
Percentiles of graduates' AER-equivalent publications 6 years after Ph.D.

Department     99th  95th  90th  85th  80th  75th  70th  65th  60th  55th  50th  45th  40th
Harvard        4.31  2.36  1.47  1.04  0.71  0.41  0.30  0.21  0.12  0.07  0.04  0.02  0.01
Chicago        2.88  1.71  1.04  0.72  0.51  0.33  0.19  0.10  0.06  0.03  0.01  0.01  0
U Penn         3.17  1.52  1.01  0.60  0.40  0.27  0.22  0.13  0.06  0.03  0.02  0.01  0
Stanford       3.43  1.58  1.02  0.67  0.50  0.33  0.23  0.14  0.08  0.05  0.03  0.02  0.01
MIT            4.73  2.87  1.66  1.24  0.83  0.64  0.48  0.33  0.20  0.12  0.07  0.04  0.02
UC Berkeley    2.37  1.08  0.55  0.35  0.20  0.13  0.08  0.06  0.04  0.03  0.02  0.01  0.01
Northwestern   2.96  1.92  1.15  0.93  0.61  0.47  0.30  0.21  0.14  0.10  0.06  0.03  0.01
Yale           3.78  2.15  1.22  0.83  0.57  0.39  0.19  0.12  0.08  0.05  0.03  0.02  0.01
UM Ann Arbor   1.85  0.77  0.48  0.29  0.17  0.09  0.05  0.03  0.02  0.01  0.01  0     0
Columbia       2.90  1.15  0.62  0.34  0.17  0.10  0.06  0.02  0.01  0.01  0.01  0     0
Princeton      4.10  2.17  1.79  1.23  1.01  0.82  0.60  0.45  0.36  0.28  0.19  0.12  0.09
UCLA           2.59  0.89  0.49  0.26  0.14  0.06  0.04  0.02  0.02  0.01  0     0     0
NYU            2.05  0.89  0.34  0.20  0.07  0.03  0.02  0.01  0.01  0.01  0     0     0
Cornell        1.74  0.65  0.40  0.23  0.12  0.07  0.05  0.04  0.02  0.01  0.01  0.01  0
UW Madison     2.39  0.89  0.51  0.31  0.20  0.11  0.06  0.04  0.03  0.02  0.01  0.01  0
Duke           1.37  1.03  0.59  0.49  0.23  0.19  0.11  0.08  0.05  0.04  0.02  0.01  0
Ohio State     0.69  0.41  0.13  0.07  0.04  0.02  0.02  0.01  0.01  0.01  0     0     0
Maryland       1.12  0.37  0.23  0.10  0.07  0.05  0.03  0.02  0.01  0.01  0.01  0     0
Rochester      2.93  1.94  1.56  1.21  1.14  0.98  0.70  0.51  0.34  0.27  0.17  0.12  0.06
UT Austin      0.92  0.53  0.21  0.06  0.05  0.02  0.01  0.01  0     0     0     0     0
Minnesota      2.76  1.20  0.68  0.46  0.29  0.21  0.12  0.08  0.04  0.02  0.01  0.01  0
UIUC           1.00  0.38  0.21  0.10  0.06  0.04  0.03  0.02  0.01  0.01  0.01  0     0
UC Davis       1.90  0.66  0.42  0.27  0.12  0.08  0.05  0.03  0.02  0.02  0.01  0     0
Toronto        3.13  1.85  0.80  0.61  0.29  0.19  0.15  0.10  0.07  0.05  0.03  0.02  0.02
UBC            1.51  1.05  0.71  0.60  0.52  0.45  0.26  0.23  0.22  0.15  0.11  0.08  0.05
UCSD           2.29  1.69  1.17  0.88  0.74  0.60  0.46  0.34  0.30  0.20  0.18  0.10  0.06
USC            3.44  0.34  0.14  0.09  0.03  0.02  0.02  0.01  0.01  0     0     0     0
Boston U       1.59  0.49  0.21  0.08  0.05  0.02  0.02  0.01  0     0     0     0     0
Penn State     0.93  0.59  0.25  0.12  0.08  0.06  0.02  0.02  0.01  0.01  0.01  0     0
CMU            2.50  1.27  1.00  0.86  0.71  0.57  0.52  0.29  0.21  0.13  0.09  0.08  0.05
Non-top 30     1.05  0.31  0.12  0.06  0.04  0.02  0.01  0.01  0     0     0     0     0
Table 1 makes it clear that there is a rapid drop-off of research productivity among graduates of every
department as class rank decreases. Even at Harvard, a student has to be in the 85th percentile or above
to be likely to publish even a single AER equivalent paper in six years. The median Harvard graduate
publishes only 0.04 AER papers. On the other hand, the 90th percentile of CMU and UCSD graduates and
the 80th percentile of Rochester graduates can also be expected to have one AER paper or more by year
six. Going further down this table, we see that one would be better off hiring a 95th percentile graduate of
a typical non-top 30 department than the 70th percentile graduate of Harvard, Chicago, U. Penn,
Stanford or Yale, or an 80th percentile graduate of Berkeley, Michigan, NYU, UCLA or Columbia.
Since the main point of this paper is to give some guidance to hiring committees, it would be useful to
spend a few lines on how a department's tenure standard translates into AER equivalent papers. The
following is a list of possible publication records that are all roughly equivalent to one AER paper.
Obviously, this can be scaled up or down if one has higher or lower standards.
One in the American Economic Review or Econometrica
Two in the Journal of Econometrics, Econometric Theory, or Journal of Economic Theory
Three in the Journal of Monetary Economics or Games and Economic Behavior
Four in the European Economic Review, Review of Economics and Statistics, International
Economic Review, or Economic Theory
Five in the Economic Journal, Journal of Public Economics, or Economics Letters
Six to ten in high-quality field journals
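A simple scorer can be built from the equivalence list above. The per-journal weights below are back-of-envelope readings of that list (one paper in a journal worth 1/N of an AER paper), not the journal index values actually used in the paper.

```python
# Rough AER-equivalence scorer implied by the equivalence list in the text.
# Weights are illustrative readings of the list, not the actual index.

IMPLIED_WEIGHTS = {
    "American Economic Review": 1.0,
    "Econometrica": 1.0,
    "Journal of Econometrics": 1 / 2,
    "Journal of Economic Theory": 1 / 2,
    "Journal of Monetary Economics": 1 / 3,
    "Games and Economic Behavior": 1 / 3,
    "European Economic Review": 1 / 4,
    "Economics Letters": 1 / 5,
}

def aer_record(journals):
    """Total AER-equivalent credit for a list of journal names; anything
    unlisted is treated as a field journal at roughly 1/8."""
    return sum(IMPLIED_WEIGHTS.get(j, 1 / 8) for j in journals)

cv = ["Economics Letters", "Games and Economic Behavior",
      "Journal of Economic Theory"]
print(round(aer_record(cv), 2))  # 1.03
```

On this rough scale, the three-paper record above just clears a one-AER-paper tenure standard.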
Different departments produce different numbers of new Ph.D.'s every year, which makes the percentiles
in Table 1 somewhat difficult to interpret. What recruiting committees really need to know is how far down
in class rank at a given department they should consider given their own tenure standards. Table 2
addresses this directly. The numbers give the average number of new Ph.D.'s coming out of a given
department each year that achieve a research record of at least a given number of AER equivalent papers
by the end of year six. Thus, if your department's tenure standard is one AER paper, you should not hire
below the five best people out of MIT, the two best from Berkeley, Yale or U. Penn., or the top candidates
from Columbia or UCLA in an average year.
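The link between the two tables can be sketched as a back-of-envelope calculation: the share of a class at or above a standard (read off Table 1's percentile thresholds) scaled by the average cohort size from Table 2. The coarse read-off between tabulated percentiles is our assumption for illustration, not the authors' exact procedure.

```python
# Back-of-envelope link between Tables 1 and 2, using the Harvard row of
# Table 1 and Harvard's average cohort size from Table 2.

percentiles = [99, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50, 45, 40]
aer_at_pct = [4.31, 2.36, 1.47, 1.04, 0.71, 0.41, 0.30,
              0.21, 0.12, 0.07, 0.04, 0.02, 0.01]
cohort_size = 30.5  # average Harvard cohort per year (Table 2)

def grads_meeting(standard, pcts, thresholds, cohort):
    """Estimate graduates per year whose year-six record meets `standard`."""
    share = 0.0
    for p, t in zip(pcts, thresholds):
        if t >= standard:
            share = (100 - p) / 100  # deepest tabulated percentile clearing it
    return share * cohort

# A standard of one AER paper is cleared down to roughly the 85th percentile,
# i.e. about 15% of a 30.5-person cohort, close to Table 2's entry of 4.6.
print(round(grads_meeting(1.0, percentiles, aer_at_pct, cohort_size), 1))
```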
Table 2: The Number of Graduates each year for each Department who Publish at Least a Given Number
of AER Equivalent Papers within 6 years
AER Papers 2.5 2 1.5 1.25 1 0.75 0.5 0.25 0.1 Av. Cohort Size
Harvard 1.3 2.1 2.9 3.9 4.6 5.8 7.2 10.1 12.7 30.5
Chicago 0.5 0.9 1.7 2.1 3.1 4.0 5.6 7.5 9.5 27.3
U Penn 0.4 0.7 1.1 1.3 1.9 2.3 3.5 5.5 7.1 19.3
Stanford 0.7 0.9 1.4 1.7 2.7 3.4 5.0 7.4 9.3 24.7
MIT 1.5 2.0 3.1 3.8 4.7 5.4 7.5 9.9 11.9 25.5
Berkeley 0.3 0.5 0.9 1.1 1.8 2.1 3.1 5.2 7.9 28.0
Northwestern 0.3 0.5 0.8 0.9 1.3 2.0 2.5 3.3 4.5 10.1
Yale 0.7 0.9 1.3 1.5 1.9 2.5 3.5 4.5 5.9 15.7
UM Ann Arbor 0.0 0.1 0.4 0.5 0.7 1.0 1.8 3.3 4.7 19.1
Columbia 0.3 0.3 0.5 0.7 1.1 1.6 2.3 3.1 4.3 17.4
Princeton 0.7 1.2 2.0 2.3 3.3 4.4 5.4 7.6 9.4 16.2
UCLA 0.2 0.2 0.5 0.5 0.8 1.1 1.7 2.7 3.9 17.9
NYU 0.0 0.1 0.1 0.3 0.4 0.6 1.0 1.6 2.1 11.7
Cornell 0.1 0.1 0.3 0.3 0.4 0.7 1.3 2.4 3.8 17.3
UW Madison 0.0 0.3 0.5 0.6 1.1 1.7 2.6 4.3 6.4 25.0
Duke 0.0 0.0 0.0 0.2 0.4 0.6 1.1 1.5 2.4 7.8
Ohio State 0.0 0.0 0.0 0.1 0.1 0.1 0.5 1.1 1.7 15.9
Maryland 0.0 0.1 0.1 0.1 0.3 0.3 0.4 1.3 2.2 13.5
Rochester 0.1 0.3 1.0 1.2 2.1 2.5 3.1 4.1 4.9 8.7
UT Austin 0.0 0.0 0.0 0.0 0.1 0.2 0.6 0.9 1.4 10.3
Minnesota 0.4 0.6 0.8 1.1 1.4 1.9 2.9 4.8 7.1 22.2
UIUC 0.0 0.0 0.1 0.2 0.3 0.4 1.1 2.2 3.9 26.4
UC Davis 0.0 0.0 0.1 0.1 0.1 0.2 0.5 1.0 1.3 6.2
Toronto 0.1 0.2 0.3 0.5 0.5 0.7 1.1 1.5 2.3 6.4
UBC 0.0 0.0 0.1 0.1 0.3 0.4 0.9 1.5 2.3 4.5
UCSD 0.0 0.1 0.5 0.6 0.7 1.2 1.8 2.5 3.4 6.1
USC 0.1 0.1 0.1 0.1 0.1 0.1 0.2 0.4 0.7 4.9
Boston U 0.0 0.1 0.1 0.1 0.2 0.3 0.5 1.1 1.8 12.5
Penn State U 0.0 0.0 0.0 0.1 0.1 0.3 0.5 0.8 1.2 7.1
CMU 0.0 0.1 0.1 0.1 0.2 0.4 0.6 0.8 0.9 2.0
Non-top 30 0.0 0.0 0.1 0.1 0.2 0.3 0.5 1.0 1.8 16.8
This table might also be of some help to top departments. Suppose a department only wants to hire
superstars, which we define as those having published 2.5 or more AER papers by year six. Then the set of
potential job candidates is restricted to the top one or two graduates from Harvard or MIT and the top
graduate from Stanford, Yale or Princeton, if these departments are having a good year. In addition, once
every other year Chicago, U Penn and Minnesota should produce a superstar. Other departments will do
so less frequently. We should note that many people may become stars later in their careers, but only
seven or eight are likely to reveal themselves by the sixth year after graduation.
One final noteworthy pattern emerges from these data. Although a few departments are good at producing
superstars, most departments show a very steep drop-off in quality thereafter. For example, if one
considers the 80th percentile of students and sets a tenure standard of 0.6 AER papers, only graduates of
Harvard, MIT, Northwestern, Yale, Princeton, Rochester, UCSD, and CMU are likely to achieve this level
of productivity. In other words, 80% or more of the graduates of Chicago, U Penn, Stanford, UC Berkeley,
UM Ann Arbor, Columbia, UCLA, NYU, Cornell, UW Madison, Duke, Ohio State, U Maryland, UT
Austin, Minnesota, UIUC, Toronto, UBC, USC, Boston U and Penn State will not have 0.6 AER papers at
the end of six years.
On the other hand, there are a few schools that do relatively better at training students who are not in the
top percentiles. Table 3 gives a set of departmental rankings based on the productivity of different
percentiles of the graduating class. Thus, for the 99th and 95th percentiles of students, MIT graduates
are more productive at year six than those of any other department. If we look at students in the 70th
percentile, however, MIT's ranking drops to fourth. For comparison, the second column gives the
departmental ranking according to Coupe (2003).
Table 3. Department Rankings based on Graduating Cohorts' Publication Performance (1986-2000)

Department     Coupe  99th  95th  90th  85th  80th  75th  70th  65th  60th  55th  50th  45th  40th
Harvard        1      2     2     4     4     5     8     6     8     8     8     8     11    11
Chicago        2      12    8     8     9     10    10    12    13    12    15    17    12    30
U Penn         3      7     11    10    13    12    12    10    10    13    13    14    15    14
Stanford       4      6     10    9     10    11    11    9     9     9     9     10    9     10
MIT            5      1     1     2     1     3     3     4     4     6     6     6     6     6
UC Berkeley    6      17    15    17    16    17    16    16    16    15    14    13    14    12
Northwestern   7      9     6     7     5     7     6     7     7     7     7     7     7     9
Yale           8      4     4     5     8     8     9     11    11    10    11    11    10    8
UM Ann Arbor   9      21    21    20    19    18    19    21    20    20    20    23    21    23
Columbia       10     11    14    15    17    19    18    18    21    22    23    20    30    21
Princeton      11     3     3     1     2     2     2     2     2     1     1     1     2     1
UCLA           12     14    19    19    21    20    22    22    22    21    22    26    26    17
NYU            13     19    20    23    23    24    26    26    27    27    27    30    27    22
Cornell        14     22    23    22    22    21    21    19    18    19    19    15    18    18
UW Madison     15     16    18    18    18    16    17    17    17    17    17    19    16    13
Duke           16     25    17    16    14    15    15    15    15    14    12    12    13    19
Ohio State     17     31    27    30    29    29    27    27    26    24    26    28    24    25
U Maryland     18     26    29    25    25    25    24    23    25    25    21    21    19    27
Rochester      19     10    5     3     3     1     1     1     1     2     2     3     1     2
UT Austin      20     30    25    27    31    27    29    31    31    31    28    27    25    20
Minnesota      21     13    13    14    15    14    13    14    14    16    16    18    17    26
UIUC           22     28    28    26    26    26    25    24    24    26    25    24    28    31
UC Davis       23     20    22    21    20    22    20    20    19    18    18    16    20    28
Toronto        24     8     7     12    11    13    14    13    12    11    10    9     8     7
UBC            25     24    16    13    12    9     7     8     6     4     4     4     4     5
UCSD           26     18    9     6     6     4     4     5     3     3     3     2     3     3
USC            27     5     30    29    27    31    28    28    28    28    30    25    31    15
Boston U       28     23    26    28    28    28    30    29    29    30    31    29    22    24
Penn State     29     29    24    24    24    23    23    25    23    23    24    22    29    16
CMU            30     15    12    11    7     6     5     3     5     5     5     5     5     4
Non-top 30     --     27    31    31    30    30    31    30    30    29    29    31    23    29
This table shows that some departments, like Harvard, MIT, Yale and, to a lesser extent, Chicago and
U. Penn, follow a downward trend in these rankings. That is, they do better at training top students than
middle or lower ranked students in a relative sense. Other departments, such as Rochester, UBC, UCSD
and CMU, do not compete with the top departments in producing the very top research scholars, but turn
out lower ranked students who dominate the similarly ranked graduates of better ranked departments.
For example, Rochester is third best at producing students at the 90th and 85th percentiles, and
thereafter mostly trades the first and second spots with Princeton.
4. Conclusion
The main purpose of this paper is to provide some empirical guidance to departments seeking to hire new
professors in economics. The main conclusions are that class rank matters a great deal and quickly
outweighs the ranking of the department from which a job candidate graduates. It is indeed worthwhile to
look at non-top ranked departments for new hires, though only at their very top students in general. On
the other hand, if a department is only willing to hire superstars in the making, then only the top
candidates from the very top departments should be considered. It is very rare for a non-top 10
department to produce a superstar, at least one who stands out as such at the point that tenure is granted.
Perhaps a more interesting question is how it is that the median Harvard (or any top school's) graduate
can be so bad? To get to Harvard, an applicant has to have great grades, perfect test scores, strong and
credible recommendations, and know how to package all this to stand out to the admissions committee.
Thus, successful candidates must be hardworking, intelligent, well-trained as undergraduates, savvy and
ambitious. Why is it that the majority of these successful applicants, who were winners and did all the
right things up to the time they applied to graduate school, become so unimpressive after they are trained?
Are we failing the students, or are the students failing us?
Three possible answers suggest themselves. First, it might be that what makes a successful research
economist is not well measured by tests and grades. Being hardworking, well-trained and intelligent might
be necessary for success, but by no means sufficient. Perhaps it has more to do with being creative, self-
motivated, or thick-skinned. We don't have good ways of measuring these attributes so it may be that the
admissions system currently used by all departments (even outside of economics) is not gathering the right
information. Second, it might be that nothing succeeds like success. If a new graduate (regardless of
fundamental quality) gets a good first job,2 is well mentored and fostered by his new colleagues, and has
early success in publishing, he may be more likely to have more papers accepted by good journals in the
future. After all, the editors and referees will know that this new submission was written by a bright young
person; everyone says so; look at this first publication.3 There is a kind of virtuous circle in success and a
vicious one in failure. Luck may also play a role in who starts their careers on the high road. Students who
happen to have chosen to work on a topic that is in vogue at the time they graduate are more likely to get
2 Oyer (2006) discusses learning-on-the-job aspects in academic careers and establishes a causal relationship between
landing a research-oriented first job after Ph.D. and life cycle publication productivity.
3 However, a quick data investigation of the relationship between publishing a paper before graduation and productivity
over the six year probationary period shows them to be uncorrelated. That is, publishing a paper before graduation is a
bird in hand, and is an addition to total expected publications at year six. However, it does not predict that a graduate will
publish at a higher rate over the next six years.
good offers and to publish more easily. In other words, publication success may be tied to first jobs and
good luck. Since there is only so much of each to go around and success breeds success, the distribution
of sixth year publications is inevitably very skewed, and not proportional to either the innate quality of the
new graduate or the quality of his training. If this is the case, the outcomes we document derive from the
sociology of the profession and there is little to be done to change it. Success is more of a lottery.
Recruiting committees should hire in trendy topics, but otherwise, graduates hired by good departments
will simply be more successful regardless of their quality. Finally, there may be a kind of positional game
going on that affects both students and professors. The faculty will generally identify the top students in an
entering class and this in turn generates raised expectations and higher confidence in those picked out,
and perhaps the opposite for the rest of the class. Being number six is much like being number sixteen,
but if I am number one or two, I want to hold on to my status and will work harder to do so. Faculty, on
the other hand, seek the best students out, give them more time and attention, and suggest better projects
to them. Thus, it might be better to be the top student in a second tier program than a second tier student
in a top program.
In any event, what these data show is that if the objective of graduate training in top ranked departments
is to produce successful research economists, then we, as a profession, are largely failing. Even at the top
five departments it would be hard to argue that the bottom half of their students are successful. The
number of AER-equivalent publications at year six is below 0.1 in all cases and substantially below it in
most. At the majority of the top 10 departments, 60% of their students fail to meet this standard, and at
the majority of the top 30 departments, 70% fail. A tenure standard of 0.1 AER publications is equivalent
to publishing one paper in a second-tier field journal over six years. It is doubtful that this would pass for
research-active status in many departments, much less result in tenure. We conclude that either we are
failing to identify the characteristics that lead to future success in the admissions process, that our
graduate programs are set up in a way that serves only the best students, or that the nature of the
economics profession is such that it creates only a few winners and many losers. Whichever is correct, it is
largely beyond the power of individual departments to fix. Thus, the best thing a department that wishes
to hire people who are likely to get tenure and contribute to its research ranking can do is to focus on
candidates who are working in trendy areas and are near the top of their respective classes, and not to be
overly impressed by the place from which job candidates receive their degree.
References
Conley, John, Mario Crucini, Robert Driskill, and Ali Sina Önder. 2013. The Effects Of Publication Lags
on Life-Cycle Research Productivity in Economics. Economic Inquiry, 51: 1251–1276.
Coupe, Tom. 2003. Revealed Performances: Worldwide Rankings of Economists and Economics
Departments, 1990-2000. Journal of the European Economic Association, 1: 1309-1345.
Kalaitzidakis, Pantelis, Theofanis P. Mamuneas, and Thanasis Stengos. 2003. Rankings of Academic
Journals and Institutions in Economics. Journal of the European Economic Association, 1(6): 1346-1366.
Oyer, Paul. 2006. Initial Labor Market Conditions and Long-Term Outcomes for Economists. Journal of
Economic Perspectives, 20(3): 143-160.
Siegfried, John J. and Wendy A. Stock. 1999. The Labor Market for New Ph.D. Economists. Journal of
Economic Perspectives, 13(3): 115-134.