Does a player’s “course history” predict performance?

A much-debated topic among golf fans is the relevance of so-called “course history” to a player’s performance in a given week. That is, do specific players tend to play well on specific courses?

Of course, there are intuitive reasons we can come up with to explain why this should, logically, be true. First, the characteristics of certain courses (e.g. length, fairway width, rough length, etc.) should favor players with certain characteristics (e.g. power, accuracy, etc.). Second, golf has a mental component to it; if a player develops a certain level of comfort (or, discomfort) with a given course layout, it makes sense that this higher (or, lower) comfort level will impact their performance at that course.

But, talk is cheap. I can also give you intuitive reasons as to why some players play better on Bermuda greens, or why some players play better when wearing white belts than when wearing black belts. These are theories, and for theories to gain credibility, you need to provide some empirical evidence that corroborates their predictions.

The mere existence of certain players that have a string of good performances at the same course is not, necessarily, strong evidence for the existence of course-player effects. It is true that Luke Donald has played unusually well (compared to his typical performance level) at Harbour Town. This is simply a fact, and can’t be disputed. However, did you know that Henrik Stenson had a great course history at Bay Hill, but played awfully there in 2017? It is easy to focus on the former point, and overlook the latter. The reason why Luke Donald playing well at Harbour Town doesn’t provide indisputable evidence for the course history hypothesis is that it is not based on a large enough sample of rounds (and yes, 25 rounds is still a small sample, especially in golf). Suppose there really are no course-player performance effects; unless everyone plays a very large number of rounds at each course, it would be astonishing if we didn’t find evidence of some golfers playing better, or worse, than usual at specific courses. The logic here is the same as if we had 300 people flip a coin 10 times; some people will get 8-10 Heads, or 8-10 Tails, simply due to the statistical variation inherent to finite samples.

More generally, finding differences among golfers with respect to some statistic can be thought of as a necessary first step to finding a meaningful metric for predicting golf scores. It’s true that if course history, or performance on Bermuda greens, is going to be a successful predictor of player performance, we need to first confirm that there exists substantial variation in the statistic (i.e. if we don’t find any variation in a player’s course-specific scoring averages, then clearly it can’t have any predictive power). But the next, critical, step is to show that this statistic actually predicts scores to some degree. People analyzing sports data love to do this first step (because it’s easy), but the second step isn’t done very often. So next time you see a list of players ranked by a statistic, the first question should be “Is there any evidence that this helps to predict scores?”.

In this article we are going to examine how well a player’s course history predicts their performance. Along the way, we’ll explore how to best predict golf scores, in general. The hope is that the evidence here can be taken as free of any personal bias from us (full disclosure: as anybody who follows us on Twitter knows, we have been on the “course-history is irrelevant” side of this debate).

Let’s get started. First, we want readers who haven’t analyzed golf data before to appreciate how much *random* variation exists in golf scores on the PGA Tour. Below, we’ve plotted the adjusted strokes-gained of two players on Tour from 2012-present. These scores are adjusted for course difficulty, so any remaining differences reflect only differences in golfer performance; that is, an adjusted score from the U.S. Open can be directly compared to an adjusted score from the Sony Open. (Also, from here on, when I use the phrase “raw scores”, or “strokes-gained”, I am referring to this adjusted measure of scores as just defined. See footnote 1 for a primer on how this adjustment works.)

The players in the plot are Dustin Johnson and another player who we’ll keep unnamed for a moment; take a guess at the (average) world rank of this other player during this period.

Notes: Plotted here are event-level averages; round-level data would show even greater variation. Data is from 2012-present. Positive values indicate better performances.

The unnamed player’s scores plotted here belong to Kevin Na; he has been solid in this period, with an average world rank of around 50th-60th. However, when you think of Dustin Johnson and Kevin Na, you likely imagine a wide gap between them with respect to their ability levels. But, with only a quick glance at the graph, it’s not immediately obvious who the better player even is! This highlights the fact that the scores of any individual golfer vary a lot.

Next, we add in our best estimates of Dustin Johnson’s and Kevin Na’s “ability” before each tournament (i.e. the score we expect them to shoot at each point in time – estimated from our model) throughout the time period:

Notes: Data points represent event-level average score. “Ability” is defined here, loosely speaking, as a weighted average of various historical scoring averages (2-year, 2-month, last event). Data is from 2012-present.

When you see the plots of their respective predicted abilities, it does become clear that Dustin Johnson has been the better player. Near the end of the sample period, DJ’s ability is estimated to be about 1 stroke per round better than Na’s; this is actually quite a big difference (relative to the typical differences in our measure of ability between PGA Tour players). However, when plotted alongside their raw scores, this difference looks small compared to the weekly (*random*) variation in an individual player’s scores. This is probably a good time to mention that we are only able to explain (or, successfully predict) about 15% of the variation in golf scores; the rest is unaccounted for! (If instead we were trying to predict round-level scores, this number drops to about 7-8%.)

Moving forward, let’s do one more quick exercise before we get to the analysis of course history. In the graph below we plot a few different scoring averages calculated over different historical time horizons. The goal here is to evaluate different ways of predicting a player’s scores. Graphically, we’ll just focus on Dustin Johnson’s data so things aren’t too crowded:

Notes: “2Y prediction” is plotting DJ’s strokes-gained average over the previous 2 years (from the date of each event), “2M prediction” is plotting his strokes-gained average over the previous 2 months, “Last event prediction” is his strokes-gained average in his most recent event, and finally, “Weighted prediction” is a weighted average of 2-year S.G., 2-month S.G., and last event S.G.; the *weights* are just the coefficients from a linear regression (using all the data, not just Johnson’s).
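To make the “Weighted prediction” concrete, here is a minimal sketch of how it could be computed. This is not our actual pipeline; the file and column names (sg_event, sg_2yr, sg_2mo, sg_last) are hypothetical stand-ins:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per player-event: adjusted strokes-gained for the event
# ('sg_event') plus three pre-event historical averages. All file and
# column names here are hypothetical.
df = pd.read_csv("player_events.csv")

# The fitted coefficients serve as the weights in the "Weighted prediction".
fit = smf.ols("sg_event ~ sg_2yr + sg_2mo + sg_last", data=df).fit()
print(fit.params)

# Weighted prediction for each player-event:
df["weighted_pred"] = fit.predict(df)
```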

So, what’s predicting best? Let’s calculate the average absolute deviation of our various predictions from the realized scores. To do this, we take the absolute value of the difference between each score and its corresponding prediction, and then average these differences. Here’s how the predictions did (this is for the entire sample, not just Johnson’s data):

What method predicts best? Average prediction errors:

  • 2Y prediction: 1.41 strokes
  • 2M prediction: 1.52 strokes
  • Last Event prediction: 1.86 strokes
  • Weighted prediction: 1.39 strokes

(Again, recall that this is all done with event-level averages.) The two main takeaways here are: 1) All the predictions do pretty poorly; the best we can do is miss a player’s average score at an event by 1.4 strokes (that is, this is our average prediction error); and 2) The 2-year strokes-gained prediction does almost as well as the optimal (i.e. “Weighted”) prediction method!
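For concreteness, the error calculation above amounts to something like the following (continuing the hypothetical sketch from earlier):

```python
# Mean absolute prediction error, by method (event-level averages).
for col in ["sg_2yr", "sg_2mo", "sg_last", "weighted_pred"]:
    mae = (df["sg_event"] - df[col]).abs().mean()
    print(f"{col}: {mae:.2f} strokes")
```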

Now, finally to the discussion of the relevance of course history. Up to this point, we have been predicting scores without using any course-player specific variables. The goal is to see whether adding in a player’s course history helps to predict their performance in a given week. So, what should we use as our measure of course history? Evidently, a course history variable defined as the average of a player’s raw scores at a course would be problematic, as this will be correlated with the general ability of the player. That is, at Augusta National, Dustin Johnson will likely have a better historical scoring average than Kevin Na, but that may be simply due to the fact that Johnson is typically better than Na at all courses, and not due to unusually good performances on the part of DJ at Augusta. Therefore, we first need to adjust scores for the ability level of the player at each point in time; we’ll call this the residual score. The residual score is how much better, or worse, a player played in each round compared to their ability level at the time. (See footnote 1 to see how we estimate each player’s ability; if you don’t want to read it, you can basically think of the “Weighted prediction” above as the player’s ability at any point in time. Then, the residual score is equal to the raw score minus this prediction.) Our course history variable is going to be defined as a player’s historical average residual score at the relevant course.
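In code, constructing the residual score and the course history variable might look like this sketch (again with hypothetical column names; here the “Weighted prediction” stands in for ability, whereas our actual residuals come from the model described in footnote 1):

```python
# Residual score: performance relative to estimated ability at the time.
df["residual"] = df["sg_event"] - df["weighted_pred"]

# Course history: average residual over all *prior* visits to the course.
# shift(1) drops the current event so only earlier visits are used.
df = df.sort_values(["player", "course", "date"])
df["course_hist"] = (df.groupby(["player", "course"])["residual"]
                       .transform(lambda s: s.shift(1).expanding().mean()))
```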

In words, we are asking: “Does the fact that Luke Donald has typically played better than expected at Harbour Town from 2010-2015 mean he will play better than expected at Harbour Town in 2016?” 

This is quite a nice approach, because even though Donald’s ability level has dropped off in recent years, we are only looking to see whether he plays better than what we’ve estimated his current form to be. So, Donald may, in terms of raw scores, play worse than he has in the past at Harbour Town in 2016, but if this is still above his current ability level then that would be evidence in favor of the course history hypothesis. (*Only for those who are interested* – for a discussion of why this approach is slightly different from controlling for current ability in a multi-variable regression, see Footnote 2).

Some final details: the estimating data is PGA Tour rounds from 2010-2017. We include all players who played at least 70 rounds in this time period (otherwise we are just bringing in a lot of unnecessary noise with players who’ve only played a few rounds). We predict event-level (or, event*course-level at events with multiple courses) performances using the years 2015-2017. The reason for that is we need to have enough historical years to construct meaningful course-specific scoring averages. To be clear, we predict 2015 scores using 2010-2014 course-specific averages, 2016 scores using 2010-2015 course-specific averages, etc.

The following simple regression is run:

Residual.score_{i} = \beta_{0} + \beta_{1} \cdot Historical.avg.residual.score_{i} + u_{i}

where the regressor is the player’s historical average residual score at the relevant course, and the dependent variable is the player’s average residual score in the current week (or his average at each course in the current week if it’s a multi-course event).
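A sketch of this regression, with the sample restricted (as in the plot below) to course histories built from at least 15 rounds; the count of prior rounds at the course (ch_rounds) is another hypothetical column:

```python
import statsmodels.formula.api as smf

sub = df[df["ch_rounds"] >= 15].dropna(subset=["course_hist"])
fit_ch = smf.ols("residual ~ course_hist", data=sub).fit()
print(fit_ch.params["course_hist"],  # slope (we estimate ~0.12)
      fit_ch.bse["course_hist"],     # its standard error (~0.05)
      fit_ch.rsquared)               # ~0.002, i.e. 0.2%
```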

Here is the main result:

Notes: Historical course-specific averages are calculated from 2010 up to year of interest. Dependent variable is current week’s average score. All scores have been adjusted for a player’s current form (i.e. they reflect how much better or worse a player performed than expected). Regression is using data from 2015-2017; sample is restricted to those with at least 15 rounds in their course history.

The slope of the regression line is 0.12 – this means that for every 1 stroke increase in a player’s course history (i.e. his course-specific historical average of residual scores) his expected score increases by 0.12 strokes. Importantly, this graph is constructed only using players with course histories comprised of at least 15 rounds (this leaves ~ 2000 observations). As can be seen from the plot, course history is providing a very noisy signal; there are plenty of players who had good course histories (i.e. further right on the x-axis) but play very poorly in the current week, and vice versa. Of course, on the whole, having a better course history correlates slightly with better performance that week (as evidenced by the upward sloping regression line). For those interested, the estimated slope has a standard error of about 0.05 – so, pretty noisy.

In the full sample (i.e. no restriction on minimum number of rounds played at the course, other than it being greater than zero), course history has basically no impact on expected score: a 1 stroke increase in course history increases the predicted score by about 0.02 strokes. However, with the full sample, there are many observations in which a player only has 2-4 rounds to construct a course history; this adds a lot of statistical noise. Perhaps unsurprisingly, the estimate of the course history effect gets larger as the round cutoff is made more strict, culminating with the result shown in the plot above (a 1 stroke increase in course history average is associated with a 0.12 stroke increase in expected performance). We could keep making the minimum round cutoff stricter, but eventually the sample becomes too small for reliable inference. For a reference point, the coefficient on short-term form (say, scoring average over the previous 2-3 months) from a similar regression would be about 0.15, and the coefficient on long-term form (2 years) would be about 0.75 – 0.80.
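The pattern of the estimate growing with the round cutoff can be checked with a simple loop over thresholds (same hypothetical sketch as above):

```python
for cutoff in [1, 5, 10, 15]:
    sub = df[df["ch_rounds"] >= cutoff].dropna(subset=["course_hist"])
    fit = smf.ols("residual ~ course_hist", data=sub).fit()
    print(cutoff, round(fit.params["course_hist"], 3), len(sub))
```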

In terms of predictive power (i.e. the “R-squared” of a regression), course history has very little. Recall that before we were able to predict about 15% of the variation in scores at the event-level (i.e. R-squared equals 0.15). The R-squared of the course history regressions ranges from 0.02% (!!) (full sample) to 0.2% (restricting to course histories with at least 15 rounds). The R-squared is only a function of two things: 1) the size of the coefficient, and 2) the variance of the course history variable (relative to the variance of scores). There is a decent amount of variation in course histories across players, so the reason the R-squared is so small is mainly just due to the small coefficient size. (See footnote 3 for a short discussion on this.)

To conclude, in this article we’ve shown that long-term form is king when it comes to predicting golf scores. However, short-term form does provide a slight improvement in predictive power. Course history, defined here as how much better than expected a player has historically played at a course, is found to impact performance to some degree: we estimate that increasing the course history measure by 1 stroke increases our predicted score by at least 0.02 strokes, and by at most 0.12 strokes (the former using all course history data, the latter obtained only using course histories calculated from at least 15 rounds). But, despite the somewhat meaningful impact course history has on predictions (0.12 strokes is meaningful, in our opinion), it adds virtually no predictive power (as evidenced by an extremely low R-squared). Moving forward, we will keep course history in mind when modelling golf scores, but it trails far behind long-term form, and to a lesser degree short-term form, in its relevance to predicting golfer performance.

Footnotes:

1. We use a slightly different (and, better) method to properly adjust for course difficulty and to estimate player ability than we have in previous work. We roughly follow the method used in Connolly and Rendleman (2008). The naive way to adjust for course difficulty of any given round is to subtract the mean score for the field that day. This can lead to erroneous conclusions about course difficulty, however, because not all fields are the same in terms of average skill level. Subtracting off the mean will tend to overvalue rounds played against weaker fields, and undervalue rounds played against stronger fields. To account for field strength, we have in the past estimated a fixed effects regression of the following form:

Score_{ij} = \mu_{i} + \delta_{j} + \epsilon_{ij}

where \mu_{i} represents a fixed player skill level for player i, and \delta_{j} represents the course difficulty for a given round j. We augment this specification by allowing \mu_{i} to vary over “golf time” (this is the chronological sequence of rounds the golfer plays). Consider the following:

Score_{ij} = \mu_{i}(t) + \delta_{j} + \epsilon_{ij}

where \mu_{i}(t) is now a time-varying measure of player ability (where time is specific to each player, and represents their sequence of rounds). We estimate this using an iterative process; the basic idea is outlined in the Connolly and Rendleman article cited above. The bottom line is that we allow each player’s ability to vary over time (whereas before, it was forced to be fixed over time). This is especially important because our estimating sample spans 9 years (with just a year or two of data, the fixed ability assumption is probably not unreasonable). Recall that in other parts of this article, player ability was defined as the weighted average of 2-year, 2-month, and last event scoring averages. The ability measure here is preferable because it uses data both before and after each point in time to estimate player ability (whereas the other method clearly just uses historical data – which is obviously all you have when you are doing a prediction exercise!).

From this, our adjusted score variable is defined as Score_{ij} - \delta_{j}, and the residual score variable is defined as \epsilon_{ij}.
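For readers who want a feel for the iterative process, here is a heavily simplified sketch. The centered rolling mean is only a stand-in for the smoother used by Connolly and Rendleman, and the file and column names are hypothetical:

```python
import pandas as pd

# One row per player-round: 'player', 'round_id' (a specific course-day),
# 'score', and 'seq' (the round's position in the player's own
# chronological sequence). All names here are hypothetical.
rounds = pd.read_csv("rounds.csv").sort_values(["player", "seq"])
rounds["ability"] = 0.0

for _ in range(20):  # iterate until the estimates stabilize
    # Given abilities, course-day difficulty is the field's average
    # ability-adjusted score.
    rounds["delta"] = ((rounds["score"] - rounds["ability"])
                       .groupby(rounds["round_id"]).transform("mean"))
    # Given difficulties, re-estimate ability as a smooth of the player's
    # difficulty-adjusted scores over their round sequence.
    adj = rounds["score"] - rounds["delta"]
    rounds["ability"] = adj.groupby(rounds["player"]).transform(
        lambda s: s.rolling(51, center=True, min_periods=1).mean())

rounds["adjusted_score"] = rounds["score"] - rounds["delta"]       # Score - delta
rounds["residual"] = rounds["adjusted_score"] - rounds["ability"]  # epsilon
```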

2. An obvious way to approach this problem would have been to run a regression where you control for a player’s current form using various historical averages (e.g. 2-year S.G., 2-month S.G., etc.) and then include the raw course history average in the regression as well:

adj.score_{i} = \beta_{0} + \beta_{1} \cdot adj.score.2Y_{i} + \beta_{2} \cdot adj.score.2M_{i} + \beta_{3} \cdot adj.score.ch_{i} + \epsilon_{i}

The dependent variable is the adjusted score, and the regressor of interest is adj.score.ch_{i}. This is not quite the same as what we are doing in the body of this article. The difference is very subtle; the interpretation of \beta_{3} is the effect of a player’s historical course-specific scoring average on this week’s performance after controlling for the player’s current form (as defined here by 2-year S.G. and 2-month S.G.). Conversely, in the body of the article, the method we are using can be thought of as controlling for the form of a player at the time they played the course. To clarify with an example: the former method asks: “Does the fact that Luke Donald played better at Harbour Town in the past than what his current form indicates mean he will play better this week?”, while the latter method asks: “Does the fact that Luke Donald has typically played better than his form at the time at Harbour Town in the past mean he will play better than expected at Harbour Town this week?” If my intuition is right (and it may not be; I’m still grappling with this a bit), these two methods would seem to be the same if a player’s form hasn’t changed much in the time period we are considering. Anyways, for what it’s worth, doing it with the regression controlling for current form gives almost identical results to those reported in the body of the article.

3. Intuitively, the R-squared of a regression is the proportion of the variance in the dependent variable that is *accounted for* by the included regressors. In the simple case of a single independent variable (e.g. X), the R-squared is equal to:

R^{2} =  \beta_{1}^{2} \cdot Var(X) / Var(Y)

where in our context, X is the course history variable, Y is the current week’s average score, and \beta_{1} is the regression slope coefficient. Evidently, this measure can only be small if the coefficient is very small, or the variance of X is small (relative to the variance of Y). In the full data, the variance of X is 1.48, while the variance of Y is 3.03; therefore, it’s the small size of the coefficient that is driving our very small R-squared (~0.0002, or 0.02% in the full data) result.
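Plugging the full-sample numbers into this formula:

R^{2} = (0.02)^{2} \cdot 1.48 / 3.03 \approx 0.0002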

An Intergenerational Approach to Ranking PGA Tour Players

In this article we provide a method for comparing the performances of golfers who did not compete in the same time period. We answer questions of the following nature:

“How would the 2015 version of Rory McIlroy perform against the 1995 version of Greg Norman playing the same course with the same equipment?”

The statistical approach taken here is motivated by the method we use in our predictive model to adjust scores for field strength and course difficulty within a year (a method also used in Broadie and Rendleman (2012)). In that context, because all European Tour and PGA Tour events contain overlapping sets of golfers, we are able to compare relative performances of all golfers even though all golfers do not directly compete against one another. The logic is that although Phil Mickelson and K.T. Kim may never play in the same tournament in a given year (suppose), because they both play in tournaments that contain Rory McIlroy, we are able to compare Mickelson and Kim through their performances relative to McIlroy.

The rest of this article is organized as follows: we first provide the intuition behind our approach, then provide results and a discussion of their interpretation, and then conclude with the statistical details.

Intuition

We use the same logic described above to compare players across generations. A macro example is shown here:

That is, we compare the performances of McIlroy and Faldo through their relative performances against Tiger. The method we use is based on this simple logic, but instead of just a single player linking players from different generations, we have hundreds. An obvious critique to this approach is that the Tiger Woods that Faldo played against was not necessarily the same Tiger Woods that McIlroy faced 10-15 years later. To get around this, we break each player’s career into 2-year blocks. So Tiger Woods in 1997-1998 is a “different player” in our sample than Tiger Woods in 1999-2000; his ability level can be different in the two periods. Therefore, to compare, for example, the 1995 version of Greg Norman to the 2015 version of Rory McIlroy, we first compare Norman to the players (that is, the 2-year blocks of players’ careers) he played against in 1995-1996, and then those players are compared to players they competed against in 1997-1998, and so on, all the way up to 2015.

Attentive readers may notice a problem here: the key to this approach is that we have overlap across time of players’ careers (i.e. part of Faldo’s career overlapped with Tiger, and part of Tiger’s overlapped with Rory), but now that we have defined each 2-year segment of a player’s career as distinct, how do we ensure we still have overlap? That is, if every player from 1999-2000 is a distinct “player” from that in 2001-2002, then we have no way to link the performances of these two groups of players. We circumvent this problem by randomly assigning half the players in our sample to have their 2-year blocks defined starting on the odd years (1999-2000, 2001-2002), and the other half of the sample starts on the even years (2000-2001, 2002-2003). Therefore, we are able to link the performance of say, the 2000-2001 version of Tiger to the 2002-2003 version of Tiger by comparing his performances to 2001-2002 Mickelson (because both 2000-2001 version of Tiger and 2002-2003 version of Tiger competed against the 2001-2002 version of Mickelson). For our results, we actually end up getting a value for a player’s performance in each year of their career (this is discussed in detail later). This annual measure should be thought of as a sort of smoothed 3-year average (i.e. Tiger’s 2000 value is affected by his 1999 and 2001 performances as well).

The main assumption we are relying on is that within a 2-year period players’ ability is constant on average. There can be some players whose performance improves during a 2-year period as long as there are others whose performance declines. We require that on average these discrepancies even out.

Connolly and Rendleman (2008) estimate a continuous time-varying golfer-specific ability function. However, for our purposes, we cannot implement this; there would be no way to separate genuine changes in player ability over time from technological advances or improvements in course conditions.

Ranking PGA Tour Players from 1984-2016

We are using PGA Tour round-level data from 1983-2017 (the reason we have to drop the first and last years in the sample is explained in a later section). The output of this method is a value for each year of each player's career in our sample. This value is a measure of that player's performance in that year; we call it the All-Time Performance Index (ATPI).

The ATPI is a relative measure, and as such it requires a normalization. The absolute level of the index is irrelevant; what matters is the relative magnitudes. We decide to give the average player on the PGA Tour in the year 2000 an ATPI of zero. The interpretation of the index is best understood with a specific example. The ATPI value of 3.8 assigned to Rory McIlroy in 2015 says the following: the 2015 version of McIlroy would be expected to beat the average PGA Tour player in the year 2000 by 3.8 strokes in a single round, on the same course using the same equipment. Therefore, the ATPI value for each player-year observation represents their scoring average on a "neutral" course relative to the average player in the year 2000.

Okay, now to some results (which some people are not going to like, or believe, perhaps). As usual, the plots are interactive so click around.

First, we plot the average ATPI across all players (weighted by the number of rounds played) for each year from 1984-2016. Additionally, we plot the ATPI for the best player in each year.


The aggregate annual numbers reflect the expected scoring difference in a single round between the average player in the relevant year and the average player in the year 2000.

Next, we basically provide all the ATPI data in this interactive graph. From the dropdown bar choose any player, and his ATPI for all years in which he played a minimum of 25 rounds will be plotted.

If you are doubting the validity of the results, please take a long look through the data. Looking at individual players' ATPI over the span of their careers has helped convince us of the validity of this measure. For example, if you think that our measure is systematically biased to favor more recent players, then we should (in general) observe players' ATPI steadily rising over their careers (even if their true ability stays relatively constant). Look up some players that have their entire career contained within 1984-2016 (Leonard, Vijay, Love III, for example). If the measure is not biased to recent years, you should observe a career arc in a player's ATPI, where they peak in the middle of their career, and have lower quality performance at the beginning and end of their careers. This is generally what you find.

Next, here are the best player-years of all-time according to the ATPI:

This highlights Tiger's greatness, as well as the strength of today's best players.

Finally, we provide a list of some notable players' average ATPI over the entire sample period. The players listed are generally those who have all (or most) of their careers contained in our 1984-2016 sample. Keep in mind that, for most of these players, (relatively) poor performances in the last few years of their careers cause their career-average ATPIs to be a bit lower than in their primes.

So... What to Make of This?

If you are willing to accept the assumptions imposed by this approach, the interpretation of these numbers is as stated above. That is, the differences between players' ATPI reflect differences in single-round scoring average in a neutral setting (i.e. technology, course conditions, etc. are held constant). If you are uncertain as to whether we are controlling for technology changes or course conditions, recall the simple example given earlier: Rory is compared to Tiger (they are using the same equipment and playing the same courses), and Tiger is then compared to Faldo (they are also using the same equipment and playing the same courses). And, through this, Rory and Faldo are compared.

Of course, we don't think this analysis proves that mid-level players today should be regarded as "greater" golfers than Greg Norman or Tom Watson, for example. The greatness of any athlete will always be measured by their performances relative to their peers. In athletics, Roger Bannister was the first man to break the 4-minute mile barrier, and is held in very high regard because of it - despite the fact that the best high school boys can break 4 minutes in the mile today (although some of that would be attributed to improvements in shoes and track surfaces).

It very well could be that if Greg Norman had grown up in the same generation as McIlroy, he would be better than McIlroy. This analysis cannot speak to the validity of that claim. The current generation has modern technology and improved coaching (whether the latter is helpful could be debated) at their disposal to aid the development of their games in their formative years. Further, serious fitness routines have become the norm among competitive golfers. Finally, and we think most importantly, the raw number of serious golfers has grown immensely in the last 30 years, resulting in an increased level of competition that pushes all golfers to get better.

All of these factors could contribute to better performances by recent generations of golfers. It seems natural to think that all sports are continually progressing, and current athletes always have a bit of an edge over those that preceded them.

Statistical Details

Our results are based on fixed-effects regressions of the following form:

Score_{ijt} = \mu_{i,t;t\pm 1} + \delta_{jt} + \epsilon_{ijt}

where i indexes the player, j indexes a specific tournament-round, and t indexes time. The slightly complex subscript i,t;t±1 indexes a specific player in the years t and t+1, or t and t-1 (depending on whether the player's 2-year blocks start on odd or even years).

In practice, this is implemented as a regression of score on a set of dummy variables for each 2-year block of a player's career and a set of year-tournament-round dummies.
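As a minimal sketch of one estimation iteration (file and column names are hypothetical; in practice the design matrix is enormous and sparse, so a plain OLS call as written here would be very slow):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per round: 'player', 'year', 'trd' (a year-tournament-round id),
# and 'score'. File and column names are hypothetical.
rounds = pd.read_csv("rounds_1983_2017.csv")

# Randomly give each player even (0) or odd (1) block-start years.
rng = np.random.default_rng(0)
players = rounds["player"].unique()
offset = pd.Series(rng.integers(0, 2, size=len(players)), index=players)

# Label each round with its player's 2-year block, keyed by start year.
off = rounds["player"].map(offset)
start = rounds["year"] - ((rounds["year"] - off) % 2)
rounds["block"] = rounds["player"] + "_" + start.astype(str)

# Regress score on player-block dummies plus year-tournament-round dummies.
X = pd.get_dummies(rounds[["block", "trd"]], columns=["block", "trd"],
                   drop_first=True, dtype=float)
fit = sm.OLS(rounds["score"], sm.add_constant(X)).fit()
```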

As described earlier, we need overlap between the 2-year segments of different players' careers to connect performances across time. To obtain this overlap, we randomly assign half the players in our sample to have 2-year blocks starting on the even years (2010-2011, 2012-2013), while the other half gets the odd years (2011-2012, 2013-2014). Evidently, we do not want our estimation procedure to be sensitive to this assignment; therefore, we run the estimation many times. Because assignment is random, in some iterations a player will be assigned to odd-numbered 2-year segments, while in others they will be assigned to even-numbered 2-year segments. In each estimation iteration we collect the player fixed effects for every year of their career (the two years within a 2-year block share the same value), and then the ATPI is calculated as the average value for a given year over all estimation iterations.

Let's make this concrete with an example; I'll describe how we come up with Rory McIlroy's ATPI for 2015. Suppose in the first estimation iteration Rory is assigned to be on the even years for his 2-year blocks. We run the regression, and obtain Rory's fixed effect for 2014-2015 (suppose it is 4.0). We write down this value as a measure of Rory's performance in the years 2014 and 2015. Next, suppose on the second iteration Rory is assigned to be on the odd 2-year block. Now, we run the regression and obtain Rory's fixed effect for 2015-2016 (suppose it's 3.0). We write down this value as a measure of Rory's performance in the years 2015 and 2016. If we decided just to do 2 iterations, Rory's ATPI value for 2015 would be equal to (3.0 + 4.0) / 2 = 3.5. Therefore, it is best to think of Rory's 2015 ATPI as a type of smoothed 3-year average, as it is ultimately obtained by averaging estimates of his performance for the 2-year blocks 2014-2015 and 2015-2016 (clearly, the middle year influences this average the most).
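Continuing the sketch above, the iteration-and-averaging step might look like the following (the sign flip makes a higher ATPI mean better play, since lower raw scores are better):

```python
records = []
for it in range(100):
    # Re-draw the even/odd assignment and re-label the 2-year blocks.
    offset = pd.Series(rng.integers(0, 2, size=len(players)), index=players)
    off = rounds["player"].map(offset)
    start = rounds["year"] - ((rounds["year"] - off) % 2)
    rounds["block"] = rounds["player"] + "_" + start.astype(str)

    X = pd.get_dummies(rounds[["block", "trd"]], columns=["block", "trd"],
                       drop_first=True, dtype=float)
    fe = sm.OLS(rounds["score"], sm.add_constant(X)).fit().params

    # Record each block's fixed effect for both years the block covers;
    # flip the sign so that higher values mean better play.
    for name, val in fe.filter(like="block_").items():
        player, yr = name.removeprefix("block_").rsplit("_", 1)
        for year in (int(yr), int(yr) + 1):
            records.append((player, year, -val))

atpi = (pd.DataFrame(records, columns=["player", "year", "val"])
          .groupby(["player", "year"])["val"].mean())
# Final step (not shown): shift all values so that the round-weighted
# average ATPI in the year 2000 equals zero.
```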

The fixed effects estimation is computationally expensive, so we perform just 100 iterations. The estimates do not vary drastically from one iteration to the next, and consequently we think 100 iterations is more than enough to get rid of any statistical oddities that could arise from the random assignment. We drop the first and last years, 1983 and 2017, because the estimation procedure requires a year of data on either side of the year of interest.

To conclude, it is worth mentioning the work in Berry, Reese, and Larkey (1999), who used a conceptually similar method to compare the performances of players in major championships over 5 decades. Their results are also very interesting.