## Quantifying the “Tiger Effect”

In the prime of his career, there was no more intimidating force on the golf course than Tiger Woods. Winning a golf tournament just seemed to become much more difficult when playing alongside Tiger. This was partly because Tiger was always playing well, but also because players seemed to play worse than usual when in a Tiger pairing. Perhaps it was the larger crowds constantly following and moving around, or the pressure of having the G.O.A.T. watching every shot, or having a front-row seat to Tiger’s trademark fist pumps. Whatever the cause, players seemed to routinely play poorly alongside TW.

In this post, I try to quantify this apparent “Tiger effect”. For those of you who have read earlier posts, I use an approach similar to the one used to calculate our measure of “Strokes-Gained in Contention”. I consider all weekend rounds played with Tiger from 1997-2015. For each player, I calculate a baseline scoring average relative to the field, using only weekend rounds played without Tiger in the group (computed on a 3-year rolling basis). Then, for each year, I calculate that player’s relative-to-the-field scoring average in rounds with Tiger. Thus, for each player in each year, I have the difference between their baseline weekend scoring average and their weekend scoring average with Tiger in the group.
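The calculation above can be sketched in a few lines. This is a minimal illustration on made-up data, not the actual pipeline: the player names, numbers, and the flat (non-rolling) baseline are all hypothetical.

```python
# Sketch of the "Tiger Effect" calculation on hypothetical round-level data.
# Scores are "strokes gained vs. the field" (higher = better).
rounds = [
    # (player, year, sg_vs_field, tiger_in_group)
    ("Player A", 2006, 1.5, False),
    ("Player A", 2006, 0.5, False),
    ("Player A", 2006, -0.5, True),
    ("Player B", 2006, 2.0, False),
    ("Player B", 2006, 0.2, True),
]

def tiger_effect(rounds, player, year):
    """Player's weekend average with Tiger minus their baseline average
    without Tiger (negative = played worse with Tiger in the group)."""
    with_t = [sg for p, y, sg, t in rounds if p == player and y == year and t]
    # The post uses a 3-year rolling baseline; a simple all-rounds
    # average stands in for it here.
    base = [sg for p, y, sg, t in rounds if p == player and not t]
    return sum(with_t) / len(with_t) - sum(base) / len(base)
```

On this toy data, Player A averages +1.0 without Tiger and -0.5 with him, giving an effect of -1.5 strokes.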

This first figure plots the annual weighted average of these player-specific differences, where the weights are equal to the number of times a player plays with Tiger.

It can be seen that players played significantly worse, relative to their usual standard of play, when playing with Tiger in nearly every year from 1997-2013. The largest “Tiger Effect” was in 2006, when players played a whopping 1.2 shots worse on average with Tiger in the group. That is astounding! It should be noted that the sample sizes in 2008, 2014, and 2015 are all quite small, as Tiger didn’t play many events in those years. Taking a weighted average over the entire sample, we obtain a grand average “Tiger Effect” for 1997-2015 of -0.47. You could argue that the difference between the scoring averages with and without Tiger reflects something other than just his presence: being paired with Tiger on the weekend means you are likely near the lead, so perhaps this measure is capturing the general pressure of being in contention. This is a fair criticism, but keep in mind that the baseline scoring average is constructed using only weekend rounds as well, and as such it includes rounds where a player was near the lead (but was not playing with Tiger).
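The grand average is just a weighted mean of the player-year effects, with weights equal to the number of rounds played with Tiger. A minimal sketch, with made-up effect values:

```python
# Weighted mean of player-year "Tiger Effects", weighted by the number
# of weekend rounds each player played with Tiger. Values are illustrative.
effects = [(-0.8, 6), (-0.2, 3), (0.1, 1)]  # (effect, rounds_with_tiger)

grand_avg = sum(e * w for e, w in effects) / sum(w for _, w in effects)
print(round(grand_avg, 2))  # -0.53 on this toy data
```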

Next, I plot Tiger’s relative (to the field) scoring average for Saturday and Sunday rounds for 1997-2015.

In addition to the precipitous dip that occurred in 2014, it is interesting to note that this plot is close to a mirror image of the previous plot; when Tiger is playing really well, his playing partners seem to play poorly, and vice versa.

Finally, I provide the player-specific “Tiger Effects” for players who played with Tiger at least 5 times on the weekend from 1997-2015. (The data I have used is pretty spotty before 2003: no majors are included, and even some regular events lacked tee-time data.)

Atta boy Philly! At least he was playing well, on a personal level, while getting pummeled by Tiger on a regular basis. It also deserves to be noted that, while he didn’t make this table, Ian Poulter had the lowest “Tiger Effect” at -9.7 (a single round in 2011…).

Looking ahead to 2016 and beyond, we can only hope that Tiger regains his form, and the “Tiger Effect” returns with it.

## Should Ryder Cup Partners That Win Their Matches Play In The Next Session?

Remember the 2012 Ryder Cup?

Phil and Keegan were “high-fiving” (or “thumbs upping”) for a day and a half at Medinah on their way to winning 3 matches in a row. Despite the momentum they had created, Davis Love decided not to put them back out for another session on Saturday afternoon.

Many analysts believe this was the biggest mistake the skipper made all week, as the afternoon session ended up being owned by Ian Poulter, which ultimately seemed to shift the tide back to the European side and spark their historic Sunday run.

While it is easy to blame the shift in momentum on the decision to sit Lefty, let’s turn to the data instead of jumping to conclusions.

Our Dad has actually pulled Ryder Cup data all the way back to 1991, seeking to discover how winning pairings have performed when they are sent out again in the following session. Here is what he found:

The results are quite surprising. Losing pairs actually perform better (13-13-4) than winning pairs (17-22-8) when they are sent out for the following session. Further, it is the pairings that halved their previous match who perform best (10-4-2)! While the sample size is admittedly small (more on this below), it seems as though DL3 didn’t make such a boneheaded move by sitting the hot duo on Saturday afternoon.

For those readers who are statistically inclined, you may wonder whether these results are actually meaningful, or whether they are just a product of the randomness inherent in a small sample of observations. If you are wondering… read on!

First, these win-loss records are of interest in their own right; it is a fact that, from 1991-2014, pairings that tied their previous match proceeded to win their next match more often than pairings that lost their previous match. Further, and surprisingly, those that lost their previous match won their next match more often than those that won their previous match. This is certainly counter-intuitive!

However, is this data sufficient to tell us that there is truly a mechanism at work wherein a pairing that tied the previous match actually plays better in their next match? Perhaps they are extra motivated after playing a match that went to the 18th hole, for example. Or is this just statistical noise? After all, if you flip a coin 14 times, it is not that unlikely to get at least 10 heads (the probability is about 9%). Therefore, let’s suppose that Ryder Cup matches are coin flips. Then we can calculate the probability of observing a win-loss record at least as extreme as the one we observed for the pairings who tied their previous match (i.e. the probability of flipping a coin 14 times and getting at least 10 wins, or at most 4 wins). Note that I am ignoring the matches that ended in ties. This probability is equal to 18%. Well, this is still pretty good! Only an 18% chance that our finding is just noise. But hold on…
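The coin-flip numbers above are straightforward to check with the binomial distribution; a quick standard-library-only sketch, using the 14 decided matches and 10-4 record from the table discussed earlier:

```python
from math import comb

def tail_prob(n, k):
    """P(X >= k) when X ~ Binomial(n, 1/2), i.e. n fair coin flips."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# The "tied previously" group: 14 decided matches, 10 wins.
p_upper = tail_prob(14, 10)  # P(at least 10 wins)
p_two_sided = 2 * p_upper    # add P(at most 4 wins), equal by symmetry
print(round(p_upper, 3), round(p_two_sided, 3))  # 0.09 0.18
```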

When my dad collected this data, he did not specifically set out to determine whether those pairings that tied the previous match won more often than not. Rather, he was looking at all 3 groups of pairings: those that won their previous match, those that lost their previous match, and those that tied their previous match. If any of these groups had turned out to have an interesting record (i.e. many more wins than losses, or losses than wins), this would have been noteworthy. Therefore, the relevant question to ask to determine whether our finding is just statistical noise is the following:

If Ryder Cup matches are coin flips, what is the probability that at least one of these three groups (the pairings that won previously, the pairings that lost previously, or the pairings that tied previously) would have a win-loss record at least as extreme as 6 more wins than losses, or 6 more losses than wins (this was the spread I observed for the “tied in previous match” group)? This probability is equal to 45%! Therefore, if Ryder Cup matches are coin flips, there was a 45% chance of finding a result at least as extreme as the one we did.
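A figure like 45% can be reproduced by treating each of the three groups as an independent experiment with the same roughly 18% two-sided tail probability, and asking how likely it is that at least one of them comes out that extreme. This is a simplification (the three groups actually contain different numbers of decided matches), but it illustrates the multiple-comparisons logic:

```python
from math import comb

def two_sided(n, margin):
    """P(|wins - losses| >= margin) over n fair coin flips."""
    k = (n + margin + 1) // 2  # smallest win count with wins - losses >= margin
    upper = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return 2 * upper           # the lower tail is symmetric when p = 1/2

p_one = two_sided(14, 6)       # one group as extreme as 10-4: about 0.18
p_any = 1 - (1 - p_one) ** 3   # at least one of three independent groups
print(round(p_any, 2))         # 0.45
```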

All of this notwithstanding, it is still an interesting result! Namely, there is NO evidence that pairings that win their previous match do better in their next match. However, we should be careful about concluding that those who tied their previous match actually play better in their subsequent match.

Note: The problem I described above is a common pitfall in social science and medical research known as “p-hacking”. Check out this comic to get a better understanding.