A Golf Fan’s Reflection

(Written by Rob Courchene)

What if…

What if ‘Tiger Woods’ had been a television series? One of the great ones like M.A.S.H. or Seinfeld. People tuning in weekly to the scenes we knew so well. The red shirt, the fist pumps, the massive galleries, and Tiger Woods on the leaderboard.

Ah, the age of innocence. We all knew the characters. Hawkeye, Radar, Houlihan, Colonel Blake. Jerry, Elaine, Kramer, George…Newman. We knew them as friends, maybe even as best friends. In the glory days we knew Tiger in the same way. We knew his family, his friends, his clubs, his caddies, his swing in all its different forms. We even knew his coaches. Golf may as well have been listed as the Tiger Woods show on Sunday afternoons.

But that is where it ends. Most of us remember where we were when we gathered with friends to watch the final episode of M.A.S.H., or Seinfeld, or even Mary Tyler Moore. We watched with tears of both joy and sadness (or at least I did), saying goodbye to characters that had been a part of our lives for a decade.

But what if ‘Tiger Woods’ had been a TV series? What if we could have said our farewells on a Sunday afternoon in June? What if the producers of ‘The Tiger Woods Show’ had scripted the 2008 US Open as the series finale? Our broken, yet unbeaten hero limping off into the sunset? No surgeries, no crack in the armor, no kryptonite. Who knows, maybe not even a fire hydrant. Just a glorious Sunday/Monday at Torrey Pines to bid farewell to our golfing hero. What if…

I think he deserved this much. Think of Jack and Arnie with their memorable walks up the final holes of Pebble, or St Andrews, or Augusta. Wayne Gretzky playing his final games in Ottawa and New York in the spring of 1999. (The game I attended in Ottawa remains one of the special moments of my life).

After this latest surgery even I have little hope of a return to Tiger mania. But I do have one wish. I would like the producers of ‘Tiger Woods’ to get it right this time. Maybe just one last walk up the 18th at Augusta…maybe even in a red shirt…and maybe, just maybe, on a Sunday.

How’s the model doing so far?

It’s been 8 weeks now since we started predicting PGA Tour events using our predictive model. Here we’ll look at how well the model has done in the 7 stroke play events we’ve predicted to date (we exclude the Match Play because that’s not what the model was made to predict). We’ve made predictions for 851 players with regard to their probability of winning, finishing in the top 5, finishing in the top 20, and making the cut. That gives us 851*4 = 3404 predictions to evaluate.

Because our predictions are probabilistic, we need a lot of data points to properly evaluate their quality. If the model says some event should happen 10% of the time, then, ideally, it actually happens close to 10% of the time in our data.
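To put a rough number on how much data is “a lot”, here is a small illustrative sketch (not part of the original analysis) of the random noise to expect in an observed frequency from a limited number of binary outcomes:

```python
import math

def binomial_se(p, n):
    """Standard error of an observed frequency from n binary outcomes,
    each occurring with probability p."""
    return math.sqrt(p * (1 - p) / n)

# Example: 175 players each predicted to make the cut with probability ~45%.
# Even with perfect calibration, the observed rate wanders by a couple of
# standard errors in either direction:
se = binomial_se(0.45, 175)
low, high = 0.45 - 2 * se, 0.45 + 2 * se
# se is about 0.038, so the observed cut rate could plausibly land
# anywhere from roughly 37% to 53% by chance alone.
```

This is why a bin with only a handful of observations tells us very little about calibration, while a bin with a few hundred starts to be informative.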

To evaluate the model, we group our predictions into different classes defined by the probability assigned to the prediction (ex: 0-2%, 2-4%, 4-6%, etc.) and the type of prediction (Win, Top 5, Top 20, Make Cut). For each class, we give the number of observations that fell into the relevant range (ex: win with probability 0-2%) and provide the percentage of those observations that were “correct” (ex: the player actually won).
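The grouping described above can be sketched in a few lines. This is a hypothetical helper on toy data, not the code behind our actual tables:

```python
from collections import defaultdict

def calibration_table(predictions, bin_width=0.10):
    """Group (predicted probability, outcome) pairs into probability bins
    and report each bin's observation count and observed success rate."""
    bins = defaultdict(lambda: [0, 0])  # bin start -> [n_obs, n_correct]
    n_bins = int(1 / bin_width)
    for prob, happened in predictions:
        # Clamp so a probability of exactly 1.0 falls in the top bin.
        key = round(min(int(prob / bin_width), n_bins - 1) * bin_width, 2)
        bins[key][0] += 1
        bins[key][1] += int(happened)
    return {b: (n, correct / n) for b, (n, correct) in sorted(bins.items())}

# Toy data: four cut predictions in the 40-50% bin, two of which came true.
preds = [(0.42, True), (0.45, False), (0.48, True), (0.41, False)]
table = calibration_table(preds)
# table[0.4] -> (4, 0.5): 4 observations, observed cut rate of 50%
```

With well-calibrated predictions, each bin’s observed rate should fall inside its predicted range, up to the sampling noise discussed above.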

We first use a 2% interval and then a 10% interval to define our prediction categories. The model evaluation using 10% intervals is shown below.

The 10% intervals are more appropriate when evaluating the cut predictions because cut probabilities span a much wider range. Conversely, the 2% intervals are more appropriate for evaluating the win predictions because win probabilities are nearly all below 10%. You can look at the model evaluation using 2% intervals here.

To make sure you are interpreting the table correctly, let’s focus on a specific prediction class and walk through its interpretation. Consider the predictions for making the cut that were between 40% and 50%. So far, the model has given 175 players a probability in that range of making the cut, and 41.7% of those players went on to actually make the cut. This is what we want to see: the actual cut percentage (41.7%) falls inside the predicted range (40-50%) for the players in this prediction category.

For this exercise to be fruitful, we do need a lot of observations in a class (at least 100, I would say) to really gain some insight into the model’s performance. It is a bit concerning that some of the actual cut probabilities (with a reasonably large N) fall outside their predicted range. We have been a bit lazy in the model simulations about how we define the cut (our simulations contain no ties, so we typically let 75-77 players make the cut in each simulation), so perhaps this is a sign that we should make that a little more rigorous.
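One way to make the cut definition more rigorous is to model ties explicitly: take the score of the player at the cut rank in each simulated event and let everyone at or better than that score through. The sketch below assumes a top-70-and-ties rule and fabricated scores; it is one possible fix, not the model’s actual code:

```python
import random

def simulate_cut(scores, cut_rank=70):
    """Return a make-cut flag for each player: the cut line is the score of
    the player ranked `cut_rank`, and ties at that score also make the cut."""
    cut_score = sorted(scores)[cut_rank - 1]
    return [s <= cut_score for s in scores]

# A fabricated 156-player field with integer 36-hole scores, so ties occur.
random.seed(1)
field = [round(random.gauss(142, 4)) for _ in range(156)]
made = sum(simulate_cut(field))
# `made` is at least 70, and exceeds 70 whenever players tie on the cut line.
```

Handled this way, the number of players making the cut varies from simulation to simulation exactly as it does on Tour, rather than being pinned to an arbitrary 75-77.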

In the other categories the model is, in general, doing pretty well. Please do look through the table with 2% intervals if you want a better look at how the model is performing with win and Top 5 predictions.