Analytics

How Important is Thanksgiving in Relation to Making the Playoffs?

By: Ryan Reid

How early is too early to get excited about a player's or a team's success in the young season? While looking at Mikko Rantanen's pace through 20 games and assuming he will score 130 points seems a bit ridiculous now (he is currently on pace for just over 100), the fact is that a 20-game sample is often very predictive of whether a team will ultimately make the playoffs. In fact, over the past 5 seasons, 77.5% of teams that found themselves in a playoff position at American Thanksgiving went on to make the playoffs.


Given how predictive simply holding a playoff spot at Thanksgiving is, I believed that a model drawing on a broader set of statistics collected at American Thanksgiving each year could identify playoff teams with even greater accuracy.

With the help of machine learning, I hoped to create a model that could out-predict the naive strategy of picking the current playoff teams.

Process Used

In creating a machine learning model, I wanted to classify whether a team is best labelled a playoff team or not, given a variety of statistics collected at Thanksgiving. To do so, I used logistic regression, which classifies each team with a binary label: 1 for a playoff team, 0 for a non-playoff team. By examining the past 11 years of team data at Thanksgiving (minus the lockout-shortened season, for obvious reasons) and labelling each team accordingly, I hoped to train my model to accurately classify playoff teams.


Within Python, I used numpy, pandas, pickle, and several features within sklearn, including the RFE (Recursive Feature Elimination) and Logistic Regression packages, to create the model. Pandas was used to import and read spreadsheets from Excel. Pickle was used to save my finalized model. Numpy was used in certain fit calculations. RFE was used to eliminate features and to assign coefficients reflecting the impact each criterion had on whether a team made the playoffs. Finally, Logistic Regression was used to fit the model's predicted shape.
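For the curious, here is a minimal sketch of that pipeline. The file name and column names are hypothetical stand-ins (the original spreadsheet and feature set aren't public), but the RFE and LogisticRegression calls mirror the approach described above.

    import pandas as pd
    import pickle
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression

    # One row per team: stats as of American Thanksgiving, plus a 1/0 label
    # for whether that team ultimately made the playoffs.
    data = pd.read_excel("thanksgiving_team_stats.xlsx")
    X = data.drop(columns=["team", "season", "made_playoffs"])
    y = data["made_playoffs"]

    # RFE repeatedly fits the model and prunes the weakest feature
    # until only the requested number remain (eight, in this case).
    selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
    selector.fit(X, y)
    print(X.columns[selector.support_])   # the surviving features
    print(selector.estimator_.coef_)      # their fitted coefficients

    # Save the finalized model for later use.
    with open("playoff_model.pkl", "wb") as f:
        pickle.dump(selector, f)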

Criteria Valuation             

Starting with all the statistics I could collect for teams at Thanksgiving, I weeded out the less predictive variables until I landed on a group of 8. Using Recursive Feature Elimination (RFE), I repeatedly ran the model to see which variables were deemed most predictive and should be included. The factors listed below were deemed most predictive, in order of importance to the model.

[List: the most predictive criteria, in order of importance to the model]

While point percentage is the most predictive single statistic, others like shooting percentage, save percentage, and goals-for percentage add a bigger-picture perspective that improves the model's predictive ability.

The model determined that higher shots-for, shooting percentage, and save percentage all have a negative effect on whether a team ends up making the playoffs. For shooting percentage and save percentage, this is likely because the model has identified a PDO-like correlation, in which teams with a low save percentage and shooting percentage can be classified as "unlucky" and will eventually regress towards the norm. Additionally, the number of shots a team takes relative to its opponents has a negative correlation with making the playoffs. This could be due to score effects, which cause losing teams to generate more shots of lower quality. As the model shows, it is primarily high-danger chances that are predictive of making the playoffs, not just any shot.

The Results


Running the model, 13 of the 16 teams in a playoff spot as of March 1st (81.25%) were correctly classified as playoff teams. Furthermore, an additional 2 of the model's picks (Columbus and Colorado) sat only 1 point back of a playoff spot. In contrast, picking the playoff teams at Thanksgiving would only yield an 11-of-16 success rate (68.75%). Furthermore, 3 teams that were in a playoff position at Thanksgiving are no longer in the playoff race, compared to only 1 team (Buffalo) predicted by the model.

Outliers

Particularly interesting decisions made by the model include not picking the Rangers to make the playoffs, despite their leading the Metro at Thanksgiving, and selecting Vegas to make the playoffs despite a slow start.

One reason behind this choice could be New York's low number of ROW. With a mere 8 ROW in 22 games, the New York Rangers sat atop the Metropolitan Division largely thanks to their 4-0 record in shootouts. Seeing that the Rangers were playing so many close games, the model likely discounted their strength. Additionally, the Rangers had the 4th-lowest Corsi-for %, the 6th-lowest shots-for %, and the 9th-lowest scoring-chances-for %. As for points %, the Rangers ranked an underwhelming 13th in the league, leading the Metro only because the division was weak and they had played more games than their rivals. Given the Rangers' low marks across all these supporting criteria, the model predicted that they would not make the playoffs despite their stronger points % at Thanksgiving.

As for the Golden Knights, despite holding the 29th-best points % in the league, Vegas was among the league's top 4 in shots-for %, Corsi-for %, and scoring-chances-for %. Additionally, Vegas had the league's lowest PDO (SH% + SV%) at 95.66. With all of this considered, the model likely believed it was only a matter of time before the Golden Knights began winning.

Flaws in the Model

While my machine learning model appears able to out-predict the strategy of picking all playoff teams at Thanksgiving, two main limitations, both highlighted above, are the model's inability to account for the playoff format and its lack of data from various game states.

Unaware of the NHL's current playoff format, the model picked 9 Eastern Conference teams and only 7 Western Conference teams. Without a grasp of the alignment of divisions within the league, the model is at a disadvantage when picking teams, particularly when specific divisions or conferences are more "stacked" than others. It can therefore select a set of teams that would be impossible under the real format.

Furthermore, the data fed into the model was even-strength data only. While this provides a decent picture of a team's capability, teams that rely on their power play, as the Penguins traditionally have, may be disadvantaged and discounted. Incorporating special-teams data into the model would likely provide a fuller picture and a more accurate prediction.

Final Thoughts

While the model I have created is by no means perfect, it provides a unique perspective into not only the importance of the first 20 or so games of the season, but also which statistics beyond wins matter when classifying a playoff team. While the model appears to out-predict the strategy of selecting all playoff teams at Thanksgiving, it will be interesting to see in the years to come whether Thanksgiving stats retain this classifying power.

***All statistics gathered from Natural Stat Trick



How Important is Winning a Period in the NHL?

By: Adam Sigesmund (@Ziggy_14)

Sometimes when I watch hockey on television, the broadcast will display a stat that makes me cringe. One of my (least) favourites is a stat like the one displayed just under the score in the screenshot below:

[Broadcast screenshot: the Leafs' record when winning the first period]

Most of us have noticed these stats on broadcasts before. I imagine they are common because they match the game state (i.e. the Leafs are leading after the first period), so broadcasters probably believe we find them insightful. However, we are all smart enough to understand that teams should theoretically have a better record in games that saw them outscore their opponents in the first period. In this case, whatever amount of insight the broadcasters believe they are providing us with is merely an illusion. Perhaps they also saw value in the fact that the Leafs were undefeated in those 13 games, but that is not what I want to focus on today. 

More generally, my primary objective for this post is to shed light on the context behind this type of stat, mostly because broadcasts rarely provide it for us. Ultimately, I will examine 11 seasons worth of data to understand how the outcome of a specific period affects the number of standings points a team should expect to earn in that game. Yes, this means there will be binning*. And yes, I acknowledge that binning is almost always an inappropriate approach in any meaningful statistical analysis. The catch here is that broadcasters continue to display these binned stats without any context, and I believe it is important to understand the context of a stat we see on television many times each season.

* Binning is essentially dividing a continuous variable into subgroups of arbitrary size called "bins." In this case, we are dividing a 60-minute hockey game into three 20-minute periods.

A particular team wins a period by scoring more goals than their opponent. I looked at which teams won, lost, or tied each period by running some Python code through a data set provided by moneypuck.com. The data includes 13057 regular season games between the 2007-2008 and 2017-2018 seasons, inclusive. (Full disclosure: I’m pretty sure four games are missing here. My attempts to figure out why were unsuccessful, but I went ahead with this article because the rest of my code is correct, and 4 games out of over 13K is virtually insignificant anyways).  The table below displays our sample sizes over those eleven seasons:

[Table: how often the home team won, lost, or tied each period, 2007-08 through 2017-18]

Remember that when the home team loses, the away team wins, so the table with our results will be twice as large as the table above. I split the data into home and away teams because of home-ice advantage; home teams win more games than the visitors, which suggests that home teams win specific periods more often too. We can see this is true in the table shown above. In period 1, for example, the home team won 4585 times and lost only 3822 times. The remaining 4650 games saw first periods that ended in ties.

We want to know the average number of standings points the home team earned in games after winning, tying, or losing period 1. This will give us three values: one average for each outcome of the first period. We also want the same information for the away team, giving us a total of six different values for period 1. (This step is not redundant because of the "pity point" system, which awards one point to the losing team if they lost in overtime or the shootout. The implication is that some games result in two standings points while others end in three, so knowing which team won the game still does not tell us exactly how many points the losing team earned.) Repeating this process for periods 2 and 3 brings our total to 18 different values. The results are shown below:

[Table: average standings points earned by home and away teams, by outcome of each period]

The first entry in the table (i.e. the top left cell) tells us that when home teams win period 1, they end up earning an average of 1.65 points in the standings. We saw earlier that the home team has won the first period 4585 times, and now we know that they typically earn 1.65 points in the standings from those specific games. But if we ignore the outcome of each period and focus instead on the outcomes of all 13057 games in our sample, we find that the average team earns 1.21 points in the standings when playing at home. (This number comes from the sentence below the table; the two values there imply that the average NHL team finishes an 82-game season with around 91.43 points, which makes sense.) So, we know that home teams win an average of 1.21 points in general, but if they win the first period they typically earn 1.65 points. In other words, they jump from an expected points percentage of 60.5% to 82.5%. That is a significant increase.

However, in those 4585 games, the away team lost the first period because they were outscored by the home team. It is safe to say that the away team experienced a similar change, but in the opposite direction. Indeed, their expected gain decreased from 1.02 points (a general away game) to 0.54 points (after losing period 1 on the road). Every time your favourite team plays a road game and loses period 1, they are on track to earn 0.48 fewer standings points than when the game started; that is equivalent to dropping from a points percentage of 51% to 27%. Losing period 1 on the road is quite damaging, indeed.
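If you want to reproduce this kind of table yourself, the aggregation is a simple groupby. The sketch below assumes a game-level table with hypothetical column names; the moneypuck.com data would need to be rolled up to one row per game first.

    import pandas as pd

    # One row per game: first-period goals for each side, plus the standings
    # points each side earned (2, 1, or 0 under the pity-point system).
    games = pd.read_csv("games_2007_2018.csv")

    def p1_outcome(row):
        if row["home_p1_goals"] > row["away_p1_goals"]:
            return "win"
        if row["home_p1_goals"] < row["away_p1_goals"]:
            return "loss"
        return "tie"

    games["home_p1_result"] = games.apply(p1_outcome, axis=1)

    # Average standings points for the home team, conditioned on how the
    # first period went; repeat with the away columns for the road side.
    print(games.groupby("home_p1_result")["home_points"].mean())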

Another point of interest in these results, albeit an unsurprising one, is the presence of home-ice advantage in all scenarios. Regardless of how a specific period unfolds, the home team is always better off than the away team would be in the same situation.

I also illustrated these results in Tableau for those of you who are visual learners. The data is exactly the same as in the results table, but now it’s illustrated relative to the appropriate benchmark (1.21 points for home teams and 1.02 points for away teams).  

[Chart: points earned relative to the home/away benchmarks, by period outcome]

Now, let's reconsider the original stat for a moment. We know that when the Leafs won the first period, they won all 13 of those games. Clearly, they earned 26 points in the standings from those games alone. How many points would the average team have earned under the same conditions? While the broadcast did not specify which games were home or away, let's assume just for fun that 7 of them were at home and 6 were on the road. So, if the average team won 7 home games and 6 away games, and also happened to win the first period every time, they would have: 7(1.65) + 6(1.53) = 20.73 standings points. Considering that the Leafs earned 26, we can see they are about 5 points ahead of the average team in this regard. Alternatively, we can be nice and allow our theoretical "average team" to have home-ice advantage in all 13 games. This would bump them up to 13(1.65) = 21.45 points, which is still a fair amount below the Leafs' 26 points.

One issue with this approach is that weighted averages like the ones I found do not effectively illustrate the distribution of possible outcomes. All of us know it is impossible to earn precisely 1.65 points in the standings; the outcome is either 0, 1, or 2. An alternative approach involves measuring the likelihood of a team coming away with 2 points, 13 times in a row, given that all 13 games were played at home and that they won the first period every time. We know the average is 13(1.65) = 21.45 standings points, but how likely is that? It took a little extra work, but I calculated that the average team would have only a 3.86% chance of earning all 26 points available in those games. (I did this by finding the conditional probability of winning a specific game after winning the first period at home, and then multiplying that number by itself 13 times.) Although the probability for the Leafs is a touch lower than this, since there is a good chance a bunch of those 13 games were not played at home, you should not let such a low probability shock you; 13 games is a small sample, especially for measuring goals. There is definitely lots of luck mixed in there.
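As a quick sanity check, you can invert that 3.86% figure to recover the single-game probability it implies, using nothing but the number quoted above:

    # If winning all 13 games has probability p**13 = 0.0386, the implied
    # chance of winning a single game after winning period 1 at home is:
    p = 0.0386 ** (1 / 13)
    print(round(p, 3))   # ~0.779, i.e. roughly a 78% chance per game
    print(p ** 13)       # ~0.0386, recovering the quoted figure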

This brings us back to my original anecdote about cringing whenever I encounter this type of stat. Even if we acknowledge its fundamental flaw (scoring goals leads to wins, no matter when those goals occur in a game), the stat is virtually meaningless in a small sample. Goals are simply too rare to provide us with much insight in a sample of 13 games. Nevertheless, broadcasters will continue displaying these numbers without context. This article will not change that. So, the next time it happens, you can now compare that team to the league average over the past eleven seasons. Even if the stat is not shown on television, all you need to know is the outcome of a specific period to find out how the average team has historically performed under the same condition. At the very least, we have a piece of context that we did not have before.

Do Tired Defensemen Surrender More Rebounds?

By: Owen Kewell

Two thoughts popped into my mind, one after the other.

First, I wondered whether an NHL player’s performance fluctuated depending on how long they had been on the ice. Does short-term fatigue play a significant role over a single shift?

Second, I wondered how to quantify (and hopefully answer) this question.

The Data

Enter the wonderfully detailed shot dataset recently published by moneypuck.com. In it, we have over 100 features that describe the location and context of every shot attempt since the 2010-11 NHL season. You can find the dataset here: http://moneypuck.com/about.htm#data.

Within this data I found two variables to test my idea. First, the average number of seconds that the defending team’s defensemen had been on the ice when the attacking team’s shot was taken. The average across all 471,898 shots was 34.2 seconds, if you’re curious. With this metric I had a way to quantify the lifespan of a shift, but what variable could be used as a proxy for performance?

Fortunately, the dataset also says whether each shot was a rebound shot. To assess defensive performance, I decided to use the rate at which shots against were rebounds. Recovering loose pucks in your own end is a fundamental part of the job description for NHL defensemen, especially in response to your goalie making a save. Should the defending team fail to recover the puck, the attacking team could generate a rebound shot, which would often result in a goal against. We can see evidence of this in the 5v5 data:

[Chart: rebound shooting % is 3.6x larger than non-rebound shooting %]

The takeaway here is that 24.1% of rebound shots go into the net, compared to just 6.7% of non-rebound shots. Rebounds are much closer to the net on average, which can explain much of this difference.
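Computing those two shooting percentages from the shot data takes only a couple of lines; the column names below are hypothetical stand-ins for the flags in the moneypuck set.

    import pandas as pd

    shots = pd.read_csv("shots_2010_2018.csv")
    ev = shots[shots["is_5v5"] == 1]

    # The mean of the goal flag within each group is the shooting percentage.
    print(ev.groupby("is_rebound")["is_goal"].mean())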

I believe that a player's ability to recover loose pucks is a function of their ability to anticipate where the puck is going to be and their quickness in getting there first. While anticipation is a mental talent, quickness is physical, meaning that a defender's quickness could deteriorate over the course of their shift as short-term fatigue sets in. Could their ability to prevent rebound shots be consequently affected? Let's plot that relationship:

[Scatter plot: rebound rate vs. the defending pairing's shift length]

There’s a lot going on here, so let’s break it down.

The horizontal axis shows the average shift length of the defending defense pairing at the time of the shot against. I cut the range off at 90 seconds because data became scarce after that; pairings normally don’t get stuck on the ice for more than a minute and a half at 5v5. The vertical axis shows what percentage of all shots against were rebounds.

Each blue dot represents the rebound rate for all shots that share a shift length, meaning that there are 90 data points, or one for each second. The number of total shots ranges from 382 (90 seconds) to 8,124 (27 seconds). Here’s the full distribution:

[Chart: number of shots against at each shift length]

We can see that sample size is an inherent limitation for long shifts. The number of shots against drops under 1,000 for all shift lengths above 74 seconds, which means that the conclusions drawn from this portion of the data need to be taken with a grain of salt. This sample size issue also explains the plot’s seemingly erratic behaviour towards the upper end of the shift length range, as percentage rates of relatively rare events (rebounds) tend to fluctuate heavily in smaller sample sizes.
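For reference, the per-second series plotted above can be built with a single groupby; the column names below are hypothetical stand-ins for the fields in the shot data set.

    import pandas as pd

    shots = pd.read_csv("shots_2010_2018.csv")
    ev = shots[shots["is_5v5"] == 1]

    # Bucket each shot by the defending pairing's average shift length
    # (rounded to the nearest second, capped at 90), then take the share
    # of shots in each bucket that were rebounds.
    sec = ev["def_pair_shift_seconds"].round().clip(1, 90)
    rebound_rate = ev.groupby(sec)["is_rebound"].mean() * 100
    print(rebound_rate)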

The Model

Next, I wanted to create a model to represent the trend in the observed data. The earlier scatter plot suggests that the relationship between shift length and rebound rate is non-linear, so I decided to model the data with a polynomial function. But what should this function's degree be? I capped the range of possibilities at degree = 5 to avoid over-fitting the data, and then set out to systematically identify the best model.

It’s common practice to split data into a training set and a testing set. I subjectively chose a split of 70-30% for training and testing, respectively. This means that the model was trained using 70% of all data points, and then its ability to predict previously unseen data was measured using the remaining 30%. Model accuracy can be measured by any number of metrics, but I decided to use the root mean squared error (RMSE) between the true data points and the model’s predictions. RMSE, which penalizes large model errors, is among the most popular and commonly-used error functions. I conducted the 70-30 splitting process 10,000 times, each time training and testing five different models (one each of degree 1, 2, 3, 4, and 5). Of the five model types, the 5th degree function produced the lowest root mean squared error (and therefore the highest accuracy) more often than the degree 1, 2, 3 or 4 functions. This tells us that the data is best modelled by a 5th degree polynomial. Fitting a normalized 5th degree function produced the following equation:

[Equation: the fitted, normalized 5th-degree polynomial, where x = shift length in seconds]
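As an aside, the degree-selection procedure described above condenses into a short loop. The y series below is a placeholder for the actual 90 observed rebound rates.

    import numpy as np

    x = np.arange(1, 91)       # shift length in seconds
    y = np.random.rand(90)     # placeholder for the observed rebound rates

    rng = np.random.default_rng(0)
    wins = {d: 0 for d in range(1, 6)}
    for _ in range(10_000):
        idx = rng.permutation(90)
        train, test = idx[:63], idx[63:]   # a 70-30 split
        rmse = {}
        for d in range(1, 6):
            coeffs = np.polyfit(x[train], y[train], deg=d)
            pred = np.polyval(coeffs, x[test])
            rmse[d] = np.sqrt(np.mean((pred - y[test]) ** 2))
        wins[min(rmse, key=rmse.get)] += 1  # credit the most accurate degree

    print(wins)   # the degree that wins most often best fits the data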

This equation is less interesting than the curve that it represents, so let’s look at that:

[Chart: the 5th-degree polynomial fit overlaid on the rebound rate data]

What Does It Mean?

The regression appears to generally do a good job of fitting the data. Our r-squared value of 0.826 tells us that ~83% of the variance in ‘Rebound %’ is explained by defensemen shift length, which is encouraging. Let’s talk more about the function’s shape.

[Table: rebound rate first differences decrease at first as the rate stabilizes, then increase further]

As defense pairings spend more time on the ice, they tend to surrender more rebound shots, meaning that they recover fewer defensive-zone loose pucks. Pairings early in their shift (< 20 seconds) surrender relatively few rebound shots, but there's likely a separate explanation for this. It's common for defensemen to change when the puck is in the other team's end, meaning that their replacements often get to start shifts with the puck over 100 feet away from the net they're defending. For a rebound shot to be surrendered, the opposing team would need to recover possession, transition to offense, enter the zone and generate a shot. These events take time, which likely explains why rebound rates are so low in the first 15-20 seconds of a shift.

We can see that rebound rates begin to stabilize after this threshold. The rate is most flat at the 34 second mark (5.9%), after which the marginal rate increase begins to grow for each additional second of ice time. This pattern of increasing steepness can be seen in the ‘Rebound Rate Increase’ column of the above chart and likely reflects the compounding effects of short-term fatigue felt by defensemen late in their shifts, especially when these shifts are longer than average. The sample size concerns for long shifts should again be noted, as should the accompanying skepticism that our long-shift data accurately represent their underlying phenomenon.

The main strategic implications of these findings relate to optimal shift length. The results confirm the age-old coaching mantra of ‘keep the shifts short’, showing a positive correlation between shift length and rebound rates. Defensemen shift lengths should be kept to 34 seconds or less, ideally, since the data suggests that performance declines at an increasingly steep rate beyond this point. Further investigation is needed, however, before one can conclusively state that this is the optimal shift length.

Considering that allowing 4 rebound shots generally translates to a goal against, it’s strategically imperative to reduce rebound shot rates by recovering loose pucks in the defensive zone. Better-rested defensemen are better able to recover these pucks, as suggested by the strong, positive correlation between defensemen shift length and rebound rates. While further study is needed to establish causation, proactively managing defensive shift lengths appears to be a viable strategy to reduce rebound shot rates. 

Any hockey fan could tell you that shifts should be kept short, but with the depth of available data we're increasingly able to figure out exactly how short they should be.

In Search of Similarity: Finding Comparable NHL Players

By: Owen Kewell

The following is a detailed explanation of the work done to produce my public player comparison data visualization tool. If you wish to see the visualization in action it can be found at the following link, but I wholeheartedly encourage you to continue reading to understand exactly what you’re looking at:

https://public.tableau.com/profile/owen.kewell#!/vizhome/PlayerSimilarityTool/PlayerSimilarityTool

NHL players are in direct competition with hundreds of their peers. The game-after-game grind of professional hockey tests these individuals on their ability to both generate and suppress offense. As a player, it’s almost guaranteed that some of your competitors will be better than you on one or both sides of the puck. Similarly, you’re likely to be better than plenty of others. It’s also likely that there are a handful of players league-wide whose talent levels are right around your own.

The NHL is a big league. In the 2017-18 season, 759 different skaters suited up for at least 10 games, including 492 forwards and 267 defensemen. In such a deep league, each player should be statistically similar to at least a handful of their peers. But how to find these league-wide comparables?

Enter a bit of helpful data science. Thanks to something called Euclidean distance, we can systemically identify a player’s closest comparables around the league. Let’s start with a look at Anze Kopitar.

[Screenshot: Anze Kopitar's closest offensive and defensive comparables around the league]

The above graphic is a screenshot of my visualization tool.

With the single input of a player’s name, the tool displays the NHL players who represent the five closest offensive and defensive comparables. It also shows an estimate of the strength of this relationship in the form of a similarity percentage.

The visualization is intuitive to read. Kopitar’s closest offensive comparable is Voracek, followed by Backstrom, Kane, Granlund and Bailey. His closest defensive comparables are Couturier, Frolik, Backlund, Wheeler, and Jordan Staal. All relevant similarity percentages are included as well.

The skeptics among you might be asking where these results come from. Great question.

 

A Brief Word on Distance

The idea of distance, specifically Euclidean distance, is crucial to the analysis that I've done. Euclidean distance is a fancy name for the length of the straight line that connects two different points of data. You may not have known it, but it's possible that you used Euclidean distance during high school math to find the distance between two points in (X,Y) Cartesian space.

Now think of any two points existing in three-dimensional space. If we know the details of these points then we’re able to calculate the length of the theoretical line that would connect them, or their Euclidean distance. Essentially, we can measure how close the data points are to each other.

Thanks to the power of mathematics, we’re not constrained to using data points with three or fewer dimensions. Despite being unable to picture the higher dimensions, we've developed techniques for measuring distance even as we increase the complexity of the input data.

 

Applying Distance to Hockey

Hockey is excellent at producing complex data points. Each NHL game produces an abundance of data for all players involved. This data can, in turn, be used to construct a robust statistical profile for each player.

As you might have guessed, we can calculate the distance between any two of these players. A relatively short distance between a pair would tell us that the players are similar, while a relatively long distance would indicate that they are not similar at all. We can use these distance measures to identify meaningful player comparables, thereby answering our original question.

I set out to do this for the NHL in its current state.

 

Data

First, I had to determine which player statistics to include in my analysis. Fortunately, the excellent Rob Vollman publishes a data set on his website that features hundreds of statistics combed from multiple sources, including Corsica Hockey (http://corsica.hockey/), Natural Stat Trick (https://naturalstattrick.com) and NHL.com. The downloadable data set can be found here: http://www.hockeyabstract.com/testimonials. From this set, I identified the statistics that I considered to be most important in measuring a player’s offensive and defensive impacts. Let’s talk about offense first.

[List: offensive similarity input statistics]

I decided to base offensive similarity on the above 27 statistics. I’ve grouped them into five categories for illustrative purposes. The profile includes 15 even-strength stats, 7 power-play stats, and 3 short-handed stats, plus 2 qualifiers. This 15-7-3 distribution across game states reflects my view of the relative importance of each state in assessing offensive competence. Thanks to the scope of these statistical measures, we can construct a sophisticated profile for each player detailing exactly how they produce offense. I consider this offensive sophistication to be a strength of the model.

While most of the above statistics should be self-explanatory, some clarification is needed for others. ‘Pass’ is an estimate of a player’s passes that lead to a teammate’s shot attempt. ‘IPP%’ is short for ‘Individual Points Percentage’, which refers to the proportion of a team’s goals scored with a player on the ice where that player registers a point. Most stats are expressed as /60 rates to provide more meaningful comparisons.

You might have noticed that I double-counted production at even-strength by including both raw scoring counts and their /60 equivalent. This was done intentionally to give more weight to offensive production, as I believe these metrics to be more important than most, if not all, of the other statistics that I included. I wanted my model to reflect this belief. Double-counting provides a practical way to accomplish this without skewing the model’s results too heavily, as production statistics still represent less than 40% of the model’s input data.

Now, let's look at defense.

[List: defensive similarity input statistics]

Defensive statistical profiles were built using the above 19 statistics. This includes 15 even-strength stats, 2 short-handed stats, and the same 2 qualifiers. Once again, even-strength defensive results are given greater weight than their special teams equivalents.

Sadly, hockey remains limited in its ability to produce statistical measurements of individual defensive talent. It’s hard to quantify events that don’t happen, and even harder to properly identify the individuals responsible for the lack of these events. Despite this, we still have access to a number of useful statistics. We can measure the rates at which opposing players record offensive events, such as shot attempts and scoring chances. We can also examine expected goals against, which gives us a sense of a player’s ability to suppress quality scoring chances. Additionally, we can measure the rates at which a player records defense-focused micro-events like shot blocks and giveaways. The defensive profile built by combining these stats is less sophisticated than its offensive counterpart due to the limited scope of its components, but the profile remains at least somewhat useful for comparison purposes.

 

Methodology

For every NHLer to play 10 or more games in 2017-18, I took a weighted average of their statistics across the past two seasons. I decided to weight the 2017-18 season at 60% and the 2016-17 season at 40%. If the player did not play in 2016-17, then their 2017-18 statistics were given a weight of 100%. These weights represent a subjective choice made to increase the relative importance of the data set’s more recent season.
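A small sketch of that weighting step, assuming two per-season tables indexed by player; the file names and layout are hypothetical.

    import pandas as pd

    s18 = pd.read_csv("stats_2017_18.csv", index_col="player")
    s17 = pd.read_csv("stats_2016_17.csv", index_col="player")

    # 60/40 blend of the two seasons, aligned on player name.
    weighted = 0.6 * s18 + 0.4 * s17.reindex(s18.index)

    # Players with no 2016-17 data get 100% weight on 2017-18.
    missing = s17.reindex(s18.index).isna().all(axis=1)
    weighted.loc[missing] = s18.loc[missing]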

Having taken this weighted average, I constructed two data sets; one for offense and the other for defense. I imported these spreadsheets into Pandas, which is a Python package designed to perform data science tasks. I then faced a dilemma. Distance is a raw quantitative measure and is therefore sensitive to its data’s magnitude. For example, the number of ‘Games Played’ ranges from 10-82, but Individual Points Percentage (IPP%) maxes out at 1. This magnitude issue would skew distance calculations unless properly accounted for.

To solve this problem, I proportionally scaled all data to range from 0 to 1. 0 would be given to the player who achieved the stat’s lowest rate league-wide, and 1 to the player who achieved the highest. A player whose stat was exactly halfway between the two extremes would be given 0.5, and so on. This exercise in standardization resulted in the model giving equal consideration to each of its input statistics, which was the desired outcome.

I then wrote and executed code that calculated the distance between a given player and all others around the league who share their position. This distance list was then sorted to identify the other players who were closest, and therefore most comparable, to the original input player. This was done for both offensive and defensive similarity, and then repeated for all NHL players.
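Concretely, the scaling and distance steps look something like the sketch below; the file and column layout are placeholders, and position filtering is omitted for brevity.

    import numpy as np
    import pandas as pd

    stats = pd.read_csv("offense_weighted.csv", index_col="player")

    # Scale every stat to the 0-1 range so no single stat dominates.
    scaled = (stats - stats.min()) / (stats.max() - stats.min())

    def closest_comparables(player, n=5):
        """Rank all other players by Euclidean distance to `player`."""
        diffs = scaled - scaled.loc[player]
        dists = np.sqrt((diffs ** 2).sum(axis=1)).drop(player)
        return dists.nsmallest(n)

    print(closest_comparables("Anze Kopitar"))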

This process generated a list of offensive and defensive comparables for every player in the league. I consider these lists to be the true value, and certainly the main attraction, of my visualization tool.

Not satisfied with simply displaying the list of comparable players, I wanted to contextualize the distance calculations by transforming them into a measure that was more intuitively meaningful and easier to communicate. To do this, I created a similarity percent measure with a simple formula.

Similarity(A, B) = 1 - d(A, B) / d(A, C)

In the above formula, A is the input player, B is their comparable that we’re examining, and C is the player least similar to A league-wide. For example, if A->B were to have a distance of 1 and A->C a distance of 5, then the A->B similarity would be 1 - (1/5), or 80%. Similarity percentages in the final visualization were calculated using this methodology and provide an estimate of the degree to which two players are comparable.

 

Limitations

While I wholeheartedly believe that this tool is useful, it is far from perfect. Due to a lack of statistics that measure individual defensive events, the accuracy of defensive comparisons remains the largest limitation. I hope that the arrival of tracking data facilitates our ability to measure pass interceptions, gap control, lane coverage, forced errors, and other individual defensive micro-events. Until we have this data, however, we must rely on rates that track on-ice suppression of the opposing team’s offense. On-ice statistics tend to be similar for players who play together often, which causes the model to overstate defensive similarity between common linemates. For example, Josh Bailey rates as John Tavares’ closest defensive comparable, which doesn’t really pass the sniff test. For this reason, I believe that the offensive comparisons are more relevant and meaningful than their defensive counterparts.

 

Use Scenarios

This tool’s primary use is to provide a league-wide talent barometer. Personally, I enjoy using the visualization tool to assess relative value of players involved in trades and contract signings around the league. Lists of comparable players give us a common frame through which we can inform our understanding of an individual's hockey abilities. Plus, they’re fun. Everyone loves comparables.

The results are not meant to advise, but rather to entertain. The visualization represents little more than a point-in-time snapshot of a player’s standing around the league. As soon as the 2018-19 season begins, the tool will lose relevance until I re-run the model with data from the new season. Additionally, I should explicitly mention that the tool does not have any known predictive properties.

If you have any questions or comments about this or any of my other work, please feel free to reach out to me. Twitter (@owenkewell) will be my primary platform for releasing all future analytics and visualization work, and so I encourage you to stay up to date with me through this medium.

What's a Corsi Anyway?: An Intro to Hockey Analytics

By: Owen Kewell, Scott Schiffner, Adam Sigesmund (@Ziggy_14), Anthony Turgelis (@AnthonyTurgelis)

Advanced statistics have picked up steam over the past decade and shifted into hockey's mainstream. Many NHL teams now employ full-time analytics staff dedicated to breaking down the numbers behind the game. So, what makes analytics such a powerful tool? Aside from helping you dominate your next fantasy hockey pool, advanced statistics provide potent insights into what is really causing teams to win or lose.

Hockey is a sport that has long been misunderstood. Its gameplay is fundamentally volatile, spontaneous and difficult to follow. There are countless different factors that contribute to a team’s chances of scoring a goal or winning a game on a nightly basis. While many in Canada would beg to differ, ice hockey still firmly occupies last place in terms of revenue and fan support amongst the big four major North American sport leagues (NFL, MLB, NBA, & NHL). As such, hockey is on the whole overlooked and is often the last to implement certain changes that come about in professional sports. The idea of a set of advanced statistics that would offer better insights into the game arose as other major sports leagues, starting with Major League Baseball, began looking beyond superficial characteristics and searching for the underlying numbers influencing outcomes. Coaches, players, and fans alike have all been subjected over the years to an epidemic failure to truly understand what is happening out there on the ice. This is the motivation behind the hockey analytics movement: to use data analysis to enhance and develop our knowledge of ice hockey and inform decision-making for the benefit of all who wish to understand the sport better.

Another barrier to progress in the field of hockey analytics is the hesitance of the sport to embrace modern statistics. Most casual fans are familiar with basic stats such as goals, assists, PIM, and plus/minus. But do these stats really tell the full story? In fact, most of these are actually detrimental to the uninformed fan’s understanding of the game. For starters, there is usually no distinction between first and second assists in traditional stat-keeping. A player could have touched the puck thirty seconds earlier in the play or made an unbelievable pass to set up a goal, and either way it still counts as a single assist on the scoresheet. Looking only at goals and assists can be deceiving; we need more reliable, repeatable metrics to determine which players are most valuable to their teams. Advanced stats are all about looking beyond the surface and identifying what’s actually driving the play.

So, what are these so-called “advanced stats”? Let’s start with the basics.

PDO: PDO (it doesn't stand for anything) is defined as a team's save percentage (usually at 5v5) plus its shooting percentage, with a league average of 1.000; you will often see it multiplied by 100, so that the average appears as 100. If you only learn one concept, it's this one. It is usually regarded as a measure of a team or player's luck, and can be a useful indicator that a player is under- or over-performing and whether a regression to the mean (back towards 1.000) is likely. This will not happen in every situation, of course, but watch for teams with astronomical PDOs to hit a reality check sooner rather than later. Team PDO stats can be found on corsica.hockey's team stats page.

Without trying to scare anyone, the Toronto Maple Leafs currently boast the 4th-highest PDO at 101.85. To help ease your mind a bit, the Tampa Bay Lightning, who are considered the team to beat in the East, have the highest PDO at 102.35, with a decent gap back to second place. They, too, could be playing at a higher level than they really are; time will tell.
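The arithmetic itself is trivial; the value is in the interpretation. A made-up example on the 100-point scale:

    # Illustrative numbers only: a 9.0% team shooting percentage and a .915
    # save percentage combine for a PDO of 100.5, slightly on the lucky side.
    def pdo(shooting_pct, save_pct):
        return shooting_pct + save_pct

    print(pdo(9.0, 91.5))   # 100.5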

Corsi: You may have heard terms like Corsi and/or Fenwick being thrown around before. These are core concepts that are fundamental to understanding what drives the play during a game. Basically, Corsi is an approximation of puck possession that counts the total shot attempts for and against your team; Corsi results can also be viewed for when a specific player is on the ice.

A shot attempt is defined as any time the puck is directed at the goal, including shots on net, missed shots, and blocked shots. Anything above 50% possession is generally seen as being positive as you are generating more shot attempts than you are allowing.

Corsi stats are typically kept in the following ways: Corsi For (CF), Corsi Against (CA), +/-, and CF%. An example of how CF% can be useful is when evaluating offensive defensemen. Sometimes, these players are overvalued because of their noticeable offensive production, while failing to consider that their shaky defensive game offsets the offensive value they provide. 

Fenwick: Fenwick is similar to Corsi, but excludes shot attempts that are blocked. Of course, with both of these stats, one should also take into account that a player’s possession score is influenced by both their linemates as well as the quality of competition (QoC). These stats can always be adjusted to reflect different game scenarios, like whether the team was up or down by a goal at the time, etc.
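To make the definitions concrete, here is a toy calculation of both metrics from raw attempt counts:

    # Corsi counts all attempts: shots on goal + misses + blocked attempts.
    def cf_pct(sog_for, miss_for, block_for, sog_against, miss_against, block_against):
        cf = sog_for + miss_for + block_for
        ca = sog_against + miss_against + block_against
        return 100 * cf / (cf + ca)

    # Fenwick drops blocked attempts from both sides.
    def ff_pct(sog_for, miss_for, sog_against, miss_against):
        ff = sog_for + miss_for
        fa = sog_against + miss_against
        return 100 * ff / (ff + fa)

    print(round(cf_pct(30, 12, 8, 25, 10, 7), 1))   # 54.3, a positive possession share
    print(round(ff_pct(30, 12, 25, 10), 1))         # 54.5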

Measuring puck possession in hockey makes sense, because the team that has the puck on their stick more often controls the play. Granted, Corsi/Fenwick are far from perfect, and the team with the better possession metrics doesn’t always come out ahead. But at the very least, including all shot attempts offers a much larger sample size of data than traditional stats, and provides a solid foundation for further analysis.

Zone Starts (ZS%): this measures the proportion of the time that a player starts a shift in each area of the ice (offensive zone vs. defensive zone). A ZS% of greater than 50% tells us that the player is deployed in offensive situations more frequently than defensive situations. This is important because it gives us insight into a player’s usage, or in what scenarios he is normally deployed by his team’s coach. It also provides context for interpreting a player’s Corsi/Fenwick. Players who are more skilled offensively will tend to have a higher ZS% because they give the team a better chance to take advantage of the offensive zone faceoff and generate scoring opportunities. At the very least, ZS% can be used to get a glimpse at how a coach favors a player’s skillset.

Intro on 5v5 Isolated Stats and Repeatability

Oftentimes, you will see those who work with hockey analytics cite a player's stats solely at even strength, or 5v5. Why? There are a few reasons.

First, 5v5 obviously takes up most of the hockey game. If a player is valuable to his team at 5v5, he will be valuable to a team for more time throughout the game, and this should be seen as a large positive. A player's power play contributions are certainly valuable to a team, but often over-valued. Next, the game is played very differently at different states. It would be wildly unfair to penalty killers to have their penalty kill stats included in their overall line, as more goals against are scored on the penalty kill, even for the best penalty killers. Separating these statistics helps provide a more complete picture into the player's skillset and value that they have contributed to their team. Finally, 5v5 stats are generally regarded as the most repeatable, partially due to the larger sample. While players' PP and PK stats can highly vary by year, 5v5 stats typically remain relatively stable (read more at PPP here if you like).

In addition, primary points (goals and first assists) have been regarded as relatively repeatable stats, so be on the lookout for players with many secondary assists to possibly have their point totals regress in the future (read more on this here).

Intro to Comparison Tools

One of the areas that has most benefited from hockey analytics is the domain of player comparison. One of the best and most intuitive tools is the HERO chart, as pioneered by Domenic Galamini Jr (@MiminoHero). The HERO chart is a quick comparison of how players stack up across ice time, goal scoring, primary assists, shot generation and shot suppression. At a single glance, we can get a sense of the strengths and contributions of different players. Here we compare Sidney Crosby to Connor McDavid:

[Screenshot: HERO chart comparing Sidney Crosby and Connor McDavid]

We can see that Crosby is better at goal-scoring and shot generation, while McDavid is better at primary assists and shot suppression.

To compare any two players of your choice, or to compare a player to a positional archetype like First-Line Centre or Second-Pair Defender, you can use Galamini’s website: http://ownthepuck.blogspot.ca/. These comparisons can be used to enhance understanding of a player’s skill set, inform debates, and evaluate moves made by NHL teams, among other uses.

All-3-Zone Data Visualizations:

While a HERO chart is an all-encompassing snapshot of a player's contributions on the ice, the All-Three-Zones visuals are concerned with more specific aspects of the game. CJ Turtoro (@CJTDevil) created two sets of visuals using data from Corey Sznajder's (@ShutdownLine) massive tracking project.

You can find both sets of visuals at the links below:

  1. https://public.tableau.com/profile/christopher.turtoro#!/vizhome/ZoneTransitionsper60/5v5Entries

  2. https://public.tableau.com/profile/christopher.turtoro#!/vizhome/2-yearA3ZPlayerComps/ComparisonDashboard

In the first set of visuals, you will find 4 leaderboards. Players are ranked in the 5v5 stats listed below.

  • 5v5 Entries -- How often players enter the offensive zone by making a clean pass to a teammate (Entry passes/60) or by carrying the puck across the blue line themselves (Carry-ins/60).

Other notes: The best way to enter the zone is to enter with possession of the puck (Entry passes + Carry-ins, as discussed above). These types of entries are called Possession Entries. Although other types of attempts are included in the leaderboard as well, players are automatically sorted by Possession Entries/60 because these alternative attempts are less than ideal. If you decide to change this, use the “Sort By (Entries)” filter to rank the players in other ways.

  • 5v5 Exits -- This is the same as 5v5 entries, except at the blue line separating the defensive zone from the neutral zone. Players are ranked based on how often they transition the puck from the defensive zone into the neutral zone either by carrying it (Carries/60) or by passing it to a teammate (Exit Passes/60).

Other notes: Like 5v5 entries, the best ways to exit the defensive zone are classified as Possession Exits. This is why players are automatically sorted by Possession Exits/60. Again, the “Sort By (Exits)” filter will let you change how the leaderboard is sorted.

  • 5v5 Entries per Target (5v5 Entry Def %) -- This stat measures defence at the blue line. It answers the question: When a defender is in proximity to an attempted zone entry, how often does he stop the attempt?

Other Notes: It is important to note that a “defender” is any player on the team playing defence (i.e. the team without the puck). Forwards are included in this definition of defender, but the best way to use this leaderboard is to judge defensemen only. This is why forwards are automatically filtered out of the leaderboard, but you can always change this using the filter if you wish.

  • 5v5 Shots and Passes -- Players are ranked based on how often they contribute to shots. A player contributes to a shot by taking it themselves or by making one of the three passes immediately preceding it, much as they earn points by scoring a goal or recording one of the two assists before a goal.

If you want a closer look at certain groups of players, the filters allow you to look at players who play certain positions (forwards/defencemen) and players who play on certain teams. In the screenshot below, for example, I filtered the 5v5 Entries leaderboard to see what it looks like for forwards on the Oilers:

[Screenshot: the 5v5 Entries leaderboard filtered to Oilers forwards]

You can use these leaderboards to judge offence (5v5 entries, 5v5 shot contributions), and defence (5v5 exits, 5v5 Entry Def %). Ultimately, these four leaderboards will help you identify the best and worst players in these areas.

In order to focus on one or two players, you should use the second set of visuals: the A3Z Player Comparison Tool. While HERO charts allow for player comparisons in stats collected by the NHL, this visualization was designed to help you judge players based on their performance in several stats from the tracking project. Instead of standard deviations, however, the measurement of choice in this comparison tool is percentiles. So keep in mind that "100" means the result is better than 100% of the other results. You can view a player's results in two 1-year windows and one 2-year window, covering the 2016-17 and 2017-18 seasons. Here's a two-year snapshot of how Erik Karlsson and Sidney Crosby rank in some of these key stats:

[Screenshot: A3Z two-year comparison of Erik Karlsson and Sidney Crosby]

You probably noticed that the stats for forwards and defencemen are slightly different. The only difference is that defencemen have three extra categories, which measure their ability (or lack thereof) to defend their own blue line (i.e. their 5v5 Entries per Target, as discussed in the previous section). You may have also noticed some useful information hidden beneath each player's name, including the number of games and minutes that have been tracked for that player. Although the numbers in the screenshot above span two seasons, keep in mind that you can also follow a player's development by looking at their stats in one-year windows. To see what I mean, take a look at Nikita Zaitsev's numbers in two consecutive seasons:

[Screenshot: Nikita Zaitsev's one-year A3Z numbers in consecutive seasons]

Visualizing the dramatic fall of Nikita Zaitsev in this way is an excellent starting point for further analysis. Likewise, you can also compare two different players in the same season or over two seasons. This is, after all, a Player Comparison Tool. Other common uses for both sets of A3Z visualizations are to identify strengths and weaknesses of certain players, to evaluate potential acquisitions, to design the optimal lineup for your favourite team, and many more.

Of course, there are countless other useful terms and concepts to consider in analytics, like relative stats, shot quality, and expected goals (xG), which we’ll be touching upon more in-depth in future articles. If you’re interested in advanced stats and would like to learn more, we’ll be putting out more content on exciting topics in hockey analytics over the coming months, so stay tuned.



Advanced Baseball Stats for Casual Baseball Fans

By Anthony Turgelis

We’ve all seen Moneyball. If you haven’t seen Moneyball, go see Moneyball, it’s on Netflix. The ‘Moneyball Revolution’ within baseball has shaken up the game, and changed the way that executives in baseball are looking at the game.

This will be an intro to some of the stats, metrics, and concepts that these executives are looking at. The goal here isn’t just to define what these things are, but rather to show how they can be used as tools of evaluation, to confirm the eye-test, or to just enhance the experience of the game. You might even end up sounding smart in front of your friends. When writing this article, I tried to include everything I wish I knew when first diving into the world of baseball analytics.

To avoid boring you with the history of how this Moneyball Revolution came to be, I’ll only drop one name that you should be familiar with - Bill James. Bill can be credited for being the pioneer of statistical analysis within baseball, as in the 1970s he was one of the first to publish this type of work that would be seen by a wide audience. Many people found his work fascinating, and attempted to replicate it, and - to make a long story short - after 30 years of this, the MLB finally took notice and the Moneyball Revolution began.

Concepts/Terms to Know:

The majority of these terms and concepts have been taken from Fangraphs, which is a site to find many advanced baseball stats and analysis. Links on where to find these concepts/stats will be provided.

Fielding Independent Pitching (FIP) - FIP is an adjusted Earned Run Average (ERA: earned runs allowed per nine innings) metric that attempts to quantify what a pitcher's value would be if you stripped out the defensive component of the game. FIP assumes that all balls hit into play get league-average results on whether they fall for a hit or not. This way, a pitcher is not penalized for having a bad defense behind him, which would certainly affect his pitching results, and his ERA as a result. FIP is considered predictive because it correlates with itself across seasons better than ERA does, which makes sense considering it measures things the pitcher can control and not things like defense, which can fluctuate by game and by season. It is adjusted so that the league-average FIP equals the league-average ERA. This makes it easy to compare a player's FIP to his ERA to see if he is over- or under-performing it, and whether regression may be in store. There are pitchers who can consistently outperform their FIP, such as Marco Estrada, who in 2015-16 was elite at inducing weak contact (which can be considered a skill); FIP, by assuming league-average results on balls in play, would likely paint him as less effective than he actually was. On the other hand, his ERA did balloon to 4.98 in 2017 after significantly outperforming his FIP the previous two years, so the regression bug may have caught up with him as well.
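For reference, the standard published formula weights the three outcomes a pitcher controls most, plus a yearly constant that recenters the league average onto the ERA scale. In code:

    # FIP = (13*HR + 3*(BB + HBP) - 2*K) / IP + constant (roughly 3.10 most years)
    def fip(hr, bb, hbp, k, ip, fip_constant=3.10):
        return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + fip_constant

    print(round(fip(hr=20, bb=45, hbp=5, k=140, ip=180.0), 2))   # 3.82, a made-up line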

FIP can be found on Fangraphs pitcher pages, such as Marco Estrada’s, next to ERA, where you will find his 2017 FIP to be 4.61.

Batting Average on Balls in Play (BABIP) - BABIP is a player's batting average on only the balls he puts into play: (H - HR) / (AB - K - HR + SF). The league average is roughly .300 for both hitters and pitchers. The reason this is a very important stat is that it only tends to stabilize after roughly 800 balls in play. This means that if a player has a stretch of months (or even a whole year) with a much higher or lower BABIP than the league average and his own career average, he is likely due for some regression, having probably gotten lucky or unlucky on the results of his balls in play. It's worth noting that better hitters will generally sustain higher BABIPs, and vice versa, and some players maintain high BABIPs throughout their careers without regression. The 2017 Toronto Blue Jays hitters ranked dead last in the entire MLB in BABIP, which can be seen as a source of optimism that they may get better results on their balls in play in 2018.

BABIP can be found on Fangraphs pitcher/batter pages, such as fringe prospect Dwight Smith Jr.'s. Smith rode a .588 BABIP in 2017 to a .370 batting average; given how luck-driven that average likely was in light of his ridiculous BABIP, he still earned a demotion and will likely not get an early look to crack the 2018 team.

Hit Probability - To temporarily stray from Fangraphs, Hit Probability is a metric introduced by Statcast at the beginning of the 2017 season to estimate the likelihood that a ball in play will be a hit, based on how similarly-hit balls (by launch angle and exit velocity) have fared in the past. Like FIP, it attempts to negate the effects of defense and ballpark, whether a batter has high-probability hits robbed by star outfielders making unlikely plays or gets credit for weak hits that would likely not be repeated. I did an analysis of how luck was affecting the 2017 Blue Jays based on their hit probabilities, and throughout the season I saw players regress to the averages their Hit Probability numbers implied. The most extreme case was Devon Travis, who had a cold start despite strong aggregate Hit Probability numbers and who, as the season progressed, positively regressed to the expected level. The quarter-season report can be found here, and the mid-season report can be found here.

Hit Probability statistics can be found on Baseball-Savant here, where you can select any game and see the hit probabilities for all balls in play for that game.

Weighted Runs Created Plus (wRC+) - wRC+ is an attempt to quantify a player's total offensive output in one stat, based on the run value of his contributions, after park adjustments. It uses the concept of Weighted On-Base Average (wOBA), which assigns a run value to each plate-appearance outcome. For example, a triple contributes to run scoring roughly twice as much as a single, so it is worth about double the value of a single in the calculation. From there, you can find the value of the runs created by each player's offensive output. wRC+ is a rate statistic, so it can be used even in smaller samples to see how a hitter has been performing, and it is one of the best tools for evaluating a hitter's offensive abilities. The league-average wRC+ is 100, and each point above 100 indicates one percentage point above league average.

It can be found on the batter pages on Fangraphs, such as Mike Trout's; Trout was the 2017 leader at 181 wRC+, beating Aaron Judge by 8 points despite hitting 19 fewer home runs.

Park Adjustments - No Two Parks are The Same:

To state the obvious, no two MLB ballparks are the same. The most noticeable difference is the dimensions, but many other factors are at play, such as weather and other environmental conditions. As a result, there tend to be real differences in player performance at different parks, and adjustments are calculated to remove the effects of these parks as well as possible. They are typically calculated separately for left- and right-handed batters; since parks are not always symmetrical, they may favour batters from one side over the other.

Colorado's Coors Field is regarded as the extreme case of a 'Hitter's Ballpark': hitters tend to perform well there due to the high altitude and large outfield, so batters can expect more balls hit to the outfield to fall for hits. Conversely, AT&T Park in San Francisco is regarded as the most extreme 'Pitcher's Ballpark' due to its high walls and damp air. Rogers Centre in Toronto is ranked the 8th-best ballpark for hitters. Four out of five ballparks in the AL East are considered to favour the hitter over the pitcher, which could be one reason a team based in Toronto fails to attract premium free-agent pitchers.

The War on WAR:

If you only have time to learn about one advanced stat in baseball, Wins Above Replacement (WAR) is the one to go with. WAR is an attempt to quantify the overall value of a player's contributions in one easy number. Simply put: the number of wins you can expect your team to add by employing the player, compared to a replacement-level player easily acquired from the minor leagues or a team's bench.

WAR is a counting stat and is based on what happened, rather than what will happen in the future. If an MVP-calibre player only played 20 games, they may have a lower WAR than many inferior players, due simply to the fact that they didn’t play enough games to accumulate a high WAR total.

Fangraphs goes into more details of what exactly goes into the WAR stat for hitters, but essentially it is the total value of runs that a batter contributes to the team in the areas of: hitting, baserunning, fielding, divided by how many wins a team can be expected to win with those runs added (Runs/Win generally fluctuates by year but is ~10). It is then adjusted by position (For example: CF is much harder to play than 1B, so they are credited accordingly - more here), adjusted by ballpark, and adjusted to consider the ‘Replacement Level’ player and how much more/less valuable that player is to this imaginary player.

For Pitchers, it is much more complicated, so it’s best to outline the two different WAR stats that are most commonly referenced. First, there’s Fangraphs WAR, commonly referred to as fWAR. fWAR uses Fielding Independent Pitching (FIP) during their calculations, instead of ERA. Recall that FIP is generally regarded as a more predictive stat than ERA, so fWAR could be better used as a tool to project future pitching performance. Conversely, Baseball Reference uses ERA when calculating their bWAR stat. ERA is based on what has actually happened, and could be influenced by team defense among other external effects. These effects are variable by game and are out of the pitcher's control, so this should be seen as more of a ‘what happened in the past?’ stat, rather than a ‘what should I expect in the future?’ stat.

Conclusion

I hope that this article has given you an introduction to some tools that will enhance your viewership of baseball. These tools were selected as stats that may challenge how the game is traditionally viewed. Players are often over- or under-valued by fans because traditional metrics such as batting average never paint the full picture of their contributions. Hopefully the concepts learned today will allow you to form more complete opinions on players and teams while enjoying the games.

Keep up to date with the Queen's Sports Analytics Organization. Like us on Facebook. Follow us on Twitter. For any questions or if you want to get in contact with us, email qsao@clubs.queensu.ca, or send us a message on Facebook.