One of the important background dimensions to comparative baseball statistics is known as “park adjustments,” a set of corrective factors applied to account for the physical differences (e.g., outfield wall depth) between parks. Among American sports today, only Major League Baseball and NASCAR (and golf, I suppose) permit such structural variation between the competitive arenas themselves.
Professional hockey used to be in that group too. Beyond adjusting, adding, and subtracting lines on the ice to affect the flow of play, as the NHL continues to do (cf. the NBA three-point line), the rinks themselves used to come in different sizes. League rules now mandate a uniform rink size, but so-called “small rinks” persisted in the NHL as late as the 1980s and 1990s in Boston, Chicago, and Buffalo.
While hockey does not face the structural differences present in baseball, there still is a need to apply rink-by-rink statistical adjustments. That’s because compiling basic hockey statistics (e.g., shots, hits, turnovers) requires statisticians to make judgment calls to a greater degree than in a discrete-event sport like baseball.
By way of limited background, the NHL collects basic gameplay statistics through a computer system known as the Real Time Scoring System (RTSS). A benefit of RTSS is that it aggregates and organizes data for analysis by teams, players, and fans. A vulnerability of RTSS is the subjectivity alluded to above that comes when human scorers track a fluid, dynamic sport like hockey.
While others have noted certain biases among the RTSS scorers at different rinks, a paper by Michael Schuckers and Brian Macdonald published earlier this month analyzes those discrepancies across a spread of core statistics and proposes a “Rink Effects” model that aims to do for subjective rink-to-rink differences in hockey scoring what park adjustments do for structural differences between baseball parks.
Schuckers and Macdonald first examined rink effects for six statistics (shots, blocked shots, misses, giveaways, takeaways, and hits) over the past six completed seasons and made two types of findings: 1) deviations from the league-wide average and 2) bias in favor of or against the home team. My perception of their findings was that, in general, there were not many extreme deviations from the league averages and home/away bias was muted.
Organized by statistic, the following rinks are the most extreme outliers Schuckers and Macdonald found:
- Shots: Florida (1.030) and Saint Louis (.955); no home/away bias found
- Blocked shots: Montreal (1.271) and New Jersey (.541); no home/away bias found
- Misses: Toronto (1.250) and Chicago (.562); home/away bias: Rangers (Home 1.125) and Colorado (Home .977)
- Giveaways: much variation, but no trend across seasons; home ice effect noted
- Takeaways: Islanders (1.943) and Pittsburgh (.214); no home/away bias found, but like giveaways, an overall home ice effect was noted
- Hits: Los Angeles (1.298) and New Jersey (.592); home/away bias: New Jersey (Home 1.196)
In summarizing their findings, the authors note that there’s something funny going on in New Jersey, where, they write, “events are recorded at a rate that is much lower compared to the rest of the league.” If you want to read all the gory mathematical details and find out how your favorite team’s rink fits in, read Schuckers and Macdonald’s full paper here.
The goal in applying rink effects to a hockey player’s statistics is the same as applying park adjustments to a baseball player’s numbers: determining the player’s “true” ability by filtering out rink-to-rink variations that are beyond the player’s control. Unlike the constant, structural differences that drive baseball park adjustments, however, hockey rink effects are a creature of human subjectivity and are theoretically subject to conscious fluctuation over time. (For example, the RTSS scorers in New Jersey could read this paper and decide internally to change their approach.) This behavioral reality isn’t a knock against rink effects, though. In fact, it’s a testament to the work of Schuckers and Macdonald that they were able to corral this human behavior into discernible and useful patterns. The larger point is that if changes in how statistics are recorded, whether through human coordination or technological automation, lead to greater homogenization in rink-to-rink tabulation, the consequently reduced need to apply rink effects would be a positive indicator. Rink effects and park adjustments are patches over inefficiencies in sports statistics. Reducing our reliance on them means we are receiving improved raw statistical feedback that allows us to better determine players’ true talent.
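To make the adjustment idea concrete, here is a minimal sketch of how a multiplicative rink factor could be divided out of a raw count to estimate a rink-neutral figure. This is an illustration of the general concept, not the authors’ actual model (which is more involved); the function and dictionary names are my own, though the two factor values are the blocked-shot extremes reported in the paper.

```python
# Sketch of a multiplicative rink adjustment (illustrative only).
# A rink factor > 1.0 means scorers at that rink record more events
# than the league average; dividing a raw count by the factor
# estimates what a league-average scorer would have recorded.

# Blocked-shot extremes reported by Schuckers and Macdonald
# (league average corresponds to a factor of 1.0).
BLOCKED_SHOT_FACTORS = {
    "Montreal": 1.271,
    "New Jersey": 0.541,
}

def rink_adjusted(raw_count: float, rink_factor: float) -> float:
    """Estimate a rink-neutral count from a raw RTSS count."""
    return raw_count / rink_factor

# A player credited with 100 blocked shots at each extreme rink:
for rink, factor in BLOCKED_SHOT_FACTORS.items():
    print(rink, round(rink_adjusted(100, factor), 1))
```

The same 100 raw blocks shrink when recorded in Montreal (generous scorers) and grow when recorded in New Jersey (stingy scorers), which is exactly the direction of correction the paper’s New Jersey finding calls for.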