High-Peak vs. High-Consistency Players and Winning
Two hitters on the same team each finish a season with a 0.370 wOBA, each with the same number of home runs, triples, doubles, singles, strikeouts, and walks. Does each contribute equally to wins for their team? I say ‘not necessarily’. Why? Because of consistency.
Some players compile strong stats over the long season because they are extremely hot during certain periods, and colder in others. Over a large enough sample size, they average out to a strong season-long performance. We’ll call these `High-Peak` players.
Then, there are players who never make headlines for their hot streaks, and also never slump. They can deliver the same volume of season-long production as High-Peak players, but they do so in a smoother way. These are `High-Consistency` players.
I posit that of the two players in our hypothetical situation above, the more consistent (High-Consistency) player will be worth more to his team than the streakier (High-Peak) player.
To test this hypothesis, I did a quick analysis of team production and winning. I’m not trying to investigate total production, just how consistently a hitter delivers whatever production he delivers. Thus, I used the standard deviation of runs scored per game as a proxy for High-Peak vs. High-Consistency offense.
Since I’m not concerned with total offense, I’m also not concerned with total wins. Instead, I’m looking at the difference between a team’s actual win % and its Pythagorean win % — the win rate projected from total runs scored and allowed. If High-Peak and High-Consistency players generate the same total offense, then any gap between actual and Pythagorean win % should reflect consistency: High-Peak teams should tend to under-perform their Pythagorean win %, while High-Consistency teams should perform truer to it.
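For reference, the Pythagorean win % used here is Bill James’s expectation formula with the classic exponent of 2. The runs totals and record below are hypothetical, just to show the calculation:

```python
def pythagorean_win_pct(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: RS^e / (RS^e + RA^e)."""
    return runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)

# Hypothetical team: 800 runs scored, 700 allowed, actually went 85-77
expected = pythagorean_win_pct(800, 700)  # ~0.566
actual = 85 / 162
print(f"Pythagorean win%: {expected:.3f}")
print(f"Actual - Pythagorean: {actual - expected:+.3f}")
```

That final difference (actual minus Pythagorean) is the "relative performance" quantity plotted on the Y-axis below.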
The chart below shows the results from 2017 and 2018. Each dot is a team-season, with that team’s standard deviation of runs scored per game on the X-axis, and the difference between actual win % and Pythagorean win % (relative performance) on the Y-axis. The results show a meaningful correlation (r-squared = 0.22) between consistency of scoring and relative performance, with High-Consistency teams out-performing High-Peak teams relative to expected Pythagorean wins.
Logically, this makes sense. A team that scores 800 runs in a season by putting up 5 runs every game (assuming a decent pitching staff) should win more often than a team that scores 10 runs in half of its games and 0 runs in the other half. Consistency of production matters.
But how much? Can we put a value on a more consistent player? Yes, we can. Using the simple linear regression above, we can determine at the team level that each additional run of standard deviation in per-game scoring results in a 4% drop in win % relative to Pythagorean expectation. That is equal to 6.5 wins over the course of 162 games. In other words, had the 2018 Nationals (3.82 runs-scored SD) had the consistency of the 2018 Tigers (2.84 runs-scored SD), they would have won 6 or 7 additional games with the same total production.
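The back-of-envelope conversion from slope to wins works out like this, using the −0.04 slope and the two SD figures quoted above:

```python
slope = -0.04        # change in win% per +1 run of scoring SD (from the regression above)
games = 162
nationals_sd = 3.82  # 2018 Nationals runs-scored SD
tigers_sd = 2.84     # 2018 Tigers runs-scored SD

# Tightening the Nationals' SD down to the Tigers' level raises expected win%
delta_win_pct = slope * (tigers_sd - nationals_sd)
extra_wins = delta_win_pct * games
print(f"Extra wins over a full season: {extra_wins:.1f}")  # ~6.4
```

The ~0.98-run SD gap times the 4% slope, spread over 162 games, is where the "6 or 7 additional games" estimate comes from.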
Why is this so? I think it relates to the investment strategy of portfolio diversification, which I will explore in future posts, as well as the practical strategy for applying this to baseball.