
Do Spring Training Wins and Losses Matter?
You hear it every year. If a team is doing poorly in spring training, its general manager, manager, and players are likely to say: “Spring training games don’t count, we aren’t concerned with our record, guys are working on things, we are giving young players some looks” etc. They will also state clearly that spring training performance will have no bearing on the regular season, usually with some “throwing out/away” and “starting over” metaphor.
In contrast, if a team is doing well in spring training, you are likely to hear that they “like to win no matter what, everything is clicking and the team is playing well,” and of course that the “momentum will carry us into the regular season.”
The Cubs started 0-6-1 this spring training and still sit at just 6-9-1. During those early days Anthony Rizzo stated definitively: “It is Spring Training, yes. Does it matter if we lost? No.” Manager Joe Maddon walked a finer line, downplaying the record and emphasizing the type of play he was seeing (the good and the bad), but also hinting that winning mattered. “Of course it does,” Maddon said. “You always want to win.”
So does winning in spring training matter? I surveyed the field to see what we know, if anything, about the relationship between spring training and regular season performance.
Sadly, much of what is out there is completely anecdotal and unhelpful. True Blue LA, for example, wants to remind us that “spring training records don’t matter” by looking at the definitive sample of one team, the Los Angeles Dodgers, over the past five years. Sigh… If you look, you will find writers for most teams making similar arguments in most years. In particular, if a team is doing very poorly in the spring, or a surprise team is doing very well, it is a nice way to boost fan hopes going into the regular season. But otherwise, claims like these, built on this kind of evidence, are useless for understanding larger trends.
A somewhat better approach has been to examine subsets of teams and their won-loss records in spring training and the regular season. This essentially involves placing teams in categories (e.g. playoff teams), answering a yes/no question for each (e.g. above .500 in spring training or not?), and then reporting the counts as percentages.
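To make the arithmetic concrete, here is a minimal sketch of that kind of tally. The records below are invented placeholders, not actual results from any of the studies discussed here.

```python
# Count how many playoff teams finished above .500 in spring training.
# The data are made-up placeholders; one tuple per team-season.
seasons = [
    # (team, year, spring_win_pct, made_playoffs)
    ("Team A", 2013, 0.560, True),
    ("Team B", 2013, 0.430, True),
    ("Team C", 2013, 0.610, False),
    ("Team D", 2013, 0.380, False),
]

playoff_teams = [s for s in seasons if s[3]]
above_500 = [s for s in playoff_teams if s[2] > 0.500]

pct = 100 * len(above_500) / len(playoff_teams)
print(f"{pct:.0f}% of playoff teams finished above .500 in spring training")
```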
For example, a few years ago David Schoenfield at SweetSpot examined 10 years of spring training performances, looking at how some of the best and worst teams fared. Only one of the 14 teams with the best regular season record finished below .500 in spring training, but there were also plenty of playoff teams with atrocious records in March. Perhaps not surprisingly, not one of the teams with the worst regular season record finished above .500 in spring training, yet many of the teams that improved the most from one regular season to the next had terrible springs as well. Taking a similar approach, an article on Bill James Online examined records over a 12-year period and found that 69% of playoff teams finished above .500 in spring training, and a 2011 piece in the New York Times did much the same.
So these studies do point to some evidence that spring training records matter, but not very much, and really only at the tails: the best and worst teams. There is very little relationship between spring training and regular season records for the bulk of teams in the middle. This type of approach, however, is hindered by the cases it selects (certain subsets of teams rather than all of them) and by a small sample of just a few seasons.
Others have tried to apply more advanced statistical approaches to all teams, but they are likewise hindered by looking at too few seasons. A Bleacher Report article from several years ago, for example, presents fancy correlations of spring training and regular season records, but does so for only one season (2010), making any findings essentially meaningless. Beyond the Box Score examined the correlation between all teams’ spring training and regular season records over a five-year period (2007-2011), and someone else updated the piece a few weeks ago. Neither finds much correlation at all between spring and regular season records, but a snapshot of just a few years doesn’t tell us much.
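For readers curious about what these correlation studies actually compute, here is a small sketch of a pooled Pearson correlation between spring and regular season winning percentages. The numbers are invented for illustration; they are not the figures from Bleacher Report or Beyond the Box Score.

```python
# Pearson correlation between spring training and regular season winning
# percentages, pooled across team-seasons (values are illustrative only).
from math import sqrt

spring  = [0.520, 0.610, 0.430, 0.480, 0.550, 0.390]
regular = [0.540, 0.500, 0.460, 0.580, 0.490, 0.420]

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

print(f"r = {pearson(spring, regular):.2f}")  # a value near zero means little relationship
```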
Similarly, an article in the American Journal of Management estimates some simple linear regressions on team records over a five-year period and finds that spring training performance is a weak year-to-year predictor of regular season performance, especially compared with the previous year’s regular season record, which predicts far better. The findings are a bit stronger over the entire five-year period: a team’s overall spring training performance across those five years correlates with its overall regular season performance. Intuitively, this makes sense. Say a really bad team finishes well below .500 in the regular season four times in that span but gets lucky once, and it does terribly in spring training in three seasons while posting decent records in the other two. If the “good” springs happen to fall before bad regular seasons and a “bad” spring precedes the lucky regular season, the year-to-year correlations won’t look very good. But three bad springs out of five and four bad regular seasons out of five still suggest a pattern.
Still, five years makes for a pretty small sample. The smaller the sample, the greater the likelihood that the findings are occurring just by chance (think of flipping a coin 10 times versus 1,000 times: the more you flip, the closer your overall results should get to 50/50 heads/tails if the coin is fair), and the more likely a few big outliers are throwing us off.
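The coin-flip intuition is easy to check with a quick simulation; this is just a toy illustration of sampling noise, not part of any of the studies above.

```python
# With a fair coin, small samples of flips swing widely around 50% heads,
# while large samples settle close to it.
import random

random.seed(1)  # fixed seed so the run is reproducible

for flips in (10, 100, 1000, 10000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>6} flips: {heads / flips:.1%} heads")
```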
The best assessment I found on the topic was a piece at the Captain’s Blog in 2012. The author looked at all major league teams across every season from 1984 to 2011 and found very little correlation between spring training and regular season performance. One interesting note: Joe Maddon’s former Tampa Bay Rays were the only team with a strong year-by-year relationship between spring training and regular season results.
But noting that even 28 seasons is a fairly small sample (which should tell you he has a better idea of what he is doing than the others), he goes on to examine more closely the divergence between each team’s spring training and regular season records from year to year. Here again, however, he finds very little relationship. Zooming in on playoff teams only, he finds a bit more, showing that “two-thirds of all playoff teams over the last 28 years have at least played .500 in the spring, and only 13% have reached the postseason after playing sub-.400 baseball.” Overall, he shows that “the chances of making the postseason gradually decrease as spring training records decline.”
The main takeaway from this piece, and from my reading widely on the topic, is that there is little to no correlation between spring training and regular season records. The extremes, however, can be moderately predictive: a team that tears it up in spring training has a good chance of making the playoffs, and a team with an atrocious spring record has a low likelihood of reaching the postseason. But for the remaining 25-27 teams each year, fans can focus on individual performances rather than team records.
Interestingly, that is where the newer research is heading. Last year, FiveThirtyEight demonstrated that “spring numbers can and should affect our predictions for a player’s regular-season production, but only slightly, and only after a particularly strong or weak performance.” Similarly, a new article in The Economist from a few weeks ago shows that while most spring training statistics are meaningless, peripheral stats (like strikeout rate or fly ball percentage) are predictive, and that ZiPS player projections with spring training peripherals added outperformed the ZiPS forecasts alone.