Optimizing Breaking Ball Shape Through Data-Driven Pitch Design, Part Two
Note: The following is a preview of an article published at Driveline Baseball. Follow the link at the bottom of this page to read the full article.
In part one of this breaking ball pitch design series, we introduced a method to calculate expected velocity differentials in breaking balls compared to an athlete’s fastball.
We took a generalized approach to assessing an athlete’s breaking ball and fastball characteristics, ending up with an expected velocity differential for cutters, sliders, and curveballs. This allows us to say, for example, how much slower we’d expect a given athlete’s slider to be than his average fastball speed over the same period.
The use case for this model is straightforward: if an athlete is throwing a pitch slower than our model expects, we can infer that he may have low feel for the breaking ball and still has room to improve the pitch.
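The comparison described above can be sketched in a few lines of Python. This is only an illustrative toy, not Driveline’s actual model: the expected-drop values, the `velocity_gap` and `flag_low_feel` helpers, and the 1 mph tolerance are all placeholder assumptions standing in for the fitted model from part one.

```python
# Hypothetical sketch of the expected-velocity-differential check.
# The expected drops below are illustrative placeholders, not fitted
# outputs from the model described in part one.
EXPECTED_DROP_MPH = {"cutter": 3.0, "slider": 8.0, "curveball": 12.0}


def velocity_gap(pitch_type: str, pitch_mph: float, fastball_mph: float) -> float:
    """Actual velocity drop minus expected drop.

    Positive values mean the pitch is slower than the model expects.
    """
    actual_drop = fastball_mph - pitch_mph
    return actual_drop - EXPECTED_DROP_MPH[pitch_type]


def flag_low_feel(pitch_type: str, pitch_mph: float, fastball_mph: float,
                  tolerance_mph: float = 1.0) -> bool:
    """Flag a pitch thrown more than `tolerance_mph` slower than expected."""
    return velocity_gap(pitch_type, pitch_mph, fastball_mph) > tolerance_mph


# An 84 mph slider off a 94 mph fastball drops 10 mph; the placeholder
# expectation is 8 mph, so the pitch is 2 mph slower than expected.
print(flag_low_feel("slider", 84.0, 94.0))   # flagged
print(flag_low_feel("curveball", 83.0, 94.0))  # within expectation
```

In practice the expected drops would come from the regression described in part one rather than a lookup table, but the flagging logic is the same.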
If we recommend that the athlete increase the pitch’s speed to match the expected velocity, we would like to know if that potential improvement comes with additional considerations or tradeoffs worth quantifying.
In this piece, we introduce the velocity–spin efficiency tradeoff and explain how we can use that relationship to anticipate how a pitch’s shape may change if its velocity is increased.