All data originated from Pro Football Reference and nflFastR.
Week 2 was a much better result for the model. I went 13 for 16 but still lost ground to Vegas, which picked 14 winners. The one discrepancy between us was that I had NE beating SEA, and despite Cam coming within 2 yards of proving me right, the Pats fell short on the last play.
On the season, my record is 21-11 (65.6%) which is right in line with historical results. Vegas is killing it at 24-8 (75%). Vegas was also better on the point spreads and takes the season lead by being 0.1 points per game more accurate than me (5.1 vs 5.2).
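In case you're curious how I score the spreads, it's just the average number of points the predicted spread misses the actual margin by. Here's a quick sketch with made-up numbers (I'm assuming mean absolute error is the right way to read "points per game"; the arrays are illustrative, not real picks):

```python
import numpy as np

# Made-up example: predicted spreads and actual final margins,
# both from the home team's perspective.
my_spreads    = np.array([-3.5,  7.0,  2.5, -6.0])
vegas_spreads = np.array([-4.0,  6.5,  3.0, -7.0])
margins       = np.array([-7.0, 10.0, -3.0,  1.0])

# Average absolute miss, in points per game.
for name, spreads in [("Me", my_spreads), ("Vegas", vegas_spreads)]:
    print(name, np.abs(spreads - margins).mean().round(1))
```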
PROBABILITIES
Last week, Clydesdales left a comment:
If I was looking at it as a money maker, I wouldn’t be trying to get 10 or 11 right each week, I’d like to go 2 - 1 with my best 3 . . .
I never looked at the data that way, although I should have. If my win probability for each game has good predictive power, then the teams I predict to have a high chance of winning should win a high percentage of their games.
So, I took the data since 2010, determined the win probability for each game, rounded each to the nearest percentage point, and compared the predicted probability to the actual win rate within each “bin” (1).
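For anyone who wants to check my work, here's a rough sketch of the binning in pandas. The file name and column names are placeholders, not my actual data; it assumes one row per game with the model's pregame win probability for the home team and whether the home team won:

```python
import pandas as pd
from scipy.stats import linregress

# Placeholder file: one row per game since 2010, with the model's pregame
# win probability for the home team ('win_prob') and the outcome
# ('won', 1 if the home team won, else 0).
games = pd.read_csv("predictions_2010_2020.csv")

# Round each probability to the nearest percentage point to form bins.
games["bin"] = (games["win_prob"] * 100).round()

# Actual win rate and sample size within each bin, keeping only bins
# with at least 10 predicted games (footnote 1).
calib = games.groupby("bin")["won"].agg(["mean", "count"])
calib = calib[calib["count"] >= 10]

# How well do the assigned probabilities explain actual win rates?
fit = linregress(calib.index / 100, calib["mean"])
print(f"R-squared: {fit.rvalue ** 2:.2f}")  # ~0.91 per the chart below
```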
[Chart: actual win rate vs. predicted win probability]
The higher my predicted win probability, the higher the actual win rate of those teams. The R-squared value tells me that 91% of the variance in actual win rate is explained by my assigned probabilities. That’s not bad.
Looking at the same data by my accuracy rate (games correctly predicted / total games), I get the following:
[Chart: prediction accuracy vs. predicted win probability]
This shows my model is much more accurate when it predicts extreme probabilities, but when I forecast a close game, my accuracy falls to a coin flip. And of course that is exactly what you would expect if the probabilities reflect reality: a team given a 55% chance should only deliver a correct pick about 55% of the time.
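The accuracy cut is the same idea, just scoring whether the side I gave more than 50% actually won. Roughly, with the same placeholder data as above:

```python
import pandas as pd

# Same placeholder table: home-team win probability and outcome.
games = pd.read_csv("predictions_2010_2020.csv")

# The pick is whichever side the model gives more than 50%; it is
# "correct" when that side actually wins.
games["correct"] = (games["win_prob"] > 0.5) == (games["won"] == 1)

# Accuracy within each rounded-probability bin. Near 50% the expected
# accuracy is max(p, 1 - p), i.e. a coin flip.
games["bin"] = (games["win_prob"] * 100).round()
print(games.groupby("bin")["correct"].mean())
```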
Based on Clydesdales’ criteria, I reviewed the 3 most lopsided predictions each week and found that my model predicted 73% of those winners, which is better than going 2 - 1 (67%). So far this year, I am 5 - 1 in those contests. If you want to test that next week, bet PIT, BAL and IND.
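If you want to replicate the “best 3” test, the idea is to grab the 3 games each week whose probabilities sit farthest from 50% and score the picks. A sketch, again with placeholder column names ('season' and 'week' are assumed to exist):

```python
import pandas as pd

# Same placeholder table, plus 'season' and 'week' columns.
games = pd.read_csv("predictions_2010_2020.csv")

# "Lopsided" = farthest from a coin flip in either direction.
games["confidence"] = (games["win_prob"] - 0.5).abs()

# Take the 3 most confident predictions from each week...
top3 = (games.sort_values("confidence", ascending=False)
             .groupby(["season", "week"])
             .head(3))

# ...and check how often the model's pick won.
picked_won = (top3["win_prob"] > 0.5) == (top3["won"] == 1)
print(f"Top-3 accuracy: {picked_won.mean():.0%}")
```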
PREDICTIONS
The data has been updated and the week 3 winners have been picked.
QBs Drew Lock, Tyrod Taylor and Jimmy Garoppolo were all injured in week 2. There isn’t enough data on their backups to change the passing numbers and so the model will assume that team passing efficiency won’t change. Clearly that is a false assumption, but oh well, them’s the rules I set up.
[Table: Week 3 predictions]
Vegas and I agree on 15 of 16 games, with the lone exception being the Bucs-Broncos matchup. My numbers have no confidence in Tampa Bay, even though they had a dominant win last week. Brady hasn’t looked good going back to last year: his efficiency is way down and he’s throwing picks, which depresses the numbers I use. I have the Bucs’ passing ranked 29th on the year.
But even so . . . Denver?! Really model? I had them ranked 28th in passing before Drew Lock went down. I have a feeling I’m going to lose that one big time and fall another game behind Sin City.
The initial spread differentials between me and Vegas are a bit larger this week, but still very close on average. It really is kind of blowing my mind that my blind model is matching the betting lines so well.
COLTS SEASON
Tracking just the Colts season, I am now 1 - 1. Even though the probabilities for weeks 3 - 17 were all updated, none of the predicted winners for the rest of the season changed, and it barely moved the needle on expected wins, which went from 9.38 to 9.35.
Basically, nothing changed.
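For those wondering where 9.38 and 9.35 come from: expected wins are just the sum of per-game win probabilities, with finished games counting as 1 or 0. A toy version (the probabilities below are made up, not the model's):

```python
def expected_wins(results, future_probs):
    # Finished games count as 1 (win) or 0 (loss); each remaining
    # game contributes its win probability.
    return sum(results) + sum(future_probs)

# Colts are 1-1; these remaining-game probabilities are illustrative only.
print(expected_wins([1, 0], [0.62, 0.48, 0.71, 0.55]))  # 3.36
```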
[Chart: Colts expected season wins]
FOOTNOTES
1) Only bins with at least 10 predicted games were included.