Many people have developed models for predicting the point spreads of college basketball games. For those who have made their picks publicly available, ThePredictionTracker does a great service by tracking each model's live performance over the course of the season. Still, it's difficult to do an apples-to-apples comparison using the Tracker, since each model has predicted a different subset of games (mostly this is random/accidental, though some models start submitting picks later in the season). Here, I take a subset of models that have made picks since the beginning of the season, and I throw out games for which any of those models did not make a pick. In a (somewhat lazy) attempt to address misprinted lines, which have appeared occasionally, I also filtered out any games for which the opening and closing Vegas lines differed by more than 5 points (a legitimate line move that large is very rare).
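The line-discrepancy filter is a one-liner; here's a sketch with made-up numbers (the record layout is an assumption, not the actual data format):

```python
# Hypothetical records: (opening_line, closing_line); the values are invented.
games = [(-3.5, -4.0), (7.0, 7.5), (-1.0, 10.0), (12.5, 12.0)]

# Keep only games where the opening and closing lines agree to within 5 points.
clean = [g for g in games if abs(g[0] - g[1]) <= 5.0]
print(len(clean))  # -> 3; the third game looks like a misprinted line
```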

Results are shown as of 2020-02-17 for a set of 2547 games. First, let's look at some statistical benchmarks. Below we show the mean squared error (in predicting the margins of victory), the straight-up accuracy (in predicting the win/loss outcomes), and the average bias (mean signed error) of each model:
Model Mean Squared Error (MSE)
Line 134.005
Erik Forseth 134.513
Opening Line 134.658
TeamRankings 137.521
Dokter Entropy 138.067
Sagarin Predictor 138.353
ESPN BPI 139.751
Sagarin Golden Mean 140.431
Sagarin Rating 140.894
StatFox 145.648
DRatings.com 146.213
Kenneth Massey 147.053
Sonny Moore 148.658
ComPughter Ratings 153.941
Sagarin Recent 154.280

Model Straight-Up Accuracy (%)
Line 73.83
Erik Forseth 73.73
Opening Line 73.62
TeamRankings 73.38
Sagarin Predictor 73.22
Dokter Entropy 73.07
ESPN BPI 73.05
Sagarin Rating 72.85
Sagarin Golden Mean 72.81
StatFox 72.38
ComPughter Ratings 72.22
DRatings.com 71.93
Kenneth Massey 71.77
Sagarin Recent 71.20
Sonny Moore 71.06

Model Average Bias
Erik Forseth 0.012
Opening Line -0.034
Sagarin Golden Mean -0.107
Sagarin Predictor -0.201
Line -0.202
ComPughter Ratings -0.214
Sagarin Rating -0.247
StatFox 0.258
Kenneth Massey 0.304
Sagarin Recent -0.411
ESPN BPI 0.435
TeamRankings 0.543
Sonny Moore -0.610
DRatings.com -0.704
Dokter Entropy -0.994
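
These three benchmarks are straightforward to compute from each model's predicted margins. A minimal sketch, assuming predictions and actual margins share one sign convention (e.g., both from the home team's perspective) and that bias is mean predicted minus mean actual:

```python
def mse(preds, margins):
    # Mean squared error of the predicted margins of victory.
    return sum((p - m) ** 2 for p, m in zip(preds, margins)) / len(preds)

def straight_up(preds, margins):
    # Percentage of games where the predicted winner actually won
    # (prediction and margin have the same sign).
    hits = sum(1 for p, m in zip(preds, margins) if p * m > 0)
    return 100.0 * hits / len(preds)

def bias(preds, margins):
    # Mean signed error; under the assumed convention, positive means
    # the model systematically overrates the home team.
    return sum(p - m for p, m in zip(preds, margins)) / len(preds)

# Toy example (made-up numbers):
preds = [5.0, -3.0, 10.0]
margins = [7, -1, -2]
print(mse(preds, margins), straight_up(preds, margins), bias(preds, margins))
```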

We might also ask how each model would have done betting against the Vegas line, shown below:
Model Against the Spread (%)
ESPN BPI 51.98
Dokter Entropy 51.37
TeamRankings 51.06
DRatings.com 51.04
Opening Line 50.90
ComPughter Ratings 50.90
Erik Forseth 50.86
StatFox 50.82
Sagarin Golden Mean 50.43
Sagarin Predictor 50.06
Sagarin Rating 49.98
Sonny Moore 49.41
Kenneth Massey 49.12
Sagarin Recent 48.90
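
Scoring a model against the spread amounts to checking whether its pick lands on the same side of the Vegas line as the actual margin. A sketch under the same sign-convention assumption, with pushes and no-picks excluded from the denominator:

```python
def ats_pct(preds, lines, margins):
    # Percentage of non-push games where the model's side of the line
    # matched the outcome. All numbers share one sign convention
    # (e.g., home-team margin and home-team line).
    wins = plays = 0
    for p, l, m in zip(preds, lines, margins):
        if p == l or m == l:
            continue  # model takes no side, or the game pushes
        plays += 1
        if (p - l) * (m - l) > 0:
            wins += 1
    return 100.0 * wins / plays

# Toy example (made-up numbers): one cover, one no-pick, one miss.
print(ats_pct([5.0, -3.0, 10.0], [4.0, -3.0, 6.0], [7, -1, 2]))  # -> 50.0
```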

Let's regress the observed margins of victory onto the predictions of each model, constraining the regression to have nonnegative coefficients. This gives the optimal mixture of predictors (backward-looking, and subject to the nonnegativity constraint). We find:
Model Coefficient
Line 0.466
Erik Forseth 0.316
ESPN BPI 0.138
DRatings.com 0.050
Opening Line 0.027
Kenneth Massey 0.000
Sagarin Rating 0.000
Sonny Moore 0.000
TeamRankings 0.000
Dokter Entropy 0.000
StatFox 0.000
Sagarin Recent 0.000
Sagarin Predictor 0.000
Sagarin Golden Mean 0.000
ComPughter Ratings 0.000
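
This fit is ordinary nonnegative least squares; in practice something like scipy.optimize.nnls handles it directly. As a self-contained sketch (not the fitting code actually used), here is a projected coordinate-descent version on toy data:

```python
def nnls_cd(A, b, iters=500):
    # Nonnegative least squares via projected coordinate descent:
    # minimize ||A x - b||^2 subject to x >= 0.
    n, k = len(A), len(A[0])
    x = [0.0] * k
    r = b[:]  # residual b - A x (x starts at zero)
    col_sq = [sum(A[i][j] ** 2 for i in range(n)) for j in range(k)]
    for _ in range(iters):
        for j in range(k):
            if col_sq[j] == 0:
                continue
            grad = sum(A[i][j] * r[i] for i in range(n))
            # Exact coordinate minimizer, projected onto x_j >= 0.
            new_xj = max(0.0, x[j] + grad / col_sq[j])
            delta = new_xj - x[j]
            if delta:
                for i in range(n):
                    r[i] -= delta * A[i][j]
                x[j] = new_xj
    return x

# Toy data: two "model" columns; the target is exactly the first column,
# so the fit should put all weight on it.
A = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]]
b = [1.0, 2.0, 3.0]
coeffs = nnls_cd(A, b)
print([round(c, 6) for c in coeffs])  # -> [1.0, 0.0]
```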

The MSE of this hypothetical predictor would be 133.461. (This is overly optimistic, since the coefficients were fit in-sample on the very games being scored.)

Finally, we can ignore the other models and look only at the games for which I made a pick this year, a sample of 3404 games:
Model Mean Squared Error (MSE)
Line 125.369
Erik Forseth 125.857
Opening Line 126.225

Model Straight-Up Accuracy (%)
Erik Forseth 73.06
Line 72.86
Opening Line 72.58

Model Average Bias
Opening Line 0.006
Erik Forseth 0.043
Line -0.167

Model Against the Spread (%)
Erik Forseth 50.98
Opening Line 50.24