A Study of Two Pools

by Tom Adams

Note

I need your help in extending this study. I encourage pool managers and others to send me the details of all pool entries from other pools and/or other years at tadamsmar@aol.com. This study gives you an idea of what can be learned from simulating office pools.

I will keep your entry names confidential, or you can change them before you send the data. I did not do this with the Packard pools in this study because the pool entries were already posted on the web.

I will be happy to share the results of the analysis of any pool data sent to me.

Abstract

Two NCAA basketball pools for 2001 are simulated and win frequencies for the entry sheets are tabulated in an effort to determine the effectiveness of various pool-playing strategies. The results indicate that the best strategy is a function of the scoring rules of the pool. In a pool with incentives for picking upsets, the expected-point-maximization (EPM) strategy was effective. But in a standard-scoring pool without upset incentives, a contrarian strategy (avoiding the favorite as the championship pick) proved better.

Two multiple-entry strategies were also evaluated. In a standard-scoring pool, a multiple-entry strategy that involved varying the championship pick was better. But in a pool with upset incentives, a multiple-entry strategy based on using EPM pool sheets derived from various Markov models proved to be better.

The validity of this study is limited by the fact that only two pools are analyzed from one year. Pool managers and others are encouraged to send the details of all pool entries in other pools and/or other years to tadamsmar@aol.com so that the study can be extended.

Introduction

Office pools based on the NCAA men's basketball tournament have been of widespread popular interest for many years, but the question of the best strategies for playing these pools has only recently been addressed by statisticians and operations research experts. In 1997, Breiter and Carlin [1] described a Monte Carlo process for estimating the pool entry sheet that would have the highest expected point total. In 2001, Kaplan and Garstka [2] described an improved direct calculation method for the expected-point-maximizing (EPM) entry sheet, explored three new alternative methods for determining the Markov model used to calculate the EPM entry, and reported results from participating in a number of pools including two web-based pools managed by Eric Packard: Packard #1 and Packard #2.

The Markov model in question is just a table of probabilities for the outcome when one team meets another. These probabilities are derived from the predicted spreads or scores from rating systems and/or pre-tournament first-round betting lines. Given the number of rating systems (Sagarin, Massey, etc.) and the variety of ways that the ratings and betting lines can be combined to yield win probabilities, there are actually more than a dozen potential Markov models, although only four such models have been developed in the literature. The Kaplan-Garstka Markov models are based on non-constant variance. Kaplan and Garstka claim this as an advantage over the Breiter-Carlin model, which uses a constant variance. But, at this point, the advantage is theoretical since it has not been demonstrated to provide an edge in practice. Such an edge could well be subtle and hard to demonstrate.
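To make the spread-to-probability step concrete, here is a minimal sketch of the kind of calculation involved, assuming a normal margin-of-victory model. The function name and the value of sigma are illustrative choices, not figures taken from Breiter-Carlin or Kaplan-Garstka.

```python
import math

def win_probability(rating_a, rating_b, sigma=11.0):
    """Probability that team A beats team B, assuming the margin of
    victory is normally distributed around the predicted spread
    (rating difference).  The value of sigma here is illustrative,
    not a figure taken from Breiter-Carlin or Kaplan-Garstka."""
    spread = rating_a - rating_b
    # P(margin > 0) under a Normal(spread, sigma**2) model
    return 0.5 * (1.0 + math.erf(spread / (sigma * math.sqrt(2.0))))

# Example: a team rated 6 points better is roughly a 71% favorite.
print(round(win_probability(90.0, 84.0), 3))
```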

The direct calculation method reported by Kaplan and Garstka was independently discovered by Tom Adams and used to develop the Poologic Calculator, a Java applet that may be used to calculate the EPM pool sheet for a variety of pool scoring rules.
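To illustrate the kind of direct calculation involved, the sketch below computes each team's probability of surviving each round from the pairwise win probabilities of a Markov model; an EPM calculator combines these advancement probabilities with the pool's round factors to find the sheet with the highest expected score. The bracket-ordering convention and function name are assumptions of this sketch, not code from either paper or from the Poologic applet.

```python
def advancement_probabilities(p, rounds=6):
    """p[i][j] = probability that team i beats team j (from the Markov
    model).  Teams are assumed to be indexed in bracket order, so in
    round r a team's possible opponents occupy the adjacent block of
    2**(r-1) slots.  Returns q, where q[i][r] is the probability that
    team i wins its round-r game (i.e., survives r rounds)."""
    n = len(p)
    q = [[0.0] * (rounds + 1) for _ in range(n)]
    for i in range(n):
        q[i][0] = 1.0                      # every team "survives" round 0
    for r in range(1, rounds + 1):
        block = 2 ** r                     # group size that merges by round r
        for i in range(n):
            group = (i // block) * block
            half = block // 2
            # the round-r opponent comes from the other half of the group
            if i - group < half:
                opponents = range(group + half, group + block)
            else:
                opponents = range(group, group + half)
            q[i][r] = q[i][r - 1] * sum(q[j][r - 1] * p[i][j] for j in opponents)
    return q
```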

In a letter to the editor of Chance, Tom Adams pointed out that the EPM pool sheet might not always be best since there are situations where the EPM sheet will not maximize your probability of winning. For instance, the EPM pool sheet is not best in some situations where the EPM sheet picks the favorite as champ and that favorite is grossly overbet in the pool. Carlin's response letter argued that contrarian strategies involve the complex field of game theory and that it is by no means clear how or when to implement an effective contrarian strategy.

The matter of the best multiple-entry strategy is also in dispute. Breiter and Carlin suggested entering multiple estimated EPM pool sheets resulting from different estimation strategies. The Poologic web site recommends varying only the championship pick on a single EPM pool sheet. These approaches are strikingly different because multiple estimated EPM pool sheets will often all have the same championship pick.

A New Simulation Analysis

This paper reports on a novel approach to analyzing a pool. If the details of all the entries are known, then it is possible to use a Markov model to simulate the tournament. The simulation may be repeated thousands of times and the pool winners tabulated. This is similar to replaying the tournament thousands of times. On a moderately powerful computer it is possible to simulate the tournament and tabulate the results 10,000 times in a few minutes.
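A minimal sketch of that simulation loop might look like the following, assuming entries are stored as round-by-round lists of picked winners and that tied pool scores split the win. The data structures and function name are assumptions of this sketch; the actual study also handled seed factors and other pool-specific rules.

```python
import random
from collections import defaultdict

def simulate_pool(p, entries, round_factors, n_sims=10000):
    """Replay the tournament n_sims times under the Markov model p
    (p[i][j] = probability that team i beats team j, teams in bracket
    order) and tabulate how often each entry wins the pool.  An entry
    maps round number -> list of teams picked to win that round.
    Tied pool scores split the win."""
    wins = defaultdict(float)
    for _ in range(n_sims):
        # Play out the bracket once.
        alive = list(range(len(p)))
        winners_by_round = {}
        rnd = 1
        while len(alive) > 1:
            alive = [a if random.random() < p[a][b] else b
                     for a, b in zip(alive[::2], alive[1::2])]
            winners_by_round[rnd] = set(alive)
            rnd += 1
        # Score every entry under standard scoring and credit the winner(s).
        scores = {name: sum(round_factors[r - 1]
                            for r, winners in winners_by_round.items()
                            for t in picks.get(r, []) if t in winners)
                  for name, picks in entries.items()}
        best = max(scores.values())
        tied = [name for name, s in scores.items() if s == best]
        for name in tied:
            wins[name] += 1.0 / len(tied)
    return wins
```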

The one limitation of this approach is that the details of each pool entry must be available. This information is not readily available for the thousands of office pools held each year, but it is available on the web for Packard #1 and Packard #2. So, these pools are used in a pilot study of this pool simulation method.

The Packard pools are similar to an office pool, but there are some differences worth noting. First, the pools don't require an entry fee and do not have a cash prize, so this might change the behavior of players somewhat. Second, someone (probably Eric Packard) adds entries directly derived from various rating systems.

Scoring rules vary from pool to pool. Most pools have standard scoring factors that are awarded for a win regardless of the team's seed. Others have seed factors that are multiplied by the team's seed, so that a 7-seed is awarded 7 times as many points as a 1-seed for a win in a particular round. These are the types of factors used in the Packard pools. Other pools award points using factors based on the seed difference (winner seed - loser seed) when the winner upsets a team with a lower seed number.

Packard #1 uses only standard scoring with standard scoring factors of 32, 48, 72, 108, 162, 243 for rounds 1 to 6. This type of scoring is common, but the round factors seem to vary from pool to pool. Kaplan and Garstka noted that the EPM pool sheet provides less advantage for these scoring rules, since simply betting on the lowest seed or the highest ranked team provides a good approximation of the EPM sheet.

Packard #2 scoring rules provide incentives for picking a higher seed to advance. Packard #2 used standard factors and seed factors. Both the standard and seed factors have values of 945, 1980, 3696, 6160, 9240 and 13860 for rounds 1 to 6. When high seed or upset incentives are present, it is hard for a pool player to get close to an EPM pool sheet without the aid of a computer program. Relative to the standard factors, the Packard #2 upset incentives fall in the middle of the range; I am aware of pools with both relatively larger and relatively smaller upset incentives. The most extreme upset incentives I have seen used in an office pool are seed factors of 1, 2, 4, 8, 16, and 32 with zero standard factors.
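As a concrete illustration of the difference between the two scoring systems, the sketch below scores a single correct pick under each pool's quoted factors. The assumption that Packard #2 adds the standard factor and the seed-weighted factor together is an interpretation of the rules as described above, not a rule quoted from the pool.

```python
# Round factors quoted above for rounds 1-6.
PACKARD1_STANDARD = [32, 48, 72, 108, 162, 243]
PACKARD2_STANDARD = [945, 1980, 3696, 6160, 9240, 13860]
PACKARD2_SEED     = [945, 1980, 3696, 6160, 9240, 13860]

def points_for_pick(round_no, winner_seed, standard, seed_factors=None):
    """Points awarded for correctly picking the winner of a game in
    round round_no (1-6).  Seed factors, when present, are assumed to
    be multiplied by the winning team's seed and added to the standard
    points, so upsets by high seeds pay more."""
    points = standard[round_no - 1]
    if seed_factors is not None:
        points += seed_factors[round_no - 1] * winner_seed
    return points

# A correct first-round pick of a 12-seed winner:
print(points_for_pick(1, 12, PACKARD1_STANDARD))                  # 32
print(points_for_pick(1, 12, PACKARD2_STANDARD, PACKARD2_SEED))   # 945 + 12*945 = 12285
```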

In 2001, Kaplan and Garstka entered three EPM pool sheets, based on the three Markov models described in their paper, in Packard #2. Tom Adams entered the Poologic Calculator EPM pool sheet (which is based on the Markov model described in Breiter and Carlin) in Packard #2. In Packard #1, Kaplan and Garstka entered two EPM pool sheets and Tom Adams entered none.

You can view the 2001 Packard #1 entries here and the original 2001 Packard #2 entries here.

In order to make this analysis similar to an office pool analysis, I removed all of the ranking and rating system based pool sheets entered by Eric Packard from the pools before simulation. While it is possible that some office pool players do base their entries strictly on systems like the seed ordering or the Sagarin rating, it seems unlikely that 10% of the pool participants would do this, particularly in a pool like Packard #2 where the scoring rules have upset or high seed incentives. With the ranking and rating system entries removed, Packard #1 has 126 entries remaining and Packard #2 has 37 entries remaining.

The names on the EPM pool sheets are Stan Garstka, Ed Kaplan, linda beise, and Mr. Poologic. Note that Poologic should have some advantage in these simulations, since both the Poologic entry sheet and the simulation are based on the same Markov model.

In Packard #1, the Ed Kaplan entry is based on Vegas betting lines and the Garstka entry is based on the Massey Rating System. In Packard #2, the Ed Kaplan entry is based on Sagarin, the Garstka entry is based on the Massey Rating System, and the Beise entry is based on Vegas betting lines. (Information from private communications with Ed Kaplan.) The Poologic entry is based on Vegas lines for the first round and Sagarin for subsequent rounds. Of all these EPM entries, only Poologic treated spread variance as a constant. The Poologic entry was based on methods from [1], whereas the other EPM entries were based on methods from [2].

Here are the top entries (all entries with more than 1000 wins) in the Packard #1 pool (based on 100,000 simulations of the tournament) ranked by number of wins:

Table 1

Wins | EPM | Mean Score | S.d. of Score | Entry name | Champ pick
5443.0 | | 1843.0 | 313.0 | Rick Rowley | Stanford
4810.6 | | 1846.3 | 302.4 | Chuck Bohannon | Stanford
3657.0 | | 1775.1 | 294.0 | Andrea Reynolds | Michigan St.
3178.6 | | 1783.2 | 297.7 | Kyle Kindley | Michigan St.
2945.0 | | 1808.5 | 308.8 | dan boland | Stanford
2738.0 | | 1691.4 | 265.4 | Joey Heinz | North Carolina
2544.1 | yes | 1956.9 | 320.8 | stan garstka | Duke
2410.6 | | 1749.6 | 320.1 | Trevor Lumb | Stanford
2349.5 | | 1725.9 | 301.7 | Duke Sergakis | Stanford
2319.3 | | 1824.5 | 301.0 | aron copeland | Michigan St.
2066.8 | | 1705.7 | 299.9 | John Doe #14 | Stanford
1949.0 | | 1749.4 | 263.3 | Ed Blaha | Kentucky
1913.8 | | 1787.7 | 308.5 | Curt Bish | Stanford
1860.5 | | 1876.2 | 314.8 | Rick Muffler | Duke
1843.2 | | 1877.7 | 313.7 | Erik Johansen | Duke
1757.1 | | 1708.5 | 263.3 | Jeff Fox | Arizona
1653.8 | | 1783.6 | 279.4 | Jack Nygren | Illinois
1630.6 | | 1777.7 | 278.1 | Rich Morris | Illinois
1562.1 | | 1715.5 | 260.9 | Nathan Barnes | Illinois
1538.8 | | 1729.4 | 312.2 | Brian Skeen II | Duke
1493.2 | | 1693.6 | 307.3 | Tim Gray | Duke
1417.6 | | 1815.5 | 321.3 | John Doe #3 | Duke
1373.9 | yes | 1918.9 | 317.0 | Ed Kaplan | Duke
1305.0 | | 1660.9 | 262.0 | LEMUEL | Florida
1275.0 | | 1774.7 | 315.6 | Mark Goff | Duke
1273.0 | | 1726.7 | 275.5 | Neil Rago | Illinois
1006.3 | | 1731.1 | 326.1 | Mark Elavsky | Duke

The EPM pool sheets were not very effective in this type of pool. This is partly because Duke was overbet in this pool and all three EPM entries picked Duke. It seems better to adopt the contrarian strategy of picking a team other than the favorite as champion on your entry sheet. The six best sheets are contrarian entries that picked a champion other than Duke. One of the EPM sheets (beise) did not even get 1000 wins in the simulation. (Poologic did not have an entry in this pool.)

Here are the top entries (all entries with more than 1000 wins) in the Packard #2 pool (based on 100,000 simulations of the tournament) ranked by the number of wins:

Table 2

Wins | EPM | Mean Score | S.d. of Score | Entry name | Champ pick
8823.0 | yes | 277370.3 | 61129.7 | Stan Garstka | Duke
7510.0 | yes | 277435.7 | 57944.6 | Ed Kaplan | Duke
4772.5 | yes | 284412.4 | 52433.7 | Mr Poologic | Duke
4541.0 | | 267294.3 | 45156.2 | Rick Rowley | Stanford
4366.0 | | 256132.7 | 56359.0 | shari godfrey | Duke
4206.0 | yes | 270862.8 | 59024.4 | linda beise | Duke
3840.5 | | 266479.8 | 46017.6 | Curt Bish | Stanford
3728.0 | | 266804.0 | 52529.3 | John Morrell | Duke
3588.0 | | 263519.3 | 43092.9 | Chuck Bohannon | Stanford
3526.0 | | 262163.5 | 53685.8 | Pat Flynn | Duke
3127.0 | | 266729.8 | 50326.8 | Steve DeClercq | Duke
3116.0 | | 259978.1 | 49871.6 | Mike Bentz | Duke
3050.0 | | 237179.2 | 43979.1 | duvy | Illinois
2996.0 | | 234404.3 | 45976.9 | Ed Blaha | Florida
2848.0 | | 240502.2 | 50147.8 | Mike Harris | Stanford
2785.5 | | 256802.7 | 47790.9 | Keith Packard | Arizona
2739.0 | | 260715.0 | 47080.7 | Bill Kappel | Duke
2496.0 | | 254684.3 | 54001.3 | Leon Rosenblum | Duke
2392.0 | | 247891.4 | 48295.9 | John Doe #1 | Stanford
2330.0 | | 256732.5 | 50528.4 | Michael Sherick | Duke
2269.0 | | 201249.6 | 52861.7 | Matt Scheidler | Virginia
2269.0 | | 262500.8 | 49120.1 | Justin Mangold | Duke
1993.0 | | 262272.5 | 44451.3 | Joe Poindexter | Duke
1804.0 | | 253041.5 | 43227.4 | donald mcclernon | Arizona
1790.0 | | 206091.3 | 54353.1 | Allison K. | Gonzaga
1745.0 | | 220818.0 | 46125.3 | Wylie | Stanford
1552.0 | | 247072.7 | 42878.1 | dan boland | Stanford
1525.0 | | 217512.7 | 47025.0 | Jene' Gilbert | Arizona
1489.0 | | 264690.1 | 46901.0 | don dugi | Duke
1476.0 | | 272668.9 | 44117.8 | Gary Saso | Duke
1392.0 | | 256164.5 | 49666.1 | Will Dudley | Duke
1143.0 | | 257604.7 | 45652.0 | andy stephens | Duke
1002.0 | | 239529.0 | 44455.8 | Rob Wagner | Duke

In this pool, the EPM pool sheets did very well. Garstka won 8823 pools, or 8.8% of the total. Four of the top six are EPM sheets. And this result may even understate the effectiveness of EPM analysis, since the four EPM pool sheets are somewhat similar and therefore tend to compete for the same wins. Any single EPM sheet bet in an office pool would do relatively better.

With 37 entries in this pool, a player would have to win 1/37 of the time to break even if this were a winner-take-all office pool. That would be 2703 wins in 100,000. Only 13 of the 33 non-EPM players, or 39%, got more than the 2703 wins required for a break-even sheet. So, 61% of the non-EPM players made bad bets, bets that had a negative expected return, in this pool.

It is notable that there is a 2-fold difference in the performance of the best (Garstka) vs. the worst (beise) EPM pool sheet. This indicates that there are important factors not fully captured by the EPM analysis. But it is unclear what these factors are. And it may not be possible to predict these factors in advance to improve on an EPM pool sheet. Note that the performance of an entry can be cut in half if someone by chance happens to bet a similar entry.

One factor in the Garstka sheet's performance seems to be that it advanced 4-seed Kansas to the Final Four rather than 2-seed Arizona. Kansas and Arizona provide about the same expected score, but the Kansas pick tends to distinguish the Garstka sheet, since only the Garstka and Kaplan sheets advanced Kansas. Also, the Garstka sheet was unique in advancing Wake to the final 8. Other simulations (data not reported) indicate that these are important factors. (Ed Kaplan brought these factors to my attention in private communications.)

The Garstka advantage might point to an advantage for this particular Kaplan-Garstka Markov model over all the other Markov models, but this observation is not statistically significant. Similar results for at least three different tournament years would be needed for this observation to be considered statistically significant at the .05 level.

Poologic had the best mean score, but this is due to the fact that the Poologic Markov model was used in the simulation.

Multiple-Entry Strategies Evaluated

In order to evaluate Poologic's recommended multiple-entry strategy, I removed all the other EPM pool sheets from Packard #2 (that is, I removed the Kaplan, Garstka and Beise sheets) and then bet four Poologic entries, one picking each of the four 1-seeds as champion. The other EPM pool sheets were removed on the theory that this would make Packard #2 more like a typical pool where the competition is not using an EPM strategy. Here are the results of 10,000 simulations (see the sketch after this paragraph for one way such championship-pick variants can be generated):
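One simple way to generate championship-pick variants from a single base sheet is sketched below: keep the EPM picks everywhere except along the forced champion's path. This is only an illustration under assumed bracket-ordering and data-structure conventions; re-running the EPM calculation with the champion fixed would be a more careful way to build the variants.

```python
def force_champion(base_picks, champ, rounds=6):
    """Create a variant of an EPM sheet with a different championship
    pick.  base_picks maps round -> list of teams (indices in bracket
    order) picked to win that round.  For each round the pick occupying
    the forced champion's slot is dropped and the champion substituted,
    so the rest of the sheet is left untouched.  This is only an
    illustration; re-running the EPM calculation with the champion
    fixed would be a more careful way to build the variants."""
    variant = {}
    for r in range(1, rounds + 1):
        block = 2 ** r                     # one winner per block of 2**r teams
        slot = champ // block              # the champion's game in round r
        kept = [t for t in base_picks.get(r, []) if t // block != slot]
        variant[r] = kept + [champ]
    return variant

# Hypothetical usage: one variant per 1-seed (team indices are placeholders).
# variants = {c: force_champion(epm_sheet, c) for c in one_seed_indices}
```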

Table 3

(The EPM* column denotes contrarian variants of EPM sheets.)

Wins | EPM* | Mean Score | S.d. of Score | Entry name | Champ pick
565.2 | yes | 279094.7 | 50465.1 | Mr Poologic | Michigan St.
541.2 | yes | 279358.1 | 51867.2 | Mr Poologic | Stanford
507.2 | yes | 284516.8 | 52216.2 | Mr Poologic | Duke
459.0 | | 255179.4 | 55734.4 | shari godfrey | Duke
447.0 | | 261025.8 | 53464.4 | Pat Flynn | Duke
446.0 | | 266539.3 | 45123.9 | Rick Rowley | Stanford
387.0 | | 254871.0 | 54938.4 | Leon Rosenblum | Duke
376.0 | | 259798.7 | 50420.5 | Mike Bentz | Duke
368.0 | | 266836.1 | 50797.9 | Steve DeClercq | Duke
368.0 | | 260799.4 | 47110.0 | Bill Kappel | Duke
359.0 | | 265794.8 | 45869.0 | Curt Bish | Stanford
357.0 | | 266751.1 | 52312.8 | John Morrell | Duke
323.0 | | 263766.0 | 43240.1 | Chuck Bohannon | Stanford
320.0 | | 237083.5 | 44151.9 | duvy | Illinois
318.0 | | 257297.6 | 48081.7 | Keith Packard | Arizona
318.0 | | 241063.0 | 50428.0 | Mike Harris | Stanford
285.0 | | 234165.3 | 45398.3 | Ed Blaha | Florida
272.0 | | 262574.1 | 49454.7 | Justin Mangold | Duke
261.0 | | 255940.8 | 50885.6 | Michael Sherick | Duke
254.0 | | 207054.4 | 55217.2 | Allison K. | Gonzaga
249.0 | | 201307.4 | 52764.5 | Matt Scheidler | Virginia
247.0 | | 247675.4 | 48492.7 | John Doe #1 | Stanford
243.0 | | 261998.9 | 44423.2 | Joe Poindexter | Duke
178.0 | | 252585.4 | 42921.9 | donald mcclernon | Arizona
177.0 | | 221033.1 | 46418.0 | Wylie | Stanford
168.0 | | 256140.7 | 50329.6 | Will Dudley | Duke
167.2 | | 272666.6 | 44185.2 | Gary Saso | Duke
156.0 | yes | 266277.7 | 45526.3 | Mr Poologic | Illinois
149.0 | | 246782.1 | 42928.4 | dan boland | Stanford
147.0 | | 264818.3 | 47357.3 | don dugi | Duke
134.0 | | 216691.5 | 46659.2 | Jene' Gilbert | Arizona
132.0 | | 257705.8 | 45807.6 | andy stephens | Duke
119.0 | | 239625.2 | 44765.8 | Rob Wagner | Duke
107.0 | | 188297.6 | 53243.6 | Edith Dando | Creighton

The Poologic multiple-entry strategy worked quite well. But notice that the strategy of entering the four original EPM pool sheets worked somewhat better (see Table 2). The four EPM sheets combined to win 25% of the simulations, whereas the Poologic multiple-entry sheets combined to win only 17% of the time. So overall, this implies that the Breiter-Carlin strategy of betting multiple estimates of the EPM pool sheet might be the better approach. (Note: Breiter and Carlin did not recommend different Markov models to generate the estimates, but their specific approach is similar.)

Now we turn to the Packard #1 pool. Since there was no Poologic entry in the original pool, I used the Garstka entry to create four entries that picked the four 1-seeds as champ. The only other EPM entry (Kaplan) was removed from the pool for the simulation. Here are the results of 10,000 simulations:

Table 4

(The EPM* column denotes contrarian variants of EPM sheets.)

Wins | EPM* | Mean Score | S.d. of Score | Entry name | Champ pick
455.3 | yes | 1908.4 | 308.7 | stan garstka | Stanford
400.0 | yes | 1908.8 | 299.5 | stan garstka | Michigan St.
392.5 | | 1843.4 | 300.7 | Chuck Bohannon | Stanford
378.0 | | 1840.4 | 311.5 | Rick Rowley | Stanford
280.7 | | 1692.2 | 264.5 | Joey Heinz | North Carolina
276.8 | | 1772.8 | 295.4 | Andrea Reynolds | Michigan St.
253.0 | | 1786.4 | 298.1 | Kyle Kindley | Michigan St.
245.0 | | 1804.1 | 309.7 | dan boland | Stanford
239.7 | yes | 1872.0 | 275.9 | stan garstka | Illinois
232.2 | yes | 1956.9 | 323.1 | stan garstka | Duke
210.5 | | 1720.3 | 298.7 | Duke Sergakis | Stanford
197.5 | | 1749.5 | 263.6 | Ed Blaha | Kentucky
194.0 | | 1747.0 | 317.3 | Trevor Lumb | Stanford
190.8 | | 1876.1 | 316.6 | Erik Johansen | Duke
185.3 | | 1693.4 | 310.7 | Tim Gray | Duke
178.5 | | 1707.2 | 264.3 | Jeff Fox | Arizona
174.0 | | 1783.8 | 307.4 | Curt Bish | Stanford
172.0 | | 1874.5 | 317.3 | Rick Muffler | Duke
154.0 | | 1731.5 | 316.7 | Brian Skeen II | Duke
151.0 | | 1823.3 | 302.1 | aron copeland | Michigan St.
136.3 | | 1816.2 | 322.5 | John Doe #3 | Duke
129.5 | | 1700.5 | 296.8 | John Doe #14 | Stanford
123.0 | | 1773.2 | 316.9 | Mark Goff | Duke
122.0 | | 1658.0 | 264.2 | LEMUEL | Florida
109.0 | | 1801.6 | 311.8 | Brandon Shropshire | Duke
103.0 | | 1670.4 | 246.9 | Paul Fischer | Maryland
100.3 | | 1714.9 | 259.0 | Nathan Barnes | Illinois

The Poologic-style multiple-entry strategy works well in Packard #1, and it is a great improvement over any direct EPM strategy represented in Table 1.

One additional improvement might be to use 2-seed Arizona rather than 1-seed Illinois as the fourth championship pick, since the EPM sheets tended to pick Arizona over Illinois for the Final Four in 2001.

ROI Calculator Evaluation

The Poologic web site provides an ROI Calculator to assist in picking the champion or champions to bet in a contrarian or multiple-entry strategy. These Packard pool simulations provide an opportunity to evaluate the ROI Calculator. I took the data from the Packard #1 pool analyzed in Table 4 and plugged it into the ROI Calculator. I had the advantage of knowing the total size of the pool (126 entries) and the number of bets for each champion (Duke 67; Illinois 4; Michigan St. 7; Stanford 9). I set the strength of an EPM sheet to 2.5 since this caused the probability of a Stanford win to match that of Table 4. Here are the results, with the ROI calculator output rescaled to be a prediction of the simulation in Table 4.

Table 5

Champ pick | ROI Calculator prediction | Table 4 result
Stanford | 450 | 455
Michigan St. | 320 | 400
Illinois | 220 | 240
Duke | 110 | 230

(Note that the Stanford prediction is correct because the EPM sheet strength was set to make it correct.) The ROI predictions are generally good. Duke is the worst prediction, off by a factor of about 2. The ROI predictions (not shown) for North Carolina, Kentucky, and Maryland are also reasonably consistent with the Table 4 results.

These ROI inputs were derived with the pool and the simulation results in hand. If the default EPM sheet strength of 6 is used in the ROI calculator, then the ROIs are too large by a factor of about 2. Also, I did not have to guess the number of sheets that would be picked for each champion, as one must when using the ROI calculator prior to a tournament. So, larger errors are to be expected in real-world applications of the ROI calculator.

The main application of the ROI calculator is to determine the champions to bet. Therefore, the relative order of the ROIs is perhaps more important than the ROI values. The ROI calculator picks Stanford as the best contrarian pick and the simulation supports this. In general, the relative order of the predicted ROIs is consistent with the simulation results in Table 4.

As pointed out in the ROI calculator documentation, it is unclear how to apply the ROI calculator to a pool like the Packard #2 pool. The ROI calculator does a poor job of predicting the Packard #2 pool simulation results represented in Table 3. The main problem seems to be that the ROI calculator assumes that it is necessary to pick the correct champion to win the pool, but this is not true according to the Table 3 results. For instance, the Poologic sheet for Michigan St. loses at least 50% of the time when Michigan St. wins. This is due to the importance of upset incentives in the Packard #2 pool.

Conclusions

The results indicate that the strategy must be tailored to the scoring rules of the pool.

For pools like Packard #1 with only standard scoring and a relatively high number of points awarded for correctly picking the champion, the EPM strategy does poorly. It is better to make a contrarian pick for champ, avoiding the consensus favorite. A multiple-entry strategy based on varying the championship pick performs quite well.

For pools like Packard #2 with significant incentives for predicting upsets, the EPM strategy is very powerful. A multiple-entry strategy based on varying the championship pick in an EPM sheet does a good job, but it seems that betting multiple estimated EPM sheets based on multiple Markov models does even better.

Many office pools have rules similar to the Packard #1 pool. Some have upset incentives as large as or larger than those in the Packard #2 pool. But some pools fall in the middle, with upset incentives that are smaller, relative to the standard scoring factors, than Packard #2's. For these pools, it is still unclear whether a contrarian or EPM strategy is best.

Note that the best strategy is based on assumptions about the competition. In some pools, the nationwide consensus favorite might not be the local favorite. If use of the EPM strategy becomes widespread, then its advantage will decrease. Poologic received about 4000 hits in 2001, so we seem to be a long way from saturation knowledge of the EPM strategy at this point.

The overperformance of the Garstka sheet in Packard #2 implies that there may still be opportunities for improvement in pool strategy. If the factors that make one EPM sheet outperform another can be identified without access to all the pool entries, then further improvements in strategy may be possible.

The results support the use of the ROI calculator as an aid in predicting the best champion or champions to bet in a pool with only standard scoring. The values of the ROIs were off by a factor of 2 or more, but the relative order of the ROIs was approximately correct for the Packard #1 pool. However, it is not clear that the ROI calculator improves much on the obvious contrarian strategy of betting on the #2 team in the pools (Stanford) rather than the overbet #1 team (Duke).

The validity of generalizations from these simulation results is limited by the fact that only two pools from the same year are analyzed. If you are a pool manager or have access to the details of all entries in a pool, please send them to me at tadamsmar@aol.com so that I can extend this analysis. The names associated with entries sent to me will not be revealed, in order to protect privacy. (Privacy was not a concern with the Packard pool entries since they were already posted on the web.)

References

[1] "How to Play Office Pools If You Must" by David Breiter and Bradley Carlin (Chance Vol. 10, No 1, 1997, pp. 5-11)

[2] "March Madness and the Office Pool" by Edward H. Kaplan and Stanley J. Garstka (Management Science Vol. 7, No 3, March 2001, pp. 369-382)