CMSC 471 -- Mastermind Project Description -- Fall 2012

Released 10/2/12, last updated 10/2/12

The Game of Mastermind

Mastermind is a classic guessing game. One player (the "chooser") chooses a "secret code" and the other player (the "guesser") must guess the code within a certain number of guesses.

In the standard version of the game, the code consists of four pegs ("code pegs"), each of which is one of six different colors (white, orange, yellow, red, blue, or green). For each guess, the guesser chooses four pegs of these same colors. The chooser then gives feedback on the guess, using smaller red and white pegs ("key pegs"). One red peg is given for each correct color that is also in the correct location; one white peg is given for each correct color that is in the wrong position.
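
To make the feedback rule concrete, here is a minimal Common Lisp sketch of the key-peg computation, assuming codes and guesses are represented as lists of color symbols; the name score-guess is a placeholder chosen for this example, not a function provided with the project.

  (defun score-guess (code guess)
    "Return a list (RED-PEGS WHITE-PEGS) for GUESS scored against CODE."
    (let* (;; right color in the right position
           (exact (count t (mapcar #'eql code guess)))
           (colors (remove-duplicates (append code guess)))
           ;; for each color, the smaller of its counts in CODE and GUESS
           ;; gives the number of matches of that color, exact or not
           (total (loop for c in colors
                        sum (min (count c code) (count c guess)))))
      ;; white pegs are the non-exact color matches
      (list exact (- total exact))))

For example, (score-guess '(red red blue green) '(red blue blue yellow)) returns (2 0): two pegs are the right color in the right position, and no additional pegs are the right color in the wrong position.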

The game can be made arbitrarily difficult by increasing the number of code pegs and the number of colors. Your job is to write a Mastermind guesser that can play any size game.

Mastermind Strategy

There are a number of strategies that can be applied to reduce the number of guesses required to find the code. The Wikipedia page on Mastermind gives a simple strategy that always solves the standard game within six guesses, and a five-guess strategy by Donald Knuth. The five-guess approach exhaustively enumerates all of the possible codes, eliminating those that are inconsistent with the answers (red/white pegs) received so far. It then chooses a guess with the highest "minimax" score -- in this case, the guess that maximizes the minimum number of possibilities that could be eliminated by any of the possible red/white answers. This sounds a bit confusing, but it is relatively straightforward to implement.
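
As a rough illustration of the minimax idea (and only an illustration -- Knuth's full method also considers guesses outside the remaining candidate set), one could score candidate guesses as follows, assuming the score-guess sketch above and a list of codes still consistent with all feedback received so far; the names minimax-score and choose-guess are made up for this example.

  (defun minimax-score (guess candidates)
    "Number of CANDIDATES guaranteed to be eliminated by GUESS in the worst case."
    (let ((buckets (make-hash-table :test #'equal)))
      ;; partition the candidates by the (red white) reply they would produce
      (dolist (code candidates)
        (incf (gethash (score-guess code guess) buckets 0)))
      ;; the worst case is the largest partition (the least informative reply)
      (let ((worst 0))
        (maphash (lambda (reply n)
                   (declare (ignore reply))
                   (setf worst (max worst n)))
                 buckets)
        (- (length candidates) worst))))

  (defun choose-guess (candidates)
    "Pick the candidate guess with the highest minimax score."
    (loop with best = nil and best-score = -1
          for g in candidates
          for s = (minimax-score g candidates)
          when (> s best-score) do (setf best g best-score s)
          finally (return best)))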

Of course, this five-guess strategy isn't at all scalable: as the number of code pegs and the number of colors grows, the complexity of this approach increases dramatically. In fact, Stuckman and Zhang showed that the "Mastermind Satisfiability Problem" (choosing a set of guesses that taken together will determine the code unambiguously) is NP-complete (i.e., presumed to have a complexity that grows exponentially in the number of code pegs and colors).

The 1982 SIGART article by Rao gives a heuristic algorithm that seems unlikely to be optimal, but which would probably work for any problem size. (Whether it would scale efficiently is another question...)

Here are three simple strategies that aren't very scalable, but will always work:

  1. Exhaustively enumerate the possible codes and guess each one in lexicographic order, paying no attention to the red/white peg responses. This method will take no more than c^p guesses (where c is the number of colors and p is the number of code pegs) -- 6^4 = 1296 for the standard game. Not very good, but it's a starting point.
  2. Exhaustively enumerate the possible codes and guess each one in lexicographic order, skipping any code that is demonstrably inconsistent with a previous red/white peg response (a sketch of this consistency check follows this list). This will be at least somewhat better than strategy #1.
  3. Use your first c-1 guesses to guess "all reds," "all whites," etc. - one "monochrome" guess per color, for all but one of the colors. At that point, you'll know how many code pegs there are of each color (namely, the number of red pegs associated with each guess). You don't need to actually guess the last color, since that color must have however many red pegs weren't allocated to other colors. Once you have this information, you can start generating only combinations that are consistent with the known color distribution. This approach has lower complexity than the first two strategies, since it "narrows in" on the right colors more quickly, and does less checking against the red/white responses. (Rao's approach is somewhat similar in spirit to this strategy, but adds other constraints to make it more sophisticated.)
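
As a starting point for strategies #1 and #2, here is a rough sketch of lexicographic enumeration and of the consistency test. It assumes the score-guess function sketched earlier; the names all-codes and consistent-p are made up for this example, and the history is taken to be a list of (guess (red white)) pairs recorded so far.

  (defun all-codes (colors pegs)
    "All c^p codes over COLORS, as lists, in lexicographic order."
    (if (zerop pegs)
        (list nil)
        (loop for c in colors
              append (mapcar (lambda (rest) (cons c rest))
                             (all-codes colors (1- pegs))))))

  (defun consistent-p (candidate history)
    "True if CANDIDATE could be the secret code given HISTORY of (guess (red white)) pairs."
    (every (lambda (entry)
             (destructuring-bind (old-guess reply) entry
               ;; if CANDIDATE were the code, it must reproduce every past reply
               (equal (score-guess candidate old-guess) reply)))
           history))

Strategy #2 is then just: guess the first element of (all-codes colors pegs) that satisfies consistent-p, and add each guess and its reply to the history.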

These might be good starting points when you begin your implementation, as baselines to make sure you can get something working, and against which you can compare the performance of a more sophisticated strategy.

Mastermind Challenges

So how can you solve this problem in the general case (any number of code pegs and colors)? Well, that is the first of your three challenges: (1) implementing a general-purpose Mastermind algorithm. (This challenge will be based on a fixed-size game of 8 pegs and 8 colors.) Clearly, one wants to choose a guess at each step that will maximally reduce the search space (the number of remaining legal guesses) at the next step. How exactly to do this is an open question.

As the problem gets larger (more code pegs/colors), it will take any given algorithm longer to make each guess, and more guesses will be required. Therefore, the second challenge is (2) to make your Mastermind algorithm as scalable as possible, in terms of the number of guesses needed to find the code within a maximum running time. That time limit is TBA, but is currently set to 1 second of CPU time in the tournament code. The unit of CPU time in clisp's (get-internal-run-time) function, which is used for timing, is a microsecond, so *max-time* is set to 1,000,000. You may wish to change this for your own testing purposes; if you set *max-time* to 0, the tournament code will not enforce any time limit, which may be useful as you are developing your algorithms. Note that the timeout is handled by a throw from the mm-score function back to the tournament code, so your code doesn't even need to know about it. Since clisp doesn't support multithreading, there isn't an easy way to interrupt guessers that are stuck in an infinite loop, so we may have to do some manual timeouts in the actual tournament.
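
For what it's worth, the catch/throw mechanism works roughly as sketched below. This is not the actual tournament code -- the names checked-score, play-one-game, and the tag out-of-time are invented for this illustration -- but get-internal-run-time and the *max-time* convention are as described above.

  (defparameter *max-time* 1000000)   ; internal time units; roughly 1 second of CPU in clisp
  (defvar *start-time* 0)

  (defun checked-score (secret guess)
    "Score GUESS, but throw back to the tournament loop if time has run out."
    (when (and (plusp *max-time*)
               (> (- (get-internal-run-time) *start-time*) *max-time*))
      (throw 'out-of-time :timeout))
    (score-guess secret guess))       ; score-guess as sketched earlier

  (defun play-one-game (secret guesser)
    "Run GUESSER, handing it a scoring function; return :TIMEOUT on a time-out."
    (setf *start-time* (get-internal-run-time))
    (catch 'out-of-time
      (funcall guesser (lambda (guess) (checked-score secret guess)))))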

But what if the player who is choosing the code is known to be biased in some way? What if you knew that the chooser never uses pegs of certain colors, never uses the same color more than once, always puts the same color in the first and last places, or prefers codes with fewer colors (but sometimes uses more colors just to throw you off the scent)? This leads us to your third and final challenge: (3) learning a chooser's strategy and using this knowledge to guide the guessing process.

For the learning challenge, I will design and implement several biased code choosers, each with a particular "pattern" of codes that you will need to implement a learning approach to discover. Examples of possible biases: never choose a code with a red or orange peg; always choose codes with exactly three colors; never use a color more than once; always choose codes that alternate colors; have a probabilistic preference for fewer colors (e.g., with probability .5, use exactly 1 color; with probability .25, use exactly 2 colors, etc.); always have the colors appear in a certain order (e.g., red always before yellow). As you can see, there is a nearly unlimited space of possible biases, so you will have to think hard about your learning feature space and algorithm.
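
To give one purely hypothetical example of a feature space: each sample code could be mapped to a small numeric feature vector -- how many distinct colors it uses, whether its first and last pegs match, how often adjacent pegs repeat, and how many pegs it has of each color -- and a standard classifier trained on those vectors. The function below is only a sketch of that idea (the name code-features is made up here); choosing good features is part of the assignment.

  (defun code-features (code colors)
    "Map CODE to a flat list of numeric features; COLORS is the full palette, in a fixed order."
    (append
     (list (length (remove-duplicates code))               ; number of distinct colors
           (if (eql (first code) (car (last code))) 1 0)   ; first peg equals last peg?
           (count t (mapcar #'eql code (rest code))))      ; number of adjacent repeats
     (mapcar (lambda (c) (count c code)) colors)))         ; count of each palette color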

There will be a reasonably varied set of "example biased choosers" (where I will give you a Lisp implementation of the chooser and describe the strategy that it uses, along with a large sample of codes generated by the chooser).

There will also be several "test biased choosers" that will be used in the tournament. For these, you will receive only a large sample of generated codes. Most of these choosers will use strategies that are in some way similar to the example strategies; but two or three of them will use some completely novel strategy.

I will distribute the test choosers at least several weeks before the tournament so that you have some time to run your learner on them. You should not use the results of learning to figure out what the strategy is by hand; however, you may do your learning in advance (offline) and then produce a different parameterized guesser for each of the test choosers.

All teams should design a guessing strategy that is expected to do well on challenge #1. I expect that different teams will choose to focus more of their energy on either challenge #2 or #3, but you should have some solution that is plausibly scalable (works for any size problem, at least in theory) and some learning approach (even if it is quite simple and limited). Three-person teams will be expected to have good designs for all three categories. Also, all of the project writeups (as discussed below) must address all three challenges and the approaches you designed to try to meet them.

More on the Learning Challenge

For the learning challenge, your code should operate in two stages. In an offline stage (which can happen before the tournament begins, and before you submit your code), you should have a machine learning (classification) algorithm that learns the biased code generation rules. For each of the (three or four) biased rules, this ML algorithm should read a file containing positive and negative training instances, and generate some encoding of the learned mapping. Then your guesser should use this mapping to constrain or bias its guessing strategy (i.e., don't guess a code that this generator would never or rarely generate). You should save the encoding that you learned and submit it along with your guesser, so that in the tournament, your guesser will be able to use the appropriate bias (selecting from among the alternative biases using the generator argument to the solver).
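
How the learned encoding plugs into the guesser is up to you. One hypothetical arrangement (the names learned-bias-score and *bias-threshold* are invented here, and the stub below stands in for whatever your offline learner actually produces) is to turn the learned mapping into a plausibility score and use it to prune and order the remaining candidates:

  (defparameter *bias-threshold* 0.1)   ; hypothetical cutoff for "the chooser would never pick this"

  (defun learned-bias-score (code)
    "Stub: estimate in [0,1] of how plausible CODE is for the learned chooser."
    (declare (ignore code))
    1.0)

  (defun bias-filtered-candidates (candidates)
    "Drop candidates the learned model rules out; sort the rest, most plausible first."
    (sort (remove-if (lambda (c) (< (learned-bias-score c) *bias-threshold*))
                     candidates)
          #'>
          :key #'learned-bias-score))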

I've posted four training files for the biases, along with the Lisp code that I used to generate them. (You can generate more data if you want to, though you won't get the code for the "test biases" when it's time for the tournament.) The Lisp code explains what each of the biases actually is (but again, at tournament time, you won't know! - though the tournament biases will be similar in complexity and nature to the four training biases).
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-biases.lisp
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias1.txt
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias2.txt
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias3.txt
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias4.txt

Project Requirements and Grading

There are three graded components of the project: (1) your implemented system, (2) a group project report, and (3) your performance in the class tournament. Additionally, there will be several required milestone assignments, which will be graded on the basis of completion; these will include a dry run of the tournament as well as a project design document. Note that you will be graded primarily on the thoughtfulness and clarity of your design and presentation, not on your algorithm's performance. The reason for this is that it gives you the freedom to try a risky approach that is interesting from a design perspective but might not work very well. An approach that doesn't work very well and is also naive, trivial, or not well motivated will receive a correspondingly trivial grade.

Project Design Document. (due Thursday Nov. 8th) Submit an approximately one page design document outlining in particular the AI algorithms you expect to use, a description of how you will tie your system together to produce guesses and arrive at a solution, and a *very* basic framework for your implementation.

Primarily, I just want to see that you and your partner have been thinking about this problem and that you have come up with at least the basic details for your approach. I do not expect you to have a complete pseudocode implementation or anything like that, just enough to convince me you know what you're doing.

The project design document will contribute very slightly to your final project grade, graded simply as completed or not completed. However, if your document leaves me unconvinced that you and your partner are on the right track, I will call you in to meet with me and discuss what is going on.

Please make sure to include the names of everyone on your team! (and feel free to come up with a team name!)

Implemented Mastermind Player (45%). Your implementation will be graded on correctness (i.e., did you correctly implement the solution that you described in your paper?), as well as on design (generality, clarity, and elegance) and readability (indentation, comments, modularity, etc.).

You must have a working Mastermind player by the date of the tournament dry run (Thursday, November 15). If you do not submit a working player into this tournament, you will receive a 5 point penalty on your grade for the final project. (Players that "almost work" but require some manual repair to get them working may receive a partial penalty.) Your final player must be submitted by the time of the second dry run (Thursday, December 6). You may resubmit your code after that time, but may not change any of the top-level function names, since we will already have set up the configuration file for the tournament. The in-class tournament will be held on the last day of class, Tuesday, December 11.

Note that we may have a "zeroth dry run" on some earlier date (before November 15) to test the tournament software itself. Participation in that dry run is entirely optional and will not affect your grade (but may give you some sense of how your solution performs compared to other teams' solutions).

Project Report (40%). Each team must submit a project report describing your approach, your experience in designing and implementing the approach, and the performance of your system. I would expect these reports to be somewhere in the 5-10 page range, but there is no minimum or maximum if you include all of the required information. Your report should include the following:

  1. Survey of the background reading you did on the Mastermind game and strategies, with citations.
  2. A discussion of your approaches for Mastermind guessing and for learning the biased code choosing strategies, with explicit citation of ideas that you drew upon or borrowed directly from the literature.
  3. Some theoretical analysis (mathematically formal would be nice, but intuitive is OK too) that discusses the computational complexity of your algorithm and the expected number of guesses (which could be based on the degree to which each guess is expected to reduce the size of the remaining solution space), in terms of the size of the problem.
  4. Experimental evaluation of your system with respect to the three challenges. This evaluation should include performance of your system and at least one baseline. (The baseline can be the generate-and-test algorithm that I've provided, but preferably will be something a bit more reasonable, such as my strategy #2 or #3, or Rao's strategy.)
    1. Performance on the fixed-size problem (CPU time and number of guesses for at least 10 random codes).
    2. Performance on the scalability challenge, as measured by CPU time and number of guesses for a series of problems of increasing complexity (number of code pegs and colors) -- the exact problem sizes are up to you.
    3. Performance on the learning challenge, using data from the biased code choosers that I will provide. You should include results on both the "example" biased choosers (i.e., the ones with known strategies) and the "test" choosers (i.e., the ones that will be used in the tournament). Results here should include CPU time and number of guesses, and may also include other evaluations of the learning itself (e.g., some objective or subjective measure of how well your learner learned the biased chooser's strategy).
    The experimental results should be presented clearly; you may wish to include tables of data, scatterplots, and/or statistics (e.g., means, standard deviations, confidence intervals).

The final code (submitted on gl) and project report (hardcopy only) are due by 2:30 p.m. on Monday, December 17, in ITE 208. You may submit the hardcopy to my mailbox in ITE 325B or in person up to the deadline. NO LATE PROJECT REPORTS OR CODE SUBMISSIONS WILL BE ACCEPTED UNDER ANY CIRCUMSTANCES. You may redesign and resubmit your code after the in-class tournament if you wish; your grade for the implementation and project report will be based on your final submission. If you change your code after the tournament, however, you should include a summary of those changes in your report, so that we know which problems in your code were fixed after the tournament.

Class Tournament (15%). The tournament performance grade will be based on whether your player successfully runs (i.e., you have a working system at the time of the dry run and final tournament -- about half of the credit), and on how well it actually performs in the tournament (i.e., your system beats the other approaches -- about half of the credit). (Note that the latter grade, about 5% of your total project grade, is the only part of your project grade that is directly tied to performance (run time and number of guesses). So it is important to have a player that plays well, but it's even more important to have a strong justification and clear design for your player.)

The class tournament will pit teams' guessers against each other to see who is the best guesser in each of the three "challenge categories." In all rounds, there will be a time limit (to be announced in advance, after the dry run; most likely corresponding to around 30 seconds in elapsed time, on the gl server). Within this time constraint, the winner will be the team that uses the fewest guesses, on average, to find the correct solution. I may schedule some "elimination rounds" offline, prior to the real-time in-class tournament, to select the "seeds" who will compete in the in-class tournament. (But perhaps we will not reveal the selected entrants until the day of the tournament, just to keep things lively.)

For the fixed-size game, the score will just be the average number of guesses across some number of random codes (probably 5, but TBD once we see how long a typical solution takes, how many teams there are, etc.)

For the scalability challenge, the size of the problems will increase after each round (in both number of pegs and number of colors -- note that there could be more pegs than colors, or more colors than pegs).

For the learnability challenge, the winners will be the teams whose guessers have the smallest number of guesses, on average, for each of the several biased code choosers.

Exceptional performance in any of the categories will merit extra credit. Currently, I'm thinking that at least the top 3 teams in each category will earn some extra credit, but that could change depending on how tight the spread is between the teams.

Mastermind Resources

Here are a few online resources that you may find useful.

"Mastermind", Wikipedia, http://en.wikipedia.org/wiki/Mastermind_%28board_game%29

"Investigations into the Master Mind Board Game," Toby Nelson, February 1999, http://www.tnelson.demon.co.uk/mastermind/

Jeff Stuckman and Guo-Qiang Zhang, "Mastermind is NP-Complete," arXiv:cs/0512049, http://arxiv.org/abs/cs.CC/0512049

T. Mahadeva Rao, "An algorithm to play the game of mastermind," SIGART Bulletin 82, October 1982, http://portal.acm.org/ft_gateway.cfm?id=1056607&type=pdf&coll=Portal&dl=GUIDE&CFID=55406954&CFTOKEN=38683075