Every year, hundreds of people travel to MIT during the Independent Activities Period for the MIT Mystery Hunt, a popular puzzlehunt. This year was my second Hunt. This is a review, analysis, and/or postmortem of it. It contains some of the solutions, so if you want to go play with the puzzles yourself (as they are all posted online), be forewarned!

How does this thing work, anyway?



First, since most of the people who read this site probably aren't puzzlers, a brief description of the flow of the Hunt. Teams arrive and set up in pre-arranged headquarters (either a location near campus for teams that have them, or a classroom or two for any team that requests one). Everyone gets their stuff set up, then the hunt itself begins on Friday at noon with a kickoff presentation (traditionally in Lobby 7, which is functionally the 'main entrance' at MIT). Then teams return to their rooms and hit F5 repeatedly on the Hunt website (this year that was at borbonicusandbodley.com), waiting for the first round of puzzles to be released. Once they appear, the teams start trying to solve them.

Puzzles come in 'rounds', which are unlocked over time or via solving puzzles in the rounds you already have. Exactly how these work has varied somewhat from year to year. Each round also has a meta-puzzle (or simply 'meta'), which uses all of the answers from the round as the clues to some new (often quite difficult) puzzle.

Each year's hunt also has a theme: a nominal reason for the teams to be solving puzzles. This year's theme was related to the film The Producers, which led to a series of rounds based on ideas for terrible Broadway musicals (all of which were puns on existing musicals: A Circus Line, Okla-Holmes-a!, Into the Woodstock, Mayan Fair Lady, Phantom of the Operator, and Ogre of La Mancha). So, that was cute, and each round's unlock produced a round of laughs and/or groans from the team.

If a team completes all of the rounds, they unlock the 'endgame', which usually involves some final puzzles and culminates in a runaround (a sort of scavenger hunt that involves actually running around MIT campus. Here is the beginning of one from last year). The runaround ends in the ultimate goal of the Hunt: finding a 'coin' (sometimes an actual coin, sometimes not). The team that finds the coin wins the Hunt.

There are also, at least in recent years, a number of 'events' during the hunt. Teams can send a couple of members to these events, which are sometimes puzzle-oriented but can also be skill-based. The reward for the events is points that can be spent on puzzle answers. This is an especially important strategic resource, mostly useful when you are working on a meta-puzzle and need more of the answers from its round to make sense of it.

A review



As a whole, the hunt was a lot of fun. I think last year's (video game-themed) hunt was a better hunt overall - the multiple runarounds were especially fun. But this year had a lot of interesting puzzles, and I certainly performed better than last year. I can claim two solid solves, which I'll discuss in detail later.

This year's approach to round unlocks was, I thought, quite good - each round had a set unlock time (a time at which every team was guaranteed to have it), and the more puzzles you solved, the more points you accrued. Your point total was fed into a function that decreased the time until the next unlock. There were also multiple unlocks per round - each round came in two halves, and there were, I believe, two unlocks for each half (so, 4 unlock points per round).

This was very similar to last year's method, but more sensible - last year the unlocks were based solely on points, which accrued over time with a bonus given for solves. This made it a bit hard to get a quick estimate of how many solves your team had achieved. It felt like the points mapped more directly to how well the team was doing this year.

The result, for our team at least, was a fairly steady flow of new puzzles into the mix. This is good - it means that if a given team member didn't have any insight into any of the existing puzzles, there was always something new for them to work on coming fairly soon.

Each round in this hunt had two meta-puzzles, and the round ended in a 'production', in which teams were tasked with writing and performing a short skit that included the meta-puzzle answers as elements. I wasn't fond of this element - it strayed away from puzzling a little too far for my tastes. Luckily, there were enough people on my team that I didn't really feel pressured to participate. Still, this mostly left me longing for last year's runarounds through the tunnels.

I finally went to an event this year, as well, 'Bringing Stars Together'. I have mixed feelings about this one. On the one hand, the premise of the event was interesting: a logic puzzle (fairly straightforward, with 4 constraints) whose clues are discovered by chatting with the characters involved in the puzzle. This is a novel way to present a logic puzzle, and that part was a lot of fun. On the other hand, the effort-to-reward ratio for this event was laughable - the event lasted more than an hour and a half, and was only worth 0.2 answer unlocks.

The hunt also seemed to have more 'mini-events' and puzzles with physical components (which teams had to retrieve from various rooms around campus) than last year. I think I walked the entirety of the Infinite Corridor at least 10 times. One notable example involved playing a game of Jenga to get the clues for a (very simple) puzzle. These were a welcome addition to the usual LAN-party feeling of sitting in the team headquarters staring at spreadsheets. Not that that isn't a lot more fun than it sounds, of course.

Also, none of the puzzles made it necessary to spend a lot of time outside. In Cambridge in January, this is a welcome feature. On a completely unrelated note, I really need to invest in some Boston-strength clothing.

So, that was the hunt. Codex made an admirable attempt at matching the bar set by Metaphysical Plant last year. I'd say they nearly reached it. I look forward to seeing what the Manic Sages can follow up with next year.


Puzzle Logs



I worked on a number of puzzles, most of which were eventually solved. Here are my thoughts on some of my favourites (and least favourites), along with a description of how we solved (or tried to solve) them.

Blinkenlights



In this puzzle I quickly realized that we had 'top' rows and 'bottom' rows of lights, and that we could click any of the currently lit top lights to change the other lights in some sort of pattern (I mapped all of the positional changes out pretty quickly). Furthermore, the bottom 8 lights for each group of 16 top lights were counting up in binary every time a move was made in their 'group'. At first it looked like each set of four lights was self-contained, but after finding a sequence that turned all 4 lights off, other lights in the group of 16 turned themselves on. I couldn't find any pattern to this until one of my teammates (Max) suggested that maybe each group of 16 lights (4 groups of 4) acted like a single group of four, following the same pattern. Each pair of 4-light sets corresponded to a letter at the top of the screen.

After some legwork (a whole lot of clicking), I turned out all of the lights, revealing the message "SolveRestThenPluralizeTitleWord4". Unfortunately, I didn't see how to solve the 'Rest', as the entire thing was already solved. This is where I dead-ended, and eventually abandoned the puzzle (after about an hour of solid work on it).

The solution, it turns out, was to find the *shortest path* that turns out all of the lights (i.e. solves the maze), which leaves the lights in the bottom rows in a state that spells out "PATENT 2,417,786" in ASCII. Finding that shortest path would have required both a lot of legwork and some non-trivial programming (in JavaScript with Greasemonkey, probably). I had considered this as a possible solution, but dismissed it as too much work to be practical. It's good to know I was on the right track, at least.
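For the curious, the programming half of that approach could be sketched as a breadth-first search over light states. To be clear, this is a hypothetical sketch, not what the puzzle actually used: the toggle rules in `moves` below are invented for illustration, and I never verified the puzzle's real mechanics against this model.

```python
from collections import deque

def shortest_click_sequence(start, goal, moves):
    """BFS for the shortest sequence of clicks taking `start` to `goal`.

    start, goal: tuples of 0/1 light states.
    moves: dict mapping a clickable light index to the list of light
           indices it toggles (made up here for illustration).
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for click, toggles in moves.items():
            if not state[click]:  # only lit lights are clickable
                continue
            nxt = list(state)
            for i in toggles:
                nxt[i] ^= 1  # flip each affected light
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [click]))
    return None  # goal unreachable

# Toy example: clicking light 0 also toggles light 1.
moves = {0: [0, 1], 1: [1], 2: [2]}
print(shortest_click_sequence((1, 1, 1), (0, 0, 0), moves))
```

Because BFS explores states in order of click count, the first time it reaches the goal it has found a minimal sequence - which is exactly the property the puzzle's "shortest path" extraction would have needed.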

Pure and Simple



This puzzle fell into a certain class of puzzles: a simple series of images presented with little to no context. I'm not historically that great at these, but in this case, I had the a-ha moment that led to the puzzle being solved.

Image puzzles are a lot like word association games with a visual element. The first step is usually to identify all of the images, and our team had done that by the time I looked at the puzzle. Initially I just glanced at the images, nothing clicked, and I moved on. In the lull after solving Revisiting History (see below), though, I looked at this puzzle again. Someone had identified the second picture on the right side as Brahms. Which is when it hit me: the last picture on the left side was a picture of 3 coke cans.

'Cans and Brahms', of course, is an instrumental track from Yes' album Fragile, which consists of some brief excerpts from one of Brahms' pieces arranged and played with synthesizers. From there the team was easily able to deduce that all of the images could be paired to form song titles separated by 'and'.

I knew my near-encyclopaedic knowledge of progressive rock would come in handy some day.

Revisiting History



Ahh, the Doctor Who puzzle. I recognized what was going on in this puzzle at almost the exact same time as one of my teammates - I turned to tell him about it (he is easily the most knowledgeable Doctor Who fan I know), only to discover he was already beginning to match the descriptions to the companion(s), Doctor, and episode titles. We had that information down amazingly quickly, but extraction was difficult - nothing we tried seemed to work. Then another teammate noticed that the word 'who' appeared in every clue. Using that as an index got us to the answer very quickly. I think this may have actually been our first solve - we had it within the first hour of the hunt, certainly.

Eek!



This puzzle was a lot of fun, and I am proud to say that I can claim most of the work for my team in solving it. It is, obviously, a 3-d maze. We took each self-contained 'piece' of the puzzle on each level and numbered them. Then, we mapped the connections between numbered nodes, and used this program (which took something like 10-20 minutes to write and test) to solve it:

[sourcecode language="python" gutter="false"]
#!/usr/bin/python

import networkx as nx
import sys

source = '44'
target = '3'

def parse_input(infile):
    """Build a graph from lines of the form 'node: neighbour, neighbour, ...'."""
    g = nx.Graph()
    f = open(infile, 'r')

    for line in f:
        meta = line.split(':')
        node = meta[0].strip()
        edges = meta[1].split(',')

        g.add_node(node)

        for n in edges:
            other_node = n.strip()
            g.add_edge(node, other_node)

    f.close()
    return g


def main():
    infile = sys.argv[1]
    g = parse_input(infile)
    path = nx.shortest_path(g, source, target)
    print(path)
    print(len(path))


main()
[/sourcecode]
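For context, the mapping data was just a plain-text adjacency list, one node per line with its neighbours after a colon. The node labels in this sample are invented for illustration (only the source '44' and target '3' labels come from the real solve), but a minimal sketch of the format and how it parses looks like this:

```python
# Made-up sample of the "node: neighbours" format parse_input() expects.
sample = """\
44: 12, 17
12: 44, 3
17: 44
3: 12
"""

adjacency = {}
for line in sample.strip().splitlines():
    node, _, rest = line.partition(':')
    adjacency[node.strip()] = [n.strip() for n in rest.split(',')]

print(adjacency['44'])
```

A format this simple meant the bottleneck was the manual numbering and transcription of the maze pieces, not the code - which is also where our one transcription error crept in.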

There were a couple of hitches, a small error in our mapping data being the crucial one. But we got a solution, then mapped it onto the maze (with a highlighter). The result clearly said 'side elev' on the top row, so we took the thing and mapped it out in Burr Tools. We found the word 'Love' very quickly, but that wasn't the answer. So, I shelved the problem and went to bed.

Looking at it again the next day (Sunday morning), I saw the trick: the flavour text implies that the path taken should be the negative space, not the positive space. With this as a clue, I re-mapped the solution in Burr Tools, inverting which blocks were solid. This led to the answer: the word 'Love' was still visible on one side, while 'Etc' was visible on the other. 'LOVE ETC' is, of course, the answer.


JFK SHAGS A SAD SLIM LASS



We didn't solve this puzzle, but I want to include it because it is very clever. I almost solved it, too. I looked at the keyboard and typed out the phrase (which took a while, because I'm used to typing in Dvorak), but I didn't spot any obvious patterns, probably because I was focusing too much on remembering QWERTY. So close.


Sounds Good to Me



This was my absolute favourite puzzle of the hunt. It was delightful in every way. The cluing at the beginning sets the tone: in Greek characters is the Latin phrase 'nota bene: non sequitur lingua Iaponica!'. Which is to say, basically, 'note well: what follows is not Japanese'.

Instead, it is toki pona, a constructed language (conlang) with, according to Wikipedia, 3 fluent speakers. This wonderfully obscure language is fairly light on vocabulary (120 root words and a smattering of loanwords where necessary). One of my teammates transliterated the hiragana into Latin characters, and another identified the result as toki pona (he recognized it thanks to a passing acquaintance with the creator of the language).

An automated toki pona -> English translator exists, but doesn't work very well (as we quickly discovered). Instead, I translated most of the entries by hand, learning toki pona vocabulary and grammar as I went. This felt very much like my recent Old Norse translation project, and like all translation, was enjoyable for its own sake. As I translated, it became clear that the toki pona text was providing definitions of words or phrases in other languages. Eventually I found that the words at the end of each paragraph were language names in toki pona (the 'official' dictionary doesn't list these, as they are loanwords). After we figured out a couple of these clues, it became obvious that they were phrases or words used in English but actually borrowed from the given languages.

These clues gave us the acrostic DANKESCHOENINJAPANESE - that is, 'danke schoen' (German for 'thank you') in Japanese. Japanese has a lot of ways to say 'thank you', but the 7/9 at the bottom of the page clued us into a two-word phrase whose words, in Latin characters, would be 7 and 9 letters long. So, ARIGATO GOZAIMASU was the obvious choice.

I often describe myself as an 'amateur linguist'. Philologist might be the better term. I really love Language. Learning languages and playing with language are both hobbies of mine. Most of the time, this isn't terribly useful, mainly because I never devote enough time to any one language to learn it thoroughly. However, in this case my exact sort of language skills and knowledge were perfectly suited to this puzzle. If any one puzzle next year is half as fun as this one was, it will be well worth the trip.