Some thoughts on the Singapore Puzzle Hunt

It’s been rather more than a month since the first Singapore Puzzle Hunt took place, which means I should really get around to putting down some thoughts about it.

The intention of the Singapore Puzzle Hunt was to introduce people who might be interested — in practice, mainly fans of escape rooms and the REG series — to the world of puzzle hunts. My focus, in particular, was to get them familiar with puzzle hunt staples such as Morse and semaphore. I think we ended up with a representative spread of common extraction methods as a result, although the puzzles were otherwise conservative in format, rarely straying from the identify-solve-extract framework (the need to sort was removed from several puzzles during the editing stage, to make them simpler).

Was it actually a good hunt for beginners, in the end? Well, if the feedback forms are anything to go by, almost all participants were kind enough to say they’d be interested in similar outings in the future. One issue if a 2016 hunt happens (and I hope it does) will be balancing the challenge for teams who’ve now got the basics against accessibility for complete newcomers.

Which might be a good point at which to talk about puzzle difficulty and the spread thereof. Leaving aside the meta (which, even in vastly simpler form, was still probably unfairly difficult), both the mean and median number of puzzles solved were six out of 12, while the mode was 6.5, for whatever that’s worth. Might it have been better to aim for the average team solving three-quarters of the hunt? I’d personally be inclined in that direction, but philosophies might differ.
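(A quick aside for anyone curious how such statistics shake out: the following minimal Python sketch reproduces the quoted figures from a hypothetical set of per-team solve counts. The numbers are placeholders chosen to match, not the actual 2015 data, and the half-point entries assume some form of partial credit was possible.)

    from statistics import mean, median, multimode

    # Hypothetical per-team solve counts out of 12 puzzles, chosen so
    # the summary statistics match the quoted ones; these are not the
    # real 2015 results. Half points assume some form of partial credit.
    solves = [4, 4.5, 5.5, 6, 6.5, 6.5, 9]

    print(mean(solves))       # 6.0 -- arithmetic mean of puzzles solved
    print(median(solves))     # 6 -- middle value once counts are sorted
    print(multimode(solves))  # [6.5] -- most common count(s)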

A bigger issue, and one I think is more clearly an actual problem, was the uneven spread of puzzle solvability. The solve rates, grouped roughly, fell like this:

100%, 96%, 88%
71%, 71%
58%
38%
29%, 25%, 21%
4%, 0%

I’d personally have preferred the toughest puzzles to bottom out at around a 20% solve rate, with more puzzles in the 60% to 70% range.

I think the 2015 hunt has certainly given us a better sense of what does and doesn’t work for first-time solvers; a massive flaw in the organising process was a lack of test-solvers without prior puzzle hunt experience (partly because we wanted such people to take part in the actual thing instead!).

Puzzles aside, other things that bear thinking about for 2016 are hint systems and how best to integrate interactive elements. I imagine there will be a lot of internal discussion when the time comes. But at least one takeaway from the 2015 hunt should not be controversial: the constantly updated leaderboard, which was originally meant for the hunt organisers’ own reference but swiftly became popular with the teams.
