I discovered Wordle on Wednesday. It’s a fun game where you have to guess a five-letter word in six or fewer tries.

At each go, you guess a five-letter word that must exist in the game’s dictionary. The letters are colour coded to tell you whether they are correct (green), in the target word but at a different location (yellow), or not in the word at all (grey).

For example, if today’s word is ELVEN, and you guessed LEANS on your first go, you would see the word LEANS with the letters L, E, and N highlighted in yellow:


Each guess in turn is added to the table until you guess correctly or run out of guesses.
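For anyone curious how that colour feedback can be computed, here's a minimal sketch in Python (my own reconstruction, not the game's actual code). It handles duplicate letters the way Wordle does: mark greens first, then award yellows only against the target letters that remain unmatched:

```python
def feedback(guess: str, target: str) -> str:
    """Score a guess against the target: G = green, Y = yellow, B = grey."""
    result = ["B"] * len(guess)
    unmatched = []
    # First pass: mark exact matches and collect the target's other letters.
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            result[i] = "G"
        else:
            unmatched.append(t)
    # Second pass: a letter is yellow only if an unmatched copy remains.
    for i, g in enumerate(guess):
        if result[i] == "B" and g in unmatched:
            result[i] = "Y"
            unmatched.remove(g)
    return "".join(result)
```

With the example above, `feedback("LEANS", "ELVEN")` gives `"YYBYB"`: L, E, and N yellow, the rest grey.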


There’s a new word each day. On Wedneday, I was lucky and succeeded in five tries, despite not really having much of a strategy. On Thursday, I came armed with an idea: start with a couple of words that cover the ten most common letters in English, i.e. CLEAN and TRIOS. This time, I got it in four.

I wrote a little command line implementation of the game to try different strategies, and then wrote a robot player to test out those strategies.

I ended up with a couple of different algorithms to choose the next word. First, the most informative word, which I compute as a word that

  • has not been guessed yet;
  • contains no letters that have been ruled out; and
  • has the highest total frequency score of letters that have not yet been guessed.
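Those three criteria might be sketched like this (the names and the toy frequency table are mine, not the author's actual implementation):

```python
def most_informative(words, guessed_words, guessed_letters, ruled_out, freq):
    """Pick the word whose not-yet-guessed letters have the highest total
    frequency score, skipping guessed words and ruled-out letters."""
    def info(word):
        # Count each distinct letter once; already-guessed letters add nothing.
        return sum(freq.get(c, 0) for c in set(word) if c not in guessed_letters)
    candidates = [w for w in words
                  if w not in guessed_words and not set(w) & ruled_out]
    return max(candidates, key=info, default=None)

# Toy example with made-up frequency scores:
freq = {"e": 12, "t": 9, "a": 8, "r": 6, "s": 6, "p": 2, "z": 1}
print(most_informative(["stare", "pzazz"], set(), set(), {"z"}, freq))  # stare
```

Here "pzazz" is excluded because it contains the ruled-out Z, leaving "stare" as the most informative choice.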

Second was the most plausible word: the word that

  • has not been guessed yet;
  • has all the known letters in the correct place (the green ones);
  • contains all the letters known to be in the word (the yellow ones); and
  • has the highest total frequency score of letters that have not yet been guessed.
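A sketch of the plausibility filter, under the same assumptions as before (my own naming, not the author's code; `greens` maps position to letter, `yellows` is the set of letters known to be in the word):

```python
def most_plausible(words, guessed_words, greens, yellows, guessed_letters, freq):
    """Pick the highest-scoring word consistent with all feedback so far."""
    def info(word):
        return sum(freq.get(c, 0) for c in set(word) if c not in guessed_letters)
    candidates = [w for w in words
                  if w not in guessed_words
                  and all(w[i] == c for i, c in greens.items())  # greens match
                  and yellows <= set(w)]                         # yellows present
    return max(candidates, key=info, default=None)
```

For instance, after guessing LEANS against ELVEN, the yellows are {L, E, N}; of the remaining candidates, only words containing all three survive the filter.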

When the search space is large, the most informative word is more useful. When few words are left, most plausible gets to the answer quicker. And when you’re on the last guess, most plausible is the only useful approach.

However, this strategy always guessed the same first two words: SOARE and UNTIL. Not a bad choice: together they cover 10 of the 11 most common letters. But why calculate them at all? It’s much quicker to hard-code them.

But then I thought: what if we could cover the 20 most common letters in our first four guesses? Are there four words that cover those 20 letters completely, without any duplication?

A quick hack of a recursive scan of the dictionary of five-letter words later, I had my answer. Sorting the four words by how many of the top 10 most common letters each contains, you get:

  • FILES (4 of the top 10)
  • PRANG (3)
  • DUTCH (2)
  • WOMBY (1)
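The recursive scan can be sketched roughly like this (my own reconstruction of the idea; the author's actual code is on GitHub). It picks words with five distinct letters drawn only from the target set, disjoint from the letters already used:

```python
def find_cover(words, letters, depth=4, used=frozenset(), chosen=()):
    """Find `depth` words with pairwise-disjoint letters, all drawn from
    `letters`. Returns the first combination found, or None."""
    if depth == 0:
        return chosen
    for w in words:
        ws = set(w)
        # A usable word has five distinct letters, all in the target set,
        # none already used by an earlier choice.
        if len(ws) == 5 and ws <= letters and not ws & used:
            found = find_cover(words, letters, depth - 1, used | ws, chosen + (w,))
            if found:
                return found
    return None
```

Run against the full dictionary with the set of the 20 most common letters, a search along these lines turns up combinations like FILES, PRANG, DUTCH, WOMBY.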

A naïve strategy that simply guesses those four words before using one of the cleverer approaches gets the right answer in five guesses 82% of the time, fails only about 2% of the time, and runs very fast. (Speed doesn’t matter for playing, but it makes benchmarking a lot quicker.) But it never gets the answer in fewer than five tries unless the answer happens to be one of those four words.

Changing to use the hard-coded words only when there are more than n possibilities left (I found 3 to be the optimum so far) does not noticeably change the failure rate, but it does mean that the robot guesses the word on the fourth try much more often.
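Putting the switch together, the guess selection might look something like this (the function names and the `fallback` parameter are my own sketch, not the author's code):

```python
OPENERS = ["FILES", "PRANG", "DUTCH", "WOMBY"]

def choose_guess(turn, possibilities, fallback, threshold=3):
    """Play the hard-coded openers while more than `threshold` possible
    answers remain; otherwise let the cleverer strategy (`fallback`) pick."""
    if turn < len(OPENERS) and len(possibilities) > threshold:
        return OPENERS[turn]
    return fallback(possibilities)
```

With the threshold at 3, the robot abandons the openers as soon as the candidate list is small enough that a direct guess is likely to pay off.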

I’ve put my implementation on GitHub. Maybe you can do better?

I used the FILES PRANG DUTCH WOMBY strategy today and guessed the word on the fifth try, although that wasn’t really a guess: there was only one possible word at that stage, as I verified later using grep.