Finding Magic: The Gathering archetypes with Latent Dirichlet Allocation

Combining card games and topic modeling

Hlynur Davíð Hlynsson
Towards Data Science


This article sparked an interesting discussion on Reddit and was featured by Wizards of the Coast.

One of the coolest projects I’ve done using machine learning revolved around using a method for topic modeling called Latent Dirichlet Allocation (LDA). Topic modeling simply means allocating topics to documents. This could be finding a category for a news story, a genre for a book or an archetype for a deck in a card game.

I’ve been playing Magic: The Gathering on and off since I was around twelve years old. I wondered whether it would make sense to apply LDA on Magic decklists to discover archetypes. The results were pleasantly surprising!

AI for deckbuilding

LDA is an unsupervised learning algorithm that accepts as input a set of documents (decklists) and a number of topics (archetypes). By unsupervised learning, I mean that I do not give it any hints about which decks belong to which archetypes. I do not even try to describe the archetypes — I only show it a collection of decklists and ask for a given number of archetypes to be found.
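As a minimal sketch of what this input looks like in practice, assuming each decklist has already been parsed into a list of card-name tokens with one token per copy, training such a model with gensim might look like this:

```python
from gensim import corpora, models

# Assumption: decklists is a list of decks, each a list of card-name
# strings with one entry per copy, e.g.
# [["Memnite", "Memnite", "Ornithopter", ...], ...]
decklists = load_decklists()  # hypothetical loader, not part of gensim

# Map card names to integer ids and count the copies in each deck
dictionary = corpora.Dictionary(decklists)
corpus = [dictionary.doc2bow(deck) for deck in decklists]

# Fit LDA; the only structural hint we give is the number of archetypes
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=30, passes=10)
```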

With LDA, we can discover what cards are associated with what archetype. Perhaps we ask it to describe all decks in terms of three archetypes and it returns lists of cards corresponding to aggro, combo and control decks. Or, for example, if we ask it to find five archetypes it might simply cluster the cards by color: white, blue, black, red or green.

More interestingly, we also get the probability distribution of cards in an archetype. This means that for each archetype, there are card-and-probability pairs where all the probabilities naturally sum to one. Continuing with the example above, we might find that the most popular card in the blue archetype is Snapcaster Mage, the next most popular one is Serum Visions, then Dispel and so on.

Draw, go.

Finding the distributions analytically is usually hard and involves estimating intractable integrals. Methods from a discipline called Bayesian inference, such as variational inference or Gibbs sampling, are used to get the best guess of what the distributions look like from what we know. In this case, “what we know” is our set of decklists, which helps us get a good estimate of what the underlying archetypes look like.

Learning by example

As a general rule in statistics and machine learning: the more data we have, the better estimates we can make. Getting good data is usually a difficult task. Since we are giving the algorithm more and more examples of what decks look like, it will become better at understanding a format the more decklists it sees.

Angel of MTG Decks was kind enough to supply the data for this article. The data set I will analyze consists of 500 Modern decklists from recent tournaments.

I should emphasize here that the data consists of raw lists of cards. The AI does not receive any information about deck names, who built them, or where or when they were played. Just 500 lists of 75 cards.

Determining archetypes

Next, we decide on the number of archetypes to find. If we set it to 1, then we will just get one archetype that includes all the cards in the data set. If we set it to 500, then we can expect each of our 500 decks to become its own archetype.

The number of archetypes to find is a hyperparameter of LDA. This means that it’s an additional number we supply to the algorithm before it does its thing. An unfortunate aspect of machine learning is hyperparameter optimization — which is a fancy term for “trying different values until you’re happy with the results”.

Low numbers of archetypes, around 1–20, mostly produced archetypes grouped by color or by broadly shared cards. These could mostly be summarized as “decks containing Mountains and Plains” or “decks containing Path to Exile” and so on. With values around 20–30 the results were mostly good but ruined by the occasional degenerate archetype, for example, an Affinity-Tron mixture.

Setting the number of archetypes to 30 worked well for me for finding known archetypes.
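As a sketch of what this trial-and-error looks like, assuming the corpus and dictionary from earlier, we can fit a model for a few candidate values and eyeball the top cards of each archetype (held-out perplexity is another common, more formal criterion):

```python
# Try a few values of the num_topics hyperparameter and inspect each model
for k in (5, 10, 20, 30, 40):
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=k, passes=10)
    print(f"--- {k} archetypes, log perplexity: {lda.log_perplexity(corpus):.2f}")
    for topic_id in range(k):
        print(topic_id, lda.print_topic(topic_id, topn=5))
```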

What the machine learned

Let’s look at the top sixteen cards of some of the archetypes that were discovered:

For each archetype, the algorithm returns a list of cards and their respective probabilities. These are the probabilities that a randomly drawn card from a deck of a given archetype is a particular card. We can interpret this as indicating how many copies of a given card are usually in a deck of the given archetype. Modern players will recognize archetype 27 as Infect and archetype 26 as Tron.
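Reading these lists off the trained model is a one-liner in gensim (a sketch; the archetype numbers are arbitrary labels that depend on the training run):

```python
# Print the sixteen most probable cards of an archetype
for card, prob in lda.show_topic(27, topn=16):
    print(f"{prob:.3f}  {card}")
```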

Since I haven’t played for a while, I had to consult the metagame archetypes on MTG Decks to check if my results make any sense. I asked for 30 archetypes to be found, so I received 30 lists of cards with associated probabilities. The archetypes themselves are unnamed and just have an integer associated with them — I had to find out on my own what the archetypes are called. It seems like archetype 27 matches Infect nicely and archetype 26 matches Eldrazi Tron.

Which archetype is this?

I played Standard back in the day when Skullclamp was in every deck and Arcbound Ravager was terrorizing the meta. After noticing my old buddy Ravager, I inspected archetype 13 more closely and sure enough — it’s Affinity.

To get a better sense of the numbers, notice that 0.031 ≈ 2.3 / 75. This can be interpreted as meaning that there are, on average, 2.3 copies of Memnite in a 75-card deck that is exclusively “archetype 13”.

But consider Blinkmoth Nexus: 0.068 ≈ 5.1 / 75. You cannot have more than four copies of a non-basic land in a deck! Remember that the decks are modeled as mixtures of archetypes: it’s unlikely that a deck is described as being 100% any given archetype. If the AI sees an Affinity deck, it might say that the deck is only about 78% archetype 13, which brings the expected count of Blinkmoth Nexus down to roughly 0.78 × 75 × 0.068 ≈ 4 copies per deck.
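The arithmetic, as a tiny helper (the 78% mixture weight here is illustrative, not a number read off the model):

```python
def expected_copies(prob, weight=1.0, deck_size=75):
    """Expected copies of a card in a deck that is `weight` parts this archetype."""
    return weight * deck_size * prob

print(expected_copies(0.031))        # ~2.3 copies of Memnite, pure archetype 13
print(expected_copies(0.068, 0.78))  # ~4.0 Blinkmoth Nexus at 78% archetype 13
```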

We can also read the values of the top cards as a measure of card variety within an archetype. Comparing Affinity to Infect, we see that the top sixteen Affinity cards have a higher probability sum, indicating that this archetype has more 4-of staples.
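One way to compute that variety measure from the model sketched earlier:

```python
# Sum of the top-16 card probabilities: a higher sum means the archetype's
# probability mass is concentrated in a few staple cards
def top_mass(lda, topic_id, topn=16):
    return sum(prob for _, prob in lda.show_topic(topic_id, topn=topn))

print(top_mass(lda, 13))  # Affinity (archetype numbers from this particular run)
print(top_mass(lda, 27))  # Infect
```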

Creative artificial intelligence

Another cool thing about LDA is that it’s a generative model. This means that we can use it to create new decks. To do this, we first choose an archetype, let’s say Affinity. Then we sample new cards to add to the deck, one at a time. The probability that a given card is added is determined by the archetype’s card distribution. However, LDA doesn’t know Magic’s deck construction rules, so we might have to redo a sample if it would make the deck illegal.

A deck with mixed main deck and sideboard, generated from the multinomial distribution defined by the Affinity probability vector. I sampled the cards one at a time, sampling again whenever I would end up with 5 copies of a card.
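A minimal sketch of that sampling procedure, assuming the trained model and dictionary from earlier (the rejection rule mirrors the caption above; note that real deck construction rules exempt basic lands from the four-copy limit):

```python
import numpy as np

rng = np.random.default_rng(0)
topic_id = 13                        # the archetype identified as Affinity in this run
probs = lda.get_topics()[topic_id]   # probability distribution over the whole card pool

deck, counts = [], {}
while len(deck) < 75:
    card_id = rng.choice(len(probs), p=probs)  # draw one card from the archetype
    card = dictionary[card_id]
    if counts.get(card, 0) >= 4:     # would be a 5th copy: reject and sample again
        continue
    counts[card] = counts.get(card, 0) + 1
    deck.append(card)
```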

The model can be extended to distinguish main deck cards from sideboard cards: for example, by first marking cards that frequently appear in sideboards in the data set, then setting the probabilities of those cards to zero while sampling the main deck cards, and conversely for the sideboard cards.
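A hedged sketch of that idea, where frequent_sideboard_cards is a hypothetical, precomputed set of card names:

```python
# Zero out sideboard-heavy cards while sampling the 60-card main deck,
# then renormalize so the probabilities still sum to one
sideboard_ids = [dictionary.token2id[c] for c in frequent_sideboard_cards]
main_probs = probs.copy()
main_probs[sideboard_ids] = 0.0
main_probs /= main_probs.sum()
# ...and conversely, restrict sampling to sideboard_ids for the 15-card sideboard
```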

Further reading

If you want to see how to recreate the results in this article for yourself, please refer to my notebook with the code.

The method was originally proposed in this paper by Blei et al., but the Wikipedia article gives a good technical overview of the method as well.

I used the Python package gensim, which has good tutorials and documentation, to generate the results.
