Investigating the mathematics behind faro shuffles while answering the question: how many perfect shuffles does it take to return a deck of cards to its original order?

It was 2 years ago when I came across a deck of artisan playing cards. Looking sleek with its neon red accent, I purchased it without much thought. I was never particularly good at handling cards; fanning them out or doing the faro shuffle was a bit of a struggle for me. But that brand new deck of cards gave me enough false confidence to try to master it yet again.
Progress was gradual, but I was increasingly hooked by the mystique of a simple deck of cards. This was when I experienced an r/ShowerThoughts moment.
How many times must I perfectly shuffle a deck of cards to return it to its original order?
As you can probably predict, I was nowhere near the level where I could perfectly shuffle 52 cards 8 times in a row just to discover that that is exactly how many shuffles it takes to restore the original order. In the end, I wrote a simple script to do the shuffling for me.
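For the curious, here is a minimal sketch of what such a script can look like (my actual script isn't reproduced here, and this version assumes we are doing out-shuffles on an even-sized deck):

```python
# A minimal sketch: repeat perfect (out-)shuffles until the deck
# returns to its starting order, and count how many it took.
def out_shuffle(deck):
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    # interweave the halves, keeping the original top card on top
    return [card for pair in zip(top, bottom) for card in pair]

def shuffles_to_restore(num_cards):
    original = list(range(num_cards))
    deck, count = out_shuffle(original), 1
    while deck != original:
        deck, count = out_shuffle(deck), count + 1
    return count

print(shuffles_to_restore(52))   # 8
```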
I plotted the findings on a line graph (Fig.1) and what immediately caught my attention was the pattern that emerged.

I couldn't explain why it only took 14 shuffles for a deck of 384 cards when it took 52 shuffles for a deck of 54 cards. Another interesting pattern was the "peaks" of the graph, which line up perfectly along a straight line. It was beginning to get a little complicated…
I took to _r/math_ in hopes of finding an answer. The post received a few lengthy explanations (that I am grateful for) referencing the modulo operation and powers of two. Equipped with only basic high school math, I struggled to comprehend the intuition behind the answers and eventually just left it at that.
Fast forward to today: I was working on the final day of Advent of Code when I came across the Discrete Logarithm Problem (DLP). Euler's totient function Φ(n) came up as an important concept while I was trying to solve it.
The totient function Φ(n), also called Euler's totient function, is defined as the number of positive integers ≤ n that are relatively prime to (i.e., do not contain any factor in common with) n.
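In code, this definition translates into a simple brute-force count (a throwaway sketch for illustration only, not an efficient implementation):

```python
# Count the positive integers up to n that share no factor with n.
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(10), phi(53))   # 4 and 52 (53 is prime, so every smaller positive integer counts)
```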
While reading up on the totient function, I came across an explanatory page with a line graph of the totient function for a range of positive integers (Fig.2). Overcome with a major sense of déjà vu, I recognized the familiar patterns.

The plot of the totient function somewhat resembles the plot of the number of shuffles needed to restore a deck of n cards. Hypotheses immediately started forming in my mind; I was convinced that the secrets of the perfect shuffle somehow had to do with the Discrete Logarithm and the Totient Function.
The Faro Shuffle
Before I begin going into the math, let's look at what it means to do a "perfect", or faro, shuffle.
To do a faro shuffle, you first split the deck into 2 equal parts and subsequently interweave them one card at a time, combining them.
Many people can perform the faro shuffle without much thought and do it relatively easily (after 2 years I am still as bad as I was before); however, few stop to realize that there are actually 2 different types of faro shuffle.
Out-shuffle
An out-shuffle keeps the original top and bottom cards in their original positions.

In-shuffle
An in-shuffle moves the original top card to the second position from the top and the original bottom card to the second position from the bottom.

By now you can probably see, intuitively, that different types of faro shuffles will require a different number of shuffles to restore the deck to its original order.
To solidify that intuition, it is imperative that we express the faro shuffles mathematically.
What happens after one shuffle?
Let’s first look at the out-shuffle and how the position of each individual card will change after one shuffle.
Out-shuffle
We begin by labeling the positions of each card with a zero-indexed numbering system. The first card in the deck will be at position 0, whereas the last card of the deck will be at position N-1, where N represents the total number of cards in the deck.

Since we will need to split the deck into 2 equal parts, it will be more useful to modify the notation using n, where n represents the number of cards in each of the equal parts (so N = 2n).

Positions 0, 1, ..., n-2, n-1
belong to the first half, where the first card has a position value of 0 and the last card has a position value of n-1. Similarly, positions n, n+1, ..., 2n-2, 2n-1
belong to the second half, where the first card has a position value of n and the last card has a position value of 2n-1.
This modified notation allows us to see the deck as 2 distinct halves, giving us a more natural way to model a shuffle.
Since the top and bottom cards of the deck keep their positions, only the inner portion of the deck changes position after one out-shuffle.
We can visualize the final positions of each card after a single out-shuffle as such:

The cards are now interwoven, and for every out-shuffle, we can use this pattern to determine the position of each card (more on this later on).
In-shuffle
Following the same zero-indexed numbering system, we can express the result of an in-shuffle for a deck of cards like what we did for the out-shuffle.
After an in-shuffle, all of the cards will experience a change in their position, unlike the out-shuffle.
The result of an in-shuffle can be expressed as such:

The original top card becomes the second card from the top and the original bottom card rises to take the place of the second card from the bottom.
Can we condense this into an elegant equation?
There's a definite pattern to the shuffling; after all, intuition tells us that it is simply putting one card from one half on top of a card from the other half.
The interweaving motion of both types of shuffles can surely be condensed into an elegant equation.
Out-shuffle
For the sake of simplicity, let’s use a deck of 6 cards to visualize the out-shuffle.

After a single shuffle, the original card at position 1 has moved to position 2 and the original card at position 2 has moved to position 4. If you noticed this pattern right off the bat, it wouldn't be wrong to guess that after an out-shuffle, the new position is given by a function of 2×original position. However, as you examine the other cards, you realize it does not generalize to all of the cards, especially those from the second half.

The reason why a function of 2×original position works well for the first half is that the result directly equals the number of cards that will precede a single card in the first half after an out-shuffle.
In fact, we can think of the second half of the cards in the same way, it just isn't as obvious. The interweaving motion of a shuffle is the same for both halves of the deck, and hence if the function 2×original position is enough to determine the final position of the cards in the first half, perhaps we are right at the cusp of a general equation.
The same interweaving motion is happening in the second half of the cards, but the function 2×original position gives us a final position that exceeds the maximum position. What if we just "wrapped" it around?

By applying a modulo operator, we "wrap" the result of the function 2×original position around and we are able to arrive at the correct position for cards in the second half.
Adding the extra modulo operator to the function 2×original position gives us the general (and rather elegant) equation we have been looking for.
final position = 2j mod (2n - 1), where j is the card's original (zero-indexed) position
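To convince ourselves, here is a quick sketch that applies the formula to our 6-card deck, treating the bottom card as the special case that stays put:

```python
# Out-shuffle of a 6-card deck (n = 3): final position = 2j mod (2n - 1),
# with the bottom card (j = 2n - 1) keeping its position.
n = 3
final = {j: (2 * j) % (2 * n - 1) for j in range(2 * n - 1)}
final[2 * n - 1] = 2 * n - 1          # the bottom card never moves
print(final)                          # {0: 0, 1: 2, 2: 4, 3: 1, 4: 3, 5: 5}
```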
In-shuffle
We can adopt the same 6-card experiment to visualize the in-shuffle as well.

After a single in-shuffle, we see that even the top and bottom cards have their positions changed. The original card at position 0 has moved to position 1 and the original card at position 1 has moved to position 3.
At this point, you realize that the equation we just discovered evidently does not generalize to an in-shuffle. But what if we simply add a +1 to the end of it?
You are excited again, so you begin to test it on the second half of the cards…

Disappointed, you realize that 2j mod (2n-1) + 1
does not generalize to the second half of the deck. But being ever so determined, you discover that (2j - 1) mod (2n-1)
works for the second half. It is starting to get messy having a conditional, piecewise function represent a single in-shuffle operation.
The only difference between the interweaving motion of an out-shuffle and the interweaving motion of an in-shuffle is in which half of the cards "go above" the other. In an out-shuffle, the first half of the cards goes above the second and vice versa.
Intuitively, there shouldn’t be much difference between the general equation of an in-shuffle and an out-shuffle.
We know that in an out-shuffle, the equation 2j mod (2n-1)
will give us the number of cards that will precede an individual card after a shuffle. It conveniently also equals the final position of the card. This is in part because of the nice properties of a zero-indexed numbering system.
Thinking along the same lines, we want an equation with those properties for an in-shuffle. A zero-based indexing system didn't work out well, but what about a one-based indexing system?
Using a one-based indexing system:


Simply changing the indexing system doesn't do the trick just yet. We also need to reconsider our "wrapping" logic, i.e. the modulo operand.
Since we are using a one-based indexing system, our general equation should never produce a value of zero. Doing a simple test: if we were to assign our modulo operand as 6, the original card at position 3 would end up with a value of 0 under the equation 2j mod (2n), where 2n is the position of the last card.
Hence, for us to maintain the one-based indexing, our modulo operand should instead be 2n+1.
Piecing everything together, we now obtain our general (and equally elegant) equation for an in-shuffle.
final position = 2j mod (2n + 1), where j is the card's original (one-based) position
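A quick sketch checks the formula against our 6-card deck, this time with one-based positions:

```python
# In-shuffle of a 6-card deck (n = 3): final position = 2j mod (2n + 1),
# using one-based positions 1..2n.
n = 3
print({j: (2 * j) % (2 * n + 1) for j in range(1, 2 * n + 1)})
# {1: 2, 2: 4, 3: 6, 4: 1, 5: 3, 6: 5}
```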
Multiple Shuffles
Now that we have the equations to represent both in-shuffles and out-shuffles, let's find out where a specific card will be positioned after k shuffles.
At its core, it is a simple recursion problem we are solving.
Writing a simple Python script to run this recursion would be an easy task (just raise the recursion limit for large k); however, understanding the right mathematical concepts instead of brute-forcing the recursion will often give us an unparalleled performance boost.
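Something along these lines would do (a rough sketch of my own, written as a loop rather than an actual recursion to sidestep the recursion limit, and assuming out-shuffles):

```python
# Follow one card through k out-shuffles of a deck of 2n cards (zero-indexed).
def position_after(j, n, k):
    for _ in range(k):
        j = j if j == 2 * n - 1 else (2 * j) % (2 * n - 1)
    return j

print(position_after(7, 26, 8))   # 7: after 8 out-shuffles, a 52-card deck is back in order
```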
Modular Arithmetic
We can get a sense of what our script would be doing by writing out the recursion by hand:

The result of the previous operation becomes the input to the next operation, and this cycle repeats until the deck has completed k shuffles. Execution time surely wouldn't be long, but let's try to simplify this to give it a little boost.
Let’s narrow in on just 2 out-shuffles:
2 × (2j mod (2n - 1)) mod (2n - 1)
By applying the modular multiplication property of modular arithmetic, we can expand the above equation to the following form.
(2 × 2 × j) mod (2n - 1) = 2^2 j mod (2n - 1)
By working this out for k shuffles, we will arrive at an equation with a very convenient property.
final position after k out-shuffles = 2^k j mod (2n - 1)
We have just defeated recursions!
The example above made use of the out-shuffle, but since the in-shuffle only differs in terms of the modulo operand, we can simplify its equation in the same exact way.
final position after k in-shuffles = 2^k j mod (2n + 1)
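Here is a small sketch of my own comparing the two approaches on a standard 52-card deck; Python's built-in pow(2, k, m) does the modular exponentiation for us:

```python
# k single out-shuffles applied in a loop vs. the closed form 2^k * j mod (2n - 1).
n, k = 26, 8
m = 2 * n - 1

def one_out_shuffle(j):
    return j if j == 2 * n - 1 else (2 * j) % m

for j in range(2 * n - 1):                  # the fixed bottom card aside
    pos = j
    for _ in range(k):
        pos = one_out_shuffle(pos)
    assert pos == pow(2, k, m) * j % m      # pow(2, k, m) computes 2^k mod m directly
print("2^8 mod 51 =", pow(2, k, m))         # 1, so 8 out-shuffles restore all 52 cards
```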
Back to the original question…
Now that we have analyzed the faro-shuffle for quite a bit, let’s shift our focus back to the problem we came to solve.
How many faro-shuffles does it take to shuffle a deck of cards back to its original order?
We explored ways to express the 2 different types of faro-shuffles mathematically and looked at equations that helped us figure out what position a card will be at after k shuffles. So far, we have looked at the problem at the level of an individual card; the next step is to generalize it for the entire deck.
A deck of cards is back at its original order when every card is back at its own original position.
Translating that intuition into mathematical terms, we get the following:
2^k j mod (2n - 1) = j, for every position j
By further applying the same modular multiplication property of modular arithmetic, we see the first signs of a Discrete Logarithm Problem.
(2^k mod (2n - 1)) × j mod (2n - 1) = j
We see that we are now able to treat the shuffling operation independently from the card’s original position. What’s even better is that the shuffling operation models after a Discrete Logarithm Problem.
We’ve finally found it!
Focusing solely on the shuffling operation will give a slightly more condensed equation to solve:
2^k mod (2n - 1) = 1
Right now, the equation seems pretty hard to solve and brute force seems like the only viable way. I don’t want to do all that work of trial and error, so the cynic in me just decides to not shuffle the cards.
When k = 0 we satisfy the equation perfectly (2^0 = 1) without any of the tedious work.
After all, even AI algorithms sometimes conclude that not playing is the best way to win a game. Although it is technically a perfectly acceptable and logical answer, it provides zero value to the problem.
This suggests that we need to set an important constraint: k must be at least 1.
Solving DLP
We can simplify the equation further into a general Discrete Logarithm Problem. As a quick recap, the modulo operands 2n-1 and 2n+1 come from the indexing systems we chose to model the out-shuffle and the in-shuffle respectively. However, when reducing the problem at hand to a general DLP, the specific forms 2n-1 and 2n+1 carry no special mathematical importance; we can simply represent either one with a positive integer X.
2^k mod X = 1, with k ≥ 1 (where X stands in for 2n - 1 or 2n + 1)
There are many ways to solve a DLP, be it using brute force or the Baby-Step Giant-Step (BSGS) algorithm. You could also speed things up by implementing the Pohlig–Hellman algorithm.
I will not go into the details of the DLP or the various methods to solve it, as that deserves an article of its own. But for the sake of completeness, I have included a simple implementation of BSGS written in Python that I used to solve the equation.
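Here is a minimal, illustrative sketch of such a BSGS-style search for the smallest k ≥ 1 satisfying 2^k mod X = 1 (a simplified stand-in rather than the exact code behind my results; it assumes the base and the modulo are coprime, which always holds here since X is odd):

```python
from math import isqrt

def shuffles_needed(x, base=2):
    """Smallest k >= 1 with base**k % x == 1, via baby-step giant-step."""
    m = isqrt(x) + 1
    # baby steps: base**j mod x, keeping the largest j seen for each value
    baby, value = {}, 1
    for j in range(m):
        baby[value] = j
        value = value * base % x
    # giant steps: look for base**(i*m) colliding with a baby step
    giant, value = pow(base, m, x), 1
    for i in range(1, m + 1):
        value = value * giant % x
        if value in baby:
            return i * m - baby[value]
    return None   # unreachable when gcd(base, x) == 1

print(shuffles_needed(51))   # 8 out-shuffles for a 52-card deck (X = 2n - 1)
print(shuffles_needed(53))   # 52 out-shuffles for a 54-card deck
```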
Back to the graph

This line plot was generated by manually simulating the shuffle operation. Having condensed the faro shuffle down to a "simple" DLP, we can now verify whether our new approach agrees with the simulation.
I overlaid the results from solving the DLP (orange line) on top of the results from the manual simulation (blue hue).

We can see that the two align perfectly, which serves as visual confirmation that we have successfully modeled the problem as a Discrete Logarithm Problem.
Another thing you might notice from the plotted graph is that the number of shuffles needed for in-shuffles and out-shuffles seems to be identical. This goes a little against our intuition, as they are effectively still different kinds of shuffles.
When we do a comparison of the 2 line graphs, we see that there is in fact a small and interesting difference.

The red line marks the points where the same number of shuffles is needed to restore a deck of cards to its original order. What's interesting is that the line plot of the number of in-shuffles needed seems to be "lagging behind" the plot of out-shuffles. Looking closely, we can see that the number of out-shuffles needed for N cards is the same as the number of in-shuffles needed for N-2 cards, effectively making an in-shuffle "one step" behind an out-shuffle.
The key intuition arises from an important detail that we examined previously. An out-shuffle does not affect the top and bottom cards of the deck; it only changes the positions of the cards between them, whereas an in-shuffle changes the position of every card.
But how are the cards between the top and bottom cards being shuffled during an out-shuffle?
Going back to the 6-card example that we used, let's break down the out-shuffle a little further.

Ignoring the top and bottom cards, whose positions do not change, we see that the middle portion is shuffled in an oddly familiar way.

Isolating the middle portion of the deck, we realize that all of its positions change after an out-shuffle. This is reminiscent of an in-shuffle, and as we look closer at how the positions change, we see that it is in fact identical to an in-shuffle.
Every out-shuffle is essentially an in-shuffle of the cards between the top and bottom cards.
I guess it would not be inappropriate to say that an out-shuffle is fundamentally an in-shuffle with two extra cards tacked onto the ends, cards that do not move in the process.
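We can even verify this claim with a few lines of code (a quick sketch; both printed lists come out identical):

```python
# An out-shuffle of a 6-card deck acts on the 4 inner cards exactly like
# an in-shuffle of a 4-card deck.
def out_shuffle(deck):
    half = len(deck) // 2
    return [c for pair in zip(deck[:half], deck[half:]) for c in pair]

def in_shuffle(deck):
    half = len(deck) // 2
    return [c for pair in zip(deck[half:], deck[:half]) for c in pair]

deck = list(range(6))
print(out_shuffle(deck)[1:-1])   # [3, 1, 4, 2]: the inner cards after an out-shuffle
print(in_shuffle(deck[1:-1]))    # [3, 1, 4, 2]: an in-shuffle of those same 4 cards
```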
But wait…what about the peaks?

A striking detail among the graphs I have shown is the presence of "peaks" when shuffling certain numbers of cards. These "peaks" and "valleys" are the reason why it takes 52 shuffles to restore a deck of 54 cards as opposed to only 14 shuffles for a deck of 384 cards. This is unintuitive, to say the least, but perhaps looking into Euler's totient function Φ(n) can give us a starting point.
When we plot the totient function, we see the same pattern of "peaks". Another interesting point to note is that the "peaks" fall on a straight line.

_The totient function gives us the number of positive integers up to a given integer n that are relatively prime to n. The "peaks" correspond to the prime integers, since Φ(n) = n-1 when n is prime._ The straight line hence follows the shape of y = x - 1.
Perhaps prime integers have something to do with our shuffling problem.
We know that we can condense the shuffling problem into a DLP, in fact, a very specific kind of DLP.
2^k mod X = 1, with k ≥ 1
It so happens that Euler's theorem has a relatively similar form, and it gives us a better idea of how the totient function connects with our shuffling problem.
a^Φ(n) mod n = 1, whenever a and n are coprime
Let’s take a look at why 52 out-shuffles are needed for a deck of 54 cards.
2^k mod (54 - 1) = 1, i.e. 2^k mod 53 = 1
Immediately it becomes clear that, when we put in the numbers, our modulo is a prime integer. Following Euler's theorem, an exponent equal to the totient of the modulo is guaranteed to satisfy the equation: since 53 is prime, there are 53 - 1 positive integers coprime to it, so Φ(53) = 52. Strictly speaking, Euler's theorem only guarantees that 52 shuffles bring the deck back; the minimum number of shuffles is the multiplicative order of 2 mod 53, which must divide 52, and in this case it is exactly 52.
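A quick sanity check (a throwaway sketch) confirms that the smallest such exponent really is 52:

```python
# Smallest k with 2^k mod 53 = 1.
k, value = 1, 2
while value != 1:
    value = value * 2 % 53
    k += 1
print(k)   # 52
```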
This extends to in-shuffles as well: as long as the modulo is prime, Euler's theorem caps the number of shuffles needed at modulo - 1, and the "peaks" are exactly the cases where that cap is reached.
As a general concluding statement, the "peaks" therefore occur at points where the number of cards in the deck produces a prime modulo, be it for out-shuffles or in-shuffles.

We have now cracked the inner workings of the perfect shuffle
This was a fun attempt at writing my first post, and it truly helped solidify my understanding of the topic at hand. You will also be happy to know that the neon red deck of cards now occupies a permanent spot on my desk, both as a fidget toy and as a source of inspiration to spend some time digging deeper into the simple things.