I’m really excited to share this article with you because it’s closely tied to my current research: optimizing project schedules with quantum computing. My passion for this topic comes from my background as a project manager and my ongoing research into how quantum computing can help solve complex optimization problems. During my PhD, I specifically looked into how today’s quantum technology could be used to tackle complex scheduling challenges that are specific to the field of Project Management.
We’re living in an incredible era where quantum computers are no longer just a concept – they’re real and being used. Imagine what Richard Feynman would say about them! However, these machines aren’t fully ready yet. They’re often referred to as NISQ (Noisy Intermediate-Scale Quantum) devices, a term coined by quantum computing pioneer John Preskill. These machines are still considered early versions of the more advanced quantum computers we hope to see in the future.
Currently, they’re small, with only a few qubits (quantum bits of information) available. For comparison, it’s not uncommon today to have a laptop with 16GB of RAM, which equates to around 128 billion bits of information that we can work with. In contrast, the largest quantum computers have fewer than 6,000 qubits. Moreover, qubits are highly sensitive to noise, and the limited number of qubits makes it challenging to correct errors and perform complex calculations.
Despite these challenges, the quantum computers we have today are powerful enough to test out algorithms and see their potential for solving tough problems. In broad terms, there are two main types of quantum computers: universal quantum machines, like the ones from IBM and Google, which work on a circuit-gate model of computation, and quantum annealers, which use adiabatic evolution principles. (Technically speaking, there is a third type of quantum computer, measurement-based quantum machines, but these are still in early development.) In Quantum Computing, "universal" means that these machines can, in theory, run any quantum algorithm by implementing any quantum mechanical gate (including classical gates). While quantum annealers might not be universal, they’re still super promising for optimization tasks.
In this article, I’ll walk you through how you can use a quantum annealer provided by a company called D-Wave to optimize project schedules. This article is based on one of my recently published research papers [1]. If you’re interested in a deeper dive, I invite you to read my full research article, which you can access using the link down below.
Solving the resource constrained project scheduling problem with quantum annealing – Scientific…
Before we get into the coding, I’ll give you a quick intro to quantum annealing and how it’s used in optimization. Let’s get started!
Article index:
- What is quantum annealing?
- The RCPSP and its binary MILP formulation
- Installing the libraries and setting up the Colab environment
- D-wave CQM for the RCPSP
- Solving the instance and extracting the output
- Conclusions
- References
What is quantum annealing?
Quantum Annealing is a quantum metaheuristic designed to solve combinatorial optimization problems by leveraging the principles of quantum mechanics [2]. This approach is inspired by simulated annealing [4], a classical metaheuristic where a system is gradually cooled to reach a state of minimum energy. In quantum annealing, quantum phenomena such as superposition (where a quantum system can exist in multiple states simultaneously until measured, at which point it collapses into one of the possible states) and quantum tunneling are utilized to efficiently navigate through local minima, with the goal of finding the global minimum of a cost function.
Quantum annealing devices are primarily employed for optimization purposes, such as finding the minimum or maximum value of a problem. Imagine trying to find the lowest point in a rugged, hilly landscape; that lowest point is the "best" solution to the problem. Traditional methods might get stuck in small dips, mistaking them for the lowest point. In contrast, a quantum annealer leverages the properties of quantum mechanics, attempting to tunnel through these local minima and guide the system to the true lowest point, or optimal solution. The effectiveness of quantum annealing, compared to its classical counterpart, can vary depending on the specific problem and the hardware used. The literature presents both theoretical proofs [3] that highlight the advantages of quantum annealing, as well as contrasting perspectives, particularly when compared to simulated annealing.
At the core of quantum annealing is the profound concept that the universe naturally tends to seek states of minimum energy, known as ground states. Quantum annealing also relies on the "adiabatic theorem" of quantum mechanics and the evolution of quantum systems as described by the time-dependent Schrödinger equation.
Optimization problems can be represented as physical systems and formulated as energy Hamiltonian problems, which are mathematical representations of a system’s total energy. A common approach is to use the ISING model to encode the optimization problem into a physical system. In this model, the optimization problem is mapped onto a representation of the magnetic spin of each qubit, interacting in a two-dimensional lattice grid, as shown in Figure 1 below.

The mathematical representation of the energy Hamiltonian H₁ in the ISING model is given by the formula below:

H₁ = Σᵢⱼ Jᵢⱼ σᵢ σⱼ + Σᵢ Hᵢ σᵢ

Here, σᵢ is the Z-Pauli gate applied to qubit i, which takes the value +1 if the qubit is pointing upwards and -1 if it is oriented downwards. Jᵢⱼ represents the interaction coefficient between qubits i and j, while Hᵢ represents the coefficient of an external magnetic field interacting with qubit i. Fortunately, most common optimization problems can be encoded using this model, as the ISING problem is NP-Complete, as demonstrated by Istrail in 2000 [5]. In this model, each variable is binary, acting like a magnet that can only point up or down. These magnets will naturally try to orient themselves to minimize the energy of the entire grid, resulting in the system’s minimum energy. The solution to the problem is simply the configuration of the magnets after they have evolved and properly aligned themselves.
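The energy minimization at the heart of the ISING model can be made concrete with a few lines of code. The sketch below brute-forces the minimum-energy spin configuration of a tiny, made-up three-spin system (the J and h coefficients are illustrative; exhaustive enumeration is only viable for toy sizes, and finding this minimum without enumeration is exactly the annealer's job):

```python
from itertools import product

# Toy Ising instance (hypothetical coefficients): 3 spins on a line.
# J[(i, j)] couples spins i and j; h[i] is the external field on spin i.
J = {(0, 1): 1.0, (1, 2): 1.0}   # positive J favours anti-aligned neighbours
h = {0: 0.0, 1: 0.0, 2: -0.5}    # small field nudging spin 2 upwards

def ising_energy(spins, J, h):
    """Energy H = sum_ij J_ij s_i s_j + sum_i h_i s_i, with spins in {-1, +1}."""
    energy = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    energy += sum(hi * spins[i] for i, hi in h.items())
    return energy

# Exhaustively enumerate all 2^3 spin configurations and keep the lowest-energy one.
best = min(product([-1, 1], repeat=3), key=lambda s: ising_energy(s, J, h))
print(best, ising_energy(best, J, h))  # (1, -1, 1) -2.5
```

Note how the ground state anti-aligns the coupled neighbours while the small field breaks the remaining symmetry.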
You’ve likely already encountered some of the principles of this model without even realizing it. If you’ve ever played with magnets, you know that when you have two or more, they naturally try to align their poles in opposite directions to minimize the total energy of the system, as illustrated in Figure 2.

Mother Nature is a giant optimizer that tries to solve this problem on her own. As illustrated in Figure 3, if we start with a random assortment of spin particles, each oriented in a different direction, they will naturally attempt to realign themselves, seeking the minimum energy state. Quantum annealing leverages this principle. However, this process doesn’t always guarantee finding the absolute global minimum, as nature can still get stuck in local minima. That’s why we also rely on the power of the adiabatic theorem.

The adiabatic theorem, first formally stated by Max Born in 1928 (although Einstein had also explored the concept earlier), tells us that:
If a system starts in its lowest energy state and changes "slowly enough", it will remain in that minimum energy state as it evolves.
This theorem and its impact on quantum annealing can be challenging to grasp, so I’m going to use a metaphor that has worked well for me in the past. Keep in mind that, like all metaphors, it may not be entirely accurate from a formal physics perspective (it is not), but it serves as a helpful way to illustrate the concept.
Imagine we have a toddler who, in this example, represents a quantum system. Let’s get creative, like Elon Musk, and call this child Psi Ψ. Now, Psi Ψ can interact with various environments within the house where they live. These environments represent what we’ll call energy Hamiltonians. For instance, one Hamiltonian could be the living room of the house, with a comfy couch in front of the TV, as shown in Figure 4.

Another Hamiltonian could be for example the bedroom of Psi Ψ, as depicted in Figure 5.

Now, Psi Ψ can exist in various energetic states. For example, Psi Ψ could be in an excited state, let’s say they’re in the middle of a sugar rush, as shown in Figure 6.

Psi Ψ can also be in a ground state, a state of minimum energy. For example Psi Ψ might be in deep sleep as shown in Figure 7.

Now, there are certain Hamiltonians, which we’ll call H₀, where Psi Ψ will naturally evolve toward a minimum energy ground state. For instance, if we let Psi Ψ play in the living room, even if they start in an excited state, they’ll eventually tire out and fall asleep on that comfy sofa, as shown in Figure 8. (I remember a childhood friend’s house with a terrace overlooking a beautiful valley, where you could feel the fresh mountain breeze. The terrace had a sofa and was filled with chimes that sang with the wind. It was one of the most relaxing environments I’ve ever experienced, and I’m sure anyone would naturally settle into a ground state in that kind of "Hamiltonian".)

In contrast, there are other Hamiltonians – typically the ones we’re most interested in – called H₁, where it’s much more difficult for Psi Ψ to reach the ground state. For example, Psi Ψ really struggles to fall asleep when they are in their bedroom, as shown in Figure 9.

Unfortunately, what we really want is for Psi Ψ to be in the ground state of H₁ – in other words, we want Psi Ψ to be fully asleep in their bedroom. But that’s tough to achieve, so what do we do?
Quantum annealing advises us not to fight against the current. Instead of directly seeking the ground state of H₁, it suggests leveraging the power of the adiabatic theorem. Quantum annealing says, "Look, the starting point of this process should be an easy Hamiltonian, H₀". We first allow Psi Ψ to naturally evolve into a ground state because it’s easy to achieve this in H₀. So, we let Psi Ψ play in the living room, and once they are fully asleep on that comfy sofa, then – and only then – do we very gently, very softly, without disturbing their sweet dreams, begin to slowly change the Hamiltonian from H₀ to H₁. We gently pick them up from the sofa, slowly climb the stairs, and softly place them in their bed. If we do this process correctly and "slowly enough" (adiabatically), by the end of it, we should find Psi Ψ in the ground state – not of H₀, but of H₁, as shown in Figure 10. And voila 🥐, this means that we could use this technique to find the ground states of Hamiltonians of interest H₁.

Great! So, this means that we can indeed use this technique to solve complex optimization problems. All we need to do is find a way to encode the problem into an H₁ (using the ISING model) and identify a suitable H₀. Then, we gradually transition from H₀ to H₁ by following the annealing schedule outlined below.

At this point, you might be wondering, "What about H₀? How do we find it?" Fortunately, the initial Hamiltonian H₀ is easy to identify. We typically use a Hamiltonian known as the "tunneling Hamiltonian", given by Equation 4 below:

H₀ = -Σᵢ σᵢˣ

where σᵢˣ is the Pauli-X operator acting on qubit i. The ground state of this Hamiltonian is simply each qubit in a "uniform" superposition (equal probability of being 0 or 1).
Now, when it comes to optimization, we often prefer not to use the +1,−1 basis of the ISING model. Instead, we like to encode optimization problems into Hamiltonians H₁ using an equivalent isomorphic formulation of the ISING model called QUBO (Quadratic Unconstrained Binary Optimization). QUBO expresses the problem as minimizing a quadratic polynomial over binary variables that can be either 0 or 1. When we formulate a problem as a QUBO, we must represent all variables of our optimization problem as binary and include each constraint as a penalty term added to the objective function. An excellent guide on how to do this was published by Fred Glover [6].
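To see how a constraint becomes a penalty term in a QUBO, consider a minimal, made-up example: minimize x₀ + 2x₁ subject to x₀ + x₁ = 1. Following the recipe described above (and detailed in Glover's tutorial [6]), the constraint is squared, scaled by a penalty weight, and added to the objective:

```python
from itertools import product

P = 10  # penalty weight (assumption: chosen large enough to dominate the objective)

def qubo_value(x0, x1):
    # Original objective: minimise x0 + 2*x1
    objective = x0 + 2 * x1
    # Constraint x0 + x1 == 1 folded into the objective as a quadratic penalty
    penalty = P * (x0 + x1 - 1) ** 2
    return objective + penalty

# Brute-force the unconstrained quadratic over all binary assignments.
best = min(product([0, 1], repeat=2), key=lambda x: qubo_value(*x))
print(best)  # (1, 0): the cheaper of the two feasible assignments
```

Any assignment violating the constraint pays the penalty P, so for a sufficiently large P the minimum of the unconstrained quadratic coincides with the constrained optimum, here x₀ = 1, x₁ = 0.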
Once the optimization problem is in QUBO or ISING form, it must be mapped onto the quantum chip. The chip has a fixed architecture for the qubits – a topology, if you will (the current generation of D-Wave quantum annealers uses a topology known as Pegasus). Since the qubits are not fully connected, we need to be creative when mapping the problem onto the quantum chip. If, for example, a qubit i needs to interact with qubit j, but they aren’t directly connected in the machine’s topology, we need to find other qubits (or a group of them) that form a connecting path between i and j. This path is known as a "chain", and the process of mapping a problem onto the quantum chip is called minor embedding. This process consumes extra physical qubits, which are chained together to represent single logical qubits. Minor embedding is not trivial, as it is itself an NP-hard problem, but we can rely on heuristics to help find these embeddings. An example of the mapping produced from my research can be seen in Figure 12 below.

There’s one small detail about the current generation of quantum annealers: it’s been nearly impossible to recreate the ideal conditions required by the adiabatic theorem. Even minor disturbances, like those caused by cosmic rays [9], can disrupt the ground state and interfere with the annealing process. That’s why, in practice, we run the annealing process multiple times and consider all the output solutions as samples. We then collect these samples and select the one with the lowest energy, hoping that it is the optimal solution.
Now that we have a solid understanding of how quantum annealing works, let’s move on to the next section, where we’ll apply it to solve one of the most challenging scheduling problems – the Resource-Constrained Project Scheduling Problem (RCPSP).
The RCPSP and its binary MILP formulation
As a reminder, we’re tackling a project scheduling problem where we need to manage limited resources, such as workers or equipment. This problem is known as the Resource-Constrained Project Scheduling Problem (RCPSP). It’s a challenging combinatorial optimization problem, classified as NP-hard by Blazewicz et al. (1983) [7]. The project consists of many tasks, each with its own duration and resource requirements, and there’s a maximum capacity for each resource. Additionally, tasks have "precedence constraints," meaning some tasks must be completed before others can begin. (Note that there are different types of precedence constraints; the "Finish-to-Start" (FS) type is the most common.)
The main goal is to create a schedule with the minimum makespan – a plan that determines the timing of each task to complete the entire project as quickly as possible. Our schedule must respect all constraints, which means we must adhere to precedence requirements (not breaking the order of tasks) and ensure that resource usage doesn’t exceed the available capacities.
There are many different Mixed Integer Linear Programming (MILP) formulations for the RCPSP, but we need to choose one that best adapts to quantum annealing. In my research, we explored 12 of the most commonly used formulations for the RCPSP. We selected the one that consumes the fewest qubits, as qubits are a precious resource in the current stage of quantum computing development, and their availability is a limiting factor. After an extensive analysis, we determined that the most suitable formulation is the time-indexed binary formulation of Pritsker et al. (1969) [8]. For more details on how we reached this conclusion, please refer to the research article.
This is a binary formulation where we have one binary variable x_{i,t} for each activity i and starting date t. This variable equals 1 if activity i starts on day t, and 0 otherwise. The total number of variables is equal to the number of project activities (plus two extra dummy activities – one for the project ‘Start’ and one for the project ‘End’) multiplied by the number of possible start dates t. The number of possible start dates is determined by the project time horizon T, which represents the maximum possible project duration. Each task has a duration p_i and a resource consumption u_{i,k} for each resource k. Precedence constraints are provided as a list of tuples E in the form (i,j), indicating that task j cannot start until task i is finished.

The formulation above might look daunting at first sight, but it is actually quite simple.
- The objective function (1) represents the sum over all possible start dates of the dummy variables x_{n+1,t} (which encode the project’s end date), each weighted by t. We know that only one of these variables will equal 1, for a specific t. Therefore, by minimizing the sum of the products t·x_{n+1,t}, we are effectively minimizing the project duration.
- Constraint (2) ensures that each activity i has exactly one start date.
- Constraint (3) ensures that if an activity j follows another activity i, then activity j must start after the finish date of activity i, which is equal to the start date of i plus its duration p_i.
- Finally, constraint (4) guarantees that for any time period t, the schedule does not exceed the capacity R_k of any resource k.
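To convince yourself that the formulation behaves as described, here is a small brute-force sketch on a made-up three-activity instance (one dummy start, one real activity, one dummy end). It enumerates start dates directly, which is equivalent to enumerating the binary variables under constraint (2), and minimizes the start of the dummy end activity, mirroring objective (1):

```python
from itertools import product

# Made-up miniature instance: dummy start (0), one real activity (1, p=2), dummy end (2).
p = [0, 2, 0]
E = [(0, 1), (1, 2)]               # precedence pairs (i, j)
U = [[0], [1], [0]]; C = [1]       # one resource of capacity 1
T = 4                              # time horizon

def feasible(start):
    # constraint (3): activity j may only start once activity i has finished
    if any(start[j] < start[i] + p[i] for (i, j) in E):
        return False
    # constraint (4): resource usage must stay within capacity at every period
    for r in range(len(C)):
        for t in range(T + max(p)):
            used = sum(U[i][r] for i in range(len(p))
                       if start[i] <= t < start[i] + p[i])
            if used > C[r]:
                return False
    return True

# Constraint (2) (one start date per activity) holds by construction when we
# enumerate start-date tuples; objective (1) minimises the dummy end's start.
best = min((s for s in product(range(T), repeat=len(p)) if feasible(s)),
           key=lambda s: s[2])
print(best)  # (0, 0, 2): the project finishes on day 2
```

The CQM model we build later in the article expresses exactly these constraints, just symbolically over the binary variables instead of by enumeration.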
In this article, we will solve a project with 21 activities (including the two dummy ones) and two resources (k=2). The project details are provided in Table 1 below.
You can also download the instance as a text file in the .rcp format using the GitHub link provided below.
medium_rcpsp_dwave_cqm/RCPSP_19_instance.rcp at main · ceche1212/medium_rcpsp_dwave_cqm
Our project instance in the .rcp format is shown in the text file below.
21 2
10 10
0 0 0 11 2 3 4 5 6 10 16 17 18 19 20
3 1 1 3 15 14 7
3 2 8 3 13 12 11
3 5 5 2 12 11
2 7 3 1 9
2 5 6 1 8
2 0 0 1 12
3 4 4 1 11
9 4 6 1 11
4 5 4 1 11
3 5 6 1 21
3 7 5 1 21
6 9 6 1 21
4 7 0 1 21
7 6 3 1 21
2 5 3 1 21
1 3 7 1 21
1 0 6 1 21
2 6 4 1 21
5 4 8 1 21
0 0 0 0
The first line of the .rcp instance is always empty. The next line contains the number of project tasks/activities (including the dummy start and dummy end activities), followed by the number of resources k. In our project, we have two resources. The third line lists the capacity of each resource (here, both capacities equal 10). Starting with the fourth line of the text file, you’ll find data specific to each project task/activity. The first column contains the duration p_i of the activity. The next k columns provide the amount of resources u_{i,k} required by activity i for each resource k. The next column gives the number of immediate successors of the activity, and the remaining columns list those successors, i.e., the activities that have activity i as a predecessor. For example, the first activity line in the instance above tells us that activities 2, 3, 4, 5, 6, 10, 16, 17, 18, 19, and 20 (11 successors in total) require activity 1 to be completed before they can start.
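To make the layout above concrete, here is a hypothetical, minimal parser for this kind of file (a sketch of the format as just described, not the actual implementation inside cheche_pm):

```python
import os, tempfile

def parse_rcp(path):
    """Sketch parser for the RanGen-style .rcp layout described above:
    blank lines are skipped; the first data line is 'n k', the second holds
    the resource capacities, then one line per activity with its duration,
    k resource usages, the successor count, and the successor list."""
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
    n_act, n_res = int(rows[0][0]), int(rows[0][1])
    caps = [int(c) for c in rows[1]]
    durs, usage, succ = [], [], []
    for row in rows[2:2 + n_act]:
        vals = [int(v) for v in row]
        durs.append(vals[0])                       # duration p_i
        usage.append(vals[1:1 + n_res])            # u_{i,k} for each resource
        n_succ = vals[1 + n_res]                   # number of successors
        succ.append(vals[2 + n_res:2 + n_res + n_succ])
    return n_act, n_res, caps, durs, usage, succ

# Made-up miniature instance: 3 activities, 1 resource of capacity 4.
SAMPLE = "3 1\n4\n0 0 2 2 3\n5 2 1 3\n0 0 0\n"
with tempfile.NamedTemporaryFile("w", suffix=".rcp", delete=False) as f:
    f.write(SAMPLE)
    path = f.name
n_act, n_res, caps, durs, usage, succ = parse_rcp(path)
os.unlink(path)
print(n_act, caps, succ[0])  # 3 [4] [2, 3]
```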
Installing the libraries and setting up the Colab environment
In this section, we’ll walk through the steps to install and import the necessary libraries for solving our project scheduling problem, including setting up your D-Wave environment in Google Colab. We’ll also install [cheche_pm](https://pypi.org/project/cheche-pm/), a Python library I developed to simplify importing project data and constructing the required data structures for our optimization model. If you’re unfamiliar with cheche_pm, I recommend checking out another article where I’ve detailed its functionalities for a deeper understanding of its capabilities.
Efficient Project Scheduling with Python. The Critical Path Method
Let’s start first with getting your D-wave leap API token.
- Go to https://cloud.dwavesys.com/leap/login/?next=/leap/ and click on sign-up to create a new account. Please note that to create an account, you will be asked to provide the address of one of your GitHub repos.

- Once you have created your account, proceed to log in; on the lower-left side of your dashboard you will find your personal API token.

- Copy your API token and paste it into the secrets tab of Google Colab under the key name dwave_leap. We will later import this secret key into the Python session without having to reveal it, similar to the process of importing keys from a .env file.

Now that we have our D-Wave key, let’s proceed to install and import the required libraries and set up the development environment. We can achieve this by following the snippet of code below.
!pip install cheche_pm
!pip install dwave-ocean-sdk
from itertools import product
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite
from dimod import ConstrainedQuadraticModel, CQM, SampleSet
from dwave.system import LeapHybridCQMSampler
from dimod.vartypes import Vartype
from dimod import Binary, quicksum
from cheche_pm import Project
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import requests
from google.colab import userdata
We will now proceed to set up the D-Wave key using the userdata.get method of Google Colab.
endpoint = 'https://cloud.dwavesys.com/sapi'
token = userdata.get('dwave_leap')
Now that the environment is set up, let’s move on to importing the project data and generating the required data structures. We’ll use some of the functionalities of cheche_pm for this. I’ve coded a method that allows us to read RCPSP instances directly from a .rcp file. We’ll use this method to download the instance data from my GitHub repo, import it into our Colab session, and create a Project instance by reading the .rcp file.
url = 'https://raw.githubusercontent.com/ceche1212/medium_rcpsp_dwave_cqm/main/RCPSP_19_instance.rcp'
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
    # Write the content to a file
    with open("instance.rcp", "w") as file:
        file.write(response.text)
    print("File downloaded and saved as 'instance.rcp'")
else:
    print("Failed to download the file. Status code:", response.status_code)
project = Project.from_rangen_1_rcp_file('instance.rcp')
project.create_project_dict()
Now that we have the project data, let’s visualize it using a project network diagram. This diagram represents a graph G(V,S) where each node in V is a project activity and each edge in S represents a precedence constraint. We can create this diagram using the .plot_network_diagram function from cheche_pm, which leverages pydot to generate and plot the graph.
project.plot_network_diagram()

We can also generate the data structures typically needed to formulate an RCPSP instance as a MILP. The cheche_pm library includes a method called .produce_outputs_for_MILP, which outputs a dictionary containing all the necessary data structures.
n,p,S,U,C,naive_ph = project.produce_outputs_for_MILP().values()
Here, n represents the number of project activities, p is a list of activity durations, S is a list of all precedence constraints, U is a list of the resource consumption of each activity for each resource, and C is a list of resource capacities. The naive project duration, or time horizon naive_ph, is the sum of all activity durations, which is equivalent to executing the activities sequentially.
As I mentioned in the introduction of this article, qubits are a precious resource. Our formulation requires a number of variables/qubits equal to the product of the n+2 activities and the number of potential start times t, with t ranging from 0 up to the time horizon naive_ph. Currently, the naive_ph for this project instance is 65 days, meaning we’re using a very conservative upper bound. If we can find a better upper bound, we can drastically reduce the number of qubits required and make the quantum annealer’s job easier. Fortunately, there are many constructive heuristics for project scheduling that can help us establish a better upper bound for the project time horizon. The cheche_pm library includes a collection of these heuristics, which we can easily call by following the code snippet below.
heuristic = project.run_all_pl_heuristics(max_resources=C)
new_ph = heuristic['makespan']
The method generates a bar graph showing the different makespans for each heuristic tested by cheche_pm.

Using this method, we were able to reduce the project time horizon from 65 to 40 days. This reduction allows us to save 525 qubits ((65−40)×21), representing a significant decrease in qubit usage, thanks to simple heuristics that run in fractions of a second.
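To give an idea of what such a constructive heuristic looks like internally, below is a sketch of a serial schedule-generation scheme, a classic approach in which activities are placed one by one, each at its earliest precedence- and resource-feasible start date. The instance data is made up, and the code is illustrative rather than cheche_pm's actual implementation:

```python
def serial_sgs(order, durations, preds, usage, capacity):
    """Serial schedule-generation scheme (sketch): schedule each activity, in
    the given priority order, at the earliest feasible start date."""
    horizon = sum(durations)                     # worst case: fully sequential
    k = len(capacity)
    load = [[0] * horizon for _ in range(k)]     # resource usage per time period
    start = {}
    for a in order:
        # earliest start allowed by the predecessors of a
        s = max((start[b] + durations[b] for b in preds[a]), default=0)
        # push the start forward until the resource profile has room
        while any(load[r][t] + usage[a][r] > capacity[r]
                  for r in range(k) for t in range(s, s + durations[a])):
            s += 1
        for r in range(k):
            for t in range(s, s + durations[a]):
                load[r][t] += usage[a][r]
        start[a] = s
    makespan = max(start[a] + durations[a] for a in order)
    return start, makespan

# Made-up toy instance: 3 activities, one resource of capacity 2.
toy_p = [2, 2, 1]
toy_U = [[1], [2], [1]]
toy_preds = {0: [], 1: [], 2: [0]}   # activity 2 waits for activity 0
start, makespan = serial_sgs([0, 1, 2], toy_p, toy_preds, toy_U, [2])
print(start, makespan)  # {0: 0, 1: 2, 2: 4} 5
```

The makespan returned by such a scheme is always a valid upper bound on the optimal project duration, which is exactly what we need to shrink the time horizon.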
Now that we’ve optimized our project time horizon and gathered all the necessary data, let’s proceed to optimize the project schedule using the quantum annealer.
D-wave CQM for the RCPSP
D-Wave offers various methods for solving optimization problems. If your problem is already in QUBO form, you can create a Binary Quadratic Model (BQM) and solve it. That’s exactly what I did in my research article. However, in this article, I wanted to demonstrate a simplified process that allows you to use the quantum annealer with a framework and language similar to those used by other commercial tools like Pyomo, CPLEX or Gurobi.
Here, we’ll use D-Wave’s Constrained Quadratic Model (CQM), which employs a hybrid quantum-classical approach. This approach simplifies the formulation process and enables us to solve larger problem instances. With the current Advantage 6.4 quantum annealer and the CQM solver, we can tackle problems with up to 1 million binary variables. In contrast, D-Wave’s purely quantum BQM method is limited to 5,760 available qubits, making it unsuitable for the project size we’re addressing in this article. The advantage of using a BQM is that you have total control over the annealing parameters and the schedule; moreover, you can use more advanced techniques such as reverse quantum annealing, which is similar to starting the annealing evolution from a warm start.
If you’re interested in learning how to transform the RCPSP into a QUBO and solve smaller instances of the problem using the purely quantum BQM method, I cover that in detail in my research article. A link to all the BQM code used during the research is provided there, and the article is open access, so feel free to read it and download it.
To create a CQM model of our project, we simply follow the code below:
(R, J, T) = (range(len(C)), range(len(p)), range(new_ph))
# Create empty model
cqm=ConstrainedQuadraticModel()
# Create binary variables
x = {(i, t): Binary(f'x{i}_{t}') for i in J for t in T}
# Create objective function (1)
objective = quicksum(t*x[(n+1,t)] for t in T)
cqm.set_objective(objective)
# Add constraint (2)
for a in J:
    cqm.add_constraint( quicksum(x[(a,t)] for t in T) == 1 )
# Add constraint (3) Precedence constraints
for (j,s) in S:
    cqm.add_constraint( quicksum( t*x[(s,t)] - t*x[(j,t)] for t in T ) >= p[j])
# Add constraint (4) Resource capacity constraints
for (r, t) in product(R, T):
    r_c = quicksum( U[j][r]*x[(j,t2)] for j in J for t2 in range(max(0, t - p[j] + 1), t + 1))
    cqm.add_constraint( r_c <= C[r] )
print("CQM model created with number of variables = ",len(cqm.variables))
The CQM model has been created with a total of 840 variables. Now, we just need to invoke the hybrid QA sampler to solve it. First, we’ll create an empty sampler and pass in our API token and endpoint connection (defined at the start of the previous section).
cqm_sampler = LeapHybridCQMSampler(endpoint=endpoint, token=token)
With the sampler created, the final step is to perform the QA evolution and retrieve the solution samples.
problem_name = 'RCPSP_Medium_Article'
sampleset = cqm_sampler.sample_cqm(cqm,label=problem_name)
annealing_solutions = len(sampleset) # number of solutions inside sampleset
running_time = sampleset.info['run_time'] # running time reported in microseconds
print(f"Annealing stopped after {running_time/1e6:.2f} seconds. {annealing_solutions} solutions obtained")
Solving the instance and extracting the output
After 5.002 seconds, we obtained 58 samples from the hybrid quantum annealing process. Now, let’s extract the best solution and generate the output visualizations for our project.
We can create a dataframe from the results currently stored in the variable sampleset
. This dataframe will have three columns: the solution, the energy (which represents the objective function value), and a boolean variable indicating whether the solution obtained from the quantum annealer is valid (feasible) or not.
df_sampleset = sampleset.to_pandas_dataframe(sample_column=True)
df_sampleset = df_sampleset[['sample','energy','is_feasible']]
df_sampleset = df_sampleset.sort_values(by = ['is_feasible','energy'], ascending = [False,True])
Let’s also proceed to extract the best feasible solution of our problem and see if the quantum annealer was able to obtain the optimal project schedule.
try:
    feasible_sampleset = sampleset.filter(lambda row: row.is_feasible)
    best = feasible_sampleset.first
except:
    print('No Feasible solution found')

Z = best.energy      # objective value of the best solution
X_out = best.sample  # best solution
print(f"Solution obtained with a final project duration of {Z} days")
The quantum annealer successfully produced a feasible project schedule with a makespan of 39 days, which corresponds to the optimal solution for this project instance. Now that we have the solution, let’s proceed to extract it into a schedule dictionary.
SCHEDULE = dict()

for act in project.PROJECT:
    i = project.PROJECT[act]['idx']
    for t in range(new_ph):
        key = f'x{i}_{t}'
        if X_out[key] == 1:
            row = {'ES': t, 'EF': t + p[i]}
            SCHEDULE[act] = row
            print(act, row)
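Since quantum annealing is a heuristic, it is good practice to double-check the returned schedule ourselves. The sketch below validates a schedule against the precedence and capacity constraints; the data is a made-up toy instance, and the function name is illustrative:

```python
def check_schedule(schedule, durations, precedence, usage, capacity, horizon):
    """Return True if the schedule {activity: start} respects every
    precedence pair (i, j) and every resource capacity (illustrative sketch)."""
    # precedence: j may only start once i has finished
    for (i, j) in precedence:
        if schedule[j] < schedule[i] + durations[i]:
            return False
    # resources: total usage in every period must stay within capacity
    for r in range(len(capacity)):
        for t in range(horizon):
            used = sum(usage[a][r] for a, s in schedule.items()
                       if s <= t < s + durations[a])
            if used > capacity[r]:
                return False
    return True

# Made-up toy data: 3 activities, one resource of capacity 2.
toy_p = [2, 2, 1]
toy_E = [(0, 2)]              # activity 2 must wait for activity 0
toy_U = [[1], [2], [1]]
ok = check_schedule({0: 0, 1: 2, 2: 4}, toy_p, toy_E, toy_U, [2], 6)
bad = check_schedule({0: 0, 1: 0, 2: 1}, toy_p, toy_E, toy_U, [2], 6)
print(ok, bad)  # True False
```

The same function, fed with the article's p, S, U, and C structures and the extracted start dates, would confirm the annealer's schedule is feasible independently of the sampler's is_feasible flag.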
With the schedule now stored in a dictionary, we can use another functionality of cheche_pm to produce a Gantt chart by calling the .plot_date_gantt method within the library.
schedule_df = project.generate_datetime_schedule(SCHEDULE)
project.plot_date_gantt(schedule_df,plot_type = 'matplotlib')

Conclusions
In conclusion, this article has guided you through a detailed process of using D-Wave’s Constrained Quadratic Model (CQM) to schedule a resource-constrained project. I’ve explained what quantum annealing is and how it can be applied to optimization problems, all based on my own research. It’s important to remember that, unlike classical optimization methods such as branch-and-bound, quantum annealing doesn’t guarantee optimality since it’s a heuristic method. However, it’s an extremely powerful heuristic that can be applied to a wide range of optimization problems.
I also compared Gurobi and quantum annealing on the same instance we have worked with in this article, but this time using for both methods the naive time horizon. As shown in Figure 20, Gurobi took 446.9 seconds to produce the optimal solution of 39 days using the naive time horizon, while quantum annealing took only 5.1 seconds to generate a near-optimal solution of 40 days.

Even though quantum annealing didn’t achieve the optimal solution in this benchmark, the beauty of heuristics lies in their speed and flexibility. Heuristics can be combined with exact methods to reach the optimal solution more quickly. In fact, most commercial solvers run a series of heuristics before employing more rigorous methods. When the solution from quantum annealing was used as a warm start for Gurobi, the entire process only took 83.5 seconds (including the time for QA) to produce the optimal solution 🎉 .
In my opinion, the true beauty of quantum computing isn’t just about proving supremacy or demonstrating clear quantum advantages; it’s about how we can combine these quantum capabilities with classical methods to create even more powerful solutions to challenging problems. In summary, I think hybrid is the future, and that is where I am currently steering my research.
You can find the entire notebook with the code developed for this article in the link to my GitHub repository just below.
medium_rcpsp_dwave_cqm/RCPSP_DWAVE_CQM_Medium.ipynb at main · ceche1212/medium_rcpsp_dwave_cqm
I sincerely hope that this article has been enjoyable and has served as a helpful resource for anyone trying to learn more about quantum computing, and more specifically about quantum annealing. If so, I’d love to hear your thoughts! Please feel free to leave a comment or show your appreciation with a clap 👏 . And if you’re interested in staying updated on my latest articles, consider following me on Medium. Your support and feedback are what drive me to keep exploring and sharing. Thank you for taking the time to read, and stay tuned for more insights in my next article!
References
- [1] Pérez Armas, L. F., Creemers, S., & Deleplanque, S. (2024). Solving the resource constrained project scheduling problem with quantum annealing. Scientific Reports, 14(1), 16784. https://doi.org/10.1038/s41598-024-67168-6
- [2] Albash, T., & Lidar, D. A. (2018). Adiabatic quantum computation. Reviews of Modern Physics, 90(1), 015002.
- [3] Kadowaki, T., & Nishimori, H. (1998). Quantum annealing in the transverse Ising model. Physical Review E, 58(5), 5355.
- [4] Kirkpatrick, S., Gelatt Jr, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. science, 220(4598), 671–680.
- [5] Istrail, S. (2000, May). Statistical mechanics, three-dimensionality and NP-completeness: I. Universality of intracatability for the partition function of the Ising model across non-planar surfaces. In Proceedings of the thirty-second annual ACM symposium on Theory of computing (pp. 87–96).
- [6] Glover, F., Kochenberger, G., & Du, Y. (2019). Quantum Bridge Analytics I: a tutorial on formulating and using QUBO models. 4or, 17(4), 335–371.
- [7] Blazewicz, J., Lenstra, J. K., & Kan, A. R. (1983). Scheduling subject to resource constraints: classification and complexity. Discrete applied mathematics, 5(1), 11–24.
- [8] Pritsker, A. A. B., Watters, L. J., & Wolfe, P. M. (1969). Multiproject scheduling with limited resources: a zero-one programming approach. Management Science, 16(1), 93–108.
- [9] Martinis, J.M. Saving superconducting quantum processors from decay and correlated errors generated by gamma and cosmic rays. npj Quantum Inf 7, 90 (2021). https://doi.org/10.1038/s41534-021-00431-0