When it comes to algorithms and graph theory, the first thing that comes to most people's minds is either hardcore programming or complex mathematics. Ask a non-programmer, and they will picture an algorithm as some complex computing process that only brilliant people can develop. But that picture is misleading. Over the past couple of decades, movies, TV shows, games, and stories have portrayed algorithms as something only people who are good at mathematics could understand. In reality, that's not the case. As more online courses appear on the web, on platforms such as Coursera or YouTube, more people are being exposed to algorithm development. Even so, many programmers and developers still think algorithms are reserved for the most prestigious programmers and mathematicians.
Multiple occupations in the IT industry involve algorithm development: machine learning (which is on the rise), backend development, and more. Yet even though many of these occupations are growing, there still isn't as much interest in them as there is in front-end development. The pay across these occupations is comparable; the skill set varies from company to company and role to role, but it overlaps considerably for all developers, and nearly all programmers go through some form of interview process. Programming in different sectors is not that different. Companies may not need many algorithm or machine learning developers, but the real problem is the misconception that algorithms are "complex." Because of it, far fewer developers take an interest in fields like machine learning and artificial intelligence than in sectors like front-end development.
So what makes algorithms "hard" in the first place for many people? It's not the mathematics or the programming that makes them tedious to understand; it's the conceptual behaviour of the algorithm. In other words, people have a hard time learning how an algorithm conceptually works, rather than struggling with the theoretical part where all the programming and math come in, which, ironically, is exactly what Hollywood movies show whenever they want to portray "hardcore programming."
There are many types of algorithms in the programming world, and each has its own purpose and applications. For example, you cannot swap a bubble sort in for a binary search and expect it to run at the same speed with the same efficiency in every case; they were created differently and for different purposes. However, if we broaden our spectrum and compare algorithms on a general scale based on their behaviour, we get three types: sorting, traversing, and searching. Sorting follows the same protocol as traversing, except that it also compares previous or next values as it goes, so since we are speaking generally, we will assume that sorting mimics the behaviour of a traversal algorithm (which, in some ways, it does). Hence, learning how algorithms work comes down to understanding two behaviours: how they traverse a data set while keeping certain values for reference or comparison, and how they search a data set for certain values.
If you know these two basic behaviours and understand how they work, learning any algorithm will be much easier for you. For those who have not learned them yet, don't worry. Using two graph search algorithms (yes, we will be learning basic algorithm behaviour through algorithms themselves), we will look at how they work and where they are applied. I decided to use graphs as they are the easiest to understand visually, which in the end leads to a better conceptual understanding.
The two graph search algorithms we will use for reference are breadth-first search and depth-first search. Those who have read my previous article about famous coding problems (check it out if you haven't), or who know about tree traversal, may be familiar with these two basic but efficient algorithms. They both run in linear time and can be implemented using either recursion or iteration. They both use a data structure for bookkeeping (breadth-first uses a queue and depth-first uses a stack), and most importantly, both can be used for search and traversal. You may wonder: if these two algorithms have so many similarities, what's the point of using two instead of one? They may be similar, but the way they work and behave is different, and that difference will help us understand how different types of algorithms work on a broader scale.
For those who don't know what a graph data structure is in the first place, it essentially works like a tree, but with no universal, set order of traversal unless the developer or user imposes one. Graphs are composed of nodes, or vertices, which hold data, and edges, which are connections between two vertices.
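To make this concrete, one common way to encode a directed graph in Python is an adjacency dictionary, where each vertex maps to the vertices its outgoing edges point to. The vertices A to F below are a hypothetical example of my own (not necessarily the graph in the figure), set up to have six vertices and nine edges:

```python
# A directed graph as an adjacency dictionary: each key is a vertex,
# and its value lists the vertices its outgoing edges point to.
# The vertices A-F and their edges are a made-up example.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["E"],
    "D": ["F"],
    "E": ["D", "F"],
    "F": ["A"],
}

num_vertices = len(graph)                        # 6 vertices
num_edges = sum(len(v) for v in graph.values())  # 9 edges
```

Counting the keys gives the number of vertices, and summing the lengths of the value lists gives the number of edges.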

Graphs have varying degrees of connectivity. The higher the ratio of edges to vertices, the more connected the graph. In the example above, we can state that the graph is highly connected, as the ratio of vertices to edges is 6:9, which is equivalent to 2:3.

What's cool about graphs is that you can add weights to each edge. A weighted graph is a typical graph whose edges carry a specific number, or cost, associated with travelling between two vertices. With weights, you can create efficient search and pathfinding algorithms, which we will get into later; in most applications, this cost is what lets you evaluate the most efficient path or distance between two objects.

Last but not least, the last thing you need to know about graphs for this article is the types of graphs. The two most basic types are directed and bi-directional. In a directed graph, the edges restrict the direction of movement between vertices: take a look at the graph above, and notice that each edge leads only one way. In a bi-directional graph, on the other hand, an edge allows movement both ways. For example, an edge between vertices A and B can be travelled from A to B and from B to A.

Both kinds of graphs have their own applications, but for this article, just to keep things simple and concise, we will work with directed graphs.
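A weighted directed graph can be encoded the same way, by storing a cost alongside each edge. The vertices, edges, and weights below are hypothetical, and `path_cost` is a small helper of my own (not from any library) that sums the costs along a route:

```python
# A weighted directed graph: each outgoing edge is stored as a
# (neighbour, cost) pair. All names and weights here are made up.
weighted_graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("D", 1)],
    "D": [],
}

def path_cost(graph, path):
    """Sum the edge costs along a route such as A -> C -> D."""
    total = 0
    for here, there in zip(path, path[1:]):
        total += dict(graph[here])[there]  # cost of the edge here -> there
    return total
```

For instance, the route A -> C -> D costs 2 + 1 = 3, while A -> B -> D costs 9; comparing totals like these is exactly what a pathfinding algorithm does.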
The next basic definition to set before moving on to algorithm behaviour is what the breadth-first algorithm is in more depth, and how it works on graphs. Breadth-first search is considered a brute-force algorithm: it checks the values of all neighbouring vertices before moving on to another level of depth. As vertices are searched or traversed, their children are added to a queue known as the frontier. This queue follows the FIFO structure (first in, first out) to track the current vertex and the vertices that still have unvisited neighbours.

The queue dequeues a vertex once all of that vertex's neighbours have been visited. Breadth-first search is an inefficient way to find just any path between two points, but it's a great way to compare paths between two vertices, which makes BFS helpful for figuring out directions from one place to another. As for performance, it runs in O(vertices + edges). Since the running time depends on the number of vertices and edges, and therefore on the size of the input, we consider this linear time.

Last but not least, we need to define depth-first search and how it works. Depth-first search checks the values along one path of vertices before moving laterally to another path. These algorithms are useful for determining whether a path exists between two vertices at all. To accomplish this pathfinding task, depth-first algorithms use either a stack data structure or, more commonly, recursion to keep track of the path the search is on and the current vertex.

If we have reached the end of a path, the stack keeps popping until it exposes a vertex with an unvisited edge, and the process continues until the stack is empty. The performance is the same as BFS at O(vertices + edges), which, as above, we consider linear time.

To illustrate these algorithms and their behaviour for the rest of the article, we will use two functions that implement the definitions above:
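Here is a rough sketch of what such a pair of search functions can look like in Python. The sample graph is hypothetical, and the details may differ from the code in the linked repo:

```python
from collections import deque

# Hypothetical sample graph (adjacency dictionary).
sample_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs(graph, start, target):
    """Search level by level, keeping whole paths in the frontier queue."""
    frontier = deque([[start]])      # each entry is a path, not a lone vertex
    visited = set()
    while frontier:
        path = frontier.popleft()    # FIFO: oldest path first
        vertex = path[-1]
        if vertex == target:
            return path              # first hit is a fewest-edges path
        if vertex in visited:
            continue
        visited.add(vertex)
        for neighbour in graph[vertex]:
            frontier.append(path + [neighbour])
    return None                      # no path between start and target

def dfs(graph, current, target, visited=None):
    """Follow one path as deep as possible before backtracking."""
    if visited is None:
        visited = set()
    visited.add(current)
    if current == target:
        return [current]
    for neighbour in graph[current]:
        if neighbour not in visited:
            rest = dfs(graph, neighbour, target, visited)
            if rest is not None:
                return [current] + rest
    return None                      # dead end on every branch
```

For example, `bfs(sample_graph, "A", "D")` returns the path `["A", "B", "D"]`, and both functions return `None` when no path exists (say, from "B" to "C").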
Now that we have our basic definitions set, let's start with search behaviour. Search algorithms are essentially algorithms that keep traversing through data until they find a desired value. Of course, there can be various parameters to consider, but this is the basic universal definition. Take linear search as an example: it keeps iterating through a list until it finds the desired value, and if it doesn't, it returns or prints nothing. Most graph search algorithms run in linear time because they are brute-force algorithms (meaning they are easy to implement but inefficient in performance), but some, such as binary search, run in log time. Getting a search algorithm to run in log time is one of the best things you can do as a programmer, as a pattern significant enough to exploit is usually nearly invisible to our eyes.

Let's look at how search works, visually and in code. Above is the code of both depth-first and breadth-first search for graphs. Starting with depth-first search: our algorithm keeps adding vertices to the current path (following that path, as defined above) until we have reached the desired vertex.

If we don't reach the desired vertex, we recursively call the search on the vertex's edges until we find where the vertex is "located." If there is no location at all, meaning there is no path between the two vertices, the algorithm returns None.

Notice how the algorithm traverses and evaluates different paths, yet still fails to find the vertex with a value of 0. This is because there never was a vertex with a value of 0! We can still observe the paths the algorithm tried and the order in which it traversed each depth.
Simulate your own graph search or learn more about it in-depth: https://github.com/GEEGABYTE1/GraphSearch
Now that we know how search works conceptually, we can move on to traversal. If I were to summarize traversal in one phrase, it would be: the same as searching for a value, but without an actual goal value to search for. Traversal algorithms are great for figuring something out visually or exploring future possibilities. Most traversal algorithms keep moving through a data set in different ways until they have reached the end of the data set or have met a certain condition, like traversing a path only a certain number of times. For example, consider traversal with depth-first search and breadth-first search.
We keep traversing until we have reached the end of the path, or have visited every possible edge. In breadth-first search, we know we have reached the end when our BFS queue is empty; in depth-first search, when the path is None (meaning there are no more possible edges to traverse) and the function returns None on itself, ending the recursion.
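These end conditions can be sketched in code as follows. The graph is again hypothetical, and the functions are my own simplified take on traversal rather than the repo's exact code:

```python
from collections import deque

# Hypothetical sample graph (adjacency dictionary).
demo_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs_traverse(graph, start):
    """Visit every reachable vertex, one level of depth at a time."""
    order = []
    visited = {start}
    frontier = deque([start])
    while frontier:                       # end case: the BFS queue is empty
        vertex = frontier.popleft()
        order.append(vertex)
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(neighbour)
    return order

def dfs_traverse(graph, vertex, visited=None, order=None):
    """Exhaust one path before backtracking; recursion ends at dead ends."""
    if visited is None:
        visited, order = set(), []
    visited.add(vertex)
    order.append(vertex)
    for neighbour in graph[vertex]:
        if neighbour not in visited:      # end case: no unvisited edges left
            dfs_traverse(graph, neighbour, visited, order)
    return order
```

On this graph, `bfs_traverse` visits level by level (A, B, C, D), while `dfs_traverse` exhausts a whole path before backtracking (A, B, D, C).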

Notice how the algorithm exhausts every possible edge in our graph before it moves to a new depth. Depth-first traversal follows the same printing and traversal process, but according to its own definition.

Notice how depth-first traversal, just like BFS, traverses the same paths as if it were searching for a value. However, instead of printing or returning just one path, the traversal version prints every distinct path it has been on until it reaches the end case (which, in this case, was reaching a vertex with no edges). But how did we get the different abstract paths? Believe it or not, it was pretty intuitive: after every recursion and iteration, we pop the edge of the root vertex that had been used to traverse the current path.
As the traversal went on, we kept popping the root vertex's used edges, making sure there was no repetition of paths. This could also be considered another end case: we keep popping and traversing through the graph until the root vertex is no longer part of the graph, or in other words, until there are no more edges of the root vertex left to traverse.
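The edge-popping idea can be sketched with plain backtracking: each vertex is removed from the current path once its recursive call returns, which plays the same role as popping the root vertex's used edges. The sample graph is hypothetical and acyclic; with a cycle, this naive sketch would recurse forever:

```python
# Hypothetical acyclic directed graph.
acyclic_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def all_paths(graph, vertex, path=None, paths=None):
    """Collect every path from `vertex` down to a vertex with no edges."""
    if path is None:
        path, paths = [], []
    path.append(vertex)
    if not graph[vertex]:             # end case: a vertex with no edges
        paths.append(list(path))      # record a snapshot of the current path
    else:
        for neighbour in graph[vertex]:
            all_paths(graph, neighbour, path, paths)
    path.pop()                        # backtrack: "pop" the edge we just used
    return paths
```

On this graph, `all_paths(acyclic_graph, "A")` yields `[["A", "B", "D"], ["A", "C", "D"]]`: every distinct path from the root.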
Check out the traversal algorithms more in-depth: https://github.com/GEEGABYTE1/GraphSearch-Traversal
Now, in terms of applications: there are many for traversal and search algorithms. We know that algorithms form a fundamental part of the technology we use today; they make tasks easier and more accessible for us, and much more. Think about it: breadth-first search and depth-first search alone have multiple applications, ranging from pathfinding, GPS systems, and file systems to tracking an Uber, and much more! As an example, let's look at GPS systems in more depth and see how traversal and search work using the two algorithms from before.
I have created a simulated GPS system called SkyRoute that takes a sample of Vancouver stations and landmarks, along with which station leads to which landmark. Users can input a start point and a destination, and the program outputs the stations to take as the optimal path, using both breadth-first and depth-first search.

The depth-first algorithm is used to check whether there is a station that corresponds to a landmark. Another feature of SkyRoute is that users can mark stations as under construction (check the GitHub link below to learn more). Since it would not make sense for a station under construction to still run and be an option, the program adapts: if a station is under construction, we use DFS to see whether there is another path from the start to the destination. If there is, the program returns that route as a path; otherwise, it concludes that there is no possible path. Normally, DFS will always find a path, because each landmark has at least one station. But when a station is under construction, the program removes that station from each landmark (a process equivalent to popping an edge from a vertex, as we did with the root vertex to print abstract paths), which creates the possibility that there is no path between the start point and the desired destination.
But what if no station is under construction? This is where the BFS algorithm comes in. Since our version of BFS appends each possible path as a list (of stations), we want to return the shortest one, because that's the whole point of an efficient GPS or map application nowadays. After the program runs BFS on the two points (the start and the destination), it compares the length of each path and returns the one with the shortest length, on the assumption that taking the smallest number of stations means a quick and easy route. However, just like DFS, if there is no possible route between them, the algorithm returns None, which indicates to the user that no path was found.
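That selection step can be sketched as follows. The station names are hypothetical, and `shortest_route` is a simplified stand-in for whatever SkyRoute actually does in the repo:

```python
def shortest_route(candidate_paths):
    """Pick the route with the fewest stations, or None if there are no routes."""
    if not candidate_paths:
        return None                   # no possible route between the two points
    return min(candidate_paths, key=len)

# Hypothetical candidate routes that a BFS over the station graph might produce.
routes = [
    ["Waterfront", "Burrard", "Granville"],
    ["Waterfront", "Granville"],
]
```

Here `shortest_route(routes)` picks the two-station route, and an empty list of candidates yields None.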
In terms of traversal, adding a function where users can see multiple paths between two points would be a great idea for the future of this project, as it would let users see different routes in case of collisions, weather, number of passengers, and so on. The traversal algorithms would be the same as those shown in the traversal simulation above.
And this, in essence, is how map applications and GPS systems work. Using traversal and search algorithms, GPS systems and map applications can output different paths to your destination in less than a second! For example, the only way Google Maps can output different paths based on variables such as fuel, traffic, and mode of transportation is by using traversal and search algorithms. Of course, they may not use breadth-first or depth-first search, for efficiency and performance reasons, but their algorithms run on the same logic.
Try SkyRoute out for yourself: https://github.com/GEEGABYTE1/SkyRoute
So algorithms are not so difficult to understand after all. For sure, it may take some time to learn how they work, but they aren't impossible, and most importantly, they aren't only for people who are good at math; anyone can understand them. Learning algorithms is like learning a new subject in school or taking a cooking course: you won't be as good as you hope after one day of study. It's constant practice and study that improve your knowledge and comprehension of the subject, and that's how learning data structures and algorithms works. You do need some basic background in math, and if you want to really understand algorithms, you must have the time and patience to study them (conceptually and in programming) and practice them. All you have to remember is that if one person can do it, then you can too! I myself am very fond of algorithms and data structures, so they come easily to me now. If they don't come easily to you at the moment, don't worry; I will continue to write about algorithms and how we can get better at learning them, because developing algorithms is as fun as developing a frontend.