
Linear programming is a powerful tool with applications in a wide range of fields, including engineering, finance, and operations research. It has been used to solve problems as diverse as scheduling airline flights and designing manufacturing processes. In this blog post, we will explore the basics of linear programming and how it can be used to solve practical problems.
Linear programming (LP) is a mathematical optimization technique. It is used to find the optimal solution to problems involving multiple variables and constraints. By representing the constraints and objective function of a problem as a system of linear equations and inequalities, linear programming algorithms can systematically search the solution space and identify solutions that maximize or minimize a given objective.
First, I’ll give a simple example. Then we’ll dive into the simplex algorithm, which works behind the scenes to find the optimal solution of LP problems quickly.
Example of a Linear Programming Problem
Suppose a farmer has 120 acres of land on which to grow two crops: wheat and corn. Wheat requires 2 acres of land per ton, and corn requires 1 acre of land per ton. The farmer wants to maximize the profit from the crops, which is €100 per ton of wheat and €150 per ton of corn. The farmer also has a limit of 50 tons of wheat and 40 tons of corn that can be produced.
To solve this problem using linear programming, we first need to identify the decision variables, which are the quantities of wheat and corn that the farmer will grow. We will use the variables w and c to represent the number of tons of wheat and corn, respectively.
Next, we need to write the objective function, which represents the profit that the farmer wants to maximize. In this case, the objective function is:
maximize Z = 100w + 150c
Then, we need to write the constraints, which represent the limits on the quantities of wheat and corn that can be produced. The constraints are:
w ≤ 50
c ≤ 40
2w + c ≤ 120
w ≥ 0, c ≥ 0
The first two constraints represent the limits on the total number of tons of wheat and corn that can be produced. The third constraint represents the limit on the total amount of land that is available.
To solve the problem, we can use the simplex algorithm or another linear programming method to find the values of w and c that maximize the objective function subject to the constraints. In this case, the optimal solution would be to grow 40 tons of wheat and 40 tons of corn, which would yield a profit of €10,000.
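To show what this looks like in practice, here is a minimal sketch (not part of the original example) that solves the same problem with SciPy’s linprog. Note that linprog minimizes by default, so we negate the profit coefficients and negate the result again to read off the maximum profit.

```python
from scipy.optimize import linprog

# Maximize 100w + 150c  ->  linprog minimizes, so negate the coefficients
objective = [-100, -150]

# Land constraint: 2w + 1c <= 120
A_ub = [[2, 1]]
b_ub = [120]

# Production limits as variable bounds: 0 <= w <= 50 and 0 <= c <= 40
bounds = [(0, 50), (0, 40)]

result = linprog(objective, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w, c = result.x
print(f"wheat = {w:.0f} tons, corn = {c:.0f} tons, profit = €{-result.fun:.0f}")
# wheat = 40 tons, corn = 40 tons, profit = €10000
```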
This is just a simple example to illustrate the basic steps of solving a linear programming problem. In practice, linear programming is used to solve much more complex problems with many more variables and constraints.
Below is a visualization of the problem:

The grey area is called the feasible region. Every point in this area represents a feasible solution. The area is bounded by the constraints.
The fundamental theorem of linear programming states that if an optimal solution exists, it occurs at one of the corners (vertices) of the feasible region. The simplex algorithm makes direct use of this fact.
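To make this concrete, the sketch below takes a brute-force approach (not the simplex algorithm): it intersects every pair of constraint boundaries, keeps the intersections that are feasible, and evaluates the profit at each corner. The corner with the highest profit is the optimum.

```python
import itertools
import numpy as np

# Each constraint boundary as a line a·x = b with x = (w, c)
lines = [
    (np.array([1.0, 0.0]), 0.0),    # w = 0
    (np.array([0.0, 1.0]), 0.0),    # c = 0
    (np.array([1.0, 0.0]), 50.0),   # w = 50
    (np.array([0.0, 1.0]), 40.0),   # c = 40
    (np.array([2.0, 1.0]), 120.0),  # 2w + c = 120
]

def feasible(x, tol=1e-9):
    w, c = x
    return (w >= -tol and c >= -tol and w <= 50 + tol
            and c <= 40 + tol and 2 * w + c <= 120 + tol)

corners = []
for (a1, b1), (a2, b2) in itertools.combinations(lines, 2):
    A = np.array([a1, a2])
    if abs(np.linalg.det(A)) < 1e-12:   # parallel boundaries never intersect
        continue
    x = np.linalg.solve(A, np.array([b1, b2]))
    if feasible(x):
        corners.append(x)

best = max(corners, key=lambda x: 100 * x[0] + 150 * x[1])
print(best, 100 * best[0] + 150 * best[1])  # [40. 40.] 10000.0
```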
The Simplex Algorithm
The simplex algorithm is a widely used method for solving linear programming problems. It was developed by George Dantzig in 1947. The intuition behind the algorithm is to ‘walk’ from corner to corner in the feasible region space in a systematic way. When the optimal solution is found, the process stops.
The simplex algorithm is an iterative process that relies on mathematical calculations and logical reasoning to find the optimal solution to a linear programming problem. It is efficient and reliable, and it is also used in mixed integer programming (on the LP relaxation, where the integrality constraints are dropped). The basic steps of the simplex algorithm are as follows:
- Convert the constraints of the linear programming problem into a system of linear equations, the standard form.
- Find a basic feasible solution by setting some of the variables to zero and solving for the remaining variables.
- Test the basic feasible solution to see if it is optimal. If it is, the algorithm is finished. If not, the algorithm selects a variable to enter the basis, and updates the basic feasible solution accordingly.
- Repeat step 3 until the optimal solution is found.
Let’s walk through the steps one by one using the wheat and corn example.
Step 1. Convert the constraints of the linear programming problem into a system of linear equations, the standard form.
The first step is rewriting the constraints and the objective in standard form:
w + s₁ = 50
c + s₂ = 40
2w + c + s₃ = 120
Z - 100w - 150c = 0 (the objective, to be maximized)
w, c, s₁, s₂, s₃ ≥ 0
In the standard form, the ‘smaller than or equal to’ signs are replaced by equality signs, and every constraint gets a slack variable (s₁, s₂, s₃). The inequalities are still respected, because the slack variables can only take non-negative values.
Often, a tableau is used to make the problem more readable. A tableau is a table in which we write down the coefficients: a column represents a variable and a row represents a constraint. The bottom row contains the coefficients of the objective, and the final column holds the values on the right-hand side of the equations. We use the standard form to create the tableau:
| Basis | w    | c    | s₁ | s₂ | s₃ | RHS |
|-------|------|------|----|----|----|-----|
| s₁    | 1    | 0    | 1  | 0  | 0  | 50  |
| s₂    | 0    | 1    | 0  | 1  | 0  | 40  |
| s₃    | 2    | 1    | 0  | 0  | 1  | 120 |
| Z     | -100 | -150 | 0  | 0  | 0  | 0   |
Step 2. Find a basic feasible solution by setting some of the variables to zero and solving for the remaining variables.
Now that we have the tableau, we can use it to find a basic feasible solution. A basic feasible solution is a solution that satisfies all the constraints and lies on a vertex (corner point) of the feasible region. We can read one off directly from the tableau.
In the tableau, there are two types of variables: basic variables and non-basic variables. The column of a basic variable has only one non-zero entry, while the column of a non-basic variable has more than one non-zero entry. The basis contains all basic variables. If we look at the current state of the tableau, the slack variables s₁, s₂ and s₃ are in the basis. The variables w and c are non-basic and set to zero. The current basic feasible solution has the following values: s₁ = 50, s₂ = 40, s₃ = 120, Z = 0, w = 0 and c = 0.
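As an illustration, the snippet below encodes the tableau as a NumPy array and reads off the basic feasible solution by checking which variable columns are unit columns; the layout (columns w, c, s₁, s₂, s₃ and RHS, with the objective in the last row) follows the table above.

```python
import numpy as np

# The tableau from above: constraint rows (s1, s2, s3) plus the objective row.
tableau = np.array([
    [1,    0,    1, 0, 0,  50],
    [0,    1,    0, 1, 0,  40],
    [2,    1,    0, 0, 1, 120],
    [-100, -150, 0, 0, 0,   0],
], dtype=float)
names = ["w", "c", "s1", "s2", "s3"]

solution = {name: 0.0 for name in names}          # non-basic variables stay at zero
for j, name in enumerate(names):
    column = tableau[:-1, j]                      # look at the constraint rows only
    if np.count_nonzero(column) == 1:             # a unit column marks a basic variable
        i = int(np.nonzero(column)[0][0])
        solution[name] = tableau[i, -1] / column[i]

print(solution)                 # {'w': 0.0, 'c': 0.0, 's1': 50.0, 's2': 40.0, 's3': 120.0}
print("Z =", tableau[-1, -1])   # Z = 0
```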

This solution corresponds to the following corner point:
(w, c) = (0, 0)
Step 3. Test the basic feasible solution to see if it is optimal. If it is, the algorithm is finished. If not, the algorithm selects a variable to enter the basis, and updates the basic feasible solution accordingly.
We can test whether the basic feasible solution is optimal by looking at the objective row of the tableau. Because this is a maximization problem, the solution can still be improved as long as there are negative coefficients in that row; once all negative values are gone, no further improvement is possible. There are negative values in the bottom row, so improvement is possible and we haven’t found the optimal solution yet.

Now, we need to select a variable to enter the basis. The entering variable is the one with the most negative coefficient in the objective row; in our case that is -150, corresponding to c. Then, we need to select the row to pivot on (the row used for Gaussian elimination). This is done by dividing the RHS values by the corresponding positive coefficients in the c column: 40/1 = 40 for the second row and 120/1 = 120 for the third row. The smallest ratio is 40, so the second row is the pivot row. Now we perform row reduction with row 2, so that c enters the basis:
| Basis | w    | c | s₁ | s₂  | s₃ | RHS  |
|-------|------|---|----|-----|----|------|
| s₁    | 1    | 0 | 1  | 0   | 0  | 50   |
| c     | 0    | 1 | 0  | 1   | 0  | 40   |
| s₃    | 2    | 0 | 0  | -1  | 1  | 80   |
| Z     | -100 | 0 | 0  | 150 | 0  | 6000 |
The new objective value is 6000, and the variables in the basis are c, s₁ and s₃, equal to 40, 50 and 80 respectively. We moved to a new corner point, the next basic feasible solution:
(w, c) = (0, 40)
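The pivot we just performed can be written down compactly. Below is a sketch of a single pivot step on the tableau: pick the most negative objective coefficient, run the ratio test over the positive entries in that column, and eliminate the pivot column from the other rows. Applied once to the initial tableau, it reproduces the numbers above.

```python
import numpy as np

def pivot_step(tableau):
    """One simplex pivot on a tableau whose last row is the objective
    and whose last column is the right-hand side."""
    tableau = tableau.astype(float).copy()
    objective = tableau[-1, :-1]
    col = int(np.argmin(objective))               # entering variable: most negative coefficient
    if objective[col] >= 0:
        return tableau, False                     # no negative coefficients left: optimal
    ratios = np.full(len(tableau) - 1, np.inf)
    for i in range(len(tableau) - 1):
        if tableau[i, col] > 0:                   # ratio test over positive entries only
            ratios[i] = tableau[i, -1] / tableau[i, col]
    row = int(np.argmin(ratios))                  # pivot row: smallest ratio
    tableau[row] /= tableau[row, col]             # scale the pivot row
    for i in range(len(tableau)):
        if i != row:
            tableau[i] -= tableau[i, col] * tableau[row]   # eliminate the pivot column
    return tableau, True

initial = np.array([
    [1,    0,    1, 0, 0,  50],
    [0,    1,    0, 1, 0,  40],
    [2,    1,    0, 0, 1, 120],
    [-100, -150, 0, 0, 0,   0],
], dtype=float)

after_first_pivot, improved = pivot_step(initial)
print(after_first_pivot)   # bottom-right entry (the objective value) is now 6000
```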
Step 4. Repeat step 3 until the optimal solution is found.
The optimal solution hasn’t been found yet, because there is still a negative value in the objective row of the tableau. So let’s repeat step 3: the entering variable is w, because it has the most negative coefficient in the objective row. The row we should use for Gaussian elimination is row 3, because 80/2 < 50/1. After row reduction, the tableau looks like this:
| Basis | w | c | s₁ | s₂   | s₃   | RHS   |
|-------|---|---|----|------|------|-------|
| s₁    | 0 | 0 | 1  | 1/2  | -1/2 | 10    |
| c     | 0 | 1 | 0  | 1    | 0    | 40    |
| w     | 1 | 0 | 0  | -1/2 | 1/2  | 40    |
| Z     | 0 | 0 | 0  | 100  | 50   | 10000 |
The variables in the basis are w, c and s₁, with corresponding values of 40, 40 and 10, respectively. The value of Z (the objective) is equal to 10,000. This is exactly the optimal solution we mentioned in the beginning. There are no negative entries in the bottom row, so this is the optimal solution. The steps we took:
From (w, c) = (0, 0) to (0, 40) to the optimum (40, 40).
This is an easy example of how the simplex algorithm works. It is used to solve more complex problems than the one we discussed here.
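For completeness, here is a compact sketch that strings the steps together and keeps pivoting until the objective row contains no negative coefficients, using the same tableau layout as above. It is for illustration only; real solvers handle degeneracy, unboundedness and numerical issues far more carefully.

```python
import numpy as np

def simplex(tableau, names):
    """Tiny tableau simplex for a maximization problem already in standard form.
    Illustration only: no handling of degenerate or unbounded problems."""
    tableau = tableau.astype(float).copy()
    while (tableau[-1, :-1] < 0).any():
        col = int(np.argmin(tableau[-1, :-1]))                 # entering variable
        ratios = [tableau[i, -1] / tableau[i, col] if tableau[i, col] > 0 else np.inf
                  for i in range(len(tableau) - 1)]
        row = int(np.argmin(ratios))                           # pivot row (ratio test)
        tableau[row] /= tableau[row, col]
        for i in range(len(tableau)):
            if i != row:
                tableau[i] -= tableau[i, col] * tableau[row]
    # Read the solution: basic variables correspond to unit columns.
    solution = {name: 0.0 for name in names}
    for j, name in enumerate(names):
        column = tableau[:-1, j]
        if np.count_nonzero(column) == 1 and np.isclose(column.max(), 1.0):
            solution[name] = tableau[int(np.argmax(column)), -1]
    return solution, tableau[-1, -1]

tableau = np.array([
    [1,    0,    1, 0, 0,  50],   # w + s1 = 50
    [0,    1,    0, 1, 0,  40],   # c + s2 = 40
    [2,    1,    0, 0, 1, 120],   # 2w + c + s3 = 120
    [-100, -150, 0, 0, 0,   0],   # Z - 100w - 150c = 0
], dtype=float)

solution, profit = simplex(tableau, ["w", "c", "s1", "s2", "s3"])
print(solution, profit)   # w = 40, c = 40, s1 = 10, profit = 10000
```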
Conclusion
Linear programming is a tool that can help individuals and organizations make the most of their resources and achieve their goals. Whether you are a business owner trying to maximize profit, a researcher seeking to optimize a complex process, or a student learning about optimization techniques, linear programming offers a wide range of applications and opportunities for growth. By understanding the principles of linear programming and the simplex algorithm, you can take the first step towards solving a variety of optimization problems and making informed decisions that can have a lasting impact.
When applying these techniques in practice, you don’t have to implement the algorithm, because optimization software does that for you.
The simplex algorithm is also applied in mixed integer programming, where relaxing the integrality constraints makes it possible to apply the algorithm. If you want to learn more, you can read the articles below.
How to Handle Optimization Problems?
Mathematical Optimization Heuristics Every Data Scientist Should Know