Whether you’re a seasoned developer or just getting started with Python, it’s important to know how to build robust and maintainable projects. This tutorial will guide you through the process of setting up a Python project using some of the most popular and effective tools in the industry. You will learn how to use GitHub and GitHub Actions for version control and continuous integration, as well as other tools for testing, documentation, packaging and distribution. The tutorial is inspired by resources such as Hypermodern Python and Best Practices for a new Python project. However, this is not the only way to do things and you might have different preferences or opinions. The tutorial is intended to be beginner-friendly while also covering some advanced topics. In each section, you will automate some tasks and add badges to your project to show your progress and achievements.
The repository for this series can be found at github.com/johschmidt42/python-project-johannes
Requirements
- OS: Linux, Unix, macOS, Windows (WSL2 with e.g. Ubuntu 20.04 LTS)
- Tools: python3.10, bash, git, tree
- Version Control System (VCS) Host: GitHub
- Continuous Integration (CI) Tool: GitHub Actions
It is expected that you are familiar with the version control system (VCS) git. If not, here’s a refresher for you: Introduction to Git
Commits will be based on best practices for git commits & Conventional Commits. There is a conventional commit plugin for PyCharm and a VSCode extension that help you write commits in this format.
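A commit message in this format could look like this (a made-up example):
feat: add Dockerfile for containerised builds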
Overview
- Part I (GitHub, IDE)
- Part II (Formatting, Linting, CI)
- Part III (Testing, CI)
- Part IV (Documentation, CI/CD)
- Part V (Versioning & Releases, CI/CD)
- Part VI (Containerisation, Docker, CI/CD)
Structure
- Containerisation
- Docker
- Dockerfile
- Docker image
- Docker container
- Docker stages (base, builder, production)
- Container registries (ghcr.io)
- Docker push
- CI (_build.yml & build_and_push.yml)
- Badge (Build)
- Bonus (trivy)
In this article, we will explore the concept of containerisation, its benefits, and how it can be used with Docker to create and manage containerised applications. We will use GitHub Actions to continuously build Docker images & upload them to our repository when a new version is released.
Containerisation
Containerisation is a modern technology that has revolutionised the way software applications are developed, deployed, and managed. It has gained widespread adoption in recent years due to its ability to solve some of the biggest challenges in software development and deployment.
In simple terms, containerisation is the process of packaging an application and all its dependencies into a single container. This container is a lightweight, portable, and self-sufficient unit that can be run consistently across different computing environments. It provides an isolated environment for the application, ensuring that it runs consistently, regardless of the underlying infrastructure. It allows developers to create applications that are scalable, portable, and easy to manage. Additionally, containers provide an extra layer of security by isolating applications from the host system. If you hear someone say "it works on my computer", that excuse no longer holds, because you can and should test your application in a Docker container. This ensures that it works consistently across different environments.
In conclusion, containerisation is a powerful technology that allows developers to create containerised applications that are reliable, efficient, and easy to manage, allowing them to focus on developing great software.
Docker
Docker is a popular containerisation platform that allows developers to create, deploy, and run containerised applications. It provides a range of tools and services that make it easy to package and deploy applications in a containerised format. With Docker, developers can create, test, and deploy applications in a matter of minutes, instead of days or weeks.
To create such a containerised application with Docker, we need to
- Build a Docker image from a Dockerfile
- Create a container from the Docker image
For this, we will use the Docker CLI.
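If Docker isn’t installed yet, follow the official installation instructions. You can verify that the CLI is available with:
> docker --version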
Dockerfile
A Dockerfile is a text file that contains all the commands needed to build a given image. It adheres to a specific format and set of instructions, which you can read about here.
The goal of this section is to create a Dockerfile that builds a wheel of our Python package:
FROM python:3.10-slim
WORKDIR /app
# install poetry
ENV POETRY_VERSION=1.2.0
RUN pip install "poetry==$POETRY_VERSION"
# copy application
COPY ["pyproject.toml", "poetry.lock", "README.md", "./"]
COPY ["src/", "src/"]
# build wheel
RUN poetry build --format wheel
# install package
RUN pip install dist/*.whl
This Dockerfile is essentially a set of instructions that tells Docker how to build a container for a Python application. It starts with the base image python:3.10-slim, a slim version of the Python 3.10 image that has already been pre-built with some basic libraries and dependencies.
The first instruction, WORKDIR /app, sets the working directory to /app inside the container, where the application will be placed.
The next instruction, ENV POETRY_VERSION=1.2.0, sets an environment variable called POETRY_VERSION to 1.2.0, which is used in the next command to install the Poetry package manager.
The RUN pip install "poetry==$POETRY_VERSION" command installs the Poetry package manager inside the container; Poetry is used to manage dependencies for Python applications.
The next instruction, COPY ["pyproject.toml", "poetry.lock", "README.md", "./"], copies the project files (pyproject.toml, poetry.lock and README.md) into the container. The README.md file is required because it is referenced in the pyproject.toml; without it we wouldn’t be able to build a wheel.
The instruction COPY ["src/", "src/"] copies the source code of the application into the container.
The RUN poetry build --format wheel command builds a Python wheel for the application, using the poetry.lock file and the source code.
Finally, the last instruction, RUN pip install dist/*.whl, uses pip to install the generated .whl package file located in the dist directory.
In summary, this Dockerfile sets up a container with Python 3.10 and Poetry installed, copies the application source code and dependencies, builds a wheel and installs it.
This will not yet run the application. But don’t worry, we will update it in the next sections. We must first understand the flow of using Docker.
Docker image
We have created a Dockerfile that contains the instructions to build a Docker image. Why do we need a Docker image again? Because it allows us to build Docker containers!
Let’s run the docker build command to create our image:
> docker build --file Dockerfile --tag project:latest .
...
=> [7/7] RUN pip install dist/*.whl 30.7s
=> exporting to image 0.5s
=> => exporting layers 0.5s
=> => writing image sha256:bb2acf440f4cf24ac00f051b1deaaefaf4e41b87aa26c34342cbb6faf6b55591 0.0s
=> => naming to docker.io/library/project:latest
This command builds a Docker image from a Dockerfile and tags it with a specified name and version. Let’s break down the command:
- docker build: the command used to build Docker images.
- --file Dockerfile: specifies the path and name of the Dockerfile used for building the image. In this case, it is simply named Dockerfile, so it’s using the default name.
- --tag project:latest: specifies the name and version of the image to be created. Here, the image name is project and its version is latest. You can replace project and latest with a name and version of your choice.
- .: specifies the build context, which is the location of the files used for building the image. In this case, . refers to the current directory where the command is executed.
So, when this command is executed, Docker reads the Dockerfile in the current directory and uses it to build a new image named project:latest. We can find additional information about the resulting image (and other images) by running:
> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
project latest bb2acf440f4c 2 minutes ago 271MB
Our image is 271 MB in size. We will reduce the size later on.
Docker container
We can create and run a Docker container from a Docker image using the docker run command. The command requires one parameter: the name of the image. For example, if your image is named myimage, you can run it with the following command: docker run myimage
If we run our application like this:
> docker run -it --rm project:latest
it will open a Python terminal (you can close the session with CTRL + D or CMD + D). The -it option runs the container in interactive mode with a pseudo-TTY (terminal emulation), which allows you to interact with the container’s shell and see its output in real time. The --rm option removes the container automatically when it exits:
Python 3.10.10 (main, Mar 23 2023, 03:59:34) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Why does it open a Python session? Because the default command of the standard python:3.10-slim image is the Python interpreter. If we want to have a look inside the container, we must override this command. Because bash is installed by default in this image, we can run the Docker container and get a shell inside it with:
> docker run -it --rm project:latest /bin/bash
root@76eb4cb2d8fb:/app#
So we override the default command with /bin/bash.
And now we can check the content that is inside our container:
app
├── README.md
├── dist
│   └── example_app-0.3.0-py3-none-any.whl
├── poetry.lock
├── pyproject.toml
└── src
    └── example_app
We can check the installed packages with
> pip freeze
...
dulwich==0.20.50
example-app @ file:///app/dist/example_app-0.3.0-py3-none-any.whl
fastapi==0.85.2
...
Great, we can jump inside a container, which is really useful for troubleshooting. But how do we make it run our application? And where is our app installed? By default, packages can be found in the site-packages directory of the Python installation. To find that information, we can use the pip show command:
> pip show example-app
Name: example-app
Version: 0.3.0
Summary:
Home-page: https://github.com/johschmidt42/python-project-johannes
Author: Johannes Schmidt
Author-email: [email protected]
License: MIT
Location: /usr/local/lib/python3.10/site-packages
Requires: fastapi, httpx, uvicorn
Required-by:
Since uvicorn, our ASGI server implementation, is installed as a dependency of our package, we can cd into /usr/local/lib/python3.10/site-packages/example_app and run the application with the uvicorn command:
> uvicorn app:app --host 0.0.0.0 --port 80 --workers 1
INFO: Started server process [17]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
where app:app follows the pattern <file_name>:<variable_name>.
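For context, the application module that uvicorn loads presumably looks something like this (a minimal sketch; the actual src/example_app/app.py may differ):
# app.py (sketch of src/example_app/app.py)
from fastapi import FastAPI

# "app" is the variable name that uvicorn looks up in app:app
app = FastAPI(title="example-app")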
The application runs on port 80 within the Docker container with 1 worker. To be accessible on the host (your machine), the container port needs to be exposed and published to the host. This can be done by adding the --expose and --publish flags to the docker run command. Alternatively, we can have the container expose a certain port by defining it in the Dockerfile. We will do this in a second. Before that, here’s what we’re going to do:
Our application lives in the site-packages directory, which requires us to change the directory before we can run the uvicorn app:app command. If we want to avoid changing the directory, we can instead create a file that imports the app for us. Here’s an example:
Add a main.py:
# main.py
from example_app.app import app

if __name__ == '__main__':
    print(app.title)
where we import the application in main.py so that uvicorn can use it. If we now copy this file to our /app directory:
# Dockerfile
...
COPY ["main.py", "./"]
...
we can run the app with
> uvicorn main:app --host 0.0.0.0 --port 80 --workers 1
INFO: Started server process [8]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
Great. Now let’s set this command as an entrypoint when starting a container.
FROM python:3.10-slim
WORKDIR /app
# install poetry
ENV POETRY_VERSION=1.2.0
RUN pip install "poetry==$POETRY_VERSION"
# copy application
COPY ["pyproject.toml", "poetry.lock", "README.md", "main.py", "./"]
COPY ["src/", "src/"]
# build wheel
RUN poetry build --format wheel
# install package
RUN pip install dist/*.whl
# expose port
EXPOSE 80
# command to run
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "1"]
We now copy the main.py file to the /app directory. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime; in this case, it exposes port 80.
The CMD instruction specifies the command to run within the container. Here, it runs uvicorn main:app --host 0.0.0.0 --port 80 --workers 1, which starts a uvicorn server with the main:app application, listening on host 0.0.0.0 and port 80, with 1 worker.
We can then run a container with the docker run command:
> docker run -p 9000:80 -it --rm project:latest
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
The -p flag in the docker run command is used to publish a container’s port to the host. In this case, it maps port 9000 on the host to port 80 in the container. This means that any traffic sent to port 9000 on the host will be forwarded to port 80 in the container.
We see that the application running in the container can now be reached from the host.
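We can double-check this from a terminal on the host, for example by requesting the interactive API docs (assuming FastAPI’s default /docs route):
> curl -i http://localhost:9000/docs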
Important remark: instead of uvicorn, I recommend using gunicorn for production builds! For completeness, this is what the Dockerfile would look like instead:
FROM python:3.10-slim
WORKDIR /app
# install poetry
ENV POETRY_VERSION=1.2.0
RUN pip install "poetry==$POETRY_VERSION"
# install gunicorn (process manager that runs uvicorn workers)
RUN pip install gunicorn==20.1.0
# copy application
COPY ["pyproject.toml", "poetry.lock", "README.md", "./"]
COPY ["src/", "src/"]
# build wheel
RUN poetry build --format wheel
# install package
RUN pip install dist/*.whl
# expose port
EXPOSE 80
# command to run
CMD ["gunicorn", "main:app", "--bind", "0.0.0.0:80", "--workers", "1", "--worker-class", "uvicorn.workers.UvicornWorker"]
What’s the difference between these two?
Uvicorn is an ASGI server that supports the ASGI protocol. It is built on uvloop and httptools and is known for its performance benefits. However, its capabilities as a process manager leave much to be desired.
Gunicorn, on the other hand, is a mature and fully-featured server and process manager. It is a pre-fork worker model ported from Ruby’s Unicorn project and is broadly compatible with various web frameworks.
Docker stages
Docker stages are a feature that allows you to create multiple stages in your Dockerfile. Each stage can have its own base image and set of instructions. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in a target stage. This feature is useful because it allows you to optimize your Docker images by reducing their size and complexity.
With Docker stages we can (and should!) optimize our Docker image. What we want to achieve is this:
- Poetry should not be in the production build
- the production build should only contain as little as necessary to run the app
And this is how we’re going to do it: we create a clean base stage. From the base stage, we derive a builder stage that installs Poetry and builds the wheel. Another stage, production, copies this artifact (the .whl file) from the builder stage and installs it. This way we avoid having Poetry installed in the production build and also limit it to the essentials, thereby reducing the size of the final image.
About poetry in Docker
There are different strategies that I’ve seen with poetry in combination with Docker.
- Creating a virtual environment and then copying the whole venv from one stage to another.
- Creating requirements.txt files from the poetry.lock file and using these to pip install the requirements.
In the first case, Poetry is installed when building the image. In the second case, Poetry is not installed within the Docker build, but it is needed beforehand to create the requirements.txt files.
In both cases, we need Poetry to be installed in some way: either in the Docker image or on the host that runs the docker build command.
Having Poetry inside Docker slightly increases the build time, while having it outside of Docker requires you to install Poetry on the host and adds extra steps to the build process (creating the requirements.txt files from poetry.lock). In the context of a Docker build CI pipeline, the Poetry installation on the host machine could be cached, making the build generally faster. Both approaches have their advantages and disadvantages, and the best approach depends on your specific needs and preferences.
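For reference, the second strategy typically relies on Poetry’s export command (a sketch; depending on your Poetry version this may require the poetry-plugin-export plugin):
> poetry export --format requirements.txt --output requirements.txt --without-hashes
A Dockerfile stage could then install the dependencies with a plain pip install -r requirements.txt, without Poetry in the image.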
For the sake of this tutorial, I will keep it simple and copy the built wheel between stages, as described above. So here’s the new Dockerfile with stages (the stages are separated by the FROM statements):
FROM python:3.10-slim as base
WORKDIR /app
# ignore 'Running pip as the root user...' warning
ENV PIP_ROOT_USER_ACTION=ignore
# update pip
RUN pip install --upgrade pip
FROM base as builder
# install poetry
ENV POETRY_VERSION=1.3.1
RUN pip install "poetry==$POETRY_VERSION"
# copy application
COPY ["pyproject.toml", "poetry.lock", "README.md", "./"]
COPY ["src/", "src/"]
# build wheel
RUN poetry build --format wheel
FROM base as production
# expose port
EXPOSE 80
# copy the wheel from the build stage
COPY --from=builder /app/dist/*.whl /app/
# install package
RUN pip install /app/*.whl
# copy entrypoint of the app
COPY ["main.py", "./"]
# command to run
CMD ["uvicorn", "main:app","--host", "0.0.0.0", "--port", "80", "--workers", "1"]
This Dockerfile defines a multi-stage build with three stages: base, builder, and production.
- The base stage starts from a Python 3.10-slim image and sets the working directory to /app. It also sets an environment variable to ignore a warning about running pip as the root user and updates pip to the latest version.
- The builder stage starts from the base stage and installs Poetry using pip. It then copies the application files and uses Poetry to build a wheel for the application.
- The production stage starts from the base stage again and exposes port 80. It copies the wheel built in the builder stage and installs it using pip. It also copies the entrypoint of the app and sets the command to run the app using uvicorn.
We can now re-build our Docker image with:
> docker build --file Dockerfile --tag project:latest --target production .
We can specify the stage we would like to build with the --target flag.
The image is now ~70 MB smaller, with a total size of 197 MB:
> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
project latest f1be09c32a55 14 minutes ago 197MB
And we can run it with:
> docker run -p 9000:80 -it --rm project:latest
The API will be available under http://localhost:9000 in the browser.
Container registries
A container registry is a repository or collection of repositories used to store and access container images. Container registries can support container-based application development, often as part of DevOps processes. They can connect directly to container orchestration platforms like Docker and Kubernetes.
The most popular container registry is Docker Hub. Every cloud provider has its own: ACR for Azure, ECR for AWS and many more. GitHub has its own package registry solution called GitHub Packages.
As we’ve done basically everything on GitHub so far, we will use GitHub Packages in this tutorial.
It has a free tier for a normal user on GitHub, which allows us to use up to 500 MB of storage for our containers. That’s enough for our application.
Docker push
The docker push command is used to upload a Docker image to a container registry. This allows you to share your images with others or deploy them to different environments. The command takes the name of the image you want to push, which includes the registry it should be pushed to. You need to be logged in to the registry before you can push an image to it.
Here are the steps to push a Docker image to a container registry:
- Tag (rename) your image with the registry name:
docker tag project:latest <registry-name>/<project>:latest
- Log in to the container registry:
docker login <registry-url>
- Push your image to the registry:
docker push <registry-name>/<project>:latest
We will push the image to GitHub Packages:
GitHub Packages
GitHub Packages only supports authentication using a personal access token (as of February 2023). But we created a personal access token (PAT) in Part V, so we can use it here as well.
We need to login to the container registry with
> CR_PAT="XYZ"
> echo $CR_PAT | docker login ghcr.io -u johschmidt42 --password-stdin
Login Succeeded
It’s a shell command that uses a pipe to connect two commands. A pipe is a symbol (|) that redirects the output of one command to the input of another command. In this case, the first command is echo $CR_PAT, which prints the value of the CR_PAT variable to the standard output. The second command is docker login ghcr.io -u johschmidt42 --password-stdin, which logs in to ghcr.io using johschmidt42 as the username and reading the password from the standard input. By using a pipe, the output of the echo command becomes the input of the docker login command, which means that the value of the CR_PAT variable is used as the password for logging in.
Let’s add this to our Makefile:
# Makefile
...
login: ## login to ghcr.io using a personal access token (PAT)
	@if [ -z "$(CR_PAT)" ]; then \
		echo "CR_PAT is not set"; \
	else \
		echo $(CR_PAT) | docker login ghcr.io -u johschmidt42 --password-stdin; \
	fi
...
We need this little if-else statement in bash so that the login target requires CR_PAT to be set first. This allows us to log in like so:
> make login CR_PAT="XYZ"
For anyone confused by the bash command, here’s an explanation:
The shell command uses an if-else statement to check a condition and execute different actions accordingly. The condition is [ -z "$(CR_PAT)" ], which means "is the CR_PAT variable empty?". The -z flag tests for zero length. The $(CR_PAT) part expands to the value of the CR_PAT Make variable inside the brackets. If the condition is true, the action after then is executed, which is echo "CR_PAT is not set". This prints a message to the standard output. If the condition is false, the action after else is executed, which is echo $(CR_PAT) | docker login ghcr.io -u johschmidt42 --password-stdin. The \ at the end of each line means that the command continues on the next line. The fi at the end marks the end of the if-else statement.
Now that we’re logged in, we need to tag the Docker image with the registry name so that we can push it to the remote registry, using the docker tag command:
> docker tag project:latest ghcr.io/johschmidt42/project:latest
# Makefile
...
tag: ## tag docker image to ghcr.io/johschmidt42/project:latest
@docker tag project:latest ghcr.io/johschmidt42/project:latest
...
We can see information about our docker images with:
> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
project latest f1be09c32a55 About an hour ago 197MB
ghcr.io/johschmidt42/project latest f1be09c32a55 About an hour ago 197MB
If we now try to push the image to the registry, it will fail:
> docker push ghcr.io/johschmidt42/project:latest
denied: permission_denied: The token provided does not match expected scopes.
# Makefile
...
push: tag ## docker push to container registry (ghcr.io)
@docker push ghcr.io/johschmidt42/project:latest
...
That’s because our token does not have the expected scopes. The error message does not tell us which scopes (permissions) are required, but we can find this information in the documentation.
So we need to add these scopes:
- write:packages (which includes read:packages)
- delete:packages
And now we see it being pushed to the container registry:
> make push
1a3ba1c1448c: Pushed
0ad139eaf32a: Pushing [========================================> ] 43.3MB/54.08MB
0e0b5d4aea1e: Pushed
a179cef7de6a: Pushing [==================================================>] 18.15MB
22f1e17dcfe4: Pushed
805fe34ec92b: Pushing [==================================================>] 12.76MB
fa04dee82d1b: Pushed
42d55226bf51: Pushing [==================================================>] 30.83MB
7d13900c8624: Pushed
650abce4b096: Pushing [==============> ] 22.72MB/80.51MB
latest: digest: sha256:57d409bb564f465541c2529e77ad05a02f09e2cc22b3c38a93967ce1b277f58a size: 2414
In GitHub, under our profile, there is now a Docker image in the packages tab.
Clicking on it allows us to connect the package to our repository.
And now this Docker image can be found on the landing page of the repo github.com/johschmidt42/python-project-johannes.
Excellent. We have created a Docker image, pushed it to the remote registry, linked it to our repository, and now everyone who wants to test our application can do so by running the docker pull command:
> docker pull ghcr.io/johschmidt42/python-project-johannes:v0.4.1
CI/CD:
CI/CD stands for Continuous Integration and Continuous Deployment. With Docker images, CI/CD can automate the process of building, testing, and deploying images. In this tutorial, we’ll focus on continuously building our Docker image and pushing it to a remote container registry (CI) whenever there’s a new version. However, we won’t be deploying the image (CD) in this tutorial (stay tuned for a future blog post). Our Docker image will be built when:
- A commit is made to a branch with an open PR
- A commit is made to the default branch (main)
- A new release is created (this will push the image to the container registry)
The first action helps us catch bugs early on. The second action enables us to create and use a badge in our README.md file. The last action creates a new version of the Docker image and pushes it to the container registry.
Let’s create the build pipeline:
This GitHub Actions workflow builds a Docker image. It is triggered when there is a push or pull request to the main branch, or when the workflow is called. The job is named "Build" and has two steps. The first step checks out the repository using the actions/checkout action. The second step builds the Docker image by running the make build command. That’s it.
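A minimal sketch of what this _build.yml could look like (file name per the structure overview above; the runner and action versions are assumptions):
# .github/workflows/_build.yml
name: Build

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_call:

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      # get the repository content
      - name: Checkout
        uses: actions/checkout@v3
      # assumes a `make build` target that wraps the docker build command
      - name: Build Docker image
        run: make build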
We also need to update the orchestrator.yml accordingly: the orchestrator is triggered when we push to the branch main.
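A sketch of the relevant part of the orchestrator (the jobs from the previous parts are omitted; exact contents may differ):
# .github/workflows/orchestrator.yml
name: Orchestrator

on:
  push:
    branches: [main]

jobs:
  # jobs from previous parts (lint, test, docs) would sit alongside this one
  build:
    uses: ./.github/workflows/_build.yml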
To build a new Docker image with every new version released in our GitHub repository, we need to create a new GitHub Actions workflow:
This is a GitHub Actions workflow that builds and pushes a Docker image to the GitHub Container Registry (ghcr.io) when a release is published. The job named "build_and_push" has three steps. The first step checks out the repository using the actions/checkout action. The second step logs in to the GitHub Container Registry using the docker/login-action. The third step builds and pushes the Docker image using the docker/build-push-action.
Please note that, in order to log in to the GitHub Container Registry using docker/login-action@v2, we need to provide the secret GH_TOKEN, which is the PAT we defined in Part V.
Here is a brief explanation of the parameters used in the last step, docker/build-push-action@v4:
- context: . specifies the build context as the current directory.
- push: true specifies that the image should be pushed to the registry after it is built.
- tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }} specifies the tag for the image. In this case, it is tagged with the name of the repository and the branch or tag name that triggered the workflow.
- labels: specifies labels for the image. In this case, it sets labels for the source, title, and version of the image.
- target: production specifies the target stage to build in a multi-stage Dockerfile.
- github-token: ${{ secrets.GH_TOKEN }} specifies the GitHub token to use for authentication.
We can now see our new Docker image on GitHub.
Badge:
For this part, we will add a badge to our repo as we’ve done before in the other parts, this time for the build pipeline. We can retrieve the badge by clicking on a build.yml workflow run, creating a status badge from the workflow file on GitHub, and selecting the main branch. The badge markdown can be copied and added to the README.md.
Our GitHub landing page now shows the new build badge.
If you want to know how this magically shows the current status of the last pipeline run on main, have a look at the commit statuses API on GitHub.
That concludes the core portion of this tutorial! We successfully created a Dockerfile and used it to build a Docker image that enables us to run our application in a Docker container. Additionally, we implemented a CI/CD pipeline that automatically builds our Docker images and pushes them to the container registry. To top it off, we added a badge to our README.md file to proudly display our functional build pipeline to the world!
That was the last part! Did this tutorial help you to build a Python project on GitHub? Any suggestions for improvement? Let me know your thoughts!
Bonus
Clean up:
Here are some useful commands for cleaning up with the Docker CLI:
To stop all containers & remove them:
> docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
To remove all unused docker images:
> docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
Vulnerability scanning in Docker images
Vulnerability scanning is a crucial step in ensuring the security of your Docker images. It helps you identify and fix potential weaknesses or risks that could compromise your application or data. One of the tools that can help you is trivy.
This open-source tool is a simple and fast vulnerability scanner for Docker images that supports multiple formats and sources. I will demonstrate how to use it locally. Ideally, you should consider creating a GitHub Actions workflow that runs whenever you build a Docker image!
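Such a workflow could use the official trivy-action. A sketch of the scanning step (the action’s inputs are documented in the aquasecurity/trivy-action repository; pinning a released version instead of master is advisable):
    steps:
      # scan the freshly built image for known vulnerabilities
      - name: Scan image with trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: project:latest
          format: table
          severity: CRITICAL,HIGH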
We first should install trivy according to the documentation. After building the production Docker image with
> docker build --file Dockerfile --tag project:latest --target production .
we can scan the built image with
> trivy image project:latest --scanners vuln --format table --severity CRITICAL,HIGH
This will download the latest known vulnerabilities from a database and scan the image. The output is shown as a table (--format table), containing only the findings that have either CRITICAL or HIGH severity (--severity CRITICAL,HIGH):
project:latest (debian 12.0)

Total: 27 (HIGH: 27, CRITICAL: 0)

Library         Vulnerability   Severity  Installed Version  Fixed Version  Title
linux-libc-dev  CVE-2013-7445   HIGH      6.1.27-1                          kernel: memory exhaustion via crafted Graphics Execution Manager (GEM) objects, https://avd.aquasec.com/nvd/cve-2013-7445
                CVE-2019-19449  HIGH      6.1.27-1                          kernel: mounting a crafted f2fs filesystem image can lead to slab-out-of-bounds read..., https://avd.aquasec.com/nvd/cve-2019-19449
                CVE-2019-19814  HIGH      6.1.27-1                          kernel: out-of-bounds write in __remove_dirty_segment in fs/f2fs/segment.c, https://avd.aquasec.com/nvd/cve-2019-19814
                CVE-2021-3847   HIGH      6.1.27-1                          low-privileged user privileges escalation, https://avd.aquasec.com/nvd/cve-2021-3847
                CVE-2021-3864   HIGH      6.1.27-1                          descendant's dumpable setting with certain SUID binaries, https://avd.aquasec.com/nvd/cve-2021-3864
                CVE-2023-1194   HIGH      6.1.27-1                          use-after-free in parse_lease_state(), https://avd.aquasec.com/nvd/cve-2023-1194
                CVE-2023-2124   HIGH      6.1.27-1           6.1.37-1       OOB access in the Linux kernel's XFS subsystem, https://avd.aquasec.com/nvd/cve-2023-2124
                CVE-2023-2156   HIGH      6.1.27-1           6.1.37-1       IPv6 RPL protocol reachable assertion leads to DoS, https://avd.aquasec.com/nvd/cve-2023-2156
                CVE-2023-2176   HIGH      6.1.27-1                          Slab-out-of-bound read in compare_netdev_and_ip, https://avd.aquasec.com/nvd/cve-2023-2176
                CVE-2023-3090   HIGH      6.1.27-1           6.1.37-1       out-of-bounds write caused by unclear skb->cb, https://avd.aquasec.com/nvd/cve-2023-3090
                CVE-2023-31248  HIGH      6.1.27-1                          use-after-free in nft_chain_lookup_byid(), https://avd.aquasec.com/nvd/cve-2023-31248
                CVE-2023-32247  HIGH      6.1.27-1           6.1.37-1       session setup memory exhaustion denial-of-service vulnerability, https://avd.aquasec.com/nvd/cve-2023-32247
                CVE-2023-32248  HIGH      6.1.27-1           6.1.37-1       tree connection NULL pointer dereference denial-of-service vulnerability, https://avd.aquasec.com/nvd/cve-2023-32248
                CVE-2023-32250  HIGH      6.1.27-1           6.1.37-1       session race condition remote code execution vulnerability, https://avd.aquasec.com/nvd/cve-2023-32250
                CVE-2023-32252  HIGH      6.1.27-1           6.1.37-1       session NULL pointer dereference denial-of-service vulnerability, https://avd.aquasec.com/nvd/cve-2023-32252
                CVE-2023-32254  HIGH      6.1.27-1           6.1.37-1       tree connection race condition remote code execution vulnerability, https://avd.aquasec.com/nvd/cve-2023-32254
                CVE-2023-32257  HIGH      6.1.27-1           6.1.37-1       session race condition remote code execution vulnerability, https://avd.aquasec.com/nvd/cve-2023-32257
                CVE-2023-32258  HIGH      6.1.27-1           6.1.37-1       session race condition remote code execution vulnerability, https://avd.aquasec.com/nvd/cve-2023-32258
                CVE-2023-3268   HIGH      6.1.27-1           6.1.37-1       out-of-bounds access in relay_file_read, https://avd.aquasec.com/nvd/cve-2023-3268
                CVE-2023-3269   HIGH      6.1.27-1           6.1.37-1       distros-[DirtyVMA] Privilege escalation via non-RCU-protected VMA traversal, https://avd.aquasec.com/nvd/cve-2023-3269
                CVE-2023-3390   HIGH      6.1.27-1           6.1.37-1       UAF in nftables when nft_set_lookup_global triggered after handling named and anonymous sets..., https://avd.aquasec.com/nvd/cve-2023-3390
                CVE-2023-3397   HIGH      6.1.27-1                          slab-use-after-free Write in txEnd due to race condition, https://avd.aquasec.com/nvd/cve-2023-3397
                CVE-2023-35001  HIGH      6.1.27-1                          stack-out-of-bounds-read in nft_byteorder_eval(), https://avd.aquasec.com/nvd/cve-2023-35001
                CVE-2023-35788  HIGH      6.1.27-1           6.1.37-1       out-of-bounds write in fl_set_geneve_opt(), https://avd.aquasec.com/nvd/cve-2023-35788
                CVE-2023-35827  HIGH      6.1.27-1                          race condition leading to use-after-free in ravb_remove(), https://avd.aquasec.com/nvd/cve-2023-35827
                CVE-2023-3640   HIGH      6.1.27-1                          a per-cpu entry area leak was identified through the init_cea_offsets function when..., https://avd.aquasec.com/nvd/cve-2023-3640
perl-base       CVE-2023-31484  HIGH      5.36.0-7                          CPAN.pm before 2.35 does not verify TLS certificates when downloading distributions over..., https://avd.aquasec.com/nvd/cve-2023-31484
There are 2 OS libraries with findings of severity HIGH. Most of the findings don’t provide a version we can upgrade to (see the Fixed Version column) in order to fix the vulnerability in our Docker image. So here’s how we’re going to deal with the two libraries:
linux-libc-dev:
This is a package that is not required for our application to run. So it’s probably best to uninstall it!
perl-base:
This OS package provides the Perl interpreter and is required by other libraries that our application uses. That means we can neither uninstall it nor fix it. Hence, we must accept the risk. Accepting known vulnerabilities should be acknowledged and approved by management. We can then add the vulnerability, e.g. CVE-2023-31484, to a .trivyignore file and run the scanner again.
Here are the changes:
# Dockerfile
...
FROM base as production
# expose port
EXPOSE 80
# copy the wheel from the build stage
COPY --from=builder /app/dist/*.whl /app/
# install package
RUN pip install /app/*.whl
# copy entrypoint of the app
COPY ["main.py", "./"]
# remove linux-libc-dev (not required by our application)
RUN apt-get remove -y --allow-remove-essential linux-libc-dev
# command to run
CMD ["uvicorn", "main:app","--host", "0.0.0.0", "--port", "80", "--workers", "1"]
# .trivyignore
# vulnerabilities to be ignored by trivy are added here
CVE-2023-31484
When we run the command again (this time including the .trivyignore file):
> trivy image project:latest --scanners vuln --format table --severity CRITICAL,HIGH --ignorefile .trivyignore
No vulnerabilities of severity HIGH or CRITICAL are reported anymore:
project:latest (debian 12.0)
Total: 0 (HIGH: 0, CRITICAL: 0)
Cheers!