The world’s leading publication for data science, AI, and ML professionals.

How to add a DEBUG mode for your Python Logging Mid-Run

Configuring a Debug mode with your logging on the fly

Python tips and tricks

Image created by Author | Used Free Content License elements from canva.com

Let’s face it, Python’s logging inner workings are complicated. What if you wanted to add a --debug-mode or a --verbose flag to your Python application? You’d want activating it to switch the console output or log file to the DEBUG level when your default is INFO. It’s easy when you’ve got one script, you’ll just do a:
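Something like the following, a minimal sketch of the single-script case (the function name is just for illustration):

```python
import logging

# Default configuration: the script logs at INFO.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def enable_debug_mode():
    # Single-script "debug mode": lower this script's logger to DEBUG.
    logger.setLevel(logging.DEBUG)

enable_debug_mode()
logger.debug("debug messages are now visible")
```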

That’s simple enough. But there just aren’t enough tutorials or simple explanations for doing this across modules, and the above only changes the script’s own logger. I’ll give you a framework you can apply to all your applications.

What We Want to Achieve

The criteria

Let’s lay out simply what we want. We want the following criteria:

  • Easy framework for Logging across multiple modules.
  • A way to change logging features across all modules midway through our application, whether that’s the logging level, handlers, configuration file, etc.

Anything (in most cases a user) can initiate logging debug mode anytime while the application runs, whether as an input or midway | Image created by Author | Used Free Content License elements from canva.com

There are a multitude of tutorials out there covering the basics of logging when you initiate it in ONE script. But why is it so hard for beginners to find tutorials for logging over multiple modules?

This guy agrees; thankfully, he’s written about it:

The Reusable Python Logging Template For All Your Data Science Apps

He’s on the right track, but that still doesn’t give us the option to change aspects of all logging on the fly. That requires some background.

Required Background information on Loggers

I won’t go into explaining the concept of the Python logging package; I’ll assume you have some basic knowledge of it. I wouldn’t want to reinvent the wheel, so here’s a link to the documentation:

https://docs.python.org/3/library/logging.html

And a link to a good explanation of loggers.

https://www.geeksforgeeks.org/logging-in-python/

There are, however, a couple of concepts I need to touch on that are relevant:

  • There are multiple ways to configure loggers: writing functions/classes, dictConfig, logging.basicConfig, a config YAML file, etc. The first I find the most versatile during a running application, so that is the framework I follow.
  • Whenever you call logging.getLogger(name) you create a CHILD logger. If you pass no name, you get the root logger. It is best practice to create a child logger for every module.
  • This one is super important. When a child logger does not have a logging level set (i.e. INFO, DEBUG, WARNING, etc.) it inherits its parent’s level, which in most cases, and in our case, is the ROOT logger.
Image created by Author
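That inheritance rule is easy to verify with the standard library alone; a quick sketch:

```python
import logging

root = logging.getLogger()
root.setLevel(logging.INFO)

# A child logger with no level of its own (NOTSET) walks up the
# hierarchy and uses the root's level.
child = logging.getLogger("my_app.module_1")
assert child.getEffectiveLevel() == logging.INFO

# Change the ROOT level mid-run...
root.setLevel(logging.DEBUG)
# ...and every level-less child follows immediately.
assert child.getEffectiveLevel() == logging.DEBUG
```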

So you see where I’m going here, right? We are going to initiate a child logger in each module, which will inherit its level from the root logger. We can change the root logger’s logging level midway through the application and the change will apply to all child loggers throughout.

The Framework: Steps and Explanation

The framework is really difficult to explain in an article so I’ve put the full example in a GitHub repository here:

GitHub – Causb1A/logging-debug-mode: A repository to complement the TDS article

If you’re struggling to follow the steps outlined below, debug the repository code step by step, starting from main.py. Then you’ll understand how it works.

Here’s a figure of what we are going to achieve:

Image created by Author

As seen above, we will create a main.py which initiates the application and is connected to the logger class in its own folder and its own module. Module_1 and Module_2 are both Python modules required to run your application, and module_1 uses a function from module_2. All modules use the single instantiated class called Logger(). You don’t have to use a singleton class for this purpose; a normal class or plain functions will do. I use it to keep the logging versatile for future implementations.

To initiate "debug_mode" we will have a function in module_1.py which sets the root logger to DEBUG level.

Step 1: File creations

Create the files like so:

Image created by Author

Ignore the __pycache__ folders.

  1. Have a folder called logger, and make sure there’s an __init__.py
  2. Have a folder called my_app (or your application name) and make sure there’s an __init__.py
  3. Have a logger.py inside your logger folder
  4. Create a main.py being your main run file
  5. Put your application modules within my_app or any other folder your modules will be
  6. Optional: put a tests folder and test the logger features here

Step 2: The logging class

This is located in the logger folder, in logger.py. Like I said earlier, this does not have to be a class, nor a singleton class in particular; you can get away with just defining the functions without the class. Just make sure you modify the framework accordingly.

For those asking: What is a singleton class?

A singleton class in Python is one that allows only a single instance of the class to exist. This one instance remains throughout the lifetime of the program. Imagine a class object you instantiate once: it cannot be instantiated again, and you can access and change the variables of that instance from any Python module without instantiating the class again.
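A minimal sketch of the pattern (the class name here is purely illustrative):

```python
class Config:
    """Minimal singleton: __new__ hands back the one shared instance."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Config()
b = Config()
a.debug_mode = True

print(a is b)         # True: both names point at the same instance
print(b.debug_mode)   # True: attributes set on one are visible on the other
```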

For the specifics of singleton classes and why they work, I won’t reinvent the wheel. Here’s a really good explanation:

Click here to go to GeeksforGeeks’ explanation

The reason we use it here is to have a global logger class with the features we want, so that every time we create a child logger it follows the same framework we define. I’ve left it a singleton class for future implementations: for example, it could serve as a global class which holds all the child loggers as attributes, or it could be used to redefine the log after an attribute of the class is changed. I find it much more versatile to have a global single-instance class, a centralised access point for child loggers. For the "debug mode" we speak of, it’s not strictly required.

Below is the logger.py file.
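The embedded gist doesn’t render in this text, so here is a reconstruction of the shape of logger.py. The function names match the descriptions that follow; the handler details (format string, log file name) are my assumptions and the repository may differ:

```python
import logging
import sys


class Logger:
    """Singleton wrapper that owns the root logger configuration."""

    _instance = None

    def __new__(cls):
        # Singleton: only the first call creates an instance;
        # every later call returns that same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._configure_root()
        return cls._instance

    def _configure_root(self):
        """Set the root logger to INFO and attach the shared handlers."""
        root = logging.getLogger()
        root.setLevel(logging.INFO)
        self.add_handlers(root)

    def add_handlers(self, logger):
        """Attach a console and a file handler, skipping duplicates."""
        fmt = logging.Formatter(
            "%(asctime)s | %(name)s | %(levelname)s | %(message)s"
        )
        existing = {type(h) for h in logger.handlers}
        if logging.StreamHandler not in existing:
            console = logging.StreamHandler(sys.stdout)
            console.setFormatter(fmt)
            logger.addHandler(console)
        if logging.FileHandler not in existing:
            file_handler = logging.FileHandler("app.log")  # assumed file name
            file_handler.setFormatter(fmt)
            logger.addHandler(file_handler)

    def get_logger(self, logger_name):
        """Return a child logger for a module; it inherits the root level."""
        return logging.getLogger(logger_name)

    def set_debug_mode(self, debug_mode: bool):
        """Flip the ROOT logger between DEBUG and INFO; child loggers
        have no level of their own, so they all inherit the change."""
        logging.getLogger().setLevel(
            logging.DEBUG if debug_mode else logging.INFO
        )
```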

The functions are self-explanatory with the docstrings but it’s important to note a couple of important elements.

  • def __new__(cls): This is the part that makes the class a singleton: it checks whether the instance is None. If it is, it instantiates one; otherwise, it returns the existing instance. Since this is a logger, we don’t need to be reinstantiating anyway.
  • def get_logger(self, logger_name): This is the function we will call in all the modules to initiate a child logger.
  • Handlers: here we create two handlers, a terminal output and a file handler. In future, you can use this singleton class to manage handlers too.
  • def add_handlers(): This function checks that it isn’t duplicating handlers. If it so happens that your application has to recreate a child logger, this check stops duplicate handlers appearing. If you later rework this class to accommodate more handler management, be careful not to create duplicate handlers.
  • def set_debug_mode(self, debug_mode: bool): This function sets the root logger to DEBUG level, which all child loggers then inherit.

Step 3: Don’t forget the __init__.py!

We want the logger folder to be treated as a package. So within the __init__.py of the logger folder, put:
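A single re-export is enough here (assuming the class lives in logger/logger.py as above):

```python
# logger/__init__.py: re-export the class so callers can write
# `from logger import Logger`
from .logger import Logger
```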

Step 4: The main.py

In your main.py, or whichever module runs your whole application, you must instantiate the logger class and set the root logging level to INFO. This must happen before anything else in your application runs.

See below my main.py
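The main.py gist doesn’t render in this text, so here is a sketch of its shape; the try/except stand-ins exist only so the snippet runs on its own outside the repository layout:

```python
# main.py (sketch): configure logging before anything else runs.
try:
    from logger import Logger        # the singleton package from Step 2
    from my_app import module_1      # your application code
except ImportError:                  # stand-ins so the sketch is self-contained
    import logging

    class Logger:
        def get_logger(self, name):
            logging.getLogger().setLevel(logging.INFO)  # root at INFO
            return logging.getLogger(name)

    class module_1:
        @staticmethod
        def run():
            pass

# Instantiating Logger() here sets up the root logger before any
# application code runs; later calls reuse the same instance.
logger = Logger().get_logger(__name__)

if __name__ == "__main__":
    logger.info("Application starting")
    module_1.run()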

Notice I call Logger() to create a child logger? Are we instantiating a new class every time we call Logger().get_logger? No, because the point of a singleton class is to only have one instance, so it returns the same existing instance.

Step 5: Using the logger class in any modules

To use the singleton logger class in each module, we need to import the logger class and call get_logger. Remember, logging good practice means you create a child logger for each module, so be sure to pass the module’s name to get_logger each time.

Here is an example from "module_2.py" in the GitHub repository.
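The module then amounts to little more than this sketch (the try/except stand-in is only there so the snippet runs outside the repository):

```python
# my_app/module_2.py (sketch): one child logger per module.
try:
    from logger import Logger            # the singleton from Step 2
except ImportError:                      # stand-in so the sketch runs alone
    import logging

    class Logger:
        def get_logger(self, name):
            return logging.getLogger(name)

logger = Logger().get_logger("my_app.module_2")

def does_something_module_2():
    logger.info("module_2: doing some work")
    logger.debug("module_2: extra detail, visible only in debug mode")
```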

Step 6: Changing the debug level midway through the application

And now, the part we came for: changing the root logger to debug mode midway through the application. Remember the set_debug_mode function from step 2? It changes the root logger’s level to DEBUG. Within module_1.py in the GitHub repository, I trigger this by calling that function.

See module_1.py below.
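Since the gist doesn’t render here, this sketch shows the shape of module_1.py; the line numbers discussed in the walkthrough refer to the repository file, and the try/except stand-ins exist only so the snippet runs on its own:

```python
# my_app/module_1.py (sketch): flips debug mode on mid-run.
try:
    from logger import Logger
    from my_app.module_2 import does_something_module_2
except ImportError:                      # stand-ins so the sketch runs alone
    import logging

    class Logger:
        def get_logger(self, name):
            return logging.getLogger(name)

        def set_debug_mode(self, debug_mode: bool):
            logging.getLogger().setLevel(
                logging.DEBUG if debug_mode else logging.INFO)

    def does_something_module_2():
        pass

logger = Logger().get_logger("my_app.module_1")

def does_something():
    logger.info("module_1: an info message")
    logger.debug("module_1: a debug message, hidden at INFO level")

def run():
    does_something()               # root still at INFO: debug lines hidden
    does_something_module_2()      # same story in the other module

    Logger().set_debug_mode(True)  # flip the ROOT logger to DEBUG

    does_something()               # debug lines now appear...
    does_something_module_2()      # ...in this and every other module
```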

The function does_something() is simply an example function to show you the use case.

What does this module do?

The function run() is the function that collates everything to show the use case. Let’s go through it step by step.

Line 16: This calls the function does_something(). Remember, at this point the logging is still at logging.INFO level, so in the console/log file you will see only info messages.

Line 18: This calls the function does_something_module_2() in ANOTHER module, the same module shown in step 5. Logging is still at logging.INFO level, so again you will only see info messages.

Line 23: Changes the root logger level to DEBUG. All module child loggers will inherit from this.

Line 26: The same as line 16, but this time debug mode is initiated and the logging level is logging.DEBUG. You should now see debug messages in the logging output.

Line 29: The same as line 18, but now debug mode is initiated and the level is logging.DEBUG even in ANOTHER module. So you’ve done it: you’ve initiated debug mode in one module and it’s carried forward in every other module that uses the logger class.

Here are the outputs when running the file.

Image created by Author

Step 7: Applying to your application

To apply the framework to your application, follow steps 1–6, but instead of module_1.py, module_2.py, and main.py, place the loggers in your own modules.

When changing the root level to DEBUG, you can have a function that triggers it, for example by parsing a --debug-mode argument or taking user input; whatever trigger you’d like.
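For example, a --debug-mode flag could look like this sketch; in the framework above the last statement would be Logger().set_debug_mode(args.debug_mode), but setting the root logger directly is equivalent:

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--debug-mode", action="store_true",
                    help="log at DEBUG instead of INFO")

# In a real run you'd call parser.parse_args() on sys.argv;
# the explicit list here simulates `python main.py --debug-mode`.
args = parser.parse_args(["--debug-mode"])

# Equivalent to Logger().set_debug_mode(args.debug_mode):
logging.getLogger().setLevel(
    logging.DEBUG if args.debug_mode else logging.INFO)
```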

Extra – Test functions

I’ve included some test functions in the repository under tests, just to exercise various features of the singleton class. The tests are self-explanatory.

What We’ve achieved

What we’ve achieved within the repository and across steps 1–7: we’ve applied a logger class framework to an application and changed the logging level midway through a run.

Final Words

Hopefully, this has helped you manage logging in your Python application a bit better, and opened the door to using the singleton class even more.

If you’ve enjoyed this article, please leave a clap and a follow for support!

Or if you’re interested in joining the Medium community, here’s a referral link:

Join Medium with my referral link – Adrian Causby

