Lines matching "logging"
4 Logging Cookbook
9 This page contains a number of recipes related to logging, which have been found useful in the past.
13 .. currentmodule:: logging
15 Using logging in multiple modules
18 Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the same logger object.
26 import logging
30 logger = logging.getLogger('spam_application')
31 logger.setLevel(logging.DEBUG)
33 fh = logging.FileHandler('spam.log')
34 fh.setLevel(logging.DEBUG)
36 ch = logging.StreamHandler()
37 ch.setLevel(logging.ERROR)
39 formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
58 import logging
61 module_logger = logging.getLogger('spam_application.auxiliary')
65 self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
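Only fragments of the multi-module recipe appear above, so here is a minimal single-file sketch of the point being made: ``getLogger()`` with the same dotted name always hands back the same logger object, and child loggers such as ``spam_application.auxiliary`` propagate to the handlers configured on ``spam_application``. The logger names come from the matched lines; the handler choice is just an assumption to keep the sketch self-contained::

    import logging

    # Configure the application-level logger once (a StreamHandler is assumed
    # here purely so the sketch runs on its own).
    app_logger = logging.getLogger('spam_application')
    app_logger.setLevel(logging.DEBUG)
    app_logger.addHandler(logging.StreamHandler())

    # Another module asking for the same dotted name gets the same object back.
    assert logging.getLogger('spam_application') is app_logger

    # Child loggers propagate to the parent's handlers by default, so this
    # message is emitted by the handler added above.
    aux_logger = logging.getLogger('spam_application.auxiliary')
    aux_logger.debug('created an instance of Auxiliary')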
101 Logging from multiple threads
104 Logging from multiple threads requires no special effort. The following example
105 shows logging from the main (initial) thread and another thread::
107 import logging
113 logging.debug('Hi from myfunc')
117 … logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
123 logging.debug('Hello from main')
155 This shows the logging output interspersed as one might expect. This approach
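The thread example is only partially visible above; a compact, runnable reconstruction along the same lines (the sleep intervals and iteration counts are arbitrary) might look like this::

    import logging
    import threading
    import time

    def worker():
        # Log from a secondary thread; nothing special is needed on the
        # caller's side.
        for _ in range(3):
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG,
                            format='%(relativeCreated)6d %(threadName)s %(message)s')
        thread = threading.Thread(target=worker)
        thread.start()
        for _ in range(4):
            logging.debug('Hello from main')
            time.sleep(0.375)
        thread.join()

    if __name__ == '__main__':
        main()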
164 text file while simultaneously logging errors or above to the console. To set
165 this up, simply configure the appropriate handlers. The logging calls in the
169 import logging
171 logger = logging.getLogger('simple_example')
172 logger.setLevel(logging.DEBUG)
174 fh = logging.FileHandler('spam.log')
175 fh.setLevel(logging.DEBUG)
177 ch = logging.StreamHandler()
178 ch.setLevel(logging.ERROR)
180 formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
207 Logging to multiple destinations
216 import logging
218 # set up logging to file - see previous section for more details
219 logging.basicConfig(level=logging.DEBUG,
225 console = logging.StreamHandler()
226 console.setLevel(logging.INFO)
228 formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
232 logging.getLogger('').addHandler(console)
235 logging.info('Jackdaws love my big sphinx of quartz.')
240 logger1 = logging.getLogger('myapp.area1')
241 logger2 = logging.getLogger('myapp.area2')
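Putting the matched fragments back together, the "multiple destinations" idea is: route everything to a file via ``basicConfig()``, then add a second, more selective handler to the root logger for the console. A runnable sketch (the file name is a placeholder)::

    import logging

    # Everything at DEBUG and above goes to the file ...
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        filename='myapp.log',
                        filemode='w')

    # ... while the console handler attached to the root logger only shows
    # INFO and above.
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    console.setFormatter(logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s'))
    logging.getLogger('').addHandler(console)

    logging.info('Jackdaws love my big sphinx of quartz.')

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')
    logger1.debug('Quick zephyrs blow, vexing daft Jim.')            # file only
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')  # file and console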
293 Suppose you configure logging with the following JSON:
307 "class": "logging.StreamHandler",
313 "class": "logging.StreamHandler",
319 "class": "logging.FileHandler",
358 "class": "logging.StreamHandler",
372 level = getattr(logging, level)
392 import logging
393 import logging.config
412 "class": "logging.StreamHandler",
419 "class": "logging.StreamHandler",
425 "class": "logging.FileHandler",
443 level = getattr(logging, level)
450 logging.config.dictConfig(json.loads(CONFIG))
451 logging.debug('A DEBUG message')
452 logging.info('An INFO message')
453 logging.warning('A WARNING message')
454 logging.error('An ERROR message')
455 logging.critical('A CRITICAL message')
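The JSON configuration itself is mostly elided above. As a self-contained stand-in, the following sketch loads a pared-down configuration (one console handler on the root logger; the handler and formatter names are placeholders) and then issues the same five calls::

    import json
    import logging
    import logging.config

    CONFIG = '''
    {
        "version": 1,
        "disable_existing_loggers": false,
        "formatters": {
            "simple": {"format": "%(levelname)-8s %(name)s %(message)s"}
        },
        "handlers": {
            "stderr": {
                "class": "logging.StreamHandler",
                "level": "DEBUG",
                "formatter": "simple"
            }
        },
        "root": {"level": "DEBUG", "handlers": ["stderr"]}
    }
    '''

    logging.config.dictConfig(json.loads(CONFIG))
    logging.debug('A DEBUG message')
    logging.info('An INFO message')
    logging.warning('A WARNING message')
    logging.error('An ERROR message')
    logging.critical('A CRITICAL message')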
491 Here is an example of a module using the logging configuration server::
493 import logging
494 import logging.config
499 logging.config.fileConfig('logging.conf')
502 t = logging.config.listen(9999)
505 logger = logging.getLogger('simpleExample')
508 # loop through logging calls to see the difference
519 logging.config.stopListening()
523 properly preceded with the binary-encoded length, as the new logging
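For reference, a complete version of the listener-based module sketched above (minus the initial ``fileConfig()`` call, since no ``logging.conf`` is shown here) can be as small as this; 9999 is just the port used in the matched lines::

    import logging
    import logging.config
    import time

    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)s %(levelname)s %(message)s')

    # Start the thread that accepts new configurations over a socket.
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')
    try:
        # Keep logging so that a configuration sent to port 9999 (length-prefixed,
        # as noted above) can be seen to take effect.
        while True:
            logger.debug('debug message')
            logger.info('info message')
            time.sleep(5)
    except KeyboardInterrupt:
        pass
    finally:
        logging.config.stopListening()
        t.join()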
549 .. currentmodule:: logging.handlers
551 Sometimes you have to get your logging handlers to do their work without
552 blocking the thread you're logging from. This is common in web applications,
591 handler = logging.StreamHandler()
593 root = logging.getLogger()
595 formatter = logging.Formatter('%(threadName)s: %(message)s')
612 async code, but rather about slow logging handlers, it should be noted that
613 when logging from async code, network and even file handlers could lead to
614 problems (blocking the event loop) because some logging is done from
616 application, to use the above approach for logging, so that any blocking code
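The non-blocking recipe boils down to this: attach only a ``QueueHandler`` to the logger doing the work, and let a ``QueueListener`` drive the potentially slow handler from its own thread. A minimal sketch::

    import logging
    import logging.handlers
    import queue

    que = queue.Queue(-1)  # unbounded

    # The thread calling logger methods only pays for an enqueue ...
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(logging.handlers.QueueHandler(que))

    # ... while the (possibly slow) handler runs in the listener's own thread.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(threadName)s: %(message)s'))
    listener = logging.handlers.QueueListener(que, handler)

    listener.start()
    root.warning('Look out!')   # still reports MainThread as the origin
    listener.stop()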
629 .. _network-logging:
631 Sending and receiving logging events across a network
634 Let's say you want to send logging events across a network, and handle them at
638 import logging, logging.handlers
640 rootLogger = logging.getLogger('')
641 rootLogger.setLevel(logging.DEBUG)
642 socketHandler = logging.handlers.SocketHandler('localhost',
643 logging.handlers.DEFAULT_TCP_LOGGING_PORT)
649 logging.info('Jackdaws love my big sphinx of quartz.')
654 logger1 = logging.getLogger('myapp.area1')
655 logger2 = logging.getLogger('myapp.area2')
666 import logging
667 import logging.handlers
673 """Handler for a streaming logging request.
675 This basically logs the record using whatever logging policy is
694 record = logging.makeLogRecord(obj)
707 logger = logging.getLogger(name)
716 Simple TCP socket-based logging receiver suitable for testing.
722 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
741 logging.basicConfig(
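On the sending side, the whole network recipe amounts to adding a ``SocketHandler`` to the root logger; a receiver such as the one excerpted above must be listening on the same port for the events to be seen. A sketch of the client::

    import logging
    import logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    # DEFAULT_TCP_LOGGING_PORT is 9020; it must match the receiver.
    socketHandler = logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    rootLogger.addHandler(socketHandler)

    # No formatter is needed here: the handler pickles the whole LogRecord and
    # the receiving end decides how to format it.
    logging.info('Jackdaws love my big sphinx of quartz.')
    logging.getLogger('myapp.area1').debug('Quick zephyrs blow, vexing daft Jim.')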
769 Running a logging socket listener in production
774 To run a logging listener in production, you may need to use a
796 | :file:`main.py` | A simple web application which performs logging |
840 Adding contextual information to your logging output
843 Sometimes you want logging output to contain contextual information in
844 addition to the parameters passed to the logging call. For example, in a
852 level of granularity you want to use in logging an application, it could
861 with logging event information is to use the :class:`LoggerAdapter` class.
870 information. When you call one of the logging methods on an instance of
885 contextual information is added to the logging output. It's passed the message
886 and keyword arguments of the logging call, and it passes back (potentially)
901 class CustomAdapter(logging.LoggerAdapter):
911 logger = logging.getLogger(__name__)
922 that it looks like a dict to logging. This would be useful if you want to
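As a concrete illustration of the ``LoggerAdapter`` approach, here is a short sketch in the spirit of the ``CustomAdapter`` above; the ``connid`` key and its value are invented for the example::

    import logging

    class CustomAdapter(logging.LoggerAdapter):
        def process(self, msg, kwargs):
            # Prepend the connection id stored in the adapter's 'extra' dict;
            # kwargs is passed through unchanged.
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

    logging.basicConfig(level=logging.DEBUG, format='%(message)s')
    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': 'abc123'})
    adapter.info('database connection opened')   # -> [abc123] database connection opened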
945 import logging
948 class ContextFilter(logging.Filter):
966 levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
967 logging.basicConfig(level=logging.DEBUG,
969 a1 = logging.getLogger('a.b.c')
970 a2 = logging.getLogger('d.e.f')
979 lvlname = logging.getLevelName(lvl)
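Reassembled into a runnable form, the filter-based variant looks roughly like this; the IP addresses and user names are, as in the original recipe, just simulated values::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """Inject simulated contextual information into each record."""
        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(self.IPS)
            record.user = choice(self.USERS)
            return True

    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s '
                               'IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1 = logging.getLogger('a.b.c')
    a1.addFilter(ContextFilter())
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')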
1011 logging messages from the library (and other request processing code) are directed to
1020 import logging
1023 logger = logging.getLogger(__name__)
1040 import logging
1046 logger = logging.getLogger(__name__)
1047 root = logging.getLogger()
1048 root.setLevel(logging.DEBUG)
1077 …formatter = logging.Formatter('%(threadName)-11s %(appName)s %(name)-9s %(user)-6s %(ip)s %(method…
1080 # processing, and used in the logging that happens during that processing
1085 class InjectingFilter(logging.Filter):
1108 handler = logging.FileHandler(name + '.log', 'w')
1148 handler = logging.FileHandler('app.log', 'w')
1184 ~/logging-contextual-webapp$ python main.py
1187 ~/logging-contextual-webapp$ wc -l *.log
1192 ~/logging-contextual-webapp$ head -3 app1.log
1196 ~/logging-contextual-webapp$ head -3 app2.log
1200 ~/logging-contextual-webapp$ head app.log
1211 ~/logging-contextual-webapp$ grep app1 app1.log | wc -l
1213 ~/logging-contextual-webapp$ grep app2 app2.log | wc -l
1215 ~/logging-contextual-webapp$ grep app1 app.log | wc -l
1217 ~/logging-contextual-webapp$ grep app2 app.log | wc -l
1230 import logging
1232 def filter(record: logging.LogRecord):
1238 logger = logging.getLogger()
1239 logger.setLevel(logging.INFO)
1240 handler = logging.StreamHandler()
1241 formatter = logging.Formatter('%(message)s from %(user)-8s')
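A version-safe sketch of the idea in the fragment above: a plain callable attached to the handler as a filter fills in the ``user`` attribute that ``%(user)-8s`` refers to (hard-coded here to ``'jim'``; a real application would look it up). From Python 3.12 onwards a filter may instead return a modified copy of the record, leaving the original untouched::

    import logging

    def add_user(record: logging.LogRecord):
        # Mutate the record in place and let it through; works on any Python 3.x.
        # (On 3.12+ you could return copy.copy(record) with the attribute set.)
        record.user = 'jim'
        return True

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(message)s from %(user)-8s'))
    handler.addFilter(add_user)
    logger.addHandler(handler)
    logger.info('A log message')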
1250 Logging to a single file from multiple processes
1253 Although logging is thread-safe, and logging to a single file from multiple
1254 threads in a single process *is* supported, logging to a single file from
1262 :ref:`This section <network-logging>` documents this approach in more detail and
1274 .. currentmodule:: logging.handlers
1277 all logging events to one of the processes in your multi-process application.
1280 them according to its own logging configuration. Although the example only
1283 analogous) it does allow for completely different logging configurations for
1288 import logging
1289 import logging.handlers
1297 # Because you'll want to define the logging configurations for listener and workers, the
1299 # for configuring logging for that process. These functions are also passed the queue,
1309 root = logging.getLogger()
1310 h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
1311 f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
1315 # This is the listener process top-level loop: wait for logging events
1325 logger = logging.getLogger(record.name)
1334 LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
1335 logging.ERROR, logging.CRITICAL]
1347 # will run the logging configuration code when it starts.
1349 h = logging.handlers.QueueHandler(queue) # Just the one handler needed
1350 root = logging.getLogger()
1353 root.setLevel(logging.DEBUG)
1364 logger = logging.getLogger(choice(LOGGERS))
1392 A variant of the above script keeps the logging in the main process, in a
1395 import logging
1396 import logging.config
1397 import logging.handlers
1408 logger = logging.getLogger(record.name)
1413 qh = logging.handlers.QueueHandler(q)
1414 root = logging.getLogger()
1415 root.setLevel(logging.DEBUG)
1417 levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
1418 logging.CRITICAL]
1423 logger = logging.getLogger(random.choice(loggers))
1432 'class': 'logging.Formatter',
1438 'class': 'logging.StreamHandler',
1442 'class': 'logging.FileHandler',
1448 'class': 'logging.FileHandler',
1454 'class': 'logging.FileHandler',
1476 logging.config.dictConfig(d)
1483 # And now tell the logging thread to finish up, too
1489 ``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the logging
1490 machinery in the main process (even though the logging events are generated in
1536 `Running a logging socket listener in production`_ for more details.
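The multiprocessing recipes above are heavily elided, so here is a compact sketch of the central pattern they share: worker processes get nothing but a ``QueueHandler``, and a single listener (here a ``QueueListener`` thread in the main process) owns the real handlers. The worker count and format are arbitrary::

    import logging
    import logging.handlers
    import multiprocessing

    def worker_process(q):
        # Workers send every record to the queue and know nothing about files,
        # formats or levels of the final output.
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(logging.handlers.QueueHandler(q))
        logging.getLogger('worker').info('hello from a worker')

    def main():
        q = multiprocessing.Queue()
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            '%(processName)-12s %(name)s %(levelname)-8s %(message)s'))
        listener = logging.handlers.QueueListener(q, handler)
        listener.start()
        workers = [multiprocessing.Process(target=worker_process, args=(q,))
                   for _ in range(3)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        listener.stop()

    if __name__ == '__main__':
        main()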
1543 .. (see <https://pymotw.com/3/logging/>)
1549 logging package provides a :class:`~handlers.RotatingFileHandler`::
1552 import logging
1553 import logging.handlers
1558 my_logger = logging.getLogger('MyLogger')
1559 my_logger.setLevel(logging.DEBUG)
1562 handler = logging.handlers.RotatingFileHandler(
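Filled out, the rotation example is essentially this; the tiny ``maxBytes`` value only makes sense for a demonstration::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Roll over once the file reaches ~50 bytes, keeping 5 old files.
    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=50, backupCount=5)
    my_logger.addHandler(handler)

    for i in range(20):
        my_logger.debug('i = %d', i)

    # Rollover produces LOG_FILENAME, LOG_FILENAME.1, LOG_FILENAME.2, ...
    print(glob.glob('%s*' % LOG_FILENAME))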
1602 When logging was added to the Python standard library, the only way of
1608 Logging (as of 3.2) provides improved support for these two additional
1620 >>> import logging
1621 >>> root = logging.getLogger()
1622 >>> root.setLevel(logging.DEBUG)
1623 >>> handler = logging.StreamHandler()
1624 >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
1628 >>> logger = logging.getLogger('foo.bar')
1633 >>> df = logging.Formatter('$asctime $name ${levelname} $message',
1642 Note that the formatting of logging messages for final output to logs is
1643 completely independent of how an individual logging message is constructed.
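In non-interactive form, choosing an alternative style for *output* formatting is a one-argument change on ``logging.Formatter``::

    import logging

    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    # style='{' selects str.format()-style placeholders; style='$' would select
    # string.Template style. The default remains '%'.
    handler.setFormatter(logging.Formatter(
        '{asctime} {name} {levelname:8s} {message}', style='{'))
    root.addHandler(handler)
    logging.getLogger('foo.bar').debug('formatted with str.format() placeholders')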
1650 Logging calls (``logger.debug()``, ``logger.info()`` etc.) only take
1651 positional parameters for the actual logging message itself, with keyword
1653 logging call (e.g. the ``exc_info`` keyword parameter to indicate that
1656 you cannot directly make logging calls using :meth:`str.format` or
1657 :class:`string.Template` syntax, because internally the logging package
1660 all logging calls which are out there in existing code will be using %-format
1665 arbitrary object as a message format string, and that the logging package will
1722 approach: the actual formatting happens not when you make the logging call, but
1732 import logging
1742 class StyleAdapter(logging.LoggerAdapter):
1751 logger = StyleAdapter(logging.getLogger(__name__))
1757 logging.basicConfig(level=logging.DEBUG)
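The wrapper-object technique alluded to above can be sketched without the full ``StyleAdapter``: because the logging machinery only calls ``str()`` on the message when a record is actually emitted, a tiny class is enough to write ``{}``-style calls. The ``__`` shorthand is just a local convention::

    import logging

    class BraceMessage:
        def __init__(self, fmt, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            # Deferred: only evaluated if the record is actually output.
            return self.fmt.format(*self.args, **self.kwargs)

    __ = BraceMessage

    logging.basicConfig(level=logging.DEBUG, format='%(message)s')
    logging.getLogger(__name__).debug(
        __('Message with {0} {name}', 2, name='placeholders'))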
1764 .. currentmodule:: logging
1771 Every logging event is represented by a :class:`LogRecord` instance.
1779 logging an event. This invoked :class:`LogRecord` directly to create an
1791 :meth:`Logger.makeRecord`, and set it using :func:`~logging.setLoggerClass`
1808 logger = logging.getLogger(__name__)
1811 could also add the filter to a :class:`~logging.NullHandler` attached to their
1816 In Python 3.2 and later, :class:`~logging.LogRecord` creation is done through a
1818 :func:`~logging.setLogRecordFactory`, and interrogate with
1819 :func:`~logging.getLogRecordFactory`. The factory is invoked with the same
1820 signature as the :class:`~logging.LogRecord` constructor, as :class:`LogRecord`
1827 old_factory = logging.getLogRecordFactory()
1834 logging.setLogRecordFactory(record_factory)
1840 overhead to all logging operations, and the technique should only be used when
1901 return logging.makeLogRecord(msg)
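A complete miniature of the factory technique, with the caveat repeated above about adding overhead to every logging call; ``custom_attribute`` and its value are placeholders::

    import logging

    old_factory = logging.getLogRecordFactory()

    def record_factory(*args, **kwargs):
        # Delegate to the previous factory, then decorate the record with an
        # extra attribute that format strings can reference.
        record = old_factory(*args, **kwargs)
        record.custom_attribute = 0xdecafbad
        return record

    logging.setLogRecordFactory(record_factory)
    logging.basicConfig(level=logging.INFO,
                        format='0x%(custom_attribute)x %(message)s')
    logging.info('record enriched by the factory')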
1906 Module :mod:`logging`
1907 API reference for the logging module.
1909 Module :mod:`logging.config`
1910 Configuration API for the logging module.
1912 Module :mod:`logging.handlers`
1913 Useful handlers included with the logging module.
1915 :ref:`A basic logging tutorial <logging-basic-tutorial>`
1917 :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`
1923 Below is an example of a logging configuration dictionary - it's taken from
1924 …he Django project <https://docs.djangoproject.com/en/stable/topics/logging/#configuring-logging>`_.
1927 LOGGING = {
1940 '()': 'project.logging.SpecialFilter',
1951 'class':'logging.StreamHandler',
1980 section <https://docs.djangoproject.com/en/stable/topics/logging/#configuring-logging>`_
1992 import logging
1993 import logging.handlers
2007 rh = logging.handlers.RotatingFileHandler('rotated.log', maxBytes=128, backupCount=5)
2011 root = logging.getLogger()
2012 root.setLevel(logging.INFO)
2014 f = logging.Formatter('%(asctime)s %(message)s')
2034 The following working example shows how logging can be used with multiprocessing
2042 see logging in the main process, how the workers log to a QueueHandler and how
2043 the listener implements a QueueListener and a more complex logging
2052 import logging
2053 import logging.config
2054 import logging.handlers
2062 A simple handler for logging events. It runs in the listener process and
2064 which then get dispatched, by the logging system, to the handlers
2070 logger = logging.getLogger()
2072 logger = logging.getLogger(record.name)
2076 # doing the logging to files and console
2085 This initialises logging according to the specified configuration,
2089 logging.config.dictConfig(config)
2090 listener = logging.handlers.QueueListener(q, MyHandler())
2099 logger = logging.getLogger('setup')
2110 This initialises logging according to the specified configuration,
2118 logging.config.dictConfig(config)
2119 levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
2120 logging.CRITICAL]
2130 logger = logging.getLogger('setup')
2134 logger = logging.getLogger(random.choice(loggers))
2145 'class': 'logging.StreamHandler',
2164 'class': 'logging.handlers.QueueHandler',
2174 # logging configuration is available to dispatch events to handlers however
2184 'class': 'logging.Formatter',
2188 'class': 'logging.Formatter',
2194 'class': 'logging.StreamHandler',
2199 'class': 'logging.FileHandler',
2205 'class': 'logging.FileHandler',
2211 'class': 'logging.FileHandler',
2228 # Log some initial events, just to show that logging in the parent works
2230 logging.config.dictConfig(config_initial)
2231 logger = logging.getLogger('setup')
2250 # Logging in the parent still works normally.
2270 :class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
2281 #. Attach a :class:`~logging.Formatter` instance to your
2282 :class:`~logging.handlers.SysLogHandler` instance, with a format string
2300 :rfc:`5424`-compliant messages. If you don't, logging may not complain, but your
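A sketch of those steps, assuming a syslog daemon reachable over UDP on localhost (the address and the choice of placeholders are assumptions; adjust for your platform). The ``'\ufeff'`` in the format string becomes a UTF-8 BOM when the message is encoded, separating the ASCII and Unicode sections as described above::

    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(
        address=('localhost', logging.handlers.SYSLOG_UDP_PORT))
    # Everything before '\ufeff' must render as pure ASCII (levelname does);
    # everything after it may contain non-ASCII text.
    handler.setFormatter(logging.Formatter('%(levelname)s\ufeff%(message)s'))

    logger = logging.getLogger('syslog_demo')
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.error('caf\xe9 not found')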
2304 Implementing structured logging
2307 Although most logging messages are intended for reading by humans, and thus not
2311 straightforward to achieve using the logging package. There are a number of
2316 import logging
2328 logging.basicConfig(level=logging.INFO, format='%(message)s')
2329 logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))
2344 import logging
2367 logging.basicConfig(level=logging.INFO, format='%(message)s')
2368 logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))
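The first of the approaches referred to above - encoding the structured part as JSON inside the message - needs nothing beyond the standard library; the ``_`` alias mirrors the convention in the fragments::

    import json
    import logging

    class StructuredMessage:
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            # The emitted message is "<text> >>> <JSON payload>".
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional shorthand

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))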
2385 .. currentmodule:: logging.config
2390 There are times when you want to customize logging handlers in particular ways,
2402 return logging.FileHandler(filename, mode, encoding)
2404 You can then specify, in a logging configuration passed to :func:`dictConfig`,
2405 that a logging handler be created by calling this function::
2407 LOGGING = {
2441 import logging, logging.config, os, shutil
2448 return logging.FileHandler(filename, mode, encoding)
2450 LOGGING = {
2480 logging.config.dictConfig(LOGGING)
2481 logger = logging.getLogger('mylogger')
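Stripped of the ownership-changing details (which are elided above), the essential shape of the customization is: point the handler entry's ``'()'`` key at a factory, and let the remaining keys become that factory's keyword arguments. The file name and logger name here are placeholders::

    import logging
    import logging.config

    def owned_file_handler(filename, mode='a', encoding=None):
        # In the full recipe this is where ownership would be adjusted before
        # the handler opens the file; here it simply delegates.
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'file': {
                '()': owned_file_handler,     # factory instead of a class name
                'filename': 'mylog.log',
                'mode': 'w',
            },
        },
        'root': {'level': 'DEBUG', 'handlers': ['file']},
    }

    logging.config.dictConfig(LOGGING)
    logging.getLogger('mylogger').debug('A debug message')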
2519 :class:`~logging.FileHandler` - for example, one of the rotating file handlers,
2523 .. currentmodule:: logging
2530 In Python 3.2, the :class:`~logging.Formatter` gained a ``style`` keyword
2534 governs the formatting of logging messages for final output to logs, and is
2535 completely orthogonal to how an individual logging message is constructed.
2537 Logging calls (:meth:`~Logger.debug`, :meth:`~Logger.info` etc.) only take
2538 positional parameters for the actual logging message itself, with keyword
2539 parameters used only for determining options for how to handle the logging call
2543 logging calls using :meth:`str.format` or :class:`string.Template` syntax,
2544 because internally the logging package uses %-formatting to merge the format
2546 backward compatibility, since all logging calls which are out there in existing
2553 For logging to work interoperably between any third-party libraries and your
2555 individual logging call. This opens up a couple of ways in which alternative
2562 In Python 3.2, along with the :class:`~logging.Formatter` changes mentioned
2563 above, the logging package gained the ability to allow users to set their own
2582 :ref:`arbitrary-object-messages`) that when logging you can use an arbitrary
2583 object as a message format string, and that the logging package will call
2635 approach: the actual formatting happens not when you make the logging call, but
2645 .. currentmodule:: logging.config
2650 You *can* configure filters using :func:`~logging.config.dictConfig`, though it
2652 :class:`~logging.Filter` is the only filter class included in the standard
2654 base class), you will typically need to define your own :class:`~logging.Filter`
2655 subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
2659 :class:`~logging.Filter` instance). Here is a complete example::
2661 import logging
2662 import logging.config
2665 class MyFilter(logging.Filter):
2678 LOGGING = {
2688 'class': 'logging.StreamHandler',
2699 logging.config.dictConfig(LOGGING)
2700 logging.debug('hello')
2701 logging.debug('hello - noshow')
2718 in :ref:`logging-config-dict-externalobj`. For example, you could have used
2723 handlers and formatters. See :ref:`logging-config-dict-userdef` for more
2724 information on how logging supports using user-defined objects in its
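Pulled together, the filter-configuration example works like this: the ``'()'`` key names the filter factory, the remaining keys are its keyword arguments, and the filter id is then referenced from the handler. The ``noshow`` parameter is the same toy example as in the fragments::

    import logging
    import logging.config

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            # Drop any record whose message contains the configured string.
            return self.param is None or self.param not in record.getMessage()

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {'()': MyFilter, 'param': 'noshow'},
        },
        'handlers': {
            'console': {'class': 'logging.StreamHandler', 'filters': ['myfilter']},
        },
        'root': {'level': 'DEBUG', 'handlers': ['console']},
    }

    logging.config.dictConfig(LOGGING)
    logging.debug('hello')            # emitted
    logging.debug('hello - noshow')   # filtered out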
2738 import logging
2740 class OneLineExceptionFormatter(logging.Formatter):
2755 fh = logging.FileHandler('output.txt', 'w')
2759 root = logging.getLogger()
2760 root.setLevel(logging.DEBUG)
2765 logging.info('Sample message')
2769 logging.exception('ZeroDivisionError: %s', e)
2787 Speaking logging messages
2790 There might be situations when it is desirable to have logging messages rendered
2803 import logging
2807 class TTSHandler(logging.Handler):
2819 root = logging.getLogger()
2822 root.setLevel(logging.DEBUG)
2825 logging.info('Hello')
2826 logging.debug('Goodbye')
2839 .. _buffered-logging:
2841 Buffering logging messages and outputting them conditionally
2846 start logging debug events in a function, and if the function completes without
2852 functions where you want logging to behave this way. It makes use of the
2853 :class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
2862 all the logging levels, writing to ``sys.stderr`` to say what level it's about
2863 to log at, and then actually logging a message at that level. You can pass a
2868 conditional logging that's required. The decorator takes a logger as a parameter
2872 records buffered). These default to a :class:`~logging.StreamHandler` which
2873 writes to ``sys.stderr``, ``logging.ERROR`` and ``100`` respectively.
2877 import logging
2878 from logging.handlers import MemoryHandler
2881 logger = logging.getLogger(__name__)
2882 logger.addHandler(logging.NullHandler())
2886 target_handler = logging.StreamHandler()
2888 flush_level = logging.ERROR
2928 logger.setLevel(logging.DEBUG)
2968 As you can see, actual logging output only occurs when an event is logged whose
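Without the decorator machinery, the buffering behaviour itself is easy to demonstrate: records accumulate in a ``logging.handlers.MemoryHandler`` and only reach the target handler when the buffer fills, when a record at the flush level (``ERROR`` here) arrives, or when the handler is closed. The capacity value is arbitrary::

    import logging
    import sys
    from logging.handlers import MemoryHandler

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)

    target = logging.StreamHandler(sys.stderr)
    memory_handler = MemoryHandler(100, flushLevel=logging.ERROR, target=target)
    logger.addHandler(memory_handler)

    logger.debug('buffered, not yet visible')
    logger.info('still buffered')
    logger.error('this flushes everything buffered so far')

    memory_handler.close()   # flushes any remaining records by default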
2981 Sending logging messages to email, with buffering
2986 :class:`~logging.handlers.BufferingHandler`. In the following example, which you can
2994 import logging
2995 import logging.handlers
2998 class BufferingSMTPHandler(logging.handlers.BufferingHandler):
3001 logging.handlers.BufferingHandler.__init__(self, capacity)
3011 self.setFormatter(logging.Formatter("%(asctime)s %(levelname)-5s %(message)s"))
3026 if logging.raiseExceptions:
3042 default='Test Logging email from Python logging module (buffering)',
3045 logger = logging.getLogger()
3046 logger.setLevel(logging.DEBUG)
3069 import logging
3072 class UTCFormatter(logging.Formatter):
3076 :class:`~logging.Formatter`. If you want to do that via configuration, you can
3077 use the :func:`~logging.config.dictConfig` API with an approach illustrated by
3080 import logging
3081 import logging.config
3084 class UTCFormatter(logging.Formatter):
3087 LOGGING = {
3101 'class': 'logging.StreamHandler',
3105 'class': 'logging.StreamHandler',
3115 logging.config.dictConfig(LOGGING)
3116 logging.warning('The local time is %s', time.asctime())
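The class itself is the whole trick: ``logging.Formatter`` uses its ``converter`` attribute to turn the record's timestamp into a ``struct_time``, so pointing it at ``time.gmtime`` yields UTC. Used programmatically rather than via ``dictConfig()``::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime   # the default is time.localtime

    handler = logging.StreamHandler()
    handler.setFormatter(UTCFormatter('%(asctime)s %(message)s'))
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(handler)
    root.warning('The local time is %s', time.asctime())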
3131 Using a context manager for selective logging
3134 There are times when it would be useful to temporarily change the logging
3136 manager is the most obvious way of saving and restoring the logging context.
3138 optionally change the logging level and add a logging handler purely in the
3141 import logging
3177 logger = logging.getLogger('foo')
3178 logger.addHandler(logging.StreamHandler())
3179 logger.setLevel(logging.INFO)
3182 with LoggingContext(logger, level=logging.DEBUG):
3185 h = logging.StreamHandler(sys.stdout)
3186 with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
3233 logging filters temporarily. Note that the above code works in Python 2 as well
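For completeness, the context manager whose use is shown above can be written in a few lines; this reconstruction keeps the same ``level``/``handler``/``close`` parameters::

    import logging
    import sys

    class LoggingContext:
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
                if self.close:
                    self.handler.close()
            # returning None lets any exception propagate

    logger = logging.getLogger('foo')
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.INFO)
    logger.debug('not shown: the logger is at INFO')
    with LoggingContext(logger, level=logging.DEBUG):
        logger.debug('shown once, on stderr')
    with LoggingContext(logger, level=logging.DEBUG,
                        handler=logging.StreamHandler(sys.stdout), close=True):
        logger.debug('shown twice: once on stderr, once on stdout')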
3244 * Use a logging level based on command-line arguments
3245 * Dispatch to multiple subcommands in separate files, all logging at the same
3254 command-line argument, defaulting to ``logging.INFO``. Here's one way that
3259 import logging
3291 logging.basicConfig(level=options.log_level,
3302 import logging
3304 logger = logging.getLogger(__name__)
3314 import logging
3316 logger = logging.getLogger(__name__)
3335 import logging
3337 logger = logging.getLogger(__name__)
3366 The first word is the logging level, and the second word is the module or
3369 If we change the logging level, then we can change the information sent to the
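Only scattered lines of the full example survive above, so here is a stripped-down sketch of the first bullet point alone - taking the level from the command line and handing it to ``basicConfig()``. The option name and format are placeholders, and the sub-command dispatch across modules is omitted::

    import argparse
    import logging
    import sys

    def main(argv=None):
        parser = argparse.ArgumentParser()
        parser.add_argument('--log-level', default='INFO',
                            choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'])
        options = parser.parse_args(argv)
        logging.basicConfig(level=getattr(logging, options.log_level),
                            format='%(levelname)s %(name)s %(message)s')
        logging.getLogger(__name__).info('started with --log-level %s',
                                         options.log_level)
        return 0

    if __name__ == '__main__':
        sys.exit(main())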
3399 A Qt GUI for logging
3411 can log to the GUI from both the UI itself (via a button for manual logging)
3412 as well as a worker thread doing work in the background (here, just logging
3426 import logging
3442 logger = logging.getLogger(__name__)
3450 signal = Signal(str, logging.LogRecord)
3463 class QtHandler(logging.Handler):
3483 # Used to generate random levels for logging.
3485 LEVELS = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
3486 logging.CRITICAL)
3527 logging.DEBUG: 'black',
3528 logging.INFO: 'blue',
3529 logging.WARNING: 'orange',
3530 logging.ERROR: 'red',
3531 logging.CRITICAL: 'purple',
3550 formatter = logging.Formatter(fs)
3601 @Slot(str, logging.LogRecord)
3623 logging.getLogger().setLevel(logging.DEBUG)
3632 Logging to syslog with RFC5424 support
3636 use the older :rfc:`3164`, which hails from 2001. When ``logging`` was added to Python
3639 servers, the :class:`~logging.handlers.SysLogHandler` functionality has not been
3647 import logging.handlers
3652 class SysLogHandler5424(logging.handlers.SysLogHandler):
3731 import logging
3754 logging.basicConfig(level=logging.DEBUG)
3755 logger = logging.getLogger('demo')
3756 info_fp = LoggerWriter(logger, logging.INFO)
3757 debug_fp = LoggerWriter(logger, logging.DEBUG)
3778 sys.stdout = LoggerWriter(logger, logging.INFO)
3779 sys.stderr = LoggerWriter(logger, logging.WARNING)
3781 You should do this *after* configuring logging for your needs. In the above
3782 example, the :func:`~logging.basicConfig` call does this (using the
3795 :func:`~logging.basicConfig`, but you can use a different formatter when you
3796 configure logging.
3804 sys.stderr = LoggerWriter(logger, logging.WARNING)
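A simplified ``LoggerWriter`` (the full recipe above also handles partial lines, which this sketch skips) is enough to show the redirection; note that it must be installed *after* logging is configured, as the surrounding text says::

    import logging
    import sys

    class LoggerWriter:
        """A minimal file-like object that forwards writes to a logger."""
        def __init__(self, logger, level):
            self.logger = logger
            self.level = level

        def write(self, message):
            if message.strip():              # skip the bare newlines print() emits
                self.logger.log(self.level, message.rstrip())

        def flush(self):
            pass                             # nothing is buffered in this sketch

    logging.basicConfig(level=logging.DEBUG,
                        format='%(name)s %(levelname)s %(message)s')
    logger = logging.getLogger('demo')
    sys.stdout = LoggerWriter(logger, logging.INFO)
    sys.stderr = LoggerWriter(logger, logging.WARNING)
    print('this line is routed to the logger at INFO level')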
3894 * Logging output can be garbled because multiple threads or processes try to
3895 write to the same file. Although logging guards against concurrent use of the
3915 given logger instance by name using ``logging.getLogger(name)``, so passing
3924 Configuring logging by adding handlers, formatters and filters is the
3927 loggers other than a :class:`~logging.NullHandler` instance.
3947 Module :mod:`logging`
3948 API reference for the logging module.
3950 Module :mod:`logging.config`
3951 Configuration API for the logging module.
3953 Module :mod:`logging.handlers`
3954 Useful handlers included with the logging module.
3956 :ref:`Basic Tutorial <logging-basic-tutorial>`
3958 :ref:`Advanced Tutorial <logging-advanced-tutorial>`