Summary of the Python logging Module

  • 2021-07-10 20:22:31
  • OfStack

Log level

CRITICAL  50
ERROR     40
WARNING   30
INFO      20
DEBUG     10
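
A logger's level acts as a threshold: a record is handled only if its numeric level is greater than or equal to the logger's. A minimal sketch (the logger name 'demo' is arbitrary):


import logging

logger = logging.getLogger('demo')
logger.setLevel(logging.INFO)            # 20: INFO and above pass the threshold
logger.addHandler(logging.StreamHandler())

logger.debug('filtered out, 10 < 20')    # below the threshold, not emitted
logger.info('emitted, 20 >= 20')         # printed to the console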

Parameters of the logging.basicConfig() function

filename: create a FileHandler with the given file name so that the log is written to that file.
filemode: the mode used to open the file when filename is given; the default is "a" (append) and it can also be set to "w".
format: the format string used by the handler to render log records.
datefmt: the date/time format, using the same directives as time.strftime().
level: the level of the root logger.
stream: create a StreamHandler on the given stream; output can go to sys.stderr, sys.stdout, or an open file, and defaults to sys.stderr. filename and stream should not be passed together: older versions ignore stream when filename is given, and recent Python 3 releases raise a ValueError.
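
For example, to send records to standard output instead of a file, stream can be passed on its own (a minimal sketch; as noted above, stream and filename should not be combined):


import logging
import sys

logging.basicConfig(
  stream=sys.stdout,                   # StreamHandler on stdout instead of the default stderr
  level=logging.INFO,
  format='%(asctime)s %(levelname)s %(message)s',
  datefmt='%Y-%m-%d %H:%M:%S',
)

logging.info('this record goes to stdout')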

Format string attributes for the format parameter

Attribute  Description
%(name)s  Name of the logger
%(levelno)s  Log level as a number
%(levelname)s  Log level as text
%(pathname)s  Full path of the module that issued the log call (may be unavailable)
%(filename)s  File name of the module that issued the log call
%(module)s  Name of the module that issued the log call
%(funcName)s  Name of the function that issued the log call
%(lineno)d  Line number of the statement that issued the log call
%(created)f  Time of the call as a UNIX floating-point timestamp
%(relativeCreated)d  Milliseconds elapsed between the loading of the logging module and the creation of the record
%(asctime)s  Time of the call as a string; the default format is "2003-07-08 16:49:45,896", where the digits after the comma are milliseconds
%(thread)d  Thread ID (may be unavailable)
%(threadName)s  Thread name (may be unavailable)
%(process)d  Process ID (may be unavailable)
%(message)s  The message supplied by the caller
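
These attributes can be mixed freely in a format string passed to logging.Formatter. A minimal sketch using a few of them (the logger name 'format_demo' is arbitrary):


import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
  '%(asctime)s %(name)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s'))

logger = logging.getLogger('format_demo')
logger.addHandler(handler)
logger.warning('formatted record')
# prints something like:
# 2021-07-10 20:22:31,000 format_demo WARNING [demo.py:9] formatted record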

Print logs to the console using logging


import logging

# With the default configuration the root logger level is WARNING and records
# go to sys.stderr, so only the warning line below is actually printed.
logging.debug('debug message')
logging.info('info message')
logging.warning('warning message')

Write logs to a file using logging.basicConfig()


import logging
import os

logging.basicConfig(
  filename=os.path.join(os.getcwd(), 'all.log'),
  level=logging.DEBUG,
  format='%(asctime)s %(filename)s : %(levelname)s %(message)s',  # output log format
  filemode='a',
  datefmt='%Y-%m-%d %A %H:%M:%S',
)

logging.debug('this is a message')

Custom Logger

Rotate the log file automatically by file size


import logging
from logging import handlers


class Logger(object):
  level_relations = {
    'debug': logging.DEBUG,
    'info': logging.INFO,
    'warning': logging.WARNING,
    'error': logging.ERROR,
    'crit': logging.CRITICAL
  }

  def __init__(self, filename, level='info',
         fmt='%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'):
    self.logger = logging.getLogger(filename)
    format_str = logging.Formatter(fmt)  # formatter applied to every record
    self.logger.setLevel(self.level_relations.get(level))  # set the logger level

    # output the log to the console
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(format_str)
    self.logger.addHandler(stream_handler)

    # write the log to a file, rotating by size
    # 1 MB = 1024 * 1024 bytes; rotate when the file reaches 500 MB
    # and keep at most 5 archived files
    rotating_file_handler = handlers.RotatingFileHandler(
      filename=filename, mode='a', maxBytes=1024 * 1024 * 500, backupCount=5, encoding='utf-8')
    rotating_file_handler.setFormatter(format_str)
    self.logger.addHandler(rotating_file_handler)


log = Logger('all.log', level='info')

log.logger.info('[Test log] hello, world')

Rotate log files automatically by time interval


import logging
from logging import handlers


class Logger(object):
  level_relations = {
    'debug': logging.DEBUG,
    'info': logging.INFO,
    'warning': logging.WARNING,
    'error': logging.ERROR,
    'crit': logging.CRITICAL
  }

  def __init__(self, filename, level='info', when='D', backCount=3,
         fmt='%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'):
    self.logger = logging.getLogger(filename)
    format_str = logging.Formatter(fmt)  # formatter applied to every record
    self.logger.setLevel(self.level_relations.get(level))  # set the logger level

    # write the log to a file
    # handler that starts a new log file after a fixed time interval
    timed_rotating_file_handler = handlers.TimedRotatingFileHandler(
      filename=filename, when=when, backupCount=backCount, encoding='utf-8')

    # TimedRotatingFileHandler parameters:
    # interval (default 1) is the number of 'when' units between rollovers,
    # backupCount is the number of archived files to keep (older ones are deleted),
    # and 'when' selects the unit of the interval:
    #   S         seconds
    #   M         minutes
    #   H         hours
    #   D         days
    #   W0-W6     weekday (0 = Monday)
    #   midnight  roll over at midnight
    timed_rotating_file_handler.setFormatter(format_str)  # format records written to the file
    self.logger.addHandler(timed_rotating_file_handler)
    self.logger.addHandler(timed_rotating_file_handler)

    #  Output to the screen 
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(format_str)
    self.logger.addHandler(stream_handler)


log = Logger('all.log', level='info')
log.logger.info('[Test log] hello, world')
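
The same class can, for example, keep one file per day and the last seven days of history by rotating at midnight (the parameter values here are only an illustration):


log = Logger('daily.log', level='debug', when='midnight', backCount=7)
log.logger.debug('rotated at midnight, last 7 files kept')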

Application of the logging Module in Flask

While using Flask I read a lot of its documentation on logging but found it inconvenient to use directly, so I wrote the following log module based on the official Flask documentation and integrated it into the application.

RESTful API project directory:


.
├── apps_api
│   ├── common
│   ├── models
│   └── resources
├── logs
├── migrations
│   └── versions
├── static
├── templates
├── test
├── utils
├── app.py
├── config.py
├── exts.py
├── log.py
├── manage.py
├── run.py
├── README.md
└── requirements.txt

log.py file


# -*- coding: utf-8 -*-

import logging
from flask.logging import default_handler
import os

from logging.handlers import RotatingFileHandler
from logging import StreamHandler

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

LOG_PATH = os.path.join(BASE_DIR, 'logs')

LOG_PATH_ERROR = os.path.join(LOG_PATH, 'error.log')
LOG_PATH_INFO = os.path.join(LOG_PATH, 'info.log')
LOG_PATH_ALL = os.path.join(LOG_PATH, 'all.log')

# maximum size of a single log file: 100 MB
LOG_FILE_MAX_BYTES = 100 * 1024 * 1024
# keep up to 10 rotated backup files
LOG_FILE_BACKUP_COUNT = 10


class Logger(object):

  def init_app(self, app):
    # remove Flask's default handler so records are not printed twice to the console
    app.logger.removeHandler(default_handler)

    formatter = logging.Formatter(
      '%(asctime)s [%(thread)d:%(threadName)s] [%(filename)s:%(module)s:%(funcName)s] '
      '[%(levelname)s]: %(message)s'
    )

    # output the log to a file
    # 1 MB = 1024 * 1024 bytes
    # once a file reaches LOG_FILE_MAX_BYTES (100 MB here) a new file is started
    # and the old one is kept as a numbered archive
    file_handler = RotatingFileHandler(
      filename=LOG_PATH_ALL,
      mode='a',
      maxBytes=LOG_FILE_MAX_BYTES,
      backupCount=LOG_FILE_BACKUP_COUNT,
      encoding='utf-8'
    )

    file_handler.setFormatter(formatter)
    file_handler.setLevel(logging.INFO)

    stream_handler = StreamHandler()
    stream_handler.setFormatter(formatter)
    stream_handler.setLevel(logging.INFO)

    for logger in (
        # more loggers can be added here; see the Flask logging documentation
        app.logger,
        logging.getLogger('sqlalchemy'),
        logging.getLogger('werkzeug'),
    ):
      logger.addHandler(file_handler)
      logger.addHandler(stream_handler)
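
Removing default_handler keeps Flask's built-in console handler from printing each record a second time next to the StreamHandler added here, and attaching the same two handlers to the sqlalchemy and werkzeug loggers makes their output share the application's format and log file.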

Add the log module to the exts.py extension file


# encoding: utf-8
from log import Logger

logger = Logger()

Import the logger in app.py, which contains the create_app application factory.


# encoding: utf-8
from flask import Flask
from config import CONFIG
from exts import logger


def create_app():
  app = Flask(__name__)

  #  Load configuration 
  app.config.from_object(CONFIG)

  # initialize the logger
  logger.init_app(app)

  return app

Run run.py


# -*- coding: utf-8 -*-

from app import create_app

app = create_app()

if __name__ == '__main__':
  app.run()

$ python run.py
* Serving Flask app "app" (lazy loading)
* Environment: production
  WARNING: This is a development server. Do not use it in a production deployment.
  Use a production WSGI server instead.
* Debug mode: on
2019-07-08 08:15:50,396 [140735687508864:MainThread] [_internal.py:_internal:_log] [INFO]: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
2019-07-08 08:15:50,397 [140735687508864:MainThread] [_internal.py:_internal:_log] [INFO]: * Restarting with stat
2019-07-08 08:15:50,748 [140735687508864:MainThread] [_internal.py:_internal:_log] [WARNING]: * Debugger is active!
2019-07-08 08:15:50,755 [140735687508864:MainThread] [_internal.py:_internal:_log] [INFO]: * Debugger PIN: 234-828-739
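
Once init_app() has run, any code in the application can log through app.logger (or flask.current_app.logger inside a request), and the record goes to both the console and logs/all.log. A minimal sketch, with a hypothetical /ping route added to run.py for illustration:


# run.py (excerpt) with an extra route added for illustration
import logging

from flask import current_app

from app import create_app

app = create_app()
app.logger.setLevel(logging.INFO)  # make sure INFO records pass the logger's own level check


@app.route('/ping')
def ping():
  current_app.logger.info('ping received')  # written to the console and logs/all.log
  return 'pong'


if __name__ == '__main__':
  app.run()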
