When a project is deployed, it is impractical to print all diagnostic information to the console. Recording it in a log file instead makes it easy to check how the program is running, and when the project fails, the logs produced at runtime let us locate the problem quickly.
The Python standard library provides the logging module for this. It defines six log levels by default (the numeric value of each level is given in parentheses): NOTSET (0), DEBUG (10), INFO (20), WARNING (30), ERROR (40), CRITICAL (50). When defining custom levels, take care not to reuse the values of the built-in ones. At runtime, logging outputs every record whose level is greater than or equal to the configured level: if the level is set to INFO, then records at INFO, WARNING, ERROR, and CRITICAL are all output.
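A quick sketch of this threshold behaviour (standard library only; the logger name `level_demo` is arbitrary):

```python
import logging

# A fresh logger set to INFO: records at INFO and above pass,
# DEBUG records are filtered out.
logger = logging.getLogger("level_demo")
logger.setLevel(logging.INFO)

print(logger.isEnabledFor(logging.DEBUG))    # False: 10 < 20
print(logger.isEnabledFor(logging.WARNING))  # True: 30 >= 20
print(logging.getLevelName(40))              # the name bound to value 40: ERROR
```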
The official workflow diagram of the logging module is shown below. From it we can pick out the module's core Python types: Logger, LogRecord, Filter, Handler, and Formatter.
What each type does:

- Logger: the logger; it exposes the logging functions to application code and decides, based on its level and its filters, which records are processed.
- LogRecord: a log record; it represents a single logged event and is passed to the appropriate handlers for processing.
- Handler: a handler; it sends the log records (created by loggers) to their destination.
- Filter: a filter; it provides finer-grained control and can decide which records are output.
- Formatter: a formatter; it specifies the layout of a record in the final output.
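A minimal sketch (not from the original article) showing how the four types cooperate; the filter callback `no_passwords` is a made-up example name, and since Python 3.2 any callable can be registered as a filter:

```python
import logging

def no_passwords(record: logging.LogRecord) -> bool:
    # Filter callback: return False to drop the record.
    return "password" not in record.getMessage()

logger = logging.getLogger("wiring_demo")        # Logger: entry point
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()                # Handler: where records go
handler.setFormatter(logging.Formatter(          # Formatter: how they look
    "%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.addFilter(no_passwords)                   # Filter: which records pass

logger.info("user logged in")                    # emitted
logger.info("password is hunter2")               # dropped by the filter
```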
The output format of the log is configurable; the default format is shown below.

logging is very simple to use: the basicConfig() method covers the basic needs. If it is called without arguments, the root Logger object is created with the default configuration, the log level defaults to WARNING, and the default output format is the one shown above. The optional parameters of this function are listed in the following table.
| Parameter | Description |
| --- | --- |
| filename | Name of the file the log is written to |
| filemode | Mode in which the file is opened: r[+], w[+], a[+] |
| format | Format of the log output |
| datefmt | Format of the date/time attached to the log |
| style | Format placeholder style: "%", "{", or "$" (default "%") |
| level | Log output level |
| stream | Output stream used to initialize a StreamHandler; cannot be combined with filename, otherwise a ValueError is raised |
| handlers | List of Handler objects to install; cannot be combined with filename or stream, otherwise a ValueError is raised |
The sample code is as follows:

```python
import logging

logging.basicConfig()
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
```
The output is as follows:

```
WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message
```
Passing in the commonly used parameters, the sample code looks like this (the variables used in the format placeholders are described later):

```python
import logging

logging.basicConfig(filename="test.log", filemode="w",
                    format="%(asctime)s %(name)s:%(levelname)s:%(message)s",
                    datefmt="%d-%m-%Y %H:%M:%S",
                    level=logging.DEBUG)
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
```
The generated log file test.log contains the following:

```
13-10-18 21:10:32 root:DEBUG:This is a debug message
13-10-18 21:10:32 root:INFO:This is an info message
13-10-18 21:10:32 root:WARNING:This is a warning message
13-10-18 21:10:32 root:ERROR:This is an error message
13-10-18 21:10:32 root:CRITICAL:This is a critical message
```
However, when an exception occurs, calling debug(), info(), warning(), error(), or critical() directly with no extra arguments does not record the exception information. You must either set the exc_info parameter to True, use the exception() method, or use the log() method with both the level and the exc_info parameter set.

```python
import logging

logging.basicConfig(filename="test.log", filemode="w",
                    format="%(asctime)s %(name)s:%(levelname)s:%(message)s",
                    datefmt="%d-%m-%Y %H:%M:%S",
                    level=logging.DEBUG)

a = 5
b = 0
try:
    c = a / b
except Exception as e:
    # Choose one of the following three ways; the first is recommended
    logging.exception("Exception occurred")
    logging.error("Exception occurred", exc_info=True)
    logging.log(level=logging.DEBUG, msg="Exception occurred", exc_info=True)
```
The basic usage above is enough to get started with the logging module quickly, but it usually falls short of real-world needs; we also need to customize the Logger.
A system has only one root Logger object, and the Logger class cannot be instantiated directly. You guessed it: the singleton pattern is at work here, and the way to obtain a Logger object is the getLogger() method.

Note: the singleton here does not mean there is only one Logger object; it means the whole system has only one root Logger. When a Logger object executes info(), error(), and the other methods, it is ultimately the root Logger that does the output.

We can create multiple Logger objects, but the one that actually outputs the log is the root Logger object. Every Logger object can have a name. If you set `logger = logging.getLogger(__name__)`, where `__name__` is a special built-in Python variable holding the current module's name (`__main__` by default), then the Logger object's name follows the recommended convention: a namespace hierarchy with dots as separators.
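A short sketch (added here for illustration) showing both behaviours: getLogger() returns the same cached object for the same name, and dotted names build a parent/child tree:

```python
import logging

a = logging.getLogger("app")
b = logging.getLogger("app")
print(a is b)                 # True: same cached object for the same name

child = logging.getLogger("app.db")
print(child.parent is a)      # True: the dot separator builds the hierarchy
```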
A Logger object can have multiple Handler and Filter objects set on it, and a Handler object can have a Formatter set. The Formatter object defines the concrete output format. The commonly used format variables are listed in the following table; for all parameters see the official Python (3.7) documentation:
| Variable | Format | Description |
| --- | --- | --- |
| asctime | %(asctime)s | Time of the record in human-readable form, by default down to the millisecond, e.g. 2018-10-13 23:24:57,832; an additional datefmt parameter can be given to change the format |
| name | %(name)s | Name of the logger |
| filename | %(filename)s | File name without the path |
| pathname | %(pathname)s | File name including the full path |
| funcName | %(funcName)s | Name of the function containing the logging call |
| levelname | %(levelname)s | Level name of the record |
| message | %(message)s | The log message itself |
| lineno | %(lineno)d | Line number of the logging call |
| process | %(process)d | Current process ID |
| processName | %(processName)s | Current process name |
| thread | %(thread)d | Current thread ID |
| threadName | %(threadName)s | Current thread name |
Both Logger and Handler objects can have a level set. The default root Logger level is 30, i.e. WARNING, and the default Handler level is 0, i.e. NOTSET. The logging module is designed this way for flexibility: sometimes, for example, we want only WARNING-level records on the console but DEBUG-level records in the file. This can be done by setting the Logger to the lowest level needed and creating two Handler objects with different levels. The sample code is as follows:

```python
import logging
import logging.handlers

logger = logging.getLogger("logger")

handler1 = logging.StreamHandler()
handler2 = logging.FileHandler(filename="test.log")

logger.setLevel(logging.DEBUG)
handler1.setLevel(logging.WARNING)
handler2.setLevel(logging.DEBUG)

formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
handler1.setFormatter(formatter)
handler2.setFormatter(formatter)

logger.addHandler(handler1)
logger.addHandler(handler2)

# 30, 10 and 10 respectively
# print(handler1.level)
# print(handler2.level)
# print(logger.level)

logger.debug('This is a customer debug message')
logger.info('This is an customer info message')
logger.warning('This is a customer warning message')
logger.error('This is an customer error message')
logger.critical('This is a customer critical message')
```
The console output is:

```
2018-10-13 23:24:57,832 logger WARNING This is a customer warning message
2018-10-13 23:24:57,832 logger ERROR This is an customer error message
2018-10-13 23:24:57,832 logger CRITICAL This is a customer critical message
```
The content of the file is:

```
2018-10-13 23:44:59,817 logger DEBUG This is a customer debug message
2018-10-13 23:44:59,817 logger INFO This is an customer info message
2018-10-13 23:44:59,817 logger WARNING This is a customer warning message
2018-10-13 23:44:59,817 logger ERROR This is an customer error message
2018-10-13 23:44:59,817 logger CRITICAL This is a customer critical message
```
After creating a custom Logger object, do not mix it with the module-level output functions of logging; those functions operate on the default-configured root Logger object, and the output would otherwise be duplicated.

```python
import logging
import logging.handlers

logger = logging.getLogger("logger")

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.debug('This is a customer debug message')
logging.info('This is an customer info message')   # module-level call on the root logger
logger.warning('This is a customer warning message')
logger.error('This is an customer error message')
logger.critical('This is a customer critical message')
```
The output is as follows (you can see each record is output twice):

```
2018-10-13 22:21:35,873 logger WARNING This is a customer warning message
WARNING:logger:This is a customer warning message
2018-10-13 22:21:35,873 logger ERROR This is an customer error message
ERROR:logger:This is an customer error message
2018-10-13 22:21:35,873 logger CRITICAL This is a customer critical message
CRITICAL:logger:This is a customer critical message
```
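The duplication happens because records handled by the custom logger also propagate up to the root logger, which by then has its own handler. A common remedy, shown here as a sketch (it is not part of the original example), is to stop propagation on the custom logger:

```python
import logging

logger = logging.getLogger("logger")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
logger.propagate = False   # records no longer bubble up to the root logger

logging.basicConfig()      # the root handler exists but is never reached
logger.warning("printed exactly once")
```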
Note: when a Python module that writes logs is imported, e.g. `import test`, the logs inside the imported module are also output once their level meets the currently configured level.
Through the examples above we know the configuration a custom Logger object needs. Above, the objects are configured by hard-coding in the program, but the configuration can also be loaded from a dictionary or from a configuration file. Open the logging.config Python file and you can see the functions that parse and apply such configurations.
Getting the configuration from a dictionary:

```python
import logging.config

config = {
    'version': 1,
    'formatters': {
        'simple': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
        # Other formatters
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple'
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'logging.log',
            'level': 'DEBUG',
            'formatter': 'simple'
        },
        # Other handlers
    },
    'loggers': {
        'StreamLogger': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
        'FileLogger': {
            # Both the console handler and the file handler
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
        },
        # Other loggers
    }
}

logging.config.dictConfig(config)
StreamLogger = logging.getLogger("StreamLogger")
FileLogger = logging.getLogger("FileLogger")
# Log output omitted
```
Getting the configuration from a configuration file:

Common configuration file formats are ini, yaml, and JSON; the configuration can even be fetched over the network. As long as a parser exists for the format, it can be used. Only the ini and yaml formats are shown below.
The test.ini file:

```ini
[loggers]
keys=root,sampleLogger

[handlers]
keys=consoleHandler

[formatters]
keys=sampleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_sampleLogger]
level=DEBUG
handlers=consoleHandler
qualname=sampleLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=sampleFormatter
args=(sys.stdout,)

[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
```
The testinit.py file:

```python
import logging.config

logging.config.fileConfig(fname='test.ini', disable_existing_loggers=False)
logger = logging.getLogger("sampleLogger")
# Log output omitted
```
The test.yaml file:

```yaml
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
loggers:
  simpleExample:
    level: DEBUG
    handlers: [console]
    propagate: no
root:
  level: DEBUG
  handlers: [console]
```
The testyaml.py file:

```python
import logging.config
# Requires the pyyaml library
import yaml

with open('test.yaml', 'r') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)
logger = logging.getLogger("sampleLogger")
# Log output omitted
```
1. Garbled Chinese output

The examples above log English text. If Chinese text is written to a file, it may come out garbled; how do we solve this? A file encoding can be specified when the FileHandler object is created. Setting it to "utf-8" (utf-8 and utf8 are equivalent) fixes the garbled Chinese output. One approach is to customize the Logger object, which requires writing a fair amount of configuration; another is to use the default configuration method basicConfig() and pass in a handlers list whose handler sets the file encoding. Many methods found online do not work; the key reference code is as follows:
```python
# Custom Logger configuration
handler = logging.FileHandler(filename="test.log", encoding="utf-8")
```

```python
# Default Logger configuration
logging.basicConfig(handlers=[logging.FileHandler("test.log", encoding="utf-8")],
                    level=logging.DEBUG)
```
2. Temporarily disabling log output

Sometimes we want to switch log output off and turn it back on later. If we printed information with print(), we would have to comment out every print() call; with logging, we have a one-flick "magic" switch to turn logs off. One way, with the default configuration, is to call the logging.disable() method with a level: all output at or below that level is then suppressed. The other way, with a custom Logger, is to set the Logger object's disabled attribute to True (the default is False, i.e. not disabled).
```python
# Default configuration: suppress everything at INFO level and below
logging.disable(logging.INFO)

# Custom Logger: switch this logger off entirely
logger.disabled = True
```
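A runnable sketch of the global switch; passing logging.NOTSET to logging.disable() lifts the suppression again:

```python
import logging

logging.basicConfig(level=logging.DEBUG)

logging.disable(logging.INFO)       # suppress INFO and below, everywhere
logging.info("hidden")              # not emitted
logging.warning("still visible")    # WARNING (30) > INFO (20), emitted

logging.disable(logging.NOTSET)     # restore normal behaviour
logging.info("visible again")
```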
3. Splitting log files by time or size

If logs are kept in a file for a long time, or there are many of them, a single log file grows large, which is bad for backup and for reading. Can log files be split by time or by size? Certainly, and it is simple: logging has anticipated this need. The logging.handlers module provides the TimedRotatingFileHandler and RotatingFileHandler classes, which split by time and by size respectively. Open that handlers file and you can see Handler classes with other capabilities as well; they all inherit from the base class BaseRotatingHandler.
```python
# Constructor of the TimedRotatingFileHandler class
def __init__(self, filename, when='h', interval=1, backupCount=0,
             encoding=None, delay=False, utc=False, atTime=None):

# Constructor of the RotatingFileHandler class
def __init__(self, filename, mode='a', maxBytes=0, backupCount=0,
             encoding=None, delay=False):
```
The sample code is as follows:

```python
# Roll over every 1000 bytes, keeping 3 backup files
file_handler = logging.handlers.RotatingFileHandler(
    "test.log", mode="w", maxBytes=1000, backupCount=3, encoding="utf-8")
```

```python
# Roll over every hour (interval is the time interval), keeping 10 backup files
handler2 = logging.handlers.TimedRotatingFileHandler(
    "test.log", when="H", interval=1, backupCount=10)
```
Although the official Python documentation says the logging library is thread-safe, there are still issues worth considering in multi-process and multi-threaded environments, for example, how to split the log by process (or thread) into different log files, i.e. one file per process (or thread).
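One workable approach for threads, sketched below as an illustration rather than the only solution (it assumes one FileHandler per thread name is acceptable), is to give each thread its own logger whose FileHandler is named after the thread:

```python
import logging
import threading

def get_thread_logger() -> logging.Logger:
    """Return a logger whose records go to <thread name>.log."""
    name = threading.current_thread().name
    logger = logging.getLogger(f"per_thread.{name}")
    if not logger.handlers:                       # configure only once per thread name
        handler = logging.FileHandler(f"{name}.log", encoding="utf-8")
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(threadName)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG)
        logger.propagate = False                  # keep records out of the root logger
    return logger

def worker():
    get_thread_logger().info("work done")

threads = [threading.Thread(target=worker, name=f"worker-{i}") for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# produces worker-0.log and worker-1.log
```

For multiple processes this naive scheme is not enough, since separate processes writing to one file can interleave; the usual recommendation there is a QueueHandler or a socket-based log collector.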
Summary: the design of Python's logging library really is flexible. If you have special needs, you can also build on the basic logging library, creating new Handler classes to solve problems that come up in real development.
This article is translated from https://juejin.cn/post/6844903692915703815. If there is any infringement, please contact us for removal.