Recently a small requirement came up: I needed to save logs to a file. I usually debug with print statements, which is inconvenient because they have to be deleted once they are no longer needed, and they can only send output to the console. After searching online I found that Python ships with a built-in module, logging, for emitting log messages, with many configuration options. After reading up on it, my only regret was not finding it sooner. Below is a personal summary of what I learned; I hope it helps you too.
Let's start with the simplest usage:
# -*- coding: utf-8 -*-
import logging
logging.debug('debug level, generally used for debugging information; the lowest level')
logging.info('info level, generally used for normal operational information')
logging.warning('warning level, generally used for warning messages')
logging.error('error level, generally used for error messages')
logging.critical('critical level, generally used for fatal error messages; the highest level')
Run this and the log messages are printed straight to the console:
WARNING:root:warning level, generally used for warning messages
ERROR:root:error level, generally used for error messages
CRITICAL:root:critical level, generally used for fatal error messages; the highest level
You will notice that only the last three messages are output. That is because logging is hierarchical: the five levels above increase in severity from top to bottom, and logging's level setting makes it print only messages at or above a certain level. The default level is WARNING, so only logs at WARNING and above are printed. If we also want debug and info messages printed, we can configure that with basicConfig:
logging.basicConfig(level=logging.DEBUG)
Now the console output will include all five messages.
Log levels are not unique to Python; nearly every logging system classifies messages by level, which lets us focus on different things at different times. For example, we can emit debugging details at the debug level and set logging's level to DEBUG; later, when we no longer want those logs shown, we simply raise the level to INFO or higher instead of commenting out or deleting each statement as we would with print.
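For instance (a minimal sketch; the DEBUG_MODE flag is just an illustrative name), one setting switches the whole program between verbose and quiet output:
# -*- coding: utf-8 -*-
import logging

DEBUG_MODE = False  # hypothetical switch: flip to True while debugging

# one place controls how chatty the output is
logging.basicConfig(level=logging.DEBUG if DEBUG_MODE else logging.INFO)

logging.debug('hidden when DEBUG_MODE is False')   # filtered out at INFO level
logging.info('shown at INFO level and above')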
The log output above is very terse and does not yet meet our needs; for example, we may also want the time and the location of each message. That, too, can be configured through basicConfig.
logging.basicConfig(format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s',
level=logging.DEBUG)
The output will then look like this:
2019-07-19 15:54:26,625 - log_test.py[line:11] - DEBUG: debug level, generally used for debugging information; the lowest level
The format argument specifies the content and layout of each record. Its built-in attributes are as follows:
%(name)s: name of the logger
%(levelno)s: numeric log level
%(levelname)s: log level name
%(pathname)s: full path of the program being executed, i.e. sys.argv[0]
%(filename)s: file name of the program being executed
%(funcName)s: name of the function issuing the log call
%(lineno)d: line number of the log call
%(asctime)s: time the log record was created
%(thread)d: thread ID
%(threadName)s: thread name
%(process)d: process ID
%(message)s: the logged message
In addition, basicConfig offers many other options, which we will keep introducing as we go.
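As one small illustration (a sketch; the exact choice of fields is up to you), the datefmt option controls how %(asctime)s is rendered, and you can mix in attributes such as %(funcName)s and %(threadName)s:
import logging

# show thread name and function name, with a custom timestamp format
logging.basicConfig(
    format='%(asctime)s %(levelname)s [%(threadName)s] %(funcName)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG)

def do_work():
    logging.info('inside do_work')  # funcName will show up as do_work

do_work()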
So far we have only printed the log to the console, but very often we need to save it to a file so that, when something goes wrong with the program, we can locate the problem from the log. The simplest way is again basicConfig:
logging.basicConfig(format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s',
level=logging.DEBUG,
filename='test.log',
filemode='a')
Just add the filename and filemode parameters to the configuration above and the log is written to the file test.log; if the file does not exist it is created automatically. The filemode parameter is the file open mode. If you don't set it, it defaults to 'a' (append mode), so you can leave it out; it can also be set to 'w', in which case each run of the program overwrites the previous log. After doing this, however, you will find that nothing shows up on the console any more. How do we both print to the console and write to the file? That takes a bit more study.
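As a small preview of where we are headed (a sketch, not the original code of this article): on Python 3.3+, basicConfig also accepts a handlers= list, so a console handler and a file handler can be attached in a single call:
import logging

# one call sets up both destinations (requires Python 3.3+)
logging.basicConfig(
    format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s',
    level=logging.DEBUG,
    handlers=[
        logging.StreamHandler(),           # console
        logging.FileHandler('test.log'),   # file
    ])

logging.info('goes to both the console and test.log')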
Up to now we have only used logging in a very simple way, and what it can do for us is limited. In fact, the logging library has a modular design with several kinds of components: loggers, handlers, filters, and formatters.
In short, the Logger is responsible for recording messages; where those messages end up is handed over to a Handler; a Filter helps us filter records (not only by level); and a Formatter plays the same role as the format argument above, defining the content and layout of the output.
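Since filters do not come up again below, here is a minimal sketch of one (the NoPasswordFilter name and the rule it applies are made up for illustration): a Filter subclass simply returns False for the records it wants to drop:
import logging

class NoPasswordFilter(logging.Filter):
    """Drop any record whose message mentions 'password' (illustrative rule)."""
    def filter(self, record):
        return 'password' not in record.getMessage()

logger = logging.getLogger('filter_demo')
logger.addHandler(logging.StreamHandler())
logger.addFilter(NoPasswordFilter())

logger.warning('user logged in')            # emitted
logger.warning('user password is hunter2')  # filtered out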
With that in mind, let's try using these components and log again:
logger = logging.getLogger('test')
logger.debug('debug level, generally used for debugging information; the lowest level')
logger.info('info level, generally used for normal operational information')
logger.warning('warning level, generally used for warning messages')
logger.error('error level, generally used for error messages')
logger.critical('critical level, generally used for fatal error messages; the highest level')
The first line, getLogger, obtains a logger; the name identifies that Logger. The output methods that follow look almost identical to the logging calls we used before, so it seems simple enough. But it doesn't work: running it produces an error:
No handlers could be found for logger "test"
We never assigned a handler to this logger, so it doesn't know how to handle logs or where to send them. So let's add a Handler to it. There are many kinds of handlers; four come up most often: StreamHandler (console), FileHandler (file), and the two rotating file handlers we will meet below, TimedRotatingFileHandler and RotatingFileHandler.
Let's start with the simplest one, StreamHandler, and send the log to the console:
logger = logging.getLogger('test')
stream_handler = logging.StreamHandler()
logger.addHandler(stream_handler)
...
Now the console shows:
warning level, generally used for warning messages
error level, generally used for error messages
critical level, generally used for fatal error messages; the highest level
A few records are still missing because we haven't set the log level. Let's set the level as well, and also use the Formatter component to set the output format.
logger = logging.getLogger('test')
logger.setLevel(level=logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s')
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)
...
Notice that the Formatter is set on the handler. That makes sense: the handler is responsible for sending the log somewhere, so it is the handler that gets formatted, not the logger. But why does the level have to be set twice? Setting it on the logger tells it which levels to record; setting it on the handler tells it which levels to output. It amounts to two rounds of filtering. The benefit is that when we have multiple destinations, such as saving to a file and printing to the console, we can give each one a different level. The logger's level filters first, so a record filtered out by the logger never reaches any handler; that way you can change just the logger and affect all outputs. The combination of the two makes log levels easier to manage.
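To make the two-stage filtering concrete, here is a small sketch (the logger name and the levels are chosen only for illustration): the console handler asks for DEBUG, but the logger is set to WARNING, so debug and info records never reach the handler at all:
import logging

logger = logging.getLogger('two_stage_demo')
logger.setLevel(logging.WARNING)   # first filter: logger level

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)    # second filter: handler level
logger.addHandler(handler)

logger.debug('dropped by the logger before any handler sees it')
logger.warning('passes both filters and is printed')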
With handlers, it is easy to send the log to the console and a file at the same time:
logger = logging.getLogger('test')
logger.setLevel(level=logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s')
file_handler = logging.FileHandler('test2.log')
file_handler.setLevel(level=logging.INFO)
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)
stream_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
All it takes is one extra FileHandler.
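Continuing directly from the logger configured above (a sketch reusing its handler levels), a quick run shows the effect of giving the two handlers different levels: debug messages reach the console but not test2.log, because file_handler is set to INFO:
logger.debug('console only: below the INFO level of file_handler')
logger.info('console and test2.log')
logger.error('console and test2.log')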
Sometimes we also need to split (rotate) the log file to keep it manageable. Python provides two handlers for this: TimedRotatingFileHandler, which rotates by time interval, and RotatingFileHandler, which rotates by file size.
They are used much like the handlers above, with a few extra parameters; for example, when='D' means the file is rotated daily. The meaning of the other parameters can be found in the reference article "Python + logging: output to the screen and write logs to a file".
from logging import handlers
time_rotating_file_handler = handlers.TimedRotatingFileHandler(filename='rotating_test.log', when='D')
time_rotating_file_handler.setLevel(logging.DEBUG)
time_rotating_file_handler.setFormatter(formatter)
logger.addHandler(time_rotating_file_handler)
If you change it to when='S', the file is rotated every second; after running the program a few times you will see files like these generated:
The file without a suffix is the most recent log file.
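For completeness, here is a minimal sketch of the size-based counterpart (the size limit and backup count are arbitrary): RotatingFileHandler starts a new file once the current one exceeds maxBytes, keeping at most backupCount old files:
import logging
from logging import handlers

logger = logging.getLogger('size_rotating_demo')
logger.setLevel(logging.DEBUG)

# rotate when the file reaches ~1 MB, keep up to 3 old files
size_rotating_handler = handlers.RotatingFileHandler(
    filename='rotating_size_test.log', maxBytes=1024 * 1024, backupCount=3)
size_rotating_handler.setLevel(logging.DEBUG)
size_rotating_handler.setFormatter(
    logging.Formatter('%(asctime)s - %(levelname)s: %(message)s'))
logger.addHandler(size_rotating_handler)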
Reference articles: "Python + logging: output to the screen and write logs to a file"; "Python standard module: logging".