In Python, we generally use the built-in logging module to record logs. Before writing any logs, we usually need to configure Handlers and Formatters to do some setup, for example sending output to different destinations, setting different output formats, or setting up log rotation and backups. Personally, though, I find logging not that pleasant to use: the configuration is fairly involved.
Common usage
First let's look at a common logging setup. I usually configure output to a file, to the console, and to Elasticsearch. Console output is just for viewing directly while the program runs; file output keeps everything on disk, preserving a complete history; and Elasticsearch output makes Elasticsearch the center of storage and analysis, so it's very convenient to inspect runs with Kibana.
So I basically wrap logging like this:
import logging
import sys
from os import makedirs
from os.path import dirname, exists

from cmreslogging.handlers import CMRESHandler

loggers = {}

LOG_ENABLED = True  # whether logging is enabled
LOG_TO_CONSOLE = True  # whether to output to the console
LOG_TO_FILE = True  # whether to output to a file
LOG_TO_ES = True  # whether to output to Elasticsearch

LOG_PATH = './runtime.log'  # log file path
LOG_LEVEL = 'DEBUG'  # log level
LOG_FORMAT = '%(levelname)s - %(asctime)s - process: %(process)d - %(filename)s - %(name)s - %(lineno)d - %(module)s - %(message)s'  # format of each log entry

ELASTIC_SEARCH_HOST = 'eshost'  # Elasticsearch host
ELASTIC_SEARCH_PORT = 9200  # Elasticsearch port
ELASTIC_SEARCH_INDEX = 'runtime'  # Elasticsearch index name
APP_ENVIRONMENT = 'dev'  # runtime environment, e.g. test or production
def get_logger(name=None):
    """
    Get a logger by name.
    :param name: name of the logger
    :return: logger
    """
    global loggers

    if not name:
        name = __name__

    if loggers.get(name):
        return loggers.get(name)

    logger = logging.getLogger(name)
    logger.setLevel(LOG_LEVEL)

    # output to the console
    if LOG_ENABLED and LOG_TO_CONSOLE:
        stream_handler = logging.StreamHandler(sys.stdout)
        stream_handler.setLevel(level=LOG_LEVEL)
        formatter = logging.Formatter(LOG_FORMAT)
        stream_handler.setFormatter(formatter)
        logger.addHandler(stream_handler)

    # output to a file
    if LOG_ENABLED and LOG_TO_FILE:
        # create the log folder if the path does not exist
        log_dir = dirname(LOG_PATH)
        if log_dir and not exists(log_dir):
            makedirs(log_dir)
        # add a FileHandler
        file_handler = logging.FileHandler(LOG_PATH, encoding='utf-8')
        file_handler.setLevel(level=LOG_LEVEL)
        formatter = logging.Formatter(LOG_FORMAT)
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)

    # output to Elasticsearch
    if LOG_ENABLED and LOG_TO_ES:
        # add a CMRESHandler
        es_handler = CMRESHandler(hosts=[{'host': ELASTIC_SEARCH_HOST, 'port': ELASTIC_SEARCH_PORT}],
                                  # authentication can be configured here
                                  auth_type=CMRESHandler.AuthType.NO_AUTH,
                                  es_index_name=ELASTIC_SEARCH_INDEX,
                                  # one index per month
                                  index_name_frequency=CMRESHandler.IndexNameFrequency.MONTHLY,
                                  # extra field marking the environment
                                  es_additional_fields={'environment': APP_ENVIRONMENT})
        es_handler.setLevel(level=LOG_LEVEL)
        formatter = logging.Formatter(LOG_FORMAT)
        es_handler.setFormatter(formatter)
        logger.addHandler(es_handler)

    # save to the global loggers dict
    loggers[name] = logger
    return logger
How do we use it once it's defined? Just call the method above to get a logger, then log whatever you need:
logger = get_logger()
logger.debug('this is a message')
The output looks like this:
DEBUG - 2019-10-11 22:27:35,923 - process: 99490 - logger.py - __main__ - 81 - logger - this is a message
Let's walk through this implementation. First, a few constants define the basic properties of the logging setup: LOG_ENABLED toggles logging on and off, LOG_TO_ES controls whether logs are sent to Elasticsearch, and so on. There are also general settings, such as LOG_FORMAT, which defines the output format of each log entry, plus the necessary connection details. At runtime these variables can be wired up to command-line arguments or environment variables, which makes it easy to flip switches and change settings.
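For instance, here is a minimal sketch of wiring a few of these settings to environment variables (the variable names and defaults are my own illustration, not part of the original code):

import os

# read switches and settings from environment variables, with fallbacks
LOG_ENABLED = os.environ.get('LOG_ENABLED', 'true').lower() == 'true'
LOG_LEVEL = os.environ.get('LOG_LEVEL', 'DEBUG')
ELASTIC_SEARCH_HOST = os.environ.get('ELASTIC_SEARCH_HOST', 'eshost')
ELASTIC_SEARCH_PORT = int(os.environ.get('ELASTIC_SEARCH_PORT', '9200'))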
Then the get_logger method is defined. It takes one parameter, name. After receiving the name, it first looks it up in the global loggers variable, which is a global dictionary: if a logger with that name has already been created, it is returned directly, without re-initialization. If loggers has no logger for that name, a new one is created. Once the logger is created, we can add the various corresponding Handlers: a StreamHandler for console output, a FileHandler or RotatingFileHandler for file output, and a CMRESHandler for Elasticsearch, each configured with the relevant settings.
Finally, the new logger is saved into the global loggers dict and returned, so the next request for a logger with the same name finds it in loggers and returns it directly.
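As a quick illustration of that caching behavior, two calls with the same name return the very same object (the name 'spider' is just an example):

logger_a = get_logger('spider')
logger_b = get_logger('spider')
# both calls return the same cached logger instance
assert logger_a is logger_b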
Output to Elasticsearch depends on an extra package called CMRESHandler, which supports sending logs into Elasticsearch. Install it if you want to use it:
pip install CMRESHandler
Its GitHub address is github.com/cmanaha/py…; see its official documentation for specifics such as configuring authentication and index splitting.
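For example, if your cluster requires basic authentication, the handler can be created roughly like this, based on my reading of the project README ('user' and 'password' are placeholders):

es_handler = CMRESHandler(hosts=[{'host': ELASTIC_SEARCH_HOST, 'port': ELASTIC_SEARCH_PORT}],
                          # basic auth takes a (user, password) tuple
                          auth_type=CMRESHandler.AuthType.BASIC_AUTH,
                          auth_details=('user', 'password'),
                          es_index_name=ELASTIC_SEARCH_INDEX)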
OK, that's the logging configuration I used to use. With the setup above, logging goes to all three destinations with the expected effects. For example, once logs are in Elasticsearch, I can very easily use Kibana to check what's currently running, filter ERROR logs, and so on, and we can also build further statistics and analysis on top of it.
loguru
The implementation above is a perfectly workable configuration scheme. Still, I find some of the Handlers troublesome to set up, and when starting a new project I'm too lazy to write out the configuration. Even skipping the setup above and using the most basic couple of lines of logging configuration, like the following:
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
I'm still too lazy to write it, and it doesn't feel like an elegant way to do things.
Where there's demand, there's motivation. Sure enough, someone has built such a library, called loguru, which makes configuring and using logs much simpler and more convenient.
Let's see how it works.
Installation
First, installing this library is very simple: just use pip. For Python 3:
pip3 install loguru
After installation, we can use the loguru library in our project.
Basic usage
So how is this library used? Let's get a feel for it with an example:
from loguru import logger
logger.debug('this is a debug message')
See that? No configuration at all: import the logger, call its debug method, and that's it.
In loguru there is one and only one central object, the logger. loguru has a single logger, and it comes preconfigured with some basic settings, such as a friendly format and colored output.
Running the code above produces:
2019-10-13 22:46:12.367 | DEBUG | __main__:<module>:4 - this is a debug message
As you can see, the default output format shown above includes the time, level, module name, line number, and log message, and there's no need to create a logger manually: it works out of the box. The output is also colored, which looks friendlier.
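If you want a different default format or level, loguru lets you remove the preconfigured handler and add your own; a minimal sketch (the format string here is my own example):

import sys
from loguru import logger

logger.remove()  # drop the preconfigured console handler
logger.add(sys.stderr, level='INFO', format='{time} | {level} | {message}')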
The logs above go straight to the console and nowhere else. If you want to send them somewhere else as well, such as saving them to a file, a single line of code is all it takes.
For example, to write the output to a runtime.log file, you can write:
from loguru import logger
logger.add('runtime.log')
logger.debug('this is a debug')
Very simple: no need to declare an extra FileHandler, just one add statement. After running, you'll find the same DEBUG message from the console in runtime.log in the current directory.
Those are the basics, but they're not enough; let's dig deeper into some of its features.
Detailed usage
Since this is logging, the most common need is output to files, and loguru's support for file output is very powerful: it can write to multiple files, split output by level, start a new file when one grows too large, automatically delete files that are too old, and so on.
Let's look at how these are done. It basically all comes down to the add method: calling add effectively attaches a Handler to the logger, and it exposes many parameters for configuring that Handler. Let's go through them in detail.
Let's first look at its method definition:
def add(
    self,
    sink,
    *,
    level=_defaults.LOGURU_LEVEL,
    format=_defaults.LOGURU_FORMAT,
    filter=_defaults.LOGURU_FILTER,
    colorize=_defaults.LOGURU_COLORIZE,
    serialize=_defaults.LOGURU_SERIALIZE,
    backtrace=_defaults.LOGURU_BACKTRACE,
    diagnose=_defaults.LOGURU_DIAGNOSE,
    enqueue=_defaults.LOGURU_ENQUEUE,
    catch=_defaults.LOGURU_CATCH,
    **kwargs
):
    pass
Looking at the source, it supports quite a few parameters, such as level, format, filter, colorize, and so on.
sink
We also notice a very important parameter, sink. Looking at the official documentation (loguru.readthedocs.io...), we learn that sink accepts many different kinds of objects. In summary:
• sink can be a file-like object, for example sys.stderr or open('file.log', 'w').
• sink can be a str string or a pathlib.Path object, i.e. a file path; when this type is recognized, the log file at that path is created automatically and logs are written into it.
• sink can be a callable, so you can define your own output logic.
• sink can be a Handler from the logging module, such as FileHandler, StreamHandler, or even the CMRESHandler mentioned above, which lets you reuse custom Handler configurations.
• sink can also be a custom class; see the official documentation for the exact interface it must implement.
So in the file-output demo above, we simply passed a str path, and loguru created the log file for us. That's how it works.
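To make the other sink types concrete, here is a small sketch passing a stream and a plain function as sinks (my_sink is a name I made up for illustration):

import sys
from loguru import logger

# a stream sink: warnings and above go to stderr
logger.add(sys.stderr, level='WARNING')

# a callable sink: receives each formatted message as a string
def my_sink(message):
    print(message, end='')

logger.add(my_sink)
logger.warning('sent to both sinks')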
format, filter, level
Now let's look at the other parameters, such as format, filter, and level.
Their concepts are basically the same as in the logging module. For example, you can use format, filter, and level to control the output like this:
logger.add('runtime.log', format="{time} {level} {message}", filter="my_module", level="INFO")
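Besides a module-name string, filter can also be a callable that receives the log record and decides whether to keep it; a small sketch (the file name and messages are my own):

from loguru import logger

# keep only records whose message contains 'special'
logger.add('special.log', filter=lambda record: 'special' in record['message'])
logger.debug('this message is skipped')
logger.debug('this special message goes to special.log')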
Removing a sink
A sink can also be removed after it has been added, which stops that output and lets you start writing fresh content elsewhere.
Removal uses the id returned by the add method; see the following example:
from loguru import logger
trace = logger.add('runtime.log')
logger.debug('this is a debug message')
logger.remove(trace)
logger.debug('this is another debug message')
Look at this: we first add a sink and assign its return value to trace. Then we output one log message, pass the trace variable to the remove method, output another log message, and see what happens.
The console output is as follows:
2019-10-13 23:18:26.469 | DEBUG | __main__:<module>:4 - this is a debug message
2019-10-13 23:18:26.469 | DEBUG | __main__:<module>:6 - this is another debug message
The log file runtime.log contains the following:
2019-10-13 23:18:26.469 | DEBUG | __main__:<module>:4 - this is a debug message
As you can see, after the remove method is called, the second message is no longer written to the file: the sink has really been removed.
This lets us redirect subsequent logs and start writing fresh wherever we like.
rotation configuration
With loguru, rotation is also very easy to configure. For example, if we want a new log file every day, or want oversized files to be split automatically, we can configure this directly with the rotation parameter of the add method.
Let's look at the following example:
logger.add('runtime_{time}.log', rotation="500 MB")
With this configuration, a new file is started every 500 MB: whenever a log file grows too large, a new one is created. We also added a {time} placeholder to the log file name, so that it is replaced with the current time when the file is generated, giving each log file a timestamped name.
The rotation parameter can also create log files on a schedule, for example:
logger.add('runtime_{time}.log', rotation='00:00')
This way, a new log file is created at midnight (00:00) every day.
We can also configure the rotation period of the log file, for example to create one log file per week:
logger.add('runtime_{time}.log', rotation='1 week')
This way, a new log file is created every week.
retention configuration
In many cases, very old logs are no longer useful to us: they take up storage space, and it's wasteful not to clear them out. The retention parameter configures the maximum retention time of logs.
For example, to keep log files for at most 10 days, you can configure it like this:
logger.add('runtime.log', retention='10 days')
This way, the log file keeps only the latest 10 days of logs, and we no longer need to worry about logs piling up.
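If I read the loguru docs correctly, retention also accepts an integer (a maximum number of files to keep) or a datetime.timedelta; a sketch under that assumption:

from datetime import timedelta
from loguru import logger

# keep at most 3 rotated files (integer form, an assumption from the docs)
logger.add('runtime_{time}.log', rotation='1 day', retention=3)

# or express the retention window as a timedelta
logger.add('runtime_{time}.log', rotation='1 day', retention=timedelta(days=10))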
compression configuration
loguru can also compress log files, for example saving them in zip format. An example:
logger.add('runtime.log', compression='zip')
This saves even more storage space.
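These parameters also combine naturally in a single add call; a typical combination (the values are only examples) might be:

from loguru import logger

# rotate at 500 MB, keep files for 10 days, and zip rotated files
logger.add('runtime_{time}.log', rotation='500 MB', retention='10 days', compression='zip')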
String formatting
loguru also provides very friendly string formatting when outputting logs, like this:
logger.info('If you are using Python {}, prefer {feature} of course!', 3.6, feature='f-strings')
This makes it very convenient to pass in parameters.
Traceback logging
In many cases, if a runtime error occurs and we carelessly failed to configure Traceback output in our logs, there's a good chance we won't be able to track down the error.
With loguru, however, we can record Tracebacks directly using the decorator it provides; this is all the configuration it takes:
@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)
Let's test it. We call the function with all three arguments set to 0, directly triggering a division-by-zero error, and see what happens:
my_function(0, 0, 0)
After running it, you'll find a Traceback in the log, and it even shows us the values of the variables at the time. What more could you ask for? The result:
> File "run.py", line 15, in <module>
my_function(0, 0, 0)
└ <function my_function at 0x1171dd510>
File "/private/var/py/logurutest/demo5.py", line 13, in my_function
return 1 / (x + y + z)
│ │ └ 0
│ └ 0
└ 0
ZeroDivisionError: division by zero
So with loguru, log-based error tracing is easy; debugging efficiency might well go up tenfold.
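Besides the decorator, you can also record a caught exception manually: loguru's logger.exception logs the message together with the full traceback. A small sketch:

from loguru import logger

try:
    1 / 0
except ZeroDivisionError:
    logger.exception('division failed')  # logs at ERROR level with the traceback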
That's everything I wanted to share this time.