Browse the documentation by section:

What's New in Python 3.11
Or browse all "What's New" documents since 2.0

Tutorial
Start here

Library Reference
Keep this under your pillow

Language Reference
Describes syntax and language elements

Python Setup and Usage
How to install and use Python on different operating systems

Python HOWTOs
In-depth documents on specific topics

Installing Python Modules
Installing modules from the official PyPI or from other sources

Distributing Python Modules
Publishing modules for installation by others

Extending and Embedding
Tutorial for C/C++ programmers

Python/C API
Reference manual for C/C++ programmers

FAQs
Frequently asked questions (with answers!)

Indexes and tables:

Global Module Index
Quick access to all modules

General Index
All functions, classes, and terms

Glossary
Explains the most important terms

Search page
Search this documentation

Complete Table of Contents
Lists all sections and subsections
Built-in Functions

The Python interpreter has a number of functions and types built into it that are always available. They are listed here in alphabetical order.
A
abs()
aiter()
all()
any()
anext()
ascii()
B
bin()
bool()
breakpoint()
bytearray()
bytes()
C
callable()
chr()
classmethod()
compile()
complex()
D
delattr()
dict()
dir()
divmod()
E
enumerate()
eval()
exec()
F
filter()
float()
format()
frozenset()
G
getattr()
globals()
H
hasattr()
hash()
help()
hex()
I
id()
input()
int()
isinstance()
issubclass()
iter()
L
len()
list()
locals()
M
map()
max()
memoryview()
min()
N
next()
O
object()
oct()
open()
ord()
P
pow()
print()
property()
R
range()
repr()
reversed()
round()
S
set()
setattr()
slice()
sorted()
staticmethod()
str()
sum()
super()
T
tuple()
type()
V
vars()
Z
zip()
_
__import__()
The Python Standard Library

While The Python Language Reference describes the exact syntax and semantics of the Python language, this library reference manual describes the standard library that is distributed with Python. It also describes some of the optional components that are commonly included in Python distributions.

Python's standard library is very extensive, offering a wide range of facilities as indicated by the table of contents. The library contains built-in modules (written in C) that Python programmers must rely on for system-level functionality such as file I/O, as well as a large number of modules written in Python that provide standardized solutions for many problems that occur in everyday programming. Some of these modules are explicitly designed to encourage and enhance the portability of Python programs by abstracting away platform-specifics into platform-neutral APIs.

The Python installers for the Windows platform usually include the entire standard library and often also many additional components. For Unix-like operating systems, Python is normally provided as a collection of packages, so it may be necessary to use the packaging tools provided with the operating system to obtain some or all of the optional components.

In addition to the standard library, there is a growing collection of several thousand components (from individual programs and modules to packages and entire application development frameworks), available from the Python Package Index, where these third-party packages can be obtained.
abs(x)

Return the absolute value of a number. The argument may be an integer, a floating point number, or an object implementing __abs__(). If the argument is a complex number, its magnitude is returned.
aiter(async_iterable)

Return an asynchronous iterator for an asynchronous iterable. Equivalent to calling x.__aiter__().

Note: unlike iter(), aiter() has no 2-argument variant.

New in version 3.10.
all(iterable)

Return True if all elements of the iterable are true (or if the iterable is empty). Equivalent to:

def all(iterable):
    for element in iterable:
        if not element:
            return False
    return True
awaitable anext(async_iterator[, default])

When awaited, return the next item from the given asynchronous iterator, or default if given and the iterator is exhausted.

This is the async variant of the next() builtin, and behaves similarly.

It calls the __anext__() method of async_iterator, returning an awaitable. Awaiting this returns the next value of the iterator. If default is given, it is returned if the iterator is exhausted, otherwise StopAsyncIteration is raised.

New in version 3.10.
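As a brief sketch of how this can be used (the countdown async generator below is a hypothetical example, not part of the standard library):

import asyncio

async def countdown(n):
    # A small async generator used only for illustration.
    while n > 0:
        yield n
        n -= 1

async def main():
    it = aiter(countdown(2))
    print(await anext(it))          # 2
    print(await anext(it))          # 1
    print(await anext(it, 'done'))  # 'done' (iterator exhausted)

asyncio.run(main())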
any(iterable)

Return True if any element of the iterable is true. If the iterable is empty, return False. Equivalent to:

def any(iterable):
    for element in iterable:
        if element:
            return True
    return False
ascii(object)

As repr(), return a string containing a printable representation of an object, but escape the non-ASCII characters in the string returned by repr() using \x, \u, or \U escapes. This generates a string similar to that returned by repr() in Python 2.
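A quick illustration of the difference from repr():

>>> repr('café')
"'café'"
>>> ascii('café')
"'caf\\xe9'"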
bin(x)

Convert an integer number to a binary string prefixed with "0b". The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer. Some examples:

>>> bin(3)
'0b11'
>>> bin(-10)
'-0b1010'

To control whether or not the "0b" prefix is shown, you can use either of the following ways:

>>> format(14, '#b'), format(14, 'b')
('0b1110', '1110')
>>> f'{14:#b}', f'{14:b}'
('0b1110', '1110')

See also format() for more information.
class bool([x])

Return a Boolean value, i.e. one of True or False. x is converted using the standard truth testing procedure. If x is false or omitted, this returns False; otherwise, it returns True. The bool class is a subclass of int (see Numeric types --- int, float, complex). It cannot be subclassed further. Its only instances are False and True (see Boolean values).

Changed in version 3.7: x is now a positional-only parameter.
breakpoint(*args, **kws)

This function drops you into the debugger at the call site. Specifically, it calls sys.breakpointhook(), passing args and kws straight through. By default, sys.breakpointhook() calls pdb.set_trace() expecting no arguments. In this case, it is purely a convenience function so you don't have to explicitly import pdb or type as much code to enter the debugger. However, sys.breakpointhook() can be set to some other function, and breakpoint() will automatically call that, allowing you to drop into the debugger of choice.

Raises an auditing event builtins.breakpoint with argument breakpointhook.

New in version 3.7.
class bytearray([source[, encoding[, errors]]])

Return a new array of bytes. The bytearray class is a mutable sequence of integers in the range 0 <= x < 256. It has most of the usual methods of mutable sequences, described in Mutable sequence types, as well as most methods that the bytes type has, see Bytes and bytearray operations.

The optional source parameter can be used to initialize the array in a few different ways:

If it is a string, you must also give the encoding (and optionally, errors) parameters; bytearray() then converts the string to bytes using str.encode().

If it is an integer, the array will have that size and will be initialized with null bytes.

If it is an object conforming to the buffer interface, a read-only buffer of the object will be used to initialize the byte array.

If it is an iterable, it must be an iterable of integers in the range 0 <= x < 256, which are used as the initial contents of the array.

Without an argument, an array of size 0 is created.

See also Binary sequence types --- bytes, bytearray, memoryview and Bytearray objects.
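A few short examples of the initialization forms described above:

>>> bytearray('abc', 'ascii')
bytearray(b'abc')
>>> bytearray(3)
bytearray(b'\x00\x00\x00')
>>> bytearray([65, 66, 255])
bytearray(b'AB\xff')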
class bytes([source[, encoding[, errors]]])

Return a new "bytes" object, which is an immutable sequence of integers in the range 0 <= x < 256. bytes is an immutable version of bytearray: it has the same non-mutating methods and the same indexing and slicing behavior.

Accordingly, constructor arguments are interpreted as for bytearray().

Bytes objects can also be created with literals, see String and bytes literals.

See also Binary sequence types --- bytes, bytearray, memoryview, Bytes objects, and Bytes and bytearray operations.
callable(object)

Return True if the object argument appears callable, False if not. If this returns True, it is still possible that a call fails, but if it is False, calling object will never succeed. Note that classes are callable (calling a class returns a new instance); instances are callable if their class has a __call__() method.

New in version 3.2: This function was removed in Python 3.0 and brought back in Python 3.2.
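For instance, a class with __call__() makes both the class and its instances callable:

>>> callable(len), callable(42)
(True, False)
>>> class Adder:
...     def __call__(self, x, y):
...         return x + y
>>> callable(Adder), callable(Adder())
(True, True)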
chr(i)

Return the string representing a character whose Unicode code point is the integer i. For example, chr(97) returns the string 'a', while chr(8364) returns the string '€'. This is the inverse of ord().

The valid range for the argument is from 0 through 1,114,111 (0x10FFFF in base 16). ValueError will be raised if i is outside that range.
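A quick illustration, including the out-of-range case:

>>> chr(97), chr(8364)
('a', '€')
>>> ord(chr(0x10FFFF)) == 0x10FFFF
True
>>> chr(0x110000)
Traceback (most recent call last):
  ...
ValueError: chr() arg not in range(0x110000)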
@classmethod

Transform a method into a class method.

A class method receives the class as an implicit first argument, just like an instance method receives the instance. To declare a class method, use this idiom:

class C:
    @classmethod
    def f(cls, arg1, arg2): ...

The @classmethod form is a function decorator; see Function definitions for details.

A class method can be called either on the class (such as C.f()) or on an instance (such as C().f()). The instance is ignored except for its class. If a class method is called for a derived class, the derived class object is passed as the implied first argument.

Class methods are different from C++ or Java static methods. If you want those, see staticmethod(). For more information on class methods, see The standard type hierarchy.

Changed in version 3.9: Class methods can now wrap other descriptors such as property().

Changed in version 3.10: Class methods now inherit the method attributes (__module__, __name__, __qualname__, __doc__ and __annotations__) and have a new __wrapped__ attribute.

Changed in version 3.11: Class methods can no longer wrap other descriptors such as property().
compile(source, filename, mode, flags=0, dont_inherit=False, optimize=-1)

Compile the source into a code or AST object. Code objects can be executed by exec() or eval(). source can be a normal string, a byte string, or an AST object. Refer to the ast module documentation for information on how to work with AST objects.

The filename argument should give the file from which the code was read; pass some recognizable value if it wasn't read from a file ('<string>' is commonly used).

The mode argument specifies what kind of code must be compiled; it can be 'exec' if source consists of a sequence of statements, 'eval' if it consists of a single expression, or 'single' if it consists of a single interactive statement (in the latter case, expression statements that evaluate to something other than None will be printed).

The optional arguments flags and dont_inherit control which compiler options should be activated and which future features should be allowed. If neither is present (or both are zero), the code is compiled with the same flags that affect the code calling compile(). If the flags argument is given and dont_inherit is not (or is zero), then the compiler options and future statements specified by flags are used in addition to those that would be used anyway. If dont_inherit is a non-zero integer, only the flags argument is used; the flags (future features and compiler options) in the surrounding code are ignored.

Compiler options and future statements are specified by bits which can be bitwise ORed together to specify multiple options. The bitfield required to specify a given future feature can be found as the compiler_flag attribute on the _Feature instance in the __future__ module. Compiler flags can be found in the ast module, with a PyCF_ prefix.

The argument optimize specifies the optimization level of the compiler; the default value of -1 selects the optimization level of the interpreter as given by -O options. Explicit levels are 0 (no optimization; __debug__ is true), 1 (asserts are removed, __debug__ is false) or 2 (docstrings are removed too).

This function raises SyntaxError if the compiled source is invalid, and ValueError if the source contains null bytes.

If you want to parse Python code into its AST representation, see ast.parse().

Raises an auditing event compile with arguments source and filename.

Note: When compiling a string with multi-line code in 'single' or 'eval' mode, input must be terminated by at least one newline character. This is to facilitate detection of incomplete and complete statements in the code module.

Warning: It is possible to crash the Python interpreter with a sufficiently large or complex string when compiling to an AST object, due to stack depth limitations in Python's AST compiler.

Changed in version 3.2: Allowed use of Windows and Mac newlines. Also, input in 'exec' mode no longer has to end in a newline. Added the optimize parameter.

Changed in version 3.5: Previously, TypeError was raised when null bytes were encountered in source.

New in version 3.8: ast.PyCF_ALLOW_TOP_LEVEL_AWAIT can now be passed in flags to enable support for top-level await, async for, and async with.
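A minimal sketch of the 'eval' and 'exec' modes described above (the source strings are illustrative only):

code = compile('x + 1', '<string>', 'eval')                  # single expression
print(eval(code, {'x': 41}))                                 # 42

stmts = compile('y = 2\nprint(y * 3)', '<string>', 'exec')   # statement sequence
exec(stmts)                                                  # prints 6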
class complex([real[, imag]])

Return a complex number with the value real + imag*1j, or convert a string or number to a complex number. If the first parameter is a string, it will be interpreted as a complex number and the function must be called without a second parameter. The second parameter can never be a string. Each argument may be any numeric type (including complex). If imag is omitted, it defaults to zero and the constructor serves as a numeric conversion like int and float. If both arguments are omitted, 0j is returned.

For a general Python object x, complex(x) delegates to x.__complex__(). If __complex__() is not defined, it falls back to __float__(). If __float__() is not defined, it falls back to __index__().

Note: When converting from a string, the string must not contain whitespace around the central + or - operator. For example, complex('1+2j') is fine, but complex('1 + 2j') raises ValueError.

The complex type is described in Numeric types --- int, float, complex.

Changed in version 3.6: Grouping digits with underscores as in code literals is allowed.

Changed in version 3.8: Falls back to __index__() if __complex__() and __float__() are not defined.
delattr(object, name)

This is a relative of setattr(). The arguments are an object and a string. The string must be the name of one of the object's attributes. The function deletes the named attribute, provided the object allows it. For example, delattr(x, 'foobar') is equivalent to del x.foobar.

class dict(**kwarg)
class dict(mapping, **kwarg)
class dict(iterable, **kwarg)

Create a new dictionary. The dict object is the dictionary class. See dict and Mapping types --- dict for documentation about this class.

For other containers see the built-in list, set, and tuple classes, as well as the collections module.
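The three constructor forms accept keyword arguments, a mapping, or an iterable of key/value pairs; for example, these all build the same dictionary:

>>> dict(one=1, two=2)
{'one': 1, 'two': 2}
>>> dict({'one': 1, 'two': 2})
{'one': 1, 'two': 2}
>>> dict([('one', 1), ('two', 2)])
{'one': 1, 'two': 2}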
dir([object])

Without arguments, return the list of names in the current local scope. With an argument, attempt to return a list of valid attributes for that object.

If the object has a method named __dir__(), this method will be called and must return the list of attributes. This allows objects that implement a custom __getattr__() or __getattribute__() function to customize the way dir() reports their attributes.

If the object does not provide __dir__(), the function tries its best to gather information from the object's __dict__ attribute, if defined, and from its type object. The resulting list is not necessarily complete and may be inaccurate when the object has a custom __getattr__().

The default dir() mechanism behaves differently with different types of objects, as it attempts to produce the most relevant, rather than complete, information:

If the object is a module object, the list contains the names of the module's attributes.

If the object is a type or class object, the list contains the names of its attributes, and recursively of the attributes of its bases.

Otherwise, the list contains the object's attribute names, the names of its class's attributes, and recursively of the attributes of its class's base classes.

The resulting list is sorted alphabetically. For example:

>>> import struct
>>> dir()   # show the names in the module namespace
['__builtins__', '__name__', 'struct']
>>> dir(struct)   # show the names in the struct module
['Struct', '__all__', '__builtins__', '__cached__', '__doc__', '__file__',
 '__initializing__', '__loader__', '__name__', '__package__',
 '_clearcache', 'calcsize', 'error', 'pack', 'pack_into',
 'unpack', 'unpack_from']
>>> class Shape:
...     def __dir__(self):
...         return ['area', 'perimeter', 'location']
>>> s = Shape()
>>> dir(s)
['area', 'location', 'perimeter']

Note: Because dir() is supplied primarily as a convenience for use at an interactive prompt, it tries to supply an interesting set of names more than it tries to supply a rigorously or consistently defined set of names, and its detailed behavior may change across releases. For example, metaclass attributes are not in the result list when the argument is a class.
divmod(a, b)

Take two (non-complex) numbers as arguments and return a pair of numbers consisting of their quotient and remainder when using integer division. With mixed operand types, the rules for binary arithmetic operators apply. For integers, the result is the same as (a // b, a % b). For floating point numbers the result is (q, a % b), where q is usually math.floor(a / b) but may be 1 less than that. In any case, q * b + a % b is very close to a; if a % b is non-zero it has the same sign as b, and 0 <= abs(a % b) < abs(b).
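A quick illustration of the integer and floating point cases:

>>> divmod(7, 3)
(2, 1)
>>> divmod(-7, 3)
(-3, 2)
>>> divmod(7.5, 2)
(3.0, 1.5)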
enumerate(iterable, start=0)

Return an enumerate object. iterable must be a sequence, an iterator, or some other object which supports iteration. The __next__() method of the iterator returned by enumerate() returns a tuple containing a count (from start, which defaults to 0) and the values obtained from iterating over iterable.

>>> seasons = ['Spring', 'Summer', 'Fall', 'Winter']
>>> list(enumerate(seasons))
[(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
>>> list(enumerate(seasons, start=1))
[(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]

Equivalent to:

def enumerate(sequence, start=0):
    n = start
    for elem in sequence:
        yield n, elem
        n += 1
eval(expression[, globals[, locals]])

The arguments are a string and optional globals and locals. If provided, globals must be a dictionary. If provided, locals can be any mapping object.

The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace. If the globals dictionary is present and does not contain a value for the key __builtins__, a reference to the dictionary of the built-in module builtins is inserted under that key before expression is parsed. That way you can control what builtins are available to the executed code by inserting your own __builtins__ dictionary into globals before passing it to eval(). If the locals dictionary is omitted, it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed with the globals and locals in the environment where eval() is called. Note, eval() does not have access to the nested scopes (non-locals) in the enclosing environment.

The return value is the result of the evaluated expression. Syntax errors are reported as exceptions. Example:

>>> x = 1
>>> eval('x+1')
2

This function can also be used to execute arbitrary code objects (such as those created by compile()). In this case, pass a code object instead of a string. If the code object has been compiled with 'exec' as the mode argument, eval()'s return value will be None.

Hints: dynamic execution of statements is supported by the exec() function. The globals() and locals() functions return the current global and local dictionary, respectively, which may be useful to pass around for use by eval() or exec().

If the given source is a string, then leading and trailing spaces and tabs are stripped.

See ast.literal_eval() for a function that can safely evaluate strings with expressions containing only literals.

Raises an auditing event exec with argument code_object.
exec(object[, globals[, locals]], *, closure=None)

This function supports dynamic execution of Python code. object must be either a string or a code object. If it is a string, the string is parsed as a suite of Python statements, which is then executed (unless a syntax error occurs). If it is a code object, it is simply executed. In all cases, the code that's executed is expected to be valid as file input (see the section "File input" in the Reference Manual). Be aware that the nonlocal, yield, and return statements may not be used outside of function definitions, even within the context of code passed to the exec() function. The return value is None.

In all cases, if the optional parts are omitted, the code is executed in the current scope. If only globals is provided, it must be a dictionary (and not a subclass of dictionary), which will be used for both the global and local variables. If globals and locals are given, they are used for the global and local variables, respectively. If provided, locals can be any mapping object. Remember that at the module level, globals and locals are the same dictionary. If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition.

If the globals dictionary does not contain a value for the key __builtins__, a reference to the dictionary of the built-in module builtins is inserted under that key. That way you can control what builtins are available to the executed code by inserting your own __builtins__ dictionary into globals before passing it to exec().

The closure argument specifies a closure, a tuple of cellvars. It's only valid when the object is a code object containing free variables. The length of the tuple must exactly match the number of free variables referenced by the code object.

Raises an auditing event exec with argument code_object.

Note: The built-in functions globals() and locals() return the current global and local dictionary, respectively, which may be useful to pass around for use as the second and third argument to exec().

Note: The default locals act as described for the locals() function below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function exec() returns.

Changed in version 3.11: Added the closure parameter.
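A short sketch of running code in an isolated namespace (the names here are illustrative only):

namespace = {}
exec("total = sum(range(5))", namespace)   # runs with its own globals dict
print(namespace['total'])                  # 10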
filter(function, iterable)

Construct an iterator from those elements of iterable for which function returns true. iterable may be either a sequence, a container which supports iteration, or an iterator. If function is None, the identity function is assumed, that is, all elements of iterable that are false are removed.

Note that filter(function, iterable) is equivalent to the generator expression (item for item in iterable if function(item)) if function is not None and (item for item in iterable if item) if function is None.

See itertools.filterfalse() for the complementary function that returns elements of iterable for which function returns false.
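A quick illustration of both forms:

>>> list(filter(str.isalpha, ['spam', '42', 'eggs', '']))
['spam', 'eggs']
>>> list(filter(None, [0, 1, '', 'x', None]))
[1, 'x']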
class float([x])

Return a floating point number constructed from a number or string x.

If the argument is a string, it should contain a decimal number, optionally preceded by a sign, and optionally embedded in whitespace. The optional sign may be '+' or '-'; a '+' sign has no effect on the value produced. The argument may also be a string representing NaN (not-a-number), or positive or negative infinity. More precisely, the input must conform to the following grammar after leading and trailing whitespace characters are removed:

sign           ::= "+" | "-"
infinity       ::= "Infinity" | "inf"
nan            ::= "nan"
numeric_value  ::= floatnumber | infinity | nan
numeric_string ::= [sign] numeric_value

Here floatnumber is the form of a Python floating-point literal, described in Floating point literals. Case is not significant, so, for example, "inf", "Inf", "INFINITY", and "iNfINity" are all acceptable spellings for positive infinity.

Otherwise, if the argument is an integer or a floating point number, a floating point number with the same value (within Python's floating point precision) is returned. If the argument is outside the range of a Python float, an OverflowError will be raised.

For a general Python object x, float(x) delegates to x.__float__(). If __float__() is not defined, it falls back to __index__().

If no argument is given, 0.0 is returned.

Examples:

>>> float('+1.23')
1.23
>>> float('   -12345\n')
-12345.0
>>> float('1e-003')
0.001
>>> float('+1E6')
1000000.0
>>> float('-Infinity')
-inf

The float type is described in Numeric types --- int, float, complex.

Changed in version 3.6: Grouping digits with underscores as in code literals is allowed.

Changed in version 3.7: x is now a positional-only parameter.

Changed in version 3.8: Falls back to __index__() if __float__() is not defined.
format(value[, format_spec])

Convert a value to a "formatted" representation, as controlled by format_spec. The interpretation of format_spec depends on the type of the value argument; however, there is a standard formatting syntax that is used by most built-in types: the Format Specification Mini-Language.

The default format_spec is an empty string, which usually gives the same effect as calling str(value).

A call to format(value, format_spec) is translated to type(value).__format__(value, format_spec), which bypasses the instance dictionary when searching for the value's __format__() method. A TypeError exception is raised if the method search reaches object and the format_spec is non-empty, or if either the format_spec or the return value is not a string.

Changed in version 3.4: object().__format__(format_spec) raises TypeError if format_spec is not an empty string.
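A few examples using the standard format specification mini-language:

>>> format(1234.56789, '.2f')
'1234.57'
>>> format(255, '#06x')
'0x00ff'
>>> format('left', '<8') + '|'
'left    |'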
class frozenset([iterable])

Return a new frozenset object, optionally with elements taken from iterable. frozenset is a built-in class. See frozenset and Set types --- set, frozenset for documentation about this class.

For other containers see the built-in set, list, tuple, and dict classes, as well as the collections module.
getattr(object, name[, default])

Return the value of the named attribute of object. name must be a string. If the string is the name of one of the object's attributes, the result is the value of that attribute. For example, getattr(x, 'foobar') is equivalent to x.foobar. If the named attribute does not exist, default is returned if provided, otherwise AttributeError is raised.

Note: Since private name mangling happens at compilation time, one must manually mangle a private attribute's name (attributes with two leading underscores) in order to retrieve it with getattr().
globals()

Return the dictionary implementing the current module namespace. For code within functions, this is set when the function is defined and remains the same regardless of where the function is called.
hasattr(object, name)

The arguments are an object and a string. The result is True if the string is the name of one of the object's attributes, False if not. (This is implemented by calling getattr(object, name) and seeing whether it raises an AttributeError or not.)
hash(object)

Return the hash value of the object (if it has one). Hash values are integers. They are used to quickly compare dictionary keys during a dictionary lookup. Numeric values that compare equal have the same hash value (even if they are of different types, as is the case for 1 and 1.0).

Note: For objects with custom __hash__() methods, note that hash() truncates the return value based on the bit width of the host machine. See __hash__() for details.
help([object])

Invoke the built-in help system. (This function is intended for interactive use.) If no argument is given, the interactive help system starts on the interpreter console. If the argument is a string, then the string is looked up as the name of a module, function, class, method, keyword, or documentation topic, and a help page is printed on the console. If the argument is any other kind of object, a help page on the object is generated.

Note that if a slash (/) appears in the parameter list of a function when invoking help(), it means that the parameters prior to the slash are positional-only. For more information, see the FAQ entry on positional-only parameters.

This function is added to the built-in namespace by the site module.

Changed in version 3.4: Changes to pydoc and inspect mean that the reported signatures for callables are now more comprehensive and consistent.
hex(x)

Convert an integer number to a lowercase hexadecimal string prefixed with "0x". If x is not a Python int object, it has to define an __index__() method that returns an integer. Some examples:

>>> hex(255)
'0xff'
>>> hex(-42)
'-0x2a'

If you want to convert an integer number to an uppercase or lowercase hexadecimal string, with or without the "0x" prefix, you can use either of the following ways:

>>> '%#x' % 255, '%x' % 255, '%X' % 255
('0xff', 'ff', 'FF')
>>> format(255, '#x'), format(255, 'x'), format(255, 'X')
('0xff', 'ff', 'FF')
>>> f'{255:#x}', f'{255:x}', f'{255:X}'
('0xff', 'ff', 'FF')

See also format() for more information.

See also int() for converting a hexadecimal string to an integer using a base of 16.

Note: To obtain a hexadecimal string representation for a float, use the float.hex() method.
id(object)

Return the "identity" of an object. This is an integer which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value.

CPython implementation detail: This is the address of the object in memory.

Raises an auditing event builtins.id with argument id.
input([prompt])

If the prompt argument is present, it is written to standard output without a trailing newline. The function then reads a line from input, converts it to a string (stripping a trailing newline), and returns that. When EOF is read, EOFError is raised. Example:

>>> s = input('--> ')
--> Monty Python's Flying Circus
>>> s
"Monty Python's Flying Circus"

If the readline module was loaded, then input() will use it to provide elaborate line editing and history features.

Raises an auditing event builtins.input with argument prompt.

Raises an auditing event builtins.input/result with the result after successfully reading input.
class int([x])
class int(x, base=10)

Return an integer object constructed from a number or string x, or return 0 if no arguments are given. If x defines __int__(), int(x) returns x.__int__(). If x defines __index__(), it returns x.__index__(). If x defines __trunc__(), it returns x.__trunc__(). For floating point numbers, this truncates towards zero.

If x is not a number or if base is given, then x must be a string, bytes, or bytearray instance representing an integer literal in radix base. Optionally, the literal can be preceded by + or - (with no space in between) and surrounded by whitespace. A base-n literal consists of the digits 0 to n-1, with a to z (or A to Z) having values 10 to 35. The default base is 10. The allowed bases are 0 and 2-36. Base-2, -8, and -16 literals can be optionally prefixed with 0b/0B, 0o/0O, or 0x/0X, as with integer literals in code. Base 0 means to interpret exactly as a code literal, so that the actual base is 2, 8, 10, or 16; so int('010', 0) is not legal, while int('010') and int('010', 8) are.

The integer type is described in Numeric types --- int, float, complex.

Changed in version 3.4: If base is not an instance of int and the base object has a base.__index__ method, that method is called to obtain an integer for the base. Previous versions used base.__int__ instead of base.__index__.

Changed in version 3.6: Grouping digits with underscores as in code literals is allowed.

Changed in version 3.7: x is now a positional-only parameter.

Changed in version 3.8: Falls back to __index__() if __int__() is not defined.

Changed in version 3.11: The delegation to __trunc__() is deprecated.
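A quick illustration of the string and numeric forms:

>>> int('ff', 16)
255
>>> int('0b101', 0)
5
>>> int(-3.9)
-3
>>> int('1_000_000')
1000000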
isinstance(object, classinfo)

Return True if the object argument is an instance of the classinfo argument, or of a (direct, indirect, or virtual) subclass thereof. If object is not an object of the given type, the function always returns False. If classinfo is a tuple of type objects (or recursively, other such tuples) or a union type of multiple types, return True if object is an instance of any of the types. If classinfo is not a type or tuple of types and such tuples, a TypeError exception is raised. TypeError may not be raised for an invalid type if an earlier check succeeds.

Changed in version 3.10: classinfo can be a union type.
issubclass(class, classinfo)

Return True if class is a subclass (direct, indirect, or virtual) of classinfo. A class is considered a subclass of itself. classinfo may be a tuple of class objects (or recursively, other such tuples) or a union type, in which case return True if class is a subclass of any entry in classinfo. In any other case, a TypeError exception is raised.

Changed in version 3.10: classinfo can be a union type.
iter(object[, sentinel])

Return an iterator object. The first argument is interpreted very differently depending on the presence of the second argument. Without a second argument, object must be a collection object which supports the iterable protocol (the __iter__() method), or it must support the sequence protocol (the __getitem__() method with integer arguments starting at 0). If it does not support either of those protocols, TypeError is raised. If the second argument, sentinel, is given, then object must be a callable object. The iterator created in this case will call object with no arguments for each call to its __next__() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned.

See also Iterator types.

One useful application of the second form of iter() is to build a block-reader. For example, reading fixed-width blocks from a binary database file until the end of file is reached:

from functools import partial
with open('mydata.db', 'rb') as f:
    for block in iter(partial(f.read, 64), b''):
        process_block(block)
len(s)

Return the length (the number of items) of an object. The argument may be a sequence (such as a string, bytes, tuple, list, or range) or a collection (such as a dictionary, set, or frozen set).

CPython implementation detail: len raises OverflowError on lengths larger than sys.maxsize, such as range(2 ** 100).
class list([iterable])

Rather than being a function, list is actually a mutable sequence type, as documented in Lists and Sequence types --- list, tuple, range.
locals()

Update and return a dictionary representing the current local symbol table. Free variables are returned by locals() when it is called in function blocks, but not in class blocks. Note that at the module level, locals() and globals() are the same dictionary.

Note: The contents of this dictionary should not be modified; changes may not affect the values of local and free variables used by the interpreter.
map(function, iterable, ...)

Return an iterator that applies function to every item of iterable, yielding the results. If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables in parallel. With multiple iterables, the iterator stops when the shortest iterable is exhausted. For cases where the function inputs are already arranged into argument tuples, see itertools.starmap().
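For instance:

>>> list(map(str.upper, ['a', 'b', 'c']))
['A', 'B', 'C']
>>> list(map(pow, [2, 3, 4], [3, 2, 1]))   # stops with the shortest iterable
[8, 9, 4]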
max(iterable, *[, key, default])
max(arg1, arg2, *args[, key])

Return the largest item in an iterable or the largest of two or more arguments.

If one positional argument is provided, it should be a non-empty iterable; the largest item in the iterable is returned. If two or more positional arguments are provided, the largest of the positional arguments is returned.

There are two optional keyword-only arguments. The key argument specifies a one-argument ordering function like that used for list.sort(). The default argument specifies an object to return if the provided iterable is empty. If the iterable is empty and default is not provided, a ValueError is raised.

If multiple items are maximal, the function returns the first one encountered. This is consistent with other sort-stability preserving tools such as sorted(iterable, key=keyfunc, reverse=True)[0] and heapq.nlargest(1, iterable, key=keyfunc).

New in version 3.4: The default keyword-only argument.

Changed in version 3.8: The key can be None.
class memoryview(object)

Return a "memory view" object created from the given argument. See Memory Views for more information.
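A brief sketch: a memory view gives zero-copy access to, and can write through to, the underlying buffer:

>>> data = bytearray(b'abcdefgh')
>>> view = memoryview(data)
>>> bytes(view[2:4])
b'cd'
>>> view[0] = ord('Z')
>>> data
bytearray(b'Zbcdefgh')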
min(iterable, *[, key, default])
min(arg1, arg2, *args[, key])

Return the smallest item in an iterable or the smallest of two or more arguments.

If one positional argument is provided, it should be a non-empty iterable; the smallest item in the iterable is returned. If two or more positional arguments are provided, the smallest of the positional arguments is returned.

There are two optional keyword-only arguments. The key argument specifies a one-argument ordering function like that used for list.sort(). The default argument specifies an object to return if the provided iterable is empty. If the iterable is empty and default is not provided, a ValueError is raised.

If multiple items are minimal, the function returns the first one encountered. This is consistent with other sort-stability preserving tools such as sorted(iterable, key=keyfunc)[0] and heapq.nsmallest(1, iterable, key=keyfunc).

New in version 3.4: The default keyword-only argument.

Changed in version 3.8: The key can be None.
next(iterator[, default])

Retrieve the next item from the iterator by calling its __next__() method. If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.
class object

Return a new featureless object. object is a base for all classes. It has methods that are common to all instances of Python classes. This function does not accept any arguments.

Note: object does not have a __dict__, so you can't assign arbitrary attributes to an instance of the object class.
oct(x)

Convert an integer number to an octal string prefixed with "0o". The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer. For example:

>>> oct(8)
'0o10'
>>> oct(-56)
'-0o70'

If you want to convert an integer number to an octal string either with or without the "0o" prefix, you can use either of the following ways:

>>> '%#o' % 10, '%o' % 10
('0o12', '12')
>>> format(10, '#o'), format(10, 'o')
('0o12', '12')
>>> f'{10:#o}', f'{10:o}'
('0o12', '12')

See also format() for more information.
open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)

Open file and return a corresponding file object. If the file cannot be opened, an OSError is raised. See Reading and Writing Files for more examples of how to use this function.

file is a path-like object giving the pathname (absolute or relative to the current working directory) of the file to be opened, or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed unless closefd is set to False.)

mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r', which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists), 'x' for exclusive creation, and 'a' for appending (which on some Unix systems means that all writes append to the end of the file regardless of the current seek position). In text mode, if encoding is not specified the encoding used is platform-dependent: locale.getencoding() is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.) The available modes are:

Character   Meaning
'r'         open for reading (default)
'w'         open for writing, truncating the file first
'x'         open for exclusive creation, failing if the file already exists
'a'         open for writing, appending to the end of the file if it exists
'b'         binary mode
't'         text mode (default)
'+'         open for updating (reading and writing)
The default mode is 'r' (open for reading text, a synonym of 'rt'). Modes 'w+' and 'w+b' open and truncate the file. Modes 'r+' and 'r+b' open the file with no truncation.

As mentioned in the Overview, Python distinguishes between binary and text I/O. Files opened in binary mode (including 'b' in the mode argument) return contents as bytes objects without any decoding. In text mode (the default, or when 't' is included in the mode argument), the contents of the file are returned as str, the bytes having been first decoded using the specified encoding (if given) or a platform-dependent default encoding.

Note: Python doesn't depend on the underlying operating system's notion of text files; all the processing is done by Python itself, and is therefore platform-independent.

buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size in bytes of a fixed-size chunk buffer. Note that specifying a buffer size this way applies to binary buffered I/O, but TextIOWrapper (i.e., files opened with mode='r+') would have another buffering. To disable buffering in TextIOWrapper, consider using the write_through flag for io.TextIOWrapper.reconfigure(). When no buffering argument is given, the default buffering policy works as follows:

Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device's "block size", falling back on io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically be 4096 or 8192 bytes long.

"Interactive" text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files.

encoding is the name of the encoding used to decode or encode the file. This should only be used in text mode. The default encoding is platform dependent (whatever locale.getencoding() returns), but any text encoding supported by Python can be used. See the codecs module for the list of supported encodings.
errors is an optional string that specifies how encoding and decoding errors are to be handled; this cannot be used in binary mode. A variety of standard error handlers are available (listed under Error Handlers), though any error handling name that has been registered with codecs.register_error() is also valid. The standard names include:

'strict' to raise a ValueError exception if there is an encoding error. The default value of None has the same effect.

'ignore' ignores errors. Note that ignoring encoding errors can lead to data loss.

'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data.

'surrogateescape' will represent any incorrect bytes as low surrogate code points ranging from U+DC80 to U+DCFF. These surrogate code points will then be turned back into the same bytes when the surrogateescape error handler is used when writing data. This is useful for processing files in an unknown encoding.

'xmlcharrefreplace' is only supported when writing to a file. Characters not supported by the encoding are replaced with the appropriate XML character reference &#nnn;.

'backslashreplace' replaces malformed data by Python's backslashed escape sequences.

'namereplace' (also only supported when writing) replaces unsupported characters with \N{...} escape sequences.
Escape sequence replaces unsupported characters .newline control universal newlines How the model works ( It applies only to text mode ). It can be
None
,''
,'\n'
,'\r'
and'\r\n'
. How it works :
When reading input from a stream , If newline by
None
, Then enable the general line feed mode . The lines in the input can be in the form of'\n'
,'\r'
or'\r\n'
ending , These lines are translated into'\n'
Before returning to the caller . If it is''
, Then enable the general line feed mode , But the end of the line will be returned to the caller untranslated . If it has any other legal value , Then the input line is terminated only by the given string , And the end of the line will be returned to the caller who did not call .When writing output to stream , If newline by
None
, Then write any'\n'
All characters will be converted to the system default line separator os.linesep. If newline yes''
or'\n'
, No translation . If newline Is any other legal value , Then write any'\n'
The character will be converted to the given string .If closefd by
False
And what is given is not the file name but the file descriptor , So when the file is closed , The underlying file descriptor will remain open . If the file name is given , be closefd It has to be forTrue
( The default value is ), Otherwise, an error will be triggered .You can pass callable opener To use a custom opener . Then by using parameters ( file,flags ) call opener Gets the base file descriptor of the file object . opener Must return an open file descriptor ( Use os.open as opener Time and transmission
None
The effect is the same ).The newly created file is Not inheritable .
The following example uses os.open() Functional dir_fd The parameter of , Open the file with relative path from the given directory :
>>>
>>> import os >>> dir_fd = os.open('somedir', os.O_RDONLY) >>> def opener(path, flags): ... return os.open(path, flags, dir_fd=dir_fd) ... >>> with open('spamspam.txt', 'w', opener=opener) as f: ... print('This will be written to somedir/spamspam.txt', file=f) ... >>> os.close(dir_fd) # don't leak a file descriptoropen() Function returned by file object The type depends on the mode used . When using open() In text mode (
'w'
,'r'
,'wt'
,'rt'
etc. ) When opening a file , It will return io.TextIOBase ( Specific for io.TextIOWrapper) A subclass of . When using buffering to open a file in binary mode , The returned class is io.BufferedIOBase A subclass of . There are many specific classes : In read-only binary mode , It will return io.BufferedReader; In write binary and append binary modes , It will return io.BufferedWriter, While reading / In write mode , It will return io.BufferedRandom. When buffering is disabled , The original stream will be returned , namely io.RawIOBase A subclass of io.FileIO.See also the document operation module , Such as fileinput、io ( The statement open())、os、os.path、tempfile and shutil.
Raises an auditing event open with arguments file, mode, flags. The mode and flags arguments may have been modified or inferred from the original call.

Changed in version 3.3:
The opener parameter was added.
The 'x' mode was added.
IOError used to be raised; it is now an alias of OSError.
FileExistsError is now raised if the file opened in exclusive creation mode ('x') already exists.

Changed in version 3.4: The file is now non-inheritable.

Changed in version 3.5:
If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an InterruptedError exception (see PEP 475 for the rationale).
The 'namereplace' error handler was added.

Changed in version 3.6:
Support added to accept objects implementing os.PathLike.
On Windows, opening a console buffer may return a subclass of io.RawIOBase other than io.FileIO.

Changed in version 3.11: The 'U' mode has been removed.
ord(c)

Given a string representing one Unicode character, return an integer representing the Unicode code point of that character. For example, ord('a') returns the integer 97 and ord('€') (Euro sign) returns 8364. This is the inverse of chr().
pow(base, exp[, mod])

Return base to the power exp; if mod is present, return base to the power exp, modulo mod (computed more efficiently than pow(base, exp) % mod). The two-argument form pow(base, exp) is equivalent to using the power operator: base**exp.

The arguments must have numeric types. With mixed operand types, the coercion rules for binary arithmetic operators apply. For int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, pow(10, 2) returns 100, but pow(10, -2) returns 0.01. For a negative base of type int or float and a non-integral exponent, a complex result is delivered. For example, pow(-9, 0.5) returns a value close to 3j.

For int operands base and exp, if mod is present, mod must also be of integer type and mod must be nonzero. If mod is present and exp is negative, base must be relatively prime to mod. In that case, pow(inv_base, -exp, mod) is returned, where inv_base is an inverse to base modulo mod.

Here is an example of computing an inverse for 38 modulo 97:

>>> pow(38, -1, mod=97)
23
>>> 23 * 38 % 97 == 1
True

Changed in version 3.8: For int operands, the three-argument form of pow now allows the second argument to be negative, permitting computation of modular inverses.

Changed in version 3.8: Allow keyword arguments. Formerly, only positional arguments were supported.
print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)

Print objects to the text stream file, separated by sep and followed by end. sep, end, file, and flush, if present, must be given as keyword arguments.

All non-keyword arguments are converted to strings like str() does and written to the stream, separated by sep and followed by end. Both sep and end must be strings; they can also be None, which means to use the default values. If no objects are given, print() will just write end.

The file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. Since printed arguments are converted to text strings, print() cannot be used with binary mode file objects. For these, use file.write(...) instead.

Whether the output is buffered is usually determined by file, but if the flush keyword argument is true, the stream is forcibly flushed.

Changed in version 3.3: Added the flush keyword argument.
class property(fget=None, fset=None, fdel=None, doc=None)

Return a property attribute.

fget is a function for getting an attribute value. fset is a function for setting an attribute value. fdel is a function for deleting an attribute value. And doc creates a docstring for the attribute.

A typical use is to define a managed attribute x:

class C:
    def __init__(self):
        self._x = None

    def getx(self):
        return self._x

    def setx(self, value):
        self._x = value

    def delx(self):
        del self._x

    x = property(getx, setx, delx, "I'm the 'x' property.")

If c is an instance of C, c.x will invoke the getter, c.x = value the setter, and del c.x the deleter.

If given, doc will be the docstring of the property attribute. Otherwise, the property will copy fget's docstring (if it exists). This makes it possible to create read-only properties easily using property() as a decorator:

class Parrot:
    def __init__(self):
        self._voltage = 100000

    @property
    def voltage(self):
        """Get the current voltage."""
        return self._voltage

The @property decorator turns the voltage() method into a "getter" for a read-only attribute with the same name, and it sets the docstring for voltage to "Get the current voltage."

A property object has getter, setter, and deleter methods usable as decorators that create a copy of the property with the corresponding accessor function set to the decorated function. This is best explained with an example:

class C:
    def __init__(self):
        self._x = None

    @property
    def x(self):
        """I'm the 'x' property."""
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x

This code is exactly equivalent to the first example. Be sure to give the additional functions the same name as the original property (x in this case).

The returned property object also has the attributes fget, fset, and fdel corresponding to the constructor arguments.

Changed in version 3.5: The docstrings of property objects are now writable.
class range(stop)
class range(start, stop[, step])

Rather than being a function, range is actually an immutable sequence type, as documented in Ranges and Sequence types --- list, tuple, range.
repr(object)

Return a string containing a printable representation of an object. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(); otherwise, the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information, often including the name and address of the object. A class can control what this function returns for its instances by defining a __repr__() method.
reversed(seq)

Return a reverse iterator. seq must be an object which has a __reversed__() method or supports the sequence protocol (the __len__() method and the __getitem__() method with integer arguments starting at 0).
round(number[, ndigits])

Return number rounded to ndigits precision after the decimal point. If ndigits is omitted or is None, it returns the nearest integer to its input.

For the built-in types supporting round(), values are rounded to the closest multiple of 10 to the power minus ndigits; if two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2). Any integer value is valid for ndigits (positive, zero, or negative). The return value is an integer if ndigits is omitted or None; otherwise, the return value has the same type as number.

For a general Python object number, round delegates to number.__round__.

Note: The behavior of round() for floats can be surprising: for example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This is not a bug: it's a result of the fact that most decimal fractions can't be represented exactly as a float. See Floating Point Arithmetic: Issues and Limitations for more information.
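A quick illustration, including the round-half-to-even behavior and a negative ndigits:

>>> round(1.5), round(2.5)
(2, 2)
>>> round(3.14159, 2)
3.14
>>> round(1234, -2)
1200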
class set([iterable])

Return a new set object, optionally with elements taken from iterable. set is a built-in class. See set and Set types --- set, frozenset for documentation about this class.

For other containers see the built-in frozenset, list, tuple, and dict classes, as well as the collections module.
setattr(object, name, value)
This is the counterpart of getattr(). The arguments are an object, a string, and an arbitrary value. The string may name an existing attribute or a new attribute. The function assigns the value to the attribute, provided the object allows it. For example, setattr(x, 'foobar', 123) is equivalent to x.foobar = 123.

Note: Since private name mangling happens at compile time, one must manually mangle a private attribute's name (an attribute with two leading underscores) in order to set it with setattr().
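A small sketch of this caveat (the class name is invented for illustration):

class Sample:
    def __init__(self):
        self.__secret = 1                 # stored as _Sample__secret

s = Sample()
setattr(s, '_Sample__secret', 2)          # must spell out the mangled name
print(s._Sample__secret)                  # 2

setattr(s, '__secret', 3)                 # creates a *different* attribute literally named '__secret'
print(s._Sample__secret)                  # still 2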
class slice(stop)
class slice(start, stop[, step])
Return a slice object representing the set of indices specified by range(start, stop, step). The start and step arguments default to None. Slice objects have read-only data attributes start, stop, and step which merely return the argument values (or their defaults). They have no other explicit functionality; however, they are used by NumPy and other third-party extensions. Slice objects are also generated when extended indexing syntax is used, for example a[start:stop:step] or a[start:stop, i]. See itertools.islice() for an alternate version that returns an iterator.
sorted(iterable, /, *, key=None, reverse=False)
Return a new sorted list from the items in iterable.

Has two optional arguments which must be specified as keyword arguments.

key specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). The default value is None (compare the elements directly).

reverse is a boolean value. If set to True, then the list elements are sorted as if each comparison were reversed.

Use functools.cmp_to_key() to convert an old-style cmp function to a key function.

The built-in sorted() function is guaranteed to be stable. A sort is stable if it guarantees not to change the relative order of elements that compare equal --- this is helpful for sorting in multiple passes (for example, sort by department, then by salary grade). A sketch of such a multi-pass sort follows below.

The sort algorithm uses only < comparisons between items. While defining an __lt__() method will suffice for sorting, PEP 8 recommends that all six rich comparisons be implemented. This will help avoid bugs when using the same data with other ordering tools such as max() that rely on a different underlying method. Implementing all six comparisons also helps avoid confusion for mixed-type comparisons which can call the reflected __gt__() method.

For sorting examples and a brief sorting tutorial, see the Sorting HOW TO.
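A hedged sketch of the multi-pass sort described above (the employee records are invented):

employees = [('Alice', 'Sales', 3), ('Bob', 'IT', 2),
             ('Carol', 'Sales', 2), ('Dave', 'IT', 3)]

# Sort by the secondary key first, then by the primary key; because sorted()
# is stable, the salary-grade order is preserved within each department.
by_grade = sorted(employees, key=lambda e: e[2])
by_dept_then_grade = sorted(by_grade, key=lambda e: e[1])
print(by_dept_then_grade)
# [('Bob', 'IT', 2), ('Dave', 'IT', 3), ('Carol', 'Sales', 2), ('Alice', 'Sales', 3)]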
@staticmethod
Convert a method to a static method .
Static methods do not receive an implicit first argument. To declare a static method, use this idiom:

class C:
    @staticmethod
    def f(arg1, arg2, ...): ...

The @staticmethod form is a function decorator -- see Function definitions for details.

A static method can be called either on the class (such as C.f()) or on an instance (such as C().f()). Moreover, it can be called as a regular function (such as f()).

Static methods in Python are similar to those found in Java or C++. Also see classmethod() for a variant that is useful for creating alternate class constructors.

Like all decorators, it is also possible to call staticmethod as a regular function and do something with its result. This is needed in some cases where you need a reference to a function from a class body and you want to avoid the automatic transformation to an instance method. For these cases, use this idiom:

def regular_function(): ...

class C:
    method = staticmethod(regular_function)

For more information on static methods, see The standard type hierarchy.

Changed in version 3.10: Static methods now inherit the method attributes (__module__, __name__, __qualname__, __doc__ and __annotations__), have a new __wrapped__ attribute, and are now callable as regular functions.
class str(object='')
class str(object=b'', encoding='utf-8', errors='strict')
Return a str version of object. See str() for details.

str is the built-in string class. For general information about strings, see Text Sequence Type --- str.
sum(iterable, /, start=0)
Sums start and the items of an iterable from left to right and returns the total. The iterable's items are normally numbers, and the start value is not allowed to be a string.

For some use cases, there are good alternatives to sum(). The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence). To add floating-point values with extended precision, see math.fsum(). To concatenate a series of iterables, consider using itertools.chain().

Changed in version 3.8: The start parameter can be specified as a keyword argument.
class super([type[, object-or-type]])
Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class.

The object-or-type determines the method resolution order to be searched. The search starts from the class right after the type.

For example, if __mro__ of object-or-type is D -> B -> C -> A -> object and the value of type is B, then super() searches C -> A -> object.

The __mro__ attribute of the object-or-type lists the method resolution search order used by both getattr() and super(). The attribute is dynamic and can change whenever the inheritance hierarchy is updated.
If the second argument is omitted, the super object returned is unbound. If the second argument is an object, isinstance(obj, type) must be true. If the second argument is a type, issubclass(type2, type) must be true (this is useful for classmethods).

There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages.

The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. It makes it possible to implement "diamond diagrams", where multiple base classes implement the same method. Good design dictates that such methods have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime).
For both use cases, a typical superclass call looks like this:

class C(B):
    def method(self, arg):
        super().method(arg)    # This does the same thing as:
                               # super(C, self).method(arg)
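A minimal sketch of the cooperative "diamond" case described above (the class names are invented); every method shares the same signature and forwards with super(), so the runtime MRO decides the order of calls:

class A:
    def greet(self):
        print("A")

class B(A):
    def greet(self):
        print("B")
        super().greet()

class C(A):
    def greet(self):
        print("C")
        super().greet()

class D(B, C):
    def greet(self):
        print("D")
        super().greet()

D().greet()                                   # prints D, B, C, A in MRO order
print([cls.__name__ for cls in D.__mro__])    # ['D', 'B', 'C', 'A', 'object']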
In addition to method lookups, super() also works for attribute lookups. One possible use case for this is calling descriptors in a parent or sibling class.

Note that super() is implemented as part of the binding process for explicit dotted attribute lookups such as super().__getitem__(name). It does so by implementing its own __getattribute__() method for searching classes in a predictable order that supports cooperative multiple inheritance. Accordingly, super() is undefined for implicit lookups using statements or operators such as super()[name].

Also note that, aside from the zero-argument form, super() is not limited to use inside methods. The two-argument form specifies the arguments exactly and makes the appropriate references. The zero-argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods.

For practical suggestions on how to design cooperative classes using super(), see the guide to using super().
class tuple([iterable])
Rather than being a function, tuple is actually an immutable sequence type, as documented in Tuples and in Sequence Types --- list, tuple, range.
class type(object)
class type(name, bases, dict, **kwds)
With one argument, return the type of an object. The return value is a type object and generally the same object as returned by object.__class__.

The isinstance() built-in function is recommended for testing the type of an object, because it takes subclasses into account.

With three arguments, return a new type object. This is essentially a dynamic form of the class statement. The name string is the class name and becomes the __name__ attribute. The bases tuple contains the base classes and becomes the __bases__ attribute; if empty, object, the ultimate base of all classes, is added. The dict dictionary contains attribute and method definitions for the class body; it may be copied or wrapped before becoming the __dict__ attribute. The following two statements create identical type objects:

>>> class X:
...     a = 1
...
>>> X = type('X', (), dict(a=1))

See also Type Objects.

Keyword arguments provided to the three-argument form are passed to the appropriate metaclass machinery (usually __init_subclass__()) in the same way that keywords in a class definition (besides metaclass) would.

See also Customizing class creation.

Changed in version 3.6: Subclasses of type which don't override type.__new__ may no longer use the one-argument form to get the type of an object.
vars([object])
Return the __dict__ attribute for a module, class, instance, or any other object with a __dict__ attribute.

Objects such as modules and instances have an updateable __dict__ attribute; however, other objects may have write restrictions on their __dict__ attributes (for example, classes use a types.MappingProxyType to prevent direct dictionary updates).

Without an argument, vars() acts like locals(). Note that the locals dictionary is only useful for reads, since updates to the locals dictionary are ignored.

A TypeError exception is raised if an object is specified but it doesn't have a __dict__ attribute (for example, if its class defines the __slots__ attribute).
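A small sketch (the class and attribute names are arbitrary):

class Config:
    def __init__(self):
        self.debug = True
        self.level = 3

c = Config()
print(vars(c))                   # {'debug': True, 'level': 3}
print(vars(c) is c.__dict__)     # True: the very same dictionary

vars(c)['level'] = 5             # an instance __dict__ is updatable
print(c.level)                   # 5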
zip(*iterables, strict=False)
Iterate over several iterables in parallel, producing tuples with an item from each one.

Example:

>>> for item in zip([1, 2, 3], ['sugar', 'spice', 'everything nice']):
...     print(item)
...
(1, 'sugar')
(2, 'spice')
(3, 'everything nice')

More formally: zip() returns an iterator of tuples, where the i-th tuple contains the i-th element from each of the argument iterables.

Another way to think of zip() is that it turns rows into columns, and columns into rows. This is similar to transposing a matrix.

zip() is lazy: the elements won't be processed until the iterable is iterated on, e.g. by a for loop or by wrapping it in a list.

One thing to consider is that the iterables passed to zip() could have different lengths; sometimes by design, and sometimes because of a bug in the code that prepared these iterables. Python offers three different approaches to dealing with this issue:
By default, zip() stops when the shortest iterable is exhausted. It will ignore the remaining items in the longer iterables, cutting off the result to the length of the shortest iterable:

>>> list(zip(range(3), ['fee', 'fi', 'fo', 'fum']))
[(0, 'fee'), (1, 'fi'), (2, 'fo')]

zip() is often used in cases where the iterables are assumed to be of equal length. In such cases, it's recommended to use the strict=True option. Its output is the same as regular zip():

>>> list(zip(('a', 'b', 'c'), (1, 2, 3), strict=True))
[('a', 1), ('b', 2), ('c', 3)]

Unlike the default behavior, it checks that the lengths of the iterables are identical, raising a ValueError if they aren't:

>>> list(zip(range(3), ['fee', 'fi', 'fo', 'fum'], strict=True))
Traceback (most recent call last):
  ...
ValueError: zip() argument 2 is longer than argument 1

Without the strict=True argument, any bug that results in iterables of different lengths will be silenced, possibly manifesting as a hard-to-find bug in another part of the program.

Shorter iterables can be padded with a constant value to make all the iterables have the same length. This is done by itertools.zip_longest().

Edge cases: With a single iterable argument, zip() returns an iterator of 1-tuples. With no arguments, it returns an empty iterator.
Tips and tricks:

The left-to-right evaluation order of the iterables is guaranteed. This makes possible an idiom for clustering a data series into n-length groups using zip(*[iter(s)]*n, strict=True). This repeats the same iterator n times so that each output tuple has the result of n calls to the iterator. This has the effect of dividing the input into n-length chunks; a sketch of this idiom follows below.

zip() in conjunction with the * operator can be used to unzip a list:

>>> x = [1, 2, 3]
>>> y = [4, 5, 6]
>>> list(zip(x, y))
[(1, 4), (2, 5), (3, 6)]
>>> x2, y2 = zip(*zip(x, y))
>>> x == list(x2) and y == list(y2)
True
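A sketch of the n-length grouping idiom from the tip above (the data is arbitrary):

s = [1, 2, 3, 4, 5, 6]
n = 2
it = iter(s)                                # one iterator, repeated n times below
print(list(zip(*[it] * n, strict=True)))    # [(1, 2), (3, 4), (5, 6)]
# Equivalent one-liner: list(zip(*[iter(s)]*n, strict=True))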
Changed in version 3.10: Added the strict argument.
__import__(name, globals=None, locals=None, fromlist=(), level=0)
Note: This is an advanced function that is not needed in everyday Python programming, unlike importlib.import_module().

This function is invoked by the import statement. It can be replaced (by importing the builtins module and assigning to builtins.__import__) in order to change the semantics of the import statement, but doing so is strongly discouraged, as it is usually simpler to use import hooks (see PEP 302) to attain the same goals; that approach also does not cause issues with code which assumes the default import implementation is in use. Direct use of __import__() is likewise discouraged in favor of importlib.import_module().

The function imports the module name, potentially using the given globals and locals to determine how to interpret the name in a package context. The fromlist gives the names of objects or submodules that should be imported from the module given by name. The standard implementation does not use its locals argument at all and uses its globals only to determine the package context of the import statement.

level specifies whether to use absolute or relative imports. 0 (the default) means only perform absolute imports. A positive value for level indicates the number of parent directories to search relative to the directory of the module calling __import__() (see PEP 328 for details).

When the name variable is of the form package.module, normally the top-level package (the name up till the first dot) is returned, not the module named by name. However, when a non-empty fromlist argument is given, the module named by name is returned.
import spam
The result will be the same bytecode as the following code :spam = __import__('spam', globals(), locals(), [], 0)sentence
import spam.ham
The result of will be the following call :spam = __import__('spam.ham', globals(), locals(), [], 0)Please note here __import__() How to return to the top-level module , Because this is through import Statement is bound to an object with a specific name .
On the other hand , sentence
from spam.ham import eggs, sausage as saus
The result will be_temp = __import__('spam.ham', globals(), locals(), ['eggs', 'sausage'], 0) eggs = _temp.eggs saus = _temp.sausagead locum ,
spam.ham
The module will consist of __import__() return . The objects to be imported will be extracted from this object and assigned their corresponding names .If you only want to import modules by name ( Maybe in the bag ), Please use importlib.import_module()
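A hedged sketch of that recommendation (the imported modules are just examples):

import importlib

json_mod = importlib.import_module('json')                    # like: import json
tree_mod = importlib.import_module('xml.etree.ElementTree')   # the submodule itself is returned

print(json_mod.dumps({'a': 1}))   # {"a": 1}
print(tree_mod.__name__)          # xml.etree.ElementTree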
Changed in version 3.3: Negative values for level are no longer supported (which also changes the default value to 0).

Changed in version 3.9: When the command line options -E or -I are being used, the environment variable PYTHONCASEOK is now ignored.
The following examples distinguish input and output by the presence or absence of prompts (>>> and ...): to repeat an example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. Note that a secondary prompt on a line by itself in an example means you must type a blank line; this is used to end a multi-line command.
Many of the examples in this manual, even those entered at the interactive prompt, include comments. Comments in Python start with the hash character, #, and extend to the end of the physical line. A comment may appear at the start of a line or following whitespace or code, but not within a string literal; a hash character within a string literal is just a hash character. Since comments are there to clarify code and are not interpreted by Python, they may be omitted when typing in examples.

Some examples:

# this is the first comment
spam = 1  # and this is the second comment
          # ... and now a third!
text = "# This is not a comment because it's inside quotes."
Now, let's try some simple Python commands. Start the interpreter and wait for the primary prompt (>>>) to appear.

The interpreter acts as a simple calculator: you can type an expression at it and it will write the value. Expression syntax is straightforward: the operators +, -, * and / work just like in most other languages (for example, Pascal or C); parentheses (()) can be used for grouping. For example:
>>>
>>> 2 + 2 4 >>> 50 - 5*6 20 >>> (50 - 5*6) / 4 5.0 >>> 8 / 5 # division always returns a floating point number 1.6
Numbers like 2, 4, and 20 have type int; the ones with a fractional part (such as 5.0 and 1.6) have type float. We will see more about numeric types later in the tutorial.
Division (/) always returns a float. To do floor division and get an integer result (discarding any fractional part) you can use the // operator; to calculate the remainder you can use %:
>>>
>>> 17 / 3 # classic division returns a float 5.666666666666667 >>> >>> 17 // 3 # floor division discards the fractional part 5 >>> 17 % 3 # the % operator returns the remainder of the division 2 >>> 5 * 3 + 2 # floored quotient * divisor + remainder 17
With Python, it is possible to use the ** operator to calculate powers [1]:
>>>
>>> 5 ** 2 # 5 squared 25 >>> 2 ** 7 # 2 to the power of 7 128
Equal sign (=
) Used to assign values to variables . After the assignment , The location of the next interactive prompt does not show any results :
>>>
>>> width = 20 >>> height = 5 * 9 >>> width * height 900
If the variable is undefined ( namely , Unassigned ), Using this variable will prompt an error :
>>>
>>> n # try to access an undefined variable Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'n' is not defined
Python Fully support floating point numbers ; The operation of mixed type operands will convert integers to floating-point numbers :
>>>
>>> 4 * 3.75 - 1 14.0
In interactive mode, the last printed expression is assigned to the variable _. This means that when you are using Python as a desk calculator, it is somewhat easier to continue calculations, for example:
>>>
>>> tax = 12.5 / 100 >>> price = 100.50 >>> price * tax 12.5625 >>> price + _ 113.0625 >>> round(_, 2) 113.06
It's best to treat this variable as a read-only type . Don't assign an explicit value to it , Otherwise, an independent local variable with the same name will be created , This variable will mask the built-in variable with its magic behavior .
In addition to int and float, Python supports other types of numbers, such as Decimal and Fraction. Python also has built-in support for complex numbers, and uses the j or J suffix to indicate the imaginary part (e.g. 3+5j).
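For instance (a quick illustrative session):

>>> 3 + 5j                       # complex literal with the j suffix
(3+5j)
>>> (2 + 3j) * (1 - 1j)          # the usual arithmetic rules apply
(5+1j)
>>> complex(2, 3).real, complex(2, 3).imag
(2.0, 3.0)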
Besides numbers, Python can also manipulate strings, which can be expressed in several ways. They can be enclosed in single quotes ('...') or double quotes ("...") with the same result [2]. The backslash \ can be used to escape quotes:
>>>
>>> 'spam eggs' # single quotes 'spam eggs' >>> 'doesn\'t' # use \' to escape the single quote... "doesn't" >>> "doesn't" # ...or use double quotes instead "doesn't" >>> '"Yes," they said.' '"Yes," they said.' >>> "\"Yes,\" they said." '"Yes," they said.' >>> '"Isn\'t," they said.' '"Isn\'t," they said.'
The interactive interpreter will quote the output string , Special characters are escaped with backslashes . although , Sometimes the output string looks different from the input string ( The enclosed quotation marks may change ), But the two strings are the same . If there are single quotes but no double quotes in the string , The string will be enclosed by double quotation marks , conversely , Then add single quotation marks .print() The output of the function is more concise and readable , It omits the quotation marks around , And output special characters after escape :
>>>
>>> '"Isn\'t," they said.' '"Isn\'t," they said.' >>> print('"Isn\'t," they said.') "Isn't," they said. >>> s = 'First line.\nSecond line.' # \n means newline >>> s # without print(), \n is included in the output 'First line.\nSecond line.' >>> print(s) # with print(), \n produces a new line First line. Second line.
If you don't want characters prefaced by \ to be interpreted as special characters, you can use raw strings by adding an r before the first quote:
>>>
>>> print('C:\some\name') # here \n means newline! C:\some ame >>> print(r'C:\some\name') # note the r before the quote C:\some\name
String literals can span multiple lines. One way is using triple-quotes: """...""" or '''...'''. End-of-line characters are automatically included in the string, but it's possible to prevent this by adding a \ at the end of the line. The following example:

print("""\
Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to
""")

produces the following output (note that the initial newline is not included):

Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to
Strings can be concatenated (glued together) with the + operator, and repeated with *:
>>>
>>> # 3 times 'un', followed by 'ium' >>> 3 * 'un' + 'ium' 'unununium'
Two or more adjacent string literal ( Characters marked in quotation marks ) Will automatically merge :
>>>
>>> 'Py' 'thon' 'Python'
When splitting long strings , This function is particularly useful :
>>>
>>> text = ('Put several strings within parentheses ' ... 'to have them joined together.') >>> text 'Put several strings within parentheses to have them joined together.'
This function can only be used for two literal values , Cannot be used with variables or expressions :
>>>
>>> prefix = 'Py' >>> prefix 'thon' # can't concatenate a variable and a string literal File "<stdin>", line 1 prefix 'thon' ^ SyntaxError: invalid syntax >>> ('un' * 3) 'ium' File "<stdin>", line 1 ('un' * 3) 'ium' ^ SyntaxError: invalid syntax
Merge multiple variables , Or combine variables with literals , Use +
:
>>>
>>> prefix + 'thon' 'Python'
String support Indexes ( The subscript access ), The index of the first character is 0. Single character has no special type , Is a string with a length of one :
>>>
>>> word = 'Python' >>> word[0] # character in position 0 'P' >>> word[5] # character in position 5 'n'
Indexes also support negative numbers , When indexed with negative numbers , Count from the right :
>>>
>>> word[-1] # last character 'n' >>> word[-2] # second-last character 'o' >>> word[-6] 'P'
Note that since -0 is the same as 0, negative indices start from -1.

In addition to indexing, slicing is also supported. While indexing is used to obtain individual characters, slicing allows you to obtain a substring:
>>>
>>> word[0:2] # characters from position 0 (included) to 2 (excluded) 'Py' >>> word[2:5] # characters from position 2 (included) to 5 (excluded) 'tho'
The default values for slice indexes are useful ; Omit when starting index , The default value is 0, Omit end index , The default is to the end of the string :
>>>
>>> word[:2] # character from the beginning to position 2 (excluded) 'Py' >>> word[4:] # characters from position 4 (included) to the end 'on' >>> word[-2:] # characters from the second-last (included) to the end 'on'
Note how the start is always included, and the end always excluded. This makes sure that s[:i] + s[i:] is always equal to s:
>>>
>>> word[:2] + word[2:] 'Python' >>> word[:4] + word[4:] 'Python'
One way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of n characters has index n. For example:

 +---+---+---+---+---+---+
 | P | y | t | h | o | n |
 +---+---+---+---+---+---+
 0   1   2   3   4   5   6
-6  -5  -4  -3  -2  -1

The first row of numbers gives the position of the indices 0...6 in the string; the second row gives the corresponding negative indices. The slice from i to j consists of all characters between the edges labeled i and j, respectively.
For slices that use non negative indexes , If both indexes do not cross the boundary , The slice length is the difference between the start and end indexes . for example , word[1:3]
Is the length of the 2.
Index out of bounds will report an error :
>>>
>>> word[42] # the word only has 6 characters Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: string index out of range
however , Slicing automatically handles out of bounds indexes :
>>>
>>> word[4:42] 'on' >>> word[42:] ''
Python strings cannot be changed --- they are immutable. Therefore, assigning to an indexed position in the string results in an error:
>>>
>>> word[0] = 'J' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'str' object does not support item assignment >>> word[2:] = 'py' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'str' object does not support item assignment
To generate different strings , A new string should be created :
>>>
>>> 'J' + word[1:] 'Jython' >>> word[:2] + 'py' 'Pypy'
Built in functions len() Returns the length of the string :
>>>
>>> s = 'supercalifragilisticexpialidocious' >>> len(s) 34
See also:

Text Sequence Type --- str
    Strings are examples of sequence types, and support the common operations supported by such types.
String Methods
    Strings support a large number of methods for basic transformations and searching.
Formatted string literals
    String literals that have embedded expressions.
Format String Syntax
    Information about string formatting with str.format().
printf-style String Formatting
    The old formatting operations invoked when strings are the left operand of the % operator are described in more detail here.
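A quick sketch contrasting the three formatting styles listed above (the names and values are invented):

>>> name, score = 'Ada', 95.5
>>> f'{name} scored {score:.1f}'              # formatted string literal
'Ada scored 95.5'
>>> '{} scored {:.1f}'.format(name, score)    # str.format()
'Ada scored 95.5'
>>> '%s scored %.1f' % (name, score)          # printf-style % operator
'Ada scored 95.5'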
Python knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. Lists might contain items of different types, but usually the items all have the same type:
>>>
>>> squares = [1, 4, 9, 16, 25] >>> squares [1, 4, 9, 16, 25]
And string ( And other built-in sequence type ) equally , Lists also support indexing and slicing :
>>>
>>> squares[0] # indexing returns the item 1 >>> squares[-1] 25 >>> squares[-3:] # slicing returns a new list [9, 16, 25]
The slicing operation returns a new list containing the requested elements . The following slicing operation will return a list of Shallow copy :
>>>
>>> squares[:] [1, 4, 9, 16, 25]
The list also supports merge operations :
>>>
>>> squares + [36, 49, 64, 81, 100] [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
And immutable String is different , The list is mutable type , Its content can be changed :
>>>
>>> cubes = [1, 8, 27, 65, 125] # something's wrong here >>> 4 ** 3 # the cube of 4 is 64, not 65! 64 >>> cubes[3] = 64 # replace the wrong value >>> cubes [1, 8, 27, 64, 125]
You can also add new items at the end of the list, by using the append() method (we will see more about methods later):
>>>
>>> cubes.append(216) # add the cube of 6 >>> cubes.append(7 ** 3) # and the cube of 7 >>> cubes [1, 8, 27, 64, 125, 216, 343]
Assigning a value to the slice can change the size of the list , Even empty the whole list :
>>>
>>> letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g'] >>> letters ['a', 'b', 'c', 'd', 'e', 'f', 'g'] >>> # replace some values >>> letters[2:5] = ['C', 'D', 'E'] >>> letters ['a', 'b', 'C', 'D', 'E', 'f', 'g'] >>> # now remove them >>> letters[2:5] = [] >>> letters ['a', 'b', 'f', 'g'] >>> # clear the list by replacing all the elements with an empty list >>> letters[:] = [] >>> letters []
Built in functions len() Lists are also supported :
>>>
>>> letters = ['a', 'b', 'c', 'd'] >>> len(letters) 4
You can also nest lists ( Create a list with other lists ), for example :
>>>
>>> a = ['a', 'b', 'c'] >>> n = [1, 2, 3] >>> x = [a, n] >>> x [['a', 'b', 'c'], [1, 2, 3]] >>> x[0] ['a', 'b', 'c'] >>> x[0][1] 'b'
Of course, we can use Python for more complicated tasks than adding two and two together. For instance, we can write an initial sub-sequence of the Fibonacci series as follows:
>>>
>>> # Fibonacci series: ... # the sum of two elements defines the next ... a, b = 0, 1 >>> while a < 10: ... print(a) ... a, b = b, a+b ... 0 1 1 2 3 5 8
This example introduces several new features.

The first line contains a multiple assignment: the variables a and b simultaneously get the new values 0 and 1. On the last line this is used again, demonstrating that the expressions on the right-hand side are all evaluated first before any of the assignments take place. The right-hand side expressions are evaluated from left to right.
The while loop executes as long as the condition (here: a < 10) remains true. In Python, like in C, any non-zero integer value is true; zero is false. The condition may also be a string or list value, in fact any sequence; anything with a non-zero length is true, empty sequences are false. The test used in the example is a simple comparison. The standard comparison operators are written the same as in C: < (less than), > (greater than), == (equal to), <= (less than or equal to), >= (greater than or equal to) and != (not equal to).
The body of the loop is indented: indentation is Python's way of grouping statements. At the interactive prompt, you have to type a tab or space(s) for each indented line. In practice you will prepare more complicated input for Python with a text editor; all decent text editors have an auto-indent facility. When a compound statement is entered interactively, it must be followed by a blank line to indicate completion (since the parser cannot guess when you have typed the last line). Note that each line within a basic block must be indented by the same amount.
The print() function writes the value of the argument(s) it is given. It differs from just writing the expression you want to write (as we did earlier in the calculator examples) in the way it handles multiple arguments, floating point quantities, and strings. Strings are printed without quotes, and a space is inserted between items, so you can format things nicely:

>>> i = 256*256
>>> print('The value of i is', i)
The value of i is 65536

The keyword argument end can be used to avoid the newline after the output, or end the output with a different string:

>>> a, b = 0, 1
>>> while a < 1000:
...     print(a, end=',')
...     a, b = b, a+b
...
0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,
Footnotes:

[1] Since ** has higher precedence than -, -3**2 will be interpreted as -(3**2) and thus result in -9. To avoid this and get 9, you can use (-3)**2.

[2] Unlike other languages, special characters such as \n have the same meaning with both single ('...') and double ("...") quotes. The only difference between the two is that within single quotes you don't need to escape " (but you have to escape \') and vice versa.
Besides the while statement just introduced, Python uses the usual flow control statements known from other languages, with some twists.

if Statements

Perhaps the most well-known statement type is the if statement. For example:
>>>
>>> x = int(input("Please enter an integer: ")) Please enter an integer: 42 >>> if x < 0: ... x = 0 ... print('Negative changed to zero') ... elif x == 0: ... print('Zero') ... elif x == 1: ... print('Single') ... else: ... print('More') ... More
There can be zero or more elif parts, and the else part is optional. The keyword 'elif' is short for 'else if', and is useful to avoid excessive indentation. An if ... elif ... elif ... sequence is a substitute for the switch or case statements found in other languages.

If you're comparing the same value to several constants, or checking for specific types or attributes, you may also find the match statement useful. For more details see the match statement section below.
for Statements

The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the user the ability to define both the iteration step and halting condition (as C), Python's for statement iterates over the items of any sequence (a list or a string), in the order that they appear in the sequence. For example:
>>>
>>> # Measure some strings: ... words = ['cat', 'window', 'defenestrate'] >>> for w in words: ... print(w, len(w)) ... cat 3 window 6 defenestrate 12
Code that modifies a collection while iterating over that same collection can be tricky to get right. Instead, it is usually more straightforward to loop over a copy of the collection or to create a new collection:

# Create a sample collection
users = {'Hans': 'active', 'Éléonore': 'inactive', 'Jing Tailang': 'active'}

# Strategy:  Iterate over a copy
for user, status in users.copy().items():
    if status == 'inactive':
        del users[user]

# Strategy:  Create a new collection
active_users = {}
for user, status in users.items():
    if status == 'active':
        active_users[user] = status
Built in functions range() Often used to traverse a sequence of numbers , This function can generate arithmetic series :
>>>
>>> for i in range(5): ... print(i) ... 0 1 2 3 4
The given end point is never part of the generated sequence; range(10) generates 10 values, the legal indices for items of a sequence of length 10. It is possible to let the range start at another number, or to specify a different increment (called the 'step', which may even be negative):
>>>
>>> list(range(5, 10)) [5, 6, 7, 8, 9] >>> list(range(0, 10, 3)) [0, 3, 6, 9] >>> list(range(-10, -100, -30)) [-10, -40, -70]
range() and len() Put together , The sequence can be iterated by index :
>>>
>>> a = ['Mary', 'had', 'a', 'little', 'lamb'] >>> for i in range(len(a)): ... print(i, a[i]) ... 0 Mary 1 had 2 a 3 little 4 lamb
In most such cases, however, it is more convenient to use the enumerate() function; see Looping Techniques.
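A quick sketch of enumerate(), which yields index-value pairs directly (the list is just an example):

>>> for i, v in enumerate(['tic', 'tac', 'toe']):
...     print(i, v)
...
0 tic
1 tac
2 toe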
A strange thing happens if you just print a range:
>>>
>>> range(10) range(0, 10)
In many ways the object returned by range() behaves as if it were a list, but in fact it isn't. It is an object which returns the successive items of the desired sequence when you iterate over it, but it doesn't really make the list, thus saving space.

We say such an object is iterable, that is, suitable as a target for functions and constructs that expect something from which they can obtain successive items until the supply is exhausted. We have seen that the for statement is such a construct, while an example of a function that takes an iterable is sum():
>>>
>>> sum(range(4)) # 0 + 1 + 2 + 3 6
Later we will see more functions that return iterables and take iterables as arguments. In the chapter on Data Structures, we will discuss list() in more detail.

break and continue Statements, and else Clauses on Loops

The break statement, like in C, breaks out of the innermost enclosing for or while loop.

Loop statements may have an else clause; it is executed when the loop terminates through exhaustion of the iterable (with for) or when the condition becomes false (with while), but not when the loop is terminated by a break statement. This is exemplified by the following loop, which searches for prime numbers:
>>>
>>> for n in range(2, 10): ... for x in range(2, n): ... if n % x == 0: ... print(n, 'equals', x, '*', n//x) ... break ... else: ... # loop fell through without finding a factor ... print(n, 'is a prime number') ... 2 is a prime number 3 is a prime number 4 equals 2 * 2 5 is a prime number 6 equals 2 * 3 7 is a prime number 8 equals 2 * 4 9 equals 3 * 3
(Yes, this is the correct code. Look closely: the else clause belongs to the for loop, not the if statement.)

When used with a loop, the else clause has more in common with the else clause of a try statement than it does with that of if statements: a try statement's else clause runs when no exception occurs, and a loop's else clause runs when no break occurs. For more on the try statement and exceptions, see Handling Exceptions.
The continue statement, also borrowed from C, continues with the next iteration of the loop:
>>>
>>> for num in range(2, 10): ... if num % 2 == 0: ... print("Found an even number", num) ... continue ... print("Found an odd number", num) ... Found an even number 2 Found an odd number 3 Found an even number 4 Found an odd number 5 Found an even number 6 Found an odd number 7 Found an even number 8 Found an odd number 9
pass Statements

The pass statement does nothing. It can be used when a statement is required syntactically but the program requires no action. For example:
>>>
>>> while True: ... pass # Busy-wait for keyboard interrupt (Ctrl+C) ...
The following code creates a minimal class :
>>>
>>> class MyEmptyClass: ... pass ...
Another place pass can be used is as a place-holder for a function or conditional body when you are working on new code, allowing you to keep thinking at a more abstract level. The pass is silently ignored:
>>>
>>> def initlog(*args): ... pass # Remember to implement this! ...
match Statements

A match statement takes an expression and compares its value to successive patterns given as one or more case blocks. This is superficially similar to a switch statement in C, Java or JavaScript (and many other languages), but it's more similar to pattern matching in languages like Rust or Haskell. Only the first pattern that matches gets executed and it can also extract components (sequence elements or object attributes) from the value into variables.
The simplest form is to compare a target value with one or more literal values :
def http_error(status):
    match status:
        case 400:
            return "Bad request"
        case 404:
            return "Not found"
        case 418:
            return "I'm a teapot"
        case _:
            return "Something's wrong with the internet"
Note the last block: the "variable name" _ acts as a wildcard and never fails to match. If no case matches, none of the branches is executed.

You can combine several literals in a single pattern using | ("or"):

case 401 | 403 | 404:
    return "Not allowed"
Patterns can look like unpacking assignments, and can be used to bind variables:

# point is an (x, y) tuple
match point:
    case (0, 0):
        print("Origin")
    case (0, y):
        print(f"Y={y}")
    case (x, 0):
        print(f"X={x}")
    case (x, y):
        print(f"X={x}, Y={y}")
    case _:
        raise ValueError("Not a point")
Study that one carefully! The first pattern has two literals, and can be thought of as an extension of the literal pattern shown above. But the next two patterns combine a literal and a variable, and the variable binds a value from the subject (point). The fourth pattern captures two values, which makes it conceptually similar to the unpacking assignment (x, y) = point.
If you are using classes to structure your data, you can use the class name followed by an argument list resembling a constructor, but with the ability to capture attributes into variables:

class Point:
    x: int
    y: int

def where_is(point):
    match point:
        case Point(x=0, y=0):
            print("Origin")
        case Point(x=0, y=y):
            print(f"Y={y}")
        case Point(x=x, y=0):
            print(f"X={x}")
        case Point():
            print("Somewhere else")
        case _:
            print("Not a point")
You can use positional parameters with some built-in classes that provide an ordering for their attributes (e.g. dataclasses). You can also define a specific position for attributes in patterns by setting the __match_args__ special attribute in your classes. If it's set to ("x", "y"), the following patterns are all equivalent (and all bind the y attribute to the var variable):

Point(1, var)
Point(1, y=var)
Point(x=1, y=var)
Point(y=var, x=1)
A recommended way to read patterns is to look at them as an extended form of what you would put on the left of an assignment, to understand which variables would be set to what. Only the standalone names (like var above) are assigned to by a match statement. Dotted names (like foo.bar), attribute names (the x= and y= above) or class names (recognized by the "(...)" next to them, like Point above) are never assigned to.
Patterns can be arbitrarily nested. For example, if we have a short list of points, we could match it like this:

match points:
    case []:
        print("No points")
    case [Point(0, 0)]:
        print("The origin")
    case [Point(x, y)]:
        print(f"Single point {x}, {y}")
    case [Point(0, y1), Point(0, y2)]:
        print(f"Two on the Y axis at {y1}, {y2}")
    case _:
        print("Something else")
We can add an if clause to a pattern, known as a "guard". If the guard is false, match goes on to try the next case block. Note that value capture happens before the guard is evaluated:

match point:
    case Point(x, y) if x == y:
        print(f"Y=X at {x}")
    case Point(x, y):
        print(f"Not on the diagonal")
Several other key features of the match statement:

Like unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. An important exception is that they don't match iterators or strings.

Sequence patterns support extended unpacking: [x, y, *rest] and (x, y, *rest) work similarly to unpacking assignments. The name after * may also be _, so (x, y, *_) matches a sequence of at least two items without binding the remaining items.
Mapping patterns: {"bandwidth": b, "latency": l} captures the "bandwidth" and "latency" values from a dictionary. Unlike sequence patterns, extra keys are ignored. An unpacking like **rest is also supported. (But **_ would be redundant, so it is not allowed.)
Subpatterns may be captured using the as keyword:

case (Point(x1, y1), Point(x2, y2) as p2): ...

will capture the second element of the input as p2 (as long as the input is a sequence of two points).
Most literals are compared by equality; however, the singletons True, False and None are compared by identity.
Patterns may use named constants. These must be dotted names to prevent them from being interpreted as capture variables:

from enum import Enum
class Color(Enum):
    RED = 'red'
    GREEN = 'green'
    BLUE = 'blue'

color = Color(input("Enter your choice of 'red', 'blue' or 'green': "))

match color:
    case Color.RED:
        print("I see red!")
    case Color.GREEN:
        print("Grass is green")
    case Color.BLUE:
        print("I'm feeling the blues :(")
For more detailed instructions and additional examples , You can refer to PEP 636.
The following code creates a Fibonacci sequence function that can output a limited number :
>>>
>>> def fib(n): # write Fibonacci series up to n ... """Print a Fibonacci series up to n.""" ... a, b = 0, 1 ... while a < n: ... print(a, end=' ') ... a, b = b, a+b ... print() ... >>> # Now call the function we just defined: ... fib(2000) 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597
The keyword def introduces a function definition. It must be followed by the function name and the parenthesized list of formal parameters. The statements that form the body of the function start at the next line, and must be indented.

The first statement of the function body can optionally be a string literal; this string literal is the function's documentation string, or docstring (see docstring). There are tools which use docstrings to automatically produce online or printed documentation, or to let the user interactively browse through code; it's good practice to include docstrings in code that you write, so make a habit of it.

The execution of a function introduces a new symbol table used for the local variables of the function. More precisely, all variable assignments in a function store the value in the local symbol table, whereas variable references first look in the local symbol table, then in the local symbol tables of enclosing functions, then in the global symbol table, and finally in the table of built-in names. Thus, global variables and variables of enclosing functions can be referenced, but should not be assigned to directly within a function (unless, for global variables, they are named in a global statement, or, for variables of enclosing functions, named in a nonlocal statement).
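A hedged sketch of the global and nonlocal statements mentioned above (all names are invented):

counter = 0

def bump():
    global counter          # assign to the module-level variable
    counter += 1

def make_counter():
    count = 0
    def tick():
        nonlocal count      # assign to the enclosing function's variable
        count += 1
        return count
    return tick

bump()
print(counter)              # 1
tick = make_counter()
print(tick(), tick())       # 1 2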
The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object) [1]. When a function calls another function, or calls itself recursively, a new local symbol table is created for that call.
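A small sketch of what "call by value, where the value is an object reference" means in practice: mutating an argument in place is visible to the caller, while rebinding the parameter name is not (all names are invented):

def mutate(items):
    items.append(99)        # mutates the object the caller also references

def rebind(items):
    items = [0]             # rebinds only the local name; the caller is unaffected

data = [1, 2]
mutate(data)
print(data)                 # [1, 2, 99]
rebind(data)
print(data)                 # still [1, 2, 99]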
Function definitions associate function names with function objects in the current symbol table . The interpreter takes the object pointed to by the function name as a user-defined function . You can also use other names to point to the same function object , And visit the function :
>>>
>>> fib <function fib at 10042ed0> >>> f = fib >>> f(100) 0 1 1 2 3 5 8 13 21 34 55 89
Coming from other languages, you might object that fib is not a function but a procedure, since it doesn't return a value. In fact, even functions without a return statement do return a value: None (it's a built-in name). Writing the value None is normally suppressed by the interpreter if it would be the only value written. You can see it if you really want to using print():
>>>
>>> fib(0) >>> print(fib(0)) None
Write without directly outputting Fibonacci sequence operation results , Instead, the function that returns the list of operation results is also very simple :
>>>
>>> def fib2(n): # return Fibonacci series up to n ... """Return a list containing the Fibonacci series up to n.""" ... result = [] ... a, b = 0, 1 ... while a < n: ... result.append(a) # see below ... a, b = b, a+b ... return result ... >>> f100 = fib2(100) # call it >>> f100 # write the result [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
This example also introduces some new Python function :
The return statement returns with a value from a function. return without an expression argument returns None. Falling off the end of a function also returns None.

The statement result.append(a) calls a method of the list object result. A method is a function that 'belongs' to an object and is named obj.methodname, where obj is some object (this may be an expression), and methodname is the name of a method that is defined by the object's type. Different types define different methods, and methods of different types may have the same name without causing ambiguity. (It is possible to define your own object types and methods, using classes; see Classes.) The method append() shown in the example is defined for list objects; it adds a new element at the end of the list. In this example it is equivalent to result = result + [a], but more efficient.
It is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.

The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow. For example:

def ask_ok(prompt, retries=4, reminder='Please try again!'):
    while True:
        ok = input(prompt)
        if ok in ('y', 'ye', 'yes'):
            return True
        if ok in ('n', 'no', 'nop', 'nope'):
            return False
        retries = retries - 1
        if retries < 0:
            raise ValueError('invalid user response')
        print(reminder)
This function can be called in the following way :
Only the required arguments are given :ask_ok('Do you really want to quit?')
Give an optional argument :ask_ok('OK to overwrite the file?', 2)
Give all arguments :ask_ok('OK to overwrite the file?', 2, 'Come on, only yes or no!')
This example also introduces the in keyword. This tests whether or not a sequence contains a certain value.

The default values are evaluated at the point of function definition in the defining scope, so that

i = 5

def f(arg=i):
    print(arg)

i = 6
f()

will print 5.
Important warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instance of most classes. For example, the following function accumulates the arguments passed to it on subsequent calls:

def f(a, L=[]):
    L.append(a)
    return L

print(f(1))
print(f(2))
print(f(3))

This will print:

[1]
[1, 2]
[1, 2, 3]

If you don't want the default to be shared between subsequent calls, you can write the function like this instead:

def f(a, L=None):
    if L is None:
        L = []
    L.append(a)
    return L
Functions can also be called using keyword arguments of the form kwarg=value. For instance, the following function:

def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):
    print("-- This parrot wouldn't", action, end=' ')
    print("if you put", voltage, "volts through it.")
    print("-- Lovely plumage, the", type)
    print("-- It's", state, "!")
accepts one required argument (voltage) and three optional arguments (state, action, and type). This function can be called in any of the following ways:

parrot(1000)                                          # 1 positional argument
parrot(voltage=1000)                                  # 1 keyword argument
parrot(voltage=1000000, action='VOOOOOM')             # 2 keyword arguments
parrot(action='VOOOOOM', voltage=1000000)             # 2 keyword arguments
parrot('a million', 'bereft of life', 'jump')         # 3 positional arguments
parrot('a thousand', state='pushing up the daisies')  # 1 positional, 1 keyword

but all the following calls would be invalid:

parrot()                     # required argument missing
parrot(voltage=5.0, 'dead')  # non-keyword argument after a keyword argument
parrot(110, voltage=220)     # duplicate value for the same argument
parrot(actor='John Cleese')  # unknown keyword argument
In a function call, keyword arguments must follow positional arguments. All the keyword arguments passed must match one of the arguments accepted by the function (e.g. actor is not a valid argument for the parrot function), and their order is not important. This also includes non-optional arguments (e.g. parrot(voltage=1000) is valid too). No argument may receive a value more than once. Here's an example that fails due to this restriction:
>>>
>>> def function(a): ... pass ... >>> function(0, a=0) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: function() got multiple values for argument 'a'
When a final formal parameter of the form **name is present, it receives a dictionary (see Mapping Types --- dict) containing all keyword arguments except for those corresponding to a formal parameter. This may be combined with a formal parameter of the form *name (described in the next subsection) which receives a tuple containing the positional arguments beyond the formal parameter list. (*name must occur before **name.) For example, if we define a function like this:

def cheeseshop(kind, *arguments, **keywords):
    print("-- Do you have any", kind, "?")
    print("-- I'm sorry, we're all out of", kind)
    for arg in arguments:
        print(arg)
    print("-" * 40)
    for kw in keywords:
        print(kw, ":", keywords[kw])
It could be called like this:

cheeseshop("Limburger", "It's very runny, sir.",
           "It's really very, VERY runny, sir.",
           shopkeeper="Michael Palin",
           client="John Cleese",
           sketch="Cheese Shop Sketch")

and of course it would print:

-- Do you have any Limburger ?
-- I'm sorry, we're all out of Limburger
It's very runny, sir.
It's really very, VERY runny, sir.
----------------------------------------
shopkeeper : Michael Palin
client : John Cleese
sketch : Cheese Shop Sketch
Be careful , The order of keyword parameters in the output result is consistent with that when calling the function .
By default, arguments may be passed to a Python function either by position or explicitly by keyword. For readability and performance, it makes sense to restrict the way arguments can be passed, so that a developer need only look at the function definition to determine if items are passed by position, by position or keyword, or by keyword.

A function definition may look like:

def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
      -----------    ----------     ----------
        |             |                  |
        |        Positional or keyword   |
        |                                - Keyword only
         -- Positional only

where / and * are optional. If used, these symbols indicate the kind of parameter by how the arguments may be passed to the function: positional-only, positional-or-keyword, and keyword-only. Keyword parameters are also referred to as named parameters.
4.8.3.1. Positional-or-Keyword Arguments

If / and * are not present in the function definition, arguments may be passed to a function by position or by keyword.

4.8.3.2. Positional-Only Parameters

Looking at this in a bit more detail, it is possible to mark certain parameters as positional-only. If positional-only, the parameters' order matters, and the parameters cannot be passed by keyword. Positional-only parameters are placed before a / (forward slash). The / is used to logically separate the positional-only parameters from the rest of the parameters. If there is no / in the function definition, there are no positional-only parameters. Parameters following the / may be positional-or-keyword or keyword-only.

4.8.3.3. Keyword-Only Arguments

To mark parameters as keyword-only, indicating that they must be passed by keyword argument, place an * in the arguments list just before the first keyword-only parameter.
4.8.3.4. Function Examples

Consider the following example function definitions, paying close attention to the markers / and *:
>>>
>>> def standard_arg(arg): ... print(arg) ... >>> def pos_only_arg(arg, /): ... print(arg) ... >>> def kwd_only_arg(*, arg): ... print(arg) ... >>> def combined_example(pos_only, /, standard, *, kwd_only): ... print(pos_only, standard, kwd_only)
The first function definition, standard_arg, the most familiar form, places no restrictions on the calling convention, and arguments may be passed by position or keyword:
>>>
>>> standard_arg(2) 2 >>> standard_arg(arg=2) 2
The second function, pos_only_arg, is restricted to only use positional parameters, as there is a / in the function definition:
>>>
>>> pos_only_arg(1) 1 >>> pos_only_arg(arg=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: pos_only_arg() got some positional-only arguments passed as keyword arguments: 'arg'
The third function, kwd_only_arg, only allows keyword arguments, as indicated by the * in the function definition:
>>>
>>> kwd_only_arg(3) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: kwd_only_arg() takes 0 positional arguments but 1 was given >>> kwd_only_arg(arg=3) 3
The last function is in the same function definition , All three calling conventions are used :
>>>
>>> combined_example(1, 2, 3) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: combined_example() takes 2 positional arguments but 3 were given >>> combined_example(1, 2, kwd_only=3) 1 2 3 >>> combined_example(1, standard=2, kwd_only=3) 1 2 3 >>> combined_example(pos_only=1, standard=2, kwd_only=3) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: combined_example() got some positional-only arguments passed as keyword arguments: 'pos_only'
Finally, consider this function definition, which has a potential collision between the positional argument name and **kwds, which has name as a key:

def foo(name, **kwds):
    return 'name' in kwds

There is no possible call that will make it return True, as the keyword 'name' will always bind to the first parameter. For example:
>>>
>>> foo(1, **{'name': 2}) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: foo() got multiple values for argument 'name' >>>
But using / (positional-only arguments), it is possible, since it allows name as a positional argument and 'name' as a key in the keyword arguments:

def foo(name, /, **kwds):
    return 'name' in kwds

>>> foo(1, **{'name': 2})
True

In other words, the names of positional-only parameters can be used in **kwds without ambiguity.
4.8.3.5. Summary
The use case will determine which parameter kinds to use in the function definition:
def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
As guidance:
Use positional-only parameters if you want the name of a parameter to be unavailable to the user of your function. This is useful when parameter names have no real meaning, when you want to enforce the order of arguments in a call, or when you need to take some positional parameters and arbitrary keywords at the same time.
Use keyword-only parameters when names have meaning and the function definition is more understandable with explicit names, or when you want to prevent users from relying on the position of the argument being passed.
For an API, use positional-only parameters to prevent breaking API changes if the parameter's name is modified in the future.
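As a brief sketch of that last point (the function and parameter names here are invented for illustration and are not part of any real API): because data is positional-only, callers can never write summarize(data=...), so the parameter could later be renamed without breaking any caller.

>>> def summarize(data, /, *, verbose=False):   # hypothetical example
...     if verbose:
...         print("items:", len(data))
...     return sum(data)
...
>>> summarize([1, 2, 3])
6
>>> summarize(data=[1, 2, 3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: summarize() got some positional-only arguments passed as keyword arguments: 'data'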
Finally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments are wrapped up in a tuple (see Tuples and sequences). Before the variable number of arguments, zero or more normal arguments may occur:
def write_multiple_items(file, separator, *args):
    file.write(separator.join(args))
Normally, these variadic arguments will be last in the list of formal parameters, because they scoop up all remaining input arguments that are passed to the function. Any formal parameters which occur after the *args parameter are keyword-only arguments, meaning that they can only be used as keywords rather than positional arguments:
>>>
>>> def concat(*args, sep="/"): ... return sep.join(args) ... >>> concat("earth", "mars", "venus") 'earth/mars/venus' >>> concat("earth", "mars", "venus", sep=".") 'earth.mars.venus'
The reverse situation occurs when the arguments are already in a list or tuple but need to be unpacked for a function call requiring separate positional arguments. For instance, the built-in range() function expects separate start and stop arguments. If they are not available separately, write the function call with the * operator to unpack the arguments out of a list or tuple:
>>>
>>> list(range(3, 6)) # normal call with separate arguments [3, 4, 5] >>> args = [3, 6] >>> list(range(*args)) # call with arguments unpacked from a list [3, 4, 5]
In the same fashion, dictionaries can deliver keyword arguments with the ** operator:
>>>
>>> def parrot(voltage, state='a stiff', action='voom'): ... print("-- This parrot wouldn't", action, end=' ') ... print("if you put", voltage, "volts through it.", end=' ') ... print("E's", state, "!") ... >>> d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"} >>> parrot(**d) -- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !
Small anonymous functions can be created with the lambda keyword. For example, lambda a, b: a+b returns the sum of its two arguments. Lambda functions can be used wherever function objects are required. They are syntactically restricted to a single expression. Semantically, they are just syntactic sugar for a normal function definition. Like nested function definitions, lambda functions can reference variables from the containing scope:
>>>
>>> def make_incrementor(n): ... return lambda x: x + n ... >>> f = make_incrementor(42) >>> f(0) 42 >>> f(1) 43
The above example uses a lambda expression to return a function. Another use is to pass a small anonymous function as an argument:
>>>
>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')] >>> pairs.sort(key=lambda pair: pair[1]) >>> pairs [(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]
Here are some conventions about the content and formatting of documentation strings.
The first line should always be a short, concise summary of the object's purpose. For brevity, it should not explicitly state the object's name or type, since these are available by other means (except if the name happens to be a verb describing a function's operation). This line should begin with a capital letter and end with a period.
If there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description. The following lines should be one or more paragraphs describing the object's calling conventions, its side effects, and so on.
The Python parser does not strip indentation from multi-line string literals, so tools that process documentation have to strip indentation if desired. This is done using the following convention. The first non-blank line after the first line of the string determines the amount of indentation for the entire documentation string. (We can't use the first line, since it is generally adjacent to the string's opening quotes so its indentation is not apparent in the string literal.) Whitespace "equivalent" to this indentation is then stripped from the start of all lines of the string. Lines that are indented less should not occur, but if they do occur, all their leading whitespace should be stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8 spaces, normally).
Here is an example of a multi-line docstring:
>>>
>>> def my_function(): ... """Do nothing, but document it. ... ... No, really, it doesn't do anything. ... """ ... pass ... >>> print(my_function.__doc__) Do nothing, but document it. No, really, it doesn't do anything.
Function annotations are completely optional metadata information about the types used by user-defined functions (see PEP 3107 and PEP 484 for more information).
Annotations are stored in the __annotations__ attribute of the function as a dictionary and have no effect on any other part of the function. Parameter annotations are defined by a colon after the parameter name, followed by an expression evaluating to the value of the annotation. Return annotations are defined by a literal ->, followed by an expression, between the parameter list and the colon denoting the end of the def statement. The following example has a required argument, an optional keyword argument, and the return value annotated:
>>>
>>> def f(ham: str, eggs: str = 'eggs') -> str: ... print("Annotations:", f.__annotations__) ... print("Arguments:", ham, eggs) ... return ham + ' and ' + eggs ... >>> f('spam') Annotations: {'ham': <class 'str'>, 'return': <class 'str'>, 'eggs': <class 'str'>} Arguments: spam eggs 'spam and eggs'
This chapter describes some things you've learned about already in more detail, and adds some new things as well.
The list data type has some more methods. Here are all of the methods of list objects:
list.append(x)
Add an item to the end of the list. Equivalent to a[len(a):] = [x].
list.extend(iterable)
Extend the list by appending all the items from the iterable. Equivalent to a[len(a):] = iterable.
list.insert(i, x)
Insert an item at a given position. The first argument is the index of the element before which to insert, so a.insert(0, x) inserts at the front of the list, and a.insert(len(a), x) is equivalent to a.append(x).
list.remove(x)
Remove the first item from the list whose value is equal to x. It raises a ValueError if there is no such item.
list.pop([i])
Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list. (The square brackets around the i in the method signature denote that the parameter is optional, not that you should type square brackets at that position. You will see this notation frequently in the Python Library Reference.)
list.clear()
Remove all items from the list. Equivalent to del a[:].
list.index(x[, start[, end]])
Return the zero-based index in the list of the first item whose value is equal to x. Raises a ValueError if there is no such item.
The optional arguments start and end are interpreted as in the slice notation and are used to limit the search to a particular subsequence of the list. The returned index is computed relative to the beginning of the full sequence rather than the start argument.
list.count(x)
Return the number of times x appears in the list.
list.sort(*, key=None, reverse=False)
Sort the items of the list in place (the arguments can be used for sort customization; see sorted() for their explanation).
list.reverse()
Reverse the elements of the list in place.
list.copy()
Return a shallow copy of the list. Equivalent to a[:].
An example that uses most of the list methods:
>>>
>>> fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana'] >>> fruits.count('apple') 2 >>> fruits.count('tangerine') 0 >>> fruits.index('banana') 3 >>> fruits.index('banana', 4) # Find next banana starting a position 4 6 >>> fruits.reverse() >>> fruits ['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange'] >>> fruits.append('grape') >>> fruits ['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange', 'grape'] >>> fruits.sort() >>> fruits ['apple', 'apple', 'banana', 'banana', 'grape', 'kiwi', 'orange', 'pear'] >>> fruits.pop() 'pear'
You might have noticed that methods like insert, remove or sort that only modify the list print no return value; they return the default None. This is a design principle for all mutable data structures in Python.
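A quick check at the interpreter (a minimal illustration) makes this visible:

>>> fruits = ['banana', 'apple']
>>> print(fruits.sort())   # sort() modifies the list in place and returns None
None
>>> fruits
['apple', 'banana']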
Another thing you might notice is that not all data can be sorted or compared. For instance, [None, 'hello', 10] doesn't sort because integers can't be compared to strings and None can't be compared to other types. Also, some types don't have a defined ordering relation at all; for example, 3+4j < 5+7j isn't a valid comparison.
The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved ("last-in, first-out"). To add an item to the top of the stack, use append(). To retrieve an item from the top of the stack, use pop() without an explicit index. For example:
>>>
>>> stack = [3, 4, 5] >>> stack.append(6) >>> stack.append(7) >>> stack [3, 4, 5, 6, 7] >>> stack.pop() 7 >>> stack [3, 4, 5, 6] >>> stack.pop() 6 >>> stack.pop() 5 >>> stack [3, 4]
It is also possible to use a list as a queue, where the first element added is the first element retrieved ("first-in, first-out"); however, lists are not efficient for this purpose. While appends and pops from the end of the list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one).
To implement a queue, use collections.deque, which was designed to have fast appends and pops from both ends. For example:
>>>
>>> from collections import deque >>> queue = deque(["Eric", "John", "Michael"]) >>> queue.append("Terry") # Terry arrives >>> queue.append("Graham") # Graham arrives >>> queue.popleft() # The first to arrive now leaves 'Eric' >>> queue.popleft() # The second to arrive now leaves 'John' >>> queue # Remaining queue in order of arrival deque(['Michael', 'Terry', 'Graham'])
List comprehensions provide a concise way to create lists. Common applications are to make a new list where each element is the result of some operation applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
For example, assume we want to create a list of squares:
>>>
>>> squares = [] >>> for x in range(10): ... squares.append(x**2) ... >>> squares [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Note that this creates (or overwrites) a variable named x that still exists after the loop completes. We can calculate the list of squares without any side effects using:
squares = list(map(lambda x: x**2, range(10)))
or, equivalently:
squares = [x**2 for x in range(10)]
which is more concise and readable.
A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses. The result is a new list produced by evaluating the expression in the context of the for and if clauses that follow it. For example, this list comprehension combines the elements of two lists if they are not equal:
>>>
>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y] [(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
and it's equivalent to:
>>>
>>> combs = [] >>> for x in [1,2,3]: ... for y in [3,1,4]: ... if x != y: ... combs.append((x, y)) ... >>> combs [(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
Note how the order of the for and if statements is the same in both of these snippets.
If the expression is a tuple (e.g. the (x, y) in the previous example), it must be parenthesized:
>>>
>>> vec = [-4, -2, 0, 2, 4] >>> # create a new list with the values doubled >>> [x*2 for x in vec] [-8, -4, 0, 4, 8] >>> # filter the list to exclude negative numbers >>> [x for x in vec if x >= 0] [0, 2, 4] >>> # apply a function to all the elements >>> [abs(x) for x in vec] [4, 2, 0, 2, 4] >>> # call a method on each element >>> freshfruit = [' banana', ' loganberry ', 'passion fruit '] >>> [weapon.strip() for weapon in freshfruit] ['banana', 'loganberry', 'passion fruit'] >>> # create a list of 2-tuples like (number, square) >>> [(x, x**2) for x in range(6)] [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)] >>> # the tuple must be parenthesized, otherwise an error is raised >>> [x, x**2 for x in range(6)] File "<stdin>", line 1, in <module> [x, x**2 for x in range(6)] ^ SyntaxError: invalid syntax >>> # flatten a list using a listcomp with two 'for' >>> vec = [[1,2,3], [4,5,6], [7,8,9]] >>> [num for elem in vec for num in elem] [1, 2, 3, 4, 5, 6, 7, 8, 9]
List comprehensions can contain complex expressions and nested functions:
>>>
>>> from math import pi >>> [str(round(pi, i)) for i in range(1, 6)] ['3.1', '3.14', '3.142', '3.1416', '3.14159']
The initial expression in a list comprehension can be any arbitrary expression, including another list comprehension.
Consider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4:
>>>
>>> matrix = [ ... [1, 2, 3, 4], ... [5, 6, 7, 8], ... [9, 10, 11, 12], ... ]
The following list comprehension will transpose rows and columns:
>>>
>>> [[row[i] for row in matrix] for i in range(4)] [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
As we saw in the previous section, the inner list comprehension is evaluated in the context of the for that follows it, so this example is equivalent to:
>>>
>>> transposed = [] >>> for i in range(4): ... transposed.append([row[i] for row in matrix]) ... >>> transposed [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
which, in turn, is the same as:
>>>
>>> transposed = [] >>> for i in range(4): ... # the following 3 lines implement the nested listcomp ... transposed_row = [] ... for row in matrix: ... transposed_row.append(row[i]) ... transposed.append(transposed_row) ... >>> transposed [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
In the real world, you should prefer built-in functions to complex flow statements. The zip() function does a great job for this use case:
>>>
>>> list(zip(*matrix)) [(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]
See Unpacking Argument Lists for details on the asterisk in this line.
The del statement removes an item from a list given its index instead of its value. This differs from the pop() method, which returns a value. The del statement can also be used to remove slices from a list or clear the entire list (which we did earlier by assigning an empty list to the slice). For example:
>>>
>>> a = [-1, 1, 66.25, 333, 333, 1234.5] >>> del a[0] >>> a [1, 66.25, 333, 333, 1234.5] >>> del a[2:4] >>> a [1, 66.25, 1234.5] >>> del a[:] >>> a []
del can also be used to delete entire variables:
>>>
>>> del a
Referencing the name a hereafter is an error (at least until another value is assigned to it). We'll find other uses for del later.
We saw that lists and strings have many common properties, such as indexing and slicing operations. They are two examples of sequence data types (see Sequence Types --- list, tuple, range). Since Python is an evolving language, other sequence data types may be added. There is also another standard sequence data type: the tuple.
A tuple consists of a number of values separated by commas, for instance:
>>>
>>> t = 12345, 54321, 'hello!' >>> t[0] 12345 >>> t (12345, 54321, 'hello!') >>> # Tuples may be nested: ... u = t, (1, 2, 3, 4, 5) >>> u ((12345, 54321, 'hello!'), (1, 2, 3, 4, 5)) >>> # Tuples are immutable: ... t[0] = 88888 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignment >>> # but they can contain mutable objects: ... v = ([1, 2, 3], [3, 2, 1]) >>> v ([1, 2, 3], [3, 2, 1])
As you see, on output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression). It is not possible to assign to the individual items of a tuple; however, it is possible to create tuples which contain mutable objects, such as lists.
Though tuples may seem similar to lists, they are often used in different situations and for different purposes. Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking (see later in this section) or indexing (or even by attribute in the case of namedtuples). Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list.
A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. For example:
>>>
>>> empty = () >>> singleton = 'hello', # <-- note trailing comma >>> len(empty) 0 >>> len(singleton) 1 >>> singleton ('hello',)
The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also possible:
>>>
>>> x, y, z = t
This is called, appropriately enough, sequence unpacking and works for any sequence on the right-hand side. Sequence unpacking requires that there are as many variables on the left side of the equals sign as there are elements in the sequence. Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.
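For instance, a minimal illustration of packing and unpacking working together is the familiar idiom for swapping two variables:

>>> a, b = 1, 2          # packing on the right, unpacking on the left
>>> a, b = b, a          # swap without a temporary variable
>>> a, b
(2, 1)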
Python also includes a data type for sets. A set is an unordered collection with no duplicate elements. Basic uses include membership testing and eliminating duplicate entries. Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.
Curly braces or the set() function can be used to create sets. Note: to create an empty set you have to use set(), not {}; the latter creates an empty dictionary, a data structure that we discuss in the next section.
Here is a brief demonstration:
>>>
>>> basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'} >>> print(basket) # show that duplicates have been removed {'orange', 'banana', 'pear', 'apple'} >>> 'orange' in basket # fast membership testing True >>> 'crabgrass' in basket False >>> # Demonstrate set operations on unique letters from two words ... >>> a = set('abracadabra') >>> b = set('alacazam') >>> a # unique letters in a {'a', 'r', 'b', 'c', 'd'} >>> a - b # letters in a but not in b {'r', 'd', 'b'} >>> a | b # letters in a or b or both {'a', 'c', 'r', 'd', 'b', 'm', 'z', 'l'} >>> a & b # letters in both a and b {'a', 'c'} >>> a ^ b # letters in a or b but not both {'r', 'd', 'b', 'm', 'z', 'l'}
Similarly to list comprehensions, set comprehensions are also supported:
>>>
>>> a = {x for x in 'abracadabra' if x not in 'abc'} >>> a {'r', 'd'}
Another useful data type built into Python is the dictionary (see Mapping Types --- dict). Dictionaries are sometimes found in other languages as "associative memories" or "associative arrays". Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be strings, numbers, or any other immutable type. Tuples can be used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key. Lists cannot be used as keys, since lists can be modified in place using index assignments, slice assignments, or methods like append() and extend().
It is best to think of a dictionary as a set of key: value pairs, with the requirement that the keys are unique (within one dictionary). A pair of braces creates an empty dictionary: {}. Placing a comma-separated list of key: value pairs within the braces adds initial key: value pairs to the dictionary; this is also the way dictionaries are written on output.
The main operations on a dictionary are storing a value with some key and extracting the value given the key. It is also possible to delete a key: value pair with del. If you store using a key that is already in use, the old value associated with that key is replaced. It is an error to extract a value using a non-existent key.
Performing list(d) on a dictionary returns a list of all the keys used in the dictionary, in insertion order (if you want it sorted, just use sorted(d) instead). To check whether a single key is in the dictionary, use the in keyword.
Here is a small example using a dictionary:
>>>
>>> tel = {'jack': 4098, 'sape': 4139} >>> tel['guido'] = 4127 >>> tel {'jack': 4098, 'sape': 4139, 'guido': 4127} >>> tel['jack'] 4098 >>> del tel['sape'] >>> tel['irv'] = 4127 >>> tel {'jack': 4098, 'guido': 4127, 'irv': 4127} >>> list(tel) ['jack', 'guido', 'irv'] >>> sorted(tel) ['guido', 'irv', 'jack'] >>> 'guido' in tel True >>> 'jack' not in tel False
The dict() constructor builds dictionaries directly from sequences of key-value pairs:
>>>
>>> dict([('sape', 4139), ('guido', 4127), ('jack', 4098)]) {'sape': 4139, 'guido': 4127, 'jack': 4098}
In addition, dict comprehensions can be used to create dictionaries from arbitrary key and value expressions:
>>>
>>> {x: x**2 for x in (2, 4, 6)} {2: 4, 4: 16, 6: 36}
When the keys are simple strings, it is sometimes easier to specify pairs using keyword arguments:
>>>
>>> dict(sape=4139, guido=4127, jack=4098) {'sape': 4139, 'guido': 4127, 'jack': 4098}
When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method:
>>>
>>> knights = {'gallahad': 'the pure', 'robin': 'the brave'} >>> for k, v in knights.items(): ... print(k, v) ... gallahad the pure robin the brave
When looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function:
>>>
>>> for i, v in enumerate(['tic', 'tac', 'toe']): ... print(i, v) ... 0 tic 1 tac 2 toe
To loop over two or more sequences at the same time, the entries can be paired with the zip() function:
>>>
>>> questions = ['name', 'quest', 'favorite color'] >>> answers = ['lancelot', 'the holy grail', 'blue'] >>> for q, a in zip(questions, answers): ... print('What is your {0}? It is {1}.'.format(q, a)) ... What is your name? It is lancelot. What is your quest? It is the holy grail. What is your favorite color? It is blue.
To loop over a sequence in reverse, first specify the sequence in a forward direction and then call the reversed() function:
>>>
>>> for i in reversed(range(1, 10, 2)): ... print(i) ... 9 7 5 3 1
To loop over a sequence in sorted order, use the sorted() function, which returns a new sorted list while leaving the source unaltered:
>>>
>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana'] >>> for i in sorted(basket): ... print(i) ... apple apple banana orange orange pear
Using set() on a sequence eliminates duplicate elements. Combining sorted() with set() is an idiomatic way to loop over the unique elements of a sequence in sorted order:
>>>
>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana'] >>> for f in sorted(set(basket)): ... print(f) ... apple banana orange pear
It is sometimes tempting to change a list while you are looping over it; however, it is often simpler and safer to create a new list instead:
>>>
>>> import math >>> raw_data = [56.2, float('NaN'), 51.7, 55.3, 52.5, float('NaN'), 47.8] >>> filtered_data = [] >>> for value in raw_data: ... if not math.isnan(value): ... filtered_data.append(value) ... >>> filtered_data [56.2, 51.7, 55.3, 52.5, 47.8]
The conditions used in while and if statements can contain any operators, not just comparisons.
The comparison operators in and not in are membership tests that determine whether a value is in (or not in) a container. The operators is and is not compare whether two objects are really the same object. All comparison operators have the same priority, which is lower than that of all numerical operators.
Comparisons can be chained. For example, a < b == c tests whether a is less than b and moreover b equals c.
Comparisons may be combined using the Boolean operators and and or, and the outcome of a comparison (or of any other Boolean expression) may be negated with not. These have lower priority than comparison operators; between them, not has the highest priority and or the lowest, so that A and not B or C is equivalent to (A and (not B)) or C. As always, parentheses can be used to express the desired composition.
The Boolean operators and and or are so-called short-circuit operators: their arguments are evaluated from left to right, and evaluation stops as soon as the outcome is determined. For example, if A and C are true but B is false, A and B and C does not evaluate the expression C. When used as a general value and not as a Boolean, the return value of a short-circuit operator is the last evaluated argument.
It is possible to assign the result of a comparison or other Boolean expression to a variable. For example:
>>>
>>> string1, string2, string3 = '', 'Trondheim', 'Hammer Dance' >>> non_null = string1 or string2 or string3 >>> non_null 'Trondheim'
Note that in Python, unlike C, assignment inside an expression must be done explicitly with the walrus operator :=. This avoids a common class of problems encountered in C programs: typing = in an expression when == was intended.
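As a small sketch of the walrus operator (the variable names here are made up for the example), the assignment happens inside the condition and the bound name can be reused in the body:

>>> n = 10
>>> if (square := n * n) > 50:
...     print(square)
...
100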
Sequence objects typically may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses the Unicode code point number to order individual characters. Some examples of comparisons between sequences of the same type:
(1, 2, 3) < (1, 2, 4)
[1, 2, 3] < [1, 2, 4]
'ABC' < 'C' < 'Pascal' < 'Python'
(1, 2, 3, 4) < (1, 2, 4)
(1, 2) < (1, 2, -1)
(1, 2, 3) == (1.0, 2.0, 3.0)
(1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4)
Note that comparing objects of different types with < or > is legal provided that the objects have appropriate comparison methods. For example, mixed numeric types are compared according to their numeric value, so 0 equals 0.0, and so on. Otherwise, rather than providing an arbitrary ordering, the interpreter will raise a TypeError exception.
There are several ways to present the output of a program; data can be printed in a human-readable form, or written to a file for future use. This chapter will discuss some of the possibilities.
So far we've encountered two ways of writing values: expression statements and the print() function. (A third way is using the write() method of file objects; the standard output file can be referenced as sys.stdout. See the Library Reference for more information on this.)
Often you'll want more control over the formatting of your output than simply printing space-separated values. There are several ways to format output.
To use formatted string literals, begin a string with f or F before the opening quotation mark or triple quotation mark. Inside this string, you can write a Python expression between { and } characters that can refer to variables or literal values.
>>> year = 2016 >>> event = 'Referendum' >>> f'Results of the {year} {event}' 'Results of the 2016 Referendum'
The str.format() method of strings requires more manual effort. You'll still use { and } to mark where a variable will be substituted and can provide detailed formatting directives, but you'll also need to provide the information to be formatted.
>>> yes_votes = 42_572_654 >>> no_votes = 43_132_495 >>> percentage = yes_votes / (yes_votes + no_votes) >>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage) ' 42572654 YES votes 49.67%'
Finally, you can do all the string handling yourself by using string slicing and concatenation operations to create any layout you can imagine. The string type also has some methods that perform useful operations for padding strings to a given column width.
When you don't need fancy output but just want a quick display of some variables for debugging purposes, you can convert any value to a string with the repr() or str() functions.
The str() function is meant to return representations of values which are fairly human-readable, while repr() is meant to generate representations which can be read by the interpreter (or will force a SyntaxError if there is no equivalent syntax). For objects which don't have a particular representation for human consumption, str() will return the same value as repr(). Many values, such as numbers or structures like lists and dictionaries, have the same representation using either function. Strings, in particular, have two distinct representations.
Some examples:
>>>
>>> s = 'Hello, world.' >>> str(s) 'Hello, world.' >>> repr(s) "'Hello, world.'" >>> str(1/7) '0.14285714285714285' >>> x = 10 * 3.25 >>> y = 200 * 200 >>> s = 'The value of x is ' + repr(x) + ', and y is ' + repr(y) + '...' >>> print(s) The value of x is 32.5, and y is 40000... >>> # The repr() of a string adds string quotes and backslashes: ... hello = 'hello, world\n' >>> hellos = repr(hello) >>> print(hellos) 'hello, world\n' >>> # The argument to repr() may be any Python object: ... repr((x, y, ('spam', 'eggs'))) "(32.5, 40000, ('spam', 'eggs'))"
The string module contains a Template class that offers yet another way to substitute values into strings, using placeholders like $x and replacing them with values from a dictionary, but it offers much less control of the formatting.
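For instance, a minimal sketch of the Template approach (the template text and substituted values are invented for the example) looks like this:

>>> from string import Template
>>> t = Template('$villain laid waste to $place')
>>> t.substitute(villain='Baldrick', place='the kitchen')
'Baldrick laid waste to the kitchen'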
Formatted string literals (also called f-strings for short) let you include the value of Python expressions inside a string by prefixing the string with f or F and writing expressions as {expression}.
An optional format specifier can follow the expression. This allows greater control over how the value is formatted. The following example rounds pi to three places after the decimal:
>>>
>>> import math >>> print(f'The value of pi is approximately {math.pi:.3f}.') The value of pi is approximately 3.142.
Passing an integer after the ':' will cause that field to be a minimum number of characters wide. This is useful for making columns line up:
>>>
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678} >>> for name, phone in table.items(): ... print(f'{name:10} ==> {phone:10d}') ... Sjoerd ==> 4127 Jack ==> 4098 Dcab ==> 7678
Other modifiers can be used to convert the value before it is formatted. '!a' applies ascii(), '!s' applies str(), and '!r' applies repr():
>>>
>>> animals = 'eels' >>> print(f'My hovercraft is full of {animals}.') My hovercraft is full of eels. >>> print(f'My hovercraft is full of {animals!r}.') My hovercraft is full of 'eels'.
For a reference on these format specifications, see the reference guide for the Format Specification Mini-Language.
Basic usage of the str.format() method looks like this:
>>>
>>> print('We are the {} who say "{}!"'.format('knights', 'Ni')) We are the knights who say "Ni!"
The brackets and characters within them (called format fields) are replaced with the objects passed into the str.format() method. A number in the brackets can be used to refer to the position of the object passed into the str.format() method.
>>>
>>> print('{0} and {1}'.format('spam', 'eggs')) spam and eggs >>> print('{1} and {0}'.format('spam', 'eggs')) eggs and spam
If keyword arguments are used in the str.format() method, their values are referred to by using the name of the argument.
>>>
>>> print('This {food} is {adjective}.'.format( ... food='spam', adjective='absolutely horrible')) This spam is absolutely horrible.
Positional and keyword arguments can be arbitrarily combined:
>>>
>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred', other='Georg')) The story of Bill, Manfred, and Georg.
If you have a really long format string that you don't want to split up, it would be nice if you could reference the variables to be formatted by name instead of by position. This can be done by simply passing a dict and using square brackets '[]' to access the keys:
>>>
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678} >>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; ' ... 'Dcab: {0[Dcab]:d}'.format(table)) Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This could also be done by passing the table dictionary as keyword arguments with the ** notation.
>>>
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678} >>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table)) Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This is particularly useful in combination with the built-in function vars(), which returns a dictionary containing all local variables.
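A small sketch of that combination (the variable names are arbitrary): since format() simply ignores keys it does not use, the whole namespace returned by vars() can be passed in:

>>> name = 'Guido'
>>> lang = 'Python'
>>> 'My name is {name} and I like {lang}.'.format(**vars())
'My name is Guido and I like Python.'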
As another example, the following lines produce a tidily aligned set of columns giving integers and their squares and cubes:
>>>
>>> for x in range(1, 11): ... print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x)) ... 1 1 1 2 4 8 3 9 27 4 16 64 5 25 125 6 36 216 7 49 343 8 64 512 9 81 729 10 100 1000
For a complete overview of string formatting with str.format(), see Format String Syntax.
Here's the same table of squares and cubes, formatted manually:
>>>
>>> for x in range(1, 11): ... print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ') ... # Note use of 'end' on previous line ... print(repr(x*x*x).rjust(4)) ... 1 1 1 2 4 8 3 9 27 4 16 64 5 25 125 6 36 216 7 49 343 8 64 512 9 81 729 10 100 1000
(Note that the one space between each column was added by the way print() works: it always adds spaces between its arguments.)
The str.rjust() method of string objects right-justifies a string in a field of a given width by padding it with spaces on the left. There are similar methods str.ljust() and str.center(). These methods do not write anything, they just return a new string. If the input string is too long, they don't truncate it, but return it unchanged; this will mess up your column layout, but that's usually better than the alternative, which would be lying about a value. (If you really want truncation, you can always add a slice operation, as in x.ljust(n)[:n].)
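A quick demonstration of the no-truncation behaviour and the slicing workaround mentioned above:

>>> 'Python'.ljust(10) + '|'
'Python    |'
>>> 'Python'.ljust(3)        # too long: returned unchanged, not truncated
'Python'
>>> 'Python'.ljust(3)[:3]    # explicit truncation via slicing
'Pyt'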
There is another method, str.zfill(), which pads a numeric string on the left with zeros. It understands about plus and minus signs:
>>>
>>> '12'.zfill(5) '00012' >>> '-3.14'.zfill(7) '-003.14' >>> '3.14159265359'.zfill(5) '3.14159265359'
The % operator (modulo) can also be used for string formatting. Given 'string' % values, instances of % in string are replaced with zero or more elements of values. This operation is commonly known as string interpolation. For example:
>>>
>>> import math >>> print('The value of pi is approximately %5.3f.' % math.pi) The value of pi is approximately 3.142.
More information can be found in the printf-style String Formatting section.
open() returns a file object, and is most commonly used with two positional arguments and one keyword argument: open(filename, mode, encoding=None)
>>>
>>> f = open('workfile', 'w', encoding="utf-8")
The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file will be used. mode can be 'r' when the file will only be read, 'w' for only writing (an existing file with the same name will be erased), and 'a' for appending; any data written to the file is automatically added to the end. 'r+' opens the file for both reading and writing. The mode argument is optional; 'r' will be assumed if it's omitted.
Normally, files are opened in text mode, which means you read and write strings from and to the file, which are encoded in a specific encoding. If encoding is not specified, the default is platform dependent (see open()). Because UTF-8 is the modern de-facto standard, encoding="utf-8" is recommended unless you know that you need to use a different encoding. Appending a 'b' to the mode opens the file in binary mode. Binary mode data is read and written as bytes objects. You can not specify encoding when opening a file in binary mode.
In text mode, the default when reading is to convert platform-specific line endings (\n on Unix, \r\n on Windows) to just \n. When writing in text mode, the default is to convert occurrences of \n back to platform-specific line endings. This behind-the-scenes modification of file data is fine for text files, but will corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files.
It is good practice to use the with keyword when dealing with file objects. The advantage is that the file is properly closed after its suite finishes, even if an exception is raised at some point. Using with is also much shorter than writing the equivalent try-finally blocks:
>>>
>>> with open('workfile', encoding="utf-8") as f: ... read_data = f.read() >>> # We can check that the file has been automatically closed. >>> f.closed True
If you're not using the with keyword, then you should call f.close() to close the file and immediately free up any system resources used by it.
Warning
Calling f.write() without using the with keyword or calling f.close() might result in the arguments of f.write() not being completely written to the disk, even if the program exits successfully.
After a file object is closed, either by a with statement or by calling f.close(), attempts to use the file object will fail.
>>>
>>> f.close() >>> f.read() Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: I/O operation on closed file.
The rest of the examples in this section assume that a file object called f has already been created.
To read a file's contents, call f.read(size), which reads some quantity of data and returns it as a string (in text mode) or a bytes object (in binary mode). size is an optional numeric argument. When size is omitted or negative, the entire contents of the file will be read and returned; reading a file much larger than your machine's memory is your own problem. Otherwise, at most size characters (in text mode) or size bytes (in binary mode) are read and returned. If the end of the file has been reached, f.read() returns an empty string ('').
>>>
>>> f.read() 'This is the entire file.\n' >>> f.read() ''
f.readline() reads a single line from the file; a newline character (\n) is left at the end of the string, and is only omitted on the last line of the file if the file doesn't end in a newline. This makes the return value unambiguous: if f.readline() returns an empty string, the end of the file has been reached, while a blank line is represented by '\n', a string containing only a single newline.
>>>
>>> f.readline() 'This is the first line of the file.\n' >>> f.readline() 'Second line of the file\n' >>> f.readline() ''
For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:
>>>
>>> for line in f: ... print(line, end='') ... This is the first line of the file. Second line of the file
If you want to read all the lines of a file in a list, you can also use list(f) or f.readlines().
f.write(string) writes the contents of string to the file, returning the number of characters written.
>>>
>>> f.write('This is a test\n') 15
Other types of objects need to be converted to a string (in text mode) or a bytes object (in binary mode) before writing them:
>>>
>>> value = ('the answer', 42) >>> s = str(value) # convert the tuple to string >>> f.write(s) 18
f.tell() returns an integer giving the file object's current position in the file, represented as a number of bytes from the beginning of the file when in binary mode and an opaque number when in text mode.
To change the file object's position, use f.seek(offset, whence). The position is computed by adding offset to a reference point; the reference point is selected by the whence argument. A whence value of 0 measures from the beginning of the file, 1 uses the current file position, and 2 uses the end of the file as the reference point. whence can be omitted and defaults to 0, using the beginning of the file as the reference point.
>>>
>>> f = open('workfile', 'rb+') >>> f.write(b'0123456789abcdef') 16 >>> f.seek(5) # Go to the 6th byte in the file 5 >>> f.read(1) b'5' >>> f.seek(-3, 2) # Go to the 3rd byte before the end 13 >>> f.read(1) b'd'
In text files (those opened without a b in the mode string), only seeks relative to the beginning of the file are allowed (the exception being seeking to the very file end with seek(0, 2)), and the only valid offset values are those returned from f.tell(), or zero. Any other offset value produces undefined behaviour.
File objects have some additional methods, such as isatty() and truncate(), which are less frequently used; consult the Library Reference for a complete guide to file objects.
Strings can easily be written to and read from a file. Numbers take a bit more effort, since the read() method only returns strings, which have to be passed to a function like int(), which takes a string like '123' and returns its numeric value 123. When you want to save more complex data types like nested lists and dictionaries, parsing and serializing by hand becomes complicated.
Rather than having users constantly write and debug code to save complicated data types to files, Python allows you to use the popular data interchange format called JSON (JavaScript Object Notation). The standard module called json can take Python data hierarchies and convert them to string representations; this process is called serializing. Reconstructing the data from the string representation is called deserializing. Between serializing and deserializing, the string representing the object may have been stored in a file or data, or sent over a network connection to some distant machine.
Note
The JSON format is commonly used by modern applications to allow for data exchange. Many programmers are already familiar with it, which makes it a good choice for interoperability.
You can view the JSON string representation of an object with a simple line of code:
>>>
>>> import json >>> x = [1, 'simple', 'list'] >>> json.dumps(x) '[1, "simple", "list"]'
Another variant of the dumps() function, called dump(), simply serializes the object to a text file. So if f is a text file object opened for writing, we can do this:
json.dump(x, f)
To decode the object again, if f is a binary file or text file object which has been opened for reading:
x = json.load(f)
Note
JSON files must be encoded in UTF-8. Use encoding="utf-8" when opening a JSON file as a text file for both reading and writing.
This simple serialization technique can handle lists and dictionaries, but serializing arbitrary class instances in JSON requires a bit of extra effort. The reference for the json module contains an explanation of this.
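One common approach, sketched here rather than taken from the json reference, is to supply a default= callable that turns an otherwise unserializable object into something JSON understands; the Point class below is invented for the example:

import json

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# default= is called for objects json cannot serialize on its own;
# here we fall back to the instance's attribute dictionary.
print(json.dumps(p, default=lambda o: o.__dict__))   # {"x": 1, "y": 2}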
See also
pickle - the pickle module
Contrary to JSON, pickle is a protocol which allows the serialization of arbitrarily complex Python objects. As such, it is specific to Python and cannot be used to communicate with applications written in other languages. It is also insecure by default: deserializing pickle data coming from an untrusted source can execute arbitrary code, if the data was crafted by a skilled attacker.
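For comparison, a minimal pickle sketch (the file name is arbitrary); note the binary mode, and remember to unpickle only data you trust:

import pickle

data = {'answer': 42, 'names': ['Eric', 'John']}

with open('data.pickle', 'wb') as f:      # pickle files are binary
    pickle.dump(data, f)

with open('data.pickle', 'rb') as f:
    restored = pickle.load(f)             # never unpickle untrusted data

print(restored == data)                   # True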
Until now, error messages haven't been more than mentioned, but if you have tried out the examples in earlier parts of this tutorial you have probably seen some. There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.
Syntax errors, also known as parsing errors, are perhaps the most common kind of complaint you get while you are still learning Python:
>>>
>>> while True print('Hello world') File "<stdin>", line 1 while True print('Hello world') ^ SyntaxError: invalid syntax
The parser repeats the offending line and displays a little 'arrow' pointing at the earliest point in the line where the error was detected. The error is caused by (or at least detected at) the token preceding the arrow: in the example, the error is detected at the function print(), since a colon (':') is missing before it. The file name and line number are printed so you know where to look in case the input came from a script.
Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal: you will soon learn how to handle them in Python programs. Most exceptions are not handled by programs, however, and result in error messages like the following:
>>>
>>> 10 * (1/0) Traceback (most recent call last): File "<stdin>", line 1, in <module> ZeroDivisionError: division by zero >>> 4 + spam*3 Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'spam' is not defined >>> '2' + 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only concatenate str (not "int") to str
The last line of the error message indicates what happened. Exceptions come in different types, and the type is printed as part of the message: the types in the example are ZeroDivisionError, NameError and TypeError. The string printed as the exception type is the name of the built-in exception that occurred. This is true for all built-in exceptions, but need not be true for user-defined exceptions (although it is a useful convention). Standard exception names are built-in identifiers (not reserved keywords).
The rest of the line provides detail based on the type of exception and what caused it.
The preceding part of the error message shows the context where the exception occurred, in the form of a stack traceback. In general it contains a stack traceback listing source lines; however, it will not display lines read from standard input.
Built-in Exceptions lists the built-in exceptions and their meanings.
It is possible to write programs that handle selected exceptions. Look at the following example, which asks the user for input until a valid integer has been entered, but allows the user to interrupt the program (using Control-C or whatever the operating system supports); note that a user-generated interruption is signalled by raising the KeyboardInterrupt exception.
>>>
>>> while True: ... try: ... x = int(input("Please enter a number: ")) ... break ... except ValueError: ... print("Oops! That was no valid number. Try again...") ...
The try statement works as follows:
First, the try clause (the statement(s) between the try and except keywords) is executed.
If no exception occurs, the except clause is skipped and execution of the try statement is finished.
If an exception occurs during execution of the try clause, the rest of the clause is skipped. Then, if its type matches the exception named after the except keyword, the except clause is executed, and execution continues after the try/except block.
If an exception occurs which does not match the exception named in the except clause, it is passed on to outer try statements; if no handler is found, it is an unhandled exception and execution stops with a message as shown above.
A try statement may have more than one except clause, to specify handlers for different exceptions. At most one handler will be executed. Handlers only handle exceptions that occur in the corresponding try clause, not in other handlers of the same try statement. An except clause may name multiple exceptions as a parenthesized tuple, for example:
... except (RuntimeError, TypeError, NameError):
...     pass
A class in an except clause is compatible with an exception if it is the same class or a base class thereof (but not the other way around: an except clause listing a derived class is not compatible with a base class). For example, the following code will print B, C, D in that order:
class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

for cls in [B, C, D]:
    try:
        raise cls()
    except D:
        print("D")
    except C:
        print("C")
    except B:
        print("B")
Note that if the except clauses were reversed (with except B first), it would have printed B, B, B --- the first matching except clause is triggered.
When an exception occurs, it may have associated values, also known as the exception's arguments. The presence and types of the arguments depend on the exception type.
The except clause may specify a variable after the exception name. The variable is bound to the exception instance, which typically has an args attribute that stores the arguments. For convenience, builtin exception types define __str__() to print all the arguments without explicitly accessing .args.
>>>
>>> try: ... raise Exception('spam', 'eggs') ... except Exception as inst: ... print(type(inst)) # the exception instance ... print(inst.args) # arguments stored in .args ... print(inst) # __str__ allows args to be printed directly, ... # but may be overridden in exception subclasses ... x, y = inst.args # unpack args ... print('x =', x) ... print('y =', y) ... <class 'Exception'> ('spam', 'eggs') ('spam', 'eggs') x = spam y = eggs
The exception's __str__() output is printed as the last part ('detail') of the message for unhandled exceptions.
BaseException is the common base class of all exceptions. One of its subclasses, Exception, is the base class of all the non-fatal exceptions. Exceptions which are not subclasses of Exception are not typically handled, because they are used to indicate that the program should terminate. They include SystemExit which is raised by sys.exit() and KeyboardInterrupt which is raised when a user wishes to interrupt the program.
Exception can be used as a wildcard that catches (almost) everything. However, it is good practice to be as specific as possible with the types of exceptions that we intend to handle, and to allow any unexpected exceptions to propagate on.
The most common pattern for handling Exception is to print or log the exception and then re-raise it (allowing a caller to handle the exception as well):
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error:", err)
except ValueError:
    print("Could not convert data to an integer.")
except Exception as err:
    print(f"Unexpected {err=}, {type(err)=}")
    raise
The try ... except statement has an optional else clause, which, when present, must follow all except clauses. It is useful for code that must be executed if the try clause does not raise an exception. For example:
for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except OSError:
        print('cannot open', arg)
    else:
        print(arg, 'has', len(f.readlines()), 'lines')
        f.close()
The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn't raised by the code being protected by the try ... except statement.
Exception handlers do not handle only exceptions that occur immediately in the try clause, but also those that occur inside functions that are called (even indirectly) in the try clause. For example:
>>>
>>> def this_fails(): ... x = 1/0 ... >>> try: ... this_fails() ... except ZeroDivisionError as err: ... print('Handling run-time error:', err) ... Handling run-time error: division by zero
The raise statement allows the programmer to force a specified exception to occur. For example:
>>>
>>> raise NameError('HiThere') Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: HiThere
The sole argument to raise indicates the exception to be raised. This must be either an exception instance or an exception class (a class that derives from BaseException, such as Exception or one of its subclasses). If an exception class is passed, it will be implicitly instantiated by calling its constructor with no arguments:
raise ValueError # shorthand for 'raise ValueError()'
If you need to determine whether an exception was raised but don't intend to handle it, a simpler form of the raise statement allows you to re-raise the exception:
>>>
>>> try: ... raise NameError('HiThere') ... except NameError: ... print('An exception flew by!') ... raise ... An exception flew by! Traceback (most recent call last): File "<stdin>", line 2, in <module> NameError: HiThere
The raise statement allows an optional from clause, which enables chaining exceptions. For example:
# exc must be exception instance or None.
raise RuntimeError from exc
This can be useful when you are transforming exceptions. For example:
>>>
>>> def func(): ... raise ConnectionError ... >>> try: ... func() ... except ConnectionError as exc: ... raise RuntimeError('Failed to open database') from exc ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "<stdin>", line 2, in func ConnectionError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 4, in <module> RuntimeError: Failed to open database
Exception chaining happens automatically when an exception is raised inside an except or finally clause. It can be disabled by using the from None idiom:
>>>
>>> try: ... open('database.sqlite') ... except OSError: ... raise RuntimeError from None ... Traceback (most recent call last): File "<stdin>", line 4, in <module> RuntimeError
For more information about chaining mechanics, see Built-in Exceptions.
Programs may name their own exceptions by creating a new exception class (see Classes for more about Python classes). Exceptions should typically be derived from the Exception class, either directly or indirectly.
Exception classes can be defined which do anything any other class can do, but are usually kept simple, often only offering a number of attributes that allow information about the error to be extracted by handlers for the exception.
Most exceptions are defined with names that end in "Error", similar to the naming of the standard exceptions.
Many standard modules define their own exceptions to report errors that may occur in functions they define.
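A minimal sketch of such a user-defined exception (the class name and attributes are invented for the example):

class InputError(Exception):
    """Raised when user input is invalid (hypothetical example)."""
    def __init__(self, message, value):
        super().__init__(message)
        self.value = value

try:
    raise InputError('value out of range', 42)
except InputError as err:
    print(err, '-', err.value)    # value out of range - 42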
The try statement has another optional clause, finally, which is intended to define clean-up actions that must be executed under all circumstances. For example:
>>>
>>> try: ... raise KeyboardInterrupt ... finally: ... print('Goodbye, world!') ... Goodbye, world! KeyboardInterrupt Traceback (most recent call last): File "<stdin>", line 2, in <module>
If a finally clause is present, it will execute as the last task before the try statement completes. The finally clause runs whether or not the try statement produces an exception. The following points discuss more complex cases when an exception occurs:
If an exception occurs during execution of the try clause, the exception may be handled by an except clause. If the exception is not handled by an except clause, it is re-raised after the finally clause has been executed.
An exception could also occur during execution of an except or else clause. Again, the exception is re-raised after the finally clause has been executed.
If the finally clause executes a break, continue or return statement, exceptions are not re-raised.
If the try statement reaches a break, continue or return statement, the finally clause will execute just prior to the break, continue or return statement's execution.
If a finally clause includes a return statement, the returned value will be the one from the finally clause's return statement, not the value from the try clause's return statement.
For example:
>>>
>>> def bool_return(): ... try: ... return True ... finally: ... return False ... >>> bool_return() False
A more complicated example:
>>>
>>> def divide(x, y): ... try: ... result = x / y ... except ZeroDivisionError: ... print("division by zero!") ... else: ... print("result is", result) ... finally: ... print("executing finally clause") ... >>> divide(2, 1) result is 2.0 executing finally clause >>> divide(2, 0) division by zero! executing finally clause >>> divide("2", "1") executing finally clause Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in divide TypeError: unsupported operand type(s) for /: 'str' and 'str'
As you can see, the finally clause is executed in any event. The TypeError raised by dividing two strings is not handled by the except clause and is therefore re-raised after the finally clause has been executed.
In real-world applications, the finally clause is useful for releasing external resources (such as files or network connections), regardless of whether the use of the resource was successful.
Some objects define standard clean-up actions to be undertaken when the object is no longer needed, regardless of whether or not the operation using the object succeeded or failed. Look at the following example, which tries to open a file and print its contents to the screen:
for line in open("myfile.txt"): print(line, end="")
The problem with this code is that it leaves the file open for an indeterminate amount of time after this part of the code has finished executing. This is not an issue in simple scripts, but can be a problem for larger applications. The with statement allows objects like files to be used in a way that ensures they are always cleaned up promptly and correctly:
with open("myfile.txt") as f: for line in f: print(line, end="")
After the statement is executed, the file f is always closed, even if a problem was encountered while processing the lines. Objects which, like files, provide predefined clean-up actions will indicate this in their documentation.
There are situations where it is necessary to report several exceptions that have occurred. This is often the case in concurrency frameworks, when several tasks may have failed in parallel, but there are also other use cases where it is desirable to continue execution and collect multiple errors rather than raise the first exception.
The built-in ExceptionGroup wraps a list of exception instances so that they can be raised together. It is an exception itself, so it can be caught like any other exception.
>>> def f(): ... excs = [OSError('error 1'), SystemError('error 2')] ... raise ExceptionGroup('there were problems', excs) ... >>> f() + Exception Group Traceback (most recent call last): | File "<stdin>", line 1, in <module> | File "<stdin>", line 3, in f | ExceptionGroup: there were problems +-+---------------- 1 ---------------- | OSError: error 1 +---------------- 2 ---------------- | SystemError: error 2 +------------------------------------ >>> try: ... f() ... except Exception as e: ... print(f'caught {type(e)}: e') ... caught <class 'ExceptionGroup'>: e >>>
By using except* instead of except, we can selectively handle only the exceptions in the group that match a certain type. In the following example, which shows a nested exception group, each except* clause extracts from the group exceptions of a certain type while letting all other exceptions propagate to other clauses and eventually to be re-raised.
>>> def f(): ... raise ExceptionGroup("group1", ... [OSError(1), ... SystemError(2), ... ExceptionGroup("group2", ... [OSError(3), RecursionError(4)])]) ... >>> try: ... f() ... except* OSError as e: ... print("There were OSErrors") ... except* SystemError as e: ... print("There were SystemErrors") ... There were OSErrors There were SystemErrors + Exception Group Traceback (most recent call last): | File "<stdin>", line 2, in <module> | File "<stdin>", line 2, in f | ExceptionGroup: group1 +-+---------------- 1 ---------------- | ExceptionGroup: group2 +-+---------------- 1 ---------------- | RecursionError: 4 +------------------------------------ >>>
Note that the exceptions nested in an exception group must be instances, not types. This is because in practice the exceptions would typically be ones that have already been raised and caught by the program, along the following pattern:
>>> excs = [] ... for test in tests: ... try: ... test.run() ... except Exception as e: ... excs.append(e) ... >>> if excs: ... raise ExceptionGroup("Test Failures", excs) ...
When an exception is created in order to be raised, it is usually initialized with information that describes the error that has occurred. There are cases where it is useful to add information after the exception was caught. For this purpose, exceptions have a method add_note(note) that accepts a string and adds it to the exception's notes list. The standard traceback rendering includes all notes, in the order they were added, after the exception.
>>> try: ... raise TypeError('bad type') ... except Exception as e: ... e.add_note('Add some information') ... e.add_note('Add some more information') ... raise ... Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: bad type Add some information Add some more information >>>
For example, when collecting exceptions into an exception group, we may want to add context information for the individual errors. In the following example, each exception in the group has a note indicating when this error has occurred.
>>> def f(): ... raise OSError('operation failed') ... >>> excs = [] >>> for i in range(3): ... try: ... f() ... except Exception as e: ... e.add_note(f'Happened in Iteration {i+1}') ... excs.append(e) ... >>> raise ExceptionGroup('We have some problems', excs) + Exception Group Traceback (most recent call last): | File "<stdin>", line 1, in <module> | ExceptionGroup: We have some problems (3 sub-exceptions) +-+---------------- 1 ---------------- | Traceback (most recent call last): | File "<stdin>", line 3, in <module> | File "<stdin>", line 2, in f | OSError: operation failed | Happened in Iteration 1 +---------------- 2 ---------------- | Traceback (most recent call last): | File "<stdin>", line 3, in <module> | File "<stdin>", line 2, in f | OSError: operation failed | Happened in Iteration 2 +---------------- 3 ---------------- | Traceback (most recent call last): | File "<stdin>", line 3, in <module> | File "<stdin>", line 2, in f | OSError: operation failed | Happened in Iteration 3 +------------------------------------ >>>
Classes provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state, and methods (defined by its class) for modifying that state.
Compared with other programming languages, Python's class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3, and it supports all the standard features of object-oriented programming (OOP): the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at run time, and can be modified further after creation.
In C++ terminology, class members (including data members) are normally public (see Private variables below for the exception), and all member functions are virtual. As in Modula-3, there are no shorthands for referencing an object's members from its methods: the method function is declared with an explicit first parameter representing the object, which is provided implicitly by the call. As in Smalltalk, classes themselves are objects; this provides semantics for importing and renaming. Unlike C++ and Modula-3, built-in types can be used as base classes for extension by the user. Also, like in C++, built-in operators with special syntax (arithmetic operators, subscripting, etc.) can be redefined for class instances.
Due to the lack of accepted terminology about classes , Occasionally used in this chapter Smalltalk and C++ The term of . This chapter also uses Modula-3 The term of ,Modula-3 Object oriented semantic ratio C++ Closer to the Python, But it is estimated that few readers have heard of this language .
Objects have individuality, and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. Newcomers to Python often find this concept hard to appreciate at first, and it can safely be ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects: for example, passing an object is cheap since only a pointer is passed by the implementation, and if a function modifies an object passed as an argument, the caller will see the change. This eliminates the need for two different argument passing mechanisms as in Pascal.
Before introducing classes, something must first be said about Python's scope rules. Class definitions play some neat tricks with namespaces, and you need to know how scopes and namespaces work to fully understand what is going on. Incidentally, knowledge about this subject is useful for any advanced Python programmer.
Let's begin with some definitions.
A namespace is a mapping from names to objects. Most namespaces are currently implemented as Python dictionaries, but that is normally not noticeable in any way (except for performance), and it may change in the future. Examples of namespaces are: the set of built-in names (containing functions such as abs() and built-in exception names), the global names in a module, and the local names in a function invocation. In a sense, the set of attributes of an object also forms a namespace. The important thing to know about namespaces is that there is absolutely no relation between names in different namespaces; for instance, two different modules may both define a maximize function without confusion, since users of the modules must prefix it with the module name.
By the way, the word attribute is used for any name following a dot. For example, in the expression z.real, real is an attribute of the object z. Strictly speaking, references to names in modules are attribute references: in the expression modname.funcname, modname is a module object and funcname is an attribute of it. In this case there happens to be a straightforward mapping between the module's attributes and the global names defined in the module: they share the same namespace! 1
Attributes may be read-only or writable. In the latter case, assignment to attributes is possible. Module attributes are writable: you can write modname.the_answer = 42. Writable attributes may also be deleted with the del statement; for example, del modname.the_answer will remove the attribute the_answer from the object named by modname.
Namespaces are created at different moments and have different lifetimes. The namespace containing the built-in names is created when the Python interpreter starts up, and is never deleted. The global namespace for a module is created when the module definition is read in; normally, module namespaces also last until the interpreter quits. The statements executed by the top-level invocation of the interpreter, either read from a script file or interactively, are considered part of a module called __main__, so they have their own global namespace. (The built-in names actually also live in a module; this is called builtins.)
The local namespace for a function is created when the function is called, and deleted when the function returns or raises an exception that is not handled within the function. (Actually, forgetting would be a better way to describe what really happens.) Of course, recursive invocations each have their own local namespace.
A scope is a textual region of a Python program where a namespace is directly accessible. "Directly accessible" here means that an unqualified reference to a name attempts to find the name in the namespace.
Although scopes are determined statically, they are used dynamically. At any time during execution, there are 3 or 4 nested scopes whose namespaces are directly accessible:
the innermost scope, which is searched first, contains the local names
the scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names
the next-to-last scope contains the current module's global names
the outermost scope, searched last, is the namespace containing built-in names
If a name is declared global, then all references and assignments go directly to the next-to-last scope containing the module's global names. To rebind variables found outside of the innermost scope, the nonlocal statement can be used; if not declared nonlocal, those variables are read-only (an attempt to write to such a variable will simply create a new local variable in the innermost scope, leaving the identically named outer variable unchanged).
Usually, the local scope references the local names of the (textually) current function. Outside functions, the local scope references the same namespace as the global scope: the module's namespace. Class definitions place yet another namespace in the local scope.
It is important to realize that scopes are determined textually: the global scope of a function defined in a module is that module's namespace, no matter from where or by what alias the function is called. On the other hand, the actual search for names is done dynamically, at run time. However, the language definition is evolving towards static name resolution at "compile" time, so don't rely on dynamic name resolution! (In fact, local variables are already determined statically.)
A special quirk of Python is that, if no global or nonlocal statement is in effect, assignments to names always go into the innermost scope. Assignments do not copy data; they just bind names to objects. The same is true for deletions: the statement del x removes the binding of x from the namespace referenced by the local scope. In fact, all operations that introduce new names use the local scope: in particular, import statements and function definitions bind the module or function name in the local scope.
The global statement can be used to indicate that particular variables live in the global scope and should be rebound there; the nonlocal statement indicates that particular variables live in an enclosing scope and should be rebound there.
This is an example demonstrating how to reference the different scopes and namespaces, and how global and nonlocal affect variable binding:
def scope_test():
    def do_local():
        spam = "local spam"

    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"

    def do_global():
        global spam
        spam = "global spam"

    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)

scope_test()
print("In global scope:", spam)
The output of the sample code is :
After local assignment: test spam
After nonlocal assignment: nonlocal spam
After global assignment: nonlocal spam
In global scope: global spam
Note how the local assignment (which is the default) didn't change scope_test's binding of spam. The nonlocal assignment changed scope_test's binding of spam, and the global assignment changed the module-level binding.
You can also see that there was no previous binding for spam before the global assignment.
Classes introduce a little bit of new syntax, three new object types, and some new semantics.
The simplest form of class definition is as follows :
class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>
Class definitions, like function definitions (def statements), must be executed before they have any effect. (You could conceivably place a class definition in a branch of an if statement, or inside a function.)
In practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful; we'll come back to this later. The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods; again, this is explained later.
When a class definition is entered, a new namespace is created, and used as the local scope; thus, all assignments to local variables go into this new namespace. In particular, function definitions bind the name of the new function here.
When a class definition is left normally (via the end), a class object is created. This is basically a wrapper around the contents of the namespace created by the class definition; we'll learn more about class objects in the next section. The original local scope (the one in effect just before the class definition was entered) is reinstated, and the class object is bound here to the class name given in the class definition header (ClassName in the example).
Class objects support two operations : Property references and instantiations .
Attribute references use the standard syntax used for all attribute references in Python: obj.name. Valid attribute names are all the names that were in the class's namespace when the class object was created. So, if the class definition looked like this:
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
then MyClass.i and MyClass.f are valid attribute references, returning an integer and a function object, respectively. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring belonging to the class: "A simple example class".
Class instantiation uses function notation. Just pretend that the class object is a parameterless function that returns a new instance of the class. For example (assuming the above class):
x = MyClass()
creates a new instance of the class and assigns this object to the local variable x.
The instantiation operation ("calling" a class object) creates an empty object. Many classes like to create instances customized to a specific initial state. Therefore a class may define a special method named __init__(), like this:
def __init__(self):
    self.data = []
When a class defines an __init__() method, class instantiation automatically invokes __init__() for the newly created class instance. So in this example, a new, initialized instance can be obtained by:
x = MyClass()
Of course, the __init__() method may have additional arguments for greater flexibility. In that case, arguments given to the class instantiation operator are passed on to __init__(). For example:
>>> class Complex: ... def __init__(self, realpart, imagpart): ... self.r = realpart ... self.i = imagpart ... >>> x = Complex(3.0, -4.5) >>> x.r, x.i (3.0, -4.5)
Now what can we do with instance objects? The only operations understood by instance objects are attribute references. There are two kinds of valid attribute names: data attributes and methods.
Data attributes correspond to "instance variables" in Smalltalk, and to "data members" in C++. Data attributes need not be declared; like local variables, they spring into existence when they are first assigned to. For example, if x is the instance of MyClass created above, the following piece of code will print the value 16, without leaving a trace:
x.counter = 1
while x.counter < 10:
    x.counter = x.counter * 2
print(x.counter)
del x.counter
The other kind of instance attribute reference is a method. A method is a function that "belongs to" an object. (In Python, the term method is not unique to class instances: other object types can have methods as well. For example, list objects have methods called append, insert, remove, sort, and so on. However, in the following discussion, we'll use the term method exclusively to mean methods of class instance objects, unless explicitly stated otherwise.)
Valid method names of an instance object depend on its class. By definition, all attributes of a class that are function objects define corresponding methods of its instances. So in our example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not, since MyClass.i is not a function. However, x.f is not the same thing as MyClass.f: it is a method object, not a function object.
Usually, a method is called right after it is bound:
x.f()
In the MyClass example, this will return the string 'hello world'. However, it is not necessary to call a method right away: x.f is a method object, and can be stored away and called at a later time. For example:
xf = x.f
while True:
    print(xf())
will continue to print hello world until the end of time.
What exactly happens when a method is called? You may have noticed that x.f() was called without an argument above, even though the function definition for f() specified an argument. What happened to the argument? Surely Python raises an exception when a function that requires an argument is called without any, even if the argument isn't actually used...
Actually, you may have guessed the answer: the special thing about methods is that the instance object is passed as the first argument of the function. In our example, the call x.f() is exactly equivalent to MyClass.f(x). In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method's instance object before the first argument.
If you still don't understand how methods work, a look at the implementation can perhaps clarify matters. When a non-data attribute of an instance is referenced, the instance's class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.
Generally speaking, instance variables are for data unique to each instance, and class variables are for attributes and methods shared by all instances of the class:
class Dog:

    kind = 'canine'         # class variable shared by all instances

    def __init__(self, name):
        self.name = name    # instance variable unique to each instance

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.kind                  # shared by all dogs
'canine'
>>> e.kind                  # shared by all dogs
'canine'
>>> d.name                  # unique to d
'Fido'
>>> e.name                  # unique to e
'Buddy'
As discussed in A Word About Names and Objects, shared data can have possibly surprising effects when mutable objects such as lists and dictionaries are involved. For example, the tricks list in the following code should not be used as a class variable, because just a single list would be shared by all Dog instances:
class Dog:

    tricks = []             # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks                # unexpectedly shared by all dogs
['roll over', 'play dead']
The correct class design should use instance variables :
class Dog:

    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']
If the same attribute name occurs in both an instance and in a class, then attribute lookup prioritizes the instance:
>>> class Warehouse: ... purpose = 'storage' ... region = 'west' ... >>> w1 = Warehouse() >>> print(w1.purpose, w1.region) storage west >>> w2 = Warehouse() >>> w2.region = 'east' >>> print(w2.purpose, w2.region) storage east
Data attributes may be referenced by methods as well as by ordinary users ("clients") of an object. In other words, classes are not usable to implement pure abstract data types. In fact, nothing in Python makes it possible to enforce data hiding; it is all based upon convention. (On the other hand, the Python implementation, written in C, can completely hide implementation details and control access to an object if necessary; this can be used by extensions to Python written in C.)
Clients should use data attributes with care; they may mess up invariants maintained by the methods by stamping on the data attributes directly. Note that clients may add data attributes of their own to an instance object without affecting the validity of the methods, as long as name conflicts are avoided; again, a naming convention can save a lot of headaches here.
There is no shorthand for referencing data attributes (or other methods!) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.
Often, the first argument of a method is called self. This is nothing more than a convention: the name self has absolutely no special meaning to Python. Note, however, that by not following the convention your code may be less readable to other Python programmers, and it is also conceivable that a class browser program might be written that relies upon such a convention.
Any function object that is a class attribute defines a method for instances of that class. It is not necessary that the function definition is textually enclosed in the class definition: assigning a function object to a local variable in the class is also ok. For example:
# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g
Now f, g and h are all attributes of class C that refer to function objects, and consequently they are all methods of instances of C, with h being exactly equivalent to g. Note that this practice usually only serves to confuse the reader of a program.
Methods may call other methods by using method attributes of the self argument:
class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)
Methods may reference global names in the same way as ordinary functions. The global scope associated with a method is the module containing its definition. (A class is never used as a global scope.) While one rarely encounters a good reason for using global data in a method, there are many legitimate uses of the global scope: for one thing, functions and modules imported into the global scope can be used by methods, as well as functions and classes defined in it. Usually, the class containing the method is itself defined in this global scope, and in the next section we'll find some good reasons why a method would want to reference its own class.
Each value is an object, and therefore has a class (also called its type). It is stored as object.__class__.
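A quick interactive sketch of inspecting an object's class:
>>> n = 3
>>> n.__class__
<class 'int'>
>>> type('spam') is str      # type() reports the same class object
True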
Of course, a language feature would not be worthy of the name "class" without supporting inheritance. The syntax for a derived class definition looks like this:
class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>
The name BaseClassName must be defined in a scope containing the derived class definition. In place of a base class name, other arbitrary expressions are also allowed. This can be useful, for example, when the base class is defined in another module:
class DerivedClassName(modname.BaseClassName):
Execution of a derived class definition proceeds the same as for a base class. When the class object is constructed, the base class is remembered. This is used for resolving attribute references: if a requested attribute is not found in the class, the search proceeds to look in the base class. This rule is applied recursively if the base class itself is derived from some other class.
There's nothing special about instantiation of derived classes: DerivedClassName() creates a new instance of the class. Method references are resolved as follows: the corresponding class attribute is searched, descending down the chain of base classes if necessary, and the method reference is valid if this yields a function object.
Derived classes may override methods of their base classes. Because methods have no special privileges when calling other methods of the same object, a method of a base class that calls another method defined in the same base class may end up calling a method of a derived class that overrides it. (For C++ programmers: all methods in Python are effectively virtual.)
An overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments). This is occasionally useful to clients as well. (Note that this only works if the base class is accessible as BaseClassName in the global scope.)
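A minimal sketch of this pattern; the class names here are invented purely for illustration:
class Greeter:
    def greet(self):
        return 'Hello'

class LoudGreeter(Greeter):
    def greet(self):
        # call the base class method directly, then extend its result
        return Greeter.greet(self) + '!!!'

print(LoudGreeter().greet())   # prints: Hello!!!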
Python has two built-in functions that work with inheritance:
Use isinstance() to check an instance's type: isinstance(obj, int) will be True only if obj.__class__ is int or some class derived from int.
Use issubclass() to check class inheritance: issubclass(bool, int) is True since bool is a subclass of int. However, issubclass(float, int) is False since float is not a subclass of int.
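A short interactive sketch of both checks:
>>> isinstance(True, int)      # bool is derived from int
True
>>> issubclass(bool, int)
True
>>> issubclass(float, int)
False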
Python also supports a form of multiple inheritance. A class definition with multiple base classes looks like this:
class DerivedClassName(Base1, Base2, Base3):
    <statement-1>
    .
    .
    .
    <statement-N>
For most purposes, in the simplest cases, you can think of the search for attributes inherited from a parent class as depth-first, left-to-right, not searching twice in the same class where there is an overlap in the hierarchy. Thus, if an attribute is not found in DerivedClassName, it is searched for in Base1, then (recursively) in the base classes of Base1, and if it was not found there, it is searched for in Base2, and so on.
In fact, it is slightly more complex than that; the method resolution order changes dynamically to support cooperative calls to super(). This approach is known in some other multiple-inheritance languages as call-next-method and is more powerful than the super call found in single-inheritance languages.
Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or more diamond relationships (where at least one of the parent classes can be accessed through multiple paths from the bottommost class). For example, all classes inherit from object, so any case of multiple inheritance provides more than one path to reach object. To keep the base classes from being accessed more than once, the dynamic algorithm linearizes the search order in a way that preserves the left-to-right ordering specified in each class, that calls each parent only once, and that is monotonic (meaning that a class can be subclassed without affecting the precedence order of its parents). Taken together, these properties make it possible to design reliable and extensible classes with multiple inheritance. For more detail, see The Python 2.3 Method Resolution Order | Python.org.
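A small sketch of a diamond hierarchy (the class names A, B, C and D are invented for illustration); the __mro__ attribute shows the linearized search order just described:
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# left-to-right, each class visited once, object last
print([cls.__name__ for cls in D.__mro__])   # ['D', 'B', 'C', 'A', 'object']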
"Private" instance variables that cannot be accessed except from inside an object don't exist in Python. However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.
Since there is a valid use-case for class-private members (namely to avoid name clashes with names defined by subclasses), there is limited support for such a mechanism, called name mangling. Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.
Name mangling is helpful for letting subclasses override methods without breaking intraclass method calls. For example:
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):

    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)
The above example would work even if MappingSubclass were to introduce a __update identifier, since it is replaced with _Mapping__update in the Mapping class and with _MappingSubclass__update in the MappingSubclass class, respectively.
Note that the mangling rules are designed mostly to avoid accidents; it is still possible to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger.
Notice that code passed to exec() or eval() does not consider the class name of the invoking class to be the current class; this is similar to the effect of the global statement, which is likewise restricted to code that is byte-compiled together. The same restriction applies to getattr(), setattr() and delattr(), as well as to direct references to __dict__.
Sometimes it is useful to have a data type similar to the Pascal "record" or C "struct", bundling together a few named data items. An empty class definition will do nicely:
class Employee:
    pass

john = Employee()  # Create an empty employee record

# Fill the fields of the record
john.name = 'John Doe'
john.dept = 'computer lab'
john.salary = 1000
A piece of Python code that expects a particular abstract data type can often be passed a class that emulates the methods of that data type instead. For instance, if you have a function that formats some data from a file object, you can define a class with methods read() and readline() that get the data from a string buffer instead, and pass it as an argument.
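A minimal sketch of this idea, assuming a hypothetical consumer that only ever calls read() and readline() on the object it is given; in practice the standard library's io.StringIO already provides such an object ready-made:
class StringBufferReader:
    """Feeds data from a string to code that expects a file-like object."""
    def __init__(self, text):
        self.lines = text.splitlines(keepends=True)
        self.pos = 0

    def readline(self):
        if self.pos >= len(self.lines):
            return ''
        line = self.lines[self.pos]
        self.pos += 1
        return line

    def read(self):
        # return everything that has not been consumed yet
        remaining = ''.join(self.lines[self.pos:])
        self.pos = len(self.lines)
        return remaining

buf = StringBufferReader('alpha\nbeta\n')
print(buf.readline(), end='')   # alpha
print(buf.read(), end='')       # beta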
Instance method objects have attributes, too: m.__self__ is the instance object with the method m(), and m.__func__ is the function object corresponding to the method.
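Reusing the MyClass example from earlier, a quick interactive sketch:
>>> x = MyClass()
>>> m = x.f
>>> m.__self__ is x
True
>>> m.__func__ is MyClass.f
True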
By now you have probably noticed that most container objects can be looped over using a for statement:
for element in [1, 2, 3]:
    print(element)
for element in (1, 2, 3):
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
for line in open("myfile.txt"):
    print(line, end='')
This style of access is clear, concise, and convenient. The use of iterators pervades and unifies Python. Behind the scenes, the for statement calls iter() on the container object. The function returns an iterator object that defines the method __next__(), which accesses elements in the container one at a time. When there are no more elements, __next__() raises a StopIteration exception which tells the for loop to terminate. You can call the __next__() method using the next() built-in function; this example shows how it all works:
>>> s = 'abc' >>> it = iter(s) >>> it <str_iterator object at 0x10c90e650> >>> next(it) 'a' >>> next(it) 'b' >>> next(it) 'c' >>> next(it) Traceback (most recent call last): File "<stdin>", line 1, in <module> next(it) StopIteration
Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your classes. Define an __iter__() method which returns an object with a __next__() method. If the class defines __next__(), then __iter__() can just return self:
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]
>>> rev = Reverse('spam') >>> iter(rev) <__main__.Reverse object at 0x00A1DB50> >>> for char in rev: ... print(char) ... m a p s
Generators are a simple and powerful tool for creating iterators. They are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called on a generator, it resumes where it left off (it remembers all the data values and which statement was last executed). An example shows that generators can be trivially easy to create:
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]
>>> for char in reverse('golf'): ... print(char) ... f l o g
Anything that can be done with generators can also be done with the class-based iterators described in the previous section. What makes generators so compact is that the __iter__() and __next__() methods are created automatically.
Another key feature is that the local variables and execution state are automatically saved between calls. This makes the function easier to write and much clearer than an approach using instance variables like self.index and self.data.
In addition to automatic method creation and saving of program state, generators automatically raise StopIteration when they terminate. In combination, these features make it easy to create iterators with no more effort than writing a regular function.
Some simple generators can be coded succinctly as expressions using a syntax similar to list comprehensions, but with parentheses instead of square brackets. These expressions are designed for situations where the generator is used right away by an enclosing function. Generator expressions are more compact but less versatile than full generator definitions, and tend to be more memory friendly than equivalent list comprehensions.
Examples:
>>> sum(i*i for i in range(10)) # sum of squares 285 >>> xvec = [10, 20, 30] >>> yvec = [7, 5, 3] >>> sum(x*y for x,y in zip(xvec, yvec)) # dot product 260 >>> unique_words = set(word for line in page for word in line.split()) >>> valedictorian = max((student.gpa, student.name) for student in graduates) >>> data = 'golf' >>> list(data[i] for i in range(len(data)-1, -1, -1)) ['f', 'l', 'o', 'g']
Footnotes
1
Except for one thing. Module objects have a secret read-only attribute called __dict__ which returns the dictionary used to implement the module's namespace; the name __dict__ is an attribute but not a global name. Obviously, using this violates the abstraction of namespace implementation, and should be restricted to things like post-mortem debuggers.
The os module provides dozens of functions for interacting with the operating system:
>>> import os >>> os.getcwd() # Return the current working directory 'C:\\Python311' >>> os.chdir('/server/accesslogs') # Change current working directory >>> os.system('mkdir today') # Run the command mkdir in the system shell 0
Be sure to use the import os style instead of from os import *. This will keep os.open() from shadowing the built-in open() function, which operates much differently.
The built-in dir() and help() functions are useful as interactive aids for working with large modules like os:
>>> import os >>> dir(os) <returns a list of all module functions> >>> help(os) <returns an extensive manual page created from the module's docstrings>
For daily file and directory management tasks, the shutil module provides a higher level interface that is easier to use:
>>> import shutil >>> shutil.copyfile('data.db', 'archive.db') 'archive.db' >>> shutil.move('/build/executables', 'installdir') 'installdir'
The glob module provides a function for making file lists from directory wildcard searches:
>>> import glob >>> glob.glob('*.py') ['primes.py', 'random.py', 'quote.py']
Common utility scripts often need to process command line arguments. These arguments are stored in the sys module's argv attribute as a list. For instance, the following output results from running python demo.py one two three at the command line:
>>> import sys >>> print(sys.argv) ['demo.py', 'one', 'two', 'three']
The argparse module provides a more sophisticated mechanism to process command line arguments. The following script extracts one or more filenames and an optional number of lines to be displayed:
import argparse

parser = argparse.ArgumentParser(
    prog='top',
    description='Show top lines from each file')
parser.add_argument('filenames', nargs='+')
parser.add_argument('-l', '--lines', type=int, default=10)
args = parser.parse_args()
print(args)
When run at the command line with python top.py --lines=5 alpha.txt beta.txt, the script sets args.lines to 5 and args.filenames to ['alpha.txt', 'beta.txt'].
The sys module also has attributes for stdin, stdout, and stderr. The latter is useful for emitting warnings and error messages to make them visible even when stdout has been redirected:
>>> sys.stderr.write('Warning, log file not found starting a new one\n') Warning, log file not found starting a new one
The most direct way to terminate a script is to use sys.exit().
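A minimal sketch of using it in a script; the usage message and the exit status 1 are arbitrary choices made for illustration:
import sys

if len(sys.argv) < 2:
    sys.stderr.write('usage: demo.py FILENAME\n')
    sys.exit(1)   # a non-zero status signals failure to the shell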
The re module provides regular expression tools for advanced string processing. For complex matching and manipulation, regular expressions offer succinct, optimized solutions:
>>> import re >>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest') ['foot', 'fell', 'fastest'] >>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat') 'cat in the hat'
When only simple capabilities are needed, string methods are preferred because they are easier to read and debug:
>>> 'tea for too'.replace('too', 'two') 'tea for two'
The math module gives access to the underlying C library functions for floating point math:
>>> import math >>> math.cos(math.pi / 4) 0.70710678118654757 >>> math.log(1024, 2) 10.0
The random module provides tools for making random selections:
>>> import random >>> random.choice(['apple', 'pear', 'banana']) 'apple' >>> random.sample(range(100), 10) # sampling without replacement [30, 83, 16, 4, 8, 81, 41, 50, 18, 33] >>> random.random() # random float 0.17970987693706186 >>> random.randrange(6) # random integer chosen from range(6) 4
The statistics module calculates basic statistical properties (the mean, median, variance, etc.) of numeric data:
>>> import statistics >>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5] >>> statistics.mean(data) 1.6071428571428572 >>> statistics.median(data) 1.25 >>> statistics.variance(data) 1.3720238095238095
The SciPy project <https://scipy.org> has many other modules for numerical computations.
There are a number of modules for accessing the internet and processing internet protocols. Two of the simplest are urllib.request for retrieving data from URLs and smtplib for sending mail:
>>> from urllib.request import urlopen >>> with urlopen('http://worldtimeapi.org/api/timezone/etc/UTC.txt') as response: ... for line in response: ... line = line.decode() # Convert bytes to a str ... if line.startswith('datetime'): ... print(line.rstrip()) # Remove trailing newline ... datetime: 2022-01-01T01:36:47.689215+00:00 >>> import smtplib >>> server = smtplib.SMTP('localhost') >>> server.sendmail('[email protected]', '[email protected]', ... """To: [email protected] ... From: [email protected] ... ... Beware the Ides of March. ... """) >>> server.quit()
(Note that the second example needs a mail server running on localhost.)
The datetime module supplies classes for manipulating dates and times in both simple and complex ways. While date and time arithmetic is supported, the focus of the implementation is on efficient member extraction for output formatting and manipulation. The module also supports objects that are timezone aware.
>>> # dates are easily constructed and formatted >>> from datetime import date >>> now = date.today() >>> now datetime.date(2003, 12, 2) >>> now.strftime("%m-%d-%y. %d %b %Y is a %A on the %d day of %B.") '12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.' >>> # dates support calendar arithmetic >>> birthday = date(1964, 7, 31) >>> age = now - birthday >>> age.days 14368
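Since the example above only covers naive dates, here is a short sketch of the timezone-aware side of the module:
from datetime import datetime, timezone

now_utc = datetime.now(timezone.utc)   # an aware datetime carrying tzinfo
print(now_utc.tzinfo)                  # UTC
local = now_utc.astimezone()           # converted to the local timezone
print(local.isoformat())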
Common data archiving and compression formats are directly supported by modules including zlib, gzip, bz2, lzma, zipfile and tarfile:
>>> import zlib >>> s = b'witch which has which witches wrist watch' >>> len(s) 41 >>> t = zlib.compress(s) >>> len(t) 37 >>> zlib.decompress(t) b'witch which has which witches wrist watch' >>> zlib.crc32(s) 226805979
Some Python users develop a deep interest in knowing the relative performance of different approaches to the same problem. Python provides a measurement tool that answers those questions immediately.
For example, it may be tempting to use the tuple packing and unpacking feature instead of the traditional approach to swapping arguments. The timeit module quickly demonstrates a modest performance advantage:
>>> from timeit import Timer >>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit() 0.57535828626024577 >>> Timer('a,b = b,a', 'a=1; b=2').timeit() 0.54962537085770791
In contrast with timeit's fine level of granularity, the profile and pstats modules provide tools for identifying time critical sections in larger blocks of code.
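A minimal sketch of profiling a function with cProfile and printing a sorted report with pstats; the busy_work function is made up purely for illustration:
import cProfile
import pstats

def busy_work():
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
busy_work()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(5)   # show the five most expensive entries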
One approach for developing high quality software is to write tests for each function as it is developed, and to run those tests frequently during the development process.
The doctest module provides a tool for scanning a module and validating tests embedded in a program's docstrings. Test construction is as simple as cutting and pasting a typical call along with its results into the docstring. This improves the documentation by providing the user with an example, and it allows the doctest module to make sure the code remains true to the documentation:
def average(values):
    """Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    """
    return sum(values) / len(values)

import doctest
doctest.testmod()   # automatically validate the embedded tests
The unittest module is not as effortless as the doctest module, but it allows a more comprehensive set of tests to be maintained in a separate file:
import unittest

class TestStatisticalFunctions(unittest.TestCase):

    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)
        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
        with self.assertRaises(ZeroDivisionError):
            average([])
        with self.assertRaises(TypeError):
            average(20, 30, 70)

unittest.main()   # Calling from the command line invokes all tests
Python has a "batteries included" philosophy. This is best seen through the sophisticated and robust capabilities of its larger packages. For example:
The xmlrpc.client and xmlrpc.server modules make implementing remote procedure calls an almost trivial task. Despite the modules' names, no direct knowledge or handling of XML is needed.
The email package is a library for managing email messages, including MIME and other RFC 2822-based message documents. Unlike smtplib and poplib, which actually send and receive messages, the email package has a complete toolset for building or decoding complex message structures (including attachments) and for implementing internet encoding and header protocols.
The json package provides robust support for parsing this popular data interchange format. The csv module supports direct reading and writing of files in Comma-Separated Value format, commonly supported by databases and spreadsheets. XML processing is supported by the xml.etree.ElementTree, xml.dom and xml.sax packages. Together, these modules and packages greatly simplify data interchange between Python applications and other tools.
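A short interactive sketch of round-tripping data through json; the dictionary contents are arbitrary:
>>> import json
>>> text = json.dumps({'name': 'spam', 'sizes': [1, 2, 3]})
>>> text
'{"name": "spam", "sizes": [1, 2, 3]}'
>>> json.loads(text)
{'name': 'spam', 'sizes': [1, 2, 3]}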
The sqlite3 module is a wrapper for the SQLite database library, providing a persistent database that can be updated and accessed using slightly nonstandard SQL syntax.
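A minimal sketch using a throwaway in-memory database; the table name and row values are invented for illustration:
import sqlite3

con = sqlite3.connect(':memory:')      # in-memory database, discarded on close
con.execute('CREATE TABLE movie(title TEXT, year INTEGER)')
con.execute('INSERT INTO movie VALUES (?, ?)', ('Brazil', 1985))
for row in con.execute('SELECT title, year FROM movie'):
    print(row)                         # ('Brazil', 1985)
con.close()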
Internationalization is supported by a number of modules, including gettext, locale, and the codecs package.
This second tour covers more advanced modules that support professional programming needs. These modules rarely occur in small scripts.
The reprlib module provides a version of repr() customized for abbreviated displays of large or deeply nested containers:
>>> import reprlib >>> reprlib.repr(set('supercalifragilisticexpialidocious')) "{'a', 'c', 'd', 'e', 'f', 'g', ...}"
The pprint module offers more sophisticated control over printing both built-in and user defined objects in a way that is readable by the interpreter. When the result is longer than one line, the "pretty printer" adds line breaks and indentation to reveal the data structure more clearly:
>>> import pprint
>>> t = [[[['black', 'cyan'], 'white', ['green', 'red']], [['magenta',
...     'yellow'], 'blue']]]
...
>>> pprint.pprint(t, width=30)
[[[['black', 'cyan'],
   'white',
   ['green', 'red']],
  [['magenta', 'yellow'],
   'blue']]]
The textwrap module formats paragraphs of text to fit a given screen width:
>>> import textwrap
>>> doc = """The wrap() method is just like fill() except that it returns
... a list of strings instead of one big string with newlines to separate
... the wrapped lines."""
...
>>> print(textwrap.fill(doc, width=40))
The wrap() method is just like fill()
except that it returns a list of strings
instead of one big string with newlines
to separate the wrapped lines.
The locale module accesses a database of culture specific data formats. The grouping option of locale's format_string() function provides a direct way of formatting numbers with group separators:
>>> import locale >>> locale.setlocale(locale.LC_ALL, 'English_United States.1252') 'English_United States.1252' >>> conv = locale.localeconv() # get a mapping of conventions >>> x = 1234567.8 >>> locale.format_string("%d", x, grouping=True) '1,234,567' >>> locale.format_string("%s%.*f", (conv['currency_symbol'], ... conv['frac_digits'], x), grouping=True) '$1,234,567.80'
The string module includes a versatile Template class with a simplified syntax suitable for editing by end-users. This allows users to customize their applications without having to alter the application logic.
The format uses placeholder names formed by $ with valid Python identifiers (alphanumeric characters and underscores). Surrounding the placeholder with braces allows it to be followed by more alphanumeric letters with no intervening spaces. Writing $$ creates a single escaped $:
>>> from string import Template >>> t = Template('${village}folk send $$10 to $cause.') >>> t.substitute(village='Nottingham', cause='the ditch fund') 'Nottinghamfolk send $10 to the ditch fund.'
The substitute() method raises a KeyError when a placeholder is not supplied in a dictionary or a keyword argument. For mail-merge style applications, user supplied data may be incomplete, and the safe_substitute() method may be more appropriate: it will leave placeholders unchanged if data is missing.
>>> t = Template('Return the $item to $owner.') >>> d = dict(item='unladen swallow') >>> t.substitute(d) Traceback (most recent call last): ... KeyError: 'owner' >>> t.safe_substitute(d) 'Return the unladen swallow to $owner.'
Template subclasses can specify a custom delimiter. For example, a batch renaming utility for a photo browser may elect to use percent signs for placeholders such as the current date, image sequence number, or file format:
>>> import time, os.path >>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg'] >>> class BatchRename(Template): ... delimiter = '%' >>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format): ') Enter rename style (%d-date %n-seqnum %f-format): Ashley_%n%f >>> t = BatchRename(fmt) >>> date = time.strftime('%d%b%y') >>> for i, filename in enumerate(photofiles): ... base, ext = os.path.splitext(filename) ... newname = t.substitute(d=date, n=i, f=ext) ... print('{0} --> {1}'.format(filename, newname)) img_1074.jpg --> Ashley_0.jpg img_1076.jpg --> Ashley_1.jpg img_1077.jpg --> Ashley_2.jpg
Another application for templating is separating program logic from the details of multiple output formats. This makes it possible to substitute custom templates for XML files, plain text reports, and HTML web reports.
The struct module provides pack() and unpack() functions for working with variable length binary record formats. The following example shows how to loop through header information in a ZIP file without using the zipfile module. Pack codes "H" and "I" represent two and four byte unsigned numbers respectively. The "<" indicates that they are standard size and in little-endian byte order:
import struct

with open('myfile.zip', 'rb') as f:
    data = f.read()

start = 0
for i in range(3):                      # show the first 3 file headers
    start += 14
    fields = struct.unpack('<IIIHH', data[start:start+16])
    crc32, comp_size, uncomp_size, filenamesize, extra_size = fields

    start += 16
    filename = data[start:start+filenamesize]
    start += filenamesize
    extra = data[start:start+extra_size]
    print(filename, hex(crc32), comp_size, uncomp_size)

    start += extra_size + comp_size     # skip to the next header
Threading is a technique for decoupling tasks which are not sequentially dependent. Threads can be used to improve the responsiveness of applications that accept user input while other tasks run in the background. A related use case is running I/O in parallel with computations in another thread.
The following code shows how the high level threading module can run tasks in the background while the main program continues to run:
import threading, zipfile

class AsyncZip(threading.Thread):
    def __init__(self, infile, outfile):
        threading.Thread.__init__(self)
        self.infile = infile
        self.outfile = outfile

    def run(self):
        f = zipfile.ZipFile(self.outfile, 'w', zipfile.ZIP_DEFLATED)
        f.write(self.infile)
        f.close()
        print('Finished background zip of:', self.infile)

background = AsyncZip('mydata.txt', 'myarchive.zip')
background.start()
print('The main program continues to run in foreground.')

background.join()    # Wait for the background task to finish
print('Main program waited until background was done.')
The principal challenge of multi-threaded applications is coordinating threads that share data or other resources. To that end, the threading module provides a number of synchronization primitives, including locks, events, condition variables, and semaphores.
While those tools are powerful, minor design errors can result in problems that are difficult to reproduce. So, the preferred approach to task coordination is to concentrate all access to a resource in a single thread and then use the queue module to feed that thread with requests from other threads. Applications using Queue objects for inter-thread communication and coordination are easier to design, more readable, and more reliable, as the short sketch below illustrates.
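A minimal sketch of that pattern, with hypothetical task payloads: one worker thread owns the shared resource (here it just prints), and other threads only put requests on the queue:
import queue, threading

requests = queue.Queue()

def worker():
    # the only thread that touches the shared resource
    while True:
        item = requests.get()
        if item is None:          # sentinel: no more work
            break
        print('processing', item)
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

for n in range(3):                # producers just enqueue requests
    requests.put(f'task {n}')

requests.put(None)                # tell the worker to stop
t.join()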
The logging module offers a full featured and flexible logging system. At its simplest, log messages are sent to a file or to sys.stderr:
import logging

logging.debug('Debugging information')
logging.info('Informational message')
logging.warning('Warning:config file %s not found', 'server.conf')
logging.error('Error occurred')
logging.critical('Critical error -- shutting down')
This produces the following output :
WARNING:root:Warning:config file server.conf not found ERROR:root:Error occurred CRITICAL:root:Critical error -- shutting down
By default, informational and debugging messages are suppressed and the output is sent to standard error. Other output options include routing messages through email, datagrams, sockets, or to an HTTP server. New filters can select different routing based on message priority: DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The logging system can be configured directly from Python or can be loaded from a user editable configuration file for customized logging without altering the application.
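A small sketch of configuring it directly from Python with logging.basicConfig; the file name and format string here are arbitrary choices:
import logging

logging.basicConfig(
    filename='app.log',          # write records to a file instead of stderr
    level=logging.INFO,          # also let informational messages through
    format='%(asctime)s %(levelname)s %(message)s')

logging.info('Application started')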
Python does automatic memory management (reference counting for most objects and garbage collection to eliminate cycles). The memory is freed shortly after the last reference to it has been eliminated.
This approach works fine for most applications, but occasionally there is a need to track objects only as long as they are being used by something else. Unfortunately, just tracking them creates a reference that makes them permanent. The weakref module provides tools for tracking objects without creating a reference. When the object is no longer needed, it is automatically removed from a weakref table and a callback is triggered for weakref objects. Typical applications include caching objects that are expensive to create:
>>> import weakref, gc
>>> class A:
...     def __init__(self, value):
...         self.value = value
...     def __repr__(self):
...         return str(self.value)
...
>>> a = A(10)                   # create a reference
>>> d = weakref.WeakValueDictionary()
>>> d['primary'] = a            # does not create a reference
>>> d['primary']                # fetch the object if it is still alive
10
>>> del a                       # remove the one reference
>>> gc.collect()                # run garbage collection right away
0
>>> d['primary']                # entry was automatically removed
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    d['primary']                # entry was automatically removed
  File "C:/python311/lib/weakref.py", line 46, in __getitem__
    o = self.data[key]()
KeyError: 'primary'
Many data structure needs can be met with the built-in list type. However, sometimes there is a need for alternative implementations with different performance trade-offs.
The array module provides an array object that is like a list but stores only homogeneous data and stores it more compactly. The following example shows an array of numbers stored as two byte unsigned binary numbers (typecode "H") rather than the usual list of int objects, which typically occupy 16 bytes each:
>>> from array import array
>>> a = array('H', [4000, 10, 700, 22222])
>>> sum(a)
26932
>>> a[1:3]
array('H', [10, 700])
The collections module provides a deque object that is like a list with faster appends and pops from the left side, but slower lookups in the middle. These objects are well suited for implementing queues and breadth-first tree searches:
>>> from collections import deque
>>> d = deque(["task1", "task2", "task3"])
>>> d.append("task4")
>>> print("Handling", d.popleft())
Handling task1
unsearched = deque([starting_node])
def breadth_first_search(unsearched):
    node = unsearched.popleft()
    for m in gen_moves(node):
        if is_goal(m):
            return m
        unsearched.append(m)
In addition to alternative list implementations, the library also offers other tools such as the bisect module with functions for manipulating sorted lists:
>>> import bisect
>>> scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]
>>> bisect.insort(scores, (300, 'ruby'))
>>> scores
[(100, 'perl'), (200, 'tcl'), (300, 'ruby'), (400, 'lua'), (500, 'python')]
The heapq module provides functions for implementing heaps based on regular lists. The lowest valued entry is always kept at position zero. This is useful for applications which repeatedly access the smallest element but do not want to run a full list sort:
>>> from heapq import heapify, heappop, heappush
>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
>>> heapify(data)                      # rearrange the list into heap order
>>> heappush(data, -5)                 # add a new entry
>>> [heappop(data) for i in range(3)]  # fetch the three smallest entries
[-5, 0, 1]
The decimal module offers a Decimal datatype for decimal floating point arithmetic. Compared to the built-in float implementation of binary floating point, the class is especially helpful for
financial applications and other uses which require exact decimal representation,
control over precision,
control over rounding to meet legal or regulatory requirements,
tracking of significant decimal places, or
applications where the user expects the results to match calculations done by hand.
For example, calculating a 5% tax on a 70 cent phone charge gives different results in decimal floating point and binary floating point. The difference becomes significant if the results are rounded to the nearest cent:
>>> from decimal import *
>>> round(Decimal('0.70') * Decimal('1.05'), 2)
Decimal('0.74')
>>> round(.70 * 1.05, 2)
0.73
The Decimal result keeps a trailing zero, automatically inferring four place significance from multiplicands with two place significance. Decimal reproduces mathematics as done by hand and avoids issues that can arise when binary floating point cannot exactly represent decimal quantities.
Exact representation enables the Decimal class to perform modulo calculations and equality tests that are unsuitable for binary floating point:
>>> Decimal('1.00') % Decimal('.10')
Decimal('0.00')
>>> 1.00 % 0.10
0.09999999999999995
>>> sum([Decimal('0.1')]*10) == Decimal('1.0')
True
>>> sum([0.1]*10) == 1.0
False
The decimal module provides arithmetic with as much precision as needed:
>>> getcontext().prec = 36
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857')
This chapter explains the meaning of the elements of expressions in Python.
Syntax Notes: In this and the following chapters, extended BNF notation will be used to describe syntax, not lexical analysis. When (one alternative of) a syntax rule has the form

name ::= othername

and no semantics are given, the semantics of this form of name are the same as for othername.
When a description of an arithmetic operator below uses the phrase "the numeric arguments are converted to a common type", this means that the operator implementation for built-in types works as follows:
If either argument is a complex number, the other is converted to complex;
otherwise, if either argument is a floating point number, the other is converted to floating point;
otherwise, both must be integers and no conversion is necessary.
Some additional rules apply for certain operators (e.g., a string as a left argument to the '%' operator). Extensions must define their own conversion behavior.
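As a quick illustration of these conversion rules for the built-in numeric types (the values themselves are arbitrary):

# The "wider" type wins before the operation is performed.
print(type(2 + 3))      # int + int     -> <class 'int'>
print(type(2 + 3.0))    # int + float   -> <class 'float'>
print(type(2 + 3j))     # int + complex -> <class 'complex'>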
Atoms are the most basic elements of expressions. The simplest atoms are identifiers or literals. Forms enclosed in parentheses, brackets or braces are also categorized syntactically as atoms. The syntax for atoms is:

atom      ::= identifier | literal | enclosure
enclosure ::= parenth_form | list_display | dict_display | set_display
              | generator_expression | yield_atom
An identifier occurring as an atom is a name. See section Identifiers and keywords for its lexical definition and section Naming and binding for documentation of naming and binding.
When the name is bound to an object, evaluation of the atom yields that object. When a name is not bound, an attempt to evaluate it raises a NameError exception.
Private name mangling: When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class. Private names are transformed to a longer form before code is generated for them. The transformation inserts the class name, with leading underscores removed and a single underscore inserted, in front of the name. For example, the identifier __spam occurring in a class named Ham will be transformed to _Ham__spam. This transformation is independent of the syntactical context in which the identifier is used. If the transformed name is extremely long (longer than 255 characters), implementation defined truncation may happen. If the class name consists only of underscores, no transformation is done.
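A short illustration of the transformation; the class and attribute names here are arbitrary examples:

class Ham:
    def __init__(self):
        self.__spam = 42        # stored under the mangled name _Ham__spam

h = Ham()
print(h._Ham__spam)             # 42 -- the mangled name is still reachable
# print(h.__spam)               # outside the class this raises AttributeError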
Python supports string and bytes literals and various numeric literals:

literal ::= stringliteral | bytesliteral
            | integer | floatnumber | imagnumber

Evaluation of a literal yields an object of the given type (string, bytes, integer, floating point number, complex number) with the given value. The value may be approximated in the case of floating point and imaginary (complex) literals. See the section on literals for details.
All literals correspond to immutable data types, and hence the object's identity is less important than its value. Multiple evaluations of literals with the same value (either the same occurrence in the program text or a different occurrence) may obtain the same object or a different object with the same value.
A parenthesized form is an optional expression list enclosed in parentheses:

parenth_form ::= "(" [starred_expression] ")"

A parenthesized expression list yields whatever that expression list yields: if the list contains at least one comma, it yields a tuple; otherwise, it yields the single expression that makes up the expression list.
An empty pair of parentheses yields an empty tuple object. Since tuples are immutable, the same rules as for literals apply (i.e., two occurrences of the empty tuple may or may not yield the same object).
Note that tuples are not formed by the parentheses, but rather by use of the comma. The exception is the empty tuple, for which parentheses are required -- allowing unparenthesized "nothing" in expressions would cause ambiguities and allow common typos to pass uncaught.
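A brief illustration of these rules (the variable names are arbitrary):

t1 = 1, 2, 3        # a tuple of three items -- the commas do the work
t2 = (1,)           # a one-item tuple; the trailing comma is what matters
t3 = (1)            # just the integer 1, not a tuple
t4 = ()             # the empty tuple is the one case that needs parentheses
print(type(t1), type(t2), type(t3), type(t4))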
For constructing a list, a set or a dictionary Python provides special syntax called "displays", each of them in two flavors:
either the container contents are listed explicitly, or
they are computed via a set of looping and filtering instructions, called a comprehension.
Common syntax elements for comprehensions are:

comprehension ::= assignment_expression comp_for
comp_for      ::= ["async"] "for" target_list "in" or_test [comp_iter]
comp_iter     ::= comp_for | comp_if
comp_if       ::= "if" or_test [comp_iter]
The comprehension consists of a single expression followed by at least one for clause and zero or more for or if clauses. In this case, the elements of the new container are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce an element each time the innermost block is reached.
However, aside from the iterable expression in the leftmost for clause, the comprehension is executed in a separate implicitly nested scope. This ensures that names assigned to in the target list don't "leak" into the enclosing scope.
The iterable expression in the leftmost for clause is evaluated directly in the enclosing scope and then passed as an argument to the implicitly nested scope. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable. For example: [x*y for x in range(10) for y in range(x, x+10)]. (The equivalent nested loops are sketched below.)
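A sketch of how the example above nests, from left to right, assuming we collect the elements into a list named result:

result = []
for x in range(10):
    for y in range(x, x + 10):
        result.append(x * y)
# result == [x*y for x in range(10) for y in range(x, x+10)]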
To ensure the comprehension always results in a container of the appropriate type, yield and yield from expressions are prohibited in the implicitly nested scope.
Since Python 3.6, in an async def function, an async for clause may be used to iterate over an asynchronous iterator. A comprehension in an async def function may consist of either a for or async for clause following the leading expression, may contain additional for or async for clauses, and may also use await expressions. If a comprehension contains either async for clauses or await expressions or other asynchronous comprehensions it is called an asynchronous comprehension. An asynchronous comprehension may suspend the execution of the coroutine function in which it appears. See also PEP 530.
New in version 3.6: Asynchronous comprehensions were introduced.
Changed in version 3.8: yield and yield from are prohibited in the implicitly nested scope.
Changed in version 3.11: Asynchronous comprehensions are now allowed inside comprehensions in asynchronous functions. Outer comprehensions implicitly become asynchronous.
A list display is a possibly empty series of expressions enclosed in square brackets:

list_display ::= "[" [starred_list | comprehension] "]"

A list display yields a new list object, the contents being specified by either a list of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and placed into the list object in that order. When a comprehension is supplied, the list is constructed from the elements resulting from the comprehension.
A set display is denoted by curly braces and distinguishable from dictionary displays by the lack of colons separating keys and values:

set_display ::= "{" (starred_list | comprehension) "}"

A set display yields a new mutable set object, the contents being specified by either a sequence of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and added to the set object. When a comprehension is supplied, the set is constructed from the elements resulting from the comprehension.
An empty set cannot be constructed with {}; this literal constructs an empty dictionary.
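A brief illustration (the values are arbitrary):

s = {1, 2, 3}          # a set display
empty_set = set()      # the only way to write an empty set
empty_dict = {}        # {} is an empty dict, not an empty set
print(type(s), type(empty_set), type(empty_dict))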
A dictionary display is a possibly empty series of key/datum pairs enclosed in curly braces:

dict_display       ::= "{" [key_datum_list | dict_comprehension] "}"
key_datum_list     ::= key_datum ("," key_datum)* [","]
key_datum          ::= expression ":" expression | "**" or_expr
dict_comprehension ::= expression ":" expression comp_for

A dictionary display yields a new dictionary object.
If a comma-separated sequence of key/datum pairs is given, they are evaluated from left to right to define the entries of the dictionary: each key object is used as a key into the dictionary to store the corresponding datum. This means that you can specify the same key multiple times in the key/datum list, and the final dictionary's value for that key will be the last one given.
A double asterisk ** denotes dictionary unpacking. Its operand must be a mapping. Each mapping item is added to the new dictionary. Later values replace values already set by earlier key/datum pairs and earlier dictionary unpackings.
New in version 3.5: Unpacking into dictionary displays, originally proposed by PEP 448.
A dict comprehension, in contrast to list and set comprehensions, needs two expressions separated with a colon followed by the usual "for" and "if" clauses. When the comprehension is run, the resulting key and value elements are inserted in the new dictionary in the order they are produced.
Restrictions on the types of the key values are listed earlier in section The standard type hierarchy. (To summarize, the key type should be hashable, which excludes all mutable objects.) Clashes between duplicate keys are not detected; the last datum (textually rightmost in the display) stored for a given key value prevails.
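A short illustration of the "later entries win" rule, for repeated keys and for ** unpacking alike (the dictionaries are arbitrary examples):

defaults = {'colour': 'red', 'size': 'M'}
d = {'colour': 'blue', **defaults, 'size': 'L'}
print(d)    # {'colour': 'red', 'size': 'L'} -- the unpacking overrode
            # 'colour', and the later explicit pair overrode 'size'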
Changed in version 3.8: Prior to Python 3.8, in dict comprehensions, the evaluation order of key and value was not well-defined. In CPython, the value was evaluated before the key. Starting with 3.8, the key is evaluated before the value, as proposed by PEP 572.
A generator expression is a compact generator notation in parentheses:

generator_expression ::= "(" expression comp_for ")"

A generator expression yields a new generator object. Its syntax is the same as for comprehensions, except that it is enclosed in parentheses instead of brackets or curly braces.
Variables used in the generator expression are evaluated lazily when the __next__() method is called for the generator object (in the same fashion as normal generators). However, the iterable expression in the leftmost for clause is immediately evaluated, so that an error produced by it will be emitted at the point where the generator expression is defined, rather than at the point where the first value is retrieved. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable. For example: (x*y for x in range(10) for y in range(x, x+10)).
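A small illustration of the difference between immediate and lazy evaluation (gen is an arbitrary name):

# The leftmost iterable is evaluated right away, so this line would raise
# TypeError at definition time, before any value is requested:
# gen = (x * 2 for x in 10)     # TypeError: 'int' object is not iterable

# A valid generator expression only does work when iterated:
gen = (x * 2 for x in range(3))
print(next(gen))    # 0
print(list(gen))    # [2, 4] -- the remaining values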
The parentheses can be omitted on calls with only one argument. See section Calls for details.
To avoid interfering with the expected operation of the generator expression itself, yield and yield from expressions are prohibited in the implicitly defined generator.
If a generator expression contains either async for clauses or await expressions, it is called an asynchronous generator expression. An asynchronous generator expression returns a new asynchronous generator object, which is an asynchronous iterator (see Asynchronous Iterators).
New in version 3.6: Asynchronous generator expressions were introduced.
Changed in version 3.7: Prior to Python 3.7, asynchronous generator expressions could only appear in async def coroutines. Starting with 3.7, any function can use asynchronous generator expressions.
Changed in version 3.8: yield and yield from are prohibited in the implicitly nested scope.
yield_atom       ::= "(" yield_expression ")"
yield_expression ::= "yield" [expression_list | "from" expression]
The yield expression is used when defining a generator function or an asynchronous generator function and thus can only be used in the body of a function definition. Using a yield expression in a function's body causes that function to be a generator function, and using it in an async def function's body causes that coroutine function to be an asynchronous generator function. For example:
def gen():  # defines a generator function
    yield 123

async def agen():  # defines an asynchronous generator function
    yield 123
Due to their side effects on the containing scope, yield expressions are not permitted as part of the implicitly defined scopes used to implement comprehensions and generator expressions.
Changed in version 3.8: yield expressions are prohibited in the implicitly nested scopes used to implement comprehensions and generator expressions.
Generator functions are described below, while asynchronous generator functions are described separately in section Asynchronous generator functions.
When a generator function is called, it returns an iterator known as a generator. That generator then controls the execution of the generator function. The execution starts when one of the generator's methods is called. At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of expression_list to the generator's caller. By suspended, we mean that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling. When the execution is resumed by calling one of the generator's methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __next__() is used (typically via either a for or the next() builtin) then the result is None. Otherwise, if send() is used, then the result will be the value passed in to that method.
All of this makes generator functions quite similar to coroutines: they yield multiple times, they have more than one entry point and their execution can be suspended. The only difference is that a generator function cannot control where the execution should continue after it yields; control is always transferred to the generator's caller.
Yield expressions are allowed anywhere in a try construct. If the generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), the generator-iterator's close() method will be called, allowing any pending finally clauses to execute.
When yield from <expr> is used, the supplied expression must be an iterable. The values produced by iterating that iterable are passed directly to the caller of the current generator's methods. Any values passed in with send() and any exceptions passed in with throw() are passed to the underlying iterator if it has the appropriate methods. If this is not the case, then send() will raise AttributeError or TypeError, while throw() will just raise the passed in exception immediately.
When the underlying iterator is complete, the value attribute of the raised StopIteration instance becomes the value of the yield expression. It can be either set explicitly when raising StopIteration, or automatically when the subiterator is a generator (by returning a value from the subgenerator).
Changed in version 3.3: Added yield from <expr> to delegate control flow to a subiterator.
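A minimal sketch of such delegation; the function names are arbitrary, and the return value of the subgenerator becomes the value of the yield from expression:

def subgen():
    yield 1
    yield 2
    return 'done'                       # carried in StopIteration.value

def outer():
    result = yield from subgen()        # yields 1 and 2 to our caller
    yield f'subgenerator said: {result}'

print(list(outer()))    # [1, 2, 'subgenerator said: done']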
The parentheses may be omitted when the yield expression is the sole expression on the right hand side of an assignment statement.
See also
PEP 255 - Simple Generators
The proposal for adding generators and the yield statement to Python.
PEP 342 - Coroutines via Enhanced Generators
The proposal to enhance the API and syntax of generators, making them usable as simple coroutines.
PEP 380 - Syntax for Delegating to a Subgenerator
The proposal to introduce the yield_from syntax, making delegation to subgenerators easy.
PEP 525 - Asynchronous Generators
The proposal that expanded on PEP 492 by adding generator capabilities to coroutine functions.
6.2.9.1. Generator-iterator methods
This subsection describes the methods of a generator iterator. They can be used to control the execution of a generator function.
Note that calling any of the generator methods below when the generator is already executing raises a ValueError exception.
generator.__next__()
Starts the execution of a generator function or resumes it at the last executed yield expression. When a generator function is resumed with a __next__() method, the current yield expression always evaluates to None. The execution then continues to the next yield expression, where the generator is suspended again, and the value of the expression_list is returned to __next__()'s caller. If the generator exits without yielding another value, a StopIteration exception is raised.
This method is normally called implicitly, e.g. by a for loop, or by the built-in next() function.
generator.send(value)
Resumes the execution and "sends" a value into the generator function. The value argument becomes the result of the current yield expression. The send() method returns the next value yielded by the generator, or raises StopIteration if the generator exits without yielding another value. When send() is called to start the generator, it must be called with None as the argument, because there is no yield expression that could receive the value.
generator.throw(value)
generator.throw(type[, value[, traceback]])
Raises an exception at the point where the generator was paused, and returns the next value yielded by the generator function. If the generator exits without yielding another value, a StopIteration exception is raised. If the generator function does not catch the passed-in exception, or raises a different exception, then that exception propagates to the caller.
In typical use, this is called with a single exception instance similar to the way the raise keyword is used.
For backwards compatibility, however, the second signature is supported, following a convention from older versions of Python. The type argument should be an exception class, and value should be an exception instance. If the value is not provided, the type constructor is called to get an instance. If traceback is provided, it is set on the exception, otherwise any existing __traceback__ attribute stored in value may be cleared.
generator.close()
Raises a GeneratorExit at the point where the generator function was paused. If the generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), close returns to its caller. If the generator yields a value, a RuntimeError is raised. If the generator raises any other exception, it is propagated to the caller. close() does nothing if the generator has already exited due to an exception or normal exit.
6.2.9.2. Example
Here is a simple example that demonstrates the behavior of generators and generator functions:
>>> def echo(value=None):
...     print("Execution starts when 'next()' is called for the first time.")
...     try:
...         while True:
...             try:
...                 value = (yield value)
...             except Exception as e:
...                 value = e
...     finally:
...         print("Don't forget to clean up when 'close()' is called.")
...
>>> generator = echo(1)
>>> print(next(generator))
Execution starts when 'next()' is called for the first time.
1
>>> print(next(generator))
None
>>> print(generator.send(2))
2
>>> generator.throw(TypeError, "spam")
TypeError('spam',)
>>> generator.close()
Don't forget to clean up when 'close()' is called.
For examples using yield from, see PEP 380: Syntax for Delegating to a Subgenerator in "What's New in Python".
6.2.9.3. Asynchronous generator functions
The presence of a yield expression in a function or method defined using async def further defines the function as an asynchronous generator function.
When an asynchronous generator function is called, it returns an asynchronous iterator known as an asynchronous generator object. That object then controls the execution of the generator function. An asynchronous generator object is typically used in an async for statement in a coroutine function analogously to how a generator object would be used in a for statement.
Calling one of the asynchronous generator's methods returns an awaitable object, and the execution starts when this object is awaited on. At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of expression_list to the awaiting coroutine. As with a generator, suspension means that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling. When the execution is resumed by awaiting on the next object returned by the asynchronous generator's methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __anext__() is used then the result is None. Otherwise, if asend() is used, then the result will be the value passed in to that method.
If an asynchronous generator happens to exit early by break, the caller task being cancelled, or other exceptions, the generator's async cleanup code will run and possibly raise exceptions or access context variables in an unexpected context -- perhaps after the lifetime of tasks it depends on, or during the event loop shutdown when the async-generator garbage collection hook is called. To prevent this, the caller must explicitly close the async generator by calling the aclose() method to finalize the generator and ultimately detach it from the event loop.
In an asynchronous generator function, yield expressions are allowed anywhere in a try construct. However, if an asynchronous generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), then a yield expression within a try construct could result in a failure to execute pending finally clauses. In this case, it is the responsibility of the event loop or scheduler running the asynchronous generator to call the asynchronous generator-iterator's aclose() method and run the resulting coroutine object, thus allowing any pending finally clauses to execute.
To take care of finalization upon event loop termination, an event loop should define a finalizer function which takes an asynchronous generator-iterator and presumably calls aclose() and executes the coroutine. This finalizer may be registered by calling sys.set_asyncgen_hooks(). When first iterated over, an asynchronous generator-iterator will store the registered finalizer to be called upon finalization. For a reference example of a finalizer method see the implementation of asyncio.Loop.shutdown_asyncgens in Lib/asyncio/base_events.py.
The expression yield from <expr> is a syntax error when used in an asynchronous generator function.
6.2.9.4. Asynchronous generator-iterator methods
This subsection describes the methods of an asynchronous generator iterator, which are used to control the execution of a generator function.
coroutine agen.__anext__()
Returns an awaitable which when run starts to execute the asynchronous generator or resumes it at the last executed yield expression. When an asynchronous generator function is resumed with an __anext__() method, the current yield expression always evaluates to None in the returned awaitable, which when run will continue to the next yield expression. The value of the expression_list of the yield expression is the value of the StopIteration exception raised by the completing coroutine. If the asynchronous generator exits without yielding another value, the awaitable instead raises a StopAsyncIteration exception, signalling that the asynchronous iteration has completed.
This method is typically called implicitly by an async for loop.
coroutine agen.asend(value)
Returns an awaitable which when run resumes the execution of the asynchronous generator. As with the send() method for a generator, this "sends" a value into the asynchronous generator function, and the value argument becomes the result of the current yield expression. The awaitable returned by the asend() method will return the next value yielded by the generator as the value of the raised StopIteration, or raises StopAsyncIteration if the asynchronous generator exits without yielding another value. When asend() is called to start the asynchronous generator, it must be called with None as the argument, because there is no yield expression that could receive the value.
coroutine agen.athrow(type[, value[, traceback]])
Returns an awaitable that raises an exception of type type at the point where the asynchronous generator was paused, and returns the next value yielded by the generator function as the value of the raised StopIteration exception. If the asynchronous generator exits without yielding another value, a StopAsyncIteration exception is raised by the awaitable. If the generator function does not catch the passed-in exception, or raises a different exception, then when the awaitable is run that exception propagates to the caller of the awaitable.
coroutine agen.aclose()
Returns an awaitable that when run will throw a GeneratorExit into the asynchronous generator function at the point where it was paused. If the asynchronous generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), then the returned awaitable will raise a StopIteration exception. Any further awaitables returned by subsequent calls to the asynchronous generator will raise a StopAsyncIteration exception. If the asynchronous generator yields a value, a RuntimeError is raised by the awaitable. If the asynchronous generator raises any other exception, it is propagated to the caller of the awaitable. If the asynchronous generator has already exited due to an exception or normal exit, then further calls to aclose() will return an awaitable that does nothing.
Primaries represent the most tightly bound operations of the language. Their syntax is:
primary ::= atom | attributeref | subscription | slicing | call
An attribute reference is a primary followed by a period and a name:
attributeref ::= primary "." identifier
The primary must evaluate to an object of a type that supports attribute references, which most objects do. This object is then asked to produce the attribute whose name is the identifier. This production can be customized by overriding the __getattr__() method. If this attribute is not available, the exception AttributeError is raised. Otherwise, the type and value of the object produced is determined by the object. Multiple evaluations of the same attribute reference may yield different objects.
The subscription of an instance of a container class will generally select an element from the container. The subscription of a generic class will generally return a GenericAlias object.
subscription ::= primary "[" expression_list "]"
When an object is subscripted, the interpreter will evaluate the primary and the expression list.
The primary must evaluate to an object that supports subscription. An object may support subscription through defining one or both of __getitem__() and __class_getitem__(). When the primary is subscripted, the evaluated result of the expression list will be passed to one of these methods. For more details on when __class_getitem__
is called instead of __getitem__
, see __class_getitem__ versus __getitem__.
If the expression list contains at least one comma, it will evaluate to a tuple containing the items of the expression list. Otherwise, the expression list will evaluate to the value of the list's sole member.
For built-in objects, there are two types of objects that support subscription via __getitem__():
Mappings. If the primary is a mapping, the expression list must evaluate to an object whose value is one of the keys of the mapping, and the subscription selects the value in the mapping that corresponds to that key. An example of a builtin mapping class is the dict class.
Sequences. If the primary is a sequence, the expression list must evaluate to an int or a slice (as discussed in the following section). Examples of builtin sequence classes include the str, list and tuple classes.
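A brief illustration of the built-in subscription flavours described above (the objects themselves are arbitrary examples):

d = {'north': 1, 'south': 2}
print(d['north'])     # 1   -- mapping: the expression list is used as a key
t = ('a', 'b', 'c')
print(t[2])           # 'c' -- sequence: the expression list must be an int
print(list[int])      # list[int] -- subscribing a generic class returns
                      # a GenericAlias object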
The formal syntax makes no special provision for negative indices in sequences. However, built-in sequences all provide a __getitem__() method that interprets negative indices by adding the length of the sequence to the index so that, for example, x[-1] selects the last item of x. The resulting value must be a nonnegative integer less than the number of items in the sequence, and the subscription selects the item whose index is that value (counting from zero). Since the support for negative indices and slicing occurs in the object's __getitem__() method, subclasses overriding this method will need to explicitly add that support.
A string is a special kind of sequence whose items are characters. A character is not a separate data type but a string of exactly one character.
A slicing selects a range of items in a sequence object (e.g., a string, tuple or list). Slicings may be used as expressions or as targets in assignment or del statements. The syntax for a slicing:

slicing      ::= primary "[" slice_list "]"
slice_list   ::= slice_item ("," slice_item)* [","]
slice_item   ::= expression | proper_slice
proper_slice ::= [lower_bound] ":" [upper_bound] [ ":" [stride] ]
lower_bound  ::= expression
upper_bound  ::= expression
stride       ::= expression

There is ambiguity in the formal syntax here: anything that looks like an expression list also looks like a slice list, so any subscription can be interpreted as a slicing. Rather than further complicating the syntax, this is disambiguated by defining that in this case the interpretation as a subscription takes priority over the interpretation as a slicing (this is the case if the slice list contains no proper slice).
The semantics for a slicing are as follows. The primary is indexed (using the same __getitem__() method as normal subscription) with a key that is constructed from the slice list, as follows. If the slice list contains at least one comma, the key is a tuple containing the conversion of the slice items; otherwise, the conversion of the lone slice item is the key. The conversion of a slice item that is an expression is that expression. The conversion of a proper slice is a slice object (see section The standard type hierarchy) whose start, stop and step attributes are the values of the expressions given as lower bound, upper bound and stride, respectively, substituting None for missing expressions.
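A short illustration of this conversion (the string is an arbitrary example):

s = 'abcdefgh'
print(s[1:6:2])                       # 'bdf'
print(s[slice(1, 6, 2)])              # 'bdf' -- the equivalent explicit key
print(s.__getitem__(slice(1, 6, 2)))  # the call the interpreter makes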
A call calls a callable object (e.g., a function) with a possibly empty series of arguments:

call                 ::= primary "(" [argument_list [","] | comprehension] ")"
argument_list        ::= positional_arguments ["," starred_and_keywords]
                           ["," keywords_arguments]
                         | starred_and_keywords ["," keywords_arguments]
                         | keywords_arguments
positional_arguments ::= positional_item ("," positional_item)*
positional_item      ::= assignment_expression | "*" expression
starred_and_keywords ::= ("*" expression | keyword_item)
                         ("," "*" expression | "," keyword_item)*
keywords_arguments   ::= (keyword_item | "**" expression)
                         ("," keyword_item | "," "**" expression)*
keyword_item         ::= identifier "=" expression

An optional trailing comma may be present after the positional and keyword arguments but does not affect the semantics.
The primary must evaluate to a callable object (user-defined functions, built-in functions, methods of built-in objects, class objects, methods of class instances, and all objects having a __call__() method are callable). All argument expressions are evaluated before the call is attempted. Please refer to section Function definitions for the syntax of formal parameter lists.
If keyword arguments are present, they are first converted to positional arguments, as follows. First, a list of unfilled slots is created for the formal parameters. If there are N positional arguments, they are placed in the first N slots. Next, for each keyword argument, the identifier is used to determine the corresponding slot (if the identifier is the same as the first formal parameter name, the first slot is used, and so on). If the slot is already filled, a TypeError exception is raised. Otherwise, the argument is placed in the slot, filling it (even if the expression is None, it fills the slot). When all arguments have been processed, the slots that are still unfilled are filled with the corresponding default value from the function definition. (Default values are calculated, once, when the function is defined; thus, a mutable object such as a list or dictionary used as default value will be shared by all calls that don't specify an argument value for the corresponding slot; this should usually be avoided.) If there are any unfilled slots for which no default value is specified, a TypeError exception is raised. Otherwise, the list of filled slots is used as the argument list for the call.
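A short illustration of the shared mutable default mentioned above; append_to and its None-sentinel variant are illustrative names, not library functions:

# The default is evaluated once, at definition time, so the same list
# object is reused by every call that omits the argument:
def append_to(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_to(1))    # [1]
print(append_to(2))    # [1, 2] -- same list object as before

# The usual remedy is a None sentinel:
def append_to_fixed(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket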
CPython implementation detail: An implementation may provide built-in functions whose positional parameters do not have names, even if they are "named" for the purpose of documentation, and which therefore cannot be supplied by keyword. In CPython, this is the case for functions implemented in C that use PyArg_ParseTuple() to parse their arguments.
If there are more positional arguments than there are formal parameter slots, a TypeError exception is raised, unless a formal parameter using the syntax *identifier is present; in this case, that formal parameter receives a tuple containing the excess positional arguments (or an empty tuple if there were no excess positional arguments).
If any keyword argument does not correspond to a formal parameter name, a TypeError exception is raised, unless a formal parameter using the syntax **identifier is present; in this case, that formal parameter receives a dictionary containing the excess keyword arguments (using the keywords as keys and the argument values as corresponding values), or a (new) empty dictionary if there were no excess keyword arguments.
If the syntax *expression appears in the function call, expression must evaluate to an iterable. Elements from these iterables are treated as if they were additional positional arguments. For the call f(x1, x2, *y, x3, x4), if y evaluates to a sequence y1, ..., yM, this is equivalent to a call with M+4 positional arguments x1, x2, y1, ..., yM, x3, x4.
A consequence of this is that although the *expression syntax may appear after explicit keyword arguments, it is processed before the keyword arguments (and before any **expression arguments -- see below). So:
>>> def f(a, b):
...     print(a, b)
...
>>> f(b=1, *(2,))
2 1
>>> f(a=1, *(2,))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: f() got multiple values for keyword argument 'a'
>>> f(1, *(2,))
1 2
It is unusual for both keyword arguments and the *expression syntax to be used in the same call, so in practice this confusion does not often arise.
If the syntax **expression appears in the function call, expression must evaluate to a mapping, the contents of which are treated as additional keyword arguments. If a keyword is already present (as an explicit keyword argument, or from another unpacking), a TypeError exception is raised.
Formal parameters using the syntax *identifier or **identifier cannot be used as positional argument slots or as keyword argument names.
Changed in version 3.5: Function calls accept any number of * and ** unpackings; positional arguments may follow iterable unpackings (*), and keyword arguments may follow dictionary unpackings (**). Originally proposed by PEP 448.
A call always returns some value, possibly None, unless it raises an exception. How this value is computed depends on the type of the callable object.
If it is---
a user-defined function:
The code block for the function is executed, passing it the argument list. The first thing the code block will do is bind the formal parameters to the arguments; this is described in section Function definitions. When the code block executes a return statement, this specifies the return value of the function call.
a built-in function or method:
The result is up to the interpreter; see Built-in Functions for the descriptions of built-in functions and methods.
a class object:
A new instance of that class is returned.
a class instance method:
The corresponding user-defined function is called, with an argument list that is one longer than the argument list of the call: the instance becomes the first argument.
a class instance:
The class must define a __call__() method; the effect is then the same as if that method was called.
Suspends the execution of the coroutine on an awaitable object. Can only be used inside a coroutine function.

await_expr ::= "await" primary

New in version 3.5.
The power operator binds more tightly than unary operators on its left; it binds less tightly than unary operators on its right. The syntax is:
power ::= (await_expr | primary) ["**" u_expr]
Thus, in an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left (this does not constrain the evaluation order for the operands): -1**2 results in -1.
The power operator has the same semantics as the built-in pow() function when called with two arguments: it yields its left argument raised to the power of its right argument. The numeric arguments are first converted to a common type, and the result is of that type.
For int operands, the result has the same type as the operands unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, 10**2 returns 100, but 10**-2 returns 0.01.
Raising 0.0 to a negative power results in a ZeroDivisionError. Raising a negative number to a fractional power results in a complex number. (In earlier versions it raised a ValueError.)
This operation can be customized using the special __pow__() method.
All unary arithmetic and bitwise operations have the same priority:
u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr
The unary - (minus) operator yields the negation of its numeric argument; the operation can be overridden with the __neg__() special method.
The unary + (plus) operator yields its numeric argument unchanged; the operation can be overridden with the __pos__() special method.
The unary ~ (invert) operator yields the bitwise inversion of its integer argument. The bitwise inversion of x is defined as -(x+1). It only applies to integral numbers or to custom objects that override the __invert__() special method.
In all three cases, if the argument does not have the proper type, a TypeError exception is raised.
The binary arithmetic operators have the conventional priority levels. Note that some of these operators also apply to certain non-numeric types. Apart from the power operator, there are only two levels, one for multiplicative operators and one for additive operators:

m_expr ::= u_expr | m_expr "*" u_expr | m_expr "@" m_expr
           | m_expr "//" u_expr | m_expr "/" u_expr | m_expr "%" u_expr
a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr
The * (multiplication) operator yields the product of its arguments. The arguments must either both be numbers, or one argument must be an integer and the other must be a sequence. In the former case, the numbers are converted to a common type and then multiplied together. In the latter case, sequence repetition is performed; a negative repetition factor yields an empty sequence.
This operation can be customized using the special __mul__() and __rmul__() methods.
The @ (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator.
New in version 3.5.
The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Division of integers yields a float, while floor division of integers results in an integer; the result is that of mathematical division with the 'floor' function applied to the result. Division by zero raises the ZeroDivisionError exception.
This operation can be customized using the special __truediv__() and __floordiv__() methods.
The % (modulo) operator yields the remainder from the division of the first argument by the second. The numeric arguments are first converted to a common type. A zero right argument raises the ZeroDivisionError exception. The arguments may be floating point numbers, e.g., 3.14%0.7 equals 0.34 (since 3.14 equals 4*0.7 + 0.34). The modulo operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is strictly smaller than the absolute value of the second operand [1].
The floor division and modulo operators are connected by the following identity: x == (x//y)*y + (x%y). Floor division and modulo are also connected with the built-in function divmod(): divmod(x, y) == (x//y, x%y). [2]
In addition to performing the modulo operation on numbers, the % operator is also overloaded by string objects to perform old-style string formatting (also known as interpolation). The syntax for string formatting is described in the Python Library Reference, section printf-style String Formatting.
The modulo operation can be customized using the special __mod__() method.
The floor division operator, the modulo operator, and the divmod() function are not defined for complex numbers. Instead, convert to a floating point number using the abs() function if appropriate.
The + (addition) operator yields the sum of its arguments. The arguments must either both be numbers or both be sequences of the same type. In the former case, the numbers are converted to a common type and then added together. In the latter case, the sequences are concatenated.
This operation can be customized using the special __add__() and __radd__() methods.
The - (subtraction) operator yields the difference of its arguments. The numeric arguments are first converted to a common type.
This operation can be customized using the special __sub__() method.
The shifting operations have lower priority than the arithmetic operations:

shift_expr ::= a_expr | shift_expr ("<<" | ">>") a_expr

These operators accept integers as arguments. They shift the first argument to the left or right by the number of bits given by the second argument.
This operation can be customized using the special __lshift__() and __rshift__() methods.
A right shift by n bits is defined as floor division by pow(2,n). A left shift by n bits is defined as multiplication by pow(2,n).
Each of the three bitwise operations has a different priority level:

and_expr ::= shift_expr | and_expr "&" shift_expr
xor_expr ::= and_expr | xor_expr "^" and_expr
or_expr  ::= xor_expr | or_expr "|" xor_expr
The & operator yields the bitwise AND of its arguments, which must be integers or one of them must be a custom object overriding the __and__() or __rand__() special methods.
The ^ operator yields the bitwise XOR (exclusive OR) of its arguments, which must be integers or one of them must be a custom object overriding the __xor__() or __rxor__() special methods.
The | operator yields the bitwise (inclusive) OR of its arguments, which must be integers or one of them must be a custom object overriding the __or__() or __ror__() special methods.
Unlike C, all comparison operations in Python have the same priority, which is lower than that of any arithmetic, shifting or bitwise operation. Also unlike C, expressions like a < b < c have the interpretation that is conventional in mathematics:

comparison    ::= or_expr (comp_operator or_expr)*
comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "!="
                  | "is" ["not"] | ["not"] "in"
Comparisons yield boolean values: True or False. Custom rich comparison methods may return non-boolean values, in which case Python will call bool() on such a value in boolean contexts.
Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false).
Formally, if a, b, c, ..., y, z are expressions and op1, op2, ..., opN are comparison operators, then a op1 b op2 c ... y opN z is equivalent to a op1 b and b op2 c and ... y opN z, except that each expression is evaluated at most once.
Note that a op1 b op2 c doesn't imply any kind of comparison between a and c, so that, e.g., x < y > z is perfectly legal (though perhaps not pretty).
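A small sketch of the "evaluated at most once" rule; noisy() is a hypothetical helper used only to make the evaluations visible:

def noisy(v):
    print('evaluating', v)
    return v

print(1 < noisy(5) <= 10)   # prints 'evaluating 5' once; result is True
print(1 < noisy(0) <= 10)   # short-circuits after the first comparison: False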
The operators <, >, ==, >=, <= and != compare the values of two objects. The objects do not need to have the same type.
Chapter Objects, values and types states that objects have a value (in addition to type and identity). The value of an object is a rather abstract notion in Python: for example, there is no canonical access method for an object's value. Also, there is no requirement that the value of an object should be constructed in a particular way, e.g. comprised of all its data attributes. Comparison operators implement a particular notion of what the value of an object is. One can think of them as defining the value of an object indirectly, by means of their comparison implementation.
Because all types are (direct or indirect) subtypes of object, they inherit the default comparison behavior from object. Types can customize their comparison behavior by implementing rich comparison methods like __lt__(), described in Basic customization.
The default behavior for equality comparison (== and !=) is based on the identity of the objects. Hence, equality comparison of instances with the same identity results in equality, and equality comparison of instances with different identities results in inequality. A motivation for this default behavior is the desire that all objects should be reflexive (i.e. x is y implies x == y).
A default order comparison (<, >, <= and >=) is not provided; an attempt raises TypeError. A motivation for this default behavior is the lack of a similar invariant as for equality.
The behavior of the default equality comparison, that instances with different identities are always unequal, may be in contrast to what types will need that have a sensible definition of object value and value-based equality. Such types will need to customize their comparison behavior, and in fact, a number of built-in types have done that.
The following list describes the comparison behavior of the main built-in types .
Numbers of built-in numeric types (int, float, complex) and of the standard library types fractions.Fraction and decimal.Decimal can be compared within and across their types, with the restriction that complex numbers do not support order comparison. Within the limits of the types involved, they compare mathematically (algorithmically) correctly without loss of precision.
The not-a-number values float('NaN') and decimal.Decimal('NaN') are special. Any ordered comparison of a number to a not-a-number value is false. A counter-intuitive implication is that not-a-number values are not equal to themselves. For example, if x = float('NaN'), then 3 < x, x < 3 and x == x are all false, while x != x is true. This behavior is compliant with IEEE 754.
None and NotImplemented are singletons. PEP 8 advises that comparisons for singletons should always be done with is or is not, never the equality operators.
Binary sequences (instances of bytes or bytearray) can be compared within and across their types. They compare lexicographically using the numeric values of their elements.
Strings (instances of str) compare lexicographically using the numerical Unicode code points (the result of the built-in function ord()) of their characters. [3]
Strings and binary sequences cannot be directly compared.
Sequences (instances of tuple, list, or range) can be compared only within each of their types, with the restriction that ranges do not support order comparison. Equality comparison across these types results in inequality, and ordering comparison across these types raises TypeError.
Sequences compare lexicographically using comparison of corresponding elements. The built-in containers typically assume identical objects are equal to themselves. That lets them bypass equality tests for identical objects to improve performance and to maintain their internal invariants.
Lexicographical comparison between built-in collections works as follows:
For two collections to compare equal, they must be of the same type, have the same length, and each pair of corresponding elements must compare equal (for example, [1,2] == (1,2) is false because the type is not the same).
Collections that support order comparison are ordered the same as their first unequal elements (for example, [1,2,x] <= [1,2,y] has the same value as x <= y). If a corresponding element does not exist, the shorter collection is ordered first (for example, [1,2] < [1,2,3] is true).
Two mappings (dict Example ) To be equal , Must be if and only if they have the same ( key , value ) Yes . The consistency comparison of keys and values enforces self reflection .
Order comparison (<
, >
, <=
and >=
) Will lead to TypeError.
Sets (instances of set or frozenset) can be compared within and across their types.
They define the comparison operators to mean subset and superset tests. Those relations do not define total orderings (for example, the two sets {1,2} and {2,3} are not equal, nor subsets of one another, nor supersets of one another). Accordingly, sets are not appropriate arguments for functions which depend on total ordering (for example, min(), max(), and sorted() produce undefined results given a list of sets as inputs).
Comparison of sets enforces reflexivity of their elements.
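A small sketch of the subset/superset semantics described above (arbitrary example sets):
a, b = {1, 2}, {2, 3}
print(a <= b, a >= b, a == b)  # False False False: the two sets are not comparable
print({1} < {1, 2})            # True: a proper subset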
Most other built-in types do not implement comparison methods, so they inherit the default comparison behavior.
User-defined classes that customize their comparison behavior should follow some consistency rules, if possible:
Equality comparison should be reflexive. In other words, identical objects should compare equal:
x is y implies x == y
Comparison should be symmetric. In other words, the following expressions should have the same result:
x == y and y == x
x != y and y != x
x < y and y > x
x <= y and y >= x
Comparison should be transitive. The following (non-exhaustive) examples illustrate that:
x > y and y > z implies x > z
x < y and y <= z implies x < z
Inverse comparison should result in the boolean negation. In other words, the following expressions should have the same result:
x == y and not x != y
x < y and not x >= y (for total ordering)
x > y and not x <= y (for total ordering)
The last two expressions apply to totally ordered collections (e.g. to sequences, but not to sets or mappings). See also the total_ordering() decorator.
The hash() result should be consistent with equality. Objects that are equal should either have the same hash value, or be marked as unhashable.
Python does not enforce these consistency rules. In fact, the not-a-number values are an example of not following them.
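The following is a minimal sketch of a value-based user-defined class that follows these rules; the Money class and its cents attribute are hypothetical, and functools.total_ordering is used only to supply the remaining ordering methods:
from functools import total_ordering

@total_ordering
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __eq__(self, other):
        if not isinstance(other, Money):
            return NotImplemented
        return self.cents == other.cents

    def __lt__(self, other):
        if not isinstance(other, Money):
            return NotImplemented
        return self.cents < other.cents

    def __hash__(self):            # equal objects hash equal
        return hash(self.cents)

print(Money(100) == Money(100))    # True
print(Money(50) <= Money(100))     # True, derived by total_ordering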
The operators in and not in test for membership. x in s evaluates to True if x is a member of s, and False otherwise. x not in s returns the negation of x in s. All built-in sequences, set types, and dictionaries support this; for dictionaries, in tests whether the dictionary has a given key. For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression x in y is equivalent to any(x is e or x == e for e in y).
For the string and bytes types, x in y is True if and only if x is a substring of y. An equivalent test is y.find(x) != -1. Empty strings are always considered to be a substring of any other string, so "" in "abc" will return True.
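For instance, a trivial interactive check:
print('ab' in 'abc')           # True
print('' in 'abc')             # True: the empty string is a substring of every string
print('abc'.find('ab') != -1)  # True: the equivalent find() test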
For user-defined classes which define the __contains__() method, x in y returns True if y.__contains__(x) returns a true value, and False otherwise.
For user-defined classes which do not define __contains__() but do define __iter__(), x in y is True if some value z, for which the expression x is z or x == z is true, is produced while iterating over y. If an exception is raised during the iteration, it is as if in raised that exception.
Lastly, the old-style iteration protocol is tried: if a class defines __getitem__(), x in y is True if and only if there is a non-negative integer index i such that x is y[i] or x == y[i], and no lower integer index raises the IndexError exception. (If any other exception is raised, it is as if in raised that exception.)
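A brief sketch of how in falls back through these protocols; the three classes here are hypothetical and exist only to show which hook is consulted:
class HasContains:
    def __contains__(self, item):
        return item == 42

class HasIter:
    def __iter__(self):
        return iter([1, 2, 3])

class HasGetItem:
    def __getitem__(self, i):
        return [10, 20, 30][i]   # raises IndexError past the end

print(42 in HasContains())   # True: __contains__ is consulted first
print(2 in HasIter())        # True: then iteration via __iter__
print(20 in HasGetItem())    # True: finally the old __getitem__ protocol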
The operator not in is defined to have the inverse truth value of in.
The operators is and is not test for an object's identity: x is y is true if and only if x and y are the same object. An object's identity can be determined using the id() function. x is not y yields the inverse truth value. 4
or_test  ::= and_test | or_test "or" and_test
and_test ::= not_test | and_test "and" not_test
not_test ::= comparison | "not" not_test
In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: False, None, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. User-defined objects can customize their truth value by providing a __bool__() method.
The operator not yields True if its argument is false, and False otherwise.
The expression x and y first evaluates x; if x is false, its value is returned; otherwise, y is evaluated and the resulting value is returned.
The expression x or y first evaluates x; if x is true, its value is returned; otherwise, y is evaluated and the resulting value is returned.
Note that neither and nor or restrict the value and type they return to False and True; rather, they return the last evaluated operand. This is useful, for example, when s is a string that should be replaced by a default value if it is empty: the expression s or 'foo' yields the desired value. Because not has to create a new value, it returns a boolean value regardless of the type of its argument (for example, not 'foo' produces False rather than '').
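A short sketch of this behavior (the values are arbitrary):
s = ''
print(s or 'foo')       # 'foo': or returns the last operand it evaluated
print(0 and 'ignored')  # 0: and stops at the first false operand
print(not '')           # True: not always returns a bool
print(not 'foo')        # False, not ''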
assignment_expression ::= [identifier ":="] expression
An assignment expression (sometimes also called a "named expression" or "walrus") assigns an expression to an identifier, while also returning the value of the expression.
A common use case is when handling matched regular expressions:
if matching := pattern.search(data):
    do_something(matching)
Or, when processing a file stream in chunks:
while chunk := file.read(9000):
    process(chunk)
New in version 3.8: See PEP 572 for more details about assignment expressions.
conditional_expression ::= or_test ["if" or_test "else" expression]
expression             ::= conditional_expression | lambda_expr
Conditional expressions (sometimes called a "ternary operator") have the lowest priority of all Python operations.
The expression x if C else y first evaluates the condition, C rather than x. If C is true, x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned.
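For example, a trivial sketch with arbitrary values:
x = 10
label = 'even' if x % 2 == 0 else 'odd'   # only the chosen branch is evaluated
print(label)                              # 'even'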
See PEP 308 for more details about conditional expressions.
lambda_expr ::= "lambda" [parameter_list] ":" expression
Lambda expressions (sometimes called lambda forms) are used to create anonymous functions. The expression lambda parameters: expression yields a function object. The unnamed object behaves like a function object defined with:
def <lambda>(parameters):
    return expression
See section Function definitions for the syntax of parameter lists. Note that functions created with lambda expressions cannot contain statements or annotations.
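As a small sketch, a lambda expression used as a sort key, shown next to the equivalent def form (the helper name second is arbitrary):
pairs = [(1, 'b'), (2, 'a')]
print(sorted(pairs, key=lambda p: p[1]))  # [(2, 'a'), (1, 'b')]

def second(p):
    return p[1]

print(sorted(pairs, key=second))          # same result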
expression_list    ::= expression ("," expression)* [","]
starred_list       ::= starred_item ("," starred_item)* [","]
starred_expression ::= expression | (starred_item ",")* [starred_item]
starred_item       ::= assignment_expression | "*" or_expr
Except when part of a list or set display, an expression list containing at least one comma yields a tuple. The length of the tuple is the number of expressions in the list. The expressions are evaluated from left to right.
An asterisk * denotes iterable unpacking. Its operand must be an iterable. The iterable is expanded into a sequence of items, which are included in the new tuple, list, or set at the site of the unpacking.
New in version 3.5: Iterable unpacking in expression lists, originally proposed by PEP 448.
A trailing comma is required only to create a one-item tuple (also called a singleton); it is optional in all other cases. A single expression without a trailing comma doesn't create a tuple, but rather yields the value of that expression. (To create an empty tuple, use an empty pair of parentheses: ().)
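A compact sketch of trailing commas and iterable unpacking in expression lists (arbitrary values):
t = 1,                  # a one-item tuple (singleton)
print(t)                # (1,)
rest = [2, 3]
print((1, *rest, 4))    # (1, 2, 3, 4): *rest is unpacked in place
print(())               # the empty tuple requires the parentheses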
Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.
In the following lines, expressions will be evaluated in the arithmetic order of their suffixes:
expr1, expr2, expr3, expr4
(expr1, expr2, expr3, expr4)
{expr1: expr2, expr3: expr4}
expr1 + expr2 * (expr3 - expr4)
expr1(expr2, expr3, *expr4, **expr5)
expr3, expr4 = expr1, expr2
The following table summarizes the operator precedence in Python, from the highest precedence (most binding) to the lowest precedence (least binding). Operators in the same box have the same precedence. Unless the syntax is explicitly given, operators are binary. Operators in the same box group from left to right (except for exponentiation, which groups from right to left).
Note that comparisons, membership tests, and identity tests all have the same precedence and have the left-to-right chaining feature described in the Comparisons section.
Operator : Description
(expressions...), [expressions...], {key: value...}, {expressions...} : binding or parenthesized expression, list display, dictionary display, set display
x[index], x[index:index], x(arguments...), x.attribute : subscription, slicing, call, attribute reference
await x : await expression
** : exponentiation 5
+x, -x, ~x : positive, negative, bitwise NOT
*, @, /, //, % : multiplication, matrix multiplication, division, floor division, remainder 6
+, - : addition and subtraction
<<, >> : shifts
& : bitwise AND
^ : bitwise XOR
| : bitwise OR
in, not in, is, is not, <, <=, >, >=, !=, == : comparisons, including membership tests and identity tests
not x : boolean NOT
and : boolean AND
or : boolean OR
if -- else : conditional expression
lambda : lambda expression
:= : assignment expression
Footnotes
1
While abs(x%y) < abs(y) is true mathematically, for floats it may not be true numerically due to roundoff. For example, and assuming a platform on which a Python float is an IEEE 754 double-precision number, in order that -1e-100 % 1e100 have the same sign as 1e100, the computed result is -1e-100 + 1e100, which is numerically exactly equal to 1e100. The function math.fmod() returns a result whose sign matches the sign of the first argument instead, and so returns -1e-100 in this case. Which approach is more appropriate depends on the application.
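A minimal sketch of the sign difference described above, assuming IEEE 754 double-precision floats:
import math
print(-1e-100 % 1e100)            # 1e+100: % takes the sign of the second operand
print(math.fmod(-1e-100, 1e100))  # -1e-100: fmod takes the sign of the first operand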
2
If x is very close to an exact integer multiple of y, it's possible for x//y to be one larger than (x-x%y)//y due to rounding. In such cases, Python returns the latter result, in order to preserve that divmod(x,y)[0] * y + x % y be very close to x.
3
The Unicode standard distinguishes between code points (e.g. U+0041) and abstract characters (e.g. "LATIN CAPITAL LETTER A"). While most abstract characters in Unicode are only represented using one code point, there is a number of abstract characters that can in addition be represented using a sequence of more than one code point. For example, the abstract character "LATIN CAPITAL LETTER C WITH CEDILLA" can be represented as a single precomposed character at code position U+00C7, or as a sequence of a base character at code position U+0043 (LATIN CAPITAL LETTER C), followed by a combining character at code position U+0327 (COMBINING CEDILLA).
The comparison operators on strings compare at the level of Unicode code points. This may be counter-intuitive to humans. For example, "\u00C7" == "\u0043\u0327" is False, even though both strings represent the same abstract character "LATIN CAPITAL LETTER C WITH CEDILLA".
To compare strings at the level of abstract characters (that is, in a way intuitive to humans), use unicodedata.normalize().
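A short sketch comparing the two strings above before and after normalization:
import unicodedata
s1, s2 = "\u00C7", "\u0043\u0327"   # precomposed vs. base character + combining cedilla
print(s1 == s2)                                    # False: the code points differ
print(unicodedata.normalize('NFC', s1) ==
      unicodedata.normalize('NFC', s2))            # True: same abstract character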
4
Due to automatic garbage collection, free lists, and the dynamic nature of descriptors, you may notice seemingly unusual behavior in certain uses of the is operator, like those involving comparisons between instance methods or constants. Check their documentation for more info.
5
The power operator ** binds less tightly than an arithmetic or bitwise unary operator on its right; that is, 2**-1 is 0.5.
6
The % operator is also used for string formatting; the same precedence applies.