In this article, we will learn how to crawl information from a multi-page website. When browsing the web, you have probably noticed that many sites split their content across several pages, like this:
Actually, in the last article we learned how to crawl information from a single-page site, so for multi-page sites... this is certainly not a problem ( 罒 ω 罒 ).
The first step is to learn to analyze. How? We start by observing what changes when we switch pages, looking mainly at the URL and the request information. We will again use the website from the previous article as our example:
When we keep switching pages, what do we find? That's right, as the Detective Conan line goes, "the one who sees through the truth looks like a child, but has extraordinary wisdom...":
https://cn.tripadvisor.com/Attraction_Products-g60763-a_sort.-d1687489-The_National_9_11_Memorial_Museum-New_York_City_New_York.html?o=a60#ATTRACTION_LIST (page 3)
https://cn.tripadvisor.com/Attraction_Products-g60763-a_sort.-d1687489-The_National_9_11_Memorial_Museum-New_York_City_New_York.html?o=a90#ATTRACTION_LIST (page 4)
https://cn.tripadvisor.com/Attraction_Products-g60763-a_sort.-d1687489-The_National_9_11_Memorial_Museum-New_York_City_New_York.html?o=a120#ATTRACTION_LIST (page 5)
You can see that only the "o=a{}" part after the "?" changes, and the change follows a pattern: the value after "a" increases by 30 with each page. Armed with this observation, we can start building the crawler (*^▽^*). Based on the program from the previous article, we encapsulate it as a function and then call it in a loop.
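Before touching the crawler itself, the pattern is easy to verify in isolation. This snippet just generates the page URLs with str.format; the offsets 30 through 120 correspond to pages 2 through 5 of the listing:

```python
# A small check of the pattern observed above: the "o=a{offset}" query
# parameter grows by 30 per page (page 2 -> a30, page 3 -> a60, ...).
BASE = ('https://cn.tripadvisor.com/Attraction_Products-g60763-a_sort.'
        '-d1687489-The_National_9_11_Memorial_Museum-New_York_City_New_York.html'
        '?o=a{}#ATTRACTION_LIST')

# Offsets 30, 60, 90, 120 cover pages 2 through 5
urls = [BASE.format(offset) for offset in range(30, 150, 30)]
for u in urls:
    print(u)
```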
import requests
from bs4 import BeautifulSoup

def get_info(url, data=None):
    wb_data = requests.get(url)  # request the URL and get the response
    # .text gives the raw HTML; lxml parses it into a navigable tree
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div.listing_title > a')
    images = soup.select('div.photo_booking > a > span > img')
    description = soup.select('div.listing_description > span')
    duration = soup.select('div.product_duration')
    price = soup.select('div.product_price_info > div.price_test > div.from > span')
    # zip() walks the five parallel lists in lockstep, one entry per listing
    for title, image, des, time, money in zip(titles, images, description, duration, price):
        data = {
            'title': title.get_text(),
            'image': image.get('src'),
            'description': des.get_text(),
            'duration': time.get_text(),
            'price': money.get_text()
        }
        print(data)

# Loop over the page offsets discovered above (pages 2 through 5)
for i in range(30, 150, 30):
    url = 'https://cn.tripadvisor.com/Attraction_Products-g60763-a_sort.-d1687489-The_National_9_11_Memorial_Museum-New_York_City_New_York.html?o=a{}#ATTRACTION_LIST'.format(i)
    get_info(url)
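If you want to see what the selectors and zip() are doing without making any network requests, here is a sketch run against a tiny hand-written HTML fragment that mimics the page structure. The fragment and its values are my own invention, not the real TripAdvisor markup, and it uses Python's built-in 'html.parser' so no extra install is needed:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML fragment imitating the structure the selectors expect
html = '''
<div class="listing_title"><a>9/11 Memorial Tour</a></div>
<div class="photo_booking"><a><span><img src="tour.jpg"></span></a></div>
<div class="listing_description"><span>A guided walk.</span></div>
<div class="product_duration">2 hours</div>
<div class="product_price_info"><div class="price_test"><div class="from"><span>$39</span></div></div></div>
'''

# 'html.parser' is Python's bundled parser; the article itself uses 'lxml'
soup = BeautifulSoup(html, 'html.parser')
titles = soup.select('div.listing_title > a')
images = soup.select('div.photo_booking > a > span > img')
descriptions = soup.select('div.listing_description > span')
durations = soup.select('div.product_duration')
prices = soup.select('div.product_price_info > div.price_test > div.from > span')

# Each select() returns a list; zip() pairs them up entry by entry
for title, image, des, dur, price in zip(titles, images, descriptions, durations, prices):
    data = {
        'title': title.get_text(),
        'image': image.get('src'),
        'description': des.get_text(),
        'duration': dur.get_text(),
        'price': price.get_text()
    }
    print(data)
```

Because each CSS selector produces its own parallel list, a page with 30 listings yields five lists of 30 elements each, and zip() reassembles them into one dictionary per listing.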