When we use crawlers to collect data from websites, ordinary web pages are well served by mature tools for fetching and parsing pages, so we won't go into detail on those here.
The problem is that many sites today respond with only a page frame. The frame itself contains no data; once the page finishes loading, it triggers ajax requests, and the data is fetched asynchronously through those calls.
Crawled the ordinary way, such pages give our crawler nothing.
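As a quick illustration (using the placeholder URL from this article's example site), fetching such a page directly shows the problem: the request succeeds and HTML comes back, but the report table in it is empty.

import requests

# Hypothetical frame page from this article's example site
url = 'http://www.xxxxx.com/tools/financial_report.php?si=601318'
resp = requests.get(url)
print(resp.status_code)  # 200 -- the request itself succeeds
print(len(resp.text))    # the frame HTML arrives...
# ...but the report table inside it is empty: the numbers are only
# filled in later by an ajax call the browser fires after page load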
Faced with this situation, we have two approaches:
First: simulate the ajax request, access the data it returns directly, and parse the returned JSON.
This method requires loading the page once, copying out the request's header parameters (including the cookie), and then making the asynchronous call while disguised as the frame page:
import time
import requests

headers = {
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    # Content-Length is computed by requests automatically; do not hardcode it
    'Cookie': 'PHPSESSID=urqun9vc9ujikd34gbas1llck0; Hm_lvt_210e7fd46c913658d1ca5581797c34e3=1628088616,1628233023; Hm_lpvt_210e7fd46c913658d1ca5581797c34e3=1628237521',
    'Host': 'www.xxxxx.com',
    'Origin': 'http://www.xxxxx.com',
    'Referer': 'http://www.xxxxxx.com/tools/financial_report.php?si=601318&sn=%E4%B8%AD%E5%9B%BD%E5%B9%B3%E5%AE%89',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
    'X-Requested-With': 'XMLHttpRequest'
}
formdata = {
    'stock_id': stock_code,
    'report_form': 'netease_xjllb_info',  # cash flow statement
    'report_year': 5,
    'report_type': 'Annual'
}
# The frame page the browser would load first
url = 'http://www.xxxx.com/tools/financial_report.php?si={0}&sn={1}'.format(stock_code, stock_name)
# The ajax endpoint that actually returns the report data
new_url = 'http://www.xxxx.com/tools/get_financial_report_data_v2.php'
print(url)
# Use one session so cookies set by the frame page carry over to the ajax call
session = requests.session()
session.get(url, headers=headers)
time.sleep(1)
# Send the POST request, posing as the ajax call the frame page would make
response = session.post(new_url, data=formdata, headers=headers)
data = response.json()  # parse the JSON body; safer than eval() on raw text
print(data)
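From here the parsed JSON can be analyzed however you like. As a hedged sketch (the exact layout of the JSON depends on the site; a list of row objects is assumed here), pandas will tabulate it directly:

import pandas as pd

# Assumption: the endpoint returns a JSON array of row objects;
# adjust the key if the rows are nested under e.g. data['report']
df = pd.DataFrame(data)
print(df.head())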
The second method: use chromedriver.exe to drive the Chrome browser and simulate visiting the website. Once all of the page's requests have finished, grab the rendered source from the browser:
url = "http://www.xxxxx.com/ement.php?stock_id={0}".format(stock_code)
base_url = "http://www.xxx.com"
base_path = "C:\\financial_report\\pdf\\"
# Open the home page again :
browser = webdriver.Chrome()
browser.get(url)
# Then simulate the scroll bar scrolling to the bottom
for i in range(1, 5):
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
time.sleep(1)
html = BeautifulSoup(browser.page_source, "lxml")
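The base_url and base_path variables above suggest the page links to PDF reports that get saved locally. A minimal sketch of that follow-up step, assuming the reports appear as plain anchor links ending in .pdf (the selector is an assumption; adjust it to the real markup):

import os
import requests

os.makedirs(base_path, exist_ok=True)
for a in html.find_all('a', href=True):
    href = a['href']
    if href.lower().endswith('.pdf'):
        # Relative links need the site prefix put back on
        pdf_url = href if href.startswith('http') else base_url + href
        report = requests.get(pdf_url)
        with open(os.path.join(base_path, href.split('/')[-1]), 'wb') as f:
            f.write(report.content)
browser.quit()  # close Chrome once the source has been captured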