A lot of data nowadays comes from mobile apps, and much of it is quite useful once processed. This time we'll scrape the hero images from the currently popular game 王者榮耀 (Honor of Kings) and download them to the local disk.
Environment: Windows/Linux
Language: Python
Version: 3.7
Modules/frameworks: scrapy, os
1. Use the packet-capture tool Fiddler to grab the app's traffic. As for how to configure and use Fiddler, there is plenty of material online, so I won't repeat it here.
2. Find the request URL in the capture.
3. Fetch the page content.
4. Separate out the data.
5. Fetch the image data and save it.
1. Create the project

scrapy startproject WangZ_Spider

I didn't specify a creation path here; I ran this in PowerShell inside a folder I had set up beforehand (King_Fight), which is why the paths later look like E:/python_project/King_Fight/WangZ_Spider. Then create a new .py file in the spiders folder and name it spider.py.
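After these two steps the project should look roughly like the standard Scrapy skeleton below (exact files vary slightly by Scrapy version), plus our new spider.py:

WangZ_Spider/
    scrapy.cfg
    WangZ_Spider/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider.py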
2. Open the packet-capture tool

Note that the phone and the PC must be on the same network segment. Open Fiddler, then open the hero screen in the app. If Fiddler is configured correctly you will see the corresponding traffic refresh; click the relevant entry and check its URL. The URL I captured is:

start_urls = ['http://gamehelper.gm825.com/wzry/hero/list?channel_id=90009a&app_id=h9044j&game_id=7622&game_name=%E7%8E%8B%E8%80%85%E8%8D%A3%E8%80%80&vcode=13.0.4.0&version_code=13040&cuid=8025FD949C93FC66D1DDB6BAC65203D7&ovr=8.0.0&device=Xiaomi_MI+6&net_type=1&client_id=&info_ms=&info_ma=xA9SDhIYZnQ7DOL9HYU%2FDTmfXcpNZC9piF6I%2BbRM5q4%3D&mno=0&info_la=jUm4EMrshA%2BjgQriNYPOaw%3D%3D&info_ci=jUm4EMrshA%2BjgQriNYPOaw%3D%3D&mcc=0&clientversion=13.0.4.0&bssid=9XEEdN1xCIRfdgHQ8NQ4DlZl%2By%2BL8gXiWPRLzJYCKss%3D&os_level=26&os_id=0d62e3f861713d92&resolution=1080_1920&dpi=480&client_ip=192.168.1.61&pdunid=bbbb5488']

You can also use my endpoint directly.
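Before wiring this into Scrapy, it can help to sanity-check the endpoint with a plain requests call (a minimal sketch; it assumes the endpoint is still live and that the third-party requests package is installed):

import requests

# The full URL captured above
url = 'http://gamehelper.gm825.com/wzry/hero/list?channel_id=90009a&app_id=h9044j&game_id=7622&game_name=%E7%8E%8B%E8%80%85%E8%8D%A3%E8%80%80&vcode=13.0.4.0&version_code=13040&cuid=8025FD949C93FC66D1DDB6BAC65203D7&ovr=8.0.0&device=Xiaomi_MI+6&net_type=1&client_id=&info_ms=&info_ma=xA9SDhIYZnQ7DOL9HYU%2FDTmfXcpNZC9piF6I%2BbRM5q4%3D&mno=0&info_la=jUm4EMrshA%2BjgQriNYPOaw%3D%3D&info_ci=jUm4EMrshA%2BjgQriNYPOaw%3D%3D&mcc=0&clientversion=13.0.4.0&bssid=9XEEdN1xCIRfdgHQ8NQ4DlZl%2By%2BL8gXiWPRLzJYCKss%3D&os_level=26&os_id=0d62e3f861713d92&resolution=1080_1920&dpi=480&client_ip=192.168.1.61&pdunid=bbbb5488'
headers = {'User-Agent': 'Dalvik/2.1.0 (Linux; U; Android 8.0.0; MI 6 MIUI/V10.0.2.0.OCACNFH)'}

resp = requests.get(url, headers=headers)
print(resp.status_code)             # 200 if the endpoint is still up
data = resp.json()                  # the API returns a JSON document
print(len(data.get('list', [])))    # number of hero entries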
3. Define the data to collect in items.py
import scrapy


class WangzSpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls = scrapy.Field()  # cover image URLs
    images = scrapy.Field()      # hero names (used to build file names later)
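These two field names happen to mirror the ones Scrapy's built-in ImagesPipeline expects, but here they are just plain containers: image_urls will hold the cover URLs and images the hero names, which the custom pipeline in step 7 uses as file names.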
4. Write the spider to fetch the page content
import scrapy
from scrapy import Request
import json
from WangZ_Spider.items import WangzSpiderItem


class SpiderSpider(scrapy.Spider):
    name = 'spider'
    # allowed_domains = ['wanz.com']
    start_urls = ['http://gamehelper.gm825.com/wzry/hero/list?channel_id=90009a&app_id=h9044j&game_id=7622&game_name=%E7%8E%8B%E8%80%85%E8%8D%A3%E8%80%80&vcode=13.0.4.0&version_code=13040&cuid=8025FD949C93FC66D1DDB6BAC65203D7&ovr=8.0.0&device=Xiaomi_MI+6&net_type=1&client_id=&info_ms=&info_ma=xA9SDhIYZnQ7DOL9HYU%2FDTmfXcpNZC9piF6I%2BbRM5q4%3D&mno=0&info_la=jUm4EMrshA%2BjgQriNYPOaw%3D%3D&info_ci=jUm4EMrshA%2BjgQriNYPOaw%3D%3D&mcc=0&clientversion=13.0.4.0&bssid=9XEEdN1xCIRfdgHQ8NQ4DlZl%2By%2BL8gXiWPRLzJYCKss%3D&os_level=26&os_id=0d62e3f861713d92&resolution=1080_1920&dpi=480&client_ip=192.168.1.61&pdunid=bbbb5488']
    item = WangzSpiderItem()
    headers = {
        'Accept-Charset': 'UTF-8',
        'Accept-Encoding': 'gzip, deflate',
        'Content-Type': 'application/x-www-form-urlencoded',
        'X-Requested-With': 'XMLHttpRequest',
        'User-Agent': 'Dalvik/2.1.0 (Linux; U; Android 8.0.0; MI 6 MIUI/V10.0.2.0.OCACNFH)',
        'Host': 'gamehelper.gm825.com',
        'Connection': 'Keep-Alive',
    }

    def start_requests(self):
        # Issue the request ourselves so the app's headers are attached from the start,
        # instead of letting Scrapy fetch start_urls with its defaults and re-requesting.
        yield Request(url=self.start_urls[0], headers=self.headers, method='GET',
                      callback=self.get_data)

    def get_data(self, response):
        print(response.text)
Don't run it just yet. Anyone who has used Scrapy knows you can simply type scrapy crawl spider on the command line, but this time let's set it up so a single click runs it.
5. Create start.py

Create this file at the top level of the project, next to scrapy.cfg; put it anywhere else and it won't work. It only needs two lines:

from scrapy import cmdline

cmdline.execute('scrapy crawl spider'.split())

Then in PyCharm go to Run -> Edit Configurations -> + -> Python and point it at this start file. Click the green arrow in the top-right corner and the crawler runs; you should see the page output, a big pile of string data.
6. Parse the data

json.loads turns the response string into a Python dict, which makes the fields easy to pull out. Replace get_data in the spider with:
    def get_data(self, response):
        result = json.loads(response.text)  # the API returns JSON, so parse it into a dict
        heroes = result["list"]             # one entry per hero
        print(len(heroes))
        covers = [hero["cover"] for hero in heroes]      # cover image URLs
        names = [hero["name"] for hero in heroes]        # hero names
        hero_ids = [hero["hero_id"] for hero in heroes]  # ids (not used further here)
        print(covers)
        print(names)
        print(hero_ids)
        self.item['image_urls'] = covers
        self.item['images'] = names
        yield self.item
And with that, the spider part is done.
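For reference, the document the endpoint returns is shaped roughly like the following (the key names come from the parsing code above; the values are made up for illustration):

# Illustrative shape only -- real values will differ.
response_shape = {
    "list": [
        {"hero_id": "105", "name": "some hero", "cover": "http://example.com/cover.png"},
        # ... one dict per hero
    ],
}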
7. Download the images

Once the data is separated out, what you see is a pile of http://...........png strings. That means it already worked; those are the images. All that's left is to download them.
First, set a few things in settings.py:
ITEM_PIPELINES = {
    'WangZ_Spider.pipelines.WangzSpiderPipeline': 300,
}

IMAGE_STORE = 'E:/python_project/King_Fight/WangZ_Spider/Image'
IMAGE_URLS_FIELD = 'image_urls'
IMAGE_RESULT_FIELD = 'images'
IMAGE_THUMBS = {
    'small': (80, 80),
    'big': (240, 240),
}
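Note that only IMAGE_STORE is actually read by the custom pipeline below. The other three echo Scrapy's built-in ImagesPipeline settings (IMAGES_STORE, IMAGES_URLS_FIELD, IMAGES_RESULT_FIELD, IMAGES_THUMBS) and would only take effect, under those exact spellings, if you used that built-in pipeline instead of writing your own.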
Then write pipelines.py to download the images:
import os

import requests

from .settings import IMAGE_STORE


class WangzSpiderPipeline(object):
    def process_item(self, item, spider):
        images = []
        dir_path = '{}'.format(IMAGE_STORE)
        # Create the target directory the first time an item with URLs comes through
        if item['image_urls'] and not os.path.exists(dir_path):
            os.makedirs(dir_path)
        for num, (jpg_url, name) in enumerate(zip(item['image_urls'], item['images'])):
            file_name = '{}{}.png'.format(name, num)   # e.g. <hero name>0.png
            file_path = os.path.join(dir_path, file_name)
            images.append(file_path)
            if os.path.exists(file_path):              # skip images we already have
                continue
            with open(file_path, 'wb') as f:
                req = requests.get(url=jpg_url)        # download the cover image
                f.write(req.content)
        return item
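A quick way to confirm the downloads landed (a minimal sketch, assuming the IMAGE_STORE path configured above):

import os

image_dir = 'E:/python_project/King_Fight/WangZ_Spider/Image'
print(len(os.listdir(image_dir)), 'images downloaded')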
And now, the results.