Table of Contents

- I. Introduction
  - 1. Goal
  - 2. Environment
- II. Analyzing the Page
- III. Crawler Implementation
- IV. Additional Notes
I. Introduction
Honor of Kings (王者荣耀) is a mobile game most people have played, or at least heard of. Its heroes come with all kinds of beautifully made skins, and many of them make great desktop wallpapers. This article shows how to download every hero's skin wallpapers in one go with a Python crawler.
1. Goal
Create a folder containing one subfolder per hero name, each holding all of that hero's skin images.
URL: https://pvp.qq.com/web201605/herolist.shtml
2. Environment
Runtime environment: PyCharm, Python 3.7
Required libraries:
import requests
import os
import json
from lxml import etree
from fake_useragent import UserAgent
import logging
II. Analyzing the Page
First, open the Honor of Kings official site and click 英雄资料 (hero info).
On the page that opens, pick any hero and inspect the page with the browser's developer tools.
Inspect a few different heroes and a pattern emerges in the hero page URLs:
https://pvp.qq.com/web201605/herodetail/152.shtml
https://pvp.qq.com/web201605/herodetail/150.shtml
https://pvp.qq.com/web201605/herodetail/167.shtml
Only the trailing number changes, so it can be treated as the hero's page identifier.
Click the Network tab and press Ctrl + R to refresh; a herolist.json file shows up.
Its preview looks garbled, but that is not a problem: double-click the json file to download it, then open it in an editor to read it.
ename is the identifier used in the hero page URL, cname is the corresponding hero name, and skin_name holds the names of that hero's skins.
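To check these fields for yourself, a minimal sketch like the following will do. Setting the encoding to gbk is an assumption based on the garbled preview (the hero detail pages are gbk-encoded as well):

import json
import requests

resp = requests.get('https://pvp.qq.com/web201605/js/herolist.json')
resp.encoding = 'gbk'  # assumed: the file looks gbk-encoded, hence the garbled preview
heroes = json.loads(resp.text)

# Each entry carries the page identifier (ename), the hero name (cname)
# and the skin names (skin_name); .get() guards against entries missing the key.
for hero in heroes[:3]:
    print(hero['ename'], hero['cname'], hero.get('skin_name'))
    # The identifier also rebuilds the detail-page URLs seen above:
    print('https://pvp.qq.com/web201605/herodetail/{}.shtml'.format(hero['ename']))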
Open any hero's page, inspect each of its skins, and watch how the image URLs change.
The URL pattern is as follows:
https://game.gtimg.cn/images/yxzj/img201606/heroimg/152/152-bigskin-1.jpg
https://game.gtimg.cn/images/yxzj/img201606/heroimg/152/152-bigskin-2.jpg
https://game.gtimg.cn/images/yxzj/img201606/heroimg/152/152-bigskin-3.jpg
https://game.gtimg.cn/images/yxzj/img201606/heroimg/152/152-bigskin-4.jpg
https://game.gtimg.cn/images/yxzj/img201606/heroimg/152/152-bigskin-5.jpg
Copy one of the image links into the browser and the full-size HD wallpaper opens.
For a single hero, the trailing -{x}.jpg counts up from 1. Comparing skin URLs across different heroes shows that only the ename identifier changes, so which hero's images you get is decided by the ename parameter:
https://game.gtimg.cn/images/yxzj/img201606/heroimg/152/152-bigskin-1.jpg
https://game.gtimg.cn/images/yxzj/img201606/heroimg/150/150-bigskin-1.jpg
https://game.gtimg.cn/images/yxzj/img201606/heroimg/153/153-bigskin-1.jpg
# The image request URL can therefore be constructed as
https://game.gtimg.cn/images/yxzj/img201606/heroimg/{ename}/{ename}-bigskin-{x}.jpg
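Since herolist.json does not directly say how many skins a hero has, one option is to probe increasing values of x until the CDN stops answering with an image. This is a sketch built on assumptions about the CDN's behaviour (the status-code and Content-Type checks), not something the page documents:

import requests

ename = 152  # hero page identifier taken from herolist.json

x = 1
while True:
    url = f"https://game.gtimg.cn/images/yxzj/img201606/heroimg/{ename}/{ename}-bigskin-{x}.jpg"
    resp = requests.get(url)
    # Assumed stop condition: a missing skin returns a non-200 status
    # or a non-image body.
    if resp.status_code != 200 or 'image' not in resp.headers.get('Content-Type', ''):
        break
    print("found skin:", url)
    x += 1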
III. Crawler Implementation
# -*- coding: UTF-8 -*-
"""
@File   : 王者荣耀英雄皮肤壁纸.py
@Author : 叶庭云
@Date   : 2020/10/2 11:40
@CSDN   : https://blog.csdn.net/fyfugoyfa
"""
import requests
import os
import json
from lxml import etree
from fake_useragent import UserAgent
import logging

# Basic logging configuration
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s: %(message)s')


class GloryOfKing(object):
    def __init__(self):
        if not os.path.exists("./王者荣耀皮肤"):
            os.mkdir("./王者荣耀皮肤")
        # Use fake_useragent for a random User-Agent to reduce the chance of
        # being blocked (see the note on fake_useragent errors at the end)
        ua = UserAgent(verify_ssl=False, path='fake_useragent.json')
        self.headers = {
            'User-Agent': ua.random
        }

    def scrape_skin(self):
        # Request the hero list and parse the JSON string
        response = requests.get('https://pvp.qq.com/web201605/js/herolist.json', headers=self.headers)
        data = json.loads(response.text)
        # Walk the hero list and create one folder per hero name
        for hero in data:
            hero_number = hero['ename']  # the hero's page identifier
            hero_name = hero['cname']    # the hero's name
            # exist_ok lets the script be re-run without failing on existing folders
            os.makedirs("./王者荣耀皮肤/{}".format(hero_name), exist_ok=True)
            response_src = requests.get(
                "https://pvp.qq.com/web201605/herodetail/{}.shtml".format(hero_number),
                headers=self.headers)
            # The hero detail page is gbk-encoded
            hero_content = response_src.content.decode('gbk')
            # Build an XPath tree and extract the skin names for this hero
            hero_data = etree.HTML(hero_content)
            hero_img = hero_data.xpath('//div[@class="pic-pf"]/ul/@data-imgname')
            # Skin names are separated by '|'
            hero_src = hero_img[0].split('|')
            logging.info(hero_src)
            # Clean up each skin name and download the matching image
            for j in range(len(hero_src)):
                # Keep only the part before the '&' (safe even when no '&' is present)
                skin_name = hero_src[j].split('&')[0]
                # Request the skin image
                response_skin = requests.get(
                    "https://game.gtimg.cn/images/yxzj/img201606/skin/hero-info/{}/{}-bigskin-{}.jpg".format(
                        hero_number, hero_number, j + 1))
                # Raw image bytes
                skin_img = response_skin.content
                # Save the image into the hero's folder
                with open("./王者荣耀皮肤/{}/{}.jpg".format(hero_name, skin_name), "wb") as f:
                    f.write(skin_img)
                logging.info(f"{skin_name}.jpg downloaded successfully!")

    def run(self):
        self.scrape_skin()


if __name__ == '__main__':
    spider = GloryOfKing()
    spider.run()
After the program runs for a while, every hero's skin wallpapers are saved into the local folders.
IV. Additional Notes
- Don't scrape too much data; it needlessly loads the server. A light taste is enough (see the throttling sketch after this list).
- This crawler is a good exercise in parsing JSON data, extracting the fields you need, and constructing request URLs by string formatting.
- Downloading the Honor of Kings skin wallpapers in one click with Python will throw a few problems your way; thinking them through and debugging until they are solved is where the real understanding comes from.
- The code can be copied and run directly. If you found it helpful, remember to leave a like; it is the biggest encouragement for the author. Corrections are welcome in the comments.
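On the first note above: a low-effort way to keep the load down is to pause between requests. A minimal sketch, where polite_get is a hypothetical wrapper and the half-second delay is arbitrary:

import time
import requests

def polite_get(url, delay=0.5, **kwargs):
    # Hypothetical helper, not part of the original script: sleep briefly
    # before each request so the server is not hit in a tight loop.
    time.sleep(delay)
    return requests.get(url, **kwargs)

The requests.get calls inside scrape_skin could then be routed through this wrapper without any other changes.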
Fixing the error fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached
# The error output
Error occurred during loading data. Trying to use cache server https://fake-useragent.herokuapp.com/browsers/0.1.11
Traceback (most recent call last):
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 1318, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/local/python3/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/python3/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/python3/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/python3/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/local/python3/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/usr/local/python3/lib/python3.6/http/client.py", line 1392, in connect
super().connect()
File "/usr/local/python3/lib/python3.6/http/client.py", line 936, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/local/python3/lib/python3.6/socket.py", line 724, in create_connection
raise err
File "/usr/local/python3/lib/python3.6/socket.py", line 713, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/python3/lib/python3.6/site-packages/fake_useragent/utils.py", line 67, in get
context=context,
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 1361, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/usr/local/python3/lib/python3.6/urllib/request.py", line 1320, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error timed out>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/python3/lib/python3.6/site-packages/fake_useragent/utils.py", line 154, in load
for item in get_browsers(verify_ssl=verify_ssl):
File "/usr/local/python3/lib/python3.6/site-packages/fake_useragent/utils.py", line 97, in get_browsers
html = get(settings.BROWSERS_STATS_PAGE, verify_ssl=verify_ssl)
File "/usr/local/python3/lib/python3.6/site-packages/fake_useragent/utils.py", line 84, in get
raise FakeUserAgentError('Maximum amount of retries reached')
fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached
The fix is as follows:
# Copy the contents of https://fake-useragent.herokuapp.com/browsers/0.1.11 and save them locally as a json file: fake_useragent.json
# Then point UserAgent at the local file
ua = UserAgent(verify_ssl=False, path='fake_useragent.json')
print(ua.random)
Sample output:
Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1500.55 Safari/537.36
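If the remote list cannot be fetched and no local copy exists either, wrapping the lookup in a try/except and falling back to a fixed User-Agent keeps the crawler usable. A sketch, with an example UA string standing in for whatever you prefer:

from fake_useragent import UserAgent
from fake_useragent.errors import FakeUserAgentError

try:
    ua = UserAgent(verify_ssl=False, path='fake_useragent.json')
    user_agent = ua.random
except FakeUserAgentError:
    # Fall back to a fixed, known-good UA string when every retry fails.
    user_agent = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/85.0.4183.121 Safari/537.36')

print(user_agent)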