Preface
Bilibili is a danmu (bullet-comment) video site with its own danmu culture, so let's find out: which danmu appear most often in a given video?
Knowledge points:
1. The basic crawler workflow
2. Regular expressions
3. requests
4. jieba
5. csv
6. wordcloud
Development environment:
Python 3.6
PyCharm
Python section
Steps:
import re
import requests
import csv
1. Determine the target URL and the headers parameter
Code:
url = 'https://api.bilibili.com/x/v1/dm/list.so?oid=186803402'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
2. Simulate a browser sending the request and get the response content
resp = requests.get(url, headers=headers)
# resp.text comes back garbled, so decode the raw bytes as UTF-8 instead
print(resp.content.decode('utf-8'))
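If the request fails or Bilibili returns an error page, the later steps will break in confusing ways, so it can help to check the response explicitly. A minimal sketch (the timeout value and the raise_for_status() call are my own additions, not part of the original code):

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()   # raises an exception on HTTP 4xx/5xx responses
print(resp.status_code)   # 200 means the danmu XML came back successfully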
3. Parse the page and extract the data
# Extract every danmu from the XML with a regular expression
html_doc = resp.content.decode('utf-8')
res = re.compile('<d.*?>(.*?)</d>')
danmu = re.findall(res, html_doc)
print(danmu)
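The danmu API returns XML in which each comment is a <d> element, so instead of a regex you could also use the standard library's XML parser. A rough alternative sketch (assuming the response keeps this structure):

import xml.etree.ElementTree as ET

# Parse the XML document and collect the text of every <d> child element
root = ET.fromstring(html_doc)
danmu = [d.text for d in root.findall('d') if d.text]
print(danmu)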
4. Save the data
# Open the CSV file once and write one danmu per row
with open('C:/Users/Administrator/Desktop/B站弹幕.csv', 'a', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    for i in danmu:
        writer.writerow([i])
Visualizing the data
Import the word-cloud library wordcloud and the Chinese word-segmentation library jieba
import jieba
import wordcloud
Import the imread function from the imageio library and use it to read a local image, which serves as the shape mask of the word cloud
import imageio
mk = imageio.imread(r"拳头.png")
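WordCloud expects the mask to be a numpy array, which is what imageio.imread returns. If you prefer not to depend on imageio, the same array can be produced with Pillow (a small equivalent sketch; the file name is whatever local image you use):

import numpy as np
from PIL import Image

# Load the shape image and convert it to the numpy array WordCloud expects
mk = np.array(Image.open(r"拳头.png"))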
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
}
response = requests.get("https://api.bilibili.com/x/v1/dm/list.so?oid=186803402", headers=headers)
# print(response.text)
html_doc = response.content.decode('utf-8')
format = re.compile("<d.*?>(.*?)</d>")
DanMu = format.findall(html_doc)
# Open the CSV file once and write one danmu per row
with open('C:/Users/Mark/Desktop/b站弹幕.csv', "a", newline='', encoding='utf-8-sig') as csvfile:
    writer = csv.writer(csvfile)
    for i in DanMu:
        writer.writerow([i])
Build and configure the word-cloud object w. Note the stopwords parameter: any word you do not want to appear in the cloud goes into this set (here it only contains a space; the sketch after the configuration below shows how to filter out specific words of your own).
w = wordcloud.WordCloud(width=1000,
                        height=700,
                        background_color='white',
                        font_path='msyh.ttc',
                        mask=mk,
                        scale=15,
                        stopwords={' '},
                        contour_width=5,
                        contour_color='red')
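To drop particular words from the cloud, extend the stopwords set before building the object. A brief sketch (the two example words are placeholders; replace them with whatever you want to hide):

# Any word in this set is excluded from the cloud
my_stopwords = {' ', '曹操', '孔明'}
w = wordcloud.WordCloud(width=1000, height=700, background_color='white',
                        font_path='msyh.ttc', mask=mk, scale=15,
                        stopwords=my_stopwords,
                        contour_width=5, contour_color='red')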
Segment the Chinese text read from the external file with jieba and join the words into string
# utf-8-sig skips the BOM that was written when the CSV was saved
with open('C:/Users/Mark/Desktop/b站弹幕.csv', encoding='utf-8-sig') as f:
    txt = f.read()
txtlist = jieba.lcut(txt)
string = " ".join(txtlist)
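Segmenting also makes it easy to answer the question from the preface directly: count how often each word appears. A small optional sketch using collections.Counter (dropping single-character tokens is my own choice, not part of the original):

import collections

# Count the segmented words, ignoring one-character tokens such as punctuation
words = [wd for wd in txtlist if len(wd) > 1]
freq = collections.Counter(words)
print(freq.most_common(10))   # the 10 most frequent danmu words
# w.generate_from_frequencies(freq) would build the cloud from these counts instead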
Pass the string variable to w's generate() method to feed the text into the word cloud
w.generate(string)
Export the word-cloud image to a file
w.to_file('C:/Users/Mark/Desktop/output2-threekingdoms.png')
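Optionally, the finished cloud can be previewed without opening the PNG by hand. A small sketch using matplotlib (not used in the original):

import matplotlib.pyplot as plt

# Render the word cloud in a window instead of only saving it to disk
plt.imshow(w, interpolation='bilinear')
plt.axis('off')
plt.show()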
The result looks like this: