## Data Collection and Parsing

From the previous chapter, we already know what work goes into developing a crawler and what problems commonly come up. We can now briefly summarize that work and the relevant technologies. Some of these libraries may be new to you, but don't worry, we will cover all of them.

1. Downloading data - urllib / requests / aiohttp.
2. Parsing data - re / lxml / beautifulsoup4 (bs4) / pyquery.
3. Persistence - pymysql / redis / sqlalchemy / pymongo.
4. Scheduling - processes / threads / coroutines.
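To make the division of labor concrete, here is a minimal runnable sketch of the four stages. The download stage is stubbed out with a literal page so the sketch works offline, and the helper names are illustrative, not taken from any of the libraries above:

```Python
import re


# Stage 1, download: stubbed here; a real crawler would use urllib / requests / aiohttp.
def download(url):
    return '<html><body><a href="/a">A</a> <a href="/b">B</a></body></html>'


# Stage 2, parse: pull out link targets with a regular expression.
def parse(html):
    return re.findall(r'href="([^"]+)"', html)


# Stage 3, persist: collect results in memory; pymysql / redis / pymongo would go here.
storage = []


def persist(records):
    storage.extend(records)


# Stage 4, schedule: a plain loop; processes / threads / coroutines parallelize this.
for url in ['http://example.com']:
    persist(parse(download(url)))

print(storage)  # ['/a', '/b']
```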
### HTML Page Analysis

```HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>首页</title>
</head>
<body>
    <h1>Hello, world!</h1>
    <p>这是一个神奇的网站!</p>
    <hr>
    <div>
        <h2>这是一个例子程序</h2>
        <p>静夜思</p>
        <p class="foo">床前明月光</p>
        <p id="bar">疑似地上霜</p>
        <p class="foo">举头望明月</p>
        <div><a href="http://www.baidu.com"><p>低头思故乡</p></a></div>
    </div>
    <a class="foo" href="http://www.qq.com">腾讯网</a>
    <img src="./img/pretty-girl.png" alt="美女">
    <img src="./img/hellokitty.png" alt="凯蒂猫">
    <img src="/static/img/pretty-girl.png" alt="美女">
    <table>
        <tr>
            <th>姓名</th>
            <th>上场时间</th>
            <th>得分</th>
            <th>篮板</th>
            <th>助攻</th>
        </tr>
    </table>
</body>
</html>
```

If the code above looks familiar, then you surely know that an HTML page is usually made up of three parts: tags that carry the content, CSS (Cascading Style Sheets) that renders the page, and JavaScript that controls interactive behavior. We can usually obtain a page's code and understand its structure through the "View Page Source" option in the browser's context menu; of course, we can also learn much more about a page through the developer tools the browser provides.
#### Fetching Pages with requests

1. GET and POST requests.
2. URL parameters and request headers.
3. Complex POST requests (file uploads).
4. Manipulating cookies.
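As a small illustration of URL parameters and request headers (the URL is a placeholder), requests can prepare a request without sending it, which shows how the params dictionary is encoded into the final query string:

```Python
import requests

# Build a GET request with query parameters and a custom header.
req = requests.Request('GET',
                       'http://www.example.com/search',
                       params={'q': 'python', 'page': 2},
                       headers={'User-Agent': 'my-crawler/0.1'})
prepared = req.prepare()

# The params are percent-encoded into the final URL.
print(prepared.url)                    # http://www.example.com/search?q=python&page=2
print(prepared.headers['User-Agent'])  # my-crawler/0.1
```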
### Three Scraping Approaches

#### Comparing the Three Approaches

| Approach            | Speed                             | Difficulty | Notes                                                                       |
| ------------------- | --------------------------------- | ---------- | --------------------------------------------------------------------------- |
| Regular expressions | Fast                              | Hard       | Learn the common regex patterns<br>Online regex testers help                |
| lxml                | Fast                              | Moderate   | Requires installing C dependencies<br>The only parser here that supports XML |
| BeautifulSoup       | Fast/slow (depends on the parser) | Easy       |                                                                             |

> Note: the parsers BeautifulSoup can use include the Python standard library's html.parser, lxml's HTML parser, lxml's XML parser, and html5lib.
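The trade-off can be felt even within the standard library: a regex grabs the table headers from the sample page in one line, while the equivalent html.parser version needs a small state machine. The snippet and variable names below are illustrative:

```Python
import re
from html.parser import HTMLParser

html = '<tr><th>姓名</th><th>得分</th></tr>'

# Regular expression: fast and terse, but fragile against markup variations.
by_regex = re.findall(r'<th>(.*?)</th>', html)


# html.parser: more code, but it tracks real tag structure.
class ThParser(HTMLParser):

    def __init__(self):
        super().__init__()
        self.in_th = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        self.in_th = tag == 'th'

    def handle_data(self, data):
        if self.in_th:
            self.cells.append(data)

    def handle_endtag(self, tag):
        if tag == 'th':
            self.in_th = False


parser = ThParser()
parser.feed(html)

print(by_regex)      # ['姓名', '得分']
print(parser.cells)  # ['姓名', '得分']
```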
#### Using BeautifulSoup

1. Traversing the document tree.
2. Five kinds of filters: string, regular expression, list, True, and method.
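A quick sketch of the list and True filters (the markup here is made up, and the stdlib html.parser backend is used so no extra parser library is needed):

```Python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<div><p>one</p><span>two</span></div>', 'html.parser')

# List filter: match any tag whose name is in the list.
print([tag.name for tag in soup.find_all(['p', 'span'])])  # ['p', 'span']

# True filter: match every tag in the document.
print([tag.name for tag in soup.find_all(True)])           # ['div', 'p', 'span']
```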
The crawler example below combines these pieces: it downloads pages with urllib, decodes them with a charset fallback, extracts links with a regular expression, and persists the matched headings to MySQL. The max_depth keyword argument limits how deep the crawl goes.

```Python
from urllib.error import URLError
from urllib.request import urlopen

import re
import pymysql
import ssl

from pymysql import Error


# Decode the page with the given charsets (not every site serves utf-8).
def decode_page(page_bytes, charsets=('utf-8',)):
    page_html = None
    for charset in charsets:
        try:
            page_html = page_bytes.decode(charset)
            break
        except UnicodeDecodeError:
            pass
    return page_html


# Get the HTML of a page (retry a given number of times through recursion).
def get_page_html(seed_url, *, retry_times=3, charsets=('utf-8',)):
    page_html = None
    try:
        page_html = decode_page(urlopen(seed_url).read(), charsets)
    except URLError:
        if retry_times > 0:
            return get_page_html(seed_url, retry_times=retry_times - 1,
                                 charsets=charsets)
    return page_html


# Extract the needed parts from the page (usually links, specified by a regex).
def get_matched_parts(page_html, pattern_str, pattern_ignore_case=re.I):
    pattern_regex = re.compile(pattern_str, pattern_ignore_case)
    return pattern_regex.findall(page_html) if page_html else []


# Run the crawler and persist the collected data.
def start_crawl(seed_url, match_pattern, *, max_depth=-1):
    conn = pymysql.connect(host='localhost', port=3306,
                           database='crawler', user='root',
                           password='123456', charset='utf8')
    try:
        with conn.cursor() as cursor:
            url_list = [seed_url]
            # Remember the depth at which each URL was first seen.
            visited_url_list = {seed_url: 0}
            while url_list:
                current_url = url_list.pop(0)
                depth = visited_url_list[current_url]
                if depth != max_depth:
                    page_html = get_page_html(
                        current_url, charsets=('utf-8', 'gbk', 'gb2312'))
                    links_list = get_matched_parts(page_html, match_pattern)
                    param_list = []
                    for link in links_list:
                        if link not in visited_url_list:
                            visited_url_list[link] = depth + 1
                            # Queue the new link so deeper levels get crawled too.
                            url_list.append(link)
                            page_html = get_page_html(
                                link, charsets=('utf-8', 'gbk', 'gb2312'))
                            headings = get_matched_parts(page_html, r'<h1>(.*)<span')
                            if headings:
                                param_list.append((headings[0], link))
                    cursor.executemany('insert into tb_result values (default, %s, %s)',
                                       param_list)
                    conn.commit()
    except Error:
        pass
        # logging.error('SQL:', error)
    finally:
        conn.close()


def main():
    ssl._create_default_https_context = ssl._create_unverified_context
    start_crawl('http://sports.sohu.com/nba_a.shtml',
                r'<a[^>]+test=a\s[^>]*href=["\'](.*?)["\']',
                max_depth=2)


if __name__ == '__main__':
    main()
```

Note that the insert statement assumes tb_result has an auto-increment primary key (filled by default) followed by a title column and a URL column.
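The charset fallback in decode_page matters because many Chinese sites still serve gbk/gb2312 rather than utf-8. A simplified re-implementation demonstrates the idea on its own:

```Python
# A page served as gbk: decoding it as utf-8 fails, the gbk fallback succeeds.
page_bytes = '你好, 世界'.encode('gbk')


def decode_page(page_bytes, charsets=('utf-8',)):
    # Try each charset in order and return the first successful decode.
    for charset in charsets:
        try:
            return page_bytes.decode(charset)
        except UnicodeDecodeError:
            pass
    return None


print(decode_page(page_bytes, charsets=('utf-8', 'gbk')))  # 你好, 世界
print(decode_page(page_bytes))                             # None (utf-8 alone fails)
```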
The BeautifulSoup example below parses the sample page from earlier and demonstrates tag traversal along with the regular-expression and method filters:

```Python
import re

from bs4 import BeautifulSoup


def main():
    html = """
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>首页</title>
    </head>
    <body>
        <h1>Hello, world!</h1>
        <p>这是一个神奇的网站!</p>
        <hr>
        <div>
            <h2>这是一个例子程序</h2>
            <p>静夜思</p>
            <p class="foo">床前明月光</p>
            <p id="bar">疑似地上霜</p>
            <p class="foo">举头望明月</p>
            <div><a href="http://www.baidu.com"><p>低头思故乡</p></a></div>
        </div>
        <a class="foo" href="http://www.qq.com">腾讯网</a>
        <img src="./img/pretty-girl.png" alt="美女">
        <img src="./img/hellokitty.png" alt="凯蒂猫">
        <img src="/static/img/pretty-girl.png" alt="美女">
        <table>
            <tr>
                <th>姓名</th>
                <th>上场时间</th>
                <th>得分</th>
                <th>篮板</th>
                <th>助攻</th>
            </tr>
        </table>
    </body>
    </html>
    """
    soup = BeautifulSoup(html, 'lxml')
    # JavaScript - document.title
    print(soup.title)
    # JavaScript - document.body.h1
    print(soup.body.h1)
    # Regex filters: tags whose names start with 'h' / end with 'r'.
    print(soup.find_all(re.compile(r'^h')))
    print(soup.find_all(re.compile(r'r$')))
    print(soup.find_all('img', {'src': re.compile(r'\./img/\w+.png')}))
    # Method filter: tags that carry exactly two attributes.
    print(soup.find_all(lambda x: len(x.attrs) == 2))
    print(soup.find_all('p', {'class': 'foo'}))


if __name__ == '__main__':
    main()
```