GooSeeker
Scrapy: A First Test Run on Python 3
1. Introduction
The earlier post "A First Look at Scrapy's Architecture" covered Scrapy's architecture; this post actually installs and runs a Scrapy crawler. It uses the official tutorial as its example; the complete code can be downloaded from GitHub.
2. Environment Setup
- Test environment: Windows 10, Python 3.4.3 32-bit
- Install Scrapy: $ pip install Scrapy  # during the actual install, an unstable package server caused the download to abort midway several times
3. Writing and Running the First Scrapy Spider
3.1. Generate a new project: tutorial
$ scrapy startproject tutorial
The project directory structure is as follows:
3.2. Define the item to scrape
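For reference, this is the layout that `scrapy startproject tutorial` generated at the time (the standard Scrapy 1.x layout; the exact set of files may vary slightly between versions):

```
tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py          # item definitions (edited in 3.2)
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spiders go here (edited in 3.3)
            __init__.py
```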
```python
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
```
3.3. Define the Spider
```python
import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "https://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "https://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
```
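The extraction logic in `parse` can be tried outside Scrapy. The sketch below mimics it with the standard library's `xml.etree.ElementTree` on a hypothetical fragment shaped like the dmoz listing pages; note that Scrapy's own selectors use the far more capable parsel/lxml XPath engine, so expressions like `a/text()` are approximated here with `find`/`.text` calls:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment shaped like the dmoz listing pages (an assumption,
# not the real page markup).
HTML = """
<ul>
    <li><a href="/docs/en/about.html"> About </a> intro text </li>
    <li><a href="/docs/en/add.html"> Suggest a Site </a> how to submit </li>
</ul>
"""

def parse(fragment):
    """Yield dicts mirroring DmozItem: title, link, desc per <li>."""
    root = ET.fromstring(fragment)
    for li in root.findall('li'):    # Scrapy: response.xpath('//ul/li')
        a = li.find('a')
        yield {
            'title': a.text,         # Scrapy: sel.xpath('a/text()').extract()
            'link': a.get('href'),   # Scrapy: sel.xpath('a/@href').extract()
            'desc': a.tail,          # Scrapy: sel.xpath('text()').extract()
        }

items = list(parse(HTML))
print(items[0]['link'])  # /docs/en/about.html
```

Because `//ul/li` matches every list item on the page, navigation and share widgets are scraped along with the book links, which explains the many empty entries in the output further below.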
3.4. Run
$ scrapy crawl dmoz -o items.json
1) The run failed with errors:
- A) ImportError: cannot import name '_win32stdio'
- B) ImportError: No module named 'win32api'
2) Diagnosis:
According to the official FAQ and posts on Stack Overflow, Scrapy had not yet been fully tested on Python 3 at that point and still had some rough edges.
3) Fixes:
- A) Manually download _win32stdio and _pollingfile from twisted/internet and place them under Lib\site-packages\twisted\internet in the Python installation directory
- B) Download and install pywin32
Running again succeeded! Scrapy's log output appears in the console; once the run finishes, open the result file items.json in the project directory to see the scraped results stored as JSON:
[
{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", " Share via Twitter "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", " Share via Twitter "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", " "], "link": []}
]
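The raw results carry padding whitespace, and the share-widget rows matched by `//ul/li` come through as empty items. A small post-processing pass, assuming the items.json shape shown above, can tidy them up:

```python
import json

def clean_items(raw):
    """Join the per-field lists, strip whitespace, drop empty entries."""
    cleaned = []
    for item in raw:
        title = ''.join(item['title']).strip()
        link = ''.join(item['link']).strip()
        desc = ''.join(item['desc']).strip()
        if title and link:  # skip the empty share-widget entries
            cleaned.append({'title': title, 'link': link, 'desc': desc})
    return cleaned

# Two entries copied from the output above.
raw = json.loads('''[
    {"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
    {"title": [], "desc": [" ", " Share via Twitter "], "link": []}
]''')
print(clean_items(raw))  # [{'title': 'About', 'link': '/docs/en/about.html', 'desc': ''}]
```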
The first Scrapy test run was a success.
4. Next Steps
Next, I will use the GooSeeker API to implement the crawler, eliminating the manual work of writing and testing an XPath for every item. There are currently two plans:
- Wrap a method in gsExtractor that automatically extracts each item's XPath from the XSLT content.
- Automatically extract each item's result from gsExtractor's extraction output.
Which approach to take will be decided in the upcoming experiments and released in a new version of gsExtractor.
5. Revision History
2016-06-11: V1.0, first published
Last updated: 2017-01-09 14:08:09