
python – Scrapy crawl spider with AJAX pagination

I am trying to scrape pagination links that are loaded through an AJAX call.

I am trying to crawl the http://www.demo.com link. In the .py file I restricted the link extraction with restrict_xpaths; the code is:

# -*- coding: utf-8 -*-
import scrapy

from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from sum.items import sumItem

class Sumspider1(CrawlSpider):
    name = 'sumDetailsUrls'
    allowed_domains = ['sum.com']
    start_urls = ['http://www.demo.com']
    rules = (
        Rule(LinkExtractor(restrict_xpaths='.//ul[@id="pager"]/li[8]/a'), callback='parse_start_url', follow=True),
    )

    # override parse_start_url so the spider also parses the first page it crawls
    def parse_start_url(self, response):
        print '********************************************1**********************************************'
        #//div[@class="showMoreCars hide"]/a
        #.//ul[@id="pager"]/li[8]/a/@href
        self.log('Inside - parse_item %s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = sumItem()
        item['page'] = response.url
        title = hxs.xpath('.//h1[@class="page-heading"]/text()').extract() 
        print '********************************************title**********************************************',title
        urls = hxs.xpath('.//a[@id="linkToDetails"]/@href').extract()
        print '**********************************************2***url*****************************************',urls

        finalurls = []       

        for url in urls:
            print '---------url-------',url
            finalurls.append(url)          

        item['urls'] = finalurls
        return item

My items.py file contains:

from scrapy.item import Item, Field


class sumItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    page = Field()
    urls = Field()

When I crawl it, I still do not get the expected output; I am unable to fetch all the pages.

Solution:

I hope the code below helps. The additional results are injected by JavaScript, so plain Scrapy never sees them in the downloaded HTML; the spider therefore drives a browser through Selenium, keeps clicking the "show more" link, and only then collects the detail URLs.

somespider.py

# -*- coding: utf-8 -*-
import scrapy
import re
from scrapy.selector import Selector
from demo.items import DemoItem
from selenium import webdriver


def removeUnicodes(strData):
    # Encode to UTF-8 and collapse newlines/tabs into spaces.
    if strData:
        strData = strData.encode('utf-8').strip()
        strData = re.sub(r'[\n\r\t]', r' ', strData)
    return strData


class demoSpider(scrapy.Spider):
    name = "domainurls"
    allowed_domains = ["domain.com"]
    start_urls = ['http://www.domain.com/used/cars-in-trichy/']

    def __init__(self):
        # HtmlUnit with JavaScript is only available through the Selenium
        # standalone (RC) server, so connect to it as a remote driver.
        self.driver = webdriver.Remote("http://127.0.0.1:4444/wd/hub",
                                       webdriver.DesiredCapabilities.HTMLUNITWITHJS)

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(5)
        hxs = Selector(response)
        item = DemoItem()
        finalurls = []

        # Keep clicking the AJAX "show more" link until it disappears.
        while True:
            try:
                next_link = self.driver.find_element_by_xpath('//div[@class="showMoreCars hide"]/a')
                next_link.click()
            except Exception:
                # No "show more" link left (or it can no longer be clicked): stop paging.
                break

        # Every result is now present in the rendered page, so collect the data once.
        item['pageurl'] = response.url
        item['title'] = removeUnicodes(hxs.xpath('.//h1[@class="page-heading"]/text()').extract()[0])

        for url in self.driver.find_elements_by_xpath('.//a[@id="linkToDetails"]'):
            finalurls.append(removeUnicodes(url.get_attribute("href")))

        item['urls'] = finalurls
        self.driver.close()
        return item
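One detail worth noting: hxs above is built from the original Scrapy response, which works for static parts such as the page heading, but anything added by the AJAX clicks only exists in the browser's DOM (which is why the detail links are read through self.driver). If you would rather keep all extraction in Scrapy selectors, you can rebuild a selector from the rendered source once the click loop is done. A minimal sketch; extract_rendered is a hypothetical helper, it reuses removeUnicodes from the spider module, and the XPaths are the ones already used above:

from scrapy.selector import Selector

def extract_rendered(driver, item):
    # Hypothetical helper: build a Scrapy Selector from the browser's rendered DOM
    # (driver is the same Selenium webdriver used in parse above).
    rendered = Selector(text=driver.page_source)
    item['title'] = removeUnicodes(
        rendered.xpath('.//h1[@class="page-heading"]/text()').extract()[0])
    item['urls'] = [removeUnicodes(href) for href in
                    rendered.xpath('.//a[@id="linkToDetails"]/@href').extract()]
    return item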

items.py

from scrapy.item import Item, Field

class DemoItem(Item):
    page = Field()
    urls = Field()
    pageurl = Field()
    title = Field()

Note:
You need to run the Selenium RC server, because HTMLUNITWITHJS only works through Selenium RC when using Python.

Run the Selenium RC server with this command:

java -jar selenium-server-standalone-2.44.0.jar
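If you prefer not to run the standalone server, a locally installed browser driver can be swapped in for the Remote/HtmlUnit driver; this is an assumption about your environment rather than part of the answer above. A minimal sketch assuming Firefox is available locally:

import scrapy
from selenium import webdriver

class demoSpider(scrapy.Spider):
    # name, allowed_domains and start_urls stay exactly as in somespider.py above

    def __init__(self):
        # Assumption: a local Firefox installation is available; this replaces the
        # webdriver.Remote(...) HtmlUnit setup, so no standalone server is needed.
        self.driver = webdriver.Firefox()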

Run your spider with the command:

scrapy crawl domainurls -o someoutput.json
