
1. The Basic Logic of Paginated Crawling

Normal pagination:

Pages of this kind display a bar like "Previous 1, 2, 3, 4 ... Next, Last".

Case 1: looking at the page source, the next-page url appears directly in it. Solution: (1) request the first page, (2) extract the next-page url, (3) request that url, and repeat until the last page (the Scrapy examples in sections 2 and 3 both follow this pattern).

Case 2: looking at the page source, the url does not appear in it. Solution: read off the total number of pages, observe how the url changes from one page to the next, and generate every page's url programmatically, as sketched below.
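A minimal sketch of generating urls by pattern, assuming a hypothetical site whose pages follow https://example.com/list?page=N and a page count of 334 read off the pagination bar:

import requests

BASE_URL = "https://example.com/list?page={}"  # hypothetical url pattern
TOTAL_PAGES = 334  # total read off the pagination bar by hand

for page in range(1, TOTAL_PAGES + 1):
    url = BASE_URL.format(page)  # build each page's url directly
    resp = requests.get(url, timeout=10)
    # ... parse resp.text here ...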

Irregular pagination

Pagination that only loads more content after clicking a "load more" button, or after scrolling down, is what we call irregular pagination.

Case 1: "load more", where the next page's content loads only after a click. Solution: capture the network traffic and find the pattern, which may show up in each page's url or in the request parameters.

Case 2: scroll-to-refresh. The solution is again to capture the traffic and look for a pattern. There is one special case, common on Weibo, where the response for one page carries a parameter that the request for the next page must use; see the sketch below.
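A minimal sketch of that cursor-style pagination, assuming a hypothetical JSON endpoint that hands back the next cursor in a since_id field (the real Weibo interface differs in detail):

import requests

url = "https://example.com/api/feed"  # hypothetical endpoint
params = {}  # the first request carries no cursor
while True:
    data = requests.get(url, params=params, timeout=10).json()
    for record in data.get("items", []):
        print(record)  # process each record
    since_id = data.get("since_id")  # cursor supplied by this page for the next one
    if not since_id:  # no cursor means the feed is exhausted
        break
    params = {"since_id": since_id}  # feed the cursor back into the next request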

2. Scraping Book Titles from 17K with Scrapy

First, a caveat: the 17K site has since added an anti-scraping debug mode that makes this pagination fail.

How to defeat that debug mode is left for later study, but the example is still a good way to understand in depth how pagination works in Scrapy.

The idea is as follows:

page 1 source
page 2 source
page 3 source
...
page 334 source

By analyzing a page's source, we can obtain the urls of the pages near it. For example:

page 1 source → urls for pages [2, 3, 4, 5, 2];

page 2 source → urls for pages [3, 4, 5, 6, 3];

page 3 source → urls for pages [2, 1, 2, 4, 5, 6, 7, 4];

(the repeated numbers come from the "previous"/"next" links, which point at pages already listed in the numbered bar). Proceeding this way, we can obtain the url of every page.

This raises a question: the urls parsed from page n overlap with those obtained from pages n-1 and n+1. Will the overlapping urls be crawled repeatedly?

The short answer is no. Recall that Scrapy has a scheduler module, and inside the scheduler sits a duplicate filter that discards urls that have already been crawled; this is part of what makes the Scrapy framework so powerful.
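Concretely, the default filter is scrapy.dupefilters.RFPDupeFilter, which fingerprints every request the scheduler receives. A request can opt out when a re-fetch is genuinely wanted:

# inside a spider callback: dont_filter=True bypasses the scheduler's duplicate filter
yield scrapy.Request(url=next_page_url, callback=self.parse, dont_filter=True)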

The overall workflow is Scrapy's standard loop: the spider yields requests, the scheduler filters and queues them, and each response is handed back to the parse callback.

The code implementing the pagination idea above is as follows:

import scrapy
from novelSpider.items import NovelspiderItem
'''
settings configured in settings.py
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
ROBOTSTXT_OBEY = False
LOG_LEVEL = "WARNING"
DOWNLOAD_DELAY = 3
ITEM_PIPELINES = {
   "novelSpider.pipelines.NovelspiderPipeline": 300,
}

'''
class NovelnameSpider(scrapy.Spider):
    name = "novelName"
    allowed_domains = ["17k.com"]
    start_urls = ["https://www.17k.com/all"]  # change the start url to the listing page we need

    def parse(self, response, **kwargs):
        # parse the current page
        a_tags = response.xpath('//tbody//td[@class="td3"]/span/a')
        for a_tag in a_tags:
            name = a_tag.xpath('.//text()').extract_first()
            # send the book title to the pipeline
            item = NovelspiderItem()  # build the item object
            item['name'] = name
            yield item
        # pagination
        page_links = response.xpath('//div[@class="page"]/a')
        for page_link in page_links:
            p_url = page_link.xpath('./@href').extract_first()
            if not p_url or p_url.startswith("javascript"):  # skip javascript: placeholder links
                continue
            # join the relative href into an absolute url and build a Request
            p_url = response.urljoin(p_url)
            yield scrapy.Request(url=p_url, callback=self.parse)  # every page has the same layout, so parse handles them all

'''
(function anonymous(
) {
debugger
})
The site injects this infinite debugger, which we cannot defeat yet.
The data can be scraped with Selenium instead.
'''
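As a fallback, here is a minimal Selenium sketch, assuming Chrome and a matching chromedriver are available; it reuses the same XPath as the spider above:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
driver.get("https://www.17k.com/all")
# the same XPath the scrapy spider uses for book titles
for a in driver.find_elements(By.XPATH, '//tbody//td[@class="td3"]/span/a'):
    print(a.text)
driver.quit()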

3. Paginated Scraping of the Douban TOP250 Movie Titles

To page through the Douban TOP250 movie titles, we check whether the Response for the current url contains an href inside the "next" element, and follow it if it does.

The detail pages are laid out differently from the listing pages, so in the spider the parse method parses the listing page and a new parse_detail method parses the detail pages.

runner

from scrapy.cmdline import execute
if __name__ == '__main__':
    execute("scrapy crawl dbTop250".split())

spider

import scrapy
import time
import random
from doubanTop250.items import Doubantop250Item
# automatic user-agent switching is not implemented (see the note after settings)

class Dbtop250Spider(scrapy.Spider):
    name = "dbTop250"
    allowed_domains = ["douban.com"]
    start_urls = ["https://www.douban.com/doulist/3936288/"]


    def parse(self, response):
        href_list = response.xpath('//div[@class="title"]/a/@href').extract()  # the detail-page url of every movie on this page
        next_href = response.xpath('//span[@class="next"]/a/@href').extract_first()
        for href in href_list:
            yield scrapy.Request(
                url=href,
                callback=self.parse_detail
            )
        # pagination
        if next_href:
            time.sleep(random.randint(1, 6))  # crude throttling; note this blocks scrapy's event loop, DOWNLOAD_DELAY is the idiomatic alternative
            yield scrapy.Request(
                url=response.urljoin(next_href),  # the next link may be relative, so join it
                callback=self.parse
            )



    def parse_detail(self, response):
        # parse the fields on a single movie's detail page
        name = response.xpath('//*[@id="content"]/h1/span[@property="v:itemreviewed"]/text()').extract_first()
        year = response.xpath('//*[@id="content"]/h1/span[@class="year"]/text()').extract_first()
        director = response.xpath('//*[@id="info"]/span[1]/span[2]/a/text()').extract_first()
        runtime = response.xpath('//*[@id="info"]/span[@property="v:runtime"]/text()').extract_first()  # the original was missing the () on text()
        # img_url = response.xpath('//*[@id="mainpic"]/a/img/@src').extract_first()

        info = Doubantop250Item()
        info['name'] = name
        info['year'] = year
        info['director'] = director
        info['time'] = runtime
        yield info


pipeline

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class Doubantop250Pipeline:
    def process_item(self, item, spider):
        print('Movie ' + str(item['name']) + ' was directed by ' + str(item['director']) + ' and released in ' + str(item['year']))
        return item

class NewDoubantop250Pipeline:  # a second pipeline that appends the data to a csv file
    def open_spider(self, spider):
        self.file = open('movieInfo.csv', mode='a', encoding='utf-8')
        self.file.write('TOP250 movie info\n')  # header line (the original was missing its newline)
    def process_item(self, item, spider):
        info = 'Movie ' + str(item['name']) + ' was directed by ' + str(item['director']) + ' and released in ' + str(item['year'])
        self.file.write(info + '\n')
        return item  # pass the item on to later pipelines (the original was missing this)
    def close_spider(self, spider):
        self.file.close()
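Note that the pipeline above writes prose lines rather than true comma-separated rows; a minimal sketch of a variant built on the standard csv module (the class name is hypothetical and would need its own ITEM_PIPELINES entry) could look like:

import csv

class CsvDoubantop250Pipeline:  # hypothetical name, not part of the original project
    def open_spider(self, spider):
        self.file = open('movieInfo.csv', mode='w', encoding='utf-8', newline='')
        self.writer = csv.writer(self.file)
        self.writer.writerow(['name', 'director', 'year', 'time'])  # header row
    def process_item(self, item, spider):
        self.writer.writerow([item['name'], item['director'], item['year'], item['time']])
        return item
    def close_spider(self, spider):
        self.file.close()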

settings

# Scrapy settings for doubanTop250 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "doubanTop250"

SPIDER_MODULES = ["doubanTop250.spiders"]
NEWSPIDER_MODULE = "doubanTop250.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "doubanTop250 (+http://www.yourdomain)" #这里是默认的user-agent,scrapy发送请求时会自动带入
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
# USER_AGENT_LIST = [
   # "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
   # "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
   # "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
   # "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
   # "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
   # "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
   # "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
   # "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
   # "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
   # "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
   # "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
   # "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
   # "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
   # "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
   # "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
   # "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
   # "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
   # "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
   # "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
   # "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
   # "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
   # "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
   # "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
   # "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
   # "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
   # "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
   # "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
   # "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
   # "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
   # "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
   # "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre",
   # "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0",
   # "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
   # "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10",
   # "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36",
   # ]
# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "doubanTop250.middlewares.Doubantop250SpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   "doubanTop250.middlewares.Doubantop250DownloaderMiddleware": 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   "doubanTop250.pipelines.Doubantop250Pipeline": 300,
   "doubanTop250.pipelines.NewDoubantop250Pipeline": 400
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
LOG_LEVEL = "WARNING"

Results

Finally, the scraped movie information is saved to a csv file.
