To run our spider, execute the following command from the first_scrapy directory:
$ scrapy crawl first
Here, first is the spider name that was specified when the spider was created.
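The name passed to scrapy crawl is looked up from the spider's name attribute. As a minimal sketch, the spider definition would look roughly like this (the class name is an assumption; the URLs are taken from the crawl log below):

import scrapy

class FirstSpider(scrapy.Spider):
    # `scrapy crawl first` finds this spider by its `name` attribute
    name = "first"
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]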
After the spider finishes crawling, you should see output similar to the following:
2022-08-09 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
2022-08-09 18:13:07-0400 [scrapy] INFO: Optional features available: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Spider opened
2022-08-09 18:13:08-0400 [scrapy] DEBUG: Crawled (200)
<GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2022-08-09 18:13:09-0400 [scrapy] DEBUG: Crawled (200)
<GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2022-08-09 18:13:09-0400 [scrapy] INFO: Closing spider (finished)
As we can see in the output, there is one log line for each URL, and (referer: None) indicates that these are start URLs, so they have no referrer. Next, we should find two new files, Books.html and Resources.html, created in the first_scrapy directory.
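These files are written by the spider's parse() callback. A minimal sketch of a parse() method that would produce them is shown below; the exact implementation in your spider may differ, and the file-naming logic here is an assumption based on the URLs in the log:

def parse(self, response):
    # Use the last non-empty path segment of the URL as the file name,
    # e.g. .../Python/Books/ -> "Books.html"
    filename = response.url.split("/")[-2] + ".html"
    with open(filename, "wb") as f:
        # Save the raw page body so we can inspect it later
        f.write(response.body)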