How to access a specific start_url in a Scrapy CrawlSpider?


I'm using Scrapy, in particular its CrawlSpider class, to scrape web links which contain certain keywords. I have a pretty long start_urls list whose entries come from a SQLite database that belongs to a Django project. I want to save the scraped web links to this database.

I have two Django models, one for the start urls such as http://example.com and one for the scraped web links such as http://example.com/website1, http://example.com/website2 etc. All scraped web links are subsites of one of the start urls in the start_urls list.

The web links model has a many-to-one relation to the start url model, i.e. the web links model has a ForeignKey to the start urls model. In order to save my scraped web links properly to the database, I need to tell the CrawlSpider's parse_item() method which start url the scraped web link belongs to. How can I do that? Scrapy's DjangoItem class does not help in this respect, as I still have to define the used start url explicitly.
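For reference, here is a minimal sketch of the two models I described; the names StartUrl and WebLink (and the app they live in) are placeholders, not my actual code:

from django.db import models

class StartUrl(models.Model):
    # One row per entry in the spider's start_urls list, e.g. http://example.com
    url = models.URLField(unique=True)

class WebLink(models.Model):
    # A scraped subsite such as http://example.com/website1, with a
    # many-to-one relation back to the start url it was found under.
    url = models.URLField()
    start_url = models.ForeignKey(StartUrl, related_name='web_links',
                                  on_delete=models.CASCADE)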

In other words, how can I pass the currently used start url to the parse_item() method, so that I can save it together with the appropriate scraped web links to the database? Any ideas? Thanks in advance!

Answer

By default you cannot access the original start url.

But you can override the make_requests_from_url method and put the start url into the request's meta. Then you can extract it from there in your parse callback (if that callback yields subsequent requests, don't forget to forward the start url in their meta as well).
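A minimal sketch of just that override (note that newer Scrapy releases deprecate make_requests_from_url in favour of start_requests, which can tag the initial requests the same way):

from scrapy.http import Request

# Inside your CrawlSpider subclass:
def make_requests_from_url(self, url):
    # Tag each initial request with the start url it was created from.
    return Request(url, dont_filter=True, meta={'start_url': url})

Keep in mind that the requests generated later by the crawling rules do not inherit this meta automatically, which is why the fuller example below also overrides parse to forward it.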


I haven't worked with CrawlSpider and maybe what Maxim suggests will work for you, but keep in mind that response.url has the url after possible redirections.
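A quick illustration of that caveat inside a hypothetical callback:

def parse_item(self, response):
    # response.url is the final url after any redirects, so it may differ
    # from the url you originally requested; the value stored in meta is
    # preserved, because the redirect middleware copies meta onto the
    # redirected request.
    final_url = response.url
    start_url = response.meta['start_url']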

Here is an example of how I would do it, but it's just an example (adapted from the Scrapy tutorial) and was not tested:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.item import Item, Field


class MyItem(Item):
    # Fields must be declared on the item class before they can be set.
    id = Field()
    name = Field()
    description = Field()
    start_url = Field()


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=(r'category\.php',), deny=(r'subsection\.php',))),
        # Extract links matching 'item.php' and parse them with the spider's parse_item method.
        Rule(SgmlLinkExtractor(allow=(r'item\.php',)), callback='parse_item'),
    )

    def parse(self, response):
        # When writing crawl spider rules, avoid using parse as callback, since
        # the CrawlSpider uses the parse method itself to implement its logic.
        # So if you override the parse method, the crawl spider will no longer
        # work. Here we deliberately wrap it only to forward the start url.
        for request_or_item in CrawlSpider.parse(self, response):
            if isinstance(request_or_item, Request):
                request_or_item = request_or_item.replace(
                    meta={'start_url': response.meta['start_url']})
            yield request_or_item

    def make_requests_from_url(self, url):
        """A method that receives a URL and returns a Request object (or a
        list of Request objects) to scrape. This method is used to construct
        the initial requests in the start_requests() method, and is typically
        used to convert urls to requests."""
        return Request(url, dont_filter=True, meta={'start_url': url})

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = MyItem()
        item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
        item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()
        item['start_url'] = response.meta['start_url']
        return item
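To connect this back to the Django side of the question: once item['start_url'] is filled in, a small item pipeline can resolve the ForeignKey and save the link. This is only a sketch, reusing the hypothetical StartUrl and WebLink models from the question above and assuming you also add a url field to the item and set item['url'] = response.url in parse_item:

class DjangoWebLinkPipeline(object):
    """Hypothetical pipeline: look up the StartUrl row matching
    item['start_url'] and save the scraped link against it."""

    def process_item(self, item, spider):
        # Hypothetical Django app path and model names; adjust to your project.
        from myapp.models import StartUrl, WebLink

        start = StartUrl.objects.get(url=item['start_url'])
        WebLink.objects.get_or_create(url=item['url'], start_url=start)
        return item

Enable the pipeline through the ITEM_PIPELINES setting, and make sure Django's settings are configured before the models are imported.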

Ask if you have any questions. BTW, with PyDev's 'Go to definition' feature you can browse the Scrapy sources and see what parameters Request, make_requests_from_url and other classes and methods expect. Getting into the code helps and saves you time, though it might seem difficult at the beginning.

