I have two CrawlerProcesses, each calling a different spider. I want to pass custom settings to one of these processes so that the output of its spider is saved to CSV. I thought I could do this:
storage_settings = {'FEED_FORMAT': 'csv', 'FEED_URI': 'foo.csv'}
process = CrawlerProcess(get_project_settings())
process.crawl('ABC', crawl_links=main_links, custom_settings=storage_settings )
process.start()
and in my spider I read them as an argument:
def __init__(self, crawl_links=None, allowed_domains=None, custom_settings=None, *args, **kwargs):
    self.start_urls = crawl_links
    self.allowed_domains = allowed_domains
    self.custom_settings = custom_settings
    self.rules = ......
    super(mySpider, self).__init__(*args, **kwargs)
but how can I tell my project settings file "settings.py" about these custom settings? I don't want to hard-code them; I want them to be read automatically.
You cannot tell your settings file about these settings. You are perhaps confusing crawler settings with spider settings. In Scrapy, the feed parameters (as of the time of this writing) need to be passed to the crawler process, not to the spider, so you have to pass them as parameters to your crawler process. I have the same use case as you: read the current project settings and then override them for each crawler process. Please see the example code below:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

s = get_project_settings()
s['FEED_FORMAT'] = 'csv'
s['LOG_LEVEL'] = 'INFO'
s['FEED_URI'] = 'Q1.csv'
s['LOG_FILE'] = 'Q1.log'
proc = CrawlerProcess(s)
And then your call to process.crawl() is not correct: the name of the spider should be passed as the first argument as a string, like this:
process.crawl('MySpider', crawl_links=main_links)
and of course MySpider should be the value given to the name attribute in your spider class.
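Putting it together, a minimal sketch of one such process might look like the following. It assumes your spider's name attribute is 'ABC' (as in your question) and that main_links is a list of start URLs; both are placeholders you would adjust for your project.

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Hypothetical start URLs; replace with your own.
main_links = ['http://example.com']

# Read the project settings, then override the feed options for this process only.
s = get_project_settings()
s['FEED_FORMAT'] = 'csv'
s['FEED_URI'] = 'Q1.csv'
s['LOG_LEVEL'] = 'INFO'
s['LOG_FILE'] = 'Q1.log'

process = CrawlerProcess(s)
# 'ABC' is assumed to be the spider's name attribute; crawl_links is whatever
# keyword argument your spider's __init__ accepts.
process.crawl('ABC', crawl_links=main_links)
process.start()  # blocks until the crawl finishes

The overrides only apply to the process they were given to, so a second crawler process can be given a different FEED_URI in exactly the same way.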