How can I collect stats from within a spider callback?
```python
class MySpider(Spider):
    name = "myspider"
    start_urls = ["http://example.com"]

    def parse(self, response):
        stats.set_value('foo', 'bar')
```
Not sure what to import, or how to make `stats` available in general.
Check out the stats page in the Scrapy documentation. It describes the Stats Collector, but it may be necessary to add `from scrapy.stats import stats` to your spider code to be able to use it.
EDIT: At the risk of blowing my own trumpet, if you were after a concrete example, I posted an answer about how to collect failed URLs.
EDIT2: After a lot of googling, apparently no imports are necessary: the running crawler's stats collector is exposed on the spider, so inside a callback you can just use `self.crawler.stats.set_value('foo', 'bar')`.
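To make the API shape concrete without a full crawl, here is a minimal stand-in that mirrors the stats-collector methods you would call on `self.crawler.stats` in a spider callback (`set_value`, `inc_value`, `get_value`). This is a sketch of the interface, not Scrapy's actual implementation:

```python
class StatsCollector:
    """Minimal stand-in mimicking Scrapy's stats collector interface."""

    def __init__(self):
        self._stats = {}

    def set_value(self, key, value):
        # Overwrite (or create) the stat unconditionally.
        self._stats[key] = value

    def inc_value(self, key, count=1, start=0):
        # Increment a counter, initializing it to `start` if unset.
        self._stats[key] = self._stats.get(key, start) + count

    def get_value(self, key, default=None):
        return self._stats.get(key, default)


# In a real spider these calls would be self.crawler.stats.set_value(...)
# etc., made from inside a callback such as parse().
stats = StatsCollector()
stats.set_value('foo', 'bar')
stats.inc_value('pages_crawled')
stats.inc_value('pages_crawled')
print(stats.get_value('foo'))            # bar
print(stats.get_value('pages_crawled'))  # 2
```

All values end up in the crawl stats that Scrapy dumps when the spider closes, which is why no extra wiring is needed beyond reaching the collector through `self.crawler`.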