We're designing a system based on CherryPy that, in addition to serving web requests, needs to run tasks/jobs in parallel. We want it to be a single process running as a daemon, creating threads for all the parallel jobs such as scheduled tasks or collecting data online.
I've been browsing through the CherryPy documentation and know that it's thread-pooled, creating threads for all user requests. However, I can't seem to find documentation on how to create and manage threads for custom jobs. Does CherryPy have a thread handler that we can hook into, or could/should we write our own handler that hooks into CherryPy?
Subscribe a Monitor instance:
import cherrypy
from cherrypy.process.plugins import Monitor

def foo():
    my.store.collect_data('things', 'stuff')

Monitor(cherrypy.engine, foo, frequency=300).subscribe()
This will run the foo function every 300 seconds in its own thread; that thread starts when you call engine.start and stops when you call engine.stop (or at process exit).
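For context, Monitor is essentially a periodic background thread whose lifecycle is tied to the engine. The same pattern can be sketched with the standard library alone (a minimal illustration of the idea, not CherryPy's actual implementation; the PeriodicTask class and counter are made up for this sketch):

```python
import threading
import time

class PeriodicTask:
    """Run a callback every `frequency` seconds in a daemon thread,
    mimicking the start/stop lifecycle of CherryPy's Monitor plugin."""

    def __init__(self, callback, frequency):
        self.callback = callback
        self.frequency = frequency
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Event.wait doubles as an interruptible sleep: it returns False
        # on timeout (keep looping) and True once stop() sets the event.
        while not self._stop.wait(self.frequency):
            self.callback()

    def stop(self):
        self._stop.set()
        if self._thread:
            self._thread.join()

# Example usage: collect a tick every 0.02 s for 0.2 s.
counter = []
task = PeriodicTask(lambda: counter.append(1), frequency=0.02)
task.start()
time.sleep(0.2)
task.stop()
```

With the real Monitor you get this for free, plus it participates in the engine's start/stop bus events alongside the request-serving thread pool.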