As anyone who reads my blog knows, we've been profiling a large amount of Perl code recently. A daemon process receives jobs to run and in most cases (a few are run immediately) places them on a queue in the database. For a queued job, what really interests us is the turnaround time: the time from seeing an incoming request to decoding it (it is JSON), checking it, inserting it into the database, and returning a unique job ID. Obviously this determines how quickly we can queue jobs.
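To make the turnaround window concrete, here is a minimal sketch of the path being timed: decode the JSON, sanity-check it, queue it, and hand back a job ID. The function and field names are hypothetical, and an in-memory array stands in for the Oracle insert so the example is self-contained; the timing itself uses the core Time::HiRes module.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP qw(decode_json encode_json);
use Time::HiRes qw(gettimeofday tv_interval);

my $next_id = 0;
my @queue;                                       # stand-in for the database queue table

# Hypothetical illustration of the timed turnaround: decode the incoming
# JSON request, check it, "insert" it, and return a unique job ID.
sub queue_job {
    my ($json) = @_;
    my $t0  = [gettimeofday];                    # start of the turnaround window
    my $req = decode_json($json);                # decode the request
    die "request has no job type\n"
        unless $req->{type};                     # check the request
    push @queue, $req;                           # stand-in for the Oracle insert
    my $id = ++$next_id;                         # unique job ID
    printf "turnaround: %.6fs\n", tv_interval($t0);
    return $id;
}

my $id = queue_job(encode_json({ type => 'report', args => [1, 2] }));
print "queued job $id\n";
```

In the real daemon the `push` is of course a DBD::Oracle insert (plus commit), which is where most of the turnaround time goes.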
It is often said that you shouldn't optimize too early. I try very hard to keep to that, and I do mean very hard. In the project I am working on now we have a basically working implementation (some functionality remains to be coded, but it is on the edges, so this probably still counts as too early; oh dear). We have a rather complicated setup which I don't have time to go into here, but at its simplest it is the age-old client-server relationship, with a lot of database access (Oracle via DBD::Oracle). The client in this case is really a daemon process that receives work and either processes it immediately or passes it on to another server to be queued and worked on later.
I've been working on the rt "1.22 doesn't compile on 64 bit systems with unixODBC" on and off for a few days. Boy, this is tiresome (no reflection on the poster of the rt).