This is a very opinionated post, with some performance tips from someone who has been using Sidekiq for more than two years now. In that time I have made some mistakes and learned some tricks.
My current opinion is that Sidekiq is the best background job engine for Ruby/Rails applications.
And of course, to use it in your Rails app, you just need to add the gem to your Gemfile:

```ruby
gem 'sidekiq'
```
Then just create your jobs/workers in the app/workers directory, as in this simple example:

```ruby
class MySimpleWorker
  include Sidekiq::Worker

  def perform(args)
    puts "do your job here"
  end
end
```
And you are good to go. Of course, the jobs will only run when you start the Sidekiq “daemon” (do we still call them “daemons”?).
Sidekiq is not made for lengthy tasks
You do not want really long tasks in Sidekiq. Yes, I know you use Sidekiq jobs to offload lengthy operations from your controllers, and that is fine; that is not what I’m talking about.
If your job is one atomic operation, keep it that way. But if your job is lengthy because it loops over a lot of items and does some operation on each, you’ll get a lot more performance if you split the work into:
- 1 job that will do the loop
- many small jobs that will do the actual work
Why is this better? Here are some reasons off the top of my head:
- You’ll make better use of the workers you have running
- Sidekiq will take care of resuming the work for you; you do not need to track where to restart your loop
- You’ll be able to use the Sidekiq Web UI to monitor the progress of your tasks
Of course, this is not a perfect solution (is there such a thing?):
- You won’t be able to guarantee the execution order of your small atomic tasks
- It will be harder to stop everything if one task fails (if that is your goal)
Keep the number and size of your job parameters to a minimum
Sidekiq takes care of re-running failed tasks for you, and this is one of the beauties of Sidekiq. But I’ve seen it fail to put a task back into Redis after a failure when the parameter list is too big, because it adds extra failure information to the payload.
So, to avoid problems, it is better to keep the parameter list as small as possible.
Monitoring is never a bad idea
I had one corner case with a job that spawned a lot of smaller jobs: sometimes one of those jobs didn’t run as expected and was not re-added to the queue (possibly due to the problem above, but changing that architecture would have been really hard and risky).
My solution to that?
I created another Sidekiq job with the sole responsibility of checking whether all those jobs finished successfully, and rescheduling the missing parts.
Depending on your business rules, this might be a good idea: create a job to monitor the other jobs and schedule it to run every X minutes/hours.
Always raise the errors/exceptions
As mentioned before, Sidekiq will retry failed jobs, but the way it knows a job failed is that the job raises an error.
If you rescue that error and do not raise it again, Sidekiq will assume everything was fine with your job and will not add it to the RetrySet.
Use some kind of service monitoring to keep Sidekiq running
Personally I use mmonit; it is flexible and has more debug information than the alternatives I’ve tested, but you can use anything you want.
You just need a service that starts Sidekiq and keeps it running if it crashes for some reason.
“Oh, but it never crashed on my machine.”
But real user workloads will sometimes use more memory and run a lot more jobs; Linux may decide it needs to kill something because another process used all your machine’s memory; an emergency maintenance will restart the server and you’ll forget to start Sidekiq for no reason at all…
We could spend all day listing “impossible” reasons for the service to stop and need a restart…
Use different queues for different groups of tasks
It is not possible to set per-task priorities in Sidekiq, but you can set which queue a task should run in.
And every time you start a Sidekiq daemon, you can set which queue(s) that daemon should process.
For example, in one project I have tasks related to updating the user interface (long calculations), email deliveries, and system maintenance. I run the user interface queue on the same machines as the web servers, and the other two queues, with different numbers of workers, in a pool of Sidekiq-only machines.
This way I can split my processing power and get the performance I need for each type of task. I also know that if the UI tasks are too slow, they will not slow down my email deliveries and system tasks.
Summary
If you keep these 6 tips in mind, you won’t need to repeat some of the mistakes I made in the past, because as Otto von Bismarck said: “Only a fool learns from his own mistakes. The wise man learns from the mistakes of others.”
And that is it. This is not a Sidekiq tutorial, though of course I can write one if someone thinks it would be useful. So if you want a tutorial or have any comments about the tips I published here, please leave a comment below.
But just to recap:
- Sidekiq is not made for lengthy tasks
- Keep the number and size of your job parameters to a minimum
- Monitoring is never a bad idea
- Always raise the errors/exceptions
- Use some kind of service monitoring to keep Sidekiq running
- Use different queues for different groups of tasks