Queues are the final aspect of Payload's Jobs Queue and determine how your jobs are run. Up to this point, we've only covered how to queue jobs to run; nothing has actually run them yet.
When you go to run jobs, Payload will query for any jobs that are added to the queue and then run them. By default, all queued jobs are added to the default queue.
But, imagine if you wanted to have some jobs that run nightly, and other jobs which should run every five minutes.
By specifying the queue name when you queue a new job using `payload.jobs.queue()`, you can queue certain jobs with `queue: 'nightly'`, and leave other jobs in the `default` queue.
Then, you could configure two different runner strategies:
- a cron that runs nightly, querying for jobs added to the `nightly` queue
- the `default` queue, processed every ~5 minutes or so

As mentioned above, you can queue jobs, but the jobs won't run unless a worker picks up your jobs and runs them. This can be done in four ways:
The `jobs.autoRun` property allows you to configure cron jobs that automatically run queued jobs at specified intervals. Note that this does not queue new jobs; it only runs jobs that are already in the specified queue.
Example:
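A sketch of an `autoRun` configuration matching the nightly/default scenario described above (queue names, cron expressions, and limits here are illustrative):

```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    tasks: [], // your task definitions
    autoRun: [
      {
        cron: '0 0 * * *', // every night at midnight
        queue: 'nightly',  // only picks up jobs queued to 'nightly'
        limit: 100,        // max jobs to run per invocation
      },
      {
        cron: '*/5 * * * *', // every 5 minutes
        queue: 'default',
        limit: 10,
      },
    ],
  },
})
```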
You can execute jobs by making a fetch request to the /api/payload-jobs/run endpoint:
This endpoint is automatically mounted for you and is helpful in conjunction with serverless platforms like Vercel, where you might want to use Vercel Cron to invoke a serverless function that executes your jobs.
Query parameters:

- `limit`: the maximum number of jobs to run in this invocation (default: 10).
- `queue`: the name of the queue to run jobs from. If not specified, jobs will be run from the `default` queue.
- `allQueues`: if set to `true`, jobs from all queues will be run. This ignores the `queue` parameter.

If you're deploying on Vercel, you can add a `vercel.json` file in the root of your project that configures Vercel Cron to invoke the run endpoint on a cron schedule.
Here's an example of what this file will look like:
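For instance, a minimal `vercel.json` using Vercel's `crons` configuration:

```json
{
  "crons": [
    {
      "path": "/api/payload-jobs/run",
      "schedule": "*/5 * * * *"
    }
  ]
}
```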
The configuration above schedules the endpoint /api/payload-jobs/run to be invoked every 5 minutes.
The last step will be to secure your run endpoint so that only the proper users can invoke the runner.
To do this, you can set an environment variable on your Vercel project called CRON_SECRET, which should be a random string—ideally 16 characters or longer.
Then, you can modify the access function for running jobs by ensuring that only Vercel can invoke your runner.
This works because Vercel automatically makes the CRON_SECRET environment variable available to the endpoint as the Authorization header when triggered by the Vercel Cron, ensuring that the jobs can be run securely.
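A sketch of such an access function; the exact `access.run` shape is an assumption based on the description above:

```ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    tasks: [],
    access: {
      run: ({ req }): boolean => {
        // Allow authenticated users to trigger runs manually
        if (req.user) return true

        // Otherwise, require the CRON_SECRET that Vercel Cron sends
        // as the Authorization header
        const authHeader = req.headers.get('authorization')
        return authHeader === `Bearer ${process.env.CRON_SECRET}`
      },
    },
  },
})
```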
After the project is deployed to Vercel, the Vercel Cron job will automatically trigger the `/api/payload-jobs/run` endpoint on the specified schedule, running the queued jobs in the background.
If you want to process jobs programmatically from your server-side code, you can use the Local API:
Run all jobs:
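A sketch of what that might look like (the options object is an assumption based on the parameters described above):

```ts
// Run up to 10 jobs from the default queue
await payload.jobs.run()

// Or target a specific queue with a higher limit
await payload.jobs.run({ queue: 'nightly', limit: 100 })

// Or run jobs from every queue
await payload.jobs.run({ allQueues: true })
```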
Run a single job:
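For example (the argument shape is an assumption, and `jobId` is a hypothetical variable holding the job's ID):

```ts
await payload.jobs.runByID({ id: jobId })
```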
Finally, you can process jobs via the bin script that comes with Payload out of the box. By default, this script will run jobs from the default queue, with a limit of 10 jobs per invocation:
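For example (assuming the Payload bin is invoked via `npx`):

```sh
npx payload jobs:run
```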
You can override the default queue and limit by passing the --queue and --limit flags:
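For example (queue name and limit here are illustrative):

```sh
npx payload jobs:run --queue nightly --limit 100
```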
If you want to run all jobs from all queues, you can pass the --all-queues flag:
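For example:

```sh
npx payload jobs:run --all-queues
```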
In addition, the bin script allows you to pass a --cron flag to the jobs:run command to run the jobs on a scheduled, cron basis:
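For example (the five-minute schedule is illustrative):

```sh
# Keeps the process alive, running queued jobs every 5 minutes
npx payload jobs:run --cron "*/5 * * * *"
```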
You can also pass the `--handle-schedules` flag to the `jobs:run` command to make it schedule jobs according to configured schedules:
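For example:

```sh
npx payload jobs:run --handle-schedules
```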
By default, jobs are processed first in, first out (FIFO). This means that the first job added to the queue will be the first one processed. However, you can also configure the order in which jobs are processed.
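The ordering semantics can be illustrated with a self-contained sketch, sorting a mock queue by a `createdAt` timestamp:

```ts
type Job = { id: string; createdAt: number }

const queued: Job[] = [
  { id: 'first', createdAt: 1 },
  { id: 'second', createdAt: 2 },
  { id: 'third', createdAt: 3 },
]

// FIFO (the default): ascending createdAt, oldest job first
const fifo = [...queued].sort((a, b) => a.createdAt - b.createdAt)

// LIFO (e.g. processingOrder: '-createdAt'): descending, newest first
const lifo = [...queued].sort((a, b) => b.createdAt - a.createdAt)

console.log(fifo.map((j) => j.id)) // [ 'first', 'second', 'third' ]
console.log(lifo.map((j) => j.id)) // [ 'third', 'second', 'first' ]
```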
You can configure the order in which jobs are processed in the jobs configuration by passing the processingOrder property. This mimics the Payload sort property that's used for functionality such as payload.find().
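A minimal sketch, inside your `payload.config.ts` (the sort string mirrors Payload's `sort` syntax; `-createdAt` means newest first):

```ts
jobs: {
  tasks: [],
  processingOrder: '-createdAt', // LIFO: process newest jobs first
}
```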
You can also set this on a queue-by-queue basis:
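A sketch of per-queue ordering (the exact object shape and queue names are assumptions):

```ts
jobs: {
  tasks: [],
  processingOrder: {
    default: 'createdAt', // FIFO everywhere else
    queues: {
      nightly: '-createdAt', // LIFO for the nightly queue
    },
  },
}
```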
If you need even more control over the processing order, you can pass a function that returns the processing order - this function will be called every time a queue starts processing jobs.
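A sketch of the function form (the function signature is an assumption):

```ts
jobs: {
  tasks: [],
  // Evaluated each time a queue starts processing jobs
  processingOrder: ({ queue }) =>
    queue === 'nightly' ? '-createdAt' : 'createdAt',
}
```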
You can configure the order in which jobs are processed in the payload.jobs.queue method by passing the processingOrder property.
Here are typical patterns for organizing your queues:
Separate jobs by priority to ensure critical tasks run quickly:
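For example, you might poll a `critical` queue far more often than a `background` one (a sketch; queue names, schedules, and limits are illustrative):

```ts
jobs: {
  tasks: [],
  autoRun: [
    { cron: '* * * * *', queue: 'critical', limit: 50 },    // every minute
    { cron: '*/10 * * * *', queue: 'default', limit: 20 },  // every 10 minutes
    { cron: '0 * * * *', queue: 'background', limit: 100 }, // hourly
  ],
}
```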
Then queue jobs to appropriate queues:
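For instance (the task slugs and inputs here are hypothetical):

```ts
// Critical: user-facing, time-sensitive work
await payload.jobs.queue({
  task: 'sendPasswordReset',
  input: { email: 'user@example.com' },
  queue: 'critical',
})

// Background: can wait until the hourly run
await payload.jobs.queue({
  task: 'rebuildSearchIndex',
  input: {},
  queue: 'background',
})
```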
Only run jobs on specific servers:
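One way to do this is to gate `autoRun` behind an environment variable (a sketch; the variable name below also appears in the scaling tips at the end of this page):

```ts
jobs: {
  tasks: [],
  autoRun: [{ cron: '*/5 * * * *', queue: 'default', limit: 10 }],
  // Only servers started with ENABLE_JOB_WORKERS=true pick up jobs
  shouldAutoRun: async () => process.env.ENABLE_JOB_WORKERS === 'true',
}
```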
Use cases:
Group jobs by feature or domain:
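For example (queue names, task slugs, and inputs are hypothetical):

```ts
await payload.jobs.queue({ task: 'sendWelcomeEmail', input: { userId }, queue: 'emails' })
await payload.jobs.queue({ task: 'resizeUpload', input: { fileId }, queue: 'media' })
await payload.jobs.queue({ task: 'generateInvoice', input: { orderId }, queue: 'billing' })
```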
This makes it easy to monitor, scale, and troubleshoot each domain's jobs independently.
Here's a quick guide to help you choose:
| Method | Best For | Pros | Cons |
|---|---|---|---|
| Cron jobs (`autoRun`) | Dedicated servers, long-running apps | Simple setup, automatic execution | Not for serverless, requires always-running server |
| Endpoint | Serverless platforms (Vercel, Netlify) | Works with serverless, easy to trigger | Requires external cron (Vercel Cron, etc.) |
| Local API | Custom scheduling, testing | Full control, good for tests | Must implement your own scheduling |
| Bin script | Development, manual execution | Quick testing, manual control | Manual invocation only |
Recommendations:
- For dedicated, long-running servers: cron jobs (`autoRun`)
- For running a single specific job, such as in tests: `payload.jobs.runByID()`

Jobs aren't running
- Is `shouldAutoRun` returning `true`?
- Is `autoRun` configured correctly?
- Are jobs in the correct queue?
Check the jobs collection
Enable the jobs collection in admin:
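One way to do this is via the jobs collection override, assuming an override hook along these lines is available in the jobs config:

```ts
jobs: {
  tasks: [],
  jobsCollectionOverrides: ({ defaultJobsCollection }) => ({
    ...defaultJobsCollection,
    admin: {
      ...defaultJobsCollection.admin,
      hidden: false, // show payload-jobs in the admin UI
    },
  }),
}
```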
Look for jobs with:
- `processing: true` but stuck → worker may have crashed
- `hasError: true` → check the `log` field for errors
- `completedAt: null` → job hasn't run yet

Check the job logs in the `payload-jobs` collection:
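A sketch of inspecting failed job documents with the Local API (the `where` fields match the flags listed above):

```ts
const failed = await payload.find({
  collection: 'payload-jobs',
  where: { hasError: { equals: true } },
  sort: '-createdAt',
  limit: 10,
})

for (const job of failed.docs) {
  console.log(job.id, job.log) // the log field records each attempt
}
```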
- Increase `limit`
- Run more frequently
- Add more workers: scale horizontally by running multiple servers with `ENABLE_JOB_WORKERS=true`