
Queues

Queues are the final aspect of Payload's Jobs Queue, and they determine how your jobs get run. Up to this point, we've only covered how to queue jobs up to run; nothing is actually running them yet.

When you go to run jobs, Payload will query for any jobs that are added to the queue and then run them. By default, all queued jobs are added to the default queue.

But, imagine if you wanted to have some jobs that run nightly, and other jobs which should run every five minutes.

By specifying the queue name when you queue a new job using payload.jobs.queue(), you can queue certain jobs with queue: 'nightly', and other jobs can be left as the default queue.

Then, you could configure two different runner strategies:

  1. A cron that runs nightly, querying for jobs added to the nightly queue
  2. Another that runs any jobs that were added to the default queue every ~5 minutes or so
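Both strategies are expressed as standard five-field cron schedules (minute, hour, day of month, month, day of week). As a rough illustration of how a runner decides whether a schedule fires, here is a minimal, hypothetical matcher for the minute field alone; this is not Payload's implementation, which supports full cron syntax:

```typescript
// Minimal sketch: does a cron minute field match the current minute?
// Supports '*' (any), '*/n' (steps), and a plain number.
function minuteFieldMatches(field: string, minute: number): boolean {
  if (field === '*') return true // matches every minute
  if (field.startsWith('*/')) {
    const step = Number(field.slice(2))
    return minute % step === 0 // e.g. '*/5' matches 0, 5, 10, ...
  }
  return Number(field) === minute // e.g. '0' matches only minute 0
}

// '0 * * * *' (the nightly-style hourly queue) fires only at minute 0:
console.log(minuteFieldMatches('0', 0)) // true
console.log(minuteFieldMatches('0', 30)) // false

// '*/5 * * * *' (the every-five-minutes strategy) fires on multiples of 5:
console.log(minuteFieldMatches('*/5', 10)) // true
console.log(minuteFieldMatches('*/5', 12)) // false
```

Real cron matchers also handle ranges (`1-5`) and lists (`0,30`), but steps and literals cover every schedule used in this guide.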

Executing jobs

As mentioned above, you can queue jobs, but they won't run unless a worker picks them up and runs them. This can be done in four ways:

Cron jobs

The jobs.autoRun property allows you to configure cron jobs that automatically run queued jobs at specified intervals. Note that this does not queue new jobs; it only runs jobs that are already in the specified queue.

Example:

```typescript
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    // autoRun can optionally be a function that receives `payload` as an argument
    autoRun: [
      {
        cron: '0 * * * *', // every hour at minute 0
        limit: 100, // limit jobs to process each run
        queue: 'hourly', // name of the queue
      },
      // add as many cron jobs as you want
    ],
    shouldAutoRun: async (payload) => {
      // Tell Payload if it should run jobs or not. This function is optional and will return true by default.
      // This function will be invoked each time Payload goes to pick up and run jobs.
      // If this function ever returns false, the cron schedule will be stopped.
      return true
    },
  },
})
```

Endpoint

You can execute jobs by making a fetch request to the /api/payload-jobs/run endpoint:

```typescript
// Here, we're saying we want to run only 100 jobs for this invocation
// and we want to pull jobs from the `nightly` queue:
await fetch('/api/payload-jobs/run?limit=100&queue=nightly', {
  method: 'GET',
  headers: {
    Authorization: `Bearer ${token}`,
  },
})
```

This endpoint is automatically mounted for you and is helpful in conjunction with serverless platforms like Vercel, where you might want to use Vercel Cron to invoke a serverless function that executes your jobs.

Query Parameters

  • limit: The maximum number of jobs to run in this invocation (default: 10).
  • queue: The name of the queue to run jobs from. If not specified, jobs will be run from the default queue.
  • allQueues: If set to true, all jobs from all queues will be run. This will ignore the queue parameter.
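Putting these parameters together, a small helper like the following (hypothetical, not part of Payload) shows how a run URL is assembled and how allQueues takes precedence over queue:

```typescript
// Build the run-endpoint URL from the query parameters described above.
function buildRunUrl(opts: {
  limit?: number
  queue?: string
  allQueues?: boolean
}): string {
  const params = new URLSearchParams()
  if (opts.limit !== undefined) params.set('limit', String(opts.limit))
  if (opts.allQueues) {
    params.set('allQueues', 'true') // ignores any queue parameter
  } else if (opts.queue) {
    params.set('queue', opts.queue)
  }
  const qs = params.toString()
  return `/api/payload-jobs/run${qs ? `?${qs}` : ''}`
}

console.log(buildRunUrl({ limit: 100, queue: 'nightly' }))
// -> /api/payload-jobs/run?limit=100&queue=nightly

console.log(buildRunUrl({ allQueues: true, queue: 'nightly' }))
// -> /api/payload-jobs/run?allQueues=true
```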

Vercel Cron Example

If you're deploying on Vercel, you can add a vercel.json file in the root of your project that configures Vercel Cron to invoke the run endpoint on a cron schedule.

Here's an example of what this file will look like:

```json
{
  "crons": [
    {
      "path": "/api/payload-jobs/run",
      "schedule": "*/5 * * * *"
    }
  ]
}
```

The configuration above schedules the endpoint /api/payload-jobs/run to be invoked every 5 minutes.

The last step is to secure your run endpoint so that only authorized callers can invoke the runner.

To do this, you can set an environment variable on your Vercel project called CRON_SECRET, which should be a random string, ideally 16 characters or longer.

Then, you can modify the access function for running jobs to ensure that only Vercel (or a logged-in user) can invoke your runner.

```typescript
export default buildConfig({
  // Other configurations...
  jobs: {
    access: {
      run: ({ req }: { req: PayloadRequest }): boolean => {
        // Allow logged in users to execute this endpoint (default)
        if (req.user) return true

        // If there is no logged in user, then check
        // for the Vercel Cron secret to be present as an
        // Authorization header:
        const authHeader = req.headers.get('authorization')
        return authHeader === `Bearer ${process.env.CRON_SECRET}`
      },
    },
    // Other job configurations...
  },
})
```

This works because Vercel automatically makes the CRON_SECRET environment variable available to the endpoint as the Authorization header when triggered by the Vercel Cron, ensuring that the jobs can be run securely.
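The check itself reduces to a simple string comparison between the incoming header and the shared secret. As a standalone sketch (the function name is illustrative, not a Payload API):

```typescript
// Returns true only when the Authorization header carries the shared
// cron secret in the `Bearer <secret>` form that Vercel Cron sends.
function isAuthorizedCronRequest(
  authHeader: string | null,
  cronSecret: string,
): boolean {
  if (authHeader === null) return false // no header at all
  return authHeader === `Bearer ${cronSecret}`
}

console.log(isAuthorizedCronRequest('Bearer abc123', 'abc123')) // true
console.log(isAuthorizedCronRequest('Bearer wrong', 'abc123')) // false
console.log(isAuthorizedCronRequest(null, 'abc123')) // false
```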

After the project is deployed to Vercel, the Vercel Cron job will automatically trigger the /api/payload-jobs/run endpoint on the specified schedule, running the queued jobs in the background.

Local API

If you want to process jobs programmatically from your server-side code, you can use the Local API:

Run all jobs:

```typescript
// Run all jobs from the `default` queue - default limit is 10
const results = await payload.jobs.run()

// You can customize the queue name and limit by passing them as arguments:
await payload.jobs.run({ queue: 'nightly', limit: 100 })

// Run all jobs from all queues:
await payload.jobs.run({ allQueues: true })

// You can provide a where clause to filter the jobs that should be run:
await payload.jobs.run({
  where: { 'input.message': { equals: 'secret' } },
})
```

Run a single job:

```typescript
const results = await payload.jobs.runByID({
  id: myJobID,
})
```

Bin script

Finally, you can process jobs via the bin script that comes with Payload out of the box. By default, this script will run jobs from the default queue, with a limit of 10 jobs per invocation:

```sh
pnpm payload jobs:run
```

You can override the default queue and limit by passing the --queue and --limit flags:

```sh
pnpm payload jobs:run --queue myQueue --limit 15
```

If you want to run all jobs from all queues, you can pass the --all-queues flag:

```sh
pnpm payload jobs:run --all-queues
```

In addition, the bin script allows you to pass a --cron flag to the jobs:run command so that it processes jobs on a recurring cron schedule:

```sh
pnpm payload jobs:run --cron "*/5 * * * *"
```

You can also pass the --handle-schedules flag to the jobs:run command to make it schedule jobs according to your configured schedules:

```sh
# This will both schedule jobs according to the configuration and run them
pnpm payload jobs:run --cron "*/5 * * * *" --queue myQueue --handle-schedules
```

Processing Order

By default, jobs are processed first in, first out (FIFO). This means that the first job added to the queue will be the first one processed. However, you can also configure the order in which jobs are processed.
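To see what a sort string like '-createdAt' means for processing order, here is a minimal sketch (not Payload internals) of how flipping the sign of the comparison turns FIFO into LIFO, mirroring Payload's sort syntax:

```typescript
// A queued job with its creation timestamp.
type QueuedJob = { id: string; createdAt: number }

// Order jobs the way a 'createdAt' / '-createdAt' sort would:
// ascending createdAt is FIFO, descending is LIFO.
function orderJobs(
  jobs: QueuedJob[],
  sort: 'createdAt' | '-createdAt',
): string[] {
  const direction = sort === 'createdAt' ? 1 : -1
  return [...jobs]
    .sort((a, b) => direction * (a.createdAt - b.createdAt))
    .map((job) => job.id)
}

const jobs: QueuedJob[] = [
  { id: 'first', createdAt: 1 },
  { id: 'second', createdAt: 2 },
  { id: 'third', createdAt: 3 },
]

console.log(orderJobs(jobs, 'createdAt')) // FIFO: [ 'first', 'second', 'third' ]
console.log(orderJobs(jobs, '-createdAt')) // LIFO: [ 'third', 'second', 'first' ]
```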

Jobs Configuration

You can configure the order in which jobs are processed in the jobs configuration by passing the processingOrder property. This mimics the Payload sort property that's used for functionality such as payload.find().

```typescript
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: '-createdAt', // Process jobs in reverse order of creation = LIFO
  },
})
```

You can also set this on a queue-by-queue basis:

```typescript
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: {
      default: 'createdAt', // FIFO
      queues: {
        nightly: '-createdAt', // LIFO
        myQueue: '-createdAt', // LIFO
      },
    },
  },
})
```

If you need even more control over the processing order, you can pass a function that returns it; this function will be called every time a queue starts processing jobs.

```typescript
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: ({ queue }) => {
      if (queue === 'myQueue') {
        return '-createdAt' // LIFO
      }
      return 'createdAt' // FIFO
    },
  },
})
```

Local API

You can configure the order in which jobs are processed in the payload.jobs.queue method by passing the processingOrder property.

```typescript
const createdJob = await payload.jobs.queue({
  workflow: 'createPostAndUpdate',
  input: {
    title: 'my title',
  },
  processingOrder: '-createdAt', // Process jobs in reverse order of creation = LIFO
})
```

Common Queue Strategies

Here are typical patterns for organizing your queues:

Priority-Based Queues

Separate jobs by priority to ensure critical tasks run quickly:

```typescript
export default buildConfig({
  jobs: {
    tasks: [
      /* ... */
    ],
    autoRun: [
      {
        cron: '* * * * *', // Every minute
        limit: 100,
        queue: 'critical',
      },
      {
        cron: '*/5 * * * *', // Every 5 minutes
        limit: 50,
        queue: 'default',
      },
      {
        cron: '0 2 * * *', // Daily at 2 AM
        limit: 1000,
        queue: 'batch',
      },
    ],
  },
})
```

Then queue jobs to appropriate queues:

```typescript
// Critical: Password resets, payment confirmations
await payload.jobs.queue({
  task: 'sendPasswordReset',
  input: { userId: '123' },
  queue: 'critical',
})

// Default: Welcome emails, notifications
await payload.jobs.queue({
  task: 'sendWelcomeEmail',
  input: { userId: '123' },
  queue: 'default',
})

// Batch: Analytics, reports, cleanups
await payload.jobs.queue({
  task: 'generateAnalytics',
  input: { date: new Date() },
  queue: 'batch',
})
```

Environment-Based Execution

Only run jobs on specific servers:

```typescript
export default buildConfig({
  jobs: {
    tasks: [
      /* ... */
    ],
    shouldAutoRun: async (payload) => {
      // Only run jobs if this env var is set
      return process.env.ENABLE_JOB_WORKERS === 'true'
    },
    autoRun: [
      {
        cron: '*/5 * * * *',
        limit: 50,
        queue: 'default',
      },
    ],
  },
})
```

Use cases:

  • Dedicate specific servers to job processing
  • Disable job processing during deployments
  • Scale job workers independently from API servers

Feature-Based Queues

Group jobs by feature or domain:

```typescript
autoRun: [
  { cron: '*/2 * * * *', queue: 'emails', limit: 100 },
  { cron: '*/10 * * * *', queue: 'images', limit: 50 },
  { cron: '0 * * * *', queue: 'analytics', limit: 1000 },
]
```

This makes it easy to:

  • Monitor specific features
  • Scale individual features independently
  • Pause/resume specific types of work
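One way to keep feature-based queue names out of every call site is a small routing helper. This is a hypothetical pattern (the task names and mapping are illustrative, not part of Payload):

```typescript
// Map each task to its feature queue; unknown tasks fall back to 'default'.
const queueByTask: Record<string, string> = {
  sendWelcomeEmail: 'emails',
  resizeUpload: 'images',
  rollupPageViews: 'analytics',
}

function queueForTask(task: string): string {
  return queueByTask[task] ?? 'default'
}

console.log(queueForTask('resizeUpload')) // 'images'
console.log(queueForTask('somethingElse')) // 'default'
```

A call site could then pass `queue: queueForTask(task)` to payload.jobs.queue(), so renaming a queue is a one-line change.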

Choosing an Execution Method

Here's a quick guide to help you choose:

| Method | Best For | Pros | Cons |
| --- | --- | --- | --- |
| Cron jobs (autoRun) | Dedicated servers, long-running apps | Simple setup, automatic execution | Not for serverless; requires an always-running server |
| Endpoint | Serverless platforms (Vercel, Netlify) | Works with serverless, easy to trigger | Requires an external cron (Vercel Cron, etc.) |
| Local API | Custom scheduling, testing | Full control, good for tests | Must implement your own scheduling |
| Bin script | Development, manual execution | Quick testing, manual control | Manual invocation only |

Recommendations:

  • Production (Serverless): Use Endpoint + Vercel Cron
  • Production (Server): Use Cron jobs (autoRun)
  • Development: Use Bin script or Local API
  • Testing: Use Local API with payload.jobs.runByID()

Troubleshooting

Jobs aren't running

Is shouldAutoRun returning true?

```typescript
jobs: {
  shouldAutoRun: async (payload) => {
    console.log('shouldAutoRun called') // Add logging
    return true
  },
}
```

Is autoRun configured correctly?

```typescript
// invalid cron syntax
autoRun: [{ cron: 'every 5 minutes' }]

// valid cron syntax
autoRun: [{ cron: '*/5 * * * *' }]
```

Are jobs in the correct queue?

```typescript
// Queuing to 'critical' queue
await payload.jobs.queue({ task: 'myTask', queue: 'critical' })

// But autoRun only processes 'default' queue
autoRun: [{ queue: 'default' }] // won't pick up the job
```

Check the jobs collection

Enable the jobs collection in admin:

```typescript
jobsCollectionOverrides: ({ defaultJobsCollection }) => ({
  ...defaultJobsCollection,
  admin: {
    ...defaultJobsCollection.admin,
    hidden: false,
  },
})
```

Look for jobs with:

  • processing: true but stuck → Worker may have crashed
  • hasError: true → Check the log field for errors
  • completedAt: null → Job hasn't run yet
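The checklist above can be folded into a small diagnostic helper. The field names follow the list (processing, hasError, completedAt), but treat this as a sketch rather than the exact payload-jobs schema:

```typescript
// The subset of job fields the troubleshooting checklist looks at.
type JobDoc = {
  processing: boolean
  hasError: boolean
  completedAt: string | null
}

// Classify a job document per the checklist above.
function diagnose(job: JobDoc): 'failed' | 'processing' | 'queued' | 'completed' {
  if (job.hasError) return 'failed' // check the `log` field for errors
  if (job.processing) return 'processing' // if stuck here, the worker may have crashed
  if (job.completedAt === null) return 'queued' // job hasn't run yet
  return 'completed'
}

console.log(diagnose({ processing: false, hasError: false, completedAt: null }))
// 'queued'
```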

Jobs running but failing

Check the job logs in the payload-jobs collection:

```typescript
const job = await payload.findByID({
  collection: 'payload-jobs',
  id: jobId,
})

console.log(job.log) // View execution log
console.log(job.processingErrors) // View errors
```

Jobs running too slowly

Increase limit

```typescript
autoRun: [
  { cron: '*/5 * * * *', limit: 100 }, // Process more jobs per run
]
```

Run more frequently

```typescript
autoRun: [
  { cron: '* * * * *', limit: 50 }, // Run every minute instead of every 5
]
```

Add more workers

Scale horizontally by running multiple servers with ENABLE_JOB_WORKERS=true.
