We want to deploy a Payload instance with Azure, but when registering or just saving a post we always hit the RU limit for some reason.
Error message:
Response status code does not indicate success: BadRequest (400); Substatus: 1028; ActivityId: ; Reason: (Message: {"Errors":["Your account is currently configured with a total throughput limit of 1600 RU\/s. This operation failed because it would have increased the total throughput to 2000 RU\/
Interesting. Do any other operations succeed? Are you performing any seeding on startup that could be hitting your database? I'm not familiar with Azure's RUs. Would 2000 RU be a big jump over 1600?
This could be an index thing as well. For Cosmos, you'll want to set `indexSortableFields` to true - try that out and see if it makes a difference.
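A minimal sketch of where that option lives, assuming a standard `payload.config.ts`:

```ts
// payload.config.ts
import { buildConfig } from 'payload/config'

export default buildConfig({
  // Adds database indexes for all sortable fields, which Cosmos DB
  // needs in order to serve sorted queries (e.g. admin list views).
  indexSortableFields: true,
  collections: [
    // ...your collections
  ],
})
```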
https://payloadcms.com/docs/configuration/overview#options

I actually switched to serverless Cosmos DB, and now everything works pretty nicely - I'll also post some screenshots. It seems that while creating the indexes, it briefly went up to 2400 RUs. That would mean provisioning Cosmos DB for 2400 RU/s - about 200€/month - and the problem is that with Cosmos DB you cannot downsize the instance anymore.
Maybe adding a notice to the docs would be good - or maybe in the future even optimising the setup
@Dan Ribbens Have you seen anything like this with clients running Cosmos?
Not too sure about existing setup costs. I wonder if Cosmos DB rebuilds the indexes on large collections on every startup? I would have to do some digging to find out. You could try setting `autoIndex: false` in your mongo options in the Payload config in production.
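For reference, a sketch of where that would go, assuming Payload 2.x with the MongoDB adapter (`autoIndex` is a standard Mongoose connection option; on older Payload versions you'd pass it via `mongoOptions` in `payload.init` instead):

```ts
// payload.config.ts
import { buildConfig } from 'payload/config'
import { mongooseAdapter } from '@payloadcms/db-mongodb'

export default buildConfig({
  db: mongooseAdapter({
    url: process.env.DATABASE_URI,
    connectOptions: {
      // Stop Mongoose from (re)building indexes on every startup,
      // which is what can burn through Cosmos DB RUs.
      autoIndex: false,
    },
  }),
  // ...rest of your config
})
```

With this off, indexes are no longer created automatically at boot, so you'd need to build them yourself once (e.g. as a one-off migration step).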
What practical effect does `autoIndex: false` have on the CMS?
yes, exactly the same problem for me too - but the Cosmos DB API is timing out our app service because of absurd amounts of RUs
For anyone stumbling upon this. The current workaround is to use serverless Cosmos DB. Issue tracked here:
https://github.com/payloadcms/plugin-cloud-storage/issues/64

That's not really a solution. That just means I'll get billed for the silly amount of RUs Payload is using rather than my app getting throttled.
Correct, I used the term workaround. The issue is open to look into a viable solution.
any movement on this?
I'm also still running into this
I posted here as well:
https://github.com/payloadcms/payload/issues/4836

I think a dedicated Cosmos DB provider needs making
How come the combination of plugin-cloud-storage and Cosmos DB leads to such an immense number of RUs?
I've had no issue with the free tier of Atlas, but with a dedicated Cosmos DB there suddenly is.
This probably has to do with file access, as the static file handler has to go through the db in order to fetch each file. If you are only serving public assets, you could make the read permissions on your S3 bucket open and have the staticURL for the uploads collection point straight at your bucket.
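A sketch of that setup, assuming Payload 2.x with @payloadcms/plugin-cloud-storage and its S3 adapter (the bucket URL and env var names are placeholders):

```ts
// payload.config.ts (excerpt)
import { buildConfig } from 'payload/config'
import { cloudStorage } from '@payloadcms/plugin-cloud-storage'
import { s3Adapter } from '@payloadcms/plugin-cloud-storage/s3'

export default buildConfig({
  collections: [
    {
      slug: 'media',
      upload: {
        // Serve files straight from the (publicly readable) bucket,
        // so reads never touch Payload or the database.
        staticURL: 'https://my-bucket.s3.eu-west-1.amazonaws.com',
      },
      fields: [],
    },
  ],
  plugins: [
    cloudStorage({
      collections: {
        media: {
          adapter: s3Adapter({
            config: {
              region: process.env.S3_REGION,
              credentials: {
                accessKeyId: process.env.S3_ACCESS_KEY_ID,
                secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
              },
            },
            bucket: process.env.S3_BUCKET,
          }),
          // Skip Payload's access-control-checked file handler entirely;
          // generated file URLs point at staticURL instead.
          disablePayloadAccessControl: true,
        },
      },
    }),
  ],
})
```

Note this only makes sense for genuinely public assets, since `disablePayloadAccessControl` bypasses Payload's read access control for those files.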