We're loving Payload and plan to continue using it for a growing number of client projects. The front-ends for these projects are hosted on Vercel. The projects are relatively lightweight; they're typically marketing sites rather than e-commerce or apps with a large number of users.
What would be the most cost-effective (and easy to manage) way to deploy our Payload instances? Payload Cloud was very easy to set up, but I think as we scale the number of projects, self-hosting would be simpler, and I know James mentioned on the call yesterday that the Payload team is more focused on the core product for now rather than Cloud (rightfully so).
Cost-wise, I'm assuming S3 for file uploads plus a shared EC2 instance might be cheapest? If we're using S3 and the projects are lightweight-ish, would an ephemeral platform be better? Would love to hear your recommendations.
Have you considered having multiple CMSs on one Payload Cloud server? I don't know much about this as I've never done it before - I'm considering hosting multiple front-end client websites on Vercel and having one Payload Cloud instance with access control for each client. What do you think?
Hm, that's an interesting idea. Not sure how you'd be able to set different environment variables for the different CMSs though, since deploying changes to Payload Cloud is based on a single repo.
Could a multi-tenant architecture work with just one env?
https://github.com/payloadcms/payload/tree/master/examples/multi-tenant
wow I did not realize this was a thing, very interesting
Yea, it's epic, it'll save so much time and money. I wonder what the limitations and pitfalls are. Please keep us posted. I've asked someone with more experience about potential pitfalls to look out for, such as slow page load speeds, and they said it shouldn't be a problem for small websites that don't have a lot of traffic.
It's super intriguing, but I don't know if this is the right fit for me. I like having the 1:1 coupling between my front-end(s) and the CMS that supports them, whereas multi-tenancy would combine the plugins, webhooks, and most notably the build process into the same instance. I'm working on a monorepo for these projects (/site folder deployed to Vercel, /cms folder deployed to Payload Cloud or my own TBD server), so this multi-tenancy would be pretty much the opposite of that: just one /cms for all /sites. I don't like the idea of needing to build and deploy my CMS for new projects against existing projects that aren't changing.
I'm also just now realizing https://github.com/payloadcms/next-payload exists, which would take this in a different direction entirely and keep everything on Vercel.
I deploy payload (and my frontends) on a VPS using CapRover, works great!
Alright, after some more digging it sounds like the most recommended option for self-hosting is Digital Ocean, potentially their Apps rather than Droplets. Since my front-end is on Vercel and Vercel does its own image caching and hosting, it might be worth finding a non-ephemeral option that allows for local file storage instead of using S3. All that being said, it also sounds like next-payload will be revisited as soon as 2.0 is released, and if there's a way to deploy Payload on Vercel alongside the front-end, use the local API, etc., that sounds most ideal. So I'll probably need to revisit this in a month or so.
has a message somewhere from early September that outlined his production stack on DO + Vercel
I'm just a mortal dev but here's my current take on hosting:
S3 - I'm still shilling Cloudflare's R2 offering; the free tier is very much enough for most small projects, and if you ever do want to move off Vercel your images are already CDN-ready. I still use Vercel's image manipulation and therefore its CDN for now, but I'll be exploring alternatives here via Astro's image component and other community libs.
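For anyone curious what that looks like in a Payload config, here's a rough sketch using the S3-compatible cloud storage plugin pointed at R2. The `R2_*` env var names and the `media` collection slug are just illustrative, not from anyone's actual setup:

```ts
import { buildConfig } from 'payload/config'
import { cloudStorage } from '@payloadcms/plugin-cloud-storage'
import { s3Adapter } from '@payloadcms/plugin-cloud-storage/s3'

export default buildConfig({
  collections: [
    // ...your collections, including an upload-enabled 'media' collection
  ],
  plugins: [
    cloudStorage({
      collections: {
        media: {
          adapter: s3Adapter({
            bucket: process.env.R2_BUCKET || '',
            config: {
              // R2 exposes an S3-compatible endpoint per account
              endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
              region: 'auto',
              credentials: {
                accessKeyId: process.env.R2_ACCESS_KEY_ID || '',
                secretAccessKey: process.env.R2_SECRET_ACCESS_KEY || '',
              },
            },
          }),
        },
      },
    }),
  ],
})
```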
Database hosting - if you have lots of small sites, you can host multiple databases within the same MongoDB instance, and then just connect to them like `uri/database`.
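A minimal sketch of that, assuming a Payload 1.x-style init; `MONGODB_BASE_URI` and `DATABASE_NAME` are made-up env var names, and each site just gets its own database name on the shared instance:

```ts
import express from 'express'
import payload from 'payload'

const app = express()

const start = async (): Promise<void> => {
  await payload.init({
    secret: process.env.PAYLOAD_SECRET || '',
    // Same MongoDB instance for every site, a different database per site
    mongoURL: `${process.env.MONGODB_BASE_URI}/${process.env.DATABASE_NAME}`,
    express: app,
  })

  app.listen(process.env.PORT || 3000)
}

void start()
```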
Still using DO for the server in production, BUT if you want a bit of self-hosting without the manual labour, I can recommend coolify.io... you basically point it at a server via SSH keys and it handles everything for you, including automatic deploys, preview branches, SSL and whatever else you need. I currently have like 12 DBs and several APIs running on one $20 Hetzner server... v4 just came out in beta and it's fantastic, I'll migrate to it from v3 as soon as possible and continue using it.
You mention client sites - if they're largely static, I know of a few agencies here who run Payload in multi-tenancy mode: one instance serving many static sites, one per client.
Anyway, yes: once the newer core changes to Payload's structure happen to allow for a more stable serverless deployment, it's going to be a no-brainer to deploy Payload straight on Vercel. For an entirely free package (Cloudflare R2 for S3, MongoDB Atlas free tier, etc.) you'd have a full-stack service hosted, which is just incredible imo.
It still won't be ideal for every use case, but definitely for smaller sites, why not.
Really appreciate the extra context. Sounds like I need to check out Coolify. And glad you agree serverless makes the most sense once it's ready. I can't quite wrap my head around multi-tenancy. I think my client sites would need to be similar enough for this to make sense, reusing at least a few blocks and collections. But for those types of sites I'm typically using Framer or Webflow anyway; I turn to Next.js + Payload for the sites that are much more custom. But I feel like I might be misunderstanding the benefits of multi-tenancy.
Do you use Digital Ocean's Droplets or App Platform? Now I'm leaning toward R2, MongoDB Atlas free tier, and either a Droplet or App Platform for the server.
App Platform. I don't enjoy messing around with DevOps at all, but I do like self-hosting, hence Coolify has been just perfect for me.
For multi-tenancy we'll need to redeploy for every client, right? Or is there a way to configure a new client via the dashboard so the server never goes offline? Surely it's a deal-breaker to have the server go down for multiple client websites? How do you guys navigate that? If it requires editing the Payload code to configure every client and therefore redeployment, we might as well deploy everything to Vercel?
This is the first time I'm hearing of deploying Payload to Vercel and the entire repo in one swift deployment, am I understanding correctly? That sounds amazing, and like the most efficient solution? In that case there's no need for the manual multi-tenancy work of defining permissions etc.
For multi-tenancy we'll need to redeploy for every client right?
You can handle it all via collections, actually: a new client is a new tenant, you assign people to it, and your permissions will be built around that.
https://payloadcms.com/blog/how-to-build-a-multi-tenant-app-with-payload
You won't have downtime, though to some degree yes, your data structure will need to be somewhat similar or shared across clients.
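To make that concrete, here's a minimal sketch of the idea (not the code from the official example or blog post; the `roles` and `tenants` user fields and the collection slugs are hypothetical). Onboarding a new client is just creating a new `tenants` document and assigning users to it, with no redeploy:

```ts
import type { Access, CollectionConfig } from 'payload/types'

// Limit reads/writes to documents belonging to one of the user's tenants.
// Assumes the users collection has custom 'roles' and 'tenants' fields.
const tenantAccess: Access = ({ req: { user } }) => {
  if (!user) return false
  if (user.roles?.includes('admin')) return true
  // Returning a query constraint scopes results instead of flat-out denying
  return { tenant: { in: user.tenants ?? [] } }
}

export const Tenants: CollectionConfig = {
  slug: 'tenants',
  fields: [{ name: 'name', type: 'text', required: true }],
}

export const Pages: CollectionConfig = {
  slug: 'pages',
  access: { read: tenantAccess, update: tenantAccess, delete: tenantAccess },
  fields: [
    { name: 'title', type: 'text' },
    { name: 'tenant', type: 'relationship', relationTo: 'tenants', required: true },
  ],
}
```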
This is the first time I'm hearing of deploying Payload to Vercel and the entire repo in 1 swift deployment, am I understanding correctly?
I ran this setup for a few months on a static site, but for now it's quite limited around bundle sizes; it's too difficult to make Payload run leaner right now, though James has talked about a push for edge function compatibility, ESM, code splitting and more. That was the only problem with this hosting approach, otherwise it worked perfectly... so within the next 6 months we might be looking at this being a real possibility for production apps.
When setting up on Digital Ocean, did you ever run into an issue with `cross-env` in the `serve` command? I keep getting `cross-env not found`. I noticed this was mentioned before for DO: https://github.com/payloadcms/payload/discussions/1682#discussioncomment-4609280. I tried moving it to my regular dependencies instead of dev dependencies as well, without luck. This is using the App Platform for the `cms` subdirectory in my monorepo.
Is Coolify an alternative to App Platform, where you just bring your own server and Coolify handles all the initial set-up and CI/CD git connections? If so, what type of server did you connect?
Your `cross-env not found` is most likely because you're running the build with the node env set to prod? If so, I'd remove the node env entirely from your config and just run the commands; Payload automatically adds `NODE_ENV=production` in the serve script in package.json.
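For context, the serve script in a typical Payload 1.x starter looks roughly like this (paths are illustrative and vary by template), which is also why `cross-env` has to be resolvable at run time on the server, not just at build time:

```json
{
  "scripts": {
    "build": "cross-env PAYLOAD_CONFIG_PATH=src/payload.config.ts payload build && tsc",
    "serve": "cross-env PAYLOAD_CONFIG_PATH=dist/payload.config.js NODE_ENV=production node dist/server.js"
  }
}
```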
And yes, you point Coolify at a server and it spins up Docker and a few other things it needs to handle all of your deployments... you can then even disconnect Coolify entirely and your server will still run on its own, it just won't continue to be monitored and maintained.
I use Hetzner here in Europe, so I've run Coolify on €5 and €20 servers. Any sort of droplet-like server will work, you just need root SSH access and the ability to install Docker and runtimes like that.
v4 isn't out of beta yet, but it's significantly better than v3. I'd play around with it, DM the creator or something if it's still invite-only.
Nice, okay, thank you. I haven't done DevOps stuff in forever (Vercel / serverless functions have solved it for so long lol), I really appreciate the help wrapping my head around this.
I'll give that a try for App Platform. When I linked my repo it automatically added these build packs, so maybe they're setting the node env and I need to strip down what was automatically added.
For Coolify, do you still need to write your own Dockerfile(s) for the API to ensure it runs correctly when deployed onto the Coolify server?
Nope! Coolify handles everything for you. You can bring your own Dockerfile if you want extra power, and I believe there are more features coming for that, but otherwise once your permissions are set up and working it's exactly the same as Vercel or any other platform... you say which GitHub/GitLab repo to deploy, select an env (Node, Rust, Python, PHP etc.) and it figures it out...
It can also deploy databases, and ready-made services like Uptime Kuma and Plausible Analytics are coming later.
Those buildpacks are standard, yeah. In the settings somewhere you'll see environment variables; they're app-wide and service-specific in the top tabs, so check both areas.
Great, I'm giving Coolify a try now on a Droplet. Still trying to debug the App Platform instance. I don't see NODE_ENV being manually set in any of my environment variables. The build phase works fine; it's the deploy phase that no longer has access to `cross-env` and fails before it even gets to the node executable. I could just take out `cross-env` and manually set the env variables, but that seems like a miss for the future if someone tries this starter on Windows.
Weird, I just checked both Coolify and DO to make sure; I'm running them exactly the same as I mentioned above, with no env being set and the same commands.
Maybe it's my monorepo set-up... it knows that `/cms` is the directory to use, but maybe it's not calling `yarn serve` from there. If it's still calling from the root, then it wouldn't have `cross-env`.
You could try setting a `cd` in your script or changing the `Source Directory: /`; it's under Source in your component config in Apps. Coolify also has a config for this.
Lol yep, trying `cd cms; yarn serve` rn. Odd that `yarn build` would work correctly though (from the cms folder) but then `yarn serve` doesn't.
Wanted to throw my hat in the ring and include an article I wrote on deploying on GCloud if you were interested in that;
With Google Cloud Run we've been able to get our costs down to $2 a month using the generous free 2 million invocations a month.
how much would it cost if that generosity were to go away?
This article is so thorough, love it. I'm not having a great experience with DO, so I might give Google Cloud a go. Do Cloud Run instances have dedicated IPs so that you can whitelist your API for your MongoDB instance?
I believe less than a dollar. The majority of my costs come from network egress for the CDN as well as cloud storage for images. Cloud Run itself has never cost me more than $1-3 a month.
You can do that with a VPC network. All of the Cloud Run instances are completely distributed across multiple VMs, but you can route their network activity through a dedicated IP, like we have done for our media cache strategy.
Thanks for jumping in here @191776538205618177, this input is super helpful. I had a whole boatload of stuff I was going to chime in with from my time in the agency world, but you beat me to it with some solid solutions.
I am using Coolify (actually v3 because it is stable), Payload + Next as a monorepo, and S3 which is hosted on Coolify too. For databases, Plausible Analytics and other services I use Coolify as well. I use Hetzner Cloud for my VPS with 16 shared cores; before that I used Contabo, which also has cheap VPSes. If you have small websites with small traffic, you can run them all on one single server, but if you have a project which is more advanced and has more traffic, I recommend deploying it on a separate VPS. Hetzner has a nice UI where you can manage your instances simply. I think this is the cheapest method to stay flexible. But full self-hosting in production can be hard; if you want to send emails you need to manage your own mail server or use an external service, which adds cost.
Are there any hurdles to look out for when using R2 for cloud storage? I have my setup as attached; I'm currently getting a 401 Unauthorized when I attempt to upload. My API key has read and write permissions for my bucket. I added my local IP to the whitelist for the bucket as well.
Nevermind! I removed my personal IP from the "Client IP" options in the API Key and it works as expected
What are the costs associated with your VPC? I got everything up and running on Google Cloud but I'm realizing setting up the VPC might negate all the costs saved with Cloud Run
Networking is a little pricey; it works out to about $0.0001 per visitor.
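If that rate holds, the arithmetic is easy to sanity-check: roughly 10,000 visitors a month works out to about $1 of networking and 100,000 to about $10, so it scales with traffic rather than being a fixed per-instance cost.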
though we might do away with our image caching and go all in on Vercel
gotcha. I'd only be using the VPC for the dedicated IP between Cloud Run and the MongoDB instance so I think it'd be pretty low, but it looks like the VPC Connector is a minimum of $12 / mo and the dedicated IP is like $3
I guess pricing in your instance might depend on your revalidation strategy and how often you intend to read and modify the data in Payload
That sounds right. If you go all-in on Vercel, will you still be running your Payload instances on Cloud Run? I'd love to have everything on Vercel (front-end and server) via
https://github.com/payloadcms/next-payload but the general sentiment seems to indicate next-payload isn't quite ready for production yet
I'll probably keep that on Cloud Run, yeah. Because Next already does image caching, it's now redundant to do your own kind of image caching if Vercel does it all for you. But I always like to keep my server separate from my front-end just in case I need to scale the functionality. It also helps if we have a large influx of users on an e-store; we can just scale up our instances rather than pay the hefty Vercel edge function prices.
Hey guys, question: I see that in the Docker setup the env var PAYLOAD_PUBLIC_SECRET is exposed. Is that right? If so, is this secret the Payload secret? I don't understand why we should expose it.
+ Deploying on Cloud Run I get exactly the same issue described here: https://github.com/payloadcms/payload/issues/1309 namely "I can login but cannot save any content, all the requests throw 401 or 403. I can read content OK. Restarted the server and the database but no change. Cleared cookies etc. Will investigate further tomorrow but haven't found a way round it yet." Should I reply to the same issue?
+ Solved, thanks to this discussion: https://github.com/payloadcms/payload/discussions/1396
This could actually be made private. That's a misconfiguration on my part. Though the secret is only used on the Express side of the application, not the React front-end. This secret is used for encryption only and is never passed to the front-end.
In cases where secrets are passed between the front-end and back-end, a different Next.js token is used only to verify that the user is coming from Payload and did not manage to access a draft copy of a collection by accident. So it's not as critical a security issue if someone were to get hold of this token. It would be a different story if a collection held sensitive user data and was not encrypted.
Got it thanks
I'm curious what you're doing now that next-payload has made progress
Has anybody tried hosting on Plesk Panel?
Are there any limitations on hosting Payload on Plesk?
I am currently hosting about 15 Payload CMS instances on Plesk. No issues atm. They are all on V2.
Nice, we should probably make a tutorial
Thanks for the information. We have also hosted a custom application with Payload on Plesk, but we are facing a ChunkLoadError when loading the 404 page. Is it possible for you to share the startup file?
Probably, happy to talk through my approach with the team. About to try V3 in the coming weeks.
My server.ts file?
Personally using railway - would be keen to hear where others host mongodb?
I just run a serverless atlas instance.
Yes, the app.js file which is given in the Plesk start-up file field.
import express from 'express'
import payload from 'payload'
import path from 'path'
import nodemailerSendgrid from 'nodemailer-sendgrid'

const sendGridAPIKey = process.env.SENDGRID_API_KEY

const app = express()

// Redirect root to the admin panel
app.get('/', (_, res) => {
  res.redirect('/admin')
})

const start = async () => {
  // Initialize Payload
  await payload.init({
    secret: process.env.PAYLOAD_SECRET || '',
    express: app,
    // Only configure SendGrid email when an API key is present
    ...(sendGridAPIKey
      ? {
          email: {
            fromName: '',
            fromAddress: '',
            transportOptions: nodemailerSendgrid({
              apiKey: sendGridAPIKey,
            }),
          },
        }
      : {}),
    onInit: async () => {
      payload.logger.info(`Payload Admin URL: ${payload.getAdminURL()}`)
    },
  })

  // Add your own express routes here

  app.listen(process.env.PORT, async () => {
    payload.logger.info(`Server listening on port ${process.env.PORT}`)
  })
}

void start()
That's my server.ts file
Most times I have to run yarn or npm install once I've deployed.