Hey folks, I wrote a blog post on how to deploy PayloadCMS from start to finish using Docker & Google Cloud Run. You can read about it all here -
https://www.omniux.io/blog/deploying-a-payload-cms-instance-with-docker-and-google-cloud-run
#google-cloud #docker #cloud #gcp
Hello @markatomniux, I have followed your article to deploy Payload in Cloud Run, but I get the error:
non-zero status:1
Attached is a picture of the error and my Dockerfile configuration.
What happens when you build locally without docker? Does it build ok?
@markatomniux I already corrected that error, now I have an error related to the yaml file. I get the following error:
Repository "us.gcr.io" not found
Attached is a picture of the error and of my YAML file configuration.
Have you actually created that repository?
In your Docker config you're saying that you have created a repository named "us.gcr.io" (which is a strange name for a repository, since this would usually be your project name or something similar)
us.gcr.io was the repository that was generated automatically for me when I added Container Registry to my project. Since then, we've switched to Artifact Registry. @steprob have you added an Artifact Registry repository? It will have created a special location for your docker images
Actually, I see now that I have a mistake in my code. us.gcr.io is the name of the repository from another one of my projects (that's my bad). I'll update the blog post to cover the creation of the Artifact Registry.
When you add a new Registry, give it any name like 'docker-registry'. Then use this name of your registry in place of us.gcr.io
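For anyone doing this from the CLI rather than the console, a sketch of creating that repository with gcloud (the name 'docker-registry' and the 'us' location are placeholders, swap in your own):

```shell
# Create a Docker-format Artifact Registry repository.
# Images are then pushed as us-docker.pkg.dev/PROJECT_ID/docker-registry/IMAGE
gcloud artifacts repositories create docker-registry \
  --repository-format=docker \
  --location=us \
  --description="Docker images for Cloud Run"
```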
I've updated the blog post. That should make it a bit clearer!
Hi @markatomniux, I have not added Artifact Registry or an Artifact Registry repository; I just followed your blog to do the deployment in Cloud Run.
Ah, you'll need to enable all 4 services mentioned in the Getting Started section in order to deploy on Google Cloud Run:
With your Google project all set up, you'll be greeted with a welcome screen with a ton of services. This can all get pretty overwhelming because Google offers lots of individual services (over 70 at the time of writing!), but for now, we're going to focus on 4 services:
Cloud Build
Cloud Run
Container Registry
Cloud Storage
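If you prefer the CLI, those four can be enabled in one command. A sketch using the standard API identifiers (note Container Registry has since been superseded by Artifact Registry, so you may want artifactregistry.googleapis.com instead):

```shell
# Enable the four services used in the blog post's setup.
gcloud services enable \
  cloudbuild.googleapis.com \
  run.googleapis.com \
  containerregistry.googleapis.com \
  storage.googleapis.com
```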
@markatomniux I'm going to implement it; if I hit any errors I'll ask for your help ^^
@markatomniux The service has been successfully deployed, but when I navigate to the URL the service provides, I get the following error:
service unavailable
What’s the port in service settings? It should be 3000 (default is 8080 if I remember correctly)
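If the service already exists, the port can be changed without rebuilding the image. A sketch, assuming the service is named 'cms' in us-central1 as in the Cloud Build config shared later in this thread:

```shell
# Tell Cloud Run which container port to route traffic to
# (Cloud Run's default is 8080; Payload listens on 3000 here).
gcloud run services update cms \
  --port=3000 \
  --region=us-central1
```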
@iamlinkus If port 3000 is configured, the deployment completes correctly, but at the URL the deployed project generates I get the error:
Service Unavailable
What do your Cloud Log files say?
@markatomniux The log files show an error related to GraphQL, which I don't use directly but which is bundled in Payload's node_modules folder.
try adding this to the package.json:
"overrides": {
  "graphql@>15.7.0 <16.7.0": "^15.8.0"
},
I think I had that error and this fixed it. There was a problem with the version of graphql that Payload used by default. That was a couple of versions back, though, so I'll have to look over Payload's commit history and see if the team has updated that dependency; with the latest Payload the error may no longer occur.
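Worth noting: "overrides" is an npm feature (npm 8.3+); if you install with yarn you'd use the equivalent "resolutions" field instead. To confirm the override actually took effect, you can check which graphql version resolved:

```shell
# Reinstall so the override is applied, then inspect the resolved tree.
rm -rf node_modules package-lock.json
npm install
npm ls graphql   # every resolved copy should now be in the 15.8.x range
```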
@markatomniux I checked out your article. Another, simpler way would be to create a Dockerfile at the repo root, go to Cloud Run, create a service, select "Continuously deploy new revisions from a source repository", choose the repo from GitHub, and for build type select "Dockerfile", etc.
Maybe add this as an alternative to using the cloudbuild.yaml, which is platform specific
That's a valid solution. I should probably add that this setup is good for monorepo projects. I have the code for my NextJS app and my PayloadCMS in the same repo but in separate folders, so specifying cloudbuild files and steps can add an extra layer of control (and scalability)
I haven’t tried putting NextJS and Payload in one project yet; I still put them in two different containers and they communicate via API
Hello, I followed the blog post (thanks for writing it!) to deploy my app to Google Cloud, but I'm stuck on the communication between the Storage Bucket and the MongoDB database. When I upload images with Payload, the images appear in the bucket but not in the database. This is my first project using GCP; what do I need to do in order to connect my MongoDB Atlas cluster to my GCP project? I've seen articles about using Compute Engine and/or VPC networks, but I'm a little confused. Could anybody give me pointers please? Thank you
Hi @loryglory., when you say it's not appearing in your database, I don't quite follow. What collection are you using for your media? Are you sure that you have set up the cloud storage plugin correctly? Dump your Payload config and media collection and I can take a look 🙂
Good morning Mark, I'll dump the code below. Just so you know, I hadn't used GCP, Docker or Payload before, so this is a huge learning project for me; maybe the mistake is in the Dockerfile? Regarding GCP: I did not use anything other than the services you wrote about in your article, so no VM or similar.
When I'm uploading a file with the Payload backend, the file is added to the Google Bucket, so that works, but I get an error message in Payload (with no specifics) and the image is not added to the media collection/MongoDB database.
import { CollectionConfig } from 'payload/types';

export type Size = 'card' | 'square' | 'portrait' | 'feature';

export type Type = {
  filename: string
  alt: string
  mimeType: string
  sizes: {
    card?: SizeDetails
    square?: SizeDetails
    portrait?: SizeDetails
    feature?: SizeDetails
  }
}

const Media: CollectionConfig = {
  slug: 'media',
  access: {
    read: () => true,
    create: () => true,
    update: () => true,
    delete: () => true,
  },
  admin: {
    useAsTitle: 'filename',
    group: 'Content'
  },
  upload: {
    staticURL: `https://${process.env.GCS_HOSTNAME}/${process.env.GCS_BUCKET}`,
    staticDir: '/',
    adminThumbnail: 'thumbnail',
    mimeTypes: ['image/png', 'image/jpeg', 'image/svg+xml'],
    imageSizes: [
      {
        name: 'thumbnail',
        width: 480,
        height: 320,
      },
      {
        name: 'portrait',
        width: 768,
        height: 1024,
      },
      {
        name: 'hero',
        width: 1920,
        height: 1080,
      }
    ],
  },
  fields: [
    {
      name: 'alt',
      label: 'Alt Text',
      localized: true,
      type: 'text',
      required: true,
    },
  ],
};

export default Media;
This is the payload.config.ts file
export default buildConfig({
  plugins: [
    cloudStorage({
      collections: {
        media: {
          adapter: gcsAdapter({
            options: {
              credentials: {
                type: "service_account",
                private_key: "XXX",
                client_email: "XXX",
                client_id: "XXX"
              },
            },
            bucket: process.env.GCS_BUCKET,
            acl: "Public",
          }),
        },
      },
    }),
  ],
  serverURL: process.env.PAYLOAD_PUBLIC_SERVER_URL || '',
  collections: [Pages, Categories, FormSubmissions, Studies, Media],
  globals: [
    MegaMenu,
    SocialMedia,
    Footer,
  ],
  typescript: {
    outputFile: path.resolve(__dirname, 'payload-types.ts'),
  },
  cors: ['*', process.env.PAYLOAD_PUBLIC_SERVER_URL],
  admin: {
    webpack: (config) => ({
      ...config,
      resolve: {
        ...config.resolve,
      }
    })
  }
})
And for good measure, this is the Dockerfile
FROM node:18.8-alpine as base
FROM base as builder
WORKDIR /home/node/app
COPY ./package*.json ./
COPY . .
RUN yarn install
RUN yarn build
FROM base as runtime
ENV NODE_ENV=production
ENV PAYLOAD_CONFIG_PATH=dist/payload.config.js
ENV MONGODB_URI=xxx
ENV PAYLOAD_SECRET=xxx
ENV PAYLOAD_PUBLIC_SERVER_URL=xxx
ENV NEXT_PUBLIC_SERVER_URL=xxx
ENV PAYLOAD_SEED=false
ENV PAYLOAD_DROP_DATABASE=false
ENV GCS_BUCKET=xxx
ENV BUCKET_URL=xxx
WORKDIR /home/node/app
COPY package*.json ./
COPY .env ./
RUN yarn install --production
COPY --from=builder /home/node/app/dist ./dist
COPY --from=builder /home/node/app/.next ./.next
COPY --from=builder /home/node/app/build ./build
EXPOSE 8080
CMD ["node", "dist/server.js"]
And the deployment.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    dir: '.'
    args: [
      'build',
      '-t',
      'us-docker.pkg.dev/$PROJECT_ID/XXX/cms:$SHORT_SHA',
      '-f',
      './Dockerfile',
      '.'
    ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'push',
      'us-docker.pkg.dev/$PROJECT_ID/XXX/cms:$SHORT_SHA'
    ]
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - cms
      - --region=us-central1
      - --platform=managed
      - --image=us-docker.pkg.dev/$PROJECT_ID/XXX/cms:$SHORT_SHA
    env:
      - 'MONGODB_URI=XXX'
      - 'PAYLOAD_SECRET=XXX'
timeout: 1800s
@loryglory. I think the issue is your staticURL. You don't need both that and the cloud storage plugin; it's one or the other. Chances are the staticURL is what is causing the issue, so get rid of it and staticDir and you should be good to go!
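In other words, the upload block would shrink to something like this. A sketch based on the config dumped above (not a definitive fix): with the cloud storage plugin in charge, staticURL and staticDir simply drop out and the rest stays as-is.

```typescript
// upload config without staticURL/staticDir - the cloud-storage
// plugin decides where files are stored and served from.
upload: {
  adminThumbnail: 'thumbnail',
  mimeTypes: ['image/png', 'image/jpeg', 'image/svg+xml'],
  imageSizes: [
    { name: 'thumbnail', width: 480, height: 320 },
    { name: 'portrait', width: 768, height: 1024 },
    { name: 'hero', width: 1920, height: 1080 },
  ],
},
```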
Hello Mark, thank you for your tip; unfortunately it didn't fix the problem. I checked the console in my browser and there's a 503 Service Unavailable error for the POST method on 'domain/api/media?locale=en&depth=0&fallback-locale=null', so I'll check if I have to define the locale or something, I don't know 😄