
deploy to GCP

efrat0006_81559, 8 months ago

Thanks! Upgrading to the latest version solved the problem.

  • markatomniux, 9 months ago

    what version of payload are you using? And just to confirm, you're NOT using the NextJS Payload Server template?

  • efrat0006_81559, 9 months ago

    When I upload a WebP image, it fails if I use the imageSizes configuration. However, it works fine without imageSizes. Does anyone have any ideas on how to fix this? Thanks!



    Here is my code:

    upload: {
      staticDir: path.resolve(__dirname, '../../../media'),
      imageSizes: [
        {
          name: 'webp',
          formatOptions: { format: 'webp' },
        },
        {
          name: 'thumbnail',
          width: 200,
          formatOptions: { format: 'webp' },
        },
        {
          name: 'medium',
          width: 800,
          formatOptions: { format: 'webp', options: { quality: 90 } },
        },
        {
          name: 'large',
          width: 1200,
          formatOptions: { format: 'webp' },
        },
      ],
      adminThumbnail: 'thumbnail',
    },


    Error Message:

    ERROR: Expected integer for top but received NaN of type number
    err: {
      "type": "Error",
      "message": "Expected integer for top but received NaN of type number",
      "stack":
        Error: Expected integer for top but received NaN of type number
          at Object.invalidParameterError (C:\dev\CMS\payload\cm-website\packages\payload\node_modules.pnpm\sharp@0.33.4\node_modules\sharp\lib\is.js:135:10)
          at Sharp.<anonymous> (C:\dev\CMS\payload\cm-website\packages\payload\node_modules.pnpm\sharp@0.33.4\node_modules\sharp\lib\resize.js:483:16)
          at Array.forEach (<anonymous>)
          at Sharp.extract

    Any help would be greatly appreciated!
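The error above comes from sharp's crop math receiving NaN, which is consistent with the one size entry ('webp') that defines neither width nor height; upgrading Payload resolved it for the poster, but an explicit dimension is a cheap guard either way. A small illustrative helper (not part of Payload's API) to spot such entries:

```typescript
// Illustrative helper: list imageSizes entries that define neither width nor
// height. The ImageSize shape below is a simplified assumption, not Payload's
// actual type.
type ImageSize = {
  name: string;
  width?: number;
  height?: number;
  formatOptions?: { format: string; options?: Record<string, unknown> };
};

const dimensionless = (sizes: ImageSize[]): string[] =>
  sizes
    .filter((s) => s.width === undefined && s.height === undefined)
    .map((s) => s.name);

// The config from the message above has exactly one format-only entry:
const sizes: ImageSize[] = [
  { name: 'webp', formatOptions: { format: 'webp' } },
  { name: 'thumbnail', width: 200, formatOptions: { format: 'webp' } },
];

console.log(dimensionless(sizes)); // → [ 'webp' ]
```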

  • markatomniux, last year

    So it's important to note that .env files should not be added to version control. Environment variables can either be set in a Dockerfile (strongly advised against in any live server environment) or by the calling process. I can see you are running Payload/Next, which I'm afraid this guide doesn't specifically cater for. However, you should be able to check locally that environment variables are being read in properly by spinning up a Docker container and passing the variables in on your command.

  • __emmanuel, last year

    "building locally" = running yarn build on my machine.

    Building a docker image locally has the same issues as mentioned above.

    In case it helps, I was getting stuck between 2 main errors:

    1. The payload secret and other env vars were not being read in. This is something I'm mildly confused about, as I was under the impression that the .env file would be copied over with the COPY . . instruction in the Dockerfile (hence not needing to declare the env in the Dockerfile) and later loaded in by dotenv, but a console.log showed this wasn't happening.

    2. I was getting some errors about using import outside of a module in my payload.config.ts.

    To clarify, these 2 errors were happening independently; it seemed like when I was able to get the env read in, error 2 popped up.

    Neither of these is an issue building locally/running the dev server.



    # docker file
    FROM node:18
    
    WORKDIR /home/node
    COPY . .
    
    # ENV NODE_ENV=production
    # ENV PAYLOAD_CONFIG_PATH=dist/payload.config.js
    # ENV MONGODB_URI=*****
    # ENV MONGODB_ADDR=*****
    # ENV MONGODB_USER=*****
    # ENV MONGODB_PASS=******
    # ENV PAYLOAD_SECRET=********
    # ENV PAYLOAD_PUBLIC_SERVER_URL=http://localhost:8080
    # ENV NEXT_PUBLIC_SERVER_URL=http://localhost:8080
    # ENV PAYLOAD_SEED=false
    # ENV PAYLOAD_DROP_DATABASE=false
    
    RUN yarn install
    RUN yarn build
    
    EXPOSE 8080
    
    CMD ["node", "dist/server.js"]

    I have the env vars commented out right now, but it wasn't working for me regardless of them being present or not. I simplified it a little from the original I got it from, to see if my issue was not fully understanding the multi-stage one, but even this simple one doesn't seem to work.


    Link to where I got the original Dockerfile:

    https://www.omniux.io/blog/deploying-a-payload-cms-instance-with-docker-and-google-cloud-run

    // server.ts
    import dotenv from 'dotenv';
    import next from 'next';
    import nextBuild from 'next/dist/build';
    import path from 'path';
    
    dotenv.config({
      path: path.resolve(__dirname, '../.env'),
    });
    
    import express from 'express';
    import payload from 'payload';
    
    import { seed } from './seed';
    
    const app = express();
    const PORT = process.env.PORT || 8080;
    
    const start = async (): Promise<void> => {
      await payload.init({
        secret: process.env.PAYLOAD_SECRET || '',
        mongoURL: process.env.MONGODB_URI || '',
        express: app,
        onInit: () => {
          payload.logger.info(`Payload Admin URL: ${payload.getAdminURL()}`);
        },
      });
    
      if (process.env.PAYLOAD_SEED === 'true') {
        payload.logger.info('---- SEEDING DATABASE ----');
        await seed(payload);
      }
    
      if (process.env.NEXT_BUILD) {
        app.listen(PORT, async () => {
          payload.logger.info(`Next.js is now building...`);
          // @ts-expect-error
          await nextBuild(path.join(__dirname, '../'));
          process.exit();
        });
    
        return;
      }
    
      const nextApp = next({
        dev: process.env.NODE_ENV !== 'production',
      });
    
      const nextHandler = nextApp.getRequestHandler();
    
      app.use((req, res) => nextHandler(req, res));
    
      nextApp.prepare().then(() => {
        payload.logger.info('Next.js started');
    
        app.listen(PORT, async () => {
          payload.logger.info(`Next.js App URL: ${process.env.PAYLOAD_PUBLIC_SERVER_URL}`);
        });
      });
    };
    
    start();


    @191776538205618177

    yes, here they are.



    //payload config
    import dotenv from 'dotenv';
    import path from 'path';
    
    dotenv.config({
      path: path.resolve(__dirname, '../.env'),
    });
    
    import { buildConfig } from 'payload/config';
    import Media from './collections/Media';
    import PreviousWorks from './collections/PreviousWorks';
    import Services from './collections/Services';
    import BusinessInfo from './globals/BusinessInfo';
    
    module.exports = buildConfig({
      serverURL: process.env.PAYLOAD_PUBLIC_SERVER_URL || '',
      collections: [Services, PreviousWorks, Media],
      globals: [BusinessInfo],
      typescript: {
        outputFile: path.resolve(__dirname, 'payload-types.ts'),
      },
    });
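One likely reason the .env above is never read: in a git-triggered Cloud Build, a gitignored .env never reaches the build context, so COPY . . has nothing to copy. Separately, it is worth confirming where the relative path actually resolves at runtime, since the compiled file runs from dist/. A hypothetical helper for that check (the directory value is just an example from the Dockerfile above):

```typescript
import path from 'node:path';

// Where dotenv.config({ path: path.resolve(__dirname, '../.env') }) will look,
// given the directory the compiled file runs from.
const resolveEnvPath = (moduleDir: string): string =>
  path.resolve(moduleDir, '../.env');

// Compiled server.js lives in /home/node/dist with the Dockerfile above,
// so dotenv looks for /home/node/.env:
console.log(resolveEnvPath('/home/node/dist')); // → /home/node/.env
```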
  • markatomniux, last year

    hey

    @582393395570409492

    would you mind sharing your payload config, server.ts, and dockerfile?

  • __emmanuel, last year

    adding a q to this thread in hopes you guys had success deploying previously. Trying to deploy using Cloud Run, I saw the article that was written, and was having issues with env vars being loaded correctly in the container (everything works fine locally). Tried following the deploy-from-source guide from Google (

    https://cloud.google.com/run/docs/quickstarts/build-and-deploy/deploy-nodejs-service

    ), and the image builds fine, but it instantly terminates and complains about the app not listening on the port specified by $PORT



    using the nextjs custom server template with Local API

  • markatomniux, last year

    you're using artifact registry right?



    i'll check my config



    hmm that's strange. IAM should apply all this automatically with the cloud build agent

  • sburgpit, last year

    Yes, the build and the push are successful, but the deployment does not pass

  • markatomniux, last year

    Hey Peter, do you know at what stage this seems to fail? Is it during deployment to Cloud Run?

  • sburgpit, last year
    @191776538205618177

    Hi! Thank you for your article on the site, it's awesome. I went step by step doing all the same things, but I can't get past IAM in any way. The only thing I've added is Google Secret Manager.



    My config:


    steps:
      - name: 'gcr.io/cloud-builders/docker'
        dir: './cms'
        args: ['build', '-t', 'europe-west3-docker.pkg.dev/$PROJECT_ID/saplingo/cms:$SHORT_SHA', '-f', './Dockerfile', '.']
      - name: 'gcr.io/cloud-builders/docker'
        args: ['push', 'europe-west3-docker.pkg.dev/$PROJECT_ID/saplingo/cms:$SHORT_SHA']
      - name: 'gcr.io/cloud-builders/gcloud'
        entrypoint: gcloud
        args:
          - run
          - deploy
          - cms
          - --region=europe-west3
          - --platform=managed
          - --image=europe-west3-docker.pkg.dev/$PROJECT_ID/saplingo/cms:$SHORT_SHA
          - --allow-unauthenticated
        secretEnv: [***]
        timeout: 1800s
    availableSecrets:
      secretManager: ***


    The error I get at the last deploy step:


    ERROR: (gcloud.run.deploy) User [***@cloudbuild.gserviceaccount.com] does not have permission to access namespaces instance [my-id] (or it may not exist): Permission 'iam.serviceaccounts.actAs' denied on service account ***-compute@developer.gserviceaccount.com (or it may not exist).

    And the roles of my @cloudbuild.gserviceaccount.com


    Perhaps you have encountered this before? I will be glad of any ideas.

  • markatomniux, last year

    Just want to be clear, my approach does NOT work for Payload Next. If you created your Payload project using the Next hosting template, you are better off deploying on Vercel with one-click deployment 🙂

  • levkach, last year
    @191776538205618177

    are you using the build:next from the package.json, or do you skip that altogether? Cause I've run into a problem:

    when deploying to Cloud Run, the node server.js command is run. It fails on the above thing (BUILD_ID file nonexistent), which makes sense, since I've removed build:next from the build script in package.json.

    When I returned build:next, the Cloud Build step now requires the env variables (all of them defined in the .env file, which is not in the repo). That step also starts the payload app, since it tries to connect to MongoDB (and fails).

    I'm thinking if I should/can skip the next.js part altogether. Did you have to solve any of this? 🙂

    As for the .env file, I've removed that config from the dotenv configuration, as I'm not planning to run payload locally, just in Google Cloud.



    hi there Mark, thanks for helping again!

    The problem was that I configured the substitution variables during the cloud build (since the next build that starts the payload app required them).

    When configuring the Cloud Run env variables, it all clicked.

    So now I have a deployed Cloud Run container that fails on the next build file not being found. Apparently, next build is still required for the app to start, so I'll be figuring that out.

  • markatomniux, last year

    also, include require('dotenv').config(); at the top of your server.ts file too



    hmm that seems strange. Environment variables set in your Cloud Run should be enough to get it working. Remember that it's generally considered bad practice to include .env files in your deployments, it's better to declare your variables at startup by passing them into your Docker environment.



    Have you included anything like this in your payload.config.ts file?



    dotenv.config({
      path: path.resolve(__dirname, "../.env"),
    });
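The advice above can be sketched as a guard: only read a local .env outside production, and let Cloud Run inject variables directly in deployed containers. The tiny KEY=VALUE parser here is a stand-in for dotenv so the example is self-contained; the file name and skip-in-production behavior are assumptions, not anything from the thread.

```typescript
import fs from 'node:fs';

// Minimal KEY=VALUE parser standing in for dotenv (illustration only):
// ignores blank lines and # comments, splits on the first '='.
const parseEnv = (text: string): Record<string, string> =>
  Object.fromEntries(
    text
      .split('\n')
      .map((line) => line.trim())
      .filter((line) => line && !line.startsWith('#') && line.includes('='))
      .map((line) => {
        const i = line.indexOf('=');
        return [line.slice(0, i).trim(), line.slice(i + 1).trim()];
      }),
  );

const loadLocalEnv = (file = '.env'): void => {
  if (process.env.NODE_ENV === 'production') return; // Cloud Run injects vars itself
  if (!fs.existsSync(file)) return;
  for (const [key, value] of Object.entries(parseEnv(fs.readFileSync(file, 'utf8')))) {
    if (process.env[key] === undefined) process.env[key] = value; // never override
  }
};
```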
  • levkach, last year
    @191776538205618177

    also, do you have the code from that article on GitHub by chance? I'm still struggling to get the deploy part working; it fails on secret not found even though I pass the env variables



    gooot it, thanks man!

  • markatomniux, last year

    No, but Payload does use React to power its UI, the same as NextJS. That's maybe where you're getting confused. Payload is first and foremost an ExpressJS application.

  • levkach, last year

    I might be wrong here as I'm not from the JS world, but payload is a next.js app at its core, no?

  • markatomniux, last year

    Are you running your payload in a next server?

  • levkach, last year

    the problem seemed to actually be in the package.json:

    cross-env NODE_ENV=production yarn build:payload && yarn build:server && yarn copyfiles && yarn build:next

    where

    "build:next": "cross-env PAYLOAD_CONFIG_PATH=dist/payload/payload.config.js NEXT_BUILD=true node dist/server.js"

    so it would try to run the server during the build stage, which doesn't make sense. Removing the line worked.



    Hey guys, I have a problem where during the build step the secret is not picked up in server.ts. I used the barebones create-payload-app for app creation. The line of interest looks as follows (pic attached).

    Any ideas?

  • markatomniux, 2 years ago

    Hey folks. I decided to write an article on this full process. You can check it out here -

    https://www.omniux.io/blog/deploying-a-payload-cms-instance-with-docker-and-google-cloud-run
  • iamlinkus, 2 years ago

    Sorry, correction, it's inside the cloud run service details:



    Cloud run -> YOUR SERVICE -> Edit & deploy new revision -> Environment variables



    It’s in the “triggers”



    When in cloud build there should be a button “edit version” or something similar and that’s where you add/edit env vars. Usually you don’t include the .env file and don’t include the vars in the dockerfile for security reasons (it should never be pushed to git)



    you can add your env variables through cloud build if I’m not mistaken. In cloud build you can edit the version of the build and add the env vars.

  • meghabagri, 2 years ago

    I'm trying to build my container image on gcloud using Cloud Run, but I am getting a 'missing secret' error even when I have included my .env file. Can someone please suggest a workaround?

  • eloahsam, 2 years ago

    yup, I've tried and it still fails

  • markatomniux, 2 years ago

    You haven't tried to install dotenv in the project have you?

  • eloahsam, 2 years ago

    yup all env's start with NEXT_PUBLIC

  • markatomniux, 2 years ago

    Are you specifying NEXT_PUBLIC with your env variables?



    I did, but I've recently moved to Vercel

  • eloahsam, 2 years ago

    what's the difference between your payload dockerfile and nextjs dockerfile, and do they have separate cloudbuild files?

    and does it have environment variables? Mine can't seem to read the envs even with a cloudbuild yaml and specifying them in the Cloud Build UI, Cloud Run UI, and dockerfile



    @191776538205618177

    did you deploy your nextjs on cloud run ?

  • markatomniux, 2 years ago

    That is if you choose to use a thumbnail size. This is what I do for my collection;



            upload: {
            formatOptions: {
                format: 'webp',
                options: {
                    quality: 80,
                    force: true,
                    alphaQuality: 100,
                },
            },
            adminThumbnail: ({ doc }: { doc: { sizes: { thumbnail: { url: string } } } }) => doc.sizes.thumbnail.url,
            imageSizes: [
                {
                    formatOptions: {
                        format: 'webp',
                        options: {
                            quality: 80,
                            force: true,
                            alphaQuality: 100,
                        },
                    },
                    name: 'thumbnail',
                    width: 175,
                    height: 125
                },
                {
                    formatOptions: {
                        format: 'webp',
                        options: {
                            quality: 80,
                            force: true,
                            alphaQuality: 100,
                        },
                    },
                    name: 'square',
                    width: 600,
                    height: 400
                },
                {
                    formatOptions: {
                        format: 'webp',
                        options: {
                            quality: 80,
                            force: true,
                            alphaQuality: 100,
                        },
                    },
                    name: 'wide',
                    width: 1280,
                    height: 500
                },
            ]
        },


    for your thumbnail, try this;



    adminThumbnail: ({ doc }: { doc: { sizes: { thumbnail: { url: string } } } }) => doc.sizes.thumbnail.url
  • eloahsam, 2 years ago

    I changed the "allUsers" permission to Cloud Storage Viewer from Storage Legacy Viewer



    should I change anything in my media collection?

    slug: "media",
    access: {
      read: () => true,
      delete: isAdmin,
    },
    fields: [],
    upload: {
      staticURL: "/media",
      imageSizes: [
        {
          name: "thumbnail",
          width: 400,
          height: 300,
          position: "centre",
        },
      ],
      adminThumbnail: "thumbnail",
      mimeTypes: ["image/*"],
    },



    when I look for the image via its URL, it starts with

    http://localhost:3000/media/

    rather than my bucket URL



    and in cloud storage I gave "allUsers" the "Storage Legacy Bucket Reader" permission, but in the thumbnail and media collections it shows a broken image link



    my service account details are in another location on my pc

  • markatomniux, 2 years ago

    No, the plugin will handle that for you; it will create a URL pointing to where the file is held. I notice you are missing your service account details as well, so you'll need to provide authentication

  • eloahsam, 2 years ago
    @191776538205618177

    am I supposed to append my storage bucket link to the filename?

    as soon as I upload, I only see a broken link

    so I've only added this in plugins:

    plugins: [
      cloudStorage({
        collections: {
          media: {
            adapter: gcsAdapter({
              options: {},
              bucket: process.env.bucketUrl,
            }),
          },
        },
      }),
    ],

  • markatomniux, 2 years ago

    It's definitely the best way to host on Google Cloud, much cheaper and more efficient than dedicated servers

  • eloahsam, 2 years ago

    man, you made me fall in love with Cloud Run. I'm going to switch my other 6 projects over from App Engine to Cloud Run



    i doubt it's even reached 100k or 10k requests

  • markatomniux, 2 years ago

    Free use, then they'll charge you per million requests

  • eloahsam, 2 years ago

    at least



    yeah it sucks honestly

  • markatomniux, 2 years ago

    Looks like Google just upped it to 2 million requests per month, so you'll be fine 😄



    Damn that sucks, in that case you'll have to run Payload 24/7 with 1 dedicated instance. But don't worry. It'll cost you about $3 a month

  • eloahsam, 2 years ago

    yeah, I'm a Nextjs fanatic, but this project runs on so many legacy dependencies 😢 it would take a huge amount of effort to convert it

  • markatomniux, 2 years ago

    With Nextjs, you won't need to run Payload 24/7 because Nextjs will statically regenerate content whenever the cache is invalidated after say 60 seconds. So in the background the page will be recompiled and then displayed without any delay for the user. It will bring down your costs and massively improve page loading times



    You don't have to. But it's a recommendation if you're already using React



    If possible, see about adding Nextjs to it, because it will work a lot better in the long run if they wish to make changes and add new content
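To make the static-regeneration point above concrete, here is a hypothetical sketch (the helper names and the 60-second interval are assumptions, not from the thread): the shape a Next.js getStaticProps result takes when revalidate is set, which is what lets pages rebuild in the background instead of hitting Payload on every request.

```typescript
// Hypothetical helper: build a getStaticProps-style result with ISR enabled.
const REVALIDATE_SECONDS = 60; // regenerate at most once per minute

const toStaticProps = <T>(data: T) => ({
  props: { data },
  revalidate: REVALIDATE_SECONDS,
});

// In a real page this would wrap a fetch against the Payload REST API:
// export async function getStaticProps() {
//   const res = await fetch(`${process.env.PAYLOAD_PUBLIC_SERVER_URL}/api/pages`);
//   return toStaticProps(await res.json());
// }
console.log(toStaticProps({ title: 'home' }));
```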

  • eloahsam, 2 years ago

    on the frontend I'm using React, cause my client brought me a template and asked me to connect it to a backend CMS

  • markatomniux, 2 years ago

    Yeah

  • eloahsam, 2 years ago

    wait, what? 😂 so you suggest I should keep the minimum instances at 0?

  • markatomniux, 2 years ago

    As long as your nextjs revalidation time is more than 1 second, you'll struggle to trigger 1 million payload requests a month 😄



    There's a free limit up to about 1 million invocations. You won't need to run payload 100% of the time, so you don't require any minimum instances. You'll likely never pay for it

  • eloahsam, 2 years ago

    what is the usual cost of having at least 1 instance running?

    thank you, and I noticed a small bug. Maybe it's due to cold starts, but when I go to my payload URL it sometimes shows "Cannot get /admin/"; then I refresh and it works, or sometimes I have to refresh my page twice

  • markatomniux, 2 years ago
    https://stackoverflow.com/questions/20351637/how-to-create-a-simple-http-proxy-in-node-js

    You could proxy with an afterRead hook. Request the resource using the cloud storage api url, then take the response in payload and attach it to a new request



    They will be publicly visible, but you'll need to either make the whole bucket public (which I don't recommend) or proxy the image URL



    I would also highlight that you're now in a very grey zone in terms of how you wish to execute the use of your Cloud Storage solution. The two examples I mentioned will require a bit of research into how GCloud handles permissions, as well as potential issues surrounding CORS and proxying. You'll be best served learning how to use the Google Cloud plugin and then identifying a way to proxy your connection
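A rough sketch of the proxy idea above, with every name hypothetical (including the bucket): keep the bucket private, map a public /api/media/... path to the bucket object, and stream it back through the CMS. A real implementation would also attach service-account credentials and handle CORS, as the message notes.

```typescript
import http from 'node:http';
import https from 'node:https';

const BUCKET_BASE = 'https://storage.googleapis.com/my-private-bucket'; // assumed name

// Map a public CMS path like /api/media/myimage.jpeg to the bucket object URL.
const toBucketUrl = (reqPath: string): string | null => {
  const match = reqPath.match(/^\/api\/media\/([\w.-]+)$/);
  return match ? `${BUCKET_BASE}/${match[1]}` : null;
};

// Tiny proxy server (not started here; call proxy.listen(8080) to run it).
const proxy = http.createServer((req, res) => {
  const target = toBucketUrl(req.url ?? '');
  if (!target) {
    res.statusCode = 404;
    return res.end();
  }
  // A real proxy would add auth headers from a service account here.
  https.get(target, (upstream) => {
    res.writeHead(upstream.statusCode ?? 502, upstream.headers);
    upstream.pipe(res);
  });
});

console.log(toBucketUrl('/api/media/myimage.jpeg'));
```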

  • eloahsam, 2 years ago

    so I want the images to be publicly viewable, as it's an experience-booking web platform

  • markatomniux, 2 years ago

    The benefits of using a cloud storage container are numerous. You can share your bucket across multiple instances of your Cloud Run service, you can set up a load balancer and cache your images via a CDN for faster delivery, and you can easily view and manage your media directly from GCloud's Cloud Bucket Object Explorer



    You likely didn't enable the correct permissions for your bucket, there are 2 ways to get your image.



    You can either make the bucket completely public and use the cloud storage url passed directly from payload into your front end app. Or you can proxy the connection, use a service account to keep the bucket private and only allow access via payload, then have payload request the files on your behalf and then proxy them via a url like

    https://cms.yourwebsite.com/api/media/myimage.jpeg
  • eloahsam, 2 years ago

    I tried using gcp storage but my pictures were private for some reason even when I made them public

  • markatomniux, 2 years ago

    It's basically a wrapper for AWS, GCloud or Azure storage. So you can use the regular node libraries instead and invoke them via a collection hook



    You can use one of the cloud storage plugins or you can roll your own -

    https://github.com/payloadcms/plugin-cloud-storage

    https://payloadcms.com/docs/production/deployment#file-storage

    You have to store them in a storage bucket like a Google Cloud bucket. Storage on Cloud Run is ephemeral; you have to have a permanent location



    Oh you mean like jpeg and pngs?

  • eloahsam, 2 years ago

    How did you store images on your cloud run?

    I'm not using any 3rd party to upload, only payload, which creates the media file and saves it locally

    So I upload the image through payload, it appears, and it also appears on the frontend; then I check the site after 2 hours and I see a blank image in payload and the frontend, so it's not persistent

  • markatomniux, 2 years ago

    disappear from your container registry?

  • eloahsam, 2 years ago
    @191776538205618177

    I see another problem: it's hosted on Cloud Run and uses Cloud Build for CI/CD, but images for some reason disappear



    @191776538205618177

    Thanks for the assistance. I'm impressed by Cloud Run, especially with continuous deployment; I didn't need the cloudbuild.yaml, only the dockerfile specifying everything



    this worked like a charm; also set the serverURL to your server URL as well, else it will stay in that loading state



    Or must I just set the default value and then change the value with the UI?



    Currently it doesn't pick them up, as I only specify the environment variables in the Cloud Run service UI



    Should I also specify them in the dockerfile like this?



    ARGS MONGODB_URI


    ENV MONGODB_URI=$MONGODB_URI



    ? Will it read them from my Cloud Run UI and include them when it builds the image?



    I only used dotenv in server.ts

  • markatomniux, 2 years ago

    I found that using it in both places seems to mess with payload's env variables and breaks it. If you just have it initialised in your server.ts then it should be ok



    By any chance are you using dotenv in the payload.config.json AND server.ts files?



    it should definitely pick them up from GCloud Run. That being said, I have noticed a lot of issues around env variables in the past wee while

  • eloahsam, 2 years ago

    I've just deployed to Cloud Run; it was much easier than App Engine. I used the continuous deployment from GitHub

    I'm now having trouble with environment variables. I don't want to commit them to any branch, and I tried setting environment variables on the Cloud Run service, and I tried on Cloud Build, but the project never detects them; lastly I tried with Secret Manager



    I'll let you know how it goes tomorrow after succeeding



    ohh perfect, that's what I was looking for

  • markatomniux, 2 years ago

    and the docker image is stored on Google Container Registry, which is linked to a Google Cloud Bucket

    nah, you shouldn't need any nginx

  • eloahsam, 2 years ago

    and I see in a few articles an extra nginx file is needed

    is there somewhere I can host an image similar to how I host my repo on GitHub? Like with Cloud Build and App Engine it pulls from GitHub; with Cloud Build and Cloud Run, where does it pull from?

  • markatomniux, 2 years ago

    what do you mean?

  • eloahsam, 2 years ago

    lastly, so with code I just commit to GitHub; with docker images, where do you commit to?

    alright, seems like Cloud Run is basically App Engine for containers. I'll try it out and try to deploy tomorrow; this may sway me to move my other project from App Engine to Cloud Run

  • markatomniux, 2 years ago

    It does, you can set a minimum and maximum number of container instances running. These containers will be spun up based on demand, so if you get say 50 requests in a second, 2 or 3 more containers will get spun up. Then after a bit of idling time, the additional containers will be spun down, billing you for your usage. Cloud Run has a free budget allocation, so you can get a lot out of the base tier before paying per use

  • eloahsam, 2 years ago

    does Cloud Run also scale to zero like App Engine standard?

    ohh alright, now I understand why you have the website URL in there

  • markatomniux, 2 years ago

    You would create 2 separate build yamls for your environments; in the Google Build UI you can set a build to run on a trigger

    https://cloud.google.com/build/docs/automating-builds/create-manage-triggers

    Cloud run does this too. But rather than running a dedicated instance, it spins up and down depending on a per-request basis



    my CMS and Frontend are separate; my frontend runs on Vercel. But I need to pass the frontend URL as an environment variable so I can configure things like CORS in Payload

  • eloahsam, 2 years ago

    I see that you have a website URL and a CMS URL, so I'm guessing your front and backend are in one image… did you set your URLs after GCP created them for you?



    And I specify max nodes and compute power etc



    Yeah, I've heard about what Docker does and I have some theory but no practical experience with it; that's why I've been using App Engine, cause it offers the PaaS, so my bill is based on how many requests came in, etc



    Ohh so similar to the app.yaml and cloudbuild.yaml



    So where would you be committing your container changes? As with my current projects I commit to different GitHub branches, and when I commit to dev or to main the trigger updates the live environments

  • markatomniux, 2 years ago

    Wee breakdown for you regarding the cloudbuild.yaml file;



    Firstly, you will need to configure a service account with permissions in order to perform all the necessary steps:

    https://cloud.google.com/build/docs/securing-builds/configure-access-to-resources

    This file is designed to work with Google Cloud build. The process is broken down into 3 steps.



    - First Step, the CloudBuild file looks for the docker file in my git repository. The cloud build file uses an absolute URI, so I have to include a dir: tag pointing to the location of my DockerFile; it lives in a folder called cms. Then in my build args, I tell it to build the dockerfile, giving the docker file a tag, and then pointing the yaml to the Dockerfile situated at ./Dockerfile (this is the file, not a directory)



    - Second Step, after building the docker image, it needs to be hosted in a container registry (Like Google Container Registry). This is necessary because when we deploy the docker instance we need to pass in a location for where the image is stored so it can pull down the latest. I recommend configuring your security so that this image is not publicly available. You may also wish to remove vulnerability scanning if you already have vulnerability scanning enabled on your Github repo with a service like Snyk.



    - Third Step, this is where you may run into the most issues as it can be tricky. After deploy, I specify which service I wish to target; I have called this staging-omniux-cms as this is my staging environment. We hand-deploy changes to prod. I then specify the region, the platform type, as well as the location of the image for Docker to spin up. Finally, I have included Environment Variables which I can change in the UI if I wish. Once your service is running, you will be given a url to access it. There is an option to manage custom domains for your service, so you can change this to anything you like and it will persist between deployments.



    Our Dockerfile sits at the same level. You said earlier you've never worked with Docker, so this will seem a bit overwhelming. It's important that you take some time to learn what Docker is and why it's used. Ultimately, Docker allows us to 'containerise' our applications, meaning the application runs in a self-contained instance of Linux. It drastically simplifies web server configuration (once you understand it) and allows you to scale your web architecture up and down based solely on demand. It makes running a website a hell of a lot cheaper and gives us the added flexibility to move Payload anywhere we want. So this Docker instance will run on AWS, Azure, Digital Ocean, etc...



    # Base image shared by the build and runtime stages
    FROM node:18.8-alpine as base

    # --- Build stage: install all dependencies and compile Payload ---
    FROM base as builder

    WORKDIR /home/node/app
    # Copy manifests first so the dependency layer is cached between builds
    COPY package*.json ./
    RUN yarn install

    COPY . .
    RUN yarn build

    # --- Runtime stage: production dependencies and build output only ---
    FROM base as runtime

    ENV NODE_ENV=production
    ENV PAYLOAD_CONFIG_PATH=dist/payload.config.js

    WORKDIR /home/node/app
    COPY package*.json ./

    RUN yarn install --production
    COPY --from=builder /home/node/app/dist ./dist
    COPY --from=builder /home/node/app/build ./build

    EXPOSE 3000

    CMD ["node", "dist/server.js"]
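Before involving Cloud Build at all, you can sanity-check an image built from a Dockerfile like this locally. A sketch, assuming you run it from the folder containing the Dockerfile; the image name and MONGODB_URI value are placeholders:

```shell
# Build the image and run it locally, exposing Payload on port 3000
docker build -t omniux-cms -f ./Dockerfile .
docker run --rm -p 3000:3000 \
  -e MONGODB_URI="mongodb://host.docker.internal:27017/payload" \
  omniux-cms
```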


    Here's the cloudbuild file, it sits at the base of the project, alongside the package.json file;



    steps:
    - name: 'gcr.io/cloud-builders/docker'
      dir: './cms'
      args: [
        'build',
        '-t',
        'us-docker.pkg.dev/$PROJECT_ID/us.gcr.io/omniux-cms:$SHORT_SHA',
        '-f',
        './Dockerfile',
        '.'
        ]
    
    - name: 'gcr.io/cloud-builders/docker'
      args: [
        'push',
        'us-docker.pkg.dev/$PROJECT_ID/us.gcr.io/omniux-cms:$SHORT_SHA'
        ]
    
    # Deploy container image to Cloud Run
    - name: 'gcr.io/cloud-builders/gcloud'
      entrypoint: gcloud
      args:
      - run
      - deploy
      - staging-omniux-cms
      - --region=us-central1
      - --platform=managed
      - --image=us-docker.pkg.dev/$PROJECT_ID/us.gcr.io/omniux-cms:$SHORT_SHA
      env: #FOR LOCAL USE ONLY
      - 'MONGODB_URI='
      - 'PAYLOAD_PUBLIC_WEBSITE_URL='
      - 'PAYLOAD_PUBLIC_CMS_URL='
      - 'PAYLOAD_PUBLIC_GCS_BUCKET='
      - 'PAYLOAD_PUBLIC_GCS_ENDPOINT='
      - 'PAYLOAD_PUBLIC_GCS_PROJECT_ID='
    
    
    timeout: 1800s
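You can also submit this cloudbuild.yaml by hand from the repo root, which is handy for debugging the pipeline before setting up triggers. One caveat: $SHORT_SHA is only populated automatically for triggered builds, so for a manual run you pass it yourself via --substitutions (sketch below):

```shell
# Manually run the pipeline; supply SHORT_SHA since only triggers set it
gcloud builds submit \
  --config=cloudbuild.yaml \
  --substitutions=SHORT_SHA=$(git rev-parse --short HEAD) \
  .
```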


    In GCloud you can set triggers on Cloud Build to listen for pushes or PRs on certain branches. Once a commit goes into my 'staging' branch, the cloudbuild script is triggered, the Docker image gets built, and a container gets spun up in Cloud Run.
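Creating such a trigger can be scripted as well. A sketch with placeholder repo details, assuming the Cloud Build GitHub app is already connected to your project:

```shell
# Trigger the cloudbuild.yaml pipeline on every push to the staging branch
gcloud builds triggers create github \
  --repo-owner=my-org \
  --repo-name=my-repo \
  --branch-pattern="^staging$" \
  --build-config=cloudbuild.yaml
```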

  • default discord avatar
    eloahsam2 years ago

    I’d appreciate that

    @191776538205618177



    I’ve never worked with Docker. I’d like to know how you update the repo and commit so that it updates the cloud project.

  • default discord avatar
    markatomniux2 years ago

    When I'm back at my desk in 30 mins, I'll send over a copy of the Cloud Build yaml and Dockerfile



    Hey

    @877297218967724072

    , we use Google Cloud Build to create our Docker image and then deploy our Payload instance to Google Cloud Run. It's the cheapest and most efficient way to host Payload on GCloud 🙂

  • default discord avatar
    eloahsam2 years ago

    Alright, I'd appreciate any pointers in that regard

  • discord user avatar
    denolfe
    2 years ago

    I don't believe anyone on the team has any expertise there. I know

    @191776538205618177

    does GCP, maybe he can give you some pointers

  • default discord avatar
    eloahsam2 years ago

    And also, is there a Cloud Run config used, and how does it look?



    Yes, so in my current process I set my env in Cloud Build, which pulls from my GitHub. I just need guidance on the entire process of using Cloud Run, as I know Docker is used.

  • discord user avatar
    denolfe
    2 years ago

    There are a few here in Discord that I know are using Cloud Run. I haven't seen App Engine mentioned though. Do you have a specific question?

  • default discord avatar
    eloahsam2 years ago

    Has anyone deployed their Payload CMS to GCP App Engine? I need guidance on deploying there, and it would be a bonus if you use Cloud Build for your CI/CD.
