In this guide, we'll walk through how to create both a basic and advanced robots.txt file in your Payload app using Next.js.
A robots.txt file helps control how search engines crawl and index your website. While not mandatory, it's an essential part of technical SEO that ensures search engines respect your site's structure.
Here's how to set that up in a Next.js app with Payload CMS.

The simplest way to add a robots.txt file is by creating it in your app directory. In Next.js, this is done programmatically to ensure dynamic control.
Inside your app folder, create a new robots.ts file.
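Next.js treats `robots.ts` as a special metadata file and expects it at the root of the app directory, so the layout looks roughly like this (a `src/` prefix is fine if your project uses one):

```txt
app/
└── robots.ts   // generates and serves /robots.txt
```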
Inside `robots.ts`, import `MetadataRoute` from Next.js.
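A minimal sketch of that import (the type-only form is optional; a plain import works too):

```ts
// app/robots.ts
import type { MetadataRoute } from 'next'
```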
Now, export a default function called `robots` and set its return type.
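A sketch of what that function could look like, using the values explained in the list below (the sitemap URL is a placeholder for your real domain):

```ts
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',     // apply the rules to all crawlers
      allow: '/',         // allow everything by default
      disallow: '/admin', // keep the Payload admin panel out of search results
    },
    sitemap: 'https://yourwebsite.com/sitemap.xml', // placeholder domain
  }
}
```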
- `userAgent: '*'` → Applies rules to all search engines.
- `allow: '/'` → Allows indexing for all pages.
- `disallow: '/admin'` → Blocks crawlers from indexing the admin panel.
- `sitemap: 'https://yourwebsite.com/sitemap.xml'` → Informs crawlers where to find your sitemap.

Run your Next.js app and visit: http://localhost:3000/robots.txt
You should see the following:
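(The exact formatting may vary slightly between Next.js versions, but it should look roughly like this.)

```txt
User-Agent: *
Allow: /
Disallow: /admin

Sitemap: https://yourwebsite.com/sitemap.xml
```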
Your robots.txt file is now live! Next, let’s look at advanced rules.
If you want to be more granular in the permissions that you give to web crawlers, you can modify the rules section for different search engines.
Instead of using a single rule, update the function to return an array of rules:
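A sketch of that change, with Googlebot, Applebot, and Bingbot used purely as example user agents (substitute whichever crawlers you actually want to target):

```ts
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: 'Googlebot', // example: a dedicated rule for one crawler
        allow: '/',
        disallow: '/admin',
      },
      {
        userAgent: ['Applebot', 'Bingbot'], // example: one rule shared by several crawlers
        allow: '/',
        disallow: '/admin',
      },
    ],
    sitemap: 'https://yourwebsite.com/sitemap.xml', // placeholder domain
  }
}
```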
Each rule still blocks its crawler from indexing /admin while allowing the rest of the site.

Visit: http://localhost:3000/robots.txt
You should see the following:
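(Assuming the example crawlers above; the names and formatting will follow whatever rules you defined.)

```txt
User-Agent: Googlebot
Allow: /
Disallow: /admin

User-Agent: Applebot
User-Agent: Bingbot
Allow: /
Disallow: /admin

Sitemap: https://yourwebsite.com/sitemap.xml
```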
Your robots.txt file now handles multiple crawlers dynamically!
A robots.txt file is a simple but powerful tool for SEO. Whether you're allowing, blocking, or customizing rules for different search engines, Next.js makes it easy to manage dynamically.