I've got a number of older sites running Keystone v4... Lately I've been wondering if it's possible to migrate these over to Payload. Seems doable, but does anybody have any insight into how challenging/possible this would be, or any ideas, pointers, or experience doing something similar?
I haven't used Keystone for quite some time, but really what this comes down to is massaging the data from Keystone into the right shape to put into Payload. That mapping will determine how difficult a task this is. You'd use the Payload local API for this inside of a script.
Here is some pseudo-code for you:
import payload from 'payload';
import 'dotenv/config';

const { PAYLOAD_SECRET, MONGODB_URI } = process.env;

const migratePosts = async () => {
  // Initialize the Payload local API (no HTTP server needed for a one-off script)
  await payload.init({
    secret: PAYLOAD_SECRET,
    mongoURL: MONGODB_URI,
    local: true,
  });

  // Retrieve 'Posts' from Keystone and modify the data accordingly
  const postsToMigrate = []; // pull these from Keystone however you like
  const adjustedPostData = adjust(postsToMigrate); // your transform function

  // Create the new 'Posts' in Payload sequentially so failures surface per-document
  for (const post of adjustedPostData) {
    await payload.create({ collection: 'posts', data: post });
  }
};

migratePosts();
Also, here is a more advanced example migrating CSV data, which might be good to look at:
https://github.com/payloadcms/payload/discussions/1660#discussioncomment-4485387

Thanks for your feedback @denolfe … I've been working on this and it's coming along great. I actually just started running Payload and Keystone in tandem on different ports. Then in Keystone I just made myself a temporary endpoint that transforms the data into the format Payload expects and hits Payload's REST API. Mostly straightforward, but the trickiest part is the media files, since those are handled drastically differently (better) in Payload. But overall I've got all of my tests working; I'm just in the grunt-work phase now. Finally, my front end should drop in pretty painlessly, just tweaking all the queries over to Payload's local API! 👍
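In case it helps anyone doing the same, here's a rough sketch of that temporary Keystone endpoint. Everything in it is an assumption on my part: the route path, the 'Post' list name, the field mapping, the Payload port, and the use of node-fetch, so adjust to your own setup (and note Payload's REST API will reject the create unless your access rules or an auth header allow it):

// Keystone v4 routes file (temporary; delete after the migration)
const keystone = require('keystone');
const fetch = require('node-fetch'); // assumption: node-fetch is installed (or swap in axios)

// Map a Keystone post document onto the shape the Payload collection expects.
// Every field name here is a placeholder; match them to your own schemas.
const transformPost = (doc) => ({
  title: doc.title,
  slug: doc.slug,
  content: doc.content && doc.content.extended,
});

exports = module.exports = function (app) {
  app.get('/migrate-posts', async (req, res) => {
    // Pull every post straight out of Keystone's Mongoose model
    const posts = await keystone.list('Post').model.find().lean().exec();

    let migrated = 0;
    for (const doc of posts) {
      // Payload running in tandem on another port; POST /api/{collection-slug}
      // creates a document via its REST API
      const response = await fetch('http://localhost:3001/api/posts', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(transformPost(doc)),
      });
      if (response.ok) {
        migrated += 1;
      } else {
        console.error(`Failed to migrate "${doc.title}":`, await response.text());
      }
    }

    res.json({ total: posts.length, migrated });
  });
};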
Another side note for the media files: the original app is using S3 storage. I found the Payload plugin for S3, which is AMAZING, but I had to trick it to port over my existing files/links. I did this by creating a basic, "not upload enabled" collection that mirrors the Payload media collection schema using plain text and number fields, because Payload would error if you hit the API without a file to upload. Once I ran my script to inject all the existing docs, I swapped the collection slugs between the temporary "not upload" collection and the real upload collection, and boom! Got all the S3 files ready to go!
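For anyone who wants to try the same trick, a minimal sketch of what that temporary collection could look like. The field list is an assumption mirroring the standard fields Payload stores for uploads; match it to whatever your real media collection actually stores:

// Temporary "not upload enabled" collection (remove once the swap is done).
// It mirrors the fields Payload stores for an upload collection using plain
// text/number fields, so existing S3 file metadata can be injected via the
// API without Payload demanding an actual file.
const MediaStaging = {
  slug: 'media-staging', // later swapped with the real media collection's slug
  access: {
    // assumption: open create access so the injection script can run;
    // lock this down or delete the collection when the migration is finished
    create: () => true,
  },
  fields: [
    { name: 'filename', type: 'text' },
    { name: 'mimeType', type: 'text' },
    { name: 'filesize', type: 'number' },
    { name: 'width', type: 'number' },
    { name: 'height', type: 'number' },
    { name: 'url', type: 'text' },
    // If the real media collection defines image sizes, mirror those here too
  ],
};

module.exports = MediaStaging;

As I understand it, the swap works because Payload derives the underlying Mongo collection name from the slug, so once the slugs are exchanged the real upload-enabled collection reads the injected docs.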
Nice, this script might be somewhat relevant as well:
https://github.com/payloadcms/payload/discussions/1834

Yes! I needed this, thanks so much 🙏
I get this error running that script:
Unable to find documents with payload
APIError: The collection with slug media can't be found.
Are you trying to regenerate the media sizes like that script is designed for? I linked to that script as an example of how to interact w/ the Payload local API via script.
Yeah, I'm just trying to regenerate all my images on the server
@denolfe I resolved my issue and added a GitHub comment explaining how I resolved it here:
https://github.com/payloadcms/payload/discussions/1834#discussioncomment-4989249
@christopher.nowlan Great, thanks for doing that 👍
No worries. I am happy to assist