This GPT is really easy to use, but you will have to put in a little bit of effort to create the data it requires.
You need to feed it a CSV file in this format:
- Column A: The URLs of Website 1 (your site).
- Column B: The title tags or H1 headings for the URLs of Website 1.
- Column C: The URLs of Website 2 (competitor site).
- Column D: The title tags or H1 headings for the URLs of Website 2.
Pretty simple. Probably the easiest way to gather this data is to use Screaming Frog to crawl the sites.
For the example I’m going to show you, I used Screaming Frog to crawl the blog of ShipHero (Website 1) and the blog of ShipMonk (Website 2).
I then exported the URLs and Titles to a Google Sheet and downloaded that file as a CSV.
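If you'd rather script the merge step instead of doing it in a spreadsheet, here's a minimal sketch. It assumes each export has "Address" and "Title 1" columns, which is how Screaming Frog usually labels them, but check the headers in your own file. The function name and column labels are my own, not anything this GPT requires.

```python
import csv
from itertools import zip_longest

def build_gap_csv(site1_export, site2_export, out_path):
    """Combine two URL/title exports into the 4-column CSV the GPT expects
    (A: URLs of Website 1, B: their titles, C: URLs of Website 2, D: their titles)."""
    def read_rows(path):
        with open(path, newline="", encoding="utf-8") as f:
            # Assumes "Address" and "Title 1" headers, the usual
            # Screaming Frog export labels -- adjust if yours differ.
            return [(row["Address"], row.get("Title 1", ""))
                    for row in csv.DictReader(f)]

    rows1, rows2 = read_rows(site1_export), read_rows(site2_export)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["URL 1", "Title 1", "URL 2", "Title 2"])
        # The two sites rarely have the same page count, so pad the shorter side.
        for r1, r2 in zip_longest(rows1, rows2, fillvalue=("", "")):
            writer.writerow([r1[0], r1[1], r2[0], r2[1]])
```

The padding matters: the GPT reads the two sites as independent columns, so blank cells at the bottom of the shorter side are fine.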
If you are unsure how to crawl just the blog of a website with Screaming Frog, there are plenty of tutorials out there to show you. It’s really easy.
If a site does not use /blog or something like /news for its content and instead stupidly sticks everything at the root domain level, like website.com/article-title-here, creating the file takes a lot more time because you can't just target a specific directory in your crawl.
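For those root-level sites, one workaround is to filter the full crawl export with a heuristic. This is purely an assumption of mine, not part of the GPT: hyphenated root-level slugs usually mean articles, while one-word paths tend to be core pages like /about or /pricing. Tune the pattern for the site you're dealing with.

```python
import re

# Heuristic: a single root-level path segment made of two or more
# hyphenated words (website.com/article-title-here) looks like an article;
# one-word paths (/about) and deeper paths (/shop/item) do not match.
ARTICLE_PATTERN = re.compile(r"^https?://[^/]+/[a-z0-9]+(?:-[a-z0-9]+)+/?$")

def looks_like_article(url: str) -> bool:
    return bool(ARTICLE_PATTERN.match(url.lower()))
```

Run your exported URL list through this before building the CSV and you'll cut most of the non-content pages, though you should still eyeball the result.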
Then all you need to do is follow the instructions provided by this GPT.
This performs only a basic content gap analysis based on the titles and URLs provided. If Screaming Frog does not find a title for a page, the GPT will parse the URL and create a title from it.
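The GPT's exact fallback logic isn't documented, but slug-to-title conversion is roughly this, sketched here so you know what to expect for title-less pages:

```python
from urllib.parse import urlparse

def title_from_url(url: str) -> str:
    """Fallback title: take the last path segment and turn the slug
    into readable words -- approximately what happens when a crawled
    page has no title tag."""
    path = urlparse(url).path.rstrip("/")
    slug = path.rsplit("/", 1)[-1]
    # Drop a trailing file extension like .html if present.
    slug = slug.rsplit(".", 1)[0] if "." in slug else slug
    return slug.replace("-", " ").replace("_", " ").title()
```

So a URL like /blog/how-to-ship-faster/ becomes "How To Ship Faster" in the analysis, which is usually close enough for clustering purposes.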
It does not read the content of each URL. There is no reliable way to make ChatGPT do that, and even if there were, I'm not sure it could accurately store and use all of that content for larger sites.
I do have another version where you can have Screaming Frog use ChatGPT to create a quick summary of each page and the GPT will use those summaries for its analysis. Crawling this way takes substantially more time.
On the surface, that approach makes a lot more sense, but so far I haven't been able to tweak it enough that the added crawl time gives substantially better results than just using the titles to understand what each page is about.
If I can improve it enough to make it worthwhile, I’ll share that one at another time.
What this GPT does:
- Based on the titles and URLs, it organizes the content of each site into topic clusters.
- It compares the clusters to identify potential topic gaps missing from Website 1.
  - A gap could be a topic that is not covered at all.
  - It could also be an area that is not covered in as much detail.
- It will assign a priority for each topic gap of Low, Medium, or High.
What this GPT does not do:
- It does not visit and read each URL.
  - There is no reliable way to get ChatGPT to do this at this time.
  - Even if it could, for larger sites I'm not sure it could really fit it all into memory and do an accurate analysis.
- Because it cannot read the content, it’s not identifying topic gaps within existing content.
  - In other words, you and your competitor may have a similar piece of content, but yours may lack significant depth. This GPT has no way to identify those cases.
After the GPT runs and does its analysis, always ask it if there are additional content gaps. It will often provide more.
Unfortunately, when I try to add commands to have it provide more in the initial output, the analysis always gets worse.
Watch the video below: