Read this if you have an infinite scroll feed on your website with poor SEO performance


Tags: tutorial, how-to, next-js, react-js, javascript, guide
sushrit_lawliet · 5 months ago

This article will detail how infinite scrolling feeds (like the kind on Twitter, Instagram, etc.) are bad for your SEO if not accounted for properly. I will walk through my thought process and the considerations that came up as I worked to fix this on my startup, skillShack(⚡️);

We will then explore possible solutions that you can use with a framework like Next.js.

I first came across this when I was checking my website’s coverage on Google Search Console, to see how many of my pages had been indexed and whether the SEO optimisations I had made were helping in any way.

I soon discovered that almost none of my pages were indexed beyond the ones linked to in the navigation menus, plus a few that I had linked to on social media and in other blogs posted on Medium.

After some digging, I realised that it was because of how infinite scrolling lists behaved. And my website is heavily reliant on them.

A short one-line pitch before we proceed (would appreciate feedback on this, so I pitch it in every blog)

skillShack(⚡); is a community for software professionals looking to share the projects they are working on and get feedback. From side projects to startups!

So as you can see, my website relies on feeds to show these projects, blogs, and challenges to users. All my content was effectively locked behind 3 main feeds.

Hoping that one post blows up and that, slowly over time, the posts suggested below it get the rest of my website covered is not a wise strategy. So I went digging, and here are my learnings and my solution!

So what are infinite scrolling feeds and why are they used?


A lot of websites that host user-generated content, like my startup skillShack(⚡️);, make use of infinite scrolling lists/feeds.

When the page first loads, about 10 or so posts are pre-fetched. Once you scroll to the last one, another 10 are fetched, and so on. This has basically become the standard way of serving user content on almost any platform you look at these days. Everyone in FAANG (MANGA now, I guess...) except Apple has at least one platform with infinite scrolling feeds.

On a platform like YouTube, which hosts a practically endless number of videos, displaying a ranked page of recommendations that runs to the full length of the catalogue is not wise (probably not even possible over most standard network connections, let alone rendering it on your device).

This is the problem that infinite scrolling feeds solve. They scale almost perfectly to any device and network conditions: you simply tune the number of posts to pre-fetch initially, and to fetch on each subsequent request, to match the user’s device and network.
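That tuning boils down to simple offset/limit math. Here is a minimal sketch of the batching logic (the function name and the default page size of 10 are just illustrative assumptions):

```javascript
// Compute the query parameters for the next batch of posts.
// loadedCount: how many posts the client has already rendered.
// pageSize: tunable per device/network, as described above.
function nextBatchQuery(loadedCount, pageSize = 10) {
  return { offset: loadedCount, limit: pageSize };
}

// After the initial 10 posts, the next fetch asks for posts 10..19:
nextBatchQuery(10); // { offset: 10, limit: 10 }
```

A slower connection or smaller device could pass a smaller `pageSize`, which is exactly the knob the paragraph above describes.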

Now, what makes infinite scrolling feeds bad for SEO?

To answer this, we must first understand how search engine crawlers like Google’s (Googlebot) crawl and index your page. It happens in the following stages:

  • The crawler enters your website from a pre-discovered URL (this could be from another page that links to it, or because you manually requested indexing through something like Search Console).
  • Your CDN, backend, or whatever it is that you use to serve your web pages sends a mix of HTML/CSS/JavaScript (in the case of React, JavaScript bundles) to the requesting device, which in this case is the bot.
  • Web crawlers have been capable of executing JavaScript for a long time now, so they can indeed wait for requests to complete and for data to be fetched (within reasonable limits, which vary from bot to bot; some even have execution time limits).
  • Once the actual DOM (all your HTML, parsed by the browser and ready to render) is ready, the bot simply scans it for links, which are usually in anchor tags:
<a href="https://skillshack.dev/">
  skillShack(⚡);
</a>

Once this is done, the crawler moves on to the next URL from this list, and the steps repeat until no more links can be found.
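To make the “scan for links” step concrete, here is a toy version of what the crawler does once the DOM is ready. Real crawlers walk the parsed DOM properly; this regex-based sketch is only for illustration:

```javascript
// Naive link extraction: collect the href of every anchor tag in the markup.
// A real crawler traverses the parsed DOM instead of using a regex.
function extractLinks(html) {
  const links = [];
  const anchorRe = /<a\s[^>]*href="([^"]+)"[^>]*>/g;
  let match;
  while ((match = anchorRe.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

const html = '<a href="https://skillshack.dev/">skillShack(⚡);</a>';
extractLinks(html); // [ 'https://skillshack.dev/' ]
```

Every URL this returns becomes a new crawl candidate, which is exactly why content that never appears as an anchor tag never gets discovered.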

All this sounds good, I mean even fetch requests are carried out. So what’s the catch?

Well, the problem lies in how infinite scrolling feeds work under the hood.

How infinite scrolling feeds work

Usually, when the last element enters the visible part of the screen, the next batch of posts is fetched.
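The trigger behind this is a visibility check. Modern implementations usually rely on `IntersectionObserver`, but the underlying geometry can be sketched as a pure function (the 200px threshold is an arbitrary assumption):

```javascript
// True when the user has scrolled close enough to the bottom that the
// next batch should be fetched. threshold: how many pixels of remaining
// content still count as "near the end".
function shouldFetchNext(scrollTop, viewportHeight, contentHeight, threshold = 200) {
  return scrollTop + viewportHeight >= contentHeight - threshold;
}

// A crawler never scrolls, so scrollTop stays at 0 and, on any long page,
// the fetch never fires:
shouldFetchNext(0, 800, 5000);    // false
shouldFetchNext(4300, 800, 5000); // true
```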

Now here’s the problem. Crawlers don’t scroll. They do not trigger scroll events. Once the page loads, it just looks at the DOM. No events are triggered as such.

So at best, the first 10 posts on your page get crawled, and if you have a good layout where each post’s dedicated page shows a few more suggestions at the end, those will be indexed too. But this isn’t reliable.

So how do we fix this?

Now, this solution will require some refactoring of your code to work well with crawlers, so be prepared for that. However, the fix itself is quite simple.

In frameworks like Next.js you have the option to use SSR (Server-Side Rendering) to render pages on each request from the user. Usually, you pre-populate these pages with some data, which is rehydrated on the client side.

You can also check incoming request headers when handling a request in Next.js. Go through the headers and look for the User-Agent header, which has the following format:

User-Agent: <product> / <product-version> <comment>

Use a library like isbot to check whether the given User-Agent belongs to a bot. Then, depending on the kind of User-Agent, serve pages as follows:

  • If the Client is a bot: Serve a paginated version of the page with Next/Previous links to go to a page with the next/previous 10 posts.

  • If the Client is not a bot: Serve your infinite scrolling feed.
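What matters in the bot-facing version is that Next/Previous are plain anchor links the crawler can follow. Here is a sketch of generating those hrefs (the `/feed` route and `page` query parameter are made-up names for illustration):

```javascript
// Build crawlable prev/next URLs for the paginated, bot-facing feed.
// A link is null when it should not be rendered (first or last page).
function paginationLinks(page, totalPages, basePath = '/feed') {
  return {
    prev: page > 1 ? `${basePath}?page=${page - 1}` : null,
    next: page < totalPages ? `${basePath}?page=${page + 1}` : null,
  };
}

paginationLinks(1, 5); // { prev: null, next: '/feed?page=2' }
paginationLinks(3, 5); // { prev: '/feed?page=2', next: '/feed?page=4' }
```

These hrefs go into regular anchor tags in the bot branch, so the crawler can hop from page to page just like it follows any other link.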

And voila! This way the bot can crawl through the paginated version of the feed and discover all your posts!

Some sample code

import isbot from 'isbot';

export async function getServerSideProps(context) {
    // Fetch all relevant data for the page (replace with your own fetching logic)
    const data = [];

    // Check the User-Agent header to see if the request comes from a crawler
    const isBot = isbot(context.req.headers['user-agent']);

    return {
        props: {
            data, // Fetched data
            isBot,
        },
    };
}

export default function Page(props) {
    if (props.isBot) {
        return (
            <div>
                <h1>Bot</h1>
                <p>
                    Paginated Version of The Feed
                </p>
            </div>
        );
    }

    return (
        <div>
            <h1>Not Bot</h1>
            <p>
                Infinite Scrolling Feed
            </p>
        </div>
    );
}

Now you can confidently show off your website anywhere, and as more pages link to your site, search engines get more chances to crawl through it and rank your results higher!

That’s all from my side! Would love to hear your thoughts in the comments and hopefully see you on skillShack(⚡️); !

Follow me on Twitter if you want to see tweets detailing my journey building my startup and insights into topics like these that may not become full-fledged articles!

You can read about my journey so far here.


Designed by Braggi Solutions © Braggi Solutions 2021. All rights reserved.