Month: May 2022

How to Improve Google Page Experience for Better Ranking in 2022

If you want to improve Google page experience, you need to know what it is all about. In essence, Google is telling you, whether you are a user or a webmaster, to put the user first. If you put the user first and work hard to give them the best possible experience, Google will rank you higher in the long run, because a happy user pleases Google. And if users are happy while searching, what will they do? They will keep coming back and Google more, which helps Google generate more revenue.

Table of contents:

    • What is Google Page Experience and Why does it matter?
    • What are Web Vitals?
    • How do Core Web Vitals affect the Website?
    • Why Is Google Page Experience Important?
    • Tips to Improve Google Page Experience

Google follows its users, not individual websites: the algorithm has become increasingly user-centric. So, you need to focus on both SEO (search engine optimization) and UX (user experience) to give your readers the best possible experience and thereby improve your rankings and your site's performance.

So, first off let us understand Google’s latest algorithm update – “Google Page Experience”.

 

What is Google Page Experience and Why does it matter? 

 


Google Page Experience is Google's latest attempt to improve search for users. The page experience update started rolling out on 15th June 2021, and it is a new input to Google's search ranking. It is a set of signals that measure how users perceive the experience of interacting with a web page on desktop and mobile devices. Google's new algorithm update combines the Core Web Vitals with previous user-experience-related search signals to measure page experience.

What goes into Page Experience?

There are a few core page experience signals that Google has identified as part of this new update:

1. Boolean checks
    • Mobile-friendliness
    • Using HTTPS
    • No intrusive interstitials
    • Safe browsing

 

2. Core web vitals
    • Largest Contentful Paint (LCP)
    • First Input Delay (FID)
    • Cumulative Layout Shift (CLS)

 

All of these factors help you identify the issues that keep online readers from accessing the wealth of valuable information on the web. Google's focus on these page experience metrics aligns with recent search marketing trends that have moved beyond traditional on-page SEO tactics such as keyword density and page metadata.

Advanced technical SEO now prioritizes improving a website's user experience through code-level enhancements. User experience plays a vital role in search ranking, and the Google page experience update gives you a roadmap to follow.

 

What are Web Vitals?

 


Web Vitals is an initiative by Google to provide unified guidance on the quality signals that are essential to delivering a better user experience.

Core Web Vitals are the subset of Web Vitals that apply to all pages. They are the metrics that help webmasters, marketers, and site owners keep track of their web pages and optimize their websites to deliver a great user experience. These Core Web Vitals measure a website's ability to offer users a smooth browsing experience, with optimal speed, visual stability, and responsiveness across computers and mobile devices such as phones and tablets. The metrics that make up Web Vitals will evolve over time.

 

How do Core Web Vitals affect the website?

Here are a few factors that affect the Core Web Vitals and can thereby hurt your page experience:


  • Page loading time: If your site takes a long time to load a page, your users will likely leave that page right away. You need to improve your page speed to provide a better user experience.

 

  • Broken links: Links that fail to land on a page, or that return a 404 error, are called “broken links” or “link rot”. Having such dead links on your pages can damage your website's ranking.

 

  • Intrusive interstitials: Intrusive popups keep a user from having smooth access to your web page. Popups that cover the main content of the page make that content less accessible to users. And they are really annoying!

 

  • User interface: It is very important to have a mobile-friendly website, as Google favors mobile-friendly sites. If your website is unresponsive, neglects security, or is not optimized for SEO, staying indifferent to these trends may earn your site a reputation for bad design. You need to focus on web design and web development to improve your site's performance on computers as well as mobile devices like smartphones and tablets.

 

  • Security and safety: Google promotes internet safety and security; safe browsing is a top priority. Having a website that Google Chrome labels as “not secure” harms its trustworthiness. That is why an SSL certificate is important: it helps reduce fraud and protect user privacy.

 

Core Web Vitals consist of three metrics that measure the overall page experience of a website.

1. Largest contentful paint (LCP)

LCP is the first of the Core Web Vitals. It measures how long the largest content element on the page takes to load; this length of time is the “largest contentful paint”. An LCP of 2.5 seconds or less is considered good. If your site takes more than 4 seconds, you are in trouble.

 


For example, suppose you open an article on a website to read. The LCP for that page would occur when the article's main featured image loads, because images are heavier than text; lightweight page elements and text typically load first.

 

2. First input delay (FID)

FID measures the time a site takes to respond to a user's first input, such as clicking or tapping a button or a link. Google wants every website to become interactive and responsive as quickly as possible once a user opens it. For example, if you click an interactive element such as a call-to-action button, the time the browser takes to register your click and respond is the FID.

 


Generally, the response time should be less than 100 ms. That's a tenth of a second, about the blink of an eye. Google's view is that the moment a user is ready to act, the website should be ready to respond. A score under 100 ms is considered good or passing.

 

3. Cumulative layout shift (CLS)

CLS is the last metric in the Core Web Vitals. It assesses the visual stability of a page. For example, if you are trying to read content and the page moves, so you have to find your place in the article again, or if you are trying to tap a button and the page shifts unexpectedly, making you click the wrong element, you have been the victim of a bad CLS. That page layout shift is what “cumulative layout shift” captures: the total unexpected change in the layout of a web page as it loads. A score under 0.1 is considered good or passing.
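The three metrics and their cut-offs can be summarized in a small helper. A minimal Python sketch: the “good” thresholds (LCP 2.5 s, FID 100 ms, CLS 0.1) are the ones discussed above, while the “poor” thresholds (LCP 4 s, FID 300 ms, CLS 0.25) are Google's published cut-offs for the “needs improvement” band.

```python
# Classify Core Web Vitals values against Google's published thresholds.
# Format: metric -> (good cut-off, poor cut-off), in the metric's own unit.
THRESHOLDS = {
    "lcp": (2.5, 4.0),    # seconds
    "fid": (100, 300),    # milliseconds
    "cls": (0.1, 0.25),   # unitless layout-shift score
}

def rate(metric, value):
    """Return 'good', 'needs improvement', or 'poor' for a metric value."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("lcp", 2.1))  # -> good
print(rate("fid", 180))  # -> needs improvement
print(rate("cls", 0.3))  # -> poor
```

A page passes the Core Web Vitals assessment only when all three metrics rate “good”.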

 


According to Google research, having a poor Core Web Vitals score and page experience:

  • Reduces the conversion rate: There is a strong relationship between conversions and a good page experience. Pages that load in around 2.4 seconds have a noticeably better conversion rate.
  • Increases the bounce rate: Longer page loading times have a major impact on the bounce rate.
  • Generates less revenue: Speedy rendering times generate above-average revenue, and vice versa.

Websites with a bad user experience find it difficult to rank higher on Google and drive traffic from SERPs. Optimizing a website for the latest update, alongside SEO, has become a crucial part of marketing strategies.

 

Tips to Improve Google Page Experience 

If you want your website to be rewarded rather than penalized by the rollout of Google's “page experience” update, here are a few tips to improve your page experience and provide the best possible UX.

1. Use a responsive web design

If you are not using a responsive web design, now is the time to upgrade your website.

2. Upgrade to HTTPS

Google wants to provide its users with a secure and safe browsing environment. Getting an SSL certificate through your domain registrar is inexpensive and easy. Google added the HTTPS protocol as a page experience signal in this rollout, so if you want to achieve “good page experience” status in Google search results, your pages must be served over HTTPS.

3. Increase the security of your website

Work hard to achieve better standards for user privacy, fraud reduction, and overall safety.

4. Remove popups

Remove annoying elements and intrusive interstitials that block users' access to your content.

5. Cleanup backend code

Several improvements can be made to the backend code to shorten page loading time and provide a better user experience. You can remove unused JavaScript, serve assets in modern file formats, and replace large JavaScript libraries with lighter local CSS and JavaScript for building user interfaces.

6. Use a good caching plugin

A good caching plugin can help you store your website’s information so it loads much faster than before for repeat visitors.

 

Conclusion

The Google page experience update is going to evolve significantly along the way. With this initial rollout, Google wants to reward sites that offer a high-quality user experience while demoting sites that provide a poor one. So, optimizing your website for this latest Google update should be a high priority.

Do you need help in improving your website’s page experience?

We have the best search marketing experts who specialize in both web development and search engine optimization. 

Connect with us to set up a free consultation. 


How to Create a Robots.txt File for SEO: Best Guide in 2022

Everybody loves “hacks.”

People keep finding hacks to make life easier. So, today I am going to share a legitimate SEO hack that you can start using right away. 

It is the “robots.txt file”, and it can help you boost your SEO. This teeny-tiny text file is also known as the robots exclusion protocol or standard. A robots.txt file is part of almost every website on the internet, but it rarely gets talked about. It is a piece of SEO machinery designed to work with search engines.

The robots.txt file is one of the best ways to enhance your SEO strategy because:

    • It is easy to implement
    • It consumes little time
    • It does not require any technical experience
    • It improves your SEO

 

You just need access to your website's files. Then follow along with me to see how to create a robots.txt file that search engines will love.

 

What is a robots.txt file?

 


A robots.txt file is a simple text file that webmasters create to instruct web robots (web crawlers) how to crawl pages on their website. The robots.txt file is part of the REP (robots exclusion protocol), a standard that regulates how robots crawl the web, access and index content, and serve that content to users online. The REP also includes meta robots directives for how search engines should treat links, such as “follow” and “nofollow”.

The robots.txt file tells web crawlers which parts of the website they can crawl and which parts they may not access. These crawl instructions are specified by “allowing” or “disallowing” paths for all or specific user agents. The robots.txt file lets you keep specific web pages out of Google, so it plays a big role in SEO.

Search engines regularly check a site’s robots.txt file to see if there are any instructions for web crawling. These instructions are called directives.

 

Why is the robots.txt file important for SEO?

From an SEO point of view, the robots.txt file is very important for your website. With this simple text file, you can keep search engines away from certain web pages on your site, tell their crawlers which pages not to crawl, and guide them to crawl the rest more efficiently.

For example, 

Let’s say Google is about to visit a website. Before it visits the target page, it will check the robots.txt file for instructions.

A robots.txt file is made up of several components. Let's analyze them:

    • Directive – the code of conduct that the user agent follows.
    • User-agent – the name identifying a specific search engine crawler or other program active online. This is the first line of any group. An asterisk (*) matches all crawlers except the AdsBot crawlers, which must be named explicitly.

 

Let’s understand this with three examples:

1. How to block only Googlebot

User-agent: Googlebot

Disallow: /

2. How to block Googlebot and Adsbot

User-agent: Googlebot

User-agent: Adsbot

Disallow: /

3. How to block every crawler except Adsbot

User-agent: *

Disallow: /

    • Disallow – tells search engines not to crawl a particular URL, page, or file. It begins with a “/” character, and if it refers to a directory, it also ends with “/”.
    • Allow – permits search engines to crawl a particular URL or website section. It can override a disallow rule to allow crawling of a page inside a disallowed directory.
    • Crawl-delay – an unofficial directive used to tell web crawlers to slow down their crawling.
    • Sitemap – defines the location of your XML sitemaps for search engines. The sitemap location must be a fully qualified URL, and it is a good way to point crawlers at the content you want Google to crawl.
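Python's standard library can read these directives back, which is a quick way to sanity-check a file. A small sketch — the /private/ path and sitemap URL are made-up examples, and `site_maps()` requires Python 3.8 or newer:

```python
from urllib import robotparser

# Parse an in-memory robots.txt that uses a crawl-delay and a sitemap directive.
# The /private/ path and the sitemap URL are illustrative, not from a real site.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Crawl-delay: 10",
    "Disallow: /private/",
    "Sitemap: https://www.example.com/sitemap.xml",
])

print(rp.crawl_delay("*"))   # -> 10
print(rp.site_maps())        # -> ['https://www.example.com/sitemap.xml']
```

Note that Crawl-delay is the unofficial directive mentioned above: Python parses it, but Googlebot ignores it.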

 

Let’s understand this with the following example,

Say that Google finds this syntax:

 

User-agent: *

Disallow: /

This is the basic format of a robots.txt file.

Let’s understand the anatomy of the robots.txt file – 

    • The user-agent indicates for which search engines the directives are meant.
    • “Disallow” directive in robots.txt file indicates that the content is not accessible to the user-agent.
    • The asterisk (*) after “user-agent” means that the robots.txt file applies to all the web crawlers that visit the website.
    • The slash (/) after “disallow” tells the crawlers not to visit any pages on the website.
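You can check how those two lines behave with Python's built-in robots.txt parser. A quick sketch (example.com is just a placeholder domain), contrasting the blocking file above with an empty Disallow value:

```python
from urllib import robotparser

# The two-line file from above: every crawler is told not to visit any page.
blocked = robotparser.RobotFileParser()
blocked.parse(["User-agent: *", "Disallow: /"])

# An empty Disallow value means the opposite: everything may be crawled.
open_site = robotparser.RobotFileParser()
open_site.parse(["User-agent: *", "Disallow:"])

print(blocked.can_fetch("Googlebot", "https://www.example.com/blog/post"))    # -> False
print(open_site.can_fetch("Googlebot", "https://www.example.com/blog/post"))  # -> True
```

This mirrors what a real crawler does before fetching a page: look up the matching group, then test the URL path against its rules.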

 

But why would anyone want to stop web robots from crawling their website? After all, everyone wants search engines to crawl their site easily so that it ranks higher.

This is where you can use the SEO hack. 

If you have a lot of pages on your website, Google will crawl each of them, but a huge number of pages takes Googlebot a while to get through. That delay can hurt your website's ranking, because Google's search engine bots have a crawl budget.

 

What is a crawl budget?

The amount of time and resources that Google spends crawling a website is called the site's “crawl budget”. The general theory of web crawling says that the web is effectively infinite, exceeding Google's ability to explore and index every URL available online. As a result, there are limits to how much time Google's web crawlers can spend on any single website. Web crawling gives your new website a chance to appear in the SERPs, but you don't get unlimited crawling from Google. The crawl budget guides Google's crawlers in how often to crawl, which pages to scan, and how much server load to impose. Heavy activity from web crawlers and visitors can overload your website.

To keep your website running smoothly, you can adjust web crawling through the crawl capacity limit and crawl demand.


The crawl budget breaks down into two parts:

1. Crawl capacity limit/crawl rate limit

The crawl rate limit monitors fetching on a website so that loading speed doesn't suffer and errors don't surge. Google's web crawlers want to crawl your site without overloading your server. The crawl capacity limit is calculated as the maximum number of concurrent connections that Google's bots may use to crawl a site, plus the delay between fetches.

The crawl capacity limit varies depending on:

    • Crawl health 

If your website responds quickly for a while, the crawl rate limit goes up, which means more connections can be used for crawling. If the website slows down, the crawl rate limit goes down and Google's bots crawl less.

    • Limit set by the website owner in the Google search console

A website owner can reduce the web crawling of their site.

    • Google’s crawling limit 

Google has a lot of machines, but they are still finite, so Google has to make choices about where to spend its crawling resources.

 

2. Crawl demand

This is the level of interest Google and its users have in your site. If you do not have a huge following yet, Google's web crawlers won't crawl your site as often as highly popular ones.

Here are the three main factors that play an important role in determining crawl demand:

    • Popularity

Popular URLs on the Internet tend to be crawled more often to keep them fresh in the index.

    • Staleness

Google's systems want to recrawl documents frequently enough to pick up any changes.

    • Perceived inventory

Without any guidance, Google's web crawlers will try to crawl almost every URL on your website. If some URLs are duplicates, or you don't want them crawled for some other reason, crawling them wastes a lot of Google's time on your site. This is the factor you can control most easily.

 

Additionally, site-wide events like site moves may boost the crawl demand to re-index the content under new URLs.

Crawl capacity and crawl demand together define the site's crawl budget.

In simple words, the crawl budget is the “number of URLs Google search engine bots can and wants to crawl.”

Now that you know all about the website’s crawl budget management, let’s come back to the robots.txt file.

If you ask Google's search engine bots to crawl only the useful content on your website, they will crawl and index your site based on that content alone.

In other words, you might not want to waste your crawl budget on useless or duplicate content on your website.

By using the robots.txt file the right way, you can prevent the wastage of your crawling budget. You can ask Google to use your website’s crawl budget wisely. That’s why the robots.txt file is so important for SEO.

 

How to find the robots.txt file on your website?

If you think finding the robots.txt file on your website is a tricky job, you are wrong. It is super easy.

This method works for any website. All you have to do is type the URL of the website into the browser's address bar and add /robots.txt at the end.

One of three situations will happen:

1. If you have a robots.txt file, you will get the file just by typing www.example.com/robots.txt, with example.com replaced by your domain name.

For instance, for www.ecsion.com/robots.txt, I got the following robots.txt file:

User-agent: *

Disallow: /wp-admin/

Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.ecsion.com/sitemap_index.xml

The robots.txt file lives in your website's root directory. Once you locate your robots.txt file there, you can open it for editing: erase all the text, but keep the file.

2. If you lack a robots.txt file, you will get an empty page. In that case, you will have to create a new robots.txt file from scratch. Use only a plain text editor, such as Notepad on Windows or TextEdit on Mac; using Microsoft Word might insert additional code into the file.

3. If you get a 404 error, you will want to take a moment to find out why your robots.txt file is not being served and fix the error.

Note: if you are using WordPress and you don't find a robots.txt file in the site's root directory, WordPress serves a virtual robots.txt file. If this happens to you, create a physical robots.txt file of your own to replace it.
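The lookup rule above — take the site's origin and append /robots.txt, ignoring the page path — can be expressed in a couple of lines of Python (example.com is just a placeholder domain):

```python
from urllib.parse import urlsplit

def robots_txt_url(page_url):
    """Return the robots.txt URL for the site that serves page_url."""
    parts = urlsplit(page_url)
    # Keep only the scheme and host; robots.txt always sits at the root.
    return f"{parts.scheme}://{parts.netloc}/robots.txt"

print(robots_txt_url("https://www.example.com/blog/some-post"))
# -> https://www.example.com/robots.txt
```

Whatever page of a site you start from, the same root-level file governs it.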

 

How to create a robots.txt file?

With a robots.txt file, you can control which content or files web crawlers can access on your website. The robots.txt file lives in the website's root directory: for www.ecsion.com, it lives at www.ecsion.com/robots.txt. You can create it in a simple text editor like Notepad or TextEdit; if you already have a robots.txt file, make sure you have deleted its text, but not the file itself. Robots.txt is a plain text file that follows the REP (robots exclusion protocol). A robots.txt file contains one or more rules; each rule blocks or allows access for a given web robot to a specified file path on the site. All files may be crawled unless you specify otherwise.

Following is a simple robots.txt file with two rules:

1. User-agent: Googlebot

Disallow: /nogooglebot/

2. User-agent: *

Allow: /

Sitemap: https://www.ecsion.com/sitemap.xml  

This is what a simple robots.txt file looks like. 

Let us see, what that robots.txt file means:

  1. The user agent named Googlebot is not allowed to crawl any URL that starts with https://www.ecsion.com/nogooglebot/
  2. All the other agents are allowed to crawl the entire website.
  3. The website’s sitemap is located at https://www.ecsion.com/sitemap.xml  

Creating a robots.txt file involves four steps:

  1. Create a file named robots.txt 
  2. Add instructions to the robots.txt file 
  3. Upload the text file to your website
  4. Test the robots.txt file 

Create a file named robots.txt

Using the robots.txt file, you can control which files or URLs web crawlers can access. The robots.txt file lives in the site's root directory. To create one, use a simple plain text editor like Notepad or TextEdit; word processors such as Microsoft Word can add unexpected characters or formatting that cause problems for web crawlers. Ensure that you save the file with UTF-8 encoding if prompted during the save dialog.

Robots.txt rules and format:

    • The file must be named robots.txt.
    • A site can have only one robots.txt file.
    • The robots.txt file must be located in the website's root directory. For example, to control crawling on all the URLs of https://www.ecsion.com, the robots.txt file must live at https://www.ecsion.com/robots.txt. It can't live in a subdirectory (for example, at https://www.ecsion.com/pages/robots.txt).
    • A robots.txt file can apply to subdomains or to non-standard ports; each host needs its own file.
    • A robots.txt file is a UTF-8 encoded text file (which includes ASCII). Google may ignore characters that are not part of UTF-8, which can render robots.txt rules invalid.
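If you generate the file programmatically, you can make the UTF-8 encoding explicit rather than relying on an editor's save dialog. A minimal Python sketch — the /staging/ path is hypothetical, and the file is written to a temp directory purely for illustration:

```python
import os
import tempfile

# A minimal rule set; /staging/ is a made-up path for this example.
rules = "User-agent: *\nDisallow: /staging/\n"

# Write the file with explicit UTF-8 encoding, as the format requires.
path = os.path.join(tempfile.gettempdir(), "robots.txt")
with open(path, "w", encoding="utf-8") as fh:
    fh.write(rules)

# Read it back to confirm the bytes round-trip cleanly.
with open(path, encoding="utf-8") as fh:
    print(fh.read())
```

On a real site, the same content would be written to the web root so it is served at /robots.txt.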

 

Add instructions to the robots.txt file

Instructions are the rules that tell web crawlers which parts of the site they can crawl and which parts they can't. When adding rules to your robots.txt file, keep the following guidelines in mind:

    • A robots.txt file consists of one or more groups.
    • Each group consists of multiple rules or directives, one per line. Each group begins with a User-agent line that defines the target of the group.
    • A group gives the following information:
    • Who the group applies to (the user agent).
    • Which files, URLs, or directories that agent can crawl.
    • Which files, URLs, or directories that agent cannot crawl.
    • Web crawlers process the groups from top to bottom. A user agent can match only one rule set: the first, most specific group that matches that user agent.
    • By default, a user agent can access any URL or file on your website that is not blocked by a disallow rule.
    • Rules are case-sensitive. For example, disallow: /file.asp applies to https://www.example.com/file.asp, but not to https://www.example.com/FILE.asp.
    • The “#” character marks the beginning of a comment.
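Putting these guidelines together, a small robots.txt file with a comment and two groups might look like this (the /staging/ and /downloads/ paths are hypothetical):

```
# Keep all crawlers out of the staging area
User-agent: *
Disallow: /staging/

# Googlebot gets its own, more specific group
User-agent: Googlebot
Disallow: /staging/
Disallow: /downloads/
```

Because a crawler obeys only the first, most specific group that matches it, the Googlebot group repeats the /staging/ rule; rules from the * group do not carry over.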

 

Upload the robots.txt file

After saving your robots.txt file to your computer, you need to make it available to search engine crawlers. How you upload the robots.txt file to your website depends entirely on your server and the site's architecture. Check your hosting company's documentation or get in touch with them directly.

Once you upload the robots.txt file, perform a test to check whether it is publicly accessible or not.

Test the robots.txt file 

For testing your robots.txt markup, open a private browsing window in your web browser and navigate to the location of your robots.txt file. 

For example, https://www.example.com/robots.txt 

If you find the contents of your robots.txt file, you can proceed to test the markup.

There are two ways offered by Google to test the robots.txt markup:

1. The robots.txt Tester in Search Console

This tool can be used for robots.txt files that are already accessible on your website.

2. Google's open-source robots.txt library

This is the same library Google Search uses, and you can use it to test robots.txt files locally on your computer.

Submit the robots.txt file

After you upload and test your robots.txt file, Google's search engine crawlers will automatically find it and start using it. There's nothing more you need to do.

 

Conclusion

We hope this blog has given you insight into why robots.txt files are so important for SEO. So, if you seriously want to improve your SEO, implement this teeny-tiny robots.txt file on your website. Without it, you will lag behind your competitors in the market.

 

 
