How to make a website optimized for SEO


Now that you know what SEO is and the main factors Google takes into account when ranking a website, you need to learn what to do so that your page has a real chance of ranking high in the SERPs.

In this chapter we will look at how to optimize the main ranking factors, as well as the main SEO problems that arise when optimizing a website and their possible solutions.


We will divide this chapter into four large blocks:

  1. Accessibility
  2. Indexability
  3. Content
  4. Meta tags

1. Accessibility

The first step in optimizing a website for SEO is to allow search engines to access our content. That is, you have to check whether the website is visible to search engines and, above all, how they are seeing the page.

For various reasons that we will explain later, it may happen that search engines cannot correctly read a website, which is an indispensable requirement for ranking.

Aspects to consider for good accessibility

  • The robots.txt file
  • The robots meta tag
  • HTTP status codes
  • Sitemap
  • Web structure
  • JavaScript and CSS
  • Web speed

The robots.txt file

The robots.txt file is used to prevent search engines from accessing and indexing certain parts of a website. It is very useful for keeping pages you don't want shown out of Google's search results. For example, in WordPress, to keep crawlers out of the administrator files, the robots.txt file would look like this:

Example

User-agent: *
Disallow: /wp-admin

WARNING: Be very careful not to block search engines' access to your entire website without realizing it, as in this example:

Example

User-agent: *
Disallow: /

We must verify that the robots.txt file is not blocking any important part of our website. We can do this by visiting the URL www.example.com/robots.txt, or through Google Webmaster Tools under "Crawl" > "robots.txt Tester".


The robots.txt file can also be used to indicate where our sitemap is located, by adding a Sitemap line at the end of the file.

Therefore, an example of a full robots.txt for WordPress would look like this:

Example

User-agent: *
Disallow: /wp-admin
Sitemap: http://www.example.com/sitemap.xml

If you want to dig deeper into this file, we recommend visiting the website that documents the robots.txt standard.

The robots meta tag

The meta tag “robots” is used to tell search engine robots whether or not they can index the page and whether they should follow the links it contains.

When analyzing a page, you should check whether any meta tag is mistakenly blocking access to these robots. This is how these tags look in the HTML code:

Example

<meta name="robots" content="noindex, nofollow">

On the other hand, robots meta tags are very useful for preventing Google from indexing pages that don't interest you, such as filter or pagination pages, while still following their links so that it keeps crawling our website. In that case, the tag would look like this:

Example

<meta name="robots" content="noindex, follow">

We can check a page's meta tags by right-clicking on it and selecting "View page source".
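For reference, a page that you do want indexed can simply omit the robots meta tag, or carry the permissive version shown below (a minimal sketch):

Example

<meta name="robots" content="index, follow">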

Or, if we want to go a little further, the Screaming Frog tool lets us see at a glance which pages across the entire site have this tag implemented: look in the "Directives" tab, in the "Meta Robots 1" field. Once you have located all the pages carrying these tags, you just have to remove them.

HTTP status codes

If a URL returns an error status code (404, 502, etc.), users and search engines will not be able to access that page. To identify these URLs, we again recommend Screaming Frog, because it quickly shows the status of every URL on your site.

IDEA: Every time you run a new crawl in Screaming Frog, export the result as a CSV. That way you can gather them all in the same Excel file later.
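If you only need to spot-check a single URL, and assuming you have curl installed, you can read the status code directly from the response headers:

Example

curl -I http://www.example.com/some-page

The first line of the output shows the status, for example "HTTP/1.1 404 Not Found".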

Sitemap

The sitemap is an XML file that contains a list of the site's pages together with some additional information, such as how often each page changes its content, when it was last updated, and so on.

A small excerpt from a sitemap would be:

Example

<url>
  <loc>http://www.example.com</loc>
  <changefreq>daily</changefreq>
  <priority>1.0</priority>
</url>
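For context, a complete minimal sitemap wraps one or more of these <url> entries in a <urlset> element that declares the sitemaps.org namespace:

Example

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>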

Important points to check regarding the sitemap:

  • It follows the protocol; otherwise, Google will not process it properly
  • It is uploaded to Google Webmaster Tools
  • It is kept up to date: when you update your website, make sure all the new pages are in the sitemap
  • All the pages listed in the sitemap are being indexed by Google

If the website does not have a sitemap, we must create one by following four steps:

  1. Generate an Excel file with all the pages we want indexed; for this, use the same Excel file created when checking the HTTP response codes
  2. Create the sitemap; for this, we recommend the Sitemap Generators tool (simple and very complete)
  3. Compare the pages in your Excel file with those in the sitemap, and remove from the Excel file the ones we do not want indexed
  4. Upload the sitemap through Google Webmaster Tools

Web structure

If a website's structure is too deep, Google will have a harder time reaching all its pages. It is therefore recommended that the structure be no more than three levels deep (not counting the home page), since the Google robot has a limited time to crawl a website: the more levels it has to go through, the less time it will have left to reach the deepest pages.

That is why it is always better to create a horizontal web structure rather than a vertical one.

Vertical structure (diagram)

Horizontal structure (diagram)

Our advice is to make an outline of the entire site in which you can easily see the levels, from the home page down to the deepest page, and calculate how many clicks it takes to reach each one.
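As an illustration, in a horizontal structure every page stays within three clicks of the home page (the URLs are hypothetical):

Example

www.example.com/ (home)
www.example.com/category/ (level 1)
www.example.com/category/subcategory/ (level 2)
www.example.com/category/subcategory/product (level 3)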

Use Screaming Frog again to locate what level each page sits at and whether it has links pointing to it.

JavaScript and CSS

Although in recent years Google has become smarter at reading these technologies, we must still be careful: JavaScript can hide part of our content, and CSS can scramble it by displaying it in a different order from the one Google sees.

There are two methods to know how Google reads a page:

  • Plugins
  • “Cache:” command

Plugins

Plugins such as Web Developer or Disable-HTML help us see a website the way a search engine "crawls" it. To do this, open one of these tools and disable JavaScript: all drop-down menus, links, and texts must still be readable, because Google must be able to read them.

Then deactivate the CSS as well, since we want to see the actual order of the content, and CSS can change it completely.

“Cache:” command

Another way to know how Google sees a website is through the "cache:" command.

Enter "cache:www.myexample.com" in the search engine and click on "Text-only version". Google will show you a snapshot in which you can see how it reads the website and when it last accessed it.

Of course, for the "cache:" command to work correctly, our pages must already be in Google's index.

Once Google indexes a page for the first time, it determines how often it will revisit it looking for updates. This depends on the authority and relevance of the domain the page belongs to and on how frequently it is updated.

Whether through a plugin or the “cache:” command, make sure you meet the following points:

  • You can see all the menu links.
  • All web links are clickable.
  • All the text remains visible with CSS and JavaScript disabled.
  • The most important links are at the top.

Loading speed

The Google robot has a limited time when crawling our site: the less time each page takes to load, the more pages the robot will reach.

You should also keep in mind that a very slow page load can make your bounce rate skyrocket, so loading speed is vital not only for ranking but also for a good user experience.

To check your website's loading speed, we recommend Google PageSpeed. There you can see which problems are slowing down your site, along with the tips Google offers to tackle them. Focus on those with high and medium priority.

2. Indexability

Once the Google robot has accessed a page, the next step is to index it. Indexed pages are included in an index, sorted according to their content, authority, and relevance, so that Google can access them more easily and quickly.

How do you check whether Google has indexed your website correctly?

The first thing to do is run a search with the "site:" command; Google will return the approximate number of pages of your website that it has indexed:
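For example, with the hypothetical domain used throughout this chapter:

Example

site:www.example.com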


If you have linked your website to Google Webmaster Tools, you can also check the actual number of indexed pages by going to Google Index > Index Status.


Once you know (more or less) exactly how many pages your website has, this figure lets you compare the number of pages Google has indexed with the number of real pages on your website. Three scenarios are possible:

  1. The number in both cases is very similar. This means everything is in order.
  2. The number that appears in the Google search is smaller, which means that Google is not indexing many of the pages. This happens when Google cannot access all the pages on the site. To solve it, review the accessibility part of this chapter.
  3. The number that appears in the Google search is larger, which almost certainly means your website has a duplicate content problem: either you have duplicate content or Google is indexing pages you don't want indexed.

Duplicate content

Having duplicate content means that the same content is accessible through several URLs. This is a very common problem, often involuntary, and it can have negative effects on Google rankings.

These are the main reasons for duplicate content:

  • “Canonicalization” of the page
  • URL parameters
  • Pagination

"Canonicalization" of the page

This is the most common cause of duplicate content, and it occurs when your home page has more than one URL:

Example

example.com
www.example.com
example.com/index.html
www.example.com/index.html

Each of the above leads to the same page with the same content. If Google is not told which one is correct, it will not know which one to rank and may end up ranking the unwanted version.

Solution

There are three options:

  1. Set up a redirect on the server to make sure only one version of the page is shown to users.
  2. Define which subdomain we want to be the main one ("www" or "non-www") in Google Webmaster Tools.
  3. Add a rel="canonical" tag in each version pointing to the one considered correct.
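For option 3, here is a minimal sketch of the tag, assuming http://www.example.com/ is the version we want Google to rank; it goes in the <head> of every alternative version:

Example

<link rel="canonical" href="http://www.example.com/" />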
URL parameters

There are many types of parameters, especially in e-commerce: product filters (color, size, rating, etc.), sorting options (lowest price, relevance, highest price, grid view, etc.), and user sessions. The problem is that many of these parameters do not change the content of the page, which generates many URLs for the same content.

www.example.com/pens?color=black&price-from=5&price-to=10

In this example, we find three parameters: color, minimum price and maximum price.

Solution

Add a rel="canonical" tag on each parameterized URL pointing to the original page; this way you will avoid any confusion on Google's part about which page is the original.
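A minimal sketch, placed in the <head> of the hypothetical filtered URL above so that it points back to the unfiltered page:

Example

<link rel="canonical" href="http://www.example.com/pens" />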

Another possible solution is to indicate, through Google Webmaster Tools > Crawl > URL Parameters, which parameters Google should ignore when indexing the pages of the site.

Pagination

When an article, product list, or set of tag and category pages spans more than one page, duplicate content problems can appear even though each page has different content, because they are all focused on the same topic. This is a huge problem on e-commerce sites, where a single category can contain hundreds of articles.

Solution

Currently, the rel="next" and rel="prev" tags let search engines know which pages belong to the same category or publication, making it possible to focus all the ranking potential on the first page.

How to use the rel="next" and rel="prev" tags

1. Add the rel="next" tag in the <head> section of the first page:

  • <link rel="next" href="http://www.example.com/page-2.html" />

2. On all pages except the first and the last, add both the rel="next" and rel="prev" tags:

  • <link rel="prev" href="http://www.example.com/page-1.html" />
  • <link rel="next" href="http://www.example.com/page-3.html" />

3. Add the rel="prev" tag to the last page:

  • <link rel="prev" href="http://www.example.com/page-4.html" />

Another solution is to find the pagination parameter in the URL and enter it in Google Webmaster Tools so that those pages are not indexed.

Cannibalization

Keyword cannibalization occurs when several pages of a website compete for the same keywords. This confuses the search engine, which cannot tell which page is the most relevant for that keyword.

This problem is very common in e-commerce, because several versions of the same product all "attack" the same keywords. For example, if a book is sold in softcover, hardcover, and digital versions, there will be three pages with virtually the same content.

Solution

Create a main product page, from which the pages for the different formats can be reached, and include on each format page a canonical tag pointing to that main page. The best practice is always to center each keyword on a single page to avoid any cannibalization problem.

3. Content

In recent years it has become quite clear that, for Google, content is king. So let's offer it a good throne.

Content is the most important part of a website: however well optimized a page is at the SEO level, if its content is not relevant to the searches users perform, it will never appear in the top positions.

To analyze the content of your website properly you have a few tools at your disposal, but in the end the most useful approach is to review the page with JavaScript and CSS disabled, as explained above. This way you will see what content Google is actually reading and in what order it appears.

When analyzing the content of the pages you must ask yourself several questions that will guide you through the process:

  • Does the page have enough content? There is no standard measure of how much is "enough", but it should contain at least 300 words.
  • Is the content relevant? It must be useful to the reader; just ask yourself whether you would read it. Be honest.
  • Are the important keywords in the first paragraphs? Beyond these, use related terms, because Google is very effective at relating terms.

A page will never rank for a term it does not contain.

  • Is there keyword stuffing? If the page overdoes its keywords, Google will not be pleased. There is no exact number that defines a perfect keyword density; Google simply advises being as natural as possible.
  • Are there misspellings?
  • Is it easy to read? If you don't find the reading tedious, it's fine. Paragraphs should not be too long, the font should not be too small, and it is recommended to include images or videos that reinforce the text. Always remember which audience you are writing for.
  • Can Google read the text of the page? We have to prevent the text from being embedded in Flash, images, or JavaScript. Verify this by looking at the text-only version of the page, using the Google command cache:www.example.com and selecting that version.
  • Is the content well structured? It has its corresponding H1, H2, and subsequent tags, the images are properly laid out, etc.
  • Is it linkable? If you don't give users a way to share it, they very likely won't. Include social media sharing buttons in visible places on the page that don't hinder the display of the content, be it a video, a photo, or text.
  • Is it up to date? The more current your content, the more frequently Google will crawl your website and the better the user experience.

Advice

You can create an Excel file with all the pages, their texts, and the keywords you want them to rank for; this makes it easier to see where you should reduce or increase the number of keywords on each page.

4. Meta tags

Meta tags are used to convey to search engines information about what the page is about when they have to sort and display their results. These are the most important tags to keep in mind:

Title

The title tag is the most important of the meta tags. It is the first thing that appears in Google's results.

When optimizing the title, keep in mind that:

  • The tag must be in the <head> </head> section of the code.
  • Each page must have a unique title.
  • It should not exceed 70 characters; otherwise, it will appear cut off.
  • It must be descriptive regarding the content of the page.
  • It must contain the keyword for which we are optimizing the page.

We must never stuff keywords into the title: this makes users distrust the result and may make Google think we are trying to deceive it.

Another aspect to consider is where to put the "brand", that is, the website's name. It is usually placed at the end of the title to give more weight to the keywords, separated from them by a hyphen or a vertical bar.
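A minimal sketch of the tag, with a hypothetical keyword and brand, placed in the <head>:

Example

<title>Trail Running Shoes | BrandName</title>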

Meta description

Although it is not a critical ranking factor, it significantly affects the click-through rate in the search results.

For the meta description we follow the same principles as with the title, except that its length should not exceed 155 characters. For both titles and meta descriptions we must avoid duplication; this can be checked in Google Webmaster Tools > Search Appearance > HTML Improvements.
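A minimal sketch with hypothetical text, also placed in the <head>:

Example

<meta name="description" content="Learn how to choose trail running shoes: practical tips on grip, cushioning and fit.">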

Meta keywords

At one time meta keywords were a very important ranking factor, but Google discovered how easy they were to manipulate, so it eliminated them as a ranking factor.

Tags H1, H2, H3 …

The H1, H2, and subsequent tags are very important for a good information structure and a good user experience, since they define the hierarchy of the content, which improves SEO. We must give special importance to the H1, because it normally sits at the top of the content, and the higher up a keyword appears, the more importance Google gives it.
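A minimal sketch of a coherent heading hierarchy (the texts are placeholders):

Example

<h1>Main topic of the page</h1>
<h2>First subtopic</h2>
<h3>Detail within the first subtopic</h3>
<h2>Second subtopic</h2>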

“Alt” tag in the image

The “alt” tag in the images is added directly to the image code itself.

Example

<img src="http://www.example.com/example.jpg" alt="cool keyword" />

This tag should describe the image and its content, since it is what Google reads when crawling the image and one of the factors it uses to rank it in Google Images.

Conclusion

You now know how to build an SEO-optimized page, and that there are many factors to optimize if you want to appear in the top positions of the search results. Now you will surely ask yourself: which keywords will rank my website best?

We don't know exactly what those keywords are, but we can help you find them. Find out in the next lesson!
