How to make a website optimized for SEO?


Now that you know what SEO is and which factors Google takes into account when positioning a website, you still need to learn what to do so that your pages have a real chance of positioning themselves in the SERPs.

In this chapter, we will talk about how to optimize the main positioning factors, as well as the main SEO problems that arise when optimizing a website and their possible solutions.

We will divide the topics of this chapter into 4 big blocks:

  • Accessibility
  • Indexability
  • Content
  • Meta tags

1. Accessibility

The first step in optimizing the SEO of a website is to allow search engines to access our content. That is, you have to check whether the website is visible to search engines and, above all, how they are seeing the page.

For various reasons that we will explain later, it may be that search engines cannot read a website correctly, which is an essential requirement for positioning.

Aspects to take into account for good accessibility

  • robots.txt file
  • Robots meta tag
  • HTTP status codes
  • Sitemap
  • Web structure
  • JavaScript and CSS
  • Load speed

robots.txt file

The robots.txt file is used to prevent search engines from accessing and indexing certain parts of a website. It is very useful for preventing Google from showing pages that we do not want in the search results. For example, in WordPress, to keep crawlers out of the administration files, the robots.txt file would look like this:

Example
User-agent: *
Disallow: /wp-admin

WARNING: Be very careful not to block search engine access to your entire website without realizing it, as in this example:

Example
User-agent: *
Disallow: /

We must verify that the robots.txt file is not blocking any important part of our website. We can do this by visiting www.example.com/robots.txt, or through Google Webmaster Tools under “Crawl” > “robots.txt Tester”.

The robots.txt file can also be used to indicate where our sitemap is located, by adding a Sitemap line at the end of the document.

Therefore, a full robots.txt file for WordPress would look like this:

Example
User-agent: *
Disallow: /wp-admin
Sitemap: https://www.example.com/sitemap.xml

If you want to go into more detail about this file, we recommend visiting the website of the robots.txt standard (robotstxt.org).

Robots meta tag

The “robots” meta tag is used to tell search engine robots whether or not they can index the page and whether they should follow the links it contains.

When analyzing a page, you should check whether there is a meta tag that is mistakenly blocking access for these robots. This is an example of how these tags look in the HTML code:

Example
<meta name="robots" content="noindex, nofollow">

On the other hand, meta robots tags are very useful for preventing Google from indexing pages that do not interest you, such as pagination or filter pages, while still following their links so it can keep crawling our website. In this case, the tag would look like this:

Example
<meta name="robots" content="noindex, follow">

We can check the meta tags by right-clicking on the page and selecting “View page source”.

Or, if we want to go a little further, with the Screaming Frog tool we can see at a glance which pages of the entire website have this tag implemented. You can see it in the “Directives” tab, in the “Meta Robots 1” field. Once you have located all the pages with these tags, you just have to remove them.

HTTP status codes

If a URL returns an error status code (404, 502, etc.), users and search engines will not be able to access that page. To identify these URLs, we recommend using Screaming Frog as well, because it quickly shows the status of all the URLs on your site.

TIP: Each time you run a new crawl in Screaming Frog, export the results as a CSV. That way you can gather them all in the same Excel file later.

Sitemap

The sitemap is an XML file that contains a list of the pages of the site along with some additional information, such as how often each page’s content changes, when its last update was, etc.

A small excerpt from a sitemap would be:

Example
<url>
  <loc>http://example.com</loc>
  <changefreq>daily</changefreq>
  <priority>1.0</priority>
</url>
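
For reference, a complete sitemap wraps those <url> entries in a <urlset> element and starts with an XML declaration. Here is a minimal sketch following the sitemaps.org protocol; the URL and values are placeholders:

Example
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>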

Important points to check regarding the sitemap:

  • It follows the protocol; otherwise Google will not process it properly
  • It has been uploaded to Google Webmaster Tools
  • It is kept up to date: when you update your website, make sure all new pages are in your sitemap
  • All the pages in the sitemap are being indexed by Google

If the website does not have a sitemap, we will have to create one by following four steps:

  • Generate an Excel file with all the pages that we want to be indexed; for this, we will use the same Excel file that we created when checking the HTTP response codes

  • Create a sitemap. For this, we recommend the Sitemap Generators tool (simple and very complete)

  • Compare the pages in your Excel file with those in the sitemap and remove the ones that we do not want indexed

  • Upload the sitemap through Google Webmaster Tools

Web structure

If the structure of a website is too deep, Google will find it harder to reach all its pages. It is therefore recommended that the structure be no more than three levels deep (not counting the home page), because Google’s robot has a limited time to crawl a website, and the more levels it has to go through, the less time it has left to reach the deepest pages.

That’s why it is always better to design the structure of a website horizontally rather than vertically.

Vertical Structure

[Diagram: vertical site structure]

Horizontal Structure

[Diagram: horizontal site structure]

Our advice is to make an outline of the whole website in which you can easily see the levels it has, from the home page down to the deepest page, and calculate how many clicks are needed to reach each one.

Find out what level each page is on, and whether it has links pointing to it, by using Screaming Frog again.

JavaScript and CSS

Although in recent years Google has become much better at reading these technologies, we must be careful, because JavaScript can hide part of our content and CSS can jumble it up by displaying it in a different order from the one Google sees.

There are two methods to know how Google reads a page:

  • Plugins
  • Command “cache:”

Plugins

Plugins like Web Developer or Disable-HTML help us see how a search engine “crawls” the web. To do this, open one of these tools and disable JavaScript. We do this because all drop-down menus, links, and text should be readable by Google.

Then we deactivate the CSS since we want to see the real order of the content and the CSS can change this completely.

Command “cache:”

Another way to know how Google sees a web is through the Command “cache:”

Enter “cache:www.example.com” in the search engine and click on “Text-only version”. Google will show you a snapshot where you can see how it reads the website and when it last accessed it.

Of course, for the “cache:” command to work correctly, our pages must already be indexed in Google’s index.

Once Google indexes a page for the first time, it determines how often it will revisit it for updates. This will depend on the authority and relevance of the domain to which that page belongs and the frequency with which it is updated.

Whether you use a plugin or the “cache:” command, make sure the page meets the following points:

  • You can see all the links on the menu.
  • All links on the website are clickable.
  • There is no text that is not visible with CSS and Javascript enabled.
  • The most important links are at the top.

Load speed

The Google robot has a limited time to crawl our site: the less time each page takes to load, the more pages it will manage to reach.

You should also bear in mind that a very slow page load can cause your bounce rate to shoot up, so it becomes a vital factor not only for positioning but also for good user experience.

To check the loading speed of your website, we recommend Google PageSpeed; there you can see which problems slow down your site, along with the advice Google offers to tackle them. Focus on those with high and medium priority.

2. Indexability

Once the Google robot has accessed a page, the next step is to index it. Indexed pages are included in an index where they are sorted according to their content, authority, and relevance, making it easier and faster for Google to retrieve them.

How to check if Google has indexed my website correctly?

The first thing you have to do to find out whether Google has indexed your website correctly is to perform a search with the “site:” command; Google will return the approximate number of pages of your website that it has indexed.

If you have linked your website to Google Webmaster Tools, you can also check the real number of indexed pages under Google Index > Index Status.

Knowing (more or less) the exact number of pages on your website, you can compare it with the number of pages Google has indexed. Three scenarios are possible:

  • The number in both cases is very similar. It means that everything is in order.

  • The number that appears in the Google search is smaller, which means that Google is not indexing many of the pages. This happens because it cannot access all the pages on the website. To solve this, review the accessibility part of this chapter.

  • The number that appears in the Google search is higher, which means that your website has a duplicate content problem. The most likely reason there are more pages indexed than actually exist on your website is that you have duplicate content or that Google is indexing pages you do not want indexed.

Duplicate content

Having duplicate content means that the same content is accessible through several URLs. This is a very common problem, often unintentional, and it can also have negative effects on positioning in Google.

These are the main reasons for the duplicate content:

  • “Canonicalization” of the page
  • Parameters in the URL
  • Pagination

“Canonicalization” of the page

This is the most common reason for duplicate content and occurs when your home page has more than one URL:

Example
example.com

www.example.com

example.com/index.html

www.example.com/index.html

Each of the above URLs points to the same page with the same content. If you do not tell Google which one is correct, it will not know which one to position and may end up positioning exactly the version you do not want.

Solution. There are 3 options:

  1. Set up a redirect on the server so that only one version of the page is shown to users (see the redirect sketch below).
  2. Define which subdomain we want to be the main one (“www” or “non-www”) in Google Webmaster Tools.
  3. Add a rel="canonical" tag in each version pointing to the one considered correct (see the tag example below).
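
As an illustration of options 1 and 3, here is a minimal sketch assuming an Apache server with mod_rewrite enabled and “www” chosen as the main version; the domain and rules are placeholders to adapt to your own setup:

Example
# .htaccess: 301 redirect from the non-www version to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

<!-- Canonical tag placed in the <head> of every duplicate version -->
<link rel="canonical" href="https://www.example.com/" />
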
Parameters in the URL

There are many types of parameters, especially in e-commerce: product filters (color, size, rating, etc.), sorting options (price low to high, relevance, price high to low, grid view, etc.) and user sessions. The problem is that many of these parameters do not change the content of the page, which generates many URLs for the same content.

www.example.com/products?color=black&price-from=5&price-to=10

In this example, we find three parameters: color, minimum price, and maximum price.

Solution

Add a rel="canonical" tag on each parameterized URL pointing to the original page; this way you will avoid any confusion on Google’s part about which page is the original.
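
For example, the filtered URL above could declare the clean category page as its canonical version. A minimal sketch with hypothetical URLs:

Example
<!-- In the <head> of www.example.com/products?color=black&price-from=5&price-to=10 -->
<link rel="canonical" href="https://www.example.com/products" />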

Another possible solution is to indicate through Google Webmaster Tools > Crawl > URL Parameters which parameters Google should ignore when indexing the pages of a website.

Pagination

When an article, product list, or tag and category page spans more than one page, duplicate content problems may appear even though the pages have different content, because they are all focused on the same topic. This is a huge problem on e-commerce sites, where there can be hundreds of products in the same category.

Solution
Currently, the rel="next" and rel="prev" tags allow search engines to know which pages belong to the same category or publication, so all the positioning potential can be concentrated on the first page.

How to use the rel="next" and rel="prev" tags

1. Add the rel="next" tag in the <head> section of the first page:

<link rel="next" href="http://www.example.com/page-2.html" />

2. Add both the rel="prev" and rel="next" tags to all pages except the first and the last:

<link rel="prev" href="http://www.example.com/page-1.html" />
<link rel="next" href="http://www.example.com/page-3.html" />

3. Add the rel="prev" tag to the last page:

<link rel="prev" href="http://www.example.com/page-4.html" />

Another solution is to find the pagination parameter in the URL and enter it in Google Webmaster Tools so that it is not indexed.

Cannibalization

Keyword cannibalization occurs when several pages on a website compete for the same keywords. This confuses the search engine, which cannot tell which page is the most relevant for that keyword.

This problem is very common in e-commerce, because several versions of the same product all target the same keywords. For example, if you sell a book in softcover, hardcover, and digital versions, you will have three pages with practically the same content.

Solution
Create a main product page, from which the pages for the different formats can be accessed, and include on each format page a canonical tag pointing to the main page (see the example below). The best practice is always to focus each keyword on a single page to avoid any cannibalization problem.
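
A minimal sketch of this setup, assuming a hypothetical /book URL for the main product page and one page per format:

Example
<!-- In the <head> of /book-softcover, /book-hardcover and /book-digital -->
<link rel="canonical" href="https://www.example.com/book" />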

3. Content

In recent years it has become quite clear that, for Google, content is king. So let us offer it a good throne.

Content is the most important part of a website: no matter how well optimized it is at the SEO level, if it is not relevant to the searches users actually carry out, it will never appear in the top positions.

To analyze the content of your website properly, you have several tools at your disposal, but in the end the most useful approach is to view the page with JavaScript and CSS disabled, as explained above. This way you will see what content Google is actually reading and in what order it is arranged.

When analyzing the content of the pages you should ask yourself several questions that will guide you in the process:

  • Does the page have enough content? There is no standard measure of how much is “enough”, but it should contain at least 300 words.

  • Is the content relevant? It should be useful for the reader, just ask yourself if you would read that. Be sincere.

  • Do you have important keywords in the first paragraphs? In addition to these, we must use related terms because Google is very effective in relating terms.

A page will never position for something it does not contain


  • Does it have keyword stuffing? If the page’s content overdoes the keywords, Google will not be pleased. There is no exact number that defines a perfect keyword density, but Google advises being as natural as possible.

  • Does it have spelling mistakes?

  • Is it easy to read? If reading it is not tedious, it will be fine. Paragraphs should not be too long, the font should not be too small, and it is recommended to include images or videos that reinforce the text. Remember to always think about the audience you are writing for.

  • Can Google read the text of the page? We have to avoid putting text inside Flash, images, or JavaScript. We can check this by looking at the text-only version of our page: use the “cache:www.example.com” command and select that version.

  • Is the content well structured? Does it have its corresponding H1, H2, etc. tags, are the images well laid out, and so on?

  • Is it shareable? If we do not give users an easy way to share it, it is very likely that they will not. Include buttons to share on social networks in visible places on the page that do not get in the way of the content, whether it is a video, a photo, or text.

  • Is it up to date? The more current your content is, the more frequently Google will crawl your website and the better the user experience will be.

Advice

You can create an Excel file with all the pages, their texts, and the keywords you want to appear in them; this way it will be easier to see where you should reduce or increase the number of keywords on each page.

4. Meta tags

Meta tags are used to give search engines information about what a page is about when they have to sort and display their results. These are the most important tags to take into account:

Title

The title tag is the most important element among the meta tags. It is the first thing that appears in Google’s search results.

When optimizing the title, we must bear in mind that:

  • The tag must be in the <head></head> section of the code.
  • Each page must have a unique title.
  • It should not exceed 70 characters; otherwise it will be truncated.
  • It must be descriptive with respect to the content of the page.
  • It must contain the keyword for which we are optimizing the page.

We should never stuff the title with keywords; this will make users distrust it and make Google think we are trying to deceive them.

Another aspect to take into account is where to put the “brand”, i.e., the name of the website. It is usually placed at the end to give more weight to the keywords, separating them from the site name with a hyphen or a vertical bar.
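
Putting these guidelines together, a title for a hypothetical page optimized for “buy mountain bikes” could look like this (keyword first, brand at the end, under 70 characters):

Example
<title>Buy Mountain Bikes Online | Example Store</title>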

Meta-description

Although it is not a critical factor in the positioning of a website, it affects the click-through rate in the search results considerably.

For the meta description, we follow the same principles as with the title, except that its length should not exceed 155 characters. For both titles and meta descriptions we must avoid duplication, which we can check in Google Webmaster Tools > Search Appearance > HTML Improvements.
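
Like the title, the meta description goes in the <head> of the page. A sketch for the same hypothetical page, kept under 155 characters:

Example
<meta name="description" content="Buy mountain bikes online with free shipping. Compare models, sizes and prices at Example Store.">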

Meta Keywords

At one time, meta keywords were a very important positioning factor, but when Google discovered how easy it was to manipulate search results with them, it eliminated them as a positioning factor.

Tags H1, H2, H3 …

H1, H2, etc. tags are very important for a good information structure and a good user experience, because they define the hierarchy of the content, which improves SEO. We must pay special attention to the H1, because it is usually at the top of the content, and the higher up a keyword appears, the more importance Google will give it.
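
A sketch of a well-structured page, with a single H1 containing the main keyword and H2/H3 tags for the subsections (the headings are illustrative):

Example
<h1>Mountain Bikes: Complete Buying Guide</h1>
<h2>How to choose the right frame size</h2>
<h3>Size chart by height</h3>
<h2>Best mountain bikes by budget</h2>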

“alt” tag in the image

The “alt” attribute of an image is added directly to the image tag itself.

Example
<img src="http://example.com/example.jpg" alt="descriptive keyword" />

This attribute has to describe the image and its content, since it is what Google reads when crawling the page and it is one of the factors Google uses to position the image in Google Images.

Conclusion

Now you know how to make a page optimized for SEO, and you have seen that there are many factors to optimize if you want to appear in the top positions of the search results. Surely you are now wondering: which keywords will best position my website?
