The Site Experience provides a wealth of useful data from which Quick Wins can be derived quickly and easily. This article presents the most important charts and graphs and explains how to interpret them!
- All crawled URLs
- Index vs. Noindex
- Server Responses
- Canonical Tags
- Page Speed
- Broken & Redirected Links
- Link Equity Optimization
- Anchor Texts
- Link Inconsistencies
As the name of the tab suggests, this section gives you a general overview of the crawl results. At the bottom of the page, you will find a list of all issues that were found during the crawl. The highlight: the issues are ranked with a priority score, which makes it easier for you to work through them!
Clicking on the deep link of the issue will take you directly to the relevant subpage, where the filters are also automatically set to identify the "problem URLs" immediately.
Let's take a look at this with an example:
If you click on the "Image links without alt attribute" issue in this table, you will land directly in Link Analysis > Link Texts. If you scroll to the bottom of the page, you will see that the following filter has been set:
The table below lists the affected URLs; clicking on the target URL count takes you directly to the link text details. In the details, you can see the pages that link to the target URL via an image without an alt attribute.
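The check behind this issue can be approximated with a small script. The following is a minimal sketch using Python's standard-library HTML parser; the tool's actual crawler logic is not public, and the sample HTML is purely illustrative:

```python
from html.parser import HTMLParser

class ImageLinkChecker(HTMLParser):
    """Collects the src of <img> tags inside <a> links that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.in_link = True
        elif tag == "img" and self.in_link and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src"))

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

# Hypothetical page fragment: one image link without alt, one with alt
checker = ImageLinkChecker()
checker.feed('<a href="/p1"><img src="a.png"></a> <a href="/p2"><img src="b.png" alt="B"></a>')
print(checker.missing_alt)  # → ['a.png']
```

Only the first image would be flagged, because the second one carries an alt attribute.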
This subpage contains a list of all URLs that were crawled. It thus summarizes the results that are shown in more detail on the other subpages. The table can be extended with any of the crawled attributes.
If you have already stored a well-maintained keyword set in your project, this list can be sorted by the Traffic Index (click the small arrow next to "Traffic Index"). The URLs are then sorted by the estimated monthly Traffic Index based on the defined keyword set, which can raise the priority of an issue.
In addition, this table can be exported as an Excel file for free (click the three dots in the upper right corner), which makes it even easier to sort by values (e.g. status codes) or to identify empty cells (e.g. missing meta descriptions).
Clicking on a URL in this list (as in all other lists of the Site Experience) opens the URL Details, with all the information found in the crawl for this page (e.g. issues including the priority list). An advantage of this view is the pair of tables with the top incoming and top outgoing links, i.e. which URLs link internally to the analysed URL, and to which pages the analysed URL links.
First of all, it is important to note that we provide the data and you interpret it! URLs marked red as "NoIndex, NoFollow" are not always bad; it makes sense to give some pages these attributes. On this page you can see, based on all crawled URLs, how many of them are set to "Index" or "NoIndex" and to "Follow" or "NoFollow". It is up to you to decide whether this result is appropriate or not.
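The indexability of a page is typically controlled by a robots meta tag. The classification above can be sketched with a simplified parser; this is an illustrative approximation (real robots meta tags can vary in attribute order and casing, which this regex does not fully cover):

```python
import re

def robots_directives(html):
    """Extract index/follow directives from a robots meta tag (simplified)."""
    m = re.search(r'<meta\s+name=["\']robots["\']\s+content=["\']([^"\']+)["\']', html, re.I)
    if not m:
        # No robots meta tag: pages default to index, follow
        return {"index": True, "follow": True}
    content = {d.strip().lower() for d in m.group(1).split(",")}
    return {"index": "noindex" not in content, "follow": "nofollow" not in content}

print(robots_directives('<meta name="robots" content="noindex, follow">'))
# → {'index': False, 'follow': True}
```

As the article notes, a "noindex" result is not inherently bad; whether it is correct depends on the purpose of the page.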
Here you can quickly check 404 responses and server errors (5xx), as well as their percentage share of all crawled URLs. This allows you to react immediately and fix them.
Filter the list below for all URLs that have a 404 error. This server response is not only bad for the URL itself, but also for all pages that link to it. For this reason, it is best to sort the list by incoming links to prioritize the following actions.
Have a look at the number of redirects in the Redirect Chains tab. If the number is greater than 3, the displayed details will usually reveal long chains or even loops, which should be avoided. In such cases, try to redirect the first page straight to the final target.
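The chain-flattening advice above can be illustrated with a short sketch. Given a hypothetical mapping of redirect sources to targets, it follows each hop, flags loops, and shows why the first URL should point straight to the final one:

```python
def resolve_chain(url, redirects, max_hops=10):
    """Follow a redirect mapping and return (full chain, loop detected?)."""
    chain = [url]
    while chain[-1] in redirects and len(chain) <= max_hops:
        nxt = redirects[chain[-1]]
        if nxt in chain:  # the chain revisits a URL: a redirect loop
            return chain + [nxt], True
        chain.append(nxt)
    return chain, False

# Hypothetical redirect map: /old → /interim → /final
redirects = {"/old": "/interim", "/interim": "/final"}
chain, loop = resolve_chain("/old", redirects)
print(chain, loop)  # → ['/old', '/interim', '/final'] False
```

The fix here would be to change the server rule so that "/old" redirects directly to "/final", removing the intermediate hop.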
The table at the bottom of the page lists all crawled URLs with a canonical tag. Opening the details lets you check which URL each canonical tag points to, so you can quickly verify that canonical tags are used correctly.
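Such a check can be sketched in a few lines: extract the canonical target from a page and compare it with the page's own URL. This is a simplified illustration (the regex assumes the rel attribute comes before href, which real markup does not guarantee):

```python
import re

def canonical_of(html):
    """Return the href of a rel="canonical" link tag, if any (simplified regex)."""
    m = re.search(r'<link\s+rel=["\']canonical["\']\s+href=["\']([^"\']+)["\']', html, re.I)
    return m.group(1) if m else None

# Hypothetical page head pointing to its canonical version
page = '<head><link rel="canonical" href="https://example.com/shoes"></head>'
target = canonical_of(page)
print(target)  # → https://example.com/shoes
```

If the returned URL differs from the crawled URL, the page canonicalizes elsewhere, which is exactly the situation the details view lets you review.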
These tables give a quick overview of duplicate, missing, too long or too short Titles.
Since all tabs basically work the same way, let's look at the duplicate Titles in the following example.
Next to the headline, we see that the "Duplicate Titles" issue was found 174 times. Next to each listed title is the number of URLs affected by this problem.
By clicking on the number, you'll land in the detail view, where you can see the corresponding URLs that have an identical title. This way you can act immediately.
In the "Title too long" tab, a recommendation of 600 pixels is also given so that it is fully visible in the SERPs. Additionally, you can sort by traffic index here based on the project keywords, which helps you prioritize.
The Descriptions subpage works the same way as Titles. Note that Descriptions should not exceed 150 characters if you want them to be fully readable in the SERPs.
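The length checks described for titles and descriptions can be approximated like this. Note that the 600-pixel title limit from the article depends on the rendered font, so the character count used here is only a rough proxy; the 150-character description limit comes directly from the text above:

```python
DESCRIPTION_MAX_CHARS = 150  # recommendation from the article

def snippet_issues(title, description, title_max_chars=60):
    """Flag title/description problems; title_max_chars is a rough
    character proxy for the 600-pixel limit, not an exact rule."""
    issues = []
    if not title:
        issues.append("missing title")
    elif len(title) > title_max_chars:
        issues.append("title too long")
    if len(description) > DESCRIPTION_MAX_CHARS:
        issues.append("description too long")
    return issues

print(snippet_issues("Short title", "x" * 180))  # → ['description too long']
```

Running this over an exported URL list would reproduce the kind of grouping shown in the Titles and Descriptions tabs.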
At the top of the page you will find the risks identified for your project URLs. Clicking on the deep links will take you directly to the affected URLs.
The bar chart helps you to quickly identify and classify the loading times of all crawled URLs. Of particular interest is the percentage share, which tells you what proportion of all crawled URLs falls into each load time range.
In this example, we can see that nearly 9% of all URLs show loading times in the red range. Compared to the previous crawl, 0.4% more URLs have moved out of the red zone (trend).
High loading times are often caused by overly large files on the pages. This can be checked in the table below. In the example, we can see that the pages with loading times in the red range also have large file sizes. In this case, the files should definitely be compressed.
However, long loading times can also have other causes (e.g. long redirect chains). If a page is listed in the table with long loading times but has file sizes in the green range, file size is not the reason. In this case, the URLs with high loading times should definitely be cross-checked by you or your IT department.
Another Quick Win: by clicking on the three dots next to a URL, you can open it directly. This way you can check for yourself how long the page takes to load completely.
The bar chart shown here provides a quick insight into the structure of your domain by showing the percentages of all crawled URLs for the individual levels, i.e. how many clicks are required to reach a specific page.
The diagram is only intended to provide an overview and does not make any direct recommendations for action. As long as a domain has a good structure that logically explains why a URL sits on the fifth level, for example, there is no need to worry. Nevertheless, take a look at the URLs on the last levels to check whether they are in a good place; sort the table below by level.
As already described in Status Codes, 404 pages are also bad for the URLs that link to them. This subpage helps you to quickly identify and fix such links. While in our example only 45 pages had 404 status codes, the extent becomes really visible here. In total, 263 URLs link to these 45 pages, which can be seen in the pie chart.
The table below lists all broken links; a click on the source URL count opens a detailed overview of all URLs that link to the broken page.
In this example, 2 URLs link to a 404 page. In the detail overview we can see additional information about the linking URLs (e.g. link type).
Once all links to broken pages are fixed, you can move on to the redirected links in a second step. Ideally, all internal links should point to URLs with 200 status codes, not to 30x redirects.
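The two-step cleanup above (broken links first, then redirected links) can be sketched as a simple classification of outgoing links by the status code of their target. The URL and status data here are hypothetical:

```python
def link_issues(links, status):
    """Split outgoing links into broken (404) and redirected (30x) targets."""
    broken = [u for u in links if status.get(u) == 404]
    redirected = [u for u in links if 300 <= status.get(u, 200) < 400]
    return broken, redirected

# Hypothetical crawl result: target URL → HTTP status code
status = {"/a": 200, "/b": 404, "/c": 301}
broken, redirected = link_issues(["/a", "/b", "/c"], status)
print(broken, redirected)  # → ['/b'] ['/c']
```

Fixing the "/b" links first, then repointing the "/c" links to their final 200 target, mirrors the order recommended above.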
The list shown here gives an overview of the external URLs that your pages link to. The list can be exported for free, which is useful because this information is not included in the "All crawled URLs" tab. The information should be evaluated individually, as how external links are set up is a question of SEO strategy. For this reason, we only give an overview and no further recommendations.
Let's look at an example: we see that 19,774 of our URLs point to kdp.amazon.com, and that these are exclusively text links (blue bar). We learn that 0.5% of these links are tagged with the NoFollow attribute and that a combined Traffic Index of 91,776 is achieved. If we click on the 19,774 source URLs, we get a detailed list of all URLs pointing to kdp.amazon.com. We could then consider whether this makes sense for us or whether we want to change something.
This analysis shows you which of your URLs should be linked better. First of all: these analyses are only useful if you have a well-optimized keyword set, because they are based on evaluations of the Traffic Index.
This graphic gives you information about the SPS and shows you how many internal links point to your pages. The goal is therefore to move as many URLs as possible into the right-hand area of the graph, i.e. to optimize the SPS of the pages.
All URLs that are displayed in the upper right part of the graphic are linked frequently, but give away a lot of potential traffic. This indicates that they haven't been sufficiently optimized for keywords.
Filter for these URLs in the list below using the advanced filters. Let's take a look at an example: a few URLs are displayed in the upper right part of the graph (see screenshot above), so we filter for all URLs that have an SPS between 7 and 10 and give away high traffic.
You can analyze hub pages in the same way. The further right a URL is displayed, the more it is linked to; these URLs therefore have a high reputation. The higher a URL is displayed, the more it links out. You should try to clean up the two red areas: URLs in the upper left area have little reputation themselves (few inbound links), so their outbound links are not very valuable, while URLs in the lower right area have a good reputation but give away their potential because they do not link out enough.
The list below on this page shows you all link texts. Click on the advanced filters and select Project Keywords.
This allows you to have a look at all project keywords that are used as link texts and link to multiple URLs.
In this example, we see that the project keyword "pineapple" is used as link text for nine different URLs (target URLs). From an SEO point of view, this is suboptimal, as ideally only one page should be optimized for a specific keyword. If we now click on the number, the link text details open, allowing us to assess at a glance whether all the URLs listed there should really be optimized for this keyword.
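The underlying analysis, counting how many distinct target URLs each anchor text points to, can be sketched as follows. The anchor/target pairs are hypothetical:

```python
from collections import defaultdict

def targets_per_anchor(links):
    """Group internal links by anchor text and count distinct target URLs."""
    targets = defaultdict(set)
    for anchor, target in links:
        targets[anchor.lower()].add(target)
    return {a: len(t) for a, t in targets.items()}

# Hypothetical (anchor text, target URL) pairs from a crawl
links = [("pineapple", "/fruit/1"), ("pineapple", "/fruit/2"), ("banana", "/fruit/3")]
print(targets_per_anchor(links))  # → {'pineapple': 2, 'banana': 1}
```

Any anchor text with a count above 1 corresponds to the situation described above: one keyword spread across several target URLs.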
The two tables below give a quick overview of Canonicalized pages, as well as NoIndex and NoFollow pages that are linked to.
In this example, we see that 2,715 (source) URLs point to a target URL that in turn points to another page via a Canonical Tag. Such chains should be avoided from an SEO point of view; ideally, a page always links directly to the canonical URL.
The analysis in the NoIndex, NoFollow tab follows the same principle. Here you can find a list of linked URLs that are marked with NoIndex or NoFollow. Again, we make no recommendation or assessment, but only provide an overview, so that you can decide for yourself whether this is appropriate.