How to Use Google Webmaster Tools for SEO
Google Webmaster Tools ― which I shall henceforth egregiously call “GWT” (apologies in advance) ― is a sweet suite of Google SEO tools that provides data and configuration control for your site in Google. If you’re doing any SEO and you don’t find value in GWT, you either use a paid tool that uses GWT data or you have a great opportunity to expand your toolkit.
There’s a ton you can do with GWT, but it can take a while to learn how to get great return on the time you spend with it. To that end, I’ve tried my best to assemble a meaty, practical collection of key insights on the reports I’ve found most useful (including lots of coverage on the Search Queries feature).
If you haven’t set up Google Webmaster Tools yet, do so yesterday. It’s really easy and worthwhile. Just go to www.google.com/webmasters/tools, sign in with your Google account, and click add a site. Then you’ll be provided with several options to verify that you manage the site. Use the option that’s easiest and make it happen.
The biggest mistake that I see people make with GWT is failing to add every version of every site they manage. It’s unfortunate, because it’s very easy to do. Failure to add every version of every site will result in data for only some of your site(s) ― at best, this stifles insights; at worst, this can cause you to make costly errors or neglect critical issues.
Obviously, if you have the domain thingamabobs.com and a domain called whatchamacallits.com, you should add both root domains.
You should also add all subdomains. If you have the subdomain http://red.thingamabobs.com and the subdomain http://www.thingamabobs.com, add them both. If you only add http://www.thingamabobs.com, that’s all GWT will track; and that’s all the data you’ll get.
If you have http://thingamabobs.com and http://www.thingamabobs.com, add them both. (Then fix your duplicate content issue).
If you have https://www.thingamabobs.com and http://www.thingamabobs.com, add them both.
Basically, if you can change what’s to the left of your root domain and still get a live page when you enter the URL in the browser bar, then add that subdomain. Also add any subdirectories that target specific countries. Google explains the versions you can add here.
Occasionally, Google Webmaster Tools will notify you if your site seems to have a very important issue. Make sure you set up message forwarding so these notifications get emailed to you. The emails may inform you about problems accessing the site, increases in crawl errors, unnatural link warnings, malware alerts, and more.
One caveat is that there can be a delay between the time the problem arises and the time you get notified. Another caveat is that there are plenty of bad problems GWT won’t notify you about. You definitely want to pay close attention to GWT emails, but they are merely an additional line of defense ― not a replacement for any other risk-mitigation measures.
Google has a lot of resources on SEO, and good practitioners gobble as much of it as they can. The Google Webmaster Tools Help Center is a treasure trove.
Make sure you use GWT with an inquisitive mindset. Most of the reports have limits, caveats, and nuances. Before you go rushing off to make a major decision, make sure the data is what you think it is and means what you think it means. Often, GWT data leads to important but unconfirmed hypotheses that you need to investigate.
Also, if you click the help button, you may get a quality, relevant suggested help article. So do that frequently. The articles are pretty good at explaining what the various data-types in reports actually are (insomuch as Google is willing to share).
However, the articles are frequently dry and lacking in pragmatic insights on what to focus on (which is why I stayed up too late, nodding off at the desk writing all this color commentary).
Plus, there are very few pictures. (We’re expected to just read words on the interwebs? Come on.)
The GWT reports help files can also be a bit inconsistent. I looked through the related help files for each GWT report I cover below, and I’ve listed the most helpful ones.
Be sure to click on those little question marks a bunch too.
GWT help article here.
Hopefully, you know by now how Google uses schema.org markup to inform the rich snippets (recipes, reviews, and much more) that get displayed in the search results pages. By implementing the right data markup, you can trigger your site’s data to display as these rich snippets and dramatically improve click-through rates, for a notable bump in search traffic.
If structured data matters much to you, the GWT Structured Data report is essential.
By viewing stats on structured data for your site as a whole and by type of data, you can verify that Google is picking up structured data.
You can also get nice details on the individual data pieces being picked up and also on errors.
If the numbers and data don’t match what you expect, start diagnosing by looking for errors. Then, find a page that should be triggering a rich snippet but isn’t and test it with GWT’s handy-dandy Rich Snippets Testing Tool.
The HTML improvements section can not only help you improve the appearance of your SERP listings, but also help you find opportunities to address keyword optimization and duplicate content issues.
Find title tags and Meta descriptions that need to be fixed.
The HTML Improvements report does a good job of flagging pages that don’t conform to the following best practices for Title tags and Meta descriptions:
- Have a unique one for each page.
- Don’t make it too long or it will get truncated.
- Be informative.
Sniff out duplicate content.
As you likely know, it is generally a bad practice to have pages that do not contain content unique to that page. The first step in dealing with duplicate content problems is identifying them, and GWT offers one way of doing so that is too simple to ignore ― simply check for duplicate Title tags and Meta descriptions.
Find out which pages share which Title tags and, if there are a lot of duplicate Titles, download the data so you can play around with it in Excel. There’s a good chance the URLs are duplicates. While there are other good ways of finding duplicate content (like with Screaming Frog), this method’s benefit is that it shows you duplicate content Google has actually indexed.
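If you want to script that instead of eyeballing Excel, grouping the exported URL/Title pairs does the trick. A minimal Python sketch ― the URLs, titles, and pairing below are hypothetical; adapt to the actual columns of your HTML Improvements download:

```python
from collections import defaultdict

# Hypothetical (URL, title) pairs as exported from the HTML Improvements report.
pages = [
    ("http://www.example.com/widgets", "Widgets | Example"),
    ("http://www.example.com/widgets?sort=price", "Widgets | Example"),
    ("http://www.example.com/about", "About Us | Example"),
]

# Group URLs by their Title tag.
by_title = defaultdict(list)
for url, title in pages:
    by_title[title].append(url)

# Any title shared by more than one URL is a duplicate-content candidate.
duplicates = {t: urls for t, urls in by_title.items() if len(urls) > 1}
```

Parameterized URLs like the `?sort=` example above are classic duplicate-content suspects.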
Caveat on non-indexable content data
I’ve worked on countless sites with content that appears not to be indexed, or not properly read by search engines, yet is not reflected in the “non-indexable content” data. I really have no idea what a page has to do to be flagged here (I’d love to hear insights if anyone knows). I almost always see GWT say “we didn’t detect any issues with non-indexable content” ― even when that seems incorrect. So use caution.
Google your brand. Now do it on private browsing. Assuming you have sitelinks, do you like them? Occasionally, the sitelinks can link to pages that convert poorly or offer suboptimal UX. If you don’t like a sitelink, you can demote the Sitelink to reduce the chances of it appearing.
If you have a lot of branded traffic and a crappy sitelink or two, this is a big and easy win. The most common big win scenario I see is when a site was getting a lot of traffic to a page that has suddenly become dated (for example, a seasonal or out-of-stock product).
Just make sure this is the right thing to do. Factor in the possible impact of personalization, location, and device on the sitelinks you observe ― what you see may not be what everyone sees, and what you don’t want to see may be something some people do want to see. Also, if the vast majority of Google traffic to a page is coming through a sitelink (which you can determine by analyzing the page in Search Queries and noting how many clicks come from branded queries), then you can estimate conversion and engagement for the sitelink using a Google Analytics landing page report filtered for only Google organic traffic.
GWT help article here.
This is the gem of the “Search Traffic Section” ― heck, it’s the gem of all of Google Webmaster Tools, and the data in this report can be found repackaged in many paid SEO tool suites. This report has some (nothing is truly comprehensive) data on the following: impressions, clicks, click-through rate (CTR), and average ranking position.
That data can be displayed against the following dimensions: 1) keyword, 2) landing page, and 3) keyword/landing page.
You can then filter data by location (but only certain countries) and Google search vertical (regular web, image, mobile, video, or news). And you can download that data and have a field day.
Sweet, right? Well…
Limitations of Search Queries
This data has gotten extremely popular as folks compensate for the keyword-performance data lost to (not provided) keywords in Google Analytics. The search query data seems to have gotten much more reliable, too ― and more specific.
Unfortunately, while Search Queries are one of the most important ways to fill in the (not provided) void, Search Queries data is far from a complete replacement for the (not provided) keywords. Why?
- There’s no engagement or conversion metrics.
- You don’t have rich secondary dimensions like “metro area” or “time of day” as in Analytics.
- Not every keyword is shown (not even close).
- A click in this report is technically different than a visit (session) in Google Analytics.
- Historical data only goes back 3 months (a workaround is below).
Cool Insights with Search Queries
That said, there’s a ton you can do. Obviously, it’s very good to know which keywords people are Googling to get to your site.
And you probably know how to make use of rankings data. By the way, GWT “avg. position” data has been demonstrated to be relatively consistent with other ranking-checking methods.
Below are a few other fun insights.
I find that many, if not most, web marketers have never looked at search engine traffic data on PDFs and other downloads.
Out of the box, Google Analytics can only pull data on HTML pages. Well, one method (here’s more) to get more data on non-HTML pages is the Search Query report. And this is the best way to get keyword-level data on non-HTML pages. Just bust out your ctrl+f and look for the filetype extension (.pdf, .doc, etc…) in the URL.
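If you’d rather script that than ctrl+f, you can scan the exported top-pages data for non-HTML extensions. A sketch in Python ― the sample rows, column names, and extension list below are assumptions, not GWT’s guaranteed export format:

```python
import csv
import io

# Hypothetical sample of a GWT top-pages export; column names are assumed.
sample_csv = """Page,Impressions,Clicks
http://www.example.com/guide.pdf,1200,85
http://www.example.com/blog/post,5000,300
http://www.example.com/whitepaper.doc,400,22
"""

NON_HTML_EXTENSIONS = (".pdf", ".doc", ".docx", ".xls", ".ppt")

def non_html_rows(csv_text):
    """Return rows whose landing-page URL ends in a non-HTML extension."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if row["Page"].lower().endswith(NON_HTML_EXTENSIONS)]

downloads = non_html_rows(sample_csv)
# downloads holds the .pdf and .doc rows along with their click counts.
```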
Image and video SEO
It can be useful to look at web-only (regular) queries, image-only queries, and video-only queries. This data can explain weird things you might see in Analytics.
For example, a quick look at image-only queries revealed why we (still) get a lot of low-quality traffic to a random old blog post about naming the then new office pet.
One very important thing to remember when looking at image data is that clicks do not equal visits, especially for images. Whenever you compare GWT image clicks to image visits in Analytics, you’ll get wildly different results; image clicks will be waaay higher than image visits. In our case, 4,000+ image clicks resulted in only 132 sessions on our site (these sessions do not include visits to the image file URL itself; they are only sessions on HTML pages).
To find Google image traffic in Analytics:
- Go to Acquisition -> All Traffic.
- Set the advanced filter to Source Contains Google and Referral Path Matching RegExp images|mgres|imagedetail.
AJ Kohn has more details on tracking image search in Analytics.
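For the curious, here’s what that Referral Path filter is actually matching. The sample paths below are hypothetical illustrations of Google image-search referrers, not guaranteed formats:

```python
import re

# The GA advanced filter described above: Referral Path matching this regex.
IMAGE_REFERRAL = re.compile(r"images|mgres|imagedetail")

# Hypothetical referral paths; real Google referrers vary by era and country.
paths = [
    "/imgres?imgurl=http://www.example.com/photo.jpg",  # image SERP click
    "/images?q=office+pet",                             # image search page
    "/search?q=office+pet",                             # regular web search
]

# Only the first two paths match, so they'd be segmented as image traffic.
image_hits = [p for p in paths if IMAGE_REFERRAL.search(p)]
```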
I haven’t verified 100%, but I’m almost positive that GWT is counting any click on the image SERP, not just clicks to your site. On a related note, a change in the image SERPs in 2013 drastically decreased Google image traffic for everyone.
Another thing to note is that the image data has incredibly high impressions and low CTR compared to the other verticals, so it can really skew your data if you are viewing All search queries.
Mobile vs. Web
Mobile users often do different kinds of searches than non-mobile. For example, mobile users are more likely to be looking for a business near them. Use the Search Query data to get insight on how mobile and non-mobile Google users search differently to wind up on your site.
Another question to ask is: “Are rankings drastically different for the same keyword on mobile vs. non-mobile?” If you rule out mobile image or video results skewing the data, then it’s possible your page ranks lower on mobile. While it’s probably them (Google) and not you, make sure you’re not making any big mobile SEO mistakes.
Looking at click-thru rate can reveal a number of opportunities and insights.
First, CTR data will help you understand the relationship between rank position and clicks.
Second, CTRs can also help you understand the SERPs for your niche. Often the CTR is highly dependent on external factors such as competition, number of advertisers, and amount of specialized results (like rich snippets, local carousels, images, etc…). Understanding which search queries tend to have lower CTR in your niche can help inform your future keyword research and SEO strategy.
Finally, sometimes the CTR is something you have some direct control over; and you want to find opportunities to directly improve CTRs. Below-average click-thrus may indicate an opportunity to employ rich snippets or tweak Meta descriptions.
To perform the above analyses, you need to know what average CTR is. You could look to the varying results of external studies. If you ask me, my take-with-a-grain-of-salt go-to “average CTR” for the #1 position is 30%, but I’m sure you’d get a dozen answers if you asked a dozen SEOs.
You could also take the average CTR for your data. First, export the search query data into Excel. Then isolate a bucket of queries for a given rank (all queries for position 1, for example). Then take the average for the bucket.
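In place of Excel, the same bucket-and-average calculation is a few lines of Python. Note this sketch computes a pooled CTR (total clicks over total impressions per position), which weights high-impression queries more than a simple average of per-query CTRs would; the sample rows and column names are made up:

```python
import csv
import io
from collections import defaultdict

# Hypothetical search query export; real column names may differ.
sample_csv = """Query,Position,Impressions,Clicks
widgets,1,1000,320
blue widgets,1,500,140
widget repair,2,800,120
"""

def avg_ctr_by_position(csv_text):
    """Pool clicks and impressions per rank position and return CTRs."""
    totals = defaultdict(lambda: [0, 0])  # position -> [clicks, impressions]
    for row in csv.DictReader(io.StringIO(csv_text)):
        pos = int(row["Position"])
        totals[pos][0] += int(row["Clicks"])
        totals[pos][1] += int(row["Impressions"])
    return {pos: clicks / imps for pos, (clicks, imps) in totals.items()}

ctrs = avg_ctr_by_position(sample_csv)
# Position 1 pools to 460/1500 ≈ 0.31; position 2 is 120/800 = 0.15.
```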
Another thing to look at is the CTR of searches for your brand. While it will never be near 100%, you usually want to get it as high as it will go. See if you should try to win more real estate in the SERPs.
Search Queries Hacks
Integrate GWT into GA
Viewing GWT Search Query data in Google Analytics (GA) is super easy. All it requires is for the admin of both GA and GWT to log into GA and, in the left nav, go to Acquisition -> Search Engine Optimization -> Landing Pages. If you’ve never connected GA and GWT, you’ll see a screen that states “This report requires Webmaster Tools to be enabled.” Simply click the set-up button and follow the easy instructions.
But there are limits.
One limitation of connecting the accounts is that you can only connect one GWT account to one GA account, and a GWT account can only be for one subdomain. So, if you have multiple subdomains, an individual GA view will only display some of your GWT query data.
Another limitation is that you can’t view Search Query data broken down by landing page in GA.
Export keyword by landing page
Viewing keyword data without landing page data is like having chocolate without the more chocolate. Unfortunately, GWT doesn’t let you download the search query data by landing page without clicking on every landing page in the report.
Well, LunaMetrician Noah has created a great bookmarklet that will automatically “click” on every landing page to reveal the search queries and then download it. So now you can have double chocolate. (OMG.)
One problem with the Search Queries report is that it only goes back 90 days. That’s no good if you love historical data like I do. The obvious solution is to export it periodically, but doing that manually is a constant pain. Fortunately, you can automate downloads: Google’s Webmaster Central blog documents a Python method (http://googlewebmastercentral.blogspot.com/2011/12/download-search-queries-data-using.html), and there’s a PHP class as well (https://code.google.com/p/php-webmaster-tools-downloads/wiki/Running).
We currently use the PHP solution here at LunaMetrics to access Search Queries report data (as well as content keywords, external links, and internal links).
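Whichever download method you use, the key to beating the 90-day window is to stamp each export with its snapshot date and append it to a running history. A minimal sketch of that accumulation step in Python ― the export columns here are simplified and hypothetical:

```python
import csv
import io

# Two hypothetical exports taken a month apart; each only covers ~90 days.
export_jan = """Query,Clicks
widgets,100
gadgets,40
"""
export_feb = """Query,Clicks
widgets,120
doohickeys,15
"""

def accumulate(history, csv_text, snapshot_date):
    """Append one export's rows to a running history, tagged with its date."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        history.append({"date": snapshot_date, **row})
    return history

history = []
accumulate(history, export_jan, "2014-01-01")
accumulate(history, export_feb, "2014-02-01")
# history now spans both snapshots, so trends survive the 90-day window.
```

In practice you’d write `history` back out to a CSV or database after each run.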
Webmaster Tools can give you some decent data on your backlinks. However, I don’t use it much these days. We use the paid tool Open Site Explorer by Moz which has superior actionable insights on authority. We also use Majestic SEO which has the most comprehensive set of raw link data available and is now free if you can verify site ownership (it’s quick and done through Webmaster Tools).
So use Majestic SEO. That said, below are some little link insights that may be unique to GWT.
Who links the most
Nobody crawls the web as deep as Google does. GWT might have data on some links when the other tools don’t. That said, GWT doesn’t always display all the links Google knows about (I’m not sure what the quantity cap looks like, but you may be able to get more than 1,000 domains if you download the data). You can download the linking domains and check to see if a given domain is linking to you. This can be useful if you really want to see if a specific site links to you or if you just want to see what the other link tools are missing (typically the dirty underbelly of the interwebs).
You might also note if the quantity of linking domains is growing from month to month.
The data from “download more sample links” or “download latest links” is very noisy; you’ll need to scrub out links from the same subdomain in Excel to get any use out of it.
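If you’d rather not scrub in Excel, de-duplicating the download to one link per linking subdomain is straightforward. A Python sketch with made-up link URLs:

```python
from urllib.parse import urlparse

# Hypothetical rows from a "download latest links" export.
links = [
    "http://blog.example.org/post-1",
    "http://blog.example.org/post-2",   # same subdomain as above: noise
    "http://news.example.net/story",
]

def one_per_subdomain(urls):
    """Keep only the first link seen from each linking subdomain."""
    seen, kept = set(), []
    for url in urls:
        host = urlparse(url).hostname
        if host not in seen:
            seen.add(host)
            kept.append(url)
    return kept

clean = one_per_subdomain(links)
# clean keeps one representative link per linking subdomain.
```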
Your most linked content
There’s a solid chance you can use this report to find inbound-linked pages you won’t find elsewhere. Seeing which pages pull in the most links, and why, is my favorite part of analyzing link-winning strategy. I don’t use GWT for this much, but it can help if you have a site that doesn’t get a ton of backlinks and every little link matters.
How your data is linked
While anchor text isn’t as critical to rankings as it used to be, it’s still worth looking at now and then. Unfortunately, the GWT report only lists up to 200 phrases.
GWT help article here.
Index bloat is one of the most common problems SEOs deal with. When Google has way more pages indexed than deserve to be organic landing pages, the consequent dissipation of link juice and constrained crawl budget can have a significant impact on SEO traffic.
The converse of index bloat is when pages that should be indexed are not indexed, and this is an equally important problem. There’s no shortage of horror stories of a site’s organic traffic dying because indexation was blocked via a problem with something like robots.txt, Meta robots, rel=canonical, or nofollow attributes. Often, when these issues are in their early stages, the impact on traffic is not yet apparent.
Check the Advanced Index Status report and examine total pages indexed, the number of pages removed, and the number of pages blocked by robots.txt. If any numbers have moved in a way you wouldn’t expect, investigate immediately.
For more information on crawling and indexation metrics, read this.
GWT help article here. (links to articles on specific types of errors on right)
A 404 is the HTTP status code for Page Not Found. This error occurs whenever there is no page for the URL requested. Webmaster Tools reports 404 errors whenever Google’s spider crawls a link to a URL that has no actual page associated with it. Common causes of 404s include typos in the destination URL of a link and failure to redirect the URL of a page that was moved or deleted. Both causes of 404s can be detrimental to both the user experience and your SEO endeavors.
Note that many GWT 404s are outdated, “false alarms,” or triggered by bad links from insignificant pages no one ever visits. These may not represent any significant inconvenience to your users or wastage of link juice, but many 404s will be problematic. If you suspect a 404 may be a problem, click on the URL in the report to see which pages link to it.
Resolve problem 404s by 301 redirecting to the appropriate page, by changing the destination URL of the inbound link, or by restoring content to the 404, depending on what is most practical and most beneficial to your users.
Note that if you utilize the “MARK AS FIXED” button, you will have more up-to-date data.
This post has a nice perspective on 404s. While the GWT report is super useful for trends and is a great data point, I often look to other data points like Google Analytics for taking action on 404s (see #9 in SEO Measurement Mistakes Part 3: Crawling and Indexation Metrics).
Soft 404s and Other Crawl Errors
404s get a lot of attention, but there are other crawl errors that can impact user experience and SEO. For example, 403s, 500s, and 503s are all non-crawlable. Other “not followed” URLs, like redirect loops, may not be crawlable either. Google Webmaster Tools reports on all of these.
Soft 404s are a user-experience and SEO issue, and GWT can be the best way to find them non-manually (though some might not actually be soft 404s).
However, GWT does not report on crawl issues like misplaced meta robots tags or 302 redirects.
The data in the Crawl Stats may not be as rich as server log file data, but it’s better than not looking at any spider activity reports at all.
Crawl Stats has pretty volatile graphs, but do look for big, weird spikes and distinct trends. For example, Crawl Stats can tell you:
- If you have an increase in the number of pages consuming crawl budget ― pages crawled goes up, but kilobytes downloaded does not;
- If page load times are eating crawl budget ― time spent downloading a page goes up and the number of pages crawled goes down; or
- If crawl budget increases/decreases ― kilobytes downloaded per day will trend, and pages crawled will likely follow.
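The first pattern is easy to sanity-check by deriving a kilobytes-per-page ratio from the two graphs. A toy Python example with invented numbers:

```python
# Hypothetical daily Crawl Stats points: (date, pages crawled, KB downloaded).
days = [
    ("2014-03-01", 1000, 20000),
    ("2014-03-02", 1800, 21000),  # pages crawled jumped but KB barely moved
]

kb_per_page = {date: kb / pages for date, pages, kb in days}
# A falling KB-per-page ratio alongside rising pages crawled suggests thin
# or duplicate URLs are eating crawl budget.
```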
An XML Sitemap is an opportunity to tell Google and the other search engines which pages on your site you want crawled and indexed. For large sites or sites with frequently updated content, a Sitemap is pretty important. The search engines don’t guarantee they will abide by the Sitemap, but anecdotal evidence has shown time and time again that XML Sitemaps increase the chance your pages are found, and found fast (especially if the Sitemap is up-to-date and “clean”).
Sitemaps can get tricky — especially when you have a large site or when you use special Sitemaps for images, video, news, mobile, or source code. To ensure you’re doing your Sitemaps right and getting the most of them, always submit them with GWT’s Sitemap feature.
It is recommended that you always validate your Sitemaps before going live. And what better way to validate than through the eyes of Google? Simply click the big red “Add/Test” button and test away.
Once you’ve submitted a valid sitemap to Google, though, don’t just ignore it.
Check in regularly to see if there are any errors or warnings. Often, a sitemap error will reveal a larger problem with your site.
In addition, pay attention to the number of URLs (or images, videos, etc..) indexed versus the number of URLs or items submitted. It is not uncommon for there to be a discrepancy here, but one of your SEO goals is to get the search engines to index everything you want indexed.
The tricky part is seeing which pages are not indexed (in fact, this topic could warrant its own article), but this may be possible with Google site search and Analytics landing page reports. It’s very time consuming manually, but can be automated with technical hacks.
If the pages not indexed are important to you, there are a few things you can do to improve indexation. For example, you could add or adjust tags in Sitemap: the <priority> tag tells the search engine how important a URL is, and the <changefreq> tag indicates how frequently the page is updated (for example with links to new pages). Also, unindexed pages may be a red flag that those pages lack inbound links or lack content perceived by engines to be unique.
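For reference, those two tags sit inside each <url> entry of the Sitemap. A minimal fragment ― the URL reuses the article’s example domain, and the values are purely illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.thingamabobs.com/blog/</loc>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Per the sitemaps.org protocol, <priority> ranges from 0.0 to 1.0 (default 0.5) and <changefreq> takes values like always, daily, weekly, and monthly; both are hints, not commands.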
On a related note, I wrote about building Sitemaps here.
GWT help article here.
When you first go into this report, you’ll see a doorway page that states “Use this feature only if you’re sure how parameters work. Incorrectly excluding URLs could result in many pages disappearing from search.” Heed that warning and don’t change the parameter settings unless you know what you’re doing.
But even if you kinda don’t, there’s some useful data in here.
When I’m doing an SEO audit, I like to look at the most commonly used parameters, see how the site is using them, see if they’re tied to un-informative URL names, and see if any are causing duplicate content or being a major drag on crawl budget. Don’t worry about utm parameters, which are related to Google Analytics and well understood by Google.
One way to find the URLs with the parameter is to Google “inurl:?yourparameter= site:yoursite.com”. This will give you an indication of which parameters are getting indexed. The other method is to look for the parameters in the results of a site crawl.
This analysis may lead you to identifying non-canonical URLs; if so, you’ll want to apply duplicate content fixes. You can also configure the parameters in GWT, but this typically should only be a band-aid instead of a permanent fix, and (as noted) should always be done with much caution.
Well, I hope this Webmaster Tools guide is useful. My favorite reports are Index Status, Sitemaps, and ― of course ― Search Queries. What are yours?
Are there any reports or functions of GWT you’d like me to cover? Got a hot Webmaster Tools tip you want to share? Leave a comment. I’ll try to update and improve this guide in the future.
About Reid Bandremer
Reid Bandremer is a Senior Search Project Manager. His background before joining LunaMetrics in 2011 includes eCommerce marketing experience and a pair of business degrees. He is a rabid fan of music and holistic, ROI-driven search marketing strategy. Other strengths include organic search marketing segmentation, migrations, and metrics. Contrary to popular theory, Reid is not homeless – he just often stays at the office late because he is obsessed with increasing traffic value to clients’ sites.