
Author Archive

Find More Precise Keyword Data for Organic Clicks to PDFs

If you’re working with PDF-heavy websites and haven’t had the opportunity to set up server-side PDF tracking with Google Analytics, you’re likely missing out on a great deal of organic traffic data. Sure, you can use event tracking to record internal clicks to PDFs and other downloadable resources, but you can’t capture keyword data (or any other data, for that matter) for organic visits that land directly on a PDF and show up as “direct.” While the server-side tracking option is optimal—in that you can track the associated visit—there is, in fact, another way to recover more precise keyword data for clicks (which are different from visits) to PDFs from organic search.
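For context, the server-side option amounts to recording a hit on the server before the PDF is delivered. Below is a minimal sketch of that idea, assuming Universal Analytics and its Measurement Protocol; the property ID, file path, and script name are placeholders, not part of the original post.

<?php
// pdf-tracker.php (hypothetical name): a rough sketch of server-side PDF
// tracking via the Universal Analytics Measurement Protocol. Requires the
// cURL extension; the property ID and PDF path are placeholders.
$propertyId = 'UA-XXXXXX-Y';
$pdfPath    = '/whitepapers/example.pdf';
$clientId   = uniqid(); // a real implementation would reuse the visitor's GA client ID
// Build a virtual-pageview payload per the Measurement Protocol.
$payload = http_build_query(array(
    'v'   => 1,                  // protocol version
    'tid' => $propertyId,        // tracking ID
    'cid' => $clientId,          // client ID
    't'   => 'pageview',         // hit type
    'dp'  => $pdfPath,           // document path
    'dt'  => 'PDF: example.pdf', // document title
));
// Send the hit to Google Analytics, then hand the visitor the PDF.
$ch = curl_init('https://www.google-analytics.com/collect');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);
header('Location: ' . $pdfPath);
exit;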

Google AdWords Paid & Organic Report

(more…)

Five HTTP Header Fields Every SEO Should Know

When it comes time to conduct technical analyses of our clients’ websites, we can gain a great deal of insight by reviewing the HTTP response headers returned when we issue requests to their servers. Whether we’re checking for chained redirects or working to identify inconsistent canonical URLs, we benefit from understanding the myriad HTTP header fields returned in each server response. Let’s look at five HTTP response header fields that are particularly pertinent to our efforts as SEOs.

HTTP Response Header for Google.com

1. Status

While the response status isn’t so much a header field as it is part of the Status-Line in an HTTP response, we can think of it as a specific piece of pertinent information (like a field) that we ought to understand. The status comes in the form of an HTTP status code, which gives us immediate feedback on the status of the requested resource.
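If you’d like to check a Status-Line without reaching for a browser extension, here’s a quick sketch using PHP’s built-in get_headers() function (the URL is only an example). Because get_headers() follows redirects by default, a chained redirect shows up as successive status lines in the same array.

<?php
// A minimal sketch: pull back the Status-Line and the rest of the
// response header with get_headers(). Example URL only.
$url = 'http://www.google.com/';
$headers = get_headers($url);
// The first element is the Status-Line, e.g. "HTTP/1.1 200 OK".
echo 'Status: ' . $headers[0] . "\n";
// The remaining elements are the header fields themselves.
foreach (array_slice($headers, 1) as $field) {
    echo $field . "\n";
}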

(more…)

Link Intersect from the Command Line

With Moz’s Link Intersect tool on the mend, I thought it might be interesting to utilize the power of the Mozscape API to build a simple command line tool with similar functionality; let’s call it Links in Common. While it’s hardly a robust competitive link research tool, I’ve found it great for generating quick “intersection” reports. In this post, I’ll walk through the setup of Links in Common (on OS X) and provide a couple of examples of its usage in the wild.
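Before we get to setup, the core idea behind an intersection report is easy to state: pull the top linking pages for two targets, then keep only the sources they share. Purely to illustrate that step (a rough PHP sketch with made-up URLs, not the script’s actual Ruby), it boils down to something like this:

<?php
// Not the Links in Common script itself (that's Ruby); just a rough PHP
// illustration of the intersection step, with made-up linking pages.
$linksToTargetA = array(
    'http://example.com/resources/',
    'http://example.org/blog/link-roundup/',
    'http://example.net/tools/',
);
$linksToTargetB = array(
    'http://example.org/blog/link-roundup/',
    'http://example.net/tools/',
    'http://example.info/directory/',
);
// Pages that link to both targets: the links in common.
$inCommon = array_intersect($linksToTargetA, $linksToTargetB);
foreach ($inCommon as $url) {
    echo $url . "\n";
}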

Initial Setup

Links in Common consists of one Ruby file, which you can grab here. Open it in your editor of choice and give everything a quick look-over. If you’ve played with the Mozscape API in the past, much of the code will look familiar (as it is featured in one of the Ruby samples provided by the Moz dev team). You’ll notice that we’re using environment variables for the Moz ACCESS_ID and SECRET_KEY values. You should do the same if you plan to share your code publicly, as you don’t want the public to have access to your API credentials. If you have no intention of sharing your code, you can simply paste in your ACCESS_ID and SECRET_KEY values where the environment variables are set.

Moz API Credentials

A bit below these assignments, you’ll see the Link Metrics request parameters. You can adjust these if you’d like to customize your response beyond the current implementation. (Learn more about each of the parameters here.) Right now, our request is configured to retrieve the top 150 external followed links to the subdomain of the target URL, sorted by Page Authority. This means that our response will contain information about individual pages, which is great for the purpose of pinpointing link intersections. (more…)

Using iCurl for Technical SEO On the Go

Analyzing source code and checking HTTP response headers are two of the modern SEO’s integral functions. If you’ve become accustomed to smashing your keyboard (command + option + U in Chrome on Mac) or right clicking to access the page source in your browser, you’ve undoubtedly felt the pain of being an SEO on the go. Every so often—be it out of personal interest or requisite urgency—I’ve sought the ability to view the source HTML and HTTP response headers (in tandem) from my iPhone. (If you’re interested in checking just the page source from your mobile browser, the bookmarklet solution seems promising.)

Enter iCurl for iPhone

In the past, I’ve talked a bit about using cURL for checking server response headers. Now, I’ll talk about using iCurl, “the easy-to-use cURL utility for iPhone,” for conducting basic technical analyses on the go. You can download iCurl from the App Store for the reasonable price of . . . FREE. Fair warning: It provides a good amount of advanced functionality. I’ll focus on the basics, for now, but I encourage you to play around with the more granular settings (e.g., Request Parameters, HTTP Header fields, Session Management, etc.) as you become more familiar with the app.

(more…)

3 Wishes for the Google+ Local Relocation Genie

A genie's lamp.

When Google first made the announcement that they’d be transitioning Google Places into Google+ Local, I was pretty darn excited. I’d dipped my big toe in the local water not long before, helping businesses verify their Places listings at an internship and then publishing a still applicable guide to Google Places. I couldn’t help but think that Google+ held new and exciting opportunities for business owners looking to communicate with existing and prospective customers. All in a beautiful new interface, too. That was the cherry on top.

It’s been a fair amount of time since the transition now, and I’m still a fan of the switch. Having recently gone through the process of updating LunaMetrics’ location information via the Places (or + Local?) Dashboard, though, I thought it an appropriate time to voice my concerns with the whole relocation process. Without any further ado, my three wishes: (more…)

PHP SEO: Page-Level Titles, Meta Descriptions, & More

When it comes to updating title tags, meta descriptions, canonical link elements, etc. on a page-by-page basis, we often rely on the power of the client’s CMS. Whether we’re using WordPress plugins or Drupal modules to get the job done, we generally have a process that is efficient and feasible. No tinkering with template files. No scouring the web for alternative solutions. Simple implementation – just the way we like it.

Content management systems with built-in SEO utilities are great. What happens, though, when you’re tasked with implementing all of the pertinent HTML elements, page by page, on a PHP-based website with a static <head>? Let’s dive right in.

1. Make that <head> dynamic!

In most cases, each static PHP file, be it index.php, contact.php, what have you, will reference the same header.php file via an include statement:

<?php include('header.php'); ?>

The include statement tells the server that any code within header.php should also be included in the file being requested. This way, we don’t have to write a lot of the same HTML on every content page. Instead, we have this one static file from which we can pull the necessary code. Note that the header.php file doesn’t necessarily contain only the HTML <head>. Generally, it will include any code that is reusable at the top of the HTML document throughout the website (including the logo, navigation, banner, etc.). Let’s look at an example of code we might find in header.php:
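The full example is in the post itself; as a bare-bones sketch, a dynamic header.php might start like the snippet below. The $pageTitle, $metaDescription, and $canonicalUrl variables are hypothetical names, set by each content page before the include runs.

<?php
// header.php (sketch): fall back to sensible defaults when a content
// page hasn't set its own values.
if (!isset($pageTitle))       { $pageTitle = 'Default Title | Example Site'; }
if (!isset($metaDescription)) { $metaDescription = 'Default description.'; }
?>
<!DOCTYPE html>
<html>
<head>
  <title><?php echo htmlspecialchars($pageTitle); ?></title>
  <meta name="description" content="<?php echo htmlspecialchars($metaDescription); ?>">
  <?php if (isset($canonicalUrl)): ?>
  <link rel="canonical" href="<?php echo htmlspecialchars($canonicalUrl); ?>">
  <?php endif; ?>
</head>
<body>
  <!-- logo, navigation, banner, etc. continue here -->

Each content page (index.php, contact.php, and so on) would then assign its own $pageTitle and $metaDescription just above its <?php include('header.php'); ?> line.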

(more…)

If That SEO Tool Didn’t Exist . . .

As SEOs, we sometimes take for granted the tremendous tools and resources that help us perform our jobs efficiently. In this post, we’ll look at three tasks that we perform regularly, the tools that help us perform them, and how we might replicate those tools’ functionality and results if they didn’t exist. Hold on for the ride; it’s time to get hypothetical.

1. Crafting Page Titles

In a recent post on misconceptions within the SEO industry, I talked about the mythical character cut-off for page titles, highlighting SEOmofo’s width-based snippet testing tool as a viable alternative to JavaScript character counters. This tool is useful in that it demonstrates that title cut-off in Google SERPs is not a function of character count, but rather of the pixel width of the text. Pretty great, right?

Google SERP title pixel width

We can fit 128 ‘i’s into a page title

I got to thinking: if this tool didn’t exist, how could I verify that a title fits within the allotted space? Beyond building my own tool, there has to be some hacky, inefficient way to get this done. And so we turn to Chrome’s Developer Tools and/or Firebug for Firefox. To avoid dealing with the <em> tags that Google uses to mark up query terms that appear in result titles (bolded keywords, in other words), we start with a simple site search. You can use any domain you please. For this example, we’ll be using Google.com.

Once we’ve executed our site search, we can right-click on any of the result titles and choose ‘Inspect Element.’ From here, we can edit the text within the result anchor tags, testing to see whether or not our title will overflow the allotted width. We’ll be testing the title, ‘Monkey Bar Conundrum | Donkey Kong Libation or Playground Equipment?‘ – a title that’s 68 characters in length.


Editing the result title

(more…)

Mozscape API Application Tutorial

Behind SEOmoz’s popular Open Site Explorer is a wealth of metric-defining data. The engineers at SEOmoz have consolidated and interpreted this data, forming well-known metrics, like Page Authority and Domain Authority, and delivering meaningful counts, like External Followed Links. As SEOs, we use these metrics almost daily, accessing them via OSE, the SEOmoz toolbar, and third-party applications (like HubSpot).

mozscape api

In this tutorial, we’ll examine the practice of accessing SEOmoz metrics via your own applications (like our Luna Link Rover), making use of the free version of the Mozscape API and the provided PHP Signed Authentication Example. Prior programming experience isn’t a must, but it certainly won’t hurt. All you’ll need to complete this tutorial is an SEOmoz account, a text editor (we use Sublime Text 2 in this tutorial), and a web server (for testing your application). If you don’t have access to a hosting account, you can set up a local web server using XAMPP (Windows) or MAMP (Mac).
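To give you a feel for what signed authentication involves before we dig in, here’s a stripped-down PHP sketch of the same idea. It is not the Moz-provided sample verbatim: the credentials are placeholders, and the Cols bit flags should be double-checked against the Mozscape documentation.

<?php
// A stripped-down sketch of Mozscape signed authentication in PHP.
// The credentials below are placeholders for your own values.
$accessId  = 'member-xxxxxxxxxx';
$secretKey = 'your-secret-key';
$expires   = time() + 300; // the request is valid until this Unix timestamp
// Signature: HMAC-SHA1 of "AccessID\nExpires" keyed with the secret key,
// then base64- and URL-encoded.
$signature = urlencode(base64_encode(hash_hmac('sha1', $accessId . "\n" . $expires, $secretKey, true)));
// Cols is a bit-flag sum naming the metrics you want back. These two flags
// are Page Authority and Domain Authority per the Mozscape docs (verify).
$cols = 34359738368 + 68719476736;
$target = urlencode('www.lunametrics.com');
$requestUrl = 'http://lsapi.seomoz.com/linkscape/url-metrics/' . $target
            . '?Cols=' . $cols
            . '&AccessID=' . urlencode($accessId)
            . '&Expires=' . $expires
            . '&Signature=' . $signature;
// Fetch and decode the JSON response.
$response = json_decode(file_get_contents($requestUrl), true);
print_r($response);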

(more…)

4 Common SEO Misconceptions

Chances are good that you’ve read your fair share of ‘SEO Misconceptions’ posts. You know — posts wherein authors debunk common misconceptions that outsiders hold about SEO as an industry. Most recently, Bill Slawski and Will Critchlow offered us a fabulous rebuttal to Paul Boag’s article, “The Inconvenient Truth About SEO.” (A must-read, I might add.)

The misconceptions outlined in this post are a bit different, though: they’re often held by those within the SEO industry. While following best practices is generally a safe bet, doing so can cloud our understanding of the way things really work. In this post, we’ll look at three misconceptions born of SEO best practices and one that’s a bit more a matter of causal oversimplification. Let’s begin.

1. Title tag and meta description cut-off occurs at a given character count

Truth be told, keeping your title tags under 65-70 characters is generally an advisable practice. However, it’s important to understand that it isn’t the number of characters in your title that determines whether or not Google cuts it short in the search results, but rather the pixel width of the title itself. The same holds true for the meta description. As Barry Schwartz posted recently, this is a fact that seems to have gone largely unnoticed (likely because there’s been no official confirmation from Google).
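If you want to see the pixel-width point for yourself, a rough estimate is easy to produce. The sketch below assumes PHP’s GD extension with FreeType support, an Arial TTF at a hypothetical path, and a guessed font size, so it only approximates what Google actually renders.

<?php
// A rough pixel-width estimate; the font path and size are assumptions.
$fontFile = '/path/to/arial.ttf';
$fontSize = 16; // in points; roughly SERP-title size (a guess)
function estimateWidth($text, $fontSize, $fontFile) {
    // imagettfbbox() returns the text's bounding box; width is x-max minus x-min.
    $box = imagettfbbox($fontSize, 0, $fontFile, $text);
    return $box[2] - $box[0];
}
$narrow = str_repeat('i', 68); // 68 narrow characters
$wide   = str_repeat('W', 68); // 68 wide characters
echo estimateWidth($narrow, $fontSize, $fontFile) . 'px vs. '
   . estimateWidth($wide, $fontSize, $fontFile) . "px\n";

Same character count, very different widths, which is exactly why one title of 68 characters survives intact while another gets truncated.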

(more…)

Building a Search Engine with Udacity

For SEOs, the web crawler is a powerful tool. When conducting technical audits, competitive analyses, what have you, we use web crawlers (like Xenu’s Link Sleuth or Screaming Frog’s SEO Spider) to navigate internal linking structures and collect data. These handy utilities take much of the human effort out of discerning top-level page attributes. Feed in a starting URL and—should fortune favor the HTML—you’ll receive the titles, meta descriptions, server response codes, etc. of a healthy selection (if not all) of the website’s URLs (not to mention the URLs themselves).

web crawler architecture
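To make the crawler’s job concrete, here’s a toy, single-page version of the fetch-and-parse step in PHP. It isn’t Udacity’s code or a real crawler (no queue, politeness delays, or robots.txt handling), and the URL is just an example.

<?php
// Fetch one page, report its status, title, meta description, and links.
$url = 'http://www.example.com/';
// file_get_contents() fetches the page; PHP fills $http_response_header
// with the raw response headers, so element 0 is the status line.
$html   = file_get_contents($url);
$status = $http_response_header[0];
$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings from imperfect markup
$titleNode = $doc->getElementsByTagName('title')->item(0);
echo "Status: $status\n";
echo 'Title: ' . ($titleNode ? trim($titleNode->textContent) : '(none)') . "\n";
foreach ($doc->getElementsByTagName('meta') as $meta) {
    if (strtolower($meta->getAttribute('name')) === 'description') {
        echo 'Description: ' . $meta->getAttribute('content') . "\n";
    }
}
foreach ($doc->getElementsByTagName('a') as $a) {
    echo 'Link: ' . $a->getAttribute('href') . "\n"; // candidates to crawl next
}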

Having near-immediate access to these various page attributes is valuable for a number of reasons. We can efficiently:
(more…)