
Archive for the ‘Search Engine Optimization’ Category

From Brain To Blog – How to Work with Industry Experts


The Scenario:

You’re developing web content for an industrial B2B company that has 20+ employees, most of whom are experts in their field. Their premier product is highly technical and in a niche market that is traditionally offline, as are many B2B businesses. Their site and its audience are relatively small, but growing rapidly.

Because this particular industry was so slow to migrate to a digital world, the competition isn’t incredibly high. This means that, hypothetically, keyword-targeted high-quality content rich with expert information could easily rank well.


"Writing Is Hard" Charlie Brown Comic Strip

It sounds simple, but getting a handful of industry veterans to contribute content by writing down the wealth of knowledge in their brains is no easy task. The expert’s first question is typically, “What do you want to know?” It’s a valid question. If I worked in the industry for 20+ years and someone asked me to write, I’d likely respond with a similar remark.

So how does one coax information from an expert’s brain? This question becomes more challenging when you are not an expert in the industry yourself. Here are a few of my tried-and-true solutions: (more…)

What Should You Do With Old Content? Part 3 of 3


PART ONE | PART TWO | PART THREE

Making the Right Choice – Part 3 of 3

In the two previous parts of this three-part series, we discussed the issues facing us as we evaluate potentially outdated content, and we investigated options to handle that content. In part 3, we discuss how to pick the right option.

Matching Option to Scenario(s)

By now, you should have answers to important questions like, “How much effort is this worth?”, “What are my SEO needs?”, and “What are my UX issues?”

You can now use the table below, which shows how each option for handling old content affects labor costs, SEO, and UX.

[Table: options for handling old content, compared by labor cost, SEO impact, and UX impact]

(more…)

What Should You Do With Old Content? Part 2 of 3


PART ONE | PART TWO | PART THREE

Options for Dealing with Old Content

This post is part of a series on how to handle old and outdated content. Part 1 focused on your internal resources and the reasons you may want to update old content. Part 2 focuses on the six types of options you have for updating old content, and Part 3 will help you make the right decisions.

As you identify problem pages, whether they’re outdated, incorrect, or no longer relevant, you can also start thinking about the best way to fix these pages. (more…)

What Should You Do With Old Content? Part 1 of 3


PART ONE | PART TWO | PART THREE
Answer: It depends. But don’t ignore it. 

Don’t ignore your old and potentially outdated content. You don’t yet know whether it is a huge burden or a huge opportunity for your site. Your old pages might also be where the majority of your audience lands; in October 2014, for example, about two-thirds of the traffic to our blog went to articles published prior to 2014.

Many folks take the “set it and forget it” approach to content (and to blogs in particular), spending a ton of time creating it, yet never revisiting it. This is a shame – there are potentially huge returns to investing time in revisiting-and-revising the old stuff. (I can personally attest to said returns, as we’ve seen plenty of success addressing old content for our clients). So do something.

You should carefully consider several options for handling old content. In this series, I’ll lay out those options and suggest a framework for choosing the most appropriate method for dealing with it. Part 1 begins with considerations.

(more…)

Search Lessons From Google’s Eric Schmidt


Eric Schmidt, executive chairman (and longtime CEO) of Google, knows a thing or two about managing a growing business in the 21st century. He also knows a little bit about search.

To promote his new book How Google Works, co-written by Jonathan Rosenberg, he released a terrific little slidedeck summarizing the company’s approach to work (full slidedeck at the bottom of this post).

Though the book and the slideshow are primarily aimed at a management audience, these lessons are very relevant to those of us in the search world as well. (more…)

10 Years of Digital Marketing Trends in 10 Graphs


Each year about this time, digital marketers are bombarded by reviews of industry trends and projections of what’s to come. It’s all “5 Content Marketing Lessons from 2014” and “Secret Strategies for SEO Success in 2015.”

This is not one of those posts.

Leave your marketing plans in the drawer and forget your keyword research docs. This won’t help you with them. Think of it more like hump day material to take a break from the inbox and reflect on how much our industry has changed over 10 years.

Here are 10 graphs from Google Trends (focused on the US for consistency and accuracy) that illustrate how we have evolved and why we will all have “hacker” or “influencer” in our job title some day. (more…)

Taking Advantage of Semantic Search NOW: Understanding Semiotics, Signs, & Schema


Semantic Search. I imagine saying it five times into a mirror conjures an effect similar to the horror classic Candyman. It’s all anyone in the Search world is talking about on blogs, at conferences, and in hushed whispers in the break rooms of agencies.

Yes, the future is coming, and it is semantic. Some of it is already here. Let’s take advantage of it! Many posts just like this one focus solely on the how, but today I’m going to switch it up and give you the why.

Google’s Hummingbird release, as documented by our own Andrew Garberson, changed the search game in a major way. Not only did (not provided) significantly alter the data available to search marketers, but Hummingbird also signaled a major learning leap on the part of Google.

Search engines are no longer confined to a toddler-level reading ability, wherein a term is just a term unto itself and needs endless repetition (read: keyword stuffing). Hummingbird signals a shift toward a first-grade reading level: placing words in context and taking educated guesses at synonyms, meanings, and full language understanding.

Example: “hot dog” and “hotdog” meant different things to pre-Hummingbird search, but could easily be synonyms to the current technology.

It’s clear that the concept of a singular keyword is dying, if not dead. (more…)

Using Latent Dirichlet Allocation to Brainstorm New Content


I recently had a problem with my client – I ran out of things to write about. The client, a chimney sweep, has been with our company for 3 years, and in that time we have written every article under the sun informing people about chimneys, the issues they cause, potential hazards, and optimal solutions. All of that writing has worked, and worked well: we have seen year-over-year traffic increases of over 100%. The challenge now is to keep that momentum.

Brainstorming sessions weren’t working. They looked more like a list of accomplishments than of new ideas. Each new idea seemed like we were slightly changing an already successful article written in the past. I wanted something new and I wanted to make sure it was tied to a strategy. Tell me if this sounds familiar!

So I internalized the problem. I let it smolder and waited for the answer. Then, while reflecting on the effects of website architecture and content consolidation, topic modeling popped into my head. If I could scrape the content we’ve already written and throw it into a Latent Dirichlet Allocation (LDA) model, I could let the algorithm do the brainstorming for me.

For those of you unfamiliar with Latent Dirichlet Allocation, it is:

“a generative model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. For example, if observations are words collected into documents, it posits that each document is a mixture of a small number of topics and that each word’s creation is attributable to one of the document’s topics.” -Wikipedia

All that basically says is this: a website has a lot of articles, each of those articles relates to a topic of some sort, and by using LDA we can programmatically determine what the main topics of the website are. (If you want to see a great visualization of LDA at work on 100,000 Wikipedia articles, check this out.)

So, by applying LDA to our previously written articles, we can hopefully find areas to write about that will help my client be seen as more authoritative in certain topics.
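If you’re comfortable with a little Python, you can kick the tires on this idea before committing to any particular tool. Here’s a minimal sketch using the gensim library (not the GUI tool I describe below); the documents list is just a stand-in for your own scraped article text:

```python
# A minimal LDA sketch with gensim; `documents` stands in for scraped article text.
from gensim import corpora, models

documents = [
    "chimney cleaning and creosote removal tips",
    "how to repair a cracked chimney crown",
    "fireplace safety checklist for winter weather",
]

# Tokenize each article and build the dictionary / bag-of-words corpus LDA expects.
texts = [doc.lower().split() for doc in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Fit an LDA model and print the most probable words for each topic.
lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_topics=3, num_words=5):
    print(topic_id, words)
```

With real articles you would use far more documents and topics, but the workflow is the same: tokenize, build a corpus, fit the model, and read the top words per topic.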

So I got to researching. The two tools I found which allowed me to quickly test this idea were a content scraper by Kimono and a Topic Modeling Tool I found on code.google.com.

Scrape Content With Kimono

Kimono has an easy-to-use web application that uses a Chrome extension to train the scraper to pull certain types of data from a page. You can then give Kimono a list of URLs that have similar content and have it return a CSV of all the information you need.

Training Kimono is easy; data selection works much like the magnifying glass feature of many web dev tools. For my purposes, I was only interested in the header tag text and body content. (Kimono does much more than this; I recommend you check them out.) Kimono’s video about extracting data will give you a better idea of how easy this is. When it’s done, Kimono gives you a CSV file you can use in the topic modeling tool.
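If Kimono isn’t an option for you, a short requests + BeautifulSoup script can do the same basic job. This is a rough sketch; the URLs and CSS selectors are hypothetical, so adjust them to your blog’s markup:

```python
# A do-it-yourself stand-in for the Kimono scrape: pull the header and body
# text from each article and save them to a CSV.
# The URLs and CSS selectors are hypothetical; adjust them to your own blog.
import csv

import requests
from bs4 import BeautifulSoup

urls = [
    "https://example.com/blog/chimney-cleaning-tips/",
    "https://example.com/blog/fireplace-safety-checklist/",
]

with open("articles.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "header", "body"])
    for url in urls:
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        header = soup.select_one("h1")
        body = soup.select_one("div.entry-content")
        writer.writerow([
            url,
            header.get_text(strip=True) if header else "",
            body.get_text(" ", strip=True) if body else "",
        ])
```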

Compile a List of URLs with Screaming Frog

Next, I needed a list of URLs for Kimono to scrape. Screaming Frog was the easy solution for this. I had Screaming Frog pull a list of articles from the client’s blog, then plugged those into Kimono. You could also use the page path report from Google Analytics.

Here is what that process looks like:
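If you’d rather script that hand-off than copy and paste, a rough sketch is below. The file name is hypothetical; Screaming Frog’s exports list each crawled URL in an “Address” column:

```python
# Pull the URL column out of a Screaming Frog export so the list can be fed
# to the scraper. "internal_html.csv" is a hypothetical file name; the export's
# URL column is called "Address". (Older Screaming Frog versions add a
# report-title row above the header; delete that row first.)
import csv

with open("internal_html.csv", newline="", encoding="utf-8") as f:
    urls = [row["Address"] for row in csv.DictReader(f) if "/blog/" in row["Address"]]

print(len(urls), "blog URLs ready for scraping")
```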

Map Topics With This GUI Topic Modeling Tool

Many of the topic modeling tools out there require some coding knowledge. However, I was able to find this Topic Modeling Tool housed on code.google.com. The development of this program was funded by the Institute of Museum and Library Services to Yale University, the University of Michigan, and the University of California, Irvine.

The institute’s mission is to create strong libraries and museums that connect people to information and ideas. My mission is to understand how strong my client’s content library is and how I can connect them with more people. Perfect match.

Download the program, then:
1. Upload the CSV file from Kimono into the ‘Select Input File or Dir’ field.
2. Select your output directory.
3. Pick the number of topics you would like to have it produce. 10-20 should be fine.
4. If you’re feeling like a badass you can change the advanced settings. More on that below.
5. Click Learn Topics.

[Screenshot: Main Topic Modeling Interface]
[Screenshot: Advanced Settings Interface]

Advanced Options
Besides the basic options provided in the first window, there are more advanced parameters that can be set by clicking the Advanced button.

Remove stopwords – If checked, remove a list of “stop words” from the text.

Stopword file – Read “stop words” from a file, one per line. Default is Mallet’s list of standard English stopwords.

Preserve case – If checked, do not force all strings to lowercase.

No. of iterations – The number of iterations of Gibbs sampling to run.
Default is:
– For T < 500, default iterations = 1000
– Otherwise, default iterations = 2*T
Suggestion: Feel free to use the default setting for the number of iterations. If you run for more iterations, the topic coherence *may* improve.

No. of topic words printed – The number of most probable words to print for each topic after model estimation. The default is to print the top 10 words; the typical range is the top 10 to top 20 words.

Topic proportion threshold – Do not print topics with proportions less than this threshold value. A good suggested value is 5%. You may want to increase this threshold for shorter documents.
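If you end up scripting this step instead of using the GUI, the same knobs exist in code. Here’s a hedged sketch of rough gensim equivalents; gensim uses variational inference rather than Gibbs sampling, so the mapping is loose, and the values below simply mirror the defaults described above:

```python
# Rough gensim equivalents of the GUI tool's advanced options. gensim uses
# variational inference rather than Gibbs sampling, so this is only a loose
# analogue of the settings described above.
from gensim import corpora, models
from gensim.parsing.preprocessing import STOPWORDS

documents = [
    "chimney cleaning and creosote removal tips",
    "how to repair a cracked chimney crown",
    "fireplace safety checklist for winter weather",
]

num_topics = 15  # T, the number of topics requested

# "Remove stopwords" / "Preserve case": lowercase everything and drop stopwords.
texts = [[w for w in doc.lower().split() if w not in STOPWORDS] for doc in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# "No. of iterations": 1000 when T < 500, otherwise 2*T.
iterations = 1000 if num_topics < 500 else 2 * num_topics
lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
                      iterations=iterations)

# "No. of topic words printed" (top 10) and "Topic proportion threshold" (5%).
for topic_id, proportion in lda.get_document_topics(corpus[0], minimum_probability=0.05):
    print(topic_id, round(float(proportion), 3), lda.print_topic(topic_id, topn=10))
```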

Analyze The Output

The raw output is a list of keywords organized into rows, with each row representing a topic. To make analysis easier, I transposed these rows into columns. Then I put my marketer hat on and manually highlighted every word in these topics that directly related to services, products, or the industry. That looks something like this:

[Screenshot: topic spreadsheet with industry-related keywords highlighted]

Once I identified the keywords that most closely related to the client’s industry and offering, I eyeballed several themes that these keywords could fall under. I found themes related to Repair, Fire, Safety, Building, Home, Environmental, and Cleaning.

Once I had this list, I looked back through each topic column and added, above each LDA topic, the theme I felt best matched its words. That gave me a row of theme labels at the top of my topic columns, which I could sum using a COUNTIF function in Excel. The result is a count of how many topics fall under each theme.
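For what it’s worth, that COUNTIF step is easy to reproduce in code, too. Here’s a hypothetical pandas version; the file name and layout (one column per topic, with the hand-assigned theme label in the first row) are assumptions, and the theme names come from the list above:

```python
# Tally how many LDA topics were tagged with each theme, mirroring the Excel
# COUNTIF step. The file name and layout (one column per topic, theme label
# in the first row) are assumptions.
import pandas as pd

topics = pd.read_csv("lda_topics_with_themes.csv", header=None)

themes = ["Repair", "Fire", "Safety", "Building", "Home",
          "Environmental", "Cleaning"]
labels = topics.iloc[0]  # first row holds the theme assigned to each topic column

counts = {theme: int((labels == theme).sum()) for theme in themes}
print(counts)
```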

Obviously, this last part is far from scientific. The only thing remotely scientific about it is using Latent Dirichlet Allocation to organize words into topics. However, it does provide value. This is a real model rooted in math; I used actual blog content, not a list of keywords that came from a brainstorming session and Ubersuggest, and with a little intuition I got an idea of the strengths and weaknesses of my client’s blog content.

Cleaning is a very important part of what my client does, yet it does not have much of a presence in this analysis. I have my next blog topic!

Something To Consider

LDA and topic modeling have been around for 11 years now, and most search-related articles about the topic appeared between 2010 and 2012. I am unsure why that is, as all of my efforts so far have been put toward testing the model. Moving forward, I will be digging a little deeper to make sure this is something worth pursuing. If it is, you can expect me to report on a more scientific application, along with results, in the future.

The ABCs of Content – 26 Ways to Always Be Creating


In the lexicon of modern marketing, “content marketing” has become a rather popular phrase to bandy about. And it seems like everyone wants to sell you their foolproof recipe for success.

Today, I’m playing that game. My ridiculous line of buzzword-edition Marketing Magnetic Poetry is, “High ROI content marketing is a product of efficiency, synergy, and multi-tasking.” And my “secret sauce” to content creation is:

Always Be Creating content.

This is no secret to true master bloggers and content marketers; they’re 24/7 creators. I don’t include myself in such company, but the better I get at it, the better the return I see on time spent.

(more…)

The SMX East Panels You Shouldn’t Miss


I’ve got my pass for SMX East 2014 and I’m ready to go to one of the biggest Search Marketing conferences in the country. After scouring the agenda, I thought I’d share my top must-see panels with you, as well as give you this Twitter cheat sheet of key moderators to follow:

(more…)