Creating a Test Plan for Google Analytics Implementations



To use the data you collect from your website, you have to know how that data reflects user actions, and you have to have confidence that your website is tracking correctly. A Test Plan is a process you can use to make sure that you receive the data you expect from your Google Analytics implementation.

Like any formal process, a Test Plan creates overhead, and in many cases it’s just writing down what you’re already doing ad hoc. Nonetheless, a Test Plan provides several benefits that are especially valuable for websites that are large, fast-moving, or have a lot of cooks in the kitchen.

Most importantly, a well-written Test Plan can be verified by someone who is not well-versed in Analytics. This allows different teams, like developers or project managers, to make updates to the website with confidence that they’re not breaking tracking, without making the analyst a bottleneck. It’s also useful to keep an archive of completed Test Plans with dates, to verify how things were tracked at particular points in time.

Test Plans are often used by a Quality Assurance team as a final check before changes are promoted to a live website.

A Test Plan consists of two essential components, and one optional component:

  1. A series of user actions on a website
  2. The data you expect to see from Step 1, including a list of hits and the data that should be included in each hit
  3. (Optional) The tool(s) you will use to verify Step 2
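As a sketch, those three components map naturally onto a small data structure. Every name and value below is an illustrative assumption, not a prescribed schema:

```python
# Illustrative sketch of a Test Plan as plain data. All field names and
# values are hypothetical examples, not a required format.
test_plan = {
    "name": "Email sign-up form",
    "steps": [  # 1. user actions, in order
        "Visit /newsletter",
        "Fill out the form",
        "Click Submit",
    ],
    "expected_hits": [  # 2. hits you expect, with the values you care about
        {"type": "pageview", "page": "/newsletter"},
        {"type": "event", "category": "Forms", "action": "Submit"},
        {"type": "pageview", "page": "/newsletter/thanks"},
    ],
    "tools": ["GA Debugger", "Real-Time reports"],  # 3. optional
}

# A quick sanity check that every expected hit declares its type.
assert all("type" in hit for hit in test_plan["expected_hits"])
```

Writing the plan down in a structured form like this also makes it easy to hand to a tester who doesn’t know Analytics.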

It’s usually best to create separate Test Plans that each cover a separate component of your website. For example, one test plan might cover email sign-ups; another might cover registering for a training; and a third might cover contacting us about our services.

User Interactions

The first part of a Test Plan takes some user interaction that you want to track, and breaks it down into steps. For example, for a Form Submission the steps would be:

  1. User visits a page with a form
  2. User fills out form
  3. User clicks Submit
  4. Website directs user to a thank-you page

A more complicated Test Plan might cover the full end-to-end process of buying a product, including entering a search term, refining results, adding to cart, and checking out. These interactions would be combined into a single Test Plan, rather than broken out into separate Test Plans, because there is data (such as the Product Name) that must be sent multiple times, and must be consistent every time.
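One concrete check for such an end-to-end plan is that shared values stay consistent across hits. A minimal sketch, using Measurement Protocol-style parameter names (`pr1nm` is the name parameter for the first product; the captured hits themselves are invented for illustration):

```python
# Hypothetical hits captured during an end-to-end purchase test.
# 'pr1nm' is the Measurement Protocol parameter for the first product's name.
hits = [
    {"t": "event", "pa": "add", "pr1nm": "Blue Widget"},       # add to cart
    {"t": "pageview", "dp": "/checkout"},                      # no product data
    {"t": "event", "pa": "purchase", "pr1nm": "Blue Widget"},  # purchase
]

# The product name must be identical on every hit that carries one.
names = {h["pr1nm"] for h in hits if "pr1nm" in h}
assert len(names) == 1, f"Inconsistent product names: {names}"
```

If the add-to-cart hit said “Blue Widget” and the purchase hit said “blue-widget,” your product reports would split that product in two, which is exactly the kind of break a combined Test Plan catches.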

If your website is developed using an Agile methodology, your dev team may already have User Stories associated with site features. It’s best to use these if they exist, so that your Test Plans align with the development and testing that’s already being done.

Additionally, consider what should stay within the scope of the Test Plan. Submitting a form and seeing the information appear in Google Analytics is just one piece. Will the tester be able to verify that the form has kicked off the appropriate processes to email the correct people and deposit the form information into the correct systems? This will depend on the scale of your organization, who is actually doing the testing, and the type of changes that are being made.

Data You Expect to See

For each step of the User Interaction, list all the hits you expect to be sent to Google Analytics (if any!), along with the data each hit should include. Applying this to the above example:

  1. Pageview hit with a URL unique to the form, or the page where the form lives
  2. No hits as the user enters data
  3. Event hit on successful form submission.
    • Event Category is “Forms”
    • Event Action is “Submit”
    • Event Label is the name of the form
    • Custom Dimension 3 is the value the user selected for the “Industry” dropdown
  4. Pageview hit with a URL different from the pageview in step 1
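The expected hits above translate directly into checks on the Measurement Protocol payload the browser sends to `/collect`. The payload string below is a fabricated example; `ec`/`ea`/`el` are the event parameters and `cd3` is Custom Dimension 3:

```python
from urllib.parse import parse_qs

# A fabricated example of the query-string payload a GA event hit sends
# to /collect. Real payloads carry many more parameters (cid, z, etc.).
payload = "v=1&t=event&ec=Forms&ea=Submit&el=Contact%20Form&cd3=Healthcare"
hit = {k: v[0] for k, v in parse_qs(payload).items()}

# Check only the values the Test Plan cares about.
assert hit["t"] == "event"
assert hit["ec"] == "Forms"         # Event Category
assert hit["ea"] == "Submit"        # Event Action
assert hit["el"] == "Contact Form"  # Event Label: the name of the form
assert hit["cd3"] == "Healthcare"   # Custom Dimension 3: Industry dropdown
```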

Your Test Plan should only cover the values that you care about and expect to change. Google Analytics sends many extra values on every hit, like the client ID and the cache-buster. Your most important values are your Event schema, your URL, your product information for ecommerce implementations, and your custom dimensions.

Make sure you indicate how specific values must be. In the above example, if there are no other forms on the site, then it doesn’t matter much whether the Event Action is “Submit,” “Submission,” or “Form Submitted Successfully.” However, if another form on the site already uses “Submit” as its Event Action, then this form must use that exact value.

Data You Don’t Expect To See

As a side note, it’s sometimes almost as important to think about what information you expect not to see. Notice that in the list above, I included “No hits as the user enters data.” Consider where things could go wrong, and where extra or incorrect information might accidentally be included.

If there are two forms on the page, make sure your new tracking only works for the expected form. If you’re trying to track PDF downloads, test a link where you expect to see an event recorded, a PDF link, and then also test a link where you don’t want to see anything recorded, like a DOC link.
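The PDF example can be framed as paired positive and negative tests on whatever logic decides when to fire the event. The predicate below is a hypothetical stand-in for that logic; the point is that the Test Plan asserts both outcomes:

```python
def should_track_download(href: str) -> bool:
    """Hypothetical predicate: fire a download event only for PDF links."""
    return href.lower().endswith(".pdf")

# Positive case: a PDF link should produce an event.
assert should_track_download("/files/report.pdf") is True

# Negative case: a DOC link should produce no hit at all.
assert should_track_download("/files/report.doc") is False
```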

List out these caveats and include them in your test plan for others to follow.


Tools To Help Validate

Now that you’ve described what values you expect to see, you need to explain how you expect to see them. If your Test Plan is something you expect to execute yourself, then the proof that tracking works correctly may simply be viewing the Real-Time reports in Google Analytics. Verifying your data in Google Analytics is strongly preferred whenever possible; however, keep in mind that there may be other factors at play, like view filters and delays in processing.

If your Test Plan will be executed by someone else, you may need to instruct them to use a different tool. The appropriate tool will depend on their technical skill and what permissions they have, but good choices include the GA Debugger or Tag Assistant Chrome Extensions; the GTM debug panel; Charles Proxy; and the Network tab in their browser’s developer tools.

The last part of the proof step is choosing how to document what you found. Sometimes you don’t need to do anything: seeing the data in your reports is enough documentation itself. Sometimes it’s useful to save screenshots or paste values into a spreadsheet, especially if you might later need to dig up the exact value that was sent. Some tools, like Tag Assistant and Charles, have built-in recording and save functions.

For example, Google’s Tag Assistant is a great tool for this type of testing. Just before your test, click the “record” button on the extension, follow the steps for your test, then click stop recording. The extension produces a report of all the hits sent to Google Analytics, which can be downloaded and saved with your results.

Saving proof is also important if someone else is doing the checking, so you can double-check their work afterwards.


A little bit of process goes a long way toward instilling trust in data. A Test Plan is a small, simple process that helps you keep your data rolling in the way you expect, even as your site grows and evolves. A structured, commonly accepted process that is followed with every change will help prevent broken data and incorrectly tracked items on your website, saving time and money in the long run. A Test Plan may start small, but it can be as elaborate as needed to verify that changes were completed correctly.

Logan Gordon is a Consultant with seven years of experience in Web Analytics, where he has helped many companies become more data-driven. He drove to Pittsburgh from California and is making up for not playing in the snow as a kid. His interests include Bulgarian language (due to his wife), cooking, Bulgarian cooking, and science fiction. His only pet is Bob, a dead shark in a jar.

  • Yehoshua Coren

    While a Test Plan and a well-documented QA process are a very important part of my own implementation work, I find one of my biggest pain points is the lack of automation within the QA. The article above is spot on in many ways, but it doesn’t address how the creation of the Test Plan and the *repeated* need to run through those tests manually are a huge use (waste?) of time. I’m speaking as someone who uses manual tools such as those listed above to validate implementations all the time. The more manual data validation I do, the more I feel that I’m going about things all wrong and that I’m just wasting my time.

    To that end, what automated tools or processes do you (or other readers) suggest for this sort of work? Personally, I haven’t done my due diligence when it comes to researching solutions, but I am a firm believer that there MUST be a better way than to look at hit payloads manually and mark things down in a spreadsheet. Hub’scan? ObservePoint? Jan Exner’s or Simo & David’s work with Selenium?

    • Logan Gordon

      Hi Yehoshua,

      I absolutely agree: repeated testing can be a huge time sink, and automation would be a big benefit. Unfortunately, the tools for automating this sort of work tend to be heavyweight. I assume that’s because, in the general case, you need a headless browser that can execute JavaScript, and those tend to be heavy themselves, which drives up the infrastructure requirements. The tools you mention are definitely worth their cost in terms of saved labor, but they’re also out of reach for many projects.

      The best sweet-spot that I’ve been able to find is using a tool like Charles that can automatically record a session in a text format, and then writing scripts that run checks against that saved session. This isn’t full automation, but it does cut down the amount of work by a good chunk.
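      As a hedged sketch of that semi-automated approach: export the session’s request URLs (one per line), then script the checks against them. The session lines, URLs, and expected values below are all invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical lines from a saved session export: one request URL per line.
session_lines = [
    "https://www.google-analytics.com/collect?v=1&t=pageview&dp=%2Fcontact",
    "https://www.google-analytics.com/collect?v=1&t=event&ec=Forms&ea=Submit",
]

# Keep only GA collect hits and parse each payload into a dict.
hits = []
for line in session_lines:
    url = urlparse(line.strip())
    if "google-analytics.com" in url.netloc and url.path == "/collect":
        hits.append({k: v[0] for k, v in parse_qs(url.query).items()})

# Run the Test Plan's checks against the captured session.
events = [h for h in hits if h.get("t") == "event"]
assert len(events) == 1
assert events[0]["ec"] == "Forms" and events[0]["ea"] == "Submit"
```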
