Facebook changes pages

As Facebook rolls out the mandatory timeline for users, the company has now announced a similar update for pages. While bringing the whole of the site up to date with the new design, the changes for pages appear aimed at brand … Continue reading

Exploring the New Features in Bing Webmaster Tools

Posted by Daniel Butler

Bing recently announced some pretty cool new features within their Webmaster Tools, so in this blog post we are going to delve a little deeper to see exactly what these tools are capable of.

The Markup Validator (Beta)

[Screenshot: the Markup Validator (Beta) in Bing Webmaster Tools]

Found within the ‘Crawl’ tab of BWMT, the beta Markup Validator works in a similar way to the Google Rich Snippets Testing Tool, extracting the following elements from a specified URL:

  • Microdata
  • Microformats
  • RDFa
  • Schema.org
  • Open Graph

The inclusion of Open Graph is a nice touch, and I can see this coming in handy. Upon submitting a URL, we are presented with a neat extract of any featured markup. Let’s use imdb.com as an example:

[Screenshot: markup extracted from the IMDb example URL]
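
For a rough sense of what this kind of extraction involves (this is just an illustration, not how Bing's tool works under the hood), here is a minimal Python sketch that pulls the Open Graph tags out of a page using requests and BeautifulSoup; the example URL is a placeholder for any page carrying og: markup:

```python
# Minimal illustration of Open Graph extraction (not Bing's implementation):
# fetch a page and list any og:* meta tags it contains.
import requests
from bs4 import BeautifulSoup

def extract_open_graph(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    og_tags = {}
    for meta in soup.find_all("meta"):
        prop = meta.get("property", "")
        if prop.startswith("og:"):
            og_tags[prop] = meta.get("content", "")
    return og_tags

if __name__ == "__main__":
    # Example URL only - any page with Open Graph markup will do.
    for prop, content in extract_open_graph("http://www.imdb.com/title/tt0117500/").items():
        print(prop, "=", content)
```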

However, other than extracting elements from a page, there seems to be little actual validation taking place. There are no references to missing elements, for example, or to whether the markup could potentially generate a rich snippet.

Let's take a closer look at a URL with incomplete markup. In the following example, an “fn” field is missing from the hproduct element of a page, causing a flag to be raised within Google’s testing tool:

[Screenshot: warning raised in Google’s Rich Snippets Testing Tool]

However, pasting this same URL into the Bing Markup Validator just produces the below:

[Screenshot: the same URL in the Bing Markup Validator]

The URL being tested here contains hreview-aggregate and makes extensive use of hreview, but there are no references to either within the Bing validator, so the results are also incomplete.

I really want to like this tool, but I need jam in my Victoria sponge. As this is still in beta, fingers crossed for an update (or perhaps a rename).

Bing Keyword Research Tool

So Bing have finally released their own keyword tool:

[Screenshot: the Bing Keyword Research Tool]

Overview of features:

  • Broad/Exact (select ‘strict’ for exact) match keyword search volumes
  • 6 month data history (you can select any date range within this period)
  • Export data for a max of 100 keywords at a time
  • Filter by country and language
  • History feature to track previous research queries

It's a very clean and simple-to-use interface, but it's a shame that the data isn't yet available via an API, as there is going to be quite a bit of heavy lifting if you're running a substantial keyword research campaign. Nonetheless, we now have some data to play with from Bing directly.

There are a ton of awesome posts to check out on SEOmoz that go into detail about the keyword research process, so I’m not going to go into great detail here, but with the data available from Bing I would be looking to:

  1. Consolidate data into a single spreadsheet
  2. Obtain current rankings for each keyword in both Bing and Google
  3. Use the Google Adwords API to extract monthly search volume for each keyword
  4. Using Google Analytics, marry up keywords and associated traffic
  5. Break down keywords into meaningful categories
  6. Use pivot tables/charts to compile this data and identify key opportunities (low-hanging fruit) in both search engines (a rough code sketch of this consolidation follows below):

    1. Along one axis display separated search volumes for both Google and Bing, also traffic from analytics
    2. On the other axis display current ranking position in both Google and Bing
    3. Filter this chart by ranking between position 5 and 20.

For illustration purposes here is a quick mock up of how this can be developed:

[Screenshot: example pivot chart mock-up]

The values along the bottom represent specific keywords, which have been replaced with numbers for demonstration purposes.
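
If you'd rather script steps 1-6 than work entirely in Excel, here is a rough sketch using pandas, assuming you have already exported the Bing volumes, AdWords volumes, rankings, and analytics traffic to CSV; all file names and column headings are placeholders for your own exports:

```python
# Rough sketch: merge exported keyword data sets and pivot to find
# "low-hanging fruit" keywords ranking between positions 5 and 20.
# File names and column headings are placeholders for your own exports.
import pandas as pd

bing = pd.read_csv("bing_keyword_volumes.csv")        # keyword, bing_volume
adwords = pd.read_csv("adwords_volumes.csv")          # keyword, google_volume
rankings = pd.read_csv("rankings.csv")                # keyword, google_rank, bing_rank
analytics = pd.read_csv("analytics_traffic.csv")      # keyword, visits

# Steps 1-4: consolidate everything into a single table keyed on the keyword
df = (bing.merge(adwords, on="keyword", how="outer")
          .merge(rankings, on="keyword", how="outer")
          .merge(analytics, on="keyword", how="outer"))

# Step 5: break keywords into meaningful categories (simple example rule)
df["category"] = df["keyword"].str.split().str[0]

# Step 6: filter to positions 5-20 in either engine and summarise by category
fruit = df[df["google_rank"].between(5, 20) | df["bing_rank"].between(5, 20)]
summary = fruit.pivot_table(index="category",
                            values=["google_volume", "bing_volume", "visits"],
                            aggfunc="sum")
print(summary.sort_values("google_volume", ascending=False))
```

The resulting summary table can then feed a pivot chart like the mock-up above.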

Although the keyword data from Bing isn’t yet available within an API, Bing has released an API for the rest of the data within Webmaster Tools (looking forward to having a play around with this).

Look forward to hearing about your experiences using Bing’s latest tools.

Whew! That, my friends, was my first-ever SEOmoz post. Did I get round to introducing myself? I’m Dan, Senior SEO consultant at SEOgadget. I’d love to know what you think and how you’re using the new features in Bing’s toolset. Until the next time!


Google doodles Gioachino Rossini’s 220th

Gioachino Rossini was one of those rather unfortunate people who only officially have a birthday once every four years, being born in a leap year on February 29th. And Google has doodled to celebrate Rossini’s 220th, as the man was … Continue reading

Stop Paying for Stupid Clicks: Negative Keywords for Positive ROI

Posted by KeriMorgret

One of my guilty pleasures is looking through the search query reports (SQR) of an AdWords campaign for the cringe-worthy search queries that led to someone clicking on a PPC ad. Really, Google? You felt that goat transportation cost was related to my keyword of freight costs? Or that a babe cam search should show my ad for a digital camera? Sadly, these matches and worse can happen if you lack proper negative keywords.

This screenshot shows what happens when your campaign does not have enough negative keywords. It is just as important to have negative keywords as it is to have regular keywords.

  • People really DO click on anything and everything, including these off-target ads, and the advertiser gets charged for that click.
  • Most people are smart enough not to click, so the advertiser isn't directly charged. They just get hit when it comes to their quality score (affecting cost per click and ad ranking), which is based in part on clickthrough rate (CTR). If nobody is clicking on your ads, Google is apt to lower your quality score and increase your cost per click.

I'm going to help you brainstorm and greatly expand your negative keyword list. Evan Steed, co-founder of Meathead Movers, has been brave enough to let me look at his AdWords account and share some real-life examples with you here (and in my February 29th SMX presentation) from an account with no negative keywords. Meathead Movers is based near my hometown on the central coast of California, and they do some awesome things in the community, including moving women out of domestic violence situations for free. That's always impressed me, and I'm glad to be able to give something back to a local business.

Start with the Search Query Report

Download your search query report, and review what people actually entered to trigger your ad. You'll find some good candidates for negative keywords here, and you can start developing organized negative keyword lists.

Go Beyond the Search Query Report to Find Negative Keywords

I use the search query report for gathering negatives I had missed, and to find ideas for entire classes of negative keywords. This all started when I found "honeymoon with a stranger" in a search query report, found out it was a movie title, and got the idea to search IMDB for other titles containing honeymoon. Suddenly I had "zombie honeymoon", "honeymoon for three", and a large variety of other keywords in my negative keyword list. I saw lots of honeymoon resort ads showing for these queries, and realized not too many people were using this method, and started thinking of other ways to find negative keywords.

I prefer to have a good negative keyword strategy in place before I even launch a campaign, to prevent some of these stupid clicks from ever happening. Here are some of the resources I use.

The first resource is an engaged brain. Words often have many meanings, and this can cause you trouble. If you are marketing only to the United States, it's tempting to dump every country except the US into your negative list, but remember that Georgia is both a US state and a country. Also, make sure that you don't use the same word in your campaign as in your negative keyword list. Microsoft AdCenter has a nice feature that will alert you to these keyword conflicts.
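
If your platform doesn't flag such conflicts for you, a quick scripted check is easy to run before launch; a minimal sketch (the two example lists are made up):

```python
# Minimal sketch: flag keywords that would be blocked by broad-match negatives.
# The two lists below are made-up examples; load your own exports instead.
keywords = ["freight costs", "georgia freight quotes", "moving boxes"]
negatives = ["georgia", "free", "jobs"]

def conflicts(keywords, negatives):
    negative_words = {n.lower() for n in negatives}
    for kw in keywords:
        blocked_by = negative_words & set(kw.lower().split())
        if blocked_by:
            yield kw, sorted(blocked_by)

for kw, negs in conflicts(keywords, negatives):
    print(f"'{kw}' conflicts with negative(s): {', '.join(negs)}")
```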

Existing Negative Keyword Lists

Review existing negative keyword lists that other people have generated. If you do nothing else, review these lists. You'll find near-universal keywords (like ebay, craigslist, sex, porn), keywords to exclude job seekers (resume, position, salary, job), keywords to exclude information seekers (how to, about, what is, how do I), and many more.

Geography Lists

This is helpful for excluding people searching outside of your area of service. Even though Meathead geo-targets their ads to appear only where they offer service (they only offer moving services in the state of California), people in California still search about moving from California to another state. Lists like this are also helpful in building your regular keyword list, as you can easily find all of the counties in a state, and all of the cities in each county, and develop targeted ad groups for your product or service.

Movie Lists

I use IMDB's title search and check Feature Film, TV Movie, and TV series to get the most common titles without being bogged down in every single TV episode title ever made.

In the display options at the bottom, I choose to display compact and sort by number of votes descending. This gets you a list of the most popular movies at the top of the list, and you can easily copy the titles that make sense for your list.

Music Lists

Leo's Lyrics does a good job of listing song names in a compact format. In this example, with so many titles being just "move", I'd consider adding some artist names to a keyword list, along with the words lyrics, artist, and album.

Book Lists

For books, I haven't found a great way to get just the most popular titles in an easy manner. I'd just scan Amazon and Barnes & Noble online and sort by popular items.

Wikipedia Lists

Wikipedia is a great source of lists on nearly any topic. Search "list of [keyword] wikipedia" and you'll often get a great list, along with references for other sites that have similar lists. If you are an animal shelter that only has cats and dogs, you might go for the list of domesticated animals in Wikipedia so your ad doesn't show for people wanting to adopt a pig (and you might want to head to their list of cat and dog breeds as well when you develop your regular keywords).

Government Lists

Governments are great for more than just good backlinks. For regulated industries, they often have lists of  approved companies in that industry. You can use that for a negative list in your branding campaign, and as a keyword list in a campaign targeting people searching for your competitors. Another handy feature is that there is often an export option in these lists to download in a text or CSV format.

Top Lists

Forbes and other sites have endless top 10 and top 100 lists of all kinds of subjects. In Evan's case, I'd use some of the celebrity names as negatives to block his ads from being shown when someone searches for information on a celebrity moving to Los Angeles or Santa Barbara or another of his target cities.

Affiliate Lists

Some affiliate programs have detailed lists of negative keywords that can provide inspiration. If I were advertising for something related to Whitney Houston, I'd add the list of JC Whitney (an auto parts retailer) variations to my negatives list.

Paulson Management Group and Link Connector have several lists of negative keywords for specific campaigns.

Finding alternate meanings

You don't want your financial institution showing up for queries about blood banks and food banks. How do you think of those other meanings for words ahead of time?

Wikipedia Disambiguation pages

Google Queries

Meathead has a new packing service in addition to just moving. They know they need to exclude Green Bay Packers, but want ideas of what other meanings packing can have beyond the moving industry. Searching for [packers -"green bay" -moving -movers] yields a company in their service area called Island Packers, agricultural packing, and a restaurant called Packers.

Vocabulary lists

Meathead had a query for moving furniture. They don't focus on rearranging furniture, so they need an exclusion list for their campaigns that focuses on furniture. An ESL vocabulary list provides a nice text-based list for easy copying and brainstorming.

Yahoo Answers

Yahoo Answers provides some natural-language ideas for negative keywords that you might have otherwise missed.

Keyword Research Tools

Soovle shows suggestions from any number of engines (you can choose) for your keyword. It's another way of quickly spotting off-topic trends.

Übersuggest scrapes Google Suggest and other suggestion services to come up with lists.

Short Words

If you have a short keyword or an acronym, check to see if it's also an acronym for something else, a stock symbol, or an airline code.

Link Builders and SEOs

You also don't want to show your ad to people looking to build links related to your keywords. Rand's post has a number of phrases you'd want to exclude, like "submit url", "add site", and "suggest a url".

Trending Topics

Keep an eye on Google Trends and Twitter Trends for a new phrase that has come into prominence. Google seems to not display ads for suddenly trending topics much of the time (like not showing ads when you searched for [cruise ship italy] right after the cruise ship sank), but it's also good to add in negatives to keep yourself covered rather than completely trust in Google's algorithms.

Bonus Round! Tools to Harvest Data

Not every site is going to have a nice plain text list ready for you to copy and paste. I've found a couple of tools that are helpful for harvesting data and making it easily usable.

Dafizilla Table2Clipboard lets you easily paste data with its formatting to Excel, where you can then manipulate the data for just the information you need.

Outwit Hub offers a variety of ways for you to extract data from web pages. This tool deserves several blog posts of its own on its overall uses for SEO, not just in collecting keywords.
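
If you'd rather script the harvesting, a few lines of Python can pull list items and table cells from a page into plain text ready for clean-up; a rough sketch, with a Wikipedia list page as the example URL (real pages will still need manual pruning):

```python
# Rough sketch: pull the text of list items and table cells from a page
# so they can be cleaned up into a negative keyword list.
# The URL is an example; real pages will need some manual clean-up.
import requests
from bs4 import BeautifulSoup

def harvest_terms(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    terms = set()
    for tag in soup.select("li, td"):
        text = tag.get_text(" ", strip=True).lower()
        if 0 < len(text) < 60:          # skip empty cells and long paragraphs
            terms.add(text)
    return sorted(terms)

if __name__ == "__main__":
    for term in harvest_terms("https://en.wikipedia.org/wiki/List_of_domesticated_animals"):
        print(term)
```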

Wrapping Up

Whew! There's a lot to think about when finding negative keywords. Is it all worth it? Check out this interview with Ken Jurina, featuring case studies where using tens of thousands of negative keywords helped businesses save 5% to 40% on their PPC spend.

What are some of your favorite ways to find negative keywords, and what are some of the worst search queries you have seen?


February Linkscape Update: 66 Billion URLs

Posted by randfish

After some wrestling with Amazon's EC2 and the tragic loss of many hard disks therein, we've finally finished processing and have released the latest Linkscape update (previously scheduled for Feb. 14). This new index is, once again, quite large in comparison to our prior indices, and contains a mix of crawl data going back to the end of last year. In fact, this is technically our largest index ever!

Here are the latest stats:

  • 65,997,728,692 (66 billion) URLs
  • 601,062,802 (601 million) Subdomains
  • 140,281,592 (140 million) Root Domains
  • 739,867,470,316 (740 billion) Links
  • Followed vs. Nofollowed

    • 2.21% of all links found were nofollowed
    • 57.91% of nofollowed links are internal
    • 42.09% are external
  • Rel Canonical – 11.11% of all pages now employ a rel=canonical tag
  • The average page has 71.88 links on it

    • 60.98 internal links on average
    • 10.90 external links on average  

We also ran our correlation metrics against a large set of Google search results and saw very similar data to last round. Here are the latest numbers using mean Spearman correlation coefficients (on a scale of 0 to 1, higher is better):

  • Domain Authority: 0.26
  • Page Authority: 0.37
  • MozRank of a URL: 0.19
  • # of Linking Root Domains to a URL: 0.26
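
For anyone curious how a mean Spearman figure like these can be produced, here is a minimal sketch using scipy, with a tiny made-up stand-in for real SERP data (each row is one result, with its position and a link metric):

```python
# Minimal sketch: mean Spearman correlation of a link metric against
# Google ranking position, averaged across many SERPs.
# The input data is a made-up stand-in for real SERP exports.
import pandas as pd
from scipy.stats import spearmanr

serps = pd.DataFrame({
    "query":    ["q1"] * 5 + ["q2"] * 5,
    "position": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    "page_authority": [62, 55, 58, 40, 35, 70, 52, 49, 51, 30],
})

correlations = []
for _, group in serps.groupby("query"):
    # Negate position so "higher metric on higher-ranked pages" gives a positive value.
    rho, _ = spearmanr(-group["position"], group["page_authority"])
    correlations.append(rho)

print("mean Spearman correlation:", sum(correlations) / len(correlations))
```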

Our index stats also check the comprehensiveness of our crawl data against a large set of Google results, and we've got link data on 82.09% of SERPs in this release. This is slightly down from last month's 82.37%, which we suspect is a result of the late release. Crawl data ages with the web, and new URLs make their way into the SERPs, too. To help visualize our crawl, here's a histogram of when the URLs in this index were seen by us:

Crawl Histogram for Feb. 28th Index

We always "replace" any older URLs with newer content if we recrawl or see new links to a page, so while there may be some "old, crusty" stuff from December, the vast majority of this index was crawled in mid-to-late January.

In the next few weeks, we're working on a new, experimental index that may be massively larger (2-3X) than this one, and closer to the scale of what's in Google's main index. This is very exciting for us and, we hope, for all of you who use Open Site Explorer, the Mozbar, the Linkscape API, and tools from our partners like Hubspot, Conductor, Brightedge, and our newest API partner, Ginza Metrics (check out some cool stuff they're doing with Moz data here).

Feedback, as always, is welcome and appreciated!


How Google Makes Liars Out of the Good Guys in SEO

Posted by wilreynolds

This past week I gave the keynote presentation at Searchfest in Portland, and I hit on a few themes that seemed to resonate with the audience, and with Rand, so I wanted to share them here. It's what I have been passionate about trumpeting for some time now: the "Good Guys" of SEO, the people who do things like building great content and community, are being made into two-faced liars every day by Google. Every day we tell our clients to build good content and Google will reward them, knowing it's a white lie most of the time, because the other side of that coin is to ALSO build anchor text links so you can actually rank well, because community building is not enough of a factor yet.

Just examine for a second this backlink profile to a sub page for a competitor to one of our clients:
 

What does a backlink profile like that say to you?

I think the above image from one of my slides illustrates this best… it shows a client of mine getting killed in the rankings by a website that is just targeting tons of anchor-text-only links on GARBAGE sites. This is a truth we are all used to by this point; it is nothing new. But let's take a look at Google's rules. Go to that URL and do a Ctrl+F for the word "link" – you will find three instances. None of them talks about link building as a tactic to help you rank better, just a warning to be leery of having to link to an SEO company. While that is a good tip, there is not one tip that talks about building links as important. HUH?
 
A little more searching and I found this resource.
 
Notice here Google says: The quantity, quality, and relevance of links count towards your rating.
 
GREAT! They've admitted that the number of links, the quality of links, and relevance count – sweet!
But if you look at that screen grab above, do you see relevance? Do you see quality? I don't; I see quantity and anchor text.
 
Later on Google says:
 
The best way to get other sites to create relevant links to yours is to create unique, relevant content that can quickly gain popularity in the Internet community. The more useful content you have, the greater the chances someone else will find that content valuable to their readers and link to it.

Hmmm, let's see how this plays out. But before we do, do me a favor:

Take five seconds to think of the SEO companies that you respect most, whom you consider to be constantly creating unique relevant content in this industry and whom you think of as thought leaders, and participants in the community.
 
5…..4…3….2…1..
 
Ok, now go type in SEO company, SEO consultant, or SEO agency on Google (unpersonalized) and report back on whether or not you saw one of those companies / consultants / agencies you hold in high regard anywhere in the top 10.
 
Let's take three companies with active blogs, lots of social engagement, and tons of high-quality links and compare them to sites in the top 15. The companies I picked were SEOGadget, Distilled, and SEER Interactive (us); all come to mind VERY quickly. I am not naming the companies that were ranking in the top 15, but let's examine some differences.
 
Looking at our site stats according to SEOmoz:
  • SEOGadget has over 50 pages with 10 or more linking root domains
  • Distilled has over 100 pages with 10 or more linking root domains
  • SEER has over 30 pages with 10 or more linking root domains (we got some work to do!)
The "other guys" never had more than two, yet they are killing us in the rankings.
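
If you want to reproduce this kind of comparison for your own site, the counts are easy to pull from a "Top Pages" CSV export; a rough sketch, where the file and column names are assumptions about how your export is labelled:

```python
# Rough sketch: count pages with 10+ linking root domains from a
# "Top Pages" CSV export. The file name and column heading are assumptions
# about how your export is labelled.
import pandas as pd

top_pages = pd.read_csv("top_pages_export.csv")
strong_pages = top_pages[top_pages["Linking Root Domains"] >= 10]
print(len(strong_pages), "pages with 10 or more linking root domains")
```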
 
I knew putting this data in a chart form would illustrate this best:
 
First I looked at RSS subscribers, by going to Google Reader and searching for their blogs like this:
Wow that description sucks, I gotta work on that…anyway…
 

Half-Truth #1 – If people subscribe to my blog, that will show Google that I am writing good content and people want it, and that should help me rank, right?

 
Reality: Not even close, pal. The four mystery SEO companies have seven subscribers to their blogs combined.
 

Half-Truth #2 – If I engage with people on Twitter and social channels – that will show Google that I am engaging my audience, and I'll be rewarded with rankings, right?

 
Reality: Nope. Connecting with people on social can get you links in many ways, but if you did that well and didn't get anchor text, you'd probably still fail.
 

Half-Truth #3 – If I engage with people on Google+ and get added to circles, Google can DEFINITELY see that – that will show Google that I am engaging my audience, and I'll be rewarded with rankings, right?

Lastly, I looked at Google Circles (obviously you can buy Google accounts to add you to circles, but I am hoping Google can see real engagement, not just counts). Here is what I got:
 

 
Reality: Not yet. But I sure hope it comes.

What message does this send to SEO providers?

OK, Big G – we are all playing by your rules, building community, working our tails off on social, and getting our butts kicked. Why are you recommending I tell clients to do those things if they aren't helping us?
 
It's sad to think that, when it comes to ranking well for targeted, competitive keywords in my industry, writing this post, getting comments on it, and engaging in the community by answering questions counts LESS than getting 20 anchor text links on a tag page. A freakin' tag page! So when I spend time doing the HARD work, I get fewer rankings than those who take the lazy way out?

Is that really the message Google wants to send?

Think about the daily high-wire act every one of us undertakes: too much anchor text and you win temporarily but risk getting banned; too little and you risk your reputation as an SEO company and are likely to be branded a snake oil salesman.
 
But let's also think in the same way we consult with clients. We tell our clients every day that people "Google things" and that, when they perform searches, they sometimes make purchasing decisions based on those searches, right?
 
So when people search Google for "SEO company" and they find this smut outranking the good guys of SEO… Google is perpetuating the cycle they want to end.
 
They are "letting" the bad guys rank, which only gets those bad guys more clients and pollutes more of the web with crappy, over-aggressively linked sites. Let's also act like noobs for a second: if a client is picking between SEOGadget or Outspoken Media and one of the companies that ranks on page 1, guess what they might say to Richard or Rhea? The prospective clients may say that they don't have the social proof, which would be true. It's logical to say, "Well, Google MUST like what company X is doing, because why else would they reward them with such high rankings?"
 
People don't think about "algorithmic weights" and "over-optimization"; they believe what they can see, and what they SEE is that the company ranking #1 or #2 has the social proof, when it comes to rankings, that maybe SEER or Distilled does not.

C'mon Google! You are perpetuating the problem.

REAL SEOs wish that we NEVER had to worry about anchor text; we are the people who care about this industry and want to do the GOOD work. The real question is: why does Google make us into liars every day in the eyes of our potential clients? If we follow Google's rules to a T, we will likely never get the rankings, and if we don't get the rankings, we are branded as snake oil salesmen.
 
Personally, I can't wait for Google+ to start impacting results more. I want to see our TRUE industry leaders' rankings FINALLY rewarded for hard work in the community, and I bet a LOT of others are with me!
 
If you are saying, "Wil, help me get anchor text in a better way," then I want to give you a few ideas on how to get your targeted anchor text:
  • Include the keyword in your domain name, so consider that when registering domains or microsites
  • Include the keywords in your digital assets: whether it is a scholarship or a whitepaper, just the "suggestion" of titling it with your target keywords will help
  • Link internally with targeted anchor text in blog posts; when people copy or scrape your posts, they will pull in your anchor text and you'll have a chance to get links

Hoping the good guys get rewarded soon!! Or we'll all be selling snake oil!


92 Ways to Get (and Maximize) Press Coverage

Posted by chriswinfield

Boiler Room Quote

I love Ben Affleck's first scene in the movie "Boiler Room." I always felt that the quote above perfectly relates to companies and press coverage. The ones who don’t get coverage will quickly dismiss it as useless and a waste of time and money to pursue, while the ones who regularly get coverage just smile and hope that you keep thinking that way…


Over the last 12 years, I have been featured in hundreds of major newspapers, magazines, websites and blogs (everything ranging from the NY Times, USA Today and CNN to TechCrunch, Entrepreneur and so on), and I can tell you first-hand that it has helped me and my companies in an enormous way. It's brought me:

  • Publicity (well duh, Chris!)
  • Clients
  • Partnerships
  • Links
  • Traffic
  • Improved Employee Morale
  • Credibility
  • Money

Even more importantly, I have helped hundreds of businesses and friends get coverage. In many cases, the coverage they received was the tipping point for their career or business. A couple of weeks ago, someone I met at a conference who has since become a friend told me:

"Incidentally, your advice on PR in the past has been invaluable with [their domain] – PR is our biggest source of traffic by miles."

I had no idea this was the case. Their site has been extremely successful, and it got me thinking that I had never really laid everything out in one place. See, PR isn't the core of my business. I'm not a PR genius or even a PR flack. BlueGlass doesn't offer traditional PR services; we do it as part of an overall Internet marketing campaign.

I've worked with a bunch of different PR people in my career. Some were amazing, some were terrible. I've done lots of things on my own (some were amazing and some were terrible). With all of that, I have learned a lot and I want to share it with you.

So without further ado, here are 92 ways/tips/thoughts/things that have helped me get and maximize press coverage over the last 12 years. This is the stuff that's worked for me, and with a little bit of tenacity, I am positive it can work for you as well!

Know WHO You Are and WHAT You Want

Yogi Berra Quote

  1. Determine your message by answering the following questions:

    * What’s different about you or your company?
    * What are you the expert of?
    * What makes you better than your competitors?
    * What’s your “Unique Selling Proposition” (USP)?
     
  2. Determine if you want national or local coverage (or both!).   
     
  3. What can you use (beyond your company or expertise) to help you stand out? I’ve used my hair. This guy used yellow shoes.
     
  4. Create a list of everywhere you want to be covered: newspapers, sites, blogs, trade journals, etc.

Build Your Media List

Jay-Z Quote

  1. Identify the reporters at each publication who write about the specific topic for which you want to be covered.
     
  2. Find their contact info. This will usually be included with their stories, but if it's not, search LinkedIn, Google, or on the publication’s site. If you’re still stuck, call the publication.
     
  3. Create a spreadsheet with all of the publications, corresponding reporters, and their contact info. Include a column for notes where you can keep track of preferred contact methods, pitching preferences, best time to contact, and any other relevant info you learn after you’ve gotten to know each reporter.
     
  4. Or, use a tool like Bulldog Reporter (pay as you go) or MEDIAtlas (yearly $$$ subscription).

Research (And Then Research Some More)

GI Joe Quote

  1. Read a reporter’s work before you reach out to him or her. Write down your thoughts on some of his or her recent stories.
     
  2. Is the reporter’s email at the end of his or her articles? This is a good sign they’re open to contact. Tip: Most journalists have (at least) two email addresses. One for the public (the catch-all) and one that they actually use. This is why your email subject line and the email itself are SO important. You have to be the "signal" in all the "noise" they have to get through.
     
  3. Do the reporters respond to comments on the site or blog that they write on? Do they respond only to certain types of comments or to all of them? Make notes of particular comments that they react most favorably to.
     
  4. Is he Mr. Twitter 2012? Is she a Google + Gal? Start following them to feel out their personalities and observe how responsive they are to other people online.
     
  5. Many of the list building services will tell you how the reporter likes to be contacted. Follow those directions.

The Art of the Email

David Ogilvy Quote

  1. Make contact with the reporter via email, telling them how much you enjoyed their latest piece and which parts you enjoyed the most. You’ll be shocked by how many reporters will respond to a quick congratulatory note. Tip: Don’t half-ass this step. If you didn’t really read it and aren’t familiar with their work at all, don’t do this.
     
  2. Follow best email marketing practices (especially with your subject line). Your subject line will most likely mean the difference between making contact with the journalist and being ignored. Make it count. You’ll need to catch the reporter’s attention in an overflowing inbox.
     
  3. Keep the email short. Remove at least one sentence from whatever you wrote…
     
  4. NEVER include attachments.
     
  5. When they respond, tell them what you do and let them know you’d love to help with any stories they have coming up if they relate to what you do.
     
  6. Try to get quoted on timely topics. Once you’ve made initial contact, email them when breaking news happens and give your own unique perspective. Keep it short and sweet.
     
  7. Include a short bio (& a link to your longer one with a picture of you) in the message. This will save them from having to get more information from you if they’re on a tight deadline to get a story out.
     
  8. Stay on top of "What’s Hot" in your industry so that you can proactively pitch. Pitch yourself as an expert source or figure out a way to work your company into the pitch. There are a bunch of good sites that can help you with this:
  • Google Trends or Trendistic to see what’s hot right now
  • Google Alerts (free) or Giga Alert (small subscription fee but a bit more comprehensive) to monitor activity based on specific topics (e.g. “content marketing” or “Facebook advertising”).
  • And for our industry: Hacker News and Pinboard’s Popular section help me to find stories that might not have hit the mainstream yet. It’s important to not forget to step outside of your bubble on a daily basis.

Working the Phones

Don Draper Quote

  1. If you’re not a phone person, you’ll need to learn how to muscle through it or at least “fake it until you make it." Some reporters prefer email communication, while others prefer the phone (especially if they’re in a hurry to gather a lot of information).
     
  2. Know what you’re talking about. I can’t stress the importance of this one enough. You won’t be able to look things up while you’re on the phone (at least, not discreetly). Prepare before any calls to ensure you really know the topic inside and out.
     
  3. Be energetic and positive. Tip: It might sound corny, but smiling while you’re on the phone automatically makes you sound friendlier. Being likeable can make a reporter more comfortable reaching out to you for help on future stories. If you had a choice between talking to a miserable person or a happy one (all things being equal), who would you choose? I thought so…
     
  4. Always tell the reporter something unusual or unexpected that will make you stick out and guarantee you end up in their story. We live in a 140-character, sound-bite driven world. Remember this…
     
  5. Be definitive. Have a clear opinion on the subject. This is going to help them get that quote they need.

Growing Your Relationships

Dale Carnegie Quote

  1. A strong relationship with just one reporter can be invaluable. Treat each of these relationships like gold, and you can count on coverage for years. I have been in more than 30 stories in USA Today, mostly in the same reporter’s articles (and the others were from people he introduced me to at the paper). This one relationship that I cultivated was one of the most valuable assets early in my career.
     
  2. Be adaptable. Some opportunities may not be exactly what you’re after, but being flexible and able to accommodate a reporter’s story in spite of this (and still work your message in somehow!) will position you as a dependable source.
     
  3. Always go above and beyond. After a call or interview, send follow up info such as links, supporting materials, etc. Few things will make you stand out in a reporter’s mind more than making his or her job easier.
     
  4. Pitch ideas. As journalism moves into a purely online form, journalists are competing more than ever for original stories. Again, making a reporter’s work easier will make you stand out. Come up with story ideas for them in which you can also offer your expertise (and work your message in).
     
  5. Send a thank you note after an interview reminding the reporter you’re eager to help with anything in the future.

Social media makes all of the above much easier and more effective…

Use LinkedIn

Reid Hoffman Quote

  1. Once you’ve established contact, add the reporters you’re targeting as connections on LinkedIn.
     
  2. Use the import tool to find reporters with whom you’ve already emailed back and forth.
     
  3. Always send a personalized message when adding a new contact. Make it original; don’t use the default greetings supplied by LinkedIn. Tip: DON’T select that you were colleagues at your company (this is the quickest way to make sure someone won’t add you as a connection – journalist or not).
     
  4. Understand how journalists use LinkedIn.
     
  5. Optimize your profile so you can be found by reporters looking for a source: use keywords in your title, summary, and throughout your past job descriptions.
     
  6. Be approachable. Make it clear in your summary you’re open to press contacts or mention publications you’ve appeared in.
     
  7. Include all of your contact information in your profile: phone numbers, email, social profiles, office location, etc.
     
  8. Make your profile public so you’ll show up in search results even if you’re not someone’s 2nd- or 3rd-degree connection.
     
  9. Additionally, a public profile allows non-connections to see your contact info. This allows direct access to contacting you without being in your network.
     
  10. Add your Skills & Expertise to your profile. These are easily searchable and are a quick way for reporters to find possible sources.
     
  11. Influencers can “rank” on the Skills & Expertise page. Some of the best ways to rank for a certain skill include joining (and participating in) groups around that skill and following related companies for that skill.
     
  12. Be active on LinkedIn Answers to position yourself as an expert on a given topic. Experts are featured on each topic’s Answers page. You can also display your Expert topics on your profile.
     
  13. Subscribe to the RSS feed for the Answers topics you want to become an “expert” in; this will save you from checking back for new questions.
     
  14. Customize your LinkedIn Today page. This news aggregator features the most popular content being shared on LinkedIn and Twitter, grouped by industry. It automatically shows you headlines based on your profession, but you can select which topics you want to see headlines from and even follow specific publications.
     
  15. By studying what’s popular on LinkedIn Today, you can get a good idea of which publications are highly shareable among certain professional crowds. Consider targeting some of these publications if people in your company’s target industry are sharing from them often.

Use Twitter

Biz Stone Quote

  1. Follow all of the reporters you’re targeting. Here’s a good list of journalists on Twitter.
     
  2. For help finding journalists on Twitter from a specific publication, use the Muckrack directory.
     
  3. Create a Twitter list of these reporters so you can easily keep up with them in a separate stream. Remember, lists can be made private, so only you can see them and the people listed don’t know they’re listed.
     
  4. Share their stuff. Don’t just hit the retweet button, but add a few words of your thoughts on their piece when you share a link to their story. This will help you stand out to really popular reporters who get hundreds of tweets.
     
  5. Attribute a reporter with an @mention anytime you share a link to his or her story.
     
  6. Don’t forget to make local connections. Use LocalTweeps to find reporters in your area.
     
  7. Track (and participate in) journalism-related hashtags. A few include: #journchat (weekly chat among journalists, Mondays at 8 p.m. EST), #haro (“help a reporter out," used by journalists looking for sources), and #ddj (data-driven journalism topics)
     
  8. Be there when a reporter needs help right away. Follow @profnet to see reporter needs based on deadline times, and @helpareporter specifies immediate needs by placing “URGHARO” at the beginning of tweets.

Use Facebook

Mark Zuckerberg Quote

  1. Understand why journalists use Facebook: to share their stories, interact with their readers, curate content and find sources.
     
  2. Many journalists now allow you to subscribe to their Facebook updates, so their posts show up in your newsfeed without being their friend. Search the reporters you’re trying to connect with by name, and if they‘ve enabled the subscription option, subscribe to their posts.
     
  3. Interact with them and become visible by liking and commenting on their posts.
     
  4. When sharing a link to a story from a journalist you’re forming a relationship with but aren’t yet Facebook friends with, set these updates as public so anyone can see them. When a journalist views how many “shares” their story has, your post will be visible.
     
  5. “Like” the pages of the publications you’re targeting. If you can’t find a publication by searching directly on Facebook, their site will most definitely have a link to their page.
     
  6. Liking a page allows you to share content directly from the page. If a reporter doesn’t allow subscriptions like I mentioned above, this is the next best method for sharing their stories on Facebook.
     
  7. Using Facebook Ads, you can make your company visible to reporters. Facebook Ads can target users based on where they work (like a publication you’re trying to target!).
     
  8. In regards to the above, use these ads strictly for branding purposes and have them lead to more info about your company (a compelling landing page with recent news, press releases and media coverage is ideal).

Make Them Come to You (Inbound Coverage)

Seth Godin Quote

  1. Create kick-ass content! Among many other reasons, extraordinary content can lead reporters TO you. There’s a big reason why content marketing is so hot right now (and always has been and will be). It's also one of the reasons why you constantly see people like Danny Sullivan show up in so many articles about search engines.
     
  2. Conduct market research on current trends in your industry. Publish the full results, but also consider making these into easily digestible forms, like a blog post of the most interesting findings.
     
  3. Also conduct surveys and opinion polls around hot (or emerging) topics in your industry.
     
  4. Publish your most compelling case studies. These can be used as examples by the press when reporting on your industry.
     
  5. Make all of the above into visual formats such as videos, infographics, and kinetic typography. Because so many publications are online, they also need visual and/or interactive content to include in stories.
     
  6. Set up your Google authorship profile to appear as a credible source and help your content stand out in the SERPs.

The Importance of Social Proof

Donald Trump Quote

  1. Add relevant social sharing buttons to your blog that also display the number of tweets, likes, shares, etc., a post has gotten (check out this post from Kristi Hines for more about displaying social proof).
     
  2. Enable comments on your blog, but also make participation easy to see by placing the number of comments at the top of each post. An added bonus of responding to all of your blog comments: it doubles the number of comments on each post.
     
  3. Create a “Featured In” section on your site listing some of the publications you’ve appeared in.
     
  4. List your most impressive past and upcoming speaking engagements on your site. An event inviting you to speak is proof you know your stuff.
     
  5. Actively grow your following on social networks. Your Twitter followers are by no means a direct reflection of your knowledge, but a down-to-the-wire reporter who needs an authority on a topic immediately may use this to help gauge your level of expertise. Do this by following other people, sharing great content and engaging in conversations daily.
     
  6. If you have a large number of email subscribers, put this number next to your sign-up section (this will also help to attract even more new subscribers!).

What to Do Once You Get Coverage

John Wooden Quote

  1. Share it on all of your social networks.
     
  2. Treat the article or post like it’s your own. Build links to it. Encourage sharing. Drive traffic to it!
     
  3. Include the link in your email newsletter and/or in your signature.
     
  4. Put it on your site. Start an “As Seen In” section… you’ll need it once you keep getting a ton of coverage!
     
  5. Let the reporter know you’ve been driving traffic to the story. If you contribute to the success of a piece, the reporter will be more willing to talk to you again.
     
  6. Most media websites have a most popular/most emailed/most shared/etc. widget on their site. Many also do round-up posts, email, tweet, share on Facebook, etc. about the most popular stories of the day/week/month. If you help promote your story and it lands in one of these spots, you will get extra coverage.

A. B. C. (Always Be Connecting)

Zig Ziglar Quote

  1. Actively introduce reporters, bloggers, and journalists to people who can help them out. Keep them up-to-date on the latest trends and things that you see happening. Don’t expect anything in return immediately.
     
  2. Seek out guest blogging opportunities in your industry. This not only helps build your authority and gains visibility for you and your company, but also presents a chance for link building. Most blogs will allow at least a branded link within your guest post or author bio.
     
  3. When you can’t actually help a reporter with a story (either you don’t have time or it’s completely outside of your expertise) refer them to someone who can. This saves the reporter time, and helps your friend. Win-win.

Measure Results

Tony Robbins Quote

  1. Start a spreadsheet with a link to each story and columns for key metrics: social shares, links, referral traffic, and lead generation (a starter sketch follows this list).
     
  2. Track metrics like social shares and comments. If the publication makes these numbers visible, this will be easy.
     
  3. If they don't, you will need to track down the shares yourself. A basic search on Twitter with the link to the story will pull up all instances of shares, regardless of a link shortener being used. Plugging the URL into Topsy will show the number of tweets shared as well as the level of influence of those who shared.
     
  4. Keep track of the number of links to your coverage. Using something like Open Site Explorer is the easiest way to go about it, but you can also track these by setting up a Google Alert for “link:<the story URL>”.
     
  5. You can also count the number of times stories linking to your coverage were shared and commented on.
     
  6. Monitor your analytics for referral traffic. Note all instances of traffic from the original story and the sites that linked to the story.
     
  7. Pay attention to your organic traffic for searches leading to your site that relate to the topic discussed in your coverage.
     
  8. Use tools such as Topsy (free), Trackur ($), Sprout Social ($), or Radian6 ($$) to monitor buzz across the social web.
     
  9. Did you get a lot of new leads/sales after coverage? Many times, new customers will tell you themselves where they heard about you (keep track of this!). Also include a “how did you hear about us” option in your contact forms and allow space to include a source. KISSinsights is a great tool to help with this.
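
As a starting point for the spreadsheet described in point 1, here is a minimal sketch that appends one row per story to a CSV tracker; the column names are just suggestions:

```python
# Starter sketch for a coverage-tracking spreadsheet: one row per story,
# with columns for the key metrics to fill in over time.
# Column names are suggestions only.
import csv
import os

FIELDS = ["story_url", "date", "social_shares", "links",
          "referral_traffic", "leads"]

def add_coverage(path, story_url, date, **metrics):
    """Append one story row; metrics can be filled in or updated later."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"story_url": story_url, "date": date, **metrics})

add_coverage("press_coverage.csv", "http://example.com/story", "2012-03-01",
             social_shares=42, links=3)
```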

Steve Jobs Quote

At the end of the day, it comes down to tenacity and not being afraid to ask for something. Don't get caught up in thinking that you aren't worthy of press coverage or that a reporter doesn't want to hear from you. Just ask. The worst that someone can do is ignore you or say no. Simply by asking and actively pitching, you are ahead of the vast majority of your competitors.

With that thought in mind, if you liked this post, would you mind thumbing it up and/or leaving a comment below? I want to know, what has worked for you? Where have you found success or hit roadblocks?


Web Site Migration Guide – Tips For SEOs

Posted by Modesto Siotos

Site migrations occur now and again for various reasons, but they are arguably one of those areas many SEOs and site owners alike do not feel very comfortable with. Typically, site owners want to know in advance what the impact would be, often asking for information like potential traffic loss, or even revenue loss. On the other hand, SEOs need to make sure they follow best practice and avoid common pitfalls in order to make sure traffic loss will be kept to a minimum.

Disclaimer: The suggested site migration process isn't exhaustive, and there are certainly several alternative or complementary activities, depending on the size of the website as well as the nature of the migration being undertaken. I hope that, despite its length, the post will be useful to SEOs and webmasters alike.

Phase 1: Establishing Objectives, Effort & Process

This is where the whole migration plan will be established, taking into account the main objectives, time constraints, effort, and available resources. This phase is fundamental because, if essential business objectives or required resources fail to be appropriately defined, problems may arise in the following phases. Therefore, a considerable amount of time and effort needs to be allocated to this stage.

1.1 Agree on the objectives

This is necessary because it will allow success to be measured at a later stage against the agreed objectives. Typical objectives include:

  • Minimum traffic loss
  • Minimum ranking drops
  • Key rankings maintenance
  • Head traffic maintenance
  • All the above

1.2 Estimate time and effort

It is really important to have enough time on your hands; otherwise you may have to work day and night to recover those great rankings that have plummeted. Therefore, make sure that the site owners understand the challenges and the risks. Once they do, it is more likely they will happily allocate the necessary time for a thorough migration.

1.3 Be honest (…and confident)

Every site migration is different; hence, previous success does not guarantee that the forthcoming migration will also be successful. It is important to make your client aware that search engines do not provide any detailed or step-by-step documentation on this topic, as otherwise they would expose their algorithms. Therefore, best practice is based on your own and other people’s experiences. Being confident is important because clients tend to respect an expert's authoritative opinion more, and it can affect how much the client will trust and follow the SEO's suggestions and recommendations. Be careful not to overdo it, though, because if things later go wrong there will be no excuses.

1.4 Devise a thorough migration process

Although there are some general guidelines, the cornerstone is to devise a flawless process. That needs to take into consideration:

  • Legacy site architecture
  • New Site architecture
  • Technical limitations of both platforms

1.5 Communicate the migration plan

Once the migration process has been established, it needs to be communicated to the site owner as well as to those who will implement the recommendations, usually a web development team. Each party needs to understand what they are expected to do, as there is no room for mistakes and misunderstandings could be catastrophic.

Most development agencies tend to underestimate site migrations simply because they focus almost exclusively on getting the new site up and running. Often, they do not allocate the resources required to implement and test the URL redirects from the old site to the new one. It is the SEO’s responsibility to make them realise the amount of work involved, as well as to strongly request that the new site first be put on a test server (staging environment) so the implementation can be tested in advance. No matter how well you may have planned the migration steps, some extra allocated time will always be useful, as things do not always go as planned.

In order for a website migration to be successful, all involved parties need to collaborate in a timely manner, simply because certain actions need to be taken at certain times. If things do not seem to be going the desired way, explain the risks, ranging from ranking drops to potential revenue loss. This is certainly something no site owner wants to hear about, so play it as your last card and things are very likely to turn around.

1.6 Find the ideal time

No matter how proactive and organised you are, things can always go wrong. Therefore, the migration shouldn't take place during busy times for the business or when time or resources are too tight. If you're migrating a retail site, you shouldn't be taking any risks a couple of months before Christmas; wait until January, when things get really quiet. If the site falls into the travel sector, you should avoid the spring and summer months, as this is when most traffic and revenue is generated. All of that needs to be communicated to the client so they can make an informed business decision. A rushed migration is not a good idea, so if there isn't enough time to fit everything in, it is better to (try to) postpone it.

Phase 2: Actions On The Legacy Site

There are several types of site migrations depending on what exactly changes, which usually falls under one or more of the following elements:

  • Hosting / IP Address
  • Domain name
  • URL structure
  • Site Architecture
  • Content
  • Design

The most challenging site migrations involve changes in most (or all) of the above elements. However, for the purposes of this post we will only look at one of the most common and complicated cases, where a website has undergone a radical redesign resulting in URL, site architecture, and content changes. If the hosting environment is going to change, the new hosting location needs to be checked for potential issues; Whoishostingthis and Spy On Web can provide some really useful information. Attention also needs to be paid to the geographic location of the host. If that is going to change, you may need to assess the advantages and disadvantages and decide whether there is a real need for it. Moving a .co.uk website from a UK-based server to a US one wouldn't make much sense from a performance point of view.

In case the domain name is changing you may need to consider:

  • Does the previous/new domain contain more/less keywords?
  • Are both domains on the same ccTLD? Would changing that affect rankings?

2.1: Crawl the legacy site

Using a crawler application (e.g. Xenu Link Sleuth, Screaming Frog, or Integrity for Mac), crawl the legacy site, making sure that redirects are identified and reported. This is important in order to avoid redirect chains later. My favourite crawling app is Xenu Link Sleuth because it is very simple to set up and does a seamless job. All crawled URLs need to be exported, because they will be processed in Excel later. The following Xenu configuration is recommended because:

  • The number of parallel threads is very low to avoid time outs
  • The high maximum depth value allows for a deep crawl of the site
  • Existing redirections will be captured and reported

Custom settings for site crawling with Xenu Link Sleuth
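
If you'd like a scripted complement to Xenu, a very small crawler can record the status code and redirect target for every URL so the export can be processed in Excel later; a minimal sketch, limited to a single site, a modest page cap, and a slow crawl rate (the start URL is a placeholder):

```python
# Minimal single-site crawler sketch: records status code and redirect
# target for each URL so the export can be processed in Excel later.
# Keep the limit low and be polite to the server.
import csv
import time
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "http://www.legacy-site-example.com/"   # placeholder start URL
LIMIT = 500

def crawl(start, limit):
    site = urlparse(start).netloc
    seen, queue, rows = set(), [start], []
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        resp = requests.get(url, timeout=10, allow_redirects=False)
        redirect_to = resp.headers.get("Location", "")
        rows.append({"url": url, "status": resp.status_code,
                     "redirects_to": redirect_to})
        if resp.status_code == 200 and "text/html" in resp.headers.get("Content-Type", ""):
            soup = BeautifulSoup(resp.text, "html.parser")
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"]).split("#")[0]
                if urlparse(link).netloc == site:
                    queue.append(link)
        time.sleep(0.5)   # low crawl rate, in the same spirit as the Xenu settings above
    return rows

with open("legacy_crawl.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "status", "redirects_to"])
    writer.writeheader()
    writer.writerows(crawl(START, LIMIT))
```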

2.2 Export top pages

Exporting all URLs that have received inbound links is more than vital. This is where the largest part of the site’s link juice is to be found, or in other words, the site’s ability to rank well in the SERPs. What you do with the link juice is another question, but you certainly need to keep it in one place (a file).

Open Site Explorer

Open Site Explorer offers a great deal of information about a site’s top pages such as:

  • Page Authority (PA)
  • Linking Root Domains
  • Social Signals (Facebook likes, Tweets etc.)

In the following screenshot, a few powerful 404 pages have been detected, which ideally should be 301 redirected to a relevant page on the site.

Majestic SEO

Because Open Site Explorer may not have crawled/discovered some recent pages, it is always worth carrying out the same exercise using Majestic SEO, either on the whole domain or on the www subdomain, depending on what exactly is being migrated. Pay attention to ACRank values; pages with higher ACRank values are the juiciest ones. Downloading a CSV file with all that data is strongly recommended.

Webmaster Tools

If you don’t have a subscription to Open Site Explorer or Majestic SEO, you could use Google Webmaster Tools. Under Your Site on the Web -> Links to Your Site you will find Your Most Linked Content. Click on 'More' and download the whole table into a CSV file. In terms of volume, WMT data isn’t anywhere near OSE or Majestic SEO, but it is better than nothing. There are several other paid or free backlink information services that could be used to add more depth to this activity.

Google Analytics

Exporting all URLs that received at least one visit over the last 12 months from Google Analytics is an alternative way to pick up a big set of valuable indexed pages. If you’re not 100% sure how to do that, read this post Rand wrote a while ago.

Indexed pages in Google

Scraping the top 500 or 1,000 indexed pages in Google for the legacy site may seem like an odd task, but it does have its benefits. Using Scrapebox or the Scraper extension for Chrome, perform a Google search for site:www.yoursite.com and scrape the top indexed URLs. This step can identify:

  • 404 pages that are still indexed by Google
  • URLs that weren’t harvested in the previous steps

Again, save all these URLs in another spreadsheet.

2.3 Export 404 pages

Site migrations are great opportunities to tidy things up and do some good housekeeping work. Especially with big sites, there is enormous potential to put things in order again; otherwise hundreds or even thousands of 404 pages will be reported once the new site goes live. Some of those 404 pages may have quality links pointing to them.

These can be exported directly from Webmaster Tools under Diagnostics -> Crawl Errors; simply download the entire table as a CSV file. OSE also reports 404 pages, so exporting them may also be worthwhile. Using the SEOmoz Free API with Excel, we can figure out which of those 404 pages are worth redirecting based on metrics such as high PA, DA, mozRank and number of external links/root domains. Figuring out where to redirect each of these 404 pages can be tricky, as ideally each URL should be redirected to the most relevant page. Sometimes this can be "guessed" by looking for keywords in the URL. Where that is not possible, it is worth sending an email to the development team or the webmaster of the site, as they may be able to assist further.
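If the 404 export grows large, a short script can do the first pass of this triage. The sketch below is only an illustration: it assumes you have combined the Webmaster Tools and OSE exports into a simplified CSV with three columns (url, page_authority, external_links), a header row and no quoted commas; the file name, columns and thresholds are all hypothetical and should be adapted to the real export.

// triage-404s.js - rough sketch: flag exported 404 URLs that look worth redirecting
// Assumes a simplified CSV (url,page_authority,external_links) with a header row
// and no quoted commas - adjust the parsing for real OSE/WMT exports.
var fs = require('fs');

var rows = fs.readFileSync('404-pages.csv', 'utf8').trim().split('\n').slice(1);

rows.forEach(function (row) {
    var cols = row.split(',');
    var url = cols[0];
    var pageAuthority = parseFloat(cols[1]);
    var externalLinks = parseInt(cols[2], 10);

    // Arbitrary thresholds - tune them to the site being migrated.
    if (pageAuthority >= 30 || externalLinks >= 5) {
        console.log('Worth redirecting: ' + url);
    }
});

Anything the script flags goes into the redirect mapping; the rest can safely be left to return 404s.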

2.4 Measure site performance

This step is necessary when there is an environment or platform change. It is often the case that a new CMS, although it does a great job of managing the site’s content, affects site performance negatively. Therefore, it is crucial to take some measurements before the legacy site gets switched off. If site performance deteriorates, crawling may be affected, which could then affect indexation. With some evidence in place, it will be much easier to build a case later, if necessary. Although there are several tools, Pingdom seems to be a reliable one.

The most interesting stuff appears on the summary info box as well as on the Page Analysis Tab. Exporting the data, or even just getting a screenshot of the page could be valuable later. It would be worth running a performance test on some of the most typical pages e.g. a category page, a product page as well as the homepage.

Pingdom Tools Summary

Keep a record of typical loading times as well as the page size. If loading times increase whilst the size of the page remains the same, something must have gone wrong.

Pingdom Page Analysis Tab

Running a WebPagetest check would also be wise, so that site performance data are cross-referenced across two services and the results are confirmed to be consistent.

The same exercises should be repeated once the new site is on the test server as well as when it finally goes live. Any serious performance issues need to be reported back to the client so they get resolved.
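Because the same measurements need to be repeated on the test server and again after launch, a small script can complement Pingdom and WebPagetest by timing things in exactly the same way on each run. The sketch below only times the raw HTML response of a few hypothetical URLs (it ignores images, CSS and rendering), so treat it as a rough sanity check rather than a replacement for the tools above; it assumes Node 18+ for the built-in fetch.

// response-times.js - rough check of raw HTML response times (assumes Node 18+ for built-in fetch)
var pages = [
    'http://www.yoursite.com/',                // homepage
    'http://www.yoursite.com/category/',       // a typical category page (hypothetical URL)
    'http://www.yoursite.com/product/example'  // a typical product page (hypothetical URL)
];

pages.forEach(function (url) {
    var started = Date.now();
    fetch(url).then(function (response) {
        return response.text().then(function (body) {
            var elapsed = Date.now() - started;
            console.log(url + ': HTTP ' + response.status + ', ' + elapsed + ' ms, ' +
                        body.length + ' characters of HTML');
        });
    }).catch(function (error) {
        console.log(url + ': request failed - ' + error.message);
    });
});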

2.5 Measure rankings

This step should ideally take place just before the new site goes live. Saving a detailed rankings report, which contains as many keywords as possible, is very important so it can be used as a benchmark for later comparisons. Apart from current positions it would be wise to keep a record of the ranking URLs too. Measuring rankings can be tricky though, and a reliable method needs to be followed. Chrome's Google Global extension and SEO SERP are two handy extensions for checking a few core keywords. With the former, you can see how rankings appear in different countries and cities, whilst the latter is quicker and does keep historical records. For a large number of keywords, proprietary or paid automated services should be used in order to save time. Some of the most popular commercial rank checkers include Advanced Web Ranking, Web CEO and SEO Powersuite to name a few.

With Google Global extension for Chrome you can monitor how results appear in different countries, regions and cities.

Phase 3: URL Redirect Mapping

During this phase, pages (URLs) of the legacy site need to be mapped to pages (URLs) on the new site. For those pages where the URL remains the same there is nothing to worry about, provided that the amount of content on the new page hasn’t been significantly changed or reduced. This activity requires a great deal of attention, otherwise things can go terribly wrong. Depending on the size of the site, the URL mapping process can be done manually, which can be very time consuming, or automation can often be introduced to speed things up. However, saving up on time should not affect the quality of the work.

Even though there isn't any magic recipe, the main principle is that ALL unique, useful or authoritative pages (URLs) of the legacy site should redirect to pages with the same or very relevant content on the new site, using 301 redirects. Always make sure that redirects are implemented using 301 (permanent) redirects, which pass most link equity from the old page (site) to the new one. The use of 302 (temporary) redirects IS NOT recommended because search engines treat them inconsistently and in most cases do not pass link equity, often resulting in drastic ranking drops.

It’s worth stressing that pages with high traffic need extra attention but the bottom line is that every URL matters. By redirecting only a percentage of the URLs of the legacy site you may jeopardise the new domain’s authority as a whole, because it may appear to search engines as a weaker domain in terms of link equity.

URL Mapping Process (Step-by-step)

  1. Drop all legacy URLs, which were identified and saved in the CSV files earlier (during phase 2), into a new spreadsheet (let's call it SpreadSheet1).
  2. Remove all duplicate URLs using Excel.
  3. Populate the page titles using the SEO for Excel tool.
  4. Using SEO for Excel, check the server response headers. All 404 pages should be kept in a different tab, so that all remaining URLs are those returning a 200 server response.
  5. In a new Excel spreadsheet (let's call it SpreadSheet2) drop all URLs of the new site (using a crawler application).
  6. Pull in the page titles for all these URLs as in step 3.
  7. Using the VLOOKUP Excel function, match URLs between the two spreadsheets.
  8. Matched URLs (if any) should be removed from SpreadSheet1 as they already exist on the new site and do not need to be redirected.
  9. The 404 pages which were moved into a separate worksheet in step 4, need to be evaluated for potential link juice. There are several ways to make this assessment but the most reliable ones are:

    • SEO Moz API (e.g. using the handy Excel extension SEO Moz Free API)
    • Majestic SEO API
  10. Depending on how many “juicy” URLs were identified in the previous step, a reasonable portion of them needs to be added back into SpreadSheet1.
  11. Ideally, all remaining URLs in SpreadSheet1 need to be 301 redirected. A new column (e.g. Destination URLs) needs to be added in SpreadSheet1 and populated with URLs from the new site. Depending on the number of URLs to be mapped this can be done:

    • Manually – By looking at the content of the old URL, the equivalent page on the new site needs to be found so the URL gets added in the Destination URLs column.

      1. If no identical page can be found, just choose the most relevant one (e.g. similar product page, parent page etc.)
      2. If the page has no content, pay attention to its page title (if known or still cached by Google) and/or URL for keywords which should give you a clue about its previous content. Then, try to find a relevant page on the new site; that would be the mapping URL.
      3. If there is no content, no keywords in the URL and no descriptive page title, try to find out from the site owners what those URLs used to be about.
    • Automatically – By writing a script that maps URLs based on page titles, meta descriptions or URL pattern matching (see the sketch after this list).
  12. Search again for duplicate entries in the ‘old URLs’ column and remove the entire row for each duplicate.
  13. Where patterns can be identified, pattern matching rules using regular expressions are always preferable because they reduce the web server's load. Ending up with thousands of one-to-one redirects is not ideal and should be avoided, especially if there is a better solution.
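To illustrate the "automatic" option in step 11, the sketch below matches old and new URLs by comparing normalised page titles. It assumes both crawls have already been exported to simple CSV files (old-urls.csv and new-urls.csv, each with url,title columns, a header row and no quoted commas); the file names and columns are hypothetical, and real page titles will usually need fuzzier matching than an exact comparison.

// map-by-title.js - rough sketch: map legacy URLs to new URLs by page title
var fs = require('fs');

// Very naive CSV reader: assumes url,title columns, a header row and no quoted fields.
function readCsv(file) {
    return fs.readFileSync(file, 'utf8').trim().split('\n').slice(1).map(function (row) {
        var cols = row.split(',');
        return { url: cols[0], title: cols.slice(1).join(',') };
    });
}

// Normalise titles so small differences (case, punctuation, brand suffixes) don't block a match.
function normalise(title) {
    return title.toLowerCase().replace(/\|.*$/, '').replace(/[^a-z0-9 ]/g, '').trim();
}

var oldPages = readCsv('old-urls.csv');
var newPages = readCsv('new-urls.csv');

var newByTitle = {};
newPages.forEach(function (page) {
    newByTitle[normalise(page.title)] = page.url;
});

oldPages.forEach(function (page) {
    var destination = newByTitle[normalise(page.title)];
    // Unmatched URLs drop into the manual mapping pile described in step 11.
    console.log(page.url + ' -> ' + (destination || 'NEEDS MANUAL MAPPING'));
});

The output can be pasted straight into the Destination URLs column of SpreadSheet1, with the unmatched rows left for manual mapping.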

Phase 4: New Site On Test Server

Because human errors do occur, testing that everything has gone as planned is extremely important. Unfortunately, because the migration responsibility falls mainly on the shoulders of the SEO, several checks need to be carried out.

4.1 Block crawler access

The first and foremost thing to do is to make sure that the test environment is not accessible to any search engine crawler. There are several ways to achieve that but some are better than others.

  • Block access in robots.txt (not recommended)

This is not recommended because Google would still crawl the site and possibly index the URLs (but not the content). This implementation also runs the risk of going live if all files on the test server are mirrored on the live one. The following two lines will restrict search engine access to the website:

User-Agent: *
Disallow: /

  • Add a meta robots noindex to all pages (not recommended)

Google recommends a meta robots noindex as a way to entirely prevent a page's contents from being indexed, but it is a risky choice for a test environment.

<html>
<head>
<title>…</title>
<meta name="robots" content="noindex">
</head>

The main reason this is not recommended is that it runs the risk of being pushed to the live environment and removing all pages from the search engines' index. Unfortunately, web developers' focus is on other things when a new site goes live, and by the time you notice such a mistake it may be a bit late. In many cases, removing the noindex after the site has gone live can take several days, or even weeks, depending on how quickly technical issues are resolved within an organisation. Usually, the bigger the business, the longer it takes, as several people would be involved.

  • Password-protect the test environment (recommended)

This is a very efficient solution, but it may cause some issues. Crawling a password-protected website is a challenge, and not many crawler applications are able to do it; Xenu Link Sleuth is one that can.

  • Allow access to certain IP addresses (recommended)

This way, the web server allows access to specific external IP addresses e.g. that of the SEO agency. Access to search engine crawlers is restricted and there are no indexation risks.

4.2 Prepare a Robots.txt file

That could be a fairly basic one, allowing access to all crawlers and indicating the path to the XML sitemap such as:

User-agent: *
Allow: /
Sitemap: http://www.yoursite.com/sitemap.xml

However, certain parts of the site could be excluded, particularly if the legacy site has duplicate content issues. For instance, internal search, pagination and faceted navigation often generate multiple URLs with the same content. This is a great opportunity to deal with legacy issues, so search engine crawling of the website can become more efficient. Saving crawl bandwidth will allow search engines to crawl only those URLs which are worth indexing, which means that deep pages stand a better chance of being found and ranking quicker.
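Purely as an illustration, a robots.txt along these lines could be used; the disallowed paths below are hypothetical and need to match however the new site actually structures its internal search and faceted navigation URLs. Note that wildcard patterns are honoured by the major search engines but are not part of the original robots.txt standard, so test the file in Webmaster Tools first.

User-agent: *
Disallow: /search/
Disallow: /*?sort=
Disallow: /*?sessionid=
Sitemap: http://www.yoursite.com/sitemap.xml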

4.3 Prepare XML sitemap(s)

Using your favourite tool, generate an XML sitemap, ideally containing HTML pages only. Xenu again does a great job because it easily generates XML sitemaps containing only HTML pages. For large web sites, generating multiple XML sitemaps for the different parts of the site would be a much better option, so indexation issues can be identified more easily later. The XML sitemap(s) should then be tested again for broken links before the site goes live.

Source: blogstorm.co.uk

Google Webmaster Tools allows users to test XML sitemaps before they get submitted, which is worth doing in order to identify errors.
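If the crawler of choice can't output a sitemap directly, building one from a plain list of HTML URLs is trivial to script, and the result can then be run through the Webmaster Tools test mentioned above. A minimal sketch, assuming a urls.txt file with one URL per line (a hypothetical file name) and well under the 50,000 URL per-sitemap limit:

// build-sitemap.js - rough sketch: turn a list of HTML URLs into a basic XML sitemap
var fs = require('fs');

var urls = fs.readFileSync('urls.txt', 'utf8').trim().split('\n');

// Note: URLs containing & or other special characters would need XML-escaping; omitted for brevity.
var entries = urls.map(function (url) {
    return '  <url><loc>' + url.trim() + '</loc></url>';
});

var sitemap = '<?xml version="1.0" encoding="UTF-8"?>\n' +
              '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
              entries.join('\n') + '\n' +
              '</urlset>\n';

fs.writeFileSync('sitemap.xml', sitemap);
console.log('Wrote sitemap.xml with ' + urls.length + ' URLs');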

4.4 Prepare HTML sitemap

Even though the XML sitemap alone should be enough to let search engines know about the URLs on the new site, implementing an HTML sitemap could help search engine spiders make a deep crawl of the site. The sooner the new URLs get crawled, the better. Again, check the HTML sitemap for broken links using Check My Links (Chrome) or Simple Links Counter (Firefox).

4.5 Fix broken links

Run the crawler application again, as more internal/external broken links, 302 redirects (never trust a 302), or other issues may be detected.

4.6 Check 301 redirects

This is the most important step of this phase and it may need to be repeated more than once. All URLs to be redirected should be checked. If you do not have direct access to the server, one way to check the 301 redirects is by using Xenu's Check URL List feature. Alternatively, Screaming Frog's list view can be used in a similar manner. These applications will report whether 301s are in place or not, but not whether the destination URL is the correct one. That can be done in Excel using the VLOOKUP function, or with a short script.
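As an alternative to the VLOOKUP check, a short script can request each legacy URL and compare the Location header against the mapping spreadsheet. The sketch below assumes Node 18+ (for the built-in fetch) and a redirect-map.csv exported from SpreadSheet1 with old_url,new_url columns, a header row and no quoted commas; the file name and layout are hypothetical.

// check-redirects.js - rough sketch: verify each legacy URL 301s to the mapped destination (Node 18+)
// Note: this fires all requests in parallel; throttle it for large lists so the test server isn't hammered.
var fs = require('fs');

var rows = fs.readFileSync('redirect-map.csv', 'utf8').trim().split('\n').slice(1);

rows.forEach(function (row) {
    var cols = row.split(',');
    var oldUrl = cols[0];
    var expected = cols[1];

    // redirect: 'manual' stops fetch from following the redirect, so the 301 itself can be inspected.
    fetch(oldUrl, { redirect: 'manual' }).then(function (response) {
        // Some servers return Location as a relative URL; adjust the comparison if so.
        var location = response.headers.get('location') || '(none)';
        if (response.status !== 301) {
            console.log('WRONG STATUS ' + response.status + ': ' + oldUrl);
        } else if (location !== expected) {
            console.log('WRONG DESTINATION: ' + oldUrl + ' -> ' + location + ' (expected ' + expected + ')');
        }
    }).catch(function (error) {
        console.log('FAILED: ' + oldUrl + ' - ' + error.message);
    });
});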

4.7 Optimise redirects

If time allows, the list of redirects should be optimised for performance. Because the redirects are loaded into the web server's memory when the server starts, a high number of redirects can have a negative impact on performance. Similarly, each time a page request is made, the web server will compare it against the redirects list; thus, the shorter the list, the quicker the web server will respond. Even though such performance issues can be compensated for by increasing the web server's resources, it is always best practice to work out pattern matching rules using regular expressions, which can cover hundreds or even thousands of possible requests.
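As a purely illustrative example, on an Apache server a whole section of the legacy site can often be covered by a single regular-expression rule instead of thousands of one-to-one entries; the paths below are hypothetical, and the exact directives depend on the web server in use (this sketch assumes Apache with mod_alias enabled).

# Map every legacy /old-products/... URL to its /products/... equivalent with one rule
RedirectMatch 301 ^/old-products/(.*)$ /products/$1

# One-to-one entries are then only needed for the exceptions the pattern cannot cover
Redirect 301 /old-products/discontinued-item /products/replacement-item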

4.8 Resolve duplicate content issues

Duplicate content issues should be identified and resolved as early as possible. A few common cases of duplicate content may occur, regardless of what was happening previously on the legacy web site. URL normalisation at this stage will allow for optimal site crawling, as search engines will come across as many unique pages as possible. Such cases include:

  • Directories with and without a trailing slash (e.g. this URL should redirect to that).
  • Default directory indexes (e.g. this URL should redirect to that).
  • Case in URLs. (e.g. this URL should redirect to that, or just return the 404 error page like this as opposed to that, which is the canonical one).
  • Different protocols. The most typical example is when a website is accessible via http and https. (e.g. this URL should redirect to that). However, this type of redirect needs attention as some URLs may need to exist only on https. Added Feb 26
  • Accessible IP addresses. Being able to access a website by requesting its IP address can cause duplicate content issues. (e.g. this URL should redirect to that). Added Feb 26
  • URLs on different host domains e.g. www.examplesite.com and examplesite.com (e.g. this URL should redirect to that).
  • Internal search generating duplicate pages under different URLs.
  • URLs with added parameters after the ? character.

In all the above examples, poor URL normalisation results in duplicate pages that will have a negative impact on:

  • Crawl bandwidth (search engine crawlers will be crawling redundant pages).
  • Indexation (as search engines try to remove duplicate pages from their indexes).
  • Link equity (as it will be diluted amongst the duplicate pages).

4.9 Site & Robots.txt monitoring

Make sure the URL of the new site is monitored using a service like Uptime Robot. Each time the site goes down for whatever reason, Uptime Robot will notify you by email, Twitter DM, or even SMS. It is also useful to set up a robots.txt monitoring service such as Robotto; each time the robots.txt file gets updated you get notified, which is really handy.

Uptime Robot logs all server up/down time events

Phase 5: New Site Goes Live

Finally, the new site has gone live. Depending on the authority, link equity and size of the site, Google should start crawling it fairly quickly. However, do not expect the SERPs to be updated instantly. The new pages and URLs will be updated in the SERPs over a period of time, which typically takes from two to four weeks. For pages that seem to take ages to get indexed, it may be worth using a ping service like Pingler.

5.1 Notify Google via Webmaster Tools

If the domain name changes, you need to notify Google via the Webmaster Tools account of the old site, as soon as the new site goes live. In order to do that, the new domain needs to be added and verified. If the domain name remains the same, Google will find its way to the new URLs sooner or later. That mainly depends on the domain authority of the site and how frequently Google visits it. It would also be a very good idea to upload the XML sitemap via Webmaster Tools so the indexation process can be monitored (see phase 6).

5.2 Manual checks

No matter how well everything appeared on the test server, several checks need to be carried out, and running the crawler application again is the first thing to do. Pay attention to:

  • Anomalies in the robots.txt file
  • Meta robots noindex tags in the <head> section of the HTML source code
  • Meta robots nofollow tags in the source code
  • 302 redirects. 301 redirects should be used instead as 302s are treated inconsistently by search engines and do not pass link equity
  • Check Webmaster Tools for error messages
  • Check XML sitemap for errors (e.g. broken links, internal 301s)
  • Check HTML sitemap for similar errors (e.g. using Simple Links Counter or Check My Links)
  • Missing or not properly migrated page titles
  • Missing or not properly migrated meta descriptions
  • Make sure that the 404 page returns a 404 server response
  • Make sure the analytics tracking code is present on all pages and is tracking correctly
  • Measure new site performance and compare it with that of the previous site

Using Httpfox, a 302 redirect has been detected

5.3 Monitor crawl errors

Google Webmaster Tools, Bing Webmaster Tools and Yandex Webmaster all report crawl errors and are certainly worth checking often during the first days or even weeks. Pay attention to reported errors and dates, and always try to figure out whether an error has been caused by the new site or the legacy one.

5.4 Update most valuable inbound links

From the CSV files created in step 2.2, figure out which are the most valuable inbound links (using Majestic or OSE data) and then try to contact the webmasters of those sites, requesting a URL update. Direct links pass more value than 301 redirects, and this time-consuming task will eventually pay off. On the new site, check the inbound links and top pages tabs of OSE and try to identify new opportunities such as:

  1. Links from high authority sites which are being redirected.
  2. High authority 404 pages which should be redirected so the link juice flows to the site.

In the following example, followed and 301 external links have been downloaded in a CSV file.

Pay attention to the '301' column for cells with the value 'Yes'. Updating as many of these URLs as possible, so they point directly to the site, will pass more link equity to it:

Identify the most authoritative links and contact website owners to update them so they point to the new URL

5.5 Build fresh links

Generating new, fresh links to the homepage, category and sub-category pages is a good idea because:

  1. With 301 redirects some link juice may get lost, thus new links can compensate for that.
  2. They can act as extra paths for search engine spiders to crawl the site.

5.6 Eliminate internal 301 redirects

Although webmasters are quite keen on implementing 301 redirects, they often do not show the same interest in updating on-site URLs so that internal redirects do not occur. Depending on the volume and frequency of internal 301 redirects, some link juice may evaporate, whilst the redirects will unnecessarily add extra load to the web server. Again, crawling the site is a handy way to detect internal 301 redirects.

Phase 6: Measure Impact/Success

Once the new site has finally gone live, the impact of all the previous hard work needs to be monitored. It may be a good idea to monitor rankings and indexation on a weekly basis, but in general no conclusions should be drawn earlier than 3-4 weeks after launch. No matter how good or bad rankings and traffic appear to be, you need to be patient. A deep crawl can take time, depending on the site's size, architecture and internal linking. Things to be looking at:

  • Indexation. The number of submitted and indexed URLs reported by Webmaster Tools (see below).
  • Rankings. They usually fluctuate for 1-3 weeks and initially they may drop. Eventually, they should recover to around the same positions they held previously (or just about).
  • Open Site Explorer metrics. Although they do not get updated daily, it is worth keeping an eye on reported figures for Domain Authority, Page Authority and mozRank on a monthly basis. Ideally, the figures should be as close as possible to those of the old site within a couple of months. If not, that is not a very good sign and you may have lost some link equity along the way.
  • Google cache. Check the timestamps of cached pages for different page types e.g. homepage, category pages, product pages.
  • Site performance in Webmaster Tools. This one may take a few weeks to get updated, but it is very useful to know how Google perceives site performance before and after the migration. Any spikes that stand out should alarm the webmaster, and several suggestions can be made, e.g. using YSlow and Page Speed in Firefox, or Page Speed and Speed Tracer in Chrome.

Check site performance in Webmaster Tools for unusual post migration anomalies

Indexation of web pages, images and videos can be monitored in Google Webmaster Tools

Appendix: Site Migration & SEO Useful Tools

Some of the following tools would be very handy during the migration process, for different reasons.

Crawler applications

Xenu Link Sleuth (free)
Analog X Link Examiner (free)
Screaming Frog (paid)
Integrity (for Mac – free)

Scraper applications

Scraper Extension for Chrome
Scrapebox (paid)

Link Intelligence software

Open Site Explorer (free & paid)
Majestic SEO (free & paid)

HTTP Analysers

HTTP Fox (Firefox)
Live HTTP Headers (Firefox)

IP checkers

Show IP (Firefox)
WorldIP (Firefox)
Website IP (Chrome)

Link checkers

Simple Links Counter (Firefox)
Check My Links (Chrome)

Monitoring tools

Uptime Robot (monitors domains for downtime)
Robotto (monitors robots.txt)

Rank checkers

Google Global (Chrome)
SEO SERP (Chrome)
SEO Book Rank Checker (Firefox)

Site performance analysis

Yslow (Firefox)
Page Speed (for Firefox)
Page Speed (for Chrome)
Speed Tracer (Chrome)

About the author

Modesto Siotos (@macmodi) works as a Senior Natural Search Analyst for iCrossing UK, where he focuses on technical SEO issues, link tactics and content strategy. His move from web development into SEO was a trip with no return, and he is grateful to have worked with some SEO legends. Modesto is happy to share his experiences with others and writes regularly for a digital marketing blog. This is his first post for SEOmoz.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Create Crawlable, Link-Friendly AJAX Websites Using pushState()

Posted by RobOusbey

Many people have an interest in building websites that take advantage of AJAX principles, while still being accessible to search engines. This is an important issue that I've written about before in a (now obsolete) post from 2010. The tactic I shared then has been superseded by new technologies, so it's time to write the update.

This topic is still relevant, because of a particular dilemma that SEOs still face:

  • websites that use AJAX to load content into the page can be much quicker and provide a better user experience
  • BUT: these websites can be difficult (or impossible) for Google to crawl, and using AJAX can damage the site's SEO.

The solution I had previously recommended ends up with the #! (hashbang) symbols littering URLs, and has generally been implemented quite poorly by many sites. When I presented on this topic at Distilled's SearchLove conferences in Boston last year, I specifically called out Twitter's implementation because it 'f/#!/ng sucks'. Since I made that slide, it's actually got worse.

Why talk about this now?

I was driven to blog about this now because, after some apparent internal disagreements at Twitter in early 2011 (cf. this misguided post by one Twitter engineer, followed by this sensible response by another), they seem to be working to reverse the decision and replace all those dreadful URLs for good. Even though Twitter shouldn't be held up as the paragon of great implementation, it's valuable for all of us to look at the method that many smart-thinking websites (Twitter included) are planning to use.
 
In general, I'm surprised that we don't see this approach being used more often. Creating fast, user-friendly websites that are also totally accessible by search engines is a good goal to have, right?

What is the technology?

So – drumroll please – what is the new technology that's going to make our AJAX lives easier? It's a happy little Javascript function that's part of the 'HTML5 History API', called window.history.pushState().
 
pushState() basically does one thing: it changes the path of the URL that appears in the user's address bar. Until now, that's not been possible.
 
Before we go further, it's worth reiterating that this function doesn't really do anything else – no extra call is made to the server, and the new page is not requested. Plus – of course – this isn't available in every web browser, and only modern standards-loving browsers (with Javascript enabled) will be able to make use of it.
 
In fact, do you want a quick demo? Most SEOmoz visitors are using a modern browser, so this should work for you. Watch the page URL and try clicking ON THIS TEXT. Did you see the address change? The new page URL isn't a real location, but – so far as you're concerned – that's now the page you're looking at.
 
SEOmoz readers are smart people; I expect that you're realizing various ways that this can be valuable, but here's why this little function gets me excited:
  • you can have the speed benefits of using AJAX to load page content (since for many websites, only a fraction of the code delivered is actually content; most is just design & templating)
  • since the page URL can accurately reflect the 'real' location of the page, you have no problem with people copy/pasting the URL from the address bar and linking to / sharing it (linking to a page that uses #fragment for the page location won't pass link-juice to the right page/content)
  • with the #!s out of the way, you don't need to worry about special 'escaped URLs' for the search engines to visit
  • you can rest easy, knowing that you are contributing good quality URLs (as discussed in the post mentioned earlier) to the web.

The Examples

I launched a pushState demo / example page to show how all this performs in practice.

window.history.pushState() example

Click the image above to visit the demo site in all its glory.

If you click between the cities in the top navigation, you'll be able to see that only the main content is loaded in with each click; the page navigation and the rest of the template stay in place.
(This can be confirmed by playing the Youtube video on the page; notice that it doesn't stop playing as you load in new content.)
 
If you want to see a bunch of examples of this 'in the wild', you can take a look at almost any blogspot.com-hosted blog with one of their new 'dynamic views' in place; just add '/view/sidebar' to the end of the URL.
 
For example, this blog: http://n1vg.blogspot.com can be viewed with the theme applied: http://n1vg.blogspot.com/view/sidebar 
 
If you click posts in the left hand column on that second link, you'll see the content get loaded in with very impressive speed; the URL is then updated using pushState() – no page reload took place, but your browser still reflects the appropriate URL for each piece of content.

The Techie Bit

If you like the sound of all this, but you start to feel out of your depth when it comes to tech implementation, then feel free to share this with your developers or most tech-oriented friends. (References are linked at the end of this post.)
 
The important function we're utilizing takes three parameters:
window.history.pushState(data, title, url)
 
There's no value in worrying about the first two parameters; you can safely set these to null. In the brief example I gave at the top of this post, the function simply looked like this:
window.history.pushState('','','test/url/that-does-not-really-exist')
 
Our workflow for implementing this looks like the following:
  • Before doing anything else, make sure your site works without JS; Google will need to be able to follow your links and read content
  • You'll also have to create server-side processes to serve just the 'content' for particular pages, rather than the fully rendered HTML page. This will depend a great deal on your server and your back-end setup (a rough sketch appears a little further down); you can ask in the comments below if you have questions about this bit.
  • Instruct Javascript to intercept the clicks on any relevant internal links (navigation elements, etc.) I'm a big jQuery fan, so I rely on the click() function for this
  • Your Javascript will look at the attributes of the link that was clicked on (probably the href) and use whatever JS/AJAX you want to load the appropriate content into the page
  • Finally, get all the SEO benefits by using the pushState() function to update the URL to match the content's 'real' location
By having your internal links work 'as normal' and then adding this AJAX/HTML5 implementation on top, you are taking advantage of the benefits of 'progressive enhancement': users with up-to-date browsers get the full, fast and spiffy experience, but the site is still accessible for less capable browsers and (critically in this case) for the search engines.
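The jQuery example further down requests content.php with a contentid parameter. Purely as an illustration of what that server-side piece might look like, here is a rough Node/Express stand-in that returns just the content fragment for a given URL; the route, data and content.php naming are hypothetical, and your real implementation will depend entirely on your own back end.

// content-server.js - rough sketch of a server-side endpoint that returns just the page content
// (a stand-in for the content.php used in the client example; routes and data are hypothetical)
var express = require('express');
var app = express();

// In a real site this would come from your CMS or database.
var content = {
    '/boston': '<h1>Boston</h1><p>Content for the Boston page...</p>',
    '/seattle': '<h1>Seattle</h1><p>Content for the Seattle page...</p>'
};

// Returns the content fragment as JSON, to match the $.getJSON call in the client code.
app.get('/content.php', function (req, res) {
    res.json(content[req.query.contentid] || '<p>Not found</p>');
});

app.listen(3000, function () {
    console.log('Demo content server running on http://localhost:3000');
});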
 
If you want some code to implement this, you can take a look at the head section of the demo that I shared above – that contains all of the Javascript necessary for doing this.
 
Basic code for getting this done looks like this:
 
// We're using jQuery functions to make our lives loads easier
$('nav a').click(function(e) {
      // Grab the target URL from the link that was clicked
      var url = $(this).attr("href");

      // This function would get content from the server and insert it into the id="content" element
      $.getJSON("content.php", {contentid : url}, function (data) {
            $("#content").html(data);
      });

      // This is where we update the address bar with the 'url' parameter
      window.history.pushState('object', 'New Title', url);

      // This stops the browser from actually following the link
      e.preventDefault();
});

Important Caveat

Although the code above works as a proof of concept, there are some additional things to do, in order to make this work as smoothly as my demo.
 
In particular, you'll probably want the 'back' button on the user's browser to work, which this code snippet won't allow. (The URL will change, but the content from those historical pages still needs to be loaded in.) To enable this, you'll need to listen for the popstate event, which fires when the URL changes as the user moves back or forward through their history, allowing you to fire whatever function you have for grabbing page content and loading it in.
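A minimal sketch of that back-button handling, assuming your content-fetching code has been wrapped in a loadContent() helper (a hypothetical function name):

// Fires when the user hits back/forward and the URL changes without a page load
window.addEventListener('popstate', function () {
      // Re-fetch and display the content that belongs to the URL now in the address bar
      loadContent(window.location.pathname);
});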
 
Again, you can see this in action in the head of the demo page at http://html5.gingerhost.com.

Resources and Further Reading:

There are plenty of resources that cover the HTML5 History API pretty thoroughly, so I'll defer to them in letting you read about the details at your leisure. I'd suggest taking a look at the following


Google Analytics Certification and How to Pass the GAIQ Test

Posted by Slingshot SEO

When I hear the word, “cookies,” I generally think of warm, gooey homemade chocolate chip cookies. But when it comes to passing the Google Analytics Individual Qualification (GAIQ) test, I had to put my cravings for Mrs. Fields’ Nibblers aside and learn about the differences between first-party and third-party cookies.

Google Analytics Cookie Monster - Delete cookies!?

Cookies are just one of the many topics covered on the exam, and passing can be a daunting task, especially for those unfamiliar with the program and its ever-changing features. The GAIQ test is one of the best ways to become a more knowledgeable user and deepen your understanding of Google Analytics. For those new to GA or seeking additional tips & tricks, check out our Google Analytics Guide. Studying for the exam can be a fun process, and I would like to offer some advice so that you can pass as well.

The GAIQ Test

The test is limited to 90 minutes, consisting of 70 multiple choice questions with two to five answer choices. The trickiest part is that some questions ask you to select "all that apply," which means a five-option question can have up to 31 possible answer combinations (any non-empty subset of the choices). The test can be accessed at the Google Testing Center, and each sitting costs $50. During the test, you have the ability to pause and come back anytime within the next five days. Although the questions vary in difficulty, it's an open book exam. The pass mark is 80%, which means you must answer at least 56 out of 70 questions correctly.

Preparing for the Exam

All the topics and content covered on the exam are available through Google’s Google Analytics IQ Lessons, formerly known as Conversion University, which consists of online lessons that are freely available for viewing at your leisure. There are 21 different presentations that are easily digestible and will last a total of roughly 2 hours and 15 minutes. However, these presentations move fairly quickly, so I recommend pausing and taking notes that you can use during the exam. A rough outline of topics is listed below:

  • Accounts & Profiles
  • Interface Navigation
  • Tracking Code
  • Interpreting Reports
  • Traffic Sources
  • Campaign Tracking & AdWords Integration
  • AdWords
  • Goals
  • Funnels
  • Filters
  • Advanced Segments
  • Cookies
  • Regular Expressions
  • E-Commerce Tracking
  • Domains & Subdomains
  • Custom Reports
  • Motion Charts
  • Internal Site Search
  • Event Tracking & Virtual Page views

The GAIQ lessons are the best way to study for the test and should be your starting point. I recommend watching each video at least twice, and using your own Google Analytics profile in tandem with the videos, to practice and walk through each lesson to make sure you understand the topics. It is important to note that there have been many changes to Google Analytics over the past year, and Google updated its exam in January 2012. The fundamental material covered on the exam has stayed the same, but if you are still using the old version of Analytics, you may want to get used to the new version and all of its new features before taking the exam.

I would not be surprised if Google started asking questions on features that are only available in the new version (multi-channel funnels, real-time analytics, social plugin analytics, and flow visualization). Also, there is always a chance that Google has made an update, but hasn’t changed the test question or GAIQ lesson videos. For example, the “__utmc” cookie is no longer used by the Google Analytics tracking code to determine session status, but it is still mentioned in the GAIQ lessons and could still be asked about on the exam as one of the cookies that Google sets. When in doubt, I would answer questions like this based on whatever has been taught in the GAIQ lessons. It is more likely that Google would not change the test without updating the videos first.

When Taking the Exam

For a “pass-the-exam” strategy, the most important thing to remember is to keep moving. Answer all of the easy questions first and don’t get tied down by any one question. You have roughly 1 minute and 16 seconds to answer each question, so if you answer all of the easy ones first, you can judge how much time you have left to finish the remaining, tougher questions. You have the ability to mark questions, answer them, or leave them incomplete. A good strategy is to answer the easy ones, mark the questions that require some research, and leave the questions you have absolutely no idea about blank. That way, during your second run-through, you can review all marked questions first and do the most difficult questions last. I feel safe in assuming that all questions are weighted equally in the score and that there is no penalty for guessing incorrectly.

During the test, I recommend having the following resources open on your computer: Google Analytics IQ Lessons, an Analytics account, the Google Help Center, and Jens Sorensen’s test notes. There will be some questions that require research, so keep these resources close.

Practice Problems

I’ve included some original practice problems with solutions that will help you get ready for the exam. These problems are meant to challenge you, but do not necessarily represent how Google will test you on these topics. These problems should be a final test to take after watching all of the GAIQ lessons. They are available for download in the link below :-)

Download Slingshot SEO GA IQ Practice Problems

Passing the Exam

If you pass, Google sends you an email with an official certificate showing that you have passed the exam. The certificate is valid for 18 months from the date of the passed exam. Google does not give you the results for each question, but it lists the percentage of questions you answered correctly, and the four most missed topics on your exam.

Google Analytics Qualified Individual Badge

Sometimes, the difference between passing and failing can be a matter of how you interpret some of Google’s questions. They can be quite tricky, so be sure to pay attention to detail on every question. If you fail, you may take the exam again, but you have to wait 14 days and can only take it twice within a 30-day period. You have to pay the $50 fee for each sitting, so do your best to pass it the first time.

If you’ve taken the exam, we’d love to hear your thoughts and study tips. Or if you have any other questions, please leave a comment!

Best of luck!
