
Thursday 4 December 2014

301 Redirect Old Domain Without Passing Link Juice or Referral Signals

If you were hit by Google's Penguin algorithm and tried your best to disavow all the "bad" links pointing to your site, but your site has still not recovered, then you might be thinking of starting a new website with a clean backlink profile and white hat SEO.

Of course you do not want your visitors to land on the old abandoned site, and of course you cannot 301 redirect the old domain to the new one, or else you will carry all the harmful link signals with you.

So, once you have decided to start a fresh site, the best approach is this simple yet very effective technique:

1- Get a new domain name to use as an intermediary (example: www.oldsite2.com)
2- Add a robots.txt file to the intermediary site and disallow its entire root:

User-agent: *
Disallow: / 

3- Redirect (301) the old domain to the intermediary.
4- Permanently redirect (301) the intermediary to the brand new domain, as sketched below.
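On an Apache server, each hop is a one-line rewrite rule. A minimal .htaccess sketch, using the example domains above plus a placeholder www.newsite.com for the brand new domain (note the condition that keeps the intermediary's robots.txt reachable, since step 2 depends on it):

# .htaccess on the old domain - everything 301s to the intermediary
RewriteEngine On
RewriteRule ^(.*)$ http://www.oldsite2.com/$1 [R=301,L]

# .htaccess on the intermediary (www.oldsite2.com) - serve robots.txt
# itself, 301 everything else to the new domain
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteRule ^(.*)$ http://www.newsite.com/$1 [R=301,L]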



More to do:

You can also:
1- Add a robots.txt file to the old site to keep search engines from crawling it (same directives as step 2)
2- Use Google's URL removal tool to remove all the URLs of the old site.


A Fresh Beginning:

Now you have a new opportunity to start fresh, with a new domain, new content, and a better strategy.



Short Story Long:

  • http://searchenginewatch.com/sew/how-to/2355513/youve-been-hit-by-penguin-should-you-start-over-or-try-to-recover
  • http://searchenginewatch.com/sew/how-to/2384644/can-you-safely-redirect-users-from-a-penguin-hit-site-to-a-new-domain

Thursday 5 June 2014

How to Allow Visitors to Your Site to Hide Their Data from Google Analytics - With a Browser Add-on!

To give website visitors the ability to prevent their data from being used by Google Analytics, Google has developed the Google Analytics opt-out browser add-on for the Google Analytics JavaScript (ga.js, analytics.js, dc.js).

If you want to opt-out, download and install the add-on for your web browser.
The Google Analytics opt-out add-on is designed to be compatible with Chrome, Internet Explorer 8-11, Safari, Firefox and Opera.

In order to function, the opt-out add-on must be able to load and execute properly in your browser. For Internet Explorer, third-party cookies must be enabled.


Here is the official link to install the Google Analytics opt-out add-on: https://tools.google.com/dlpage/gaoptout
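The add-on works on the visitor's side. If you run a site and want to offer an in-page opt-out instead, Google Analytics also documents a window property that disables tracking for a given property ID. A minimal sketch (UA-XXXXXX-Y is a placeholder for your own property ID):

<script>
// Documented Google Analytics opt-out: setting this window property to true
// disables ga.js/analytics.js for the given property on this page.
var gaProperty = 'UA-XXXXXX-Y';
var disableStr = 'ga-disable-' + gaProperty;

// Honor a previously stored opt-out on every page load
if (document.cookie.indexOf(disableStr + '=true') > -1) {
  window[disableStr] = true;
}

// Call this from an "opt out" link to store the choice in a cookie
function gaOptout() {
  document.cookie = disableStr + '=true; expires=Thu, 31 Dec 2099 23:59:59 UTC; path=/';
  window[disableStr] = true;
}
</script>
<a href="javascript:gaOptout()">Disable Google Analytics for this site</a>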



You can even hide from more:

Avoid having your data collected by Digital Analytix

If you would like to opt out from ever being measured by Digital Analytix, you may do so by clicking here. If you choose this opt-out, a cookie will be placed on your computer instructing Digital Analytix not to measure your use of, or visits to, events with Digital Analytix tags. However, please note that if your web browser does not accept cookies, or if you delete the opt-out cookie, the opt-out is invalidated. Also, please note that this opt-out is only effective for the web browser you were using when you opted out, because cookies are specific to each web browser.

Opting out of Analytical Performance Cookies:

If you would like to opt out of Analytics cookies, please do so by clicking on the links below:

Opting out of Behavioral Advertising Cookies:

If you would like to disable "third party" cookies generated by advertisers or providers of targeted advertising services, you can turn them off by going to the third party's website and getting it to generate a one-time "no thanks" cookie that will stop any further cookies being written to your machine. Here are links to the main third-party advertising platforms we use, each of which has instructions on how to do this:


You can find out how to decline other online behavioral advertising by visiting:





Monday 7 April 2014

10 Web Usability Lessons from Steve Krug's Don't Make Me Think

Don't Make Me Think is a book by Steve Krug about human-computer interaction and web usability. The book's premise is that a good software program or website should let users accomplish their intended tasks as easily and directly as possible. (Wikipedia)


A Summary for the Book



10 Usability Lessons



Update:
I found the whole book online here, if you are interested in a free copy.

Thursday 27 March 2014

Let People Know "In Real-Time" When Your Blog is Updated with PubSubHubbub

As a blogger (publisher), you want to notify the web about your new blog post: first, to get it crawled faster; and second, to avoid having your article stolen by another blog that gets crawled and ranked before you do.

That is when PubSubHubbub comes in handy, as it sends real-time notifications to feed hubs whenever you update your blog.

A simple, open, server-to-server webhook-based pubsub (publish/subscribe) protocol for any web accessible resources.

PubSubHubbub is used for content publishing by many websites, including all blogs served by blogger.com and WordPress.com, news sites such as CNN and Fox News, and social networks like Diaspora.

Parties (servers) speaking the PubSubHubbub protocol can get near-instant notifications (via webhook callbacks) when a topic (resource URL) they're interested in is updated.

The protocol in a nutshell is as follows:
  • A resource URL (a "topic") declares its hub server(s) in its HTTP headers, via Link: <hub url>; rel="hub". The hub(s) can be run by the publisher of the resource, or can be a community hub that anybody can use, such as Google's or Superfeedr's.
  • A subscriber (a server that's interested in a topic) initially fetches the resource URL as normal. If the response declares its hubs, the subscriber can then avoid lame, repeated polling of the URL and can instead register with the designated hub(s) and subscribe to updates.
  • The subscriber subscribes to the Topic URL from the Topic URL's declared Hub(s).
  • When the Publisher next updates the Topic URL, the publisher software pings the Hub(s) saying that there's an update.
The protocol is decentralized and free. No company is at the center of this controlling it. Anybody can run a hub, or anybody can ping (publish) or subscribe using open hubs.
Google and Superfeedr offer a public and scalable open hub for anybody to use.




How to Use PubSubHubbub with Your Feeds?

  • Add an //atom:link tag under //atom:entry for Atom feeds or under //rss:rss/channel for RSS feeds. The //atom:link tag should have its rel attribute set to hub and its href attribute set to https://pubsubhubbub.appspot.com/
  • Alternatively, your feed can be served with two Link headers:
    • one with rel attribute set to hub and href attribute set to https://pubsubhubbub.appspot.com/
    • one with rel attribute set to self and href attribute set to the feed URL of the feed
  • The above is covered in more detail in the PubSubHubbub 0.4 specification.
  • Whenever new content is added to a feed, notify the hub (a ping sketch follows this list). This is accomplished by sending a POST request to https://pubsubhubbub.appspot.com/ with Content-Type: application/x-www-form-urlencoded and two parameters encoded in the body:
    • hub.mode equal to publish
    • hub.url equal to the feed URL of the feed that has been updated. This field may be repeated to indicate multiple feeds that have been updated
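As a sketch, here is what the publish ping looks like as a server-side script (Node 18+ with the built-in fetch; http://example.com/feed.xml is a placeholder feed URL):

// Notify the public hub that the feed has new content.
// Per the 0.4 spec: a form-encoded POST with hub.mode=publish and hub.url.
fetch('https://pubsubhubbub.appspot.com/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'hub.mode=publish&hub.url=' +
        encodeURIComponent('http://example.com/feed.xml')
}).then(function (res) {
  // The hub replies 204 No Content when it accepts the ping
  console.log(res.status === 204 ? 'Hub notified' : 'Ping failed: ' + res.status);
});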

Hub debug

From the hub's debug page you can:
  • Subscribe to a feed or debug your subscriber
  • Publish a feed or debug your published feeds

If you are a WordPress blogger and want to ping the hub easily, there is a plugin for that.
https://wordpress.org/plugins/pubsubhubbub/

The plugin does the following:

  • Sends real-time notifications when you update your blog
  • Supports multi-user installations (WordPress MU)
  • Supports multiple hubs
  • Supports all of the feed formats used by WordPress, not just ATOM and RSS2
  • Supports the latest spec (version 0.4)
  • Announces which hubs you are using by adding <link rel="hub" ...> declarations to your template header and ATOM feed
  • Adds <atom:link rel="hub" ...> to your RSS feeds along with the necessary XMLNS declaration for RSS 0.92/1.0

Wednesday 19 February 2014

Get In-Depth Insights on Which Products Your Customers Want but You Don't Sell!

You have a website! You sell products! You offer services! Or whatever...
You have your own internal search engine with a search box to make life easier for your visitors...
That is great!

Do you want to know which products your visitors are looking for but cannot find in your store, because you did not know about such demand?

A very valuable piece of information is the list of terms that people search for that bring back zero matches. Here you've really blown it with your customer, and you'll want to see this information so you can improve the experience in the future.

Your site's internal search engine probably provides reports on searches with no results, but using Google Analytics' event tracking can be a simpler method, with a one-time setup of a one-line snippet.

Set your page up so that if there are no results, the following piece of JavaScript code is run:

_gaq.push(['_trackEvent', 'Search Results', 'No Results', '[Searched phrase]', 1, true]);

Note that it is important that [Searched phrase] is replaced by the actual search string your user entered; otherwise your reports will be of no use to you. Also note that in this case the Non-Interaction variable is set to true. This means that if this is the only page the visitor sees, the visit will still count as a bounce, and that makes sense, because the No Results page was not helpful to them.
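As a sketch, wiring this into a results page might look like the following, where query and resultsCount are placeholder variables your search template would already have:

// Fire the event only when the internal search came back empty.
// 'query' and 'resultsCount' are placeholders from your own results page.
if (resultsCount === 0) {
  _gaq.push(['_trackEvent', 'Search Results', 'No Results', query, 1, true]);
}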

Tuesday 11 February 2014

Use rel="alternate" hreflang="x" annotations to Serve the Correct Language or Regional URL to Searchers!

The rel="alternate" hreflang annotation enables you to tell search engines that a web page is available in different language versions. For example, you could add the following to the head section of a web page if that page is available in English, German, and French:

<link rel="alternate" href="http://en.example.com" hreflang="en" />
<link rel="alternate" href="http://de.example.com" hreflang="de" />
<link rel="alternate" href="http://fr.example.com" hreflang="fr" />

All other languages can be directed to the default version of your website:

<link rel="alternate" href="http://example.com" hreflang="x-default" />

Some example scenarios where rel="alternate" hreflang="x" is recommended:
  • You keep the main content in a single language and translate only the template, such as the navigation and footer. Pages that feature user-generated content, like forums, typically do this.
  • Your content has small regional variations with similar content in a single language. For example, you might have English-language content targeted to the US, GB, and Ireland.
  • Your site content is fully translated. For example, you have both German and English versions of each page.

Using language annotations

Imagine you have an English language page hosted at http://www.example.com/, with a Spanish alternative at http://es.example.com/. You can indicate to Google that the Spanish URL is the Spanish-language equivalent of the English page in one of three ways:
  • HTML link element in header. In the HTML <head> section of http://www.example.com/, add a link element pointing to the Spanish version of that webpage at http://es.example.com/, like this:
    <link rel="alternate" hreflang="es" href="http://es.example.com/" />
  • HTTP header. If you publish non-HTML files (like PDFs), you can use an HTTP header to indicate a different language version of a URL:
    Link: <http://es.example.com/>; rel="alternate"; hreflang="es"
  • Sitemap. Instead of using markup, you can submit language version information in a Sitemap, as sketched below.
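For the Sitemap method, Google's format nests xhtml:link elements under each url entry, and every URL in the cluster repeats the full set of alternates. A minimal sketch for the English/Spanish pair above:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/" />
    <xhtml:link rel="alternate" hreflang="es" href="http://es.example.com/" />
  </url>
  <url>
    <loc>http://es.example.com/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/" />
    <xhtml:link rel="alternate" hreflang="es" href="http://es.example.com/" />
  </url>
</urlset>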
If you have multiple language versions of a URL, each language page must identify all language versions, including itself.  For example, if your site provides content in French, English, and Spanish, the Spanish version must include a rel="alternate" hreflang="x" link for itself in addition to links to the French and English versions. Similarly, the English and French versions must each include the same references to the French, English, and Spanish versions.
You can specify multi-language URLs in the same domain as a given URL, or use URLs from a different domain.
It's a good idea to provide a generic URL for geographically unspecified users if you have several alternate URLs targeted at users with the same language, but in different locales. For example, you may have specific URLs for English speakers in Ireland (en-ie), Canada (en-ca), and Australia (en-au), but want all other English speakers to see your generic English (en) page, and everyone else to see the homepage. In this case you should specify the generic English-language (en) page for searchers in, say, the UK. You can annotate this cluster of pages using a Sitemap file or using HTML link tags like this:
<link rel="alternate" href="http://example.com/en-ie" hreflang="en-ie" />
<link rel="alternate" href="http://example.com/en-ca" hreflang="en-ca" />
<link rel="alternate" href="http://example.com/en-au" hreflang="en-au" />
<link rel="alternate" href="http://example.com/en" hreflang="en" />
For language/country selectors or auto-redirecting homepages, you should add an annotation for the hreflang value "x-default" as well:
<link rel="alternate" href="http://example.com/" hreflang="x-default" />

Supported language values

The value of the hreflang attribute identifies the language (in ISO 639-1 format) and optionally the region (in ISO 3166-1 Alpha 2 format) of an alternate URL. For example:
  • de: German content, independent of region
  • en-GB: English content, for GB users
  • de-ES: German content, for users in Spain
Do not specify a country code by itself! Google does not automatically derive the language from the country code. You can specify a language code by itself if you want to simplify your tagging; add the country code after the language to restrict the page to a specific region. Examples:
  • be: Belarusian language, independent of region (not Belgian French)
  • nl-be: Dutch for Belgium
  • fr-be: French for Belgium 
For language script variations, the proper script is derived from the country. For example, when using zh-TW for users in Taiwan, the language script is automatically derived (in this example: Chinese Traditional). You can also specify the script itself explicitly using ISO 15924, like this:
  • zh-Hant: Chinese (Traditional)
  • zh-Hans: Chinese (Simplified)
Alternatively, you can also specify a combination of script and region. For example, use zh-Hans-TW to specify Chinese (Simplified) for Taiwanese users.
Finally, the reserved value "x-default" is used for indicating language selectors/redirectors which are not specific to one language or region, e.g. your homepage showing a clickable map of the world.

Common Mistakes

Important: Make sure that your provided hreflang values are actually valid; invalid language and country codes are the most common mistake.
In general, you are advised to sign your site up with Webmaster Tools. This enables you to receive messages about incorrect annotations.
Example: Widgets, Inc. has a website that serves users in the USA, Great Britain, and Germany. The following URLs contain substantially the same content, but with regional variations:
  • http://www.example.com/ Default page that doesn't target any language or locale; may have selectors to let users pick their language and region.
  • http://en.example.com/page.html Generic English-language page. Contains information about fees for shipping internationally from the USA.
  • http://en-gb.example.com/page.html English-language; displays prices in pounds sterling.
  • http://en-us.example.com/page.html English-language; displays prices in US dollars.
  • http://de.example.com/seite.html German-language version of the content
rel="alternate" hreflang="x" is used as a page level, not a site level, and you need to mark up each set of pages, including the home page, as appropriate. You can specify as many content variations and language/regional clusters as you need.
To indicate to Google that you want the German version of the page to be served to searchers using Google in German, the en-us version to searchers using google.com in English, and the en-gb version to searchers using google.co.uk in English, userel="alternate" hreflang="x" to identify alternate language versions.
Update the HTML of each URL in the set by adding a set of rel="alternate" hreflang="x" link elements. For the default page that doesn’t target any specific language or locale, add rel="alternate" hreflang="x-default":
<link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
<link rel="alternate" hreflang="en-gb" href="http://en-gb.example.com/page.html" />
<link rel="alternate" hreflang="en-us" href="http://en-us.example.com/page.html" />
<link rel="alternate" hreflang="en" href="http://en.example.com/page.html" />
<link rel="alternate" hreflang="de" href="http://de.example.com/seite.html" />
This markup tells Google's algorithm to consider all of these pages as alternate versions of each other.

Monday 3 February 2014

The Guestographic Formula: Effective Way to Get Effective Backlinks

Have you heard of the term Guestographic before?
Guestographic is simply about executing this formula successfully:

Great content + Targeted Outreach + Added Value = Links


Here are some important tips to make the Guestographic formula work:

First: The On-Page Strategy 

  1. The content has to be informative and valuable
  2. The design has to be attractive and professional
  3. Use Gifographics (animated infographics)
  4. Make it sharable (add social media buttons and a Pin it button)
  5. Make it embeddable. For WordPress sites, use an embed code generator plugin, or use generators like the SEOgadget embed code generator or the Siege Media embed code generator. (Sometimes, if you have JavaScript code to be embedded on a WordPress site, you will need to use the Insert HTML Snippet plugin.) A sketch of a typical embed code follows this list.
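For item 5, the generated embed code is typically just the infographic image wrapped in a link back to your page, along these lines (URLs and dimensions are placeholders):

<a href="http://example.com/my-infographic">
  <img src="http://example.com/images/my-infographic.png"
       alt="My Infographic" width="600" />
</a>
<p>Infographic courtesy of <a href="http://example.com">Example.com</a></p>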

Second: The off-Page Strategy 

  1. Submit the infographic to infographic submission sites (another list), or use a paid service to submit it for you
  2. Find sites that write about your infographic’s topic and show them the infographic (you can also offer to write an introduction for it) 

Wednesday 29 January 2014

How to Screw an SEO Interview? A Crash Course for Internet Lovers Who Want to Make a Living

What really disgusts me in the world of SEO is to find a smart person who loves the internet and social media, is eager to learn, and has a high sense of creativity, being asked silly questions about SEO and hired or rejected based on his or her "by the book" answers.

What those employers do not know is that SEO is not a Science, it is Literature; It is ART. 
And as an employer you may stumble one day upon an SEO Picasso, Michelangelo, Bernini, or Dante. Will you ask them with which hand they hold the brush, or for the definition of "mosaic"? If you do, then congratulations: your future as an internet businessman is almost over.


Anyway, this post is not for employers, but for those fresh-minded internet lovers who, like me, believe that SEO is an art and want to pursue a career as SEO specialists, web strategists, or internet marketing professionals, but need to get past all those silly HR background checks and old-fashioned SEO questions.

I am not saying an SEO specialist should not have basic SEO knowledge. What I am saying is that such knowledge can be acquired in a couple of days. Plus, SEO is always evolving and changing, and what were once considered basic SEO tasks are now considered spamming and harmful. Further, if SEO were pure science and everybody did it according to the theory, how do you think millions of sites would be ranked if all of them used the same techniques from that same book?

Neil Patel SEO Method


OK, let's start...

Q: What are the most important onsite SEO factors?
A: Meta tags, content, and internal links

Q: What is a good Back-linking Strategy?
A:  
  1. Directory listing (relevant niche) inc. DMOZ and Yahoo! Dir
  2. Local Directories (for Local Businesses)
  3. Link Baits and Skyscrapers (Content based)
  4. Link Chains and Link Pyramids
  5. Social Media
  6. Press Releases
  7. Guest Posting (on relevant and authoritative pages) 
  8. Social Bookmarking
  9. Microsites (WordPress or Tumblr)
  10. Affiliation links
  11. Review links
  12. Testimonial Links
  13. Sponsorship Links
  14. Links from clients or suppliers
  15. Sweepstakes, Promotions, and coupons distribution 
  16. Forums Participation (relevant and authoritative)
  17. Commenting on relevant blogs (avoid automatic scraping and Spinning)
  18. Monitoring backlinks (with tools like Open Site Explorer, Majestic SEO, or Ahrefs)
  19. Disavowing harmful links
  20. Analyzing Traffic Channels (Google Analytics) 

Q: What is a good Local SEO Strategy?
A:

  1. Use a local TLD (top-level domain): .CA for Canada, .IT for Italy, .EG for Egypt, etc.
  2. Submit your business to Google Local and link the map on your contact-us page
  3. Have the business address, phone number, and working hours on the site, in a structured data format (see the sketch after this list)
  4. Use Local Keywords 
  5. Use a Local phone number (not a 1-800 toll free)
  6. Submit business to Local Directories
  7. Participate in online Local social activities (Forums, social media, etc.)
  8. Get reviews and testimonials
  9. Add the site link and logo to employees email signatures
  10. Be active on social media with promotions, discounts, competitions, sweepstakes, and seasonal coupons (use scarcity marketing) and do a local press release (online and offline) to advertise such promos. 
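For point 3, one way to mark up the address, phone number, and working hours is schema.org's LocalBusiness vocabulary. A minimal microdata sketch (all business details are placeholders):

<div itemscope itemtype="http://schema.org/LocalBusiness">
  <span itemprop="name">Example Dental Clinic</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Main St</span>,
    <span itemprop="addressLocality">Montreal</span>
  </div>
  Phone: <span itemprop="telephone">+1-514-555-0123</span>
  <meta itemprop="openingHours" content="Mo-Fr 09:00-17:00" />
  Open weekdays 9:00-17:00
</div>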

Q: What are Canonicalization, EMD, Pagination, Keyword Proximity, Keyword Density, Keyword Frequency, Keyword Prominence, Keyword Stuffing, Cloaking, 404, 301, LSA, and LSI?
A: 
  1. URL Canonicalization: picking one canonical (preferred) URL among the duplicate versions of a page (see the sketch after this list)
  2. Pagination: using rel="next" and rel="prev" attributes to tell search engines that a page is continued on other pages, used mainly with products or long articles (also sketched below)
  3. EMD: Exact Match Domain; using the target keyword as a domain name (e.g., www.WhatisTheBestInsuranceCompanyinCanada.com). Although Google says that EMDs are devalued, many experts believe they can still be effective for ranking for a specific keyword.
  4. KW Proximity: The distance between Keywords in a long-tail phrase; The shorter the distance the more relevant the Phrase is to the search query. (For example: a website contains the keywords that make up the search term “dentist Montreal implant” in the heading “Your professional dentist in Montreal; dental practice for minimally invasive implants”. The search term proximity between “dentist” and “Montreal” is one word, between “Montreal” and “implant” it is five words. The smaller the distance between a search term’s individual keywords, the more relevant it will be from a search engine's point of view.)
  5. KW Density: the ratio (percentage) of keywords to the total number of indexable words on a web page. (A good ratio is between 2 and 8%.)
  6. KW Frequency: the number of times a keyword or keyword phrase appears within a web page.
  7. KW Prominence: A KW is prominent if placed in the Title tag (or H1 header)
  8. KW Stuffing: a black-hat (not good) SEO technique where you add all the keywords you want to rank for next to each other, or all over the page, just for the sake of SEO, without giving a logical meaning to the reader.
  9. Cloaking: another black-hat technique, where the content presented to the search engine spider is different from that presented to the user (like hiding some keywords with JavaScript or CSS, or just using the same color as the page background)
  10. 404: a Not Found error, which could be caused by a broken link, a deleted page, or just a URL typo. (It is important to have a custom 404 page that navigates users to other pages on the site instead of showing them a bare error.)
  11. 301: permanent redirection; used to redirect links and pages to other URLs and keep the link juice flowing
  12. LSA: Latent Semantic Analysis:  Analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text.
  13. LSI: Latent Semantic Indexing: identifying patterns in the relationships between the terms and concepts (synonyms) contained in an unstructured collection of text. (So when you use a keyword in the title of an article, make sure to write relevant content about that keyword even if you do not mention the exact keyword.)
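As a quick sketch of the first two definitions, both are plain link tags in the page head (URLs are placeholders):

<!-- Canonicalization: declare the preferred URL of a duplicated page -->
<link rel="canonical" href="http://www.example.com/product" />

<!-- Pagination: on page 2 of an article, point to its neighbors -->
<link rel="prev" href="http://www.example.com/article?page=1" />
<link rel="next" href="http://www.example.com/article?page=3" />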
Q: What are the names of Google's algorithms?
A: Panda penalizes bad content, Penguin penalizes bad backlinks, and Hummingbird is an algorithm that understands the intent of the search query and does not penalize.

Q:  Where do you get your SEO news and updates from?
A: Blogs (Google Webmaster Central, SERoundtable, Matt Cutts, KISSmetrics, Moz, Search Engine Land, and Have Results)

Q: What is the first thing you do to analyze a website?
A: I crawl it with Screaming Frog SEO Spider and check its backlinks

Q: What tools do you use?
A: There are thousands of SEO tools, and most of them do the same thing, since they just grab data from the top tools through APIs. But for me, I feel comfortable with the following: Google Analytics, Webmaster Tools, Tag Manager, AdWords Keyword Planner, Moz, My SEO Tool, WooRank, Rank Tracker, Raven, SEMrush, SEO Profiler, Pingdom, SimilarWeb, and some browser extensions for quick analysis.


All My Best Wishes 


Other resources: 
  • http://www.slideshare.net/malarkodiseo/seo-26811106 
  • http://moz.com/ugc/-7-job-interview-questions-to-ask-a-senior-seo-specialist 
  • http://searchenginewatch.com/article/2295280/9-Interview-Questions-to-Ask-Your-SEO-Hires 


Friday 24 January 2014

SEO during site downtime or Maintenance

If you take down your website temporarily, you must inform search engines such as Google. This is done by using the HTTP status code 503 Service Unavailable, which tells search engines that the server is temporarily unavailable. To do this, you must first create a file on the server that returns a 503 status code; when a search engine sees this, it will understand the situation. This can be done by copying the few lines below into Notepad (or the like) and saving the file as 503.php. You must then place this file in the root of your server.
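Based on the Moz source linked at the end of this post, 503.php looks roughly like this (a sketch; adjust the Retry-After value to your situation):

<?php
header('HTTP/1.1 503 Service Temporarily Unavailable');
header('Status: 503 Service Temporarily Unavailable');
header('Retry-After: 3600'); // seconds until crawlers should retry (or a GMT date)
?>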

The first two header lines say that this is a 503 status code, and the last one tells when the website is expected to be online again. Google understands this message, so it is possible to tell Google when to visit the website again. You must provide either a number (of seconds) or a date. If you live in Denmark like I do and you expect to return on the 5th of January 2012 at 14:00, you must put down:
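Based on the same source, the Retry-After line becomes:

header('Retry-After: Thu, 5 Jan 2012 13:00:00 GMT');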



Notice that I wrote 13:00:00 in the code, even though I wrote 14:00:00 above. This is due to the fact that the time must be provided in GMT/UTC, which is, in my case, 1 hour behind local time.

But it is not enough to just put a 503 message on your server. You will receive visitors (Google included) from many different sources, heading to all sorts of pages of your website. They must all be redirected to the message explaining that the website is temporarily closed.

On an Apache/Linux server, this can be easily solved by using a .htaccess file to redirect all the pages towards the 503.php file. The .htaccess file is often used for 301 redirects, but that is not our purpose here. We will use a 302 redirect. You may have been previously warned about using this sort of redirect, and for good reason. It can do a great deal of damage if not used correctly. But in this case, it must be used, and in fact a 301 redirect would be detrimental in its place.

Save the following 6 lines as a .htaccess file and place it in the root of your server as well.
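Based on the Moz source, the file looks roughly like this (a sketch; line 4 holds a placeholder IP, which we will deal with in a moment):

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteCond %{REMOTE_ADDR} !^123\.456\.789\.000
RewriteCond %{REQUEST_URI} !^/503\.php [NC]
RewriteRule .* /503.php [R,L]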
The 'R' in the last line indicates that this is a 302 redirect; R is 302 by default. To create a 301 redirect, it would instead say [R=301,L]. The clever thing about this file, however, is that we can give ourselves access to the site while simultaneously showing everyone else the 503 message. Let's say you have the following IP address: 12.345.678.910. You then put those numbers in line 4, as shown below:
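RewriteCond %{REMOTE_ADDR} !^12\.345\.678\.910

Requests from your IP now skip the redirect, while everyone else (search engines included) is sent to the 503 page.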

When you have placed the two files (503.php and .htaccess) on your server, you’re done. You now have peace and quiet to tinker with your website, as long as you leave those two files in the root of your server – and if Google visits, they’ll know that the site will be back later, and you’ve even let them know when to try again.

But what about passing on the message to your visitors?

How to tell your visitors that the website is only closed temporarily.

With a few additions to the 503.php file we made just before, we can pass a message on to visitors:
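A sketch of the extended file, again based on the Moz source (the HTML is a placeholder message; the header() calls must come before any output):

<?php
header('HTTP/1.1 503 Service Temporarily Unavailable');
header('Status: 503 Service Temporarily Unavailable');
header('Retry-After: Thu, 5 Jan 2012 13:00:00 GMT');
?>
<!DOCTYPE html>
<html>
<head><title>Down for maintenance</title></head>
<body>
<h1>We are doing a bit of maintenance</h1>
<p>Sorry for the inconvenience - we will be back at 14:00.</p>
</body>
</html>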
Source: http://moz.com/blog/how-to-handle-downtime-during-site-maintenance 

Thursday 23 January 2014

A JavaScript code to Track Keyword Ranking using Analytics Events

I found this method in a post by Justin Cutroni, a notable author of several analytics books.
It details a way to measure your website's rank for certain keywords by installing custom code. It requires that you have Google Analytics installed on your site, plus a little programming knowledge (or at least knowledge of how to place the code).

This method uses custom code and Google Analytics events to collect and report on keywords that people used to find your site.

Here is the code, which looks at the referring URL from an organic Google search.
Code:
<script type="text/javascript">
if (document.referrer.match(/google\.com/gi) && document.referrer.match(/cd/gi)) {
  var myString = document.referrer;
  var r        = myString.match(/cd=(.*?)&/);
  var rank     = parseInt(r[1]);
  var kw       = myString.match(/q=(.*?)&/);
 
  if (kw[1].length > 0) {
    var keyWord  = decodeURI(kw[1]);
  } else {
    keyWord = "(not provided)";
  }
 
  var p        = document.location.pathname;
  _gaq.push(['_trackEvent', 'RankTracker', keyWord, p, rank, true]);
}
</script>
Note that the above section of code will only pull keywords from referring URLs from Google organic search.

An explanation of the code


This section of code parses the rank and the keyword out:
Code:
var myString = document.referrer;
var r = myString.match(/cd=(.*?)&/);
var rank = parseInt(r[1]);
var kw = myString.match(/q=(.*?)&/);
and this section of code checks whether the keyword is (not set) or (not provided):
Code:
if (kw[1].length > 0) {
var keyWord = decodeURI(kw[1]);
} else {
keyWord = "(not provided)";
}
And this snippet sends the keyword data to Google Analytics using an event:
Code:
_gaq.push(['_trackEvent', 'RankTracker', keyWord, p, rank, true]);
All of this code goes AFTER your standard Analytics tracking code, which should be installed in the head of your web pages.

If you are unfamiliar with event tracking, you can learn more about it here:

About Events - Analytics Help

Tracking events is pretty simple once you understand how to do it. Events are user actions that can be tracked separately, ranging from a click on an external link to downloads or video views. Events are a great way to track extra things on your site that might otherwise have no data collected about them. Using event tracking is preferred over tracking virtual pageviews.

When you set up events, each one will have 5 parts:
  • Category: In the snippet above, the category is called RankTracker. Note that this can be called anything you want; just be sure to change it in your code snippet before installation. This is how you will identify the data in your Analytics reports under Content -> Events -> Top Events. If you change it, just remember to make it unique and something that will be easy for you to identify later.
  • Action: The action in the code snippet above is KeyWord.
  • Label: In the snippet, this is the "p" variable, and it identifies the landing page. Note that this value is optional. You can use it to provide additional information about the event. In this case it may be helpful to know which page a visitor landed on, to see which pages are ranking for specific key terms in search. For instance, maybe you are optimizing a certain page for a particular keyword; the label will help you track your progress.
  • Value: In the snippet, this is the SERP rank. The rank of the search engine result will be recorded as the value of the event in your reports.
  • Non-interactive: set this to TRUE

This is a truly innovative technique as it gives you real data about the keywords that are driving people to your site through organic search. One thing you have to remember is that this only works when people are visiting your site through organic search. If you don’t receive a lot of organic search traffic, you may not get much use out of the code.

Some important tweaks and notes

Instead of using document.referrer.match(/google\.com/gi) to detect the referring URL, you can use document.referrer.match(/google\./gi) to match foreign versions of Google as well as google.com.

If you had to install the basic Analytics tracking code on every page of your site because you aren't using a template or any kind of include file, you will have to do the same with the keyword-tracking snippet.

This script cannot be loaded using Google Tag Manager.

You can find the original post with comments here: A New Method to Track Keyword Ranking using Google Analytics.

Tuesday 21 January 2014

Google Algorithms Pets in a nutshell

Are you still getting confused among Google's three algorithms: Panda, Penguin, and Hummingbird?
Which is which, and which does what?
Here is a simple diagram showing the function of, and basic information about, each:


Source: http://www.link-assistant.com/news/key-google-updates.html 


Thursday 9 January 2014

How to Get Notified by SMS or Email if my Website is Down?

In 4 simple steps, you can get notified by SMS or email if your website is down, and for free!


  1. Sign in to your Google account and then click here to copy this Google Sheet into your Google Drive. You may use your Google Apps account as well.
  2. Put your website URLs in cell B2 (comma separated) and your email address in cell B3. If you wish to be alerted by text messages, just replace No with Yes in cell B4.
  3. You’ll find a new Website Monitor menu in your Google Docs toolbar. Click Initialize and you’ll get a pop-up asking for authorization. Grant the necessary access.
  4. Go to the Website Monitor menu again and choose “Start” to begin the monitoring process. Close the Google Sheet.
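Under the hood, the sheet's script does something along these lines. This is a hypothetical Google Apps Script sketch, not the actual code behind the sheet:

// Runs on a time-driven trigger; checks the site and emails on failure.
function checkWebsite() {
  var url = 'http://www.example.com';   // placeholder: the monitored URL (cell B2)
  var email = 'you@example.com';        // placeholder: the alert address (cell B3)
  try {
    var response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    if (response.getResponseCode() >= 400) {
      throw new Error('HTTP ' + response.getResponseCode());
    }
  } catch (e) {
    MailApp.sendEmail(email, 'Website Down: ' + url, 'Error: ' + e.message);
  }
}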



Friday 3 January 2014

Link Audit Formula: What to Do if Google Detected a Pattern of Artificial or Unnatural Links Pointing to Your Site?

Even if you do not do any kind of black hat SEO and are not involved in any unnatural link building schemes, you may still find a message from Google telling you that you have been penalized.
I once got this message from Google Webmaster Tools:
Google has detected a pattern of artificial or unnatural links pointing to your site. Buying links or participating in link schemes in order to manipulate PageRank are violations of Google's Webmaster Guidelines.
As a result, Google has applied a manual spam action to gouverneur.com/. There may be other actions on your site or parts of your site.

Recommended action
  • Use the Links to Your Site feature in Webmaster Tools to download a list of links to your site.
  • Ensure that unnatural links pointing to your site are removed.
  • When these changes are made, and you are satisfied that links to your site follow Google's Webmaster Guidelines, submit a reconsideration request. If you're unable to remove links pointing to your site, please provide as much detail as possible in your reconsideration request.
  • For an updated list of manual actions currently applied to your site, visit the Manual Actions page. If no manual actions are listed, there is no longer a need to file a reconsideration request.
If we determine your site is no longer in violation of our guidelines, we'll revoke the manual action.
If you have any questions about how to resolve this issue, please visit the Webmaster Help Forum.


If you got it too and do not know what to do, I believe it is time for a Deep Link Audit.

Let's Start:

Collecting the Link Data
To get a complete backlink profile, you will need a paid subscription to a backlink checker. Everyone seems to have a "favorite", but any of the "Big 4" (SEOmoz, Majestic SEO, Link Research Tools, or Ahrefs) will do the job.

We will be focusing on the following link characteristics:

  • The URL of the page linking to you
  • The URL on your site that is being linked to
  • The IP of the URL linking to you
  • The anchor text used
  • The percentage (mix) of anchor text
  • The follow/nofollow status of the link
  • A measure (rank) of the link's trust & authority

To begin, enter the URL to audit into the backlink tool. Next, export the data into a CSV file. Sort in ascending order (low to high) by domain rank, trust rank, Moz rank, Cemper rank, or whatever rank your tool provides. In theory this will give you a list of links in order from weakest to strongest. I say "in theory" because some of the weakest links may be harmless, and some powerful paid links may be killing you. There is no purely algorithmic solution; doing a link audit correctly requires a manual review.
Analyzing the Link Data
Links that need to be reviewed and considered for removal are the following:
Links that appear on a domain that isn't indexed in Google.
This usually signals a quality problem. A quick way to test for this is to run a "site" command:
Example: "site:haveresults.com"
sometimes a perfectly good site isn’t indexed, because of a bad robots.txt, like:
User-agent: *
Disallow: /
This usually happens when a website leaves the development stage, but the robots.txt isn’t changed to allow the search engines to crawl the site. That’s why a manual review is important.
Links that appear on a website with a malware or virus warning.
This is pretty self explanatory.

Links that appear on the same page as spammy, unrelated links.
Run a Google search command such as inurl:links sex, viagra, payday loans, and you can find unlimited hacked pages, too.
Links that appear on a page whose Google PageRank is gray-barred or zero.
This usually signals poor quality or low trust, but it could also indicate a new page that hasn't been updated in the PR bar. A gray PR bar is not the same as PR 0 (zero). The gray bar is sometimes a quality indicator, but it doesn't necessarily mean that the site is penalized or de-indexed. Many low-quality, made-for-SEO directories have a gray bar or PR 0.
Links coming from link networks.
Link networks are groups of websites with common registrars, common IPs, common C-blocks, common DNS, common analytics, and/or common affiliate codes. Chances are, if a group of websites shares a common IP, you will also find some of the other characteristics of a link network, so that's where I look first. If using Ahrefs, you would navigate to Domain reports > yourwebsite.com > IPs and review the report.
Then drill down to Domain reports > yourwebsite.com > referring domains to discover a crappy network.
Sitewide Links – especially blogroll and footer links.
Most are unnatural and none pass the juice that they once did.
Watch for exceptions to the rule: after a manual review, I was able to determine that in one case the first sitewide link found in the tool was natural and there was no need to remove it. Just one more example of why human intervention is necessary to get a link audit right.

Paid links.
If you are attempting to recover from a manual penalty, every paid link must be removed. No exceptions. The Google spam team spends all day, every day, rooting out paid links. After a while, spotting a paid link becomes second nature. That juicy link that you are certain you can slip by Google will stick out like a sore thumb to the trained eye and will only prolong the agony of a manual penalty.
Beyond the specific link types that could be considered "suspicious", there are new link rules that need to be reviewed and adhered to in a post-Penguin era.
Post-Penguin Link Audit Considerations
Keep in mind that Penguin is just the latest anti-link-spam algorithm rolled out by Google. They are hammering websites built on link schemes and rewarding sites with a natural backlink profile. A natural profile contains an assortment of link types pointing to a website. Your audit should turn up a good mix of:
  • Brand links: Variations include: Your Domain, YourDomain.com, www.YourDomain.com, YourDomain.
  • Exact-match anchor text keyword links: These anchor text links should point to the most appropriate page on the website (the one you are optimizing).
  • Partial-match keyword links: It's important not to over-optimize with exact match keywords, otherwise you could trip a phrase-based filter.
  • Generic links: Like "Read More" or "Click Here." Keep in mind that good content should fill this need with little if any work required on your part.
  • Page title links: Some of your links should be the same as your page title.

There are some good tools on the market like Link Detox and Remove’em to help you with link audits and even link removals. The key takeaway is that no matter what tool you are using, a human review is going to be necessary to “get it right.” Leaving it to metrics alone is a formula for failure.

What follows is a step-by-step, tactical walkthrough of exactly how to perform a link profile audit, and how to figure out which links should be removed and/or disavowed.
What you’ll need:
  • Scrapebox (A tool every SEO must have in their arsenal)
  • Proxies for Scrapebox (optional but recommended; I suggest the "Bonanza" package from the "Exclusive Proxies" section)
  • Microsoft Excel

Find Your Anchor Text Ratio

To get started, we need to analyze the most important signal that Google’s Penguin algorithm looks for: over-optimization of anchor text.
Step 1: Get a list of your website's inbound links and put the list in your Excel spreadsheet. You can get this information from Google Webmaster Tools and from the backlink checkers mentioned above.
For the most complete information, try to combine data from all of those sources. However, I recommend just using the data from your Google Webmaster Tools account. It's free, and usually about as thorough as you'll get from the other sources. Plus, it's straight from Google. For this walkthrough, we'll assume you're using the list from your Webmaster Tools account.
Note: To get a list of your inbound links from Google Webmaster Tools, follow the steps below:
  1. Login to Google Webmaster Tools
  2. Click your Website
  3. Click “Traffic” on the left navigation
  4. Click “Links to your site”
  5. Click “Who links the most”
  6. Click “Download latest links”
Step 2: Run your list of links through Scrapebox to get the anchor text of each link. For a detailed walkthrough of how to set up Scrapebox, load proxies, etc., please see my post on how to use Scrapebox to find guest blogging opportunities. Depending on how long your list of links is, and how many proxies you’re using, this step could take a long time.
For lists of 1,000 links or fewer, it shouldn't take more than 10 minutes. But several nights ago, I ran a report on a list of over 43,000 links, and I had to let Scrapebox run overnight to complete.
Step 3: Export the report to Excel on your desktop. You may need to open and re-save the file after you export it, because for some reason it often corrupts immediately after export. Opening and re-saving the spreadsheet should fix it.
Step 4: Within your spreadsheet, sort your columns as such:
  • Column A: Source URL
  • Column B: Destination URL
  • Column C: Anchor Text
  • Column D: Found?
Step 5: Sort column D in alphabetical order and remove all rows in which column D's value is anything other than "Found." You'll likely see lots of "Not Found," "Error 404" and such in the Scrapebox output; these should be removed.
Step 6: Delete Column D (it’s no longer necessary).
Step 7: Add a new Column D with header “Number of Anchor Occurrences.”
Step 8: In cell D2, enter the following formula: =COUNTIF($C$2:$C$6633,C2).
Note: Change "6633" in the above formula to the number of the last row of your data set.
Step 9: Apply this formula to all rows in column D by clicking in cell D2 and then clicking the box in the lower-right of the cell, and dragging it down the entire length of Column D. You’ll now have a list of the number of occurrences of each anchor text in the spreadsheet.
Step 10: Open a new tab (or worksheet) within your spreadsheet and paste in the data from Columns C and D.
Step 11: That data will still contain the formulas in the cells, so we need to remove that. To do so, copy/paste the data from columns C and D into notepad. Then, re-copy and paste it back into your new worksheet. The values for “Number of anchor occurrences” will now be absolute values rather than formulas.
Step 12: Now, it’s time to remove duplicates. Remove duplicates by highlighting your two columns, then going to the “Data” tab in Excel and clicking “Remove Duplicates.” In the ensuing popup box, make sure both columns are checked and then click OK.
Step 13: Add a new column C with header “Percent of Total.”
Step 14: Sort by Column B (“Number of anchor occurrences”) from largest to smallest.
Step 15: Scroll down to the last row containing data, and in column B, in the cell directly below the cell containing the last piece of data, enter the following formula: =SUM(B2:B6633).
This will result in the total number of links.
Note: Change "6633" in the above formula to the number of the last row of your data set.
Step 16: In Column C (“Percent of Total”), click in cell C2 and type the following formula: =B2/$B$422.
Note: Change "422" in the above formula to the number of the row that contains the total number of links, which you created in step 15.
Step 17: Change the format of the values in Column C to “Percentage” with two decimal points. You can do this by highlighting the column, right-clicking, and selecting “Format Cells” then changing the “Category” setting to “Percentage.”
Step 18: Apply this formula to all rows in column C. You should now have a list of percentages of anchor text as a ratio of the entire link profile.
Step 19: Highlight in red any rows in which the anchor text exceeds 2 percent of the overall link profile, EXCEPT the following anchor types:
  • Brand anchors
  • Naked URLs
  • Images (i.e. no anchor text)
The remaining highlighted anchor text is the anchor text for which your inbound link profile is over-optimized.
If you’ve made it this far and found no over-optimized anchor text in your inbound link profile, congratulations! You’re probably not a target of Google Penguin. If you did find over-optimized anchor text, read on.
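If you prefer scripting to spreadsheets, the same counting from steps 7-18 fits in a few lines of JavaScript. A Node.js sketch (anchors.txt is a placeholder file holding one anchor text per line, i.e. your column C):

var fs = require('fs');

var anchors = fs.readFileSync('anchors.txt', 'utf8')
  .split('\n').map(function (s) { return s.trim(); }).filter(Boolean);

// Count occurrences of each anchor (the COUNTIF step)
var counts = {};
anchors.forEach(function (a) { counts[a] = (counts[a] || 0) + 1; });

// Percent of total, sorted from most to least frequent
var total = anchors.length;
Object.keys(counts)
  .sort(function (a, b) { return counts[b] - counts[a]; })
  .forEach(function (anchor) {
    var share = counts[anchor] / total;
    var flag = share > 0.02 ? '  <-- over the 2% threshold (step 19)' : '';
    console.log((100 * share).toFixed(2) + '%\t' + counts[anchor] + '\t' + anchor + flag);
  });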

Analyze Your Referring Domains

Next, it’s time to get a list of referring domains, and gather some metrics on each one so we can determine whether we have any domains that need to be completely disavowed.
Step 20: Copy/paste your list of links into a Notepad file.
Step 21: Load that file into Scrapebox using the “Import URL list” button.
Step 22: Click “Trim to Root”
Step 23: Click “Remove/Filter” then click “Remove Duplicate Domains.”
Step 24: Click “Check PageRank” and “Get Domain PageRank” to get the domain PR of each domain.
Step 25: Export the list of domains using the “Import/Export URLs & PR” button.
Step 26: Copy/paste the output from your newly exported file back into your Excel spreadsheet and sort by PR from largest to smallest.

Find Out Which Links and Domains Need to Be Disavowed or Removed

Now, it’s time to figure out which links and domains need to be removed or disavowed.
Step 27: Refer to your list of anchor text percentages. Find the first highlighted anchor (from Step 19) and note what the anchor is.
Step 28: Return to your Scrapebox output with the column that includes anchor text, and sort by anchor text, in alphabetical order.
Step 29: Scroll down the list of anchors until you find the first occurrence of the anchor you noted in step 27.
Step 30: Copy/paste all link URLs containing that anchor into a new worksheet titled “links to disavow.”
Step 31: Repeat steps 27-30 for all anchor texts highlighted in red from Step 19.
Step 32: Refer again to your list of anchor text percentages. Go through each anchor and eyeball any anchors that are completely unrelated to the niche or maliciously and obviously spammy (for example, porn, gambling, or viagra-related anchors). Add all links containing these anchors to your "links to disavow" worksheet, in addition to a new, separate list.
Step 33: Load your list of links from the “links to disavow” worksheet into Scrapebox and get the domain PageRank of each link.
Step 34: Copy/paste the output from your newly exported file back into your Excel spreadsheet and sort by PR from largest to smallest.
Step 35: Highlight all links with a PR of 4 or below, and all links with malicious or completely unrelated anchor text.
Step 36: Add the highlighted links to your “links to disavow” list. Now, it’s time to figure out which domains to completely disavow.
Step 37: Copy/paste your list of links from Step 33 (your “links to disavow” spreadsheet) into a Notepad file.
Step 38: Load that Notepad file into Scrapebox and repeat steps 20-26.
Step 39: Add all domains with PR 2 or below to your disavow list.
Step 40: Eyeball the remaining domains and highlight any that don’t end in the following extensions (unless you’re sure you don’t want to remove them):
  • .com
  • .net
  • .org
Step 41: Add the highlighted domains to your “links to disavow” list.
You should now have a list that contains the following:
  • A list of links that contain anchor text for which your inbound link profile is over-optimized, which reside on a domain that’s PR 4 or less
  • A list of links that contain spammy, malicious, or completely unrelated anchor text
  • A list of domains that contain links to your website with over-optimized anchor text and are also PR 2 or less
  • A list of domains with domain extensions that are not .com, .net or .org
To disavow an entire domain, use the following format:
domain:spamdomain1.com
domain:spamdomain2.com
domain:spamdomain3.com
To disavow individual links from a domain, use the following format:
http://spamdomain4.com/contentA.html
http://spamdomain5.com/contentB.html
http://spamdomain6.com/contentC.html
Your disavow list should look like this:
domain:spamdomain1.com
domain:spamdomain2.com
domain:spamdomain3.com
http://spamdomain4.com/contentA.html
http://spamdomain5.com/contentB.html
http://spamdomain6.com/contentC.html
Step 42: When you’re ready to submit your list of links to disavow, follow Google’s official instructions on how to do so.

Closing Thoughts

  • If you have access to the SEOmoz API, feel free to substitute Domain Authority (DA) as your metric rather than PageRank. It is a more accurate metric, but it's expensive to use in bulk. In step 35, substitute PR 4 with DA 40 or below. In step 39, substitute PR 2 with DA 30 or below.
  • Why did I choose 2 percent as the threshold for over-optimization? I’ve done at least 50 inbound link profile audits, and in my experience, the sweet spot appears to be about 2 percent.  The 2 percent figure is purely based on my hands-on experience in the field working with real clients who were penalized by Google Penguin.
  • How did I come up with the specific PR and DA thresholds for disavowal? Again, this is based purely on my experience in the field. There’s no textbook that’ll tell you the “right” number(s) or even metrics to use.

Source: 
http://www.searchenginejournal.com/how-to-know-which-links-to-disavow-in-google/50709/
http://searchenginewatch.com/article/2207168/How-to-Conduct-a-Link-Audit