Think Like a Robot – Relocation


Posted by Sam Battin, Senior Natural Search Specialist

A problem I heard about this week moved me to write about the proper procedure for moving content on your site to a new location. You should never be afraid to move your content for good reasons. Changing the CMS? Good reason. Revising your directory structure so it’s more logical? Good reason. The important thing, as always, is to keep search engines accurately informed about the correct location of your content. Search engines rank your content high when it’s good, and if you tell them where you’re moving it, they’ll be more than happy to give it the same high ranks at its new address.

Keeping search engines informed is easy if you know how to do it; much of our job here in SEO at Performics is learning about the capabilities and limitations of search engine robots. In this post I’ll explain what search robots “think” when you move content around. Understanding how robots think is a good way to predict how your actions will affect your site. Remember, robots are stupid. They’re great at doing what they’re told, but if they’re not told to do something, they’ll never do it, as we’ll soon see…

Okay, so a site I heard about had revised their website to make their URLs more search friendly. This is great. They took their existing URLs and reworked them on a development site to follow a more logical organization and include descriptive words that matched the page content. Once that was done, they launched by making the development site live and turning off all the old URLs on their server. That is to say, if anyone clicked a bookmark to one of the old URLs after the re-launch, the server returned a 404 “Not Found.” This was the wrong thing to do, and their rankings and search traffic plummeted immediately after the re-launch.

The rankings dropped because Google hadn’t indexed the new URLs; it had only indexed the old URLs, and the old URLs were all returning 404s. Google won’t put up a link to a 404 it knows about – why would it? If a user clicks a link on Google and lands on a 404, they may be tempted to try a different search engine (especially if it happens a lot). It’s in Google’s best interest to show links only to pages it knows exist. At any rate, all the pages that had been ranking well were now gone, removed by Google once it saw the 404s.

So next they did damage control by putting up 301 redirects at all the old URLs. Now you’d think this would fix things, because a 301 redirect transfers the inbound link value. Right? Well… a 301 does transfer the link value, but only if a search engine actually crawls and indexes it. It’s like the tree falling in the forest: if no one is around to hear it, does it really make a noise? As it turns out, the search engines never saw the 301s because they had already recorded 404s at those locations. From a search engine’s perspective, there’s no point in spending time crawling URLs it doesn’t have to. So the moment the search engines discovered 404s at the site’s old URLs, they decided, “Okay! This URL no longer exists and we don’t have to crawl it anymore! It’s Miller Time!” By the time the 301s went up, the search engines had already stopped crawling the old URLs, and the site’s traffic didn’t improve.
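If you find yourself doing this kind of damage control, it’s worth spot-checking that every old URL really does answer with a 301 pointing at its new home before you ask the engines to come back. Here is a minimal sketch of such a check, assuming Python 3 with the requests library installed; the URL pairs are hypothetical placeholders, not any site’s real addresses.

```python
# A minimal sketch, assuming Python 3 and the third-party "requests" library.
# The URL pairs below are hypothetical placeholders, not real site URLs.
import requests

# Map each retired URL to the new URL it should permanently redirect to.
REDIRECT_MAP = {
    "https://www.example.com/page.php?id=42": "https://www.example.com/widgets/blue-widget/",
    "https://www.example.com/page.php?id=43": "https://www.example.com/widgets/red-widget/",
}

for old_url, expected_target in REDIRECT_MAP.items():
    # Don't follow the redirect; we want the status code the old URL itself returns.
    response = requests.get(old_url, allow_redirects=False, timeout=10)
    status = response.status_code
    location = response.headers.get("Location", "")
    if status == 301 and location == expected_target:
        print(f"OK   {old_url} -> {location}")
    else:
        print(f"FIX  {old_url} returned {status} (Location: {location or 'none'})")
```

Anything flagged by a check like this is a URL the search engines will still treat as gone.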
What to do at this point? You need to tell the search engines to check those old URLs again. Submit an XML sitemap file that contains your site’s new URLs as well as all of your site’s old URLs. The search engines will then re-crawl the old URLs and discover the 301 redirects, which puts your site on track to regaining its old visibility. Once the search engines have crawled all of your old URLs, they will drop them from their indexes and replace them with the targets of the 301 redirects you put in place. And once the old URLs stop appearing in search results, you can revise your XML sitemap to remove them.
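Generating that combined sitemap (and regenerating it later without the old URLs) is easy to script. Here is a minimal sketch, assuming Python 3’s standard library and hypothetical placeholder URLs; you would submit the resulting sitemap.xml to the search engines the same way you submit any other sitemap.

```python
# A minimal sketch, assuming Python 3 standard library only.
# The URLs are hypothetical placeholders; list every new URL plus every retired URL
# that now 301-redirects, so search engines re-crawl the old locations.
import xml.etree.ElementTree as ET

NEW_URLS = [
    "https://www.example.com/widgets/blue-widget/",
    "https://www.example.com/widgets/red-widget/",
]
OLD_URLS = [
    "https://www.example.com/page.php?id=42",
    "https://www.example.com/page.php?id=43",
]

# Build the <urlset> with one <url><loc> entry per address.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in NEW_URLS + OLD_URLS:
    loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
    loc.text = url

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```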

