Hacker News
Ask HN: Can you help me not confuse the Googlebot?
1 point by andrewljohnson on Jan 13, 2009 | 3 comments
I have implemented the RSH library for my website www.trailbehind.com.

This lets me change the hash of the URL and store data when someone surfs around on my maps, so that I can provide links within the map and back button functionality.

However, I'm worried that Google and the other search engines aren't going to take kindly to these links.

I realized that Google wasn't indexing things like my page titles and other content, because they are generated via JavaScript once the DOM is ready.

So, for now, what I've done is maintain hard links alongside the hash URLs. For example, these two URLs generate the same content:

1. http://cabin.trailbehind.com/#get_map_from_name/Yosemite_National_Park

2. http://cabin.trailbehind.com/park_map/Yosemite_National_Park/
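The two URL forms above are just different spellings of the same resource, so the mapping between them can be expressed directly. A minimal sketch, assuming the route names shown in the example URLs (these helpers are illustrative, not TrailBehind's actual code):

```javascript
// Convert a hash route like "#get_map_from_name/Yosemite_National_Park"
// into its crawlable hard link "/park_map/Yosemite_National_Park/".
function hashToHardLink(hash) {
  var match = /^#get_map_from_name\/(.+)$/.exec(hash);
  return match ? '/park_map/' + match[1] + '/' : null;
}

// The reverse, for rewriting an inbound hard link into hash state.
function hardLinkToHash(path) {
  var match = /^\/park_map\/(.+?)\/?$/.exec(path);
  return match ? '#get_map_from_name/' + match[1] : null;
}
```

Keeping this mapping in one place makes it easy to emit the hard link in every `<a href>` while the RSH handler rewrites clicks into hash navigation.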

I think the problem with this, though, is that if people link to my site, they will end up linking to the hash URLs, while Google will only crawl the hard links. And thus, people blogging about me won't help my PageRank.

Has anyone ever dealt with anything similar or have thoughts on this issue? Google rankings are very important to TrailBehind, and part of our strategy (which is already working) is getting listed for the long-tail of parks and forests.



How about redirecting the first URL to the second? Updating the fragment part via JavaScript doesn't hit the server, so in-page navigation wouldn't be affected by the redirect. If you did get into a situation where it inadvertently redirects, you could check the referrer to prevent that.
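Since the fragment never reaches the server, one way to read this suggestion is a client-side redirect that runs on page load. A sketch under that assumption, with the redirect decision pulled out into a pure function (route names are carried over from the example URLs, not confirmed):

```javascript
// Given the current hash and the referrer, decide where (if anywhere)
// to send the browser. Returns null when no redirect should happen,
// e.g. when we arrived from our own site, to avoid a redirect loop.
function redirectTarget(hash, referrer) {
  if (referrer.indexOf('trailbehind.com') !== -1) return null;
  var match = /^#get_map_from_name\/(.+)$/.exec(hash);
  return match ? '/park_map/' + match[1] + '/' : null;
}

// In the browser it would be wired up roughly like this:
// var target = redirectTarget(window.location.hash, document.referrer);
// if (target) window.location.replace(target);
```

Using `location.replace` (rather than assigning `location.href`) keeps the hash URL out of the back-button history.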


I don't understand what you mean. If you update the location with location.href, the page will reload. I think the poster wanted to avoid reloads, which is the whole point of using RSH.


The page doesn't reload when only the fragment part is changed.
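This is the property the whole scheme rests on: navigation only stays in-page when nothing but the fragment differs between the two URLs. A small helper illustrating the rule (a sketch using the standard WHATWG `URL` parser, available as a global in modern Node and browsers):

```javascript
// Returns true when moving from fromUrl to toUrl changes only the
// fragment, i.e. the browser will fire navigation within the same
// document instead of requesting a new page from the server.
function isFragmentOnlyChange(fromUrl, toUrl) {
  var a = new URL(fromUrl);
  var b = new URL(toUrl);
  return a.origin === b.origin &&
         a.pathname === b.pathname &&
         a.search === b.search &&
         a.hash !== b.hash;
}
```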



