The Magic SEO Ball

All your SEO questions answered by The Magic SEO Ball!

Is Google wrong about robots.txt?

March 10, 2014 By Natan Gesher

I saw a search result that said, “A description for this result is not available because of this site’s robots.txt – learn more.” But when I checked that site’s robots.txt, the page was not blocked at all. Did Google just mess this one up?

Magic SEO Ball says: most likely.

Most likely.

I searched for Ray Kurzweil so I could explain to some coworkers that he seriously intends not to die. The fifth and sixth organic results were both from ted.com. The fifth result, but not the sixth, strongly implied that Googlebot had been blocked from crawling the page and that Google consequently could not display a description.

A description for this result is not available because of this site's robots.txt – learn more.

But when I looked at ted.com’s robots.txt, I saw only this:

User-agent: *
Disallow: /index.php/profiles/browse
Disallow: /index.php/search
Disallow: /search

Google wasn’t blocked from crawling the page at all.
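
If you want to double-check a case like this yourself, Python’s standard urllib.robotparser applies the same matching rules that a well-behaved crawler uses. A minimal sketch; the talk URL is a hypothetical placeholder, not the actual result:

# Sketch: verify whether a live robots.txt actually blocks a given URL.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.ted.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# A hypothetical talk page; substitute the URL from the search result.
page = "https://www.ted.com/talks/some_talk"
print(parser.can_fetch("Googlebot", page))  # True means not blocked
print(parser.can_fetch("*", "https://www.ted.com/search"))  # False: blocked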

I don’t know what’s going on here – specifically, whether TED used another method to prevent this page from being crawled, or whether Google actually thinks that the ted.com robots.txt is blocking that directory – but it looks suspiciously like a Google mistake.

Can Google tell when a page doesn’t work well in mobile?

December 20, 2013 By Natan Gesher

Can Google tell when a page doesn’t work well in mobile?

Magic SEO Ball says: without a doubt.

Without a doubt.

Scroll down to the “User Experience” section that Google recently added to the PageSpeed Insights tool: http://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fmagicseoball.com%2Fwill-reconsideration-request-get-reviewed%2F.

Google is evaluating the pixel size of page elements and telegraphing a future in which a bad mobile user experience is a ranking factor for mobile search.
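
If you’d rather pull those checks programmatically than scroll through the web page, the PageSpeed Insights HTTP API exposes the same analysis. A minimal sketch using only the standard library; it assumes the current v5 endpoint, which postdates this post, so treat the response field names as assumptions:

# Sketch: query PageSpeed Insights for a URL's mobile analysis (v5 API).
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "url": "http://magicseoball.com/will-reconsideration-request-get-reviewed/",
    "strategy": "mobile",  # ask for the mobile evaluation specifically
})
endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?" + params

with urllib.request.urlopen(endpoint) as response:
    report = json.load(response)

# Overall performance score, on a 0.0-1.0 scale
print(report["lighthouseResult"]["categories"]["performance"]["score"])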

This answer was contributed by Tre Jones.

Will my reconsideration request get reviewed?

December 18, 2013 By Natan Gesher

I have some manual actions in Webmaster Tools and I submitted a reconsideration request. Will anybody at Google actually read it?

Magic SEO Ball says: better not tell you now.

Better not tell you now.

Magic SEO Ball is actually aware of a recent incident in which a website received two “manual action” penalties, cleaned itself up by removing all the spam, submitted a reconsideration request, got useless boilerplate drivel back from Google, submitted a new reconsideration request, and then received the same useless boilerplate drivel again.

After this, the website’s SEO director complained on Quora, on Stack Exchange and on Google’s own product forums.

After he submitted the question in those different places and sent it to some high-level SEOs and a few Googlers, both penalties disappeared within twelve hours.

It would be hard not to interpret this as meaning that nobody at Google ever read the two reconsideration requests and that the site was never reviewed until he elevated the issue by publicizing it.

On the other hand, maybe the penalties were simply scheduled to drop on a certain date, and that date coincidentally fell right after he complained.

Should non-trailing slash URLs redirect?

December 16, 2013 By Natan Gesher

Should my site’s URLs without trailing slashes redirect to trailing slash versions? Or should my trailing slash URLs redirect to non-trailing slash versions? What if some page types do it one way and other page types do it another way? What about canonical URLs? And what’s up with file extensions at the end of URLs?

Magic SEO Ball says: concentrate and ask again.

Concentrate and ask again.

There are several issues here: some involve SEO best practices, while others are closer to personal preference and just require a simple decision and some consistency.

First issue: file extensions.

It made sense in the 1990s, when we wrote websites by hand as individual HTML files, uploaded those files with FTP clients into folders, and gave the files in those folders URLs like page.html (or page.htm). It doesn’t make sense now. If you’ve built a new site in the past decade, its URLs probably shouldn’t be using file extensions like .html or .htm, .aspx or .asp, .php, &c.

Second issue: consistency.

Your site needs a coherent URL structure, and that coherence needs to extend all the way down to whether every URL has a trailing slash or no URL has one. Admittedly, this is partly a QA issue: you should never land on a page on your own site and be unable to tell whether the URL is correct or whether you’re looking at a page that shouldn’t exist.

Pick one option or the other, trailing slash or non-trailing slash, and go with that option for all pages, without exceptions. Since you’re going with one or the other, implement redirects at the app level from the one you didn’t choose to the one you did choose: http://domain.tld/directory/page/ should redirect to http://domain.tld/directory/page, or vice versa.
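
A minimal sketch of that normalization logic in Python, assuming a site-wide trailing-slash policy; the flag and function names are made up for illustration:

# Sketch: compute the canonical form of a request path under the chosen
# slash policy; if it differs from the requested path, issue a 301 to it.
TRAILING_SLASH = True  # the site-wide policy you picked

def normalize_slash(path: str) -> str:
    if path == "/":  # the root is always just "/"
        return path
    if TRAILING_SLASH:
        return path if path.endswith("/") else path + "/"
    return path.rstrip("/")

# e.g. /directory/page should 301 to /directory/page/
assert normalize_slash("/directory/page") == "/directory/page/"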

Third issue: choosing trailing slashes or not.

I recommend that you choose trailing slashes for two reasons:

  1. Another QA issue: it’s easier to confirm that a page with a trailing slash is the correct page, and that a page without one is incorrect, than the other way around.
  2. WordPress uses trailing slashes. WordPress is the default web publishing tool, so using WordPress-style URLs is a good idea.

Of course, this is largely a personal judgment call.

Fourth issue: canonicals.

You already know that a self-referencing canonical actually needs to reference itself, as opposed to a different URL that doesn’t exist.

In other words, if you’ve chosen to use trailing slashes, but you’ve created the URL incorrectly at http://domain.tld/directory/page instead of http://domain.tld/directory/page/, the canonical needs to be http://domain.tld/directory/page (the page that actually exists), not http://domain.tld/directory/page/ (the page that should exist).

Got that? Canonical URLs actually have to be for the pages as they exist, not as they ought to exist.

Of course, if both http://domain.tld/directory/page and http://domain.tld/directory/page/ resolve, fix the problem by redirecting according to your URL rules and make sure the canonical on the target page is correct.
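
In code terms, the canonical is derived from the URL that actually serves the page, not from the URL your rules say should exist; a small illustrative sketch:

# Sketch: the canonical tag must point at the page as it exists.
def canonical_tag(served_url: str) -> str:
    return '<link rel="canonical" href="%s">' % served_url

# If http://domain.tld/directory/page is what actually resolves, that is
# what goes in the tag, even though your rules say it should have a slash:
print(canonical_tag("http://domain.tld/directory/page"))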

Bonus: capital letters.

When you’re creating a new site and putting URL rules in place for trailing slash or non-trailing slash, you may as well decide that none of your URLs will use any capital letters, and that any URL with capital letters should be redirected to an all-lowercase version.

Bonus bonus: do the trailing slash redirect and the lowercase letters redirect in the same step so there’s only ever one redirect instead of two jumps.
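
Concretely, that means applying both rules in one normalization pass before issuing the redirect; a small self-contained sketch, again assuming the trailing-slash policy:

# Sketch: lowercase and slash-normalize in one pass, so a bad URL gets
# exactly one 301 instead of a chain of two redirects.
def normalize_url(path: str) -> str:
    path = path.lower()  # lowercase rule first
    if path != "/" and not path.endswith("/"):
        path += "/"  # then the trailing-slash rule
    return path

# /Directory/Page redirects once, straight to /directory/page/
assert normalize_url("/Directory/Page") == "/directory/page/"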

Is title the same as headline?

December 13, 2013 By Natan Gesher

My SEO submitted requirements that our CMS should not allow titles longer than seventy characters. He also mentioned elsewhere in the requirements document that headline should generate the title (which should then be editable). So headline is the same as title, right?

I built the headline field in the CMS so that it will not accept a headline from editors longer than seventy characters. Good?

Magic SEO Ball says: my reply is no.

My reply is no.

When an SEO talks about a title, he means the “title” tag.

When he talks about a headline, he almost definitely means the on-page “h1” tag.

It’s an SEO best practice to limit titles to seventy characters, because longer titles are likely to be truncated when they appear in search engine results. There is no particular character limit on headlines, except that they shouldn’t be very long or overwhelming to users.

It’s also a good idea for editorially created headlines to generate titles, but there can be some friction because headlines have no length limit while titles do. Therefore, if you’re working on a CMS and get these requirements from your SEO, keep in mind that he probably wants the title field to be editable.
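
A sketch of the intended CMS behavior, with illustrative names that aren’t from any real CMS: the headline seeds the title, the title stays editable, and only the title gets the seventy-character cap:

# Sketch: headline seeds the title; only the title is capped at 70 chars.
TITLE_LIMIT = 70

def default_title(headline: str) -> str:
    # Generate an initial, still-editable title from the headline.
    if len(headline) <= TITLE_LIMIT:
        return headline
    return headline[:TITLE_LIMIT - 1].rstrip() + "…"  # truncate, not reject

def validate_title(title: str) -> None:
    # Enforce the limit on titles only; headlines get no hard cap.
    if len(title) > TITLE_LIMIT:
        raise ValueError("title exceeds %d characters" % TITLE_LIMIT)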

Based on a true story.

A section of my site is called “Deals”; should I make the H1 “Specials”?

May 13, 2013 By Natan Gesher

Magic SEO Ball says: very doubtful.

Very doubtful.

But really, why would you do this? If you’ve got an area of your site that’s about deals and other related things, and you’ve decided that it will be called “Deals,” why would you use some other term instead of “deals” to tell your users what that area of the site is about? It just defies logic, not to mention the first rule of marketing that we learned at our first SEO job: never make up a different term to describe your product that’s separate from the one you’ve already made up.

Based on a true story.

Does offering your website in different languages improve your SEO performance?

May 12, 2013 By Natan Gesher

Magic SEO Ball says: Concentrate and ask again.

Concentrate and ask again.

On its face, the answer is basically yes: not everybody in the world speaks English, so if you offer your content in multiple languages, people who aren’t searching in English will be able to find it, and that means improved SEO performance.

Followup: And if it does, which would be better: a single domain for all languages (e.g. de-de.domain.com for a German audience), or a dedicated domain for each language (e.g. domain.de for the German audience)?

This is really much more complex. In the past, Google representatives have stated that the gold standard in internationalization was to use country-specific TLDs, which would mean putting global content on domain.com, German content on domain.de, French content on domain.fr, &c. That addresses countries only, however, and not languages: there may be many people in Germany and France who prefer the English-language content and there may of course be many German and French speakers in countries besides Germany and France, such as in Austria and Canada.

Lately, we have conferred with some very high-level SEOs who advise duplicating all content on domain.com in region- and language-specific directories, using rel=alternate hreflang annotations to help search engines figure out which version should appear to searchers in which countries.
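
A sketch of what those annotations look like when generated for language- and region-specific directories on one domain; the directory layout here is a made-up example:

# Sketch: generate rel="alternate" hreflang tags for one page that exists
# in several language/region directories on a single domain.
VERSIONS = {
    "en": "http://domain.com/page/",
    "de-de": "http://domain.com/de-de/page/",
    "fr-fr": "http://domain.com/fr-fr/page/",
    "x-default": "http://domain.com/page/",  # fallback for everyone else
}

def hreflang_tags(versions: dict) -> str:
    # Every version of the page should carry the full set of tags.
    return "\n".join(
        '<link rel="alternate" hreflang="%s" href="%s">' % (lang, url)
        for lang, url in versions.items()
    )

print(hreflang_tags(VERSIONS))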

We expect that the answer would be different based on different circumstances, such as the type of site (ecommerce, b2b, content, local-specific).

Based on a Quora question.

Should I update my robots.txt file to disallow all bots?

May 11, 2013 By Natan Gesher

Magic SEO Ball says: My reply is no.

My reply is no.

Seriously though, why would you do this?

If you’re creating a new site, want it to be private, and specifically aren’t interested in receiving any search engine traffic, that might make sense… but if you’ve already got a big site that gets a lot of its traffic from SEO, you should really make sure that your robots.txt doesn’t block search engines.
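
For reference, a robots.txt that keeps all well-behaved bots out of the entire site is just this:

User-agent: *
Disallow: /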

Based on a true story.

Will a lot of nofollow links hurt my site?

July 3, 2012 By Natan Gesher

Magic SEO Ball says: Ask again later.

Ask again later.

Magic SEO Ball fundamentally accepts Google’s contention that a relatively high number of nofollow links won’t hurt your site’s ranking. But that acceptance comes with a caveat: those nofollow links won’t necessarily hurt your rankings now, but this may be something that is addressed in future algorithm updates, whether by adjusting a current ranking factor like Penguin or by adding a new one. You need to ask yourself whether your backlink profile is abnormal for your industry or niche; if the answer is yes, is that because of something dodgy that you’ve done, or because you’re the only one building links the clean way while your competitors are all dodgy?

Should I disavow spammy backlinks in Bing?

July 2, 2012 By Natan Gesher

Magic SEO Ball says: Don’t count on it.

Don't count on it.

Magic SEO Ball appears to be skeptical about Bing’s invitation to webmasters to disavow spammy links pointing to their sites. This makes a lot of sense, as our friends at Bing also appear to be uncertain about how Bing treats bad incoming links. As Vanessa Fox writes:

I still don’t understand what a site owner gets in return for the time spent disavowing links with this tool. I can only conclude that Bing may in fact lower a site’s ranking due to spammy incoming links.

If this is true, it’s contrary to what Bing says about how its ranking algorithm works. Perhaps Bing simply would prefer for webmasters to do the dirty work and identify all the bad links to save Bing the trouble.
