Hacker News

That doesn’t matter. If a cursory visit doesn’t inspire confidence, users will back out and that will look like low engagement/low value to Google’s ranking analysis.

It’s actually a more helpful analysis for this topic to do a cursory, cosmetic look.



Except that examine is an excellent site, and their cursory look was actually more of a "motivated reasoning look". If you land directly on an examine page for a supplement, it's quite clear the site is really well done.

I'm a designer as well, not just a developer, and its layout is really well done: clean, straightforward, clearly presented, and the data is easy to find. They link to studies and always err on the side of caution in their descriptions.


They also recently went through a redesign and I think it's really clean. But yeah, idk about their homepage and stuff, but go on any of their pages for a specific supplement and every sentence is thoroughly cited. It's also a great resource for finding relevant studies if you wanna do your own research.


> They also recently went through a redesign

In my experience, this is often the cause of mysterious drops in search ranking. It's very easy to inadvertently introduce changes that negatively impact your ranking without even noticing.

At a quick glance, I noticed that many articles on examine.com link to hundreds of external references (e.g. more than half the page of https://examine.com/supplements/creatine/). In Internet Archive snapshots from before the redesign, these have the rel="nofollow" attribute, but on the current site, they do not. I'm not saying that's the cause, but it might be worth looking into exactly what changed in the redesign.
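If you want to check this kind of thing yourself, here's a minimal sketch using Python's stdlib `html.parser` that counts outbound links and how many carry rel="nofollow". The class name and the sample snippet are illustrative only, not examine.com's actual markup; in practice you'd fetch the live page and an Internet Archive snapshot and diff the two counts.

```python
from html.parser import HTMLParser

class NofollowAudit(HTMLParser):
    """Counts external <a> links and how many have rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.external = 0
        self.nofollow = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href", "")
        if href.startswith("http"):  # crude "external link" check, fine for a sketch
            self.external += 1
            # rel can hold multiple space-separated tokens, so split before checking
            if "nofollow" in attrs.get("rel", "").split():
                self.nofollow += 1

# Hypothetical snippet standing in for a fetched reference section:
sample = """
<a href="https://pubmed.ncbi.nlm.nih.gov/123" rel="nofollow">study A</a>
<a href="https://pubmed.ncbi.nlm.nih.gov/456">study B</a>
"""
audit = NofollowAudit()
audit.feed(sample)
print(audit.external, audit.nofollow)  # 2 external links, 1 nofollow
```

Running it against before/after snapshots of the same article would show exactly how many reference links lost the attribute in the redesign.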


We removed rel=nofollow AFTER the rankings went down.


There are almost always drop-offs when you redesign and Google re-evaluates your content. Sure, add nofollow back in if you like, but Google is really clever. It sees that every page on the site has changed to something else; if that's a recent change, we'll go with Occam's razor here.


I guess I meant recent as in about 3 months ago. But hey, maybe you should notify them of the issue.


Ah, a redesign. I wonder if they had some migration issues; it's possible to tank your traffic if you don't know what you're doing.


Nope - if anything traffic slightly went up after it went live.


Except, how does one just magically land on one of these specific pages? Usually that would be from a search result page, but that's the point of the post. After that, it's using the site's main landing page, and then browsing/searching/etc. So the main page still has to be usable, not just the individual articles.


With the big search box right on the homepage?


Not everyone is being hostile. I think you've been beaten up on this thread, but I wasn't being aggressive toward you. I quite clearly stated that you use the search page on the landing site to find stuff locally. The original post was specifically about not getting results in search engines, and that's what I was attempting to support.

For me, I'd never even heard of the website in question. Why would I have, when it's not a field of interest for me? If I were to search for it, it would be via a search engine result, not a search field on some website I never knew existed.


I mean, he's right. The home page is fine too; it has a big, clear search box. I think he's just pointing out there's really no "there" there. The examine site is legit: information packed, easy to navigate and parse. And its primary use case is landing via search results, so its best pages are the ones you see most.

Sidebar: examine is one of two sites I regularly use the Google site filter for: “site:examine.com (supplement name)”. A good search engine would put it first for basically any supplement.


Except everyone is missing the point that you can't find it in a search engine. Has anyone RTFA? That's the entire point of the article. A website's local search is irrelevant to people who have never heard of the website in the first place. The search engine is where people go to find something when they don't know where to go for the information they seek. Hell, parental units still go to Google first (as in browser default for a new window/tab) and type facebook.com into the search field rather than directly into the browser's location field.

Also, the main page has two search boxes, if we want to get pedantic about it. Why? My dev-mindset assumption is that they do the same thing: one is always there, while the landing-page one disappears as you use the site.


The point then is that if you search for creatine, you end up on that page, not the homepage.

And so if you end up on https://examine.com/supplements/creatine/ it pretty much blows away all other pages on creatine.

So at the end of the day, you still end up on useful information.


So how do you go about applying this reasoning to this very website where you are commenting?

Hacker News would never be, for me personally, on the top of the list of websites that inspire confidence by its looks, it's only when you delve into it and realise the content is actually great that you can appreciate it.

Such a shallow evaluation is quite strange coming from technical people who are used to mailing lists and all sorts of badly designed (or at least aesthetically unpleasing/neutral) pages...


I don’t know how Google evaluates link aggregators. My comment was based on how Google evaluates content sites right now. I’m not saying it’s fair to Examine.com’s researchers and writers, as most of the replies to my comment appear to assume. I should have made that more clear.


They actually have published guidelines, and we meet them 100%.


Indeed. I sincerely hope they improve the situation. A favorite site of mine, Metafilter, went through something similar a few years ago. Google cracked down on user-generated content too broadly, similar to here, and they have had a tough time recovering despite their care to follow the guidelines. It’s frustrating to see it happen again.


Yup! We even mentioned MF in our blog post.


I think HN follows the aesthetic, to a good extent, of a sparse text based doc page. Those kinds of sites rank high in my personal trustworthiness rubric.

The more something has been “designed”, the more the site falls into the untrustworthy category, barring substantial evidence to the contrary.


> Hacker News would never be, for me personally, on the top of the list of websites that inspire confidence by its looks, it's only when you delve into it and realise the content is actually great that you can appreciate it.

Other than the whole unchangeable orange theme, I think it's fairly easy to intuit.


>unchangeable orange theme

“Topcolor” setting on your profile page


Ahhhh that's what that does lol


>. If a cursory visit doesn’t inspire confidence, users will back out and that will look like low engagement/low value to Google’s ranking analysis.

But that has no bearing on whether the site is actually presenting comprehensive or accurate information.


That's not the parent's point, though.

If users act as though it's a dodgy site, how is Google supposed to know whether it isn't? You could argue that the wrong heuristics are being used, but I'm not sure the technology is there to do it any other way.


That's not the user acting as though it's a dodgy site; that's the user thinking "learning is hard! Let's go to YouTube."

If you are actually doing research on some subject and are actually prepared to read and obtain new knowledge, then yes, your cursory first look is going to be about the content and sources/references. If you're still going to bounce based on looks, then either you're not really trying or you're just looking for a simpler, bite-sized easy answer (which pretty much doesn't exist in this field of science).

By that logic, Google could penalize every site that has in-depth knowledge about some subject.

And now that I think about it, these are EXACTLY the kinds of websites I've been missing from Google search results in the past years. Most people's first stop for "in-depth" knowledge would be Wikipedia (try defending THAT one 6 years ago...), and if you really push, maybe the PDF of a publication that isn't behind a paywall. The web used to be full of pages made by people crazy smart about a subject, writing about the thing they love...

Just for illustration, I went through my old bookmarks; the original link was dead, but the page still exists: http://gernot-katzers-spice-pages.com/engl/index.html Just browse a few pages and see what a quality site it is. It even has each page in both German and English.

This page used to pop up all the time when you searched for spices way back in the first half of the 2000s. Try googling "fenugreek" now and cry ...

And the majority of this quality content is not even ad supported at all. That is the worst part. So many people seriously argue that you need ads to support the internet. Well, THIS is the internet that I want, the good one, the promised internet. And look at that "fenugreek" search result page again, it's being fucking buried by this shitty ad supported internet of hollow articles about "fenugreek health benefits". THAT is the internet you get, you support by supporting ads. For every starving quality journalist at the news websites that argue they have to serve you megabytes of adtech with a two paragraph article, there are a thousand regurgitated content farm bullshit sites, easily consuming the vast majority of this internet advertising pie ... it's like cheering on mass murder because the obituaries make such nice haikus some times.


>> low engagement/low value to Google’s ranking analysis.

Measuring engagement is great for shallow content, if you can even call it that. We see that all across the net.

But high-value, in-depth knowledge is very often relatively boring.


How do you know this is what Google uses to determine how accurate the information is? Assumptions here are worthless, since all they do is lead toward uninformed theories.

Examine.com is a very trustworthy website with good research, unbiased information, and very good citations summarized in a scientific way.

Regardless of the heuristics they use to determine misinformation, they definitely messed up here and it should be re-evaluated.


That is, unfortunately, one of the guidelines for YMYL sites if you work in "dodgy" areas like insurance.

A few years ago, some of the mega UK insurance brands got into major trouble with Google. I won't mention any names, but directly afterwards they started using cute animals. Obviously Sergey had been doing some naughty black-hat SEO.


If a cursory look dismisses a good site, the person looking doesn't have accurate skimming and scanning skills. Judging trustworthiness is probably one of the hardest skills on the web. It takes immense amounts of practice. Like many other skills, it's probably one where people are overconfident in their own abilities.


This xkcd provides a surprisingly effective metric:

https://xkcd.com/1301/



