
Online review systems are dead. Surely some must still be valid, but the well is poisoned, and nothing can be trusted.


We've come full circle. Word of mouth is all you can trust. The Internet as a great "disintermediation" medium is over.


From this I get you mean "word of mouth" in a very physical way? As in "my cousin Jane says..."?

Because "Word of mouth" over the internet would imply that The Internet still has a big role to play in disintermediation.

Would "reviews" from your network, any network of people you know, whether physical or digital, be valuable? I believe that a "review" is only useful if you know the reviewer well enough, regardless of the medium.

For example, when "angry uncle Joe" rants about some libtard-foreigner-restaurant-scum which is ruining the neighborhood with their horrible foreign food, I know how to take that 'review' of the restaurant. Same when aunt Carol explains how she heard from the new boyfriend of the cousin of Foo that her brother said that she heard from.. and so on. But when a friend who knows a lot about fintech explains that SomeBank is really doing a lot of interesting tech and hiring, I also know how to take that.

If you know who the "review" comes from, you can trust it. Otherwise you cannot. I assume this comes down to "word of mouth", though.


I think "word of mouth" just means opinions about products that are communicated to you via a network of acquaintances, acquaintances of acquaintances, etc. that is very unlikely to have any underlying commercial incentives. That communication can obviously still happen over the internet.

The point is that when you're looking for a toaster oven and your friend says that his coworker's spouse just got a ToasterGenius Model Z and they're happy with it, you can be fairly confident they're not shilling for ToasterGenius. They may not know anything about toaster ovens, and they may not have the same toasting needs as you, but at least you can probably trust that their opinion is genuine. Of course, I suppose it could turn out that ToasterGenius is running an MLM and your friend's coworker's spouse hosts ToasterGenius home parties to sell ToasterGenius products!


Not only can you trust that the opinion is genuine, you can also gauge it.

You probably know your friend well enough to tell if they know anything about toasters or evaluating consumer products in general, and whether their views and needs correspond to yours. So you rate the recommendation for ToasterGenius Model Z somewhere on the scale from "my friend is a bona fide toaster expert / fanatic", through "my friend toasts as much as I do and is attentive", all the way to "my friend doesn't know the first thing about toasters, and also doesn't understand the concept of communicating uncertainty about one's beliefs".

This kind of ability is what makes word-of-mouth within your social network super useful. On top of that, if your friend ends up purposefully shilling for an MLM, they're doing so at the risk of immediately burning your friendship.

(Tangential: MLMs are social cancer that prey on relationships between friends and family members; they should be excised from this planet.)


I don't think it needs to be physical, but if I google a company and a big rant about them on Reddit comes up, coupled with a bunch of positive feedback on LinkedIn and Glassdoor, then I'm going to trust the Reddit poster more. It might be someone grumbling without fair grounds about unfair treatment, but that's the only kind of testimonial you can really trust these days.


And if the Reddit post is a competitor's paid smear, timed with some event, how could you possibly know? In that case, the Glassdoor, LinkedIn, and Reddit testimonials all have the same weighted value of 0, as neither the quantity nor the quality of reviews online can be trusted anymore.


If nothing else, looking at the history of the account (be it HN, Reddit, or something else) can give you a sense of legitimacy. Of course, even that could be faked or purchased, but if you have someone who seems to have a variety of other interests besides slagging companies, at least right now you can trust that it's a genuine review from a personal perspective.


Other folks might reply differently, but my honest response is: I'm not going to do that. When I've switched jobs I've never had a shortage of offers, so if I saw that post it'd raise a flag for me and I'd be less inclined to follow up with that company; other really positive factors might swing it back, or not. This means I might end up missing a great opportunity, but job searches are never exhaustive, and we all just try to do the best we can to evaluate different positions with our limited resources.


I wouldn't - Glassdoor and other moderated discussion boards allow these smear campaigns to be weeded out, but if their usage declines significantly then smear campaigns are going to get a lot more effective again.

I'm not saying the point I outlined above is the best outcome for the world, it's just the outcome for me and it's pretty regrettable that we're sliding back into it.


To me the in-person-ness of the word of mouth doesn't matter; what's important is that it's coming from people you know in real life, not internet-only friends, gaming buddies, Twitter followers, etc. That real friend could text you the recommendation instead of telling you at work, over dinner, or at a party. The reason it's more valuable is that, ostensibly, a real-life friend wouldn't recommend something that was garbage to you, since there is meatspace social capital involved.


> If you know who the "review" comes from, you can trust it. Otherwise you cannot. I assume this comes down to "word of mouth", though.

Yes, in the context of a relationship. A "review" from someone one has some kind of relationship with is more meaningful. Possibly a weak one, like a fellow HackerNews poster. Which brings back the Internet: it helps one form new kinds of relationships.


It’s always the same issue; trust doesn’t scale. You need to trust whoever is providing the review, there is no easy way to magically make a bunch of random reviews trustworthy.


Physical. Otherwise tech companies might selectively show you reviews: Your friend Alice's (quite sincere) glowing review, but not your friend Bob's also sincere scathing one.


When I wrote "word of mouth" I meant a mouth that I can see move from someone I know. I should have made that clear, sorry.


No problem, and no need to excuse yourself! I was merely curious and started thinking about the implications.


Real people on the internet can still be trusted. Just not companies that try to monetise your opinion.


Yup, the arbitrage (of utility & free exploration vs. corporate monetization) that the internet provided is receding. Another example is YouTube. It used to be free to watch videos; now there are multiple ads being shown. It's only a matter of time before it becomes exactly like cable TV, where you have ~3-5 minute ad breaks per 15 minutes AND a monthly payment. We're in the middle step, where you can pay for YouTube Red OR watch multiple short/skippable ads. I'll give it 10 years before the internet becomes exactly like cable in the early 2000s.


I take a different stance: the scientific method is all you can trust. Word of mouth is also inherently biased (e.g. someone who works somewhere may paint a rosier picture of a moderate enough company due to in-group dynamics or even mild forms of Stockholm syndrome, just as an ex-employee might be overly negative), and it's up to us to identify potential sources of bias in these anecdotes and take them with the appropriate amount of salt.

Same goes for online reviews. Even if a scummy company deletes 1-star reviews, there might still be 2-star ones, or 3-star ones that are (in my experience, anyway) more grounded than their more impulsive 1- and 5-star counterparts. A keen applicant of the scientific method would question an unnatural distribution of high reviews with low information density, not because of prior accusations of foul play, but simply because one wants to come up with a reasoned theory of how the dynamics of review systems play out in general. You may not necessarily have hard evidence that intentional shenanigans are occurring in any given review system, but you can still make up your mind about the likely factors for why reviews are the way they are and how much weight you're comfortable putting in them.


It's not clear how you would apply a scientific method to a site that can hide, convince, bribe, or censor users, such that the information you gather is too incomplete for a good analysis. All the more so when people have a short attention span and a scientific method is complex.


This is just my personal interpretation, but the way I approach it is that I create a trustworthiness score for the set of reviews. For example, too many 5-star reviews without comments and without low stars might earn a low trustworthiness score, because the distribution looks unnatural compared to the distribution among similar competitors. A set of 6 reviews, no matter their distribution, will also receive a low trustworthiness score simply because there's not much to go on. Reviews of a restaurant complaining about a particularly egregious experience earn a low trustworthiness score, since they look like outliers and there are emotions running wild. Review sets with a decent number of 3-star reviews containing several well-articulated paragraphs tend to earn a high trustworthiness score. Etc.

So, rather than the score being a dimensionless good-or-bad scale, it's a meta analysis of the reviews, judged on multiple dimensions. This means that some companies/restaurants/products simply don't provide enough information for me to form a conclusion despite having a number of reviews that lean towards either "good" or "bad". And that's ok, because the very fact that I've considered so many different dimensions also tells me that there isn't necessarily a single absolute best option.

This, in my mind, seems like a more accurate depiction of reality than blindly ranking by number of 5 star reviews.
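For what it's worth, a toy sketch of such a heuristic might look like the following; every threshold and weight here is invented purely for illustration, not taken from any real system:

```python
from collections import Counter

def trust_score(stars, texts):
    """Crude trustworthiness heuristic for a set of reviews.

    stars: list of integer ratings (1-5); texts: parallel list of
    review bodies. Returns a score in [0, 1]. All cutoffs are arbitrary.
    """
    n = len(stars)
    if n < 10:                       # too few reviews to go on
        return 0.1
    dist = Counter(stars)
    score = 0.5
    # Unnaturally lopsided: nearly all 5-star, almost no low ratings.
    if dist[5] / n > 0.9 and (dist[1] + dist[2]) / n < 0.02:
        score -= 0.3
    # Mostly empty or one-line reviews: low information density.
    short = sum(1 for t in texts if len(t.split()) < 5)
    score -= 0.2 * (short / n)
    # Well-articulated middling reviews add credibility.
    detailed_mid = sum(1 for s, t in zip(stars, texts)
                       if s == 3 and len(t.split()) > 50)
    score += min(0.3, 0.05 * detailed_mid)
    return max(0.0, min(1.0, score))
```

Ranking by a meta-score like this, rather than by the raw star average, is the point: a wall of contentless 5-star reviews scores worse than a smaller, mixed, well-written set.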


The problem I see with that approach is that modern transformer-based NLP models like GPT-3 make generating this sort of "review" text almost free. The value you place in those reviews is the perceived effort put into them. When that effort goes to zero... it's just more automated spam reviews with a "believable" statistical spectrum.

Honestly, I don't see how to trust reviews if the reviewers have no skin in the game (either their reputation or money). If identities and reputations can be faked/generated at low cost, then we're back to just money. Honestly, one theory of advertising (like McD and bigC) is that they throw money at pretty ads to convince you that lots of people have given them money, so they must be good... it's pay to play, but it's not completely wrong. Limiting reviews to those who have actually bought the item also helps, if you time-limit/delay it and weight by the cost of shipping/stocking.


GPT-3 is certainly a valid thing to consider, but I think it has some weaknesses (e.g. there's an uncanny valley to its output in terms of the balance between relevance, novelty, and coherence), and even the most sophisticated GPT-3 cannot, by itself, mess with every marker (e.g. volume, distribution of content quality, the conflict of interest between having generated low reviews and the ability to doctor review numbers, nuances associated with weasel words in real text vs. in the training set, variance in emotions/tone/interjections, reviewer consistency, aggregator reputation, etc.).


Makes me want to build an ML system to rate ratings. Of course that could escalate into an arms race.


> We've come full circle. Word of mouth is all you can trust.

5 years ago on HN someone would have said this is an industry ripe to be 'disrupted'. It can't be hard to create a new review site? Easy VC money, an exit within 5 years by acquisition from Yelp or Glassdoor or LinkedIn/MS.


If the fundamental problem is trust, are you really suggesting that "it can't be hard to create" that trust?


A distributed trust network is a really interesting hard problem but does seem solvable in some ways (as seen by blockchains, though they have their own issues).

I've wondered if a somewhat simplistic graph-based solution could work, inspired by organizations like medieval guilds and the mafia, where you generally need to be invited into the trust network and promoted to be more trusted within the network as others vouch for your trustworthiness. Suspicious users are severely punished (in the real-life examples, often violently) and quickly removed from the network.

Obviously it can be gamed like any other system, but it would be much harder to do so, and you can leverage the vouch/invite graph to detect trusted users who are highly connected to suspicious ones.

Surely there's something like this out there, though? Or is this just not a valuable business problem to solve?
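A toy version of that vouch graph might be sketched like this; the data model and the 50% threshold are made up for illustration:

```python
def suspicion(vouchers_of, flagged, member):
    """Fraction of a member's vouchers who are already flagged."""
    vouchers = vouchers_of.get(member, [])
    if not vouchers:
        return 1.0               # nobody vouched: maximally suspicious
    return sum(1 for v in vouchers if v in flagged) / len(vouchers)

def sweep(vouchers_of, seeds, threshold=0.5):
    """Propagate suspicion through the vouch graph: flag members mostly
    vouched for by flagged members, repeating until nothing changes."""
    flagged = set(seeds)
    changed = True
    while changed:
        changed = False
        for member in vouchers_of:
            if member not in flagged and \
               suspicion(vouchers_of, flagged, member) >= threshold:
                flagged.add(member)
                changed = True
    return flagged
```

Starting from a few known-bad seed accounts, the sweep walks the invite graph and flags whoever is predominantly vouched for by already-flagged accounts.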


It sounds like you're describing a product review site implemented as a distributed trust network, and the plan to establish the trustworthiness of contributors is to bolt on a contributor review site implemented as a distributed trust network.


maybe this works if you have to 1) be invited and 2) also pay to get in?


Isn't this how recommendations in social networks work?


How many people in a "friends" list are people you actually know?


I want a review site where I can only see reviews from people I have vetted (could be pseudonymous).


Blockchain!

Proving that reviews haven't been removed might actually make this a real application for a blockchain.


Of course, there's no reason whatsoever to use a blockchain for this. The review website could simply offer dumps of their data signed with their private key.


Problem is, you will be forced by courts to remove some reviews for the usual reasons, making you instantly untrustworthy.


You can publish data dumps and the internet will do the rest. I'm sure there are more than enough people out there who might either not care about the law or are located outside of that jurisdiction who would be more than happy to call out a scummy company when their existing bad reviews suddenly disappear from the latest data dumps.
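Concretely, spotting vanished reviews between two published dumps is just a set difference; the list-of-dicts format and the "id" field here are assumptions about what such a dump would contain:

```python
def removed_reviews(old_dump, new_dump):
    """Return reviews that were present in an older data dump but are
    missing from a newer one, keyed by a stable review id."""
    new_ids = {review["id"] for review in new_dump}
    return [review for review in old_dump if review["id"] not in new_ids]
```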


Blockchain can't be "forced" to remove a review. Not how that works.

You can force a public facing site to stop displaying it, but if it's decentralized it is still there.

The disadvantage being of course there is no way to remove fake/scam reviews.


You usually build a chain of hashes of messages, not of the messages themselves. Meaning that you can delete the message, leaving evidence of the deletion in the form of a lonely hash in the chain.
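As a sketch of that scheme (one possible construction, not any particular system): each link's hash commits to both the message and the previous hash, so a deleted message leaves behind a hash that can no longer be matched to any content.

```python
import hashlib

def link(prev_hash, message):
    """Hash committing to the previous hash plus this message."""
    return hashlib.sha256(prev_hash + message.encode()).hexdigest()

def build_chain(messages, genesis=b"genesis"):
    """Return the list of hashes committing to each message in order."""
    hashes, prev = [], genesis
    for message in messages:
        h = link(prev, message)
        hashes.append(h)
        prev = h.encode()
    return hashes

def verify(messages, hashes, genesis=b"genesis"):
    """Check retained messages against the chain. A message replaced by
    None is a visible deletion: its hash stays in the chain, but there
    is nothing left to check it against. Any altered message fails."""
    prev = genesis
    for message, h in zip(messages, hashes):
        if message is not None and link(prev, message) != h:
            return False
        prev = h.encode()
    return True
```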


Yes, but if uncensorability is a feature you want, there's nothing stopping you (other than the enormous cost[1]) from using the messages themselves.

[1]: Not necessarily a deal-breaker. For perspective, consider that a 10-word transatlantic telegram in 1860 cost the equivalent of $2600 in 2013 USD.


Another solution is to store a URL and an integer score (so you can rank companies). The longest URL I was able to get through CloudFlare recently was about two thousand characters, but most will be much smaller. If someone wants a review taken down, they can talk to whoever is hosting the content. Tweets, Medium articles, Facebook posts, etc. could hold the review content itself. Want to take down 5 unfavorable reviews? Then write 5 letters to 5 different companies.


As another user noted, you can encode the message into the chain though.


The Reddit discussion contains a few suggestions:

https://www.comparably.com/


It seems like this is a space where a non-profit might be more successful. Any profit-based company is going to have incentives to game reviews. Consumer Reports is published by a non-profit. They have a decent reputation. Reputation exists on the internet. HN has a good reputation too.


Consumer Reports these days looks like nothing but a marketing catalog with an unclear monetization plan. We regularly get some sort of "teaser flyer" addressed to a previous resident, and it's filled with clickbait titles and short scandalous blurbs telling the reader to pay for the full article. It looks like a cross between a tabloid and a product catalog. They might still do some valuable testing, but they've definitely been taken over by marketing MBAs, and are "optimizing their content".


Taken over by the same old humbugs, who are now computer savvy... It was only a matter of time.


Eternal September 2.0


Word of mouth also can't actually be trusted if it doesn't have a full view of reality. I revealed some secrets to a friend about a company, and it turned out it was just my department that had the problem.


> Word of mouth is all you can trust.

Yes, and only if the words come from the mouths of trusted friends. Everything else is probably tainted by monetary incentives and should be taken with several grains of salt.


Word of mouth has moved onto platforms such as Nextdoor and moms' groups on Facebook. Literally closed circles and small communities with vested trust. I agree with your point to a certain extent.


Every single type of review site is plagued with coordinated shenanigans and gaming the system.

Antivaxxers brigading doctors on Yelp & Google. Diehard fandoms brigading Rotten Tomatoes. Sketchy Chinese sellers buying fake Amazon product reviews in FB groups.

Platforms know this by now, so consumers should too. The internet is basically a series of factions competing for amplification from algorithms at this point. Pump up your guy, downrank theirs. Coordinate for an algorithmic boost or trend where possible.

It's just marketing by way of manufactured consensus, not a review.


True, but why isn't web of trust a thing on Glassdoor? Why is reputation so broken?


My wife and I used to use Yelp as our go-to, day-to-day review platform. After some of my wife's customers directly experienced some serious WTF behaviour on Yelp's part, our trust in it was, as you said, completely broken.

I'm not at all surprised that Glassdoor is playing similar games.

The following isn't necessarily meant to disagree with you.

A common thread is that Yelp and Glassdoor directly make money based on their reviews. That's their core product. Other comments about how trust should be a core product are relevant, but that doesn't seem to be what's happening.

My wife and I started using Google reviews. To be clear: we are by no means starry-eyed Google fans. We're quite aware of many of their problems and WTFisms.

In this case, though, unless I'm mistaken, Google reviews are at most very indirectly a product for them.

This is important, because in my mind, that makes it far less likely for them to meaningfully/substantively fuck around with the results.

It would be easy (and perhaps fun!) for me and anyone else reading this to inject doubt into this assertion.

I'm happily open to additional illumination here, particularly as far as a practical daily use review system.


A counter argument is that if Google doesn’t care enough to manipulate its reviews because the stakes are too low, then they might also turn a blind eye to restaurants, businesses, and other listings manipulating the system themselves (again assuming it’s too small time for Google to care).

But I don’t have a better solution either, my wife and I have followed almost your exact same pattern of relying on Yelp until it became untrustworthy and then Google Reviews (which is what I still use now).


I've personally seen my (and others') negative google reviews removed. I reposted mine since I'd saved it; we'll see how long it stays up.


My previous employer would just send requests to delete negative reviews on Google Maps because the reviewer didn't use their real name, and poof, they were gone. Easy as pie.


Google reviews are gamed by companies that maintain thousands of accounts to throw positive reviews at their clients' pages.


I think the core issue isn’t paid reviews etc. I think the problem is trying to come up with an arbitrary standard that this restaurant/GPU/shoe stands relative to all others. The simple truth is I want to know two different things, is it functional and would I like it. Star rankings and even short text reviews don’t really answer those questions.

A 1 star review because it was DOA says nothing about failure rates. People also have different preferences on things like cost, spicy food, or what looks tacky making a 3 vs 4 star review mostly meaningless.

This is why independent reviews like consumer reports are so useful. Look at a comparison between similar devices and consider tradeoffs. Or find a movie reviewer with similar tastes and you can generally just follow their suggestions.

The net result is that reviewers and rating systems just don't have great incentives, which devolves into companies simply gaming the algorithm.


I agree that star reviews are not very helpful, but text reviews absolutely can be. It's true that there might be some fake reviews in there, and that some people are posting in the heat of the moment and will give a biased perspective, but read enough of them and you can get a feeling for common themes that impact certain types of people.

Steam is a great example. If you click through to the person who made the review, you can also look at other games they own or other things they reviewed to get a sense of the kind of person they are. If they're like you, then their review is probably going to be more helpful to you. If they're very much NOT like you, then that's a useful data point too.

I think one of the special challenges with Glassdoor is that reviewers are anonymous, in order to protect them from career-related backlash. This makes it harder to understand if a particular reviewer's experience is something that will also affect you. But if you think about it, that is a real indictment of the state of workplace relations - workers are at such a structural disadvantage that they are terrified of sharing candid views about current or even former employers in a way that could come back to bite them.

I know I've gone out of my way to try to avoid putting identifying information into Glassdoor, and I even toned down some criticism as insurance just in case the employer did figure out who I was. That's not a good thing; it only serves to make my review less helpful! This is all the more reason why it's so disappointing to me that Glassdoor has openly shifted to being a service focused on corporate branding; employers already had all the power to start with.


I agree Glassdoor is in an unusually difficult niche, but rating systems are all over the place.

Steam reviews are interesting, but I have started watching someone's YouTube Let's Play instead. Even bad ones are mildly interesting, and they quickly tell you a lot more about the game than I get from random reviews.

The best online review system for me is Rotten Tomatoes which shows both what the general public thinks and what the critics do. Looking at say https://www.rottentomatoes.com/m/star_wars_the_rise_of_skywa... I appreciate many of the viewers liked it, but it’s the critics that I find actually agree with. Yep, worse than the prequels, but not quite walk out of the movie theater bad. https://www.rottentomatoes.com/m/star_wars_episode_i_the_pha..., https://www.rottentomatoes.com/m/star_wars_episode_ii_attack..., or https://www.rottentomatoes.com/m/star_wars_episode_iii_reven...


I don't even think star-based reviews are a problem; they're useful for sorting out the people who aren't happy. You can then look at why they weren't happy and see whether it's a real issue or not, and whether it's an issue relevant to you. (For example, a doesn't-play-nice-with-X downvote doesn't matter if you don't plan to play with X.)

What is a problem is just presenting an average of the ratings, though.
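A trivial illustration of the point: two very different products can share the same average rating, and only the distribution tells them apart (the numbers are made up):

```python
from statistics import mean
from collections import Counter

polarizing = [5, 5, 5, 1, 1, 1]   # love-it-or-hate-it product
middling   = [3, 3, 3, 3, 3, 3]   # consistently mediocre product

# Both average to 3, yet they describe opposite experiences.
assert mean(polarizing) == mean(middling) == 3

# The per-star histogram is what actually carries the information.
print(Counter(polarizing))
print(Counter(middling))
```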


Seems like most web-based anything sees rapid steps back due to the ease of gaming the system.


I think the issue is the question of moderation. It's really trivial to throw up a discussion board on the web, but moderating that discussion board is where things get hairy: everyone wants to keep out the trolls, but it can be hard to tell a legitimately fraudulent review from one a company merely claims is fraudulent, and there are extremely strong incentives to always believe the company, since they've got the money.

I think the heart of the issue is that when something like Glassdoor gets corrupted like this, a bunch of tech folks say "Forums, forums are easy and CHEAP!" and go out and roll their own. Maybe a few make it big and start having to police traffic once the trolls discover them. At that point they discover that the message-board portion of their product is trivial and cheap, but what actually costs is the moderation of it. Then the costs begin to pile up, and they make deals with some of the better actors among companies to let them subsidize policing their own reviews, and it gets normalized, and then the system collapses.

Moderating things is hard.


I don't think that's the issue (or I misread you, then).

It's the whole web thinking pattern: make everything accessible, fast, and free. Nobody makes money like this, and thus the system produced has no added value. It will devolve into whatever low-hanging fruit can live on the inertia of branding and trends, until someone writes an article like the one above, and then someone else will try.

If I want to rant, I'd say that 10 years later someone will make a real system to grade things, and it will require serious work and serious money (think about banking grades), and then the system will be kept in place, to an extent (again, think about banking grades).


The main problem is that review systems of today are funded by investors who want profits, not honesty. That will always translate into unscrupulous companies paying to manipulate reviews, which will generate more cash than honest reviews.

A good review system can exist but it shouldn't be a venture-backed startup expecting unicorn returns.


I would also throw Yelp in there, now that they're instituting a form of social credit system by allowing a user to accuse a business of being racist, which then flags the account with a message that says:

‘Business Accused of Racist Behavior’

The end result of this is too obvious to require elaboration.


Hopefully we can extend this helpful system to everyday life, maybe with some sort of reddish lettering system whereby we attach a symbol to the clothes of those who have committed an -ism or a -phobia.


Hear me out, what about an armband?


TrustPilot has gone the same way.

If you look in the clickbait at the bottom of lots of articles, you'll see one advertising a trick that every Android user should know. It takes you to a website at SecuritySaversOnline.com which hosts an advertorial for TotalAV anti-virus that implies it is free, which it isn't. I posted a negative review of this site and then called TotalAV to inform them of what I thought was perhaps a fake affiliate site, but during the chat they said it was their site: TotalAV were using a site with fake offers to advertise themselves. So, I posted a negative review of them too on TrustPilot; it's one of the 1* reviews here: https://uk.trustpilot.com/review/totalav.com?languages=en&st... (search for cyberspy)

TotalAV disputed my claims, but I demonstrated to TrustPilot that it was a genuine review, even though I wasn't a customer and the review remains.

Why is TrustPilot broken?

TotalAV have over 31K reviews, 87% of them 5-star, average 4.8, but most of them are one-liners from people who have no review history. Compare with other AV providers: Sophos have 11 reviews, average 2.1; AVG have 2534, average 1.9; McAfee 358, average 1.4; Norton 347, average 1.5. Basically, people don't review their AV unless they have a bad experience, or the supplier asks them to. I pointed out to TrustPilot that TotalAV's review profile looked fake but, while they let my review stay, they have done nothing to rectify the entirely disproportionate review profile of TotalAV.


Trust is valuable: build your business, give it away for free, build trust, then sell out.



