Hacker News | ccortes's comments

> in the long run those businesses fall to leaner competitors

This is not true at all. You can find plenty of examples going either way, but it's far from being a universal reality.


> It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society.

> Jobs that don't provide value for a company are cut, eventually.

Uhm, seems like Graeber is not the only one drawing conclusions from a hypothetical perfect world.


People here seem to be conflating thinking hard and thinking a lot.

Most examples of "thinking hard" mentioned in the comments sound like thinking about a lot of stuff superficially instead of thinking about one particular problem deeply, which is what OP is referring to.


If you actually have a problem worth thinking deeply about, AI usually can't help with it. For example, AI can't help you make performant stencil buffers on a Nokia N-Gage for fun. It just doesn't have that in it. Plenty of such problems abound, especially in domains involving one extreme or another (like high-throughput traffic). Just the other day someone posted a vibe-coded Wikipedia project that took ages to load (despite being "just" 66MB) and insisted it was the best that could be done, whereas Google can load the entire planet (perceptually) in a fraction of a second.


Oh wow, is that what you got from this?

It seems more like an inexperienced guy asked the LLM to implement something and the LLM just output what an experienced guy did before, and it even gave him the credit.


Copyright notices and signatures in generative AI output are generally a result of the expectation created by the training data that such things exist, and are generally unrelated to how much the output corresponds to any particular piece of training data, and especially to who exactly produced that work.

(It is, of course, exceptionally lazy to leave such things in if you are using the LLM to assist you with a task, and can cause problems of false attribution. Especially in this case where it seems to have just picked a name of one of the maintainers of the project)


Did you take a look at the code? Given your response I figure you did not, because if you had you would see that the code was _not_ cloned but genuinely compiled by the LLM.


> then you're doing the opposite of what the author proposes

No, it’s exactly what the author is writing about. Just check his example, it’s pretty clear what he means by “thinking in math”

> Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive at an answer, you introduce a concept that captures a class of problems and use that.

I don’t even know what you mean by this.


I really want to get into ocaml but the syntax is sooo ugly I feel like you need a great IDE set up to be able to be productive with it.


Might want to check out ReasonML.


> In this article, being "functional" is just serving as a proxy for code quality.

It is not; the article is being very specific about what it means and what it is referring to.


> It doesn’t really matter why it’s not working.

It does, because it changes the strategy.

If you think the ads are working and you have 10k potential customers, then you start thinking about how to increase your conversion rate, figuring you could capture a chunk of those 10k; you might think distribution is solved.

But if it turns out only 2.5k of them are real humans, then your conversion rate might not even be the issue; it's just the marketing strategy that needs tweaking.

The whole point is that they are giving you fraudulent traffic, which you then use as real data to figure out your next steps. If you don't know the traffic is fraudulent, or how many of the clicks are fraudulent, then you are making decisions under the wrong assumptions.
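The arithmetic behind this is simple but worth spelling out. A sketch, using the hypothetical numbers from above (the conversion count is an assumption for illustration):

```python
# Hypothetical campaign numbers: 10k reported clicks, of which
# only 2.5k came from real humans, and 250 actual conversions.
reported_clicks = 10_000
real_clicks = 2_500
conversions = 250

# Conversion rate you *think* you have if you trust the raw click count.
apparent_rate = conversions / reported_clicks  # 0.025 -> 2.5%

# Conversion rate on the traffic that could ever convert.
true_rate = conversions / real_clicks  # 0.10 -> 10%

print(f"apparent: {apparent_rate:.1%}, true on humans: {true_rate:.1%}")
```

With these numbers the "fix the funnel" diagnosis is wrong by a factor of four: the funnel converts humans fine, and it's the traffic quality that is the problem.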

> You can’t stop fraudulent clicks just like you can’t stop your SuperBowl ad from playing while your viewers are in the bathroom

That's not even a good analogy; we are talking clicks, not impressions.


That's the point: without the fraudulent clicks you would just move on to some other strategy, because the pricing would not be worth it.

Fake clicks give the illusion that ads are working and that you instead have to optimize your funnel or whatever else.


> Does it cease to be a good metric?

Yes, if you run anything other than the 100m.

