> It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society.
> Jobs that don't provide value for a company are cut, eventually.
Uhm, seems like Graeber is not the only one drawing conclusions from a hypothetical perfect world.
People here seem to be conflating thinking hard and thinking a lot.
Most examples of “thinking hard” mentioned in the comments sound like thinking about a lot of stuff superficially instead of thinking about one particular problem deeply, which is what OP is referring to.
If you actually have a problem worth thinking deeply about, AI usually can’t help with it. For example, AI can’t help you make performant stencil buffers on a Nokia N-Gage for fun. It just doesn’t have that in it. Such problems abound, especially in domains involving one extreme or another (like high-throughput traffic). Just the other day someone posted a vibe-coded Wikipedia project that took ages to load (despite being “just” 66MB) and insisted it was the best that could be done, whereas Google can load the entire planet (perceptually) in a fraction of a second.
It seems more like an inexperienced guy asked the LLM to implement something, the LLM just output what an experienced guy did before, and it even gave him the credit.
Copyright notices and signatures in generative AI output are generally a result of the training data creating the expectation that such things exist; they are usually unrelated to how closely the output corresponds to any particular piece of training data, and especially to who exactly produced that work.
(It is, of course, exceptionally lazy to leave such things in if you are using the LLM to assist you with a task, and it can cause false-attribution problems, especially in this case, where it seems to have just picked the name of one of the maintainers of the project.)
Did you take a look at the code? Given your response, I figure you did not, because if you had, you would have seen that the code was _not_ cloned but genuinely written by the LLM.
> then you're doing the opposite of what the author proposes
No, it’s exactly what the author is writing about. Just check his example; it’s pretty clear what he means by “thinking in math”.
> Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive at an answer, you introduce a concept that captures a class of problems and use that.
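A toy illustration of that kind of parsimony (my example, not the author's): instead of adding the first n integers one step at a time, you introduce a single closed form that settles the whole class of sums at once:

```latex
% Gauss's closed form: one concept replacing n-1 addition steps
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}
```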
If you think the ads are working and you have 10k potential customers, then you start thinking about how to increase your conversion rate to capture a chunk of those 10k; you might think distribution is solved.
But if it turns out only 2.5k of them are real humans, then your conversion rate might not even be the issue, and it’s just the marketing strategy that needs tweaking.
The whole point is that they are giving you fraudulent traffic, which you then use as real data to figure out your next steps. If you don’t know it’s fraudulent, or how many of the clicks are fraudulent, then you are making decisions under the wrong assumptions.
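To make the arithmetic concrete, here is a minimal sketch using the numbers above (the conversion count is a made-up figure, just for illustration):

```python
# Toy numbers: 10k reported clicks, of which only 2.5k are real humans.
total_clicks = 10_000   # what the ad platform reports
real_humans = 2_500     # actual people behind those clicks
conversions = 50        # made-up signup count attributed to the campaign

measured_rate = conversions / total_clicks  # 0.5% -- looks like a conversion problem
actual_rate = conversions / real_humans     # 2.0% -- really a traffic-quality problem

print(f"measured: {measured_rate:.1%}, actual: {actual_rate:.1%}")
```

Same data, two opposite diagnoses: one says tune the funnel, the other says fix the traffic source first.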
> You can’t stop fraudulent clicks just like you can’t stop your SuperBowl ad from playing while your viewers are in the bathroom
That’s not even a good analogy; we are talking about clicks, not impressions.
This is not true at all. You can find plenty of examples going either way, but it’s far from being a universal reality.