Hacker News | intended's comments

In many examples, LLMs betray the fact that they are not reasoning: when given problems that genuine reasoning would solve, they fail.

Even in this discussion, someone gave an example of coming up with board game rules. The LLMs judged every set of rules valid because they looked and sounded like board game rules, even when they were not.

In short: you can learn a subject, make a mental model of it, play with it, and rotate it or infer new things about it.

LLMs are more analogous to actors who have learnt a stupendous number of lines and know how those lines work.

They are, by definition, models of language.

If you want a better version, GenAI needs to be able to generate working voxel models of hands and 3D objects just from images.


I don’t believe the board game rules example. I think this would be a piece of cake for an LLM. I’m happy to be proven wrong here if you share an example.

This is the user I took the example from: https://news.ycombinator.com/item?id=47689648#47696789

This assumes the limiting factor is content generation, not ability to read and verify.

You make this point later in your comment, but treat it as a minor issue (“randos”).

The actual limits are verification, and then attention. Verification is always more expensive than generation.

However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.


> This assumes the limiting factor is content generation, not ability to read and verify.

Content generation is the thing copyright applies to. If you want to create a reward system for verification, it's not going to look anything like that.

It mostly looks like things we already have, like laws against impersonating someone to trade on their reputation. Those laws let people build a reputation for trustworthiness and make money from subscriptions or ads by being the one people turn to when they want trustworthy information.

> However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.

I suspect the real problem here is the voting thing. When people derive significant value from information they're quite willing to pay for it. Wall St. pays a lot of money for Bloomberg terminals, companies pay to do R&D or market research, individuals often pay for financial software or games and entertainment content etc.

But voting is a collective action problem. Your vote isn't very likely to change the outcome, so are you personally going to spend a lot of money to make sure it's informed? For most people the answer is going to be no, so we need something that gives them access to high quality information at minimal cost if we want them to be informed.

Annoyingly, one of the common methods of mitigating collective action problems (government funding) has a huge perverse incentive here, because the primary thing we want people to be informed about is political issues and official misconduct. You can't give the incumbent politicians the purse strings, for the same reason the First Amendment bars them from governing speech.

So you need a way to fund quality reporting the public can access for free. Advertising kind of fit but it never really aligned the incentives. You can often get more views by being entertaining or inflammatory than factual.

The question is basically, who can you get to supply money to fund factual reporting for everyone, whose interest is for it to be accurate rather than biased in favor of the funder's interests? Or, if that's not a thing, whose interests are fairly aligned with those of the general public? Because with that you can use a patronage model, i.e. the content is free to everyone but patrons choose to pay money because they want the work to be done more than they want to not pay.

The obvious answer for "who" is then "the middle class" because they're not so poor they can't pay a few bucks while still consisting of a large diverse group that won't collectively refuse to fund many classes of important reporting. But then we need two things. The first is for the middle class to not get hollowed out, which we're not doing a great job with right now.

And the second is to have a cultural norm where doing this is a thing. That means we stop teaching people the illiterate false dichotomy where the only two economic camps are "Soviet Communism," in which the government is required to solve everything through central planning, and "greed is good," where being altruistic makes you a doofus for not spending all your money on blackjack and cocaine. People instead need to be encouraged to notice that once their basic needs are met, wanting to live in a better world is just as valid a use for free time and disposable income as designer shoes or golf.


I don’t think that's how fair use works.

Yes.

1) Quantity is its own quality: Scale makes a difference

2) The tools themselves automate tasks and consolidate their outputs. The “sale” of a piece of content, and its consumption, shifts away from the people producing it. Example: we have entire networks and systems that depended on consumption occurring on the site itself, such as news websites or indie sites that depend on ad revenue.


> I write for two main reasons

> people read things… their life is better

> it’s just my own

What was the point of writing this though?

Perhaps I should know who you are, but assuming you are a regular HN forum user - you are still very much a participant in a larger information economy / ecosystem.

All of us depend on that system, that commons.

Visits to Wikipedia have dropped by at least 8% since 2025, other estimates are starker. This will have an impact on donations.

These reports are similar for many sites which write or produce content.

Your individual behavior may be perfectly fine, and you are entitled to your perspective, but that doesn’t become a defense for the degradation of the commons.

If anything, it’s a classic example of the kind of argument that ends up entangling ideas and making conclusions harder to reach.


Forest for the trees?

I doubt that anyone reading this misses the point of the analogy.

The value is in showing where the analogy fails, which either disproves the point or deepens it.


This would make sense if the regime's command structure had not, apparently, designed itself for this exact type of conflict.

They were in a fight, took losses, and made significant gains.

They proved their planning was correct, that the distributed nature of their power grid was correct, that they are able to project force and genuinely destabilize the strait.

Things have been proven that were previously uncertain, and they have not been proven in America’s favour.

Crucially, America’s ability to defend its allies was tested and found wanting. The entire conflict was one of unit economics, in which a cheap $30k drone beat out billion-dollar investments.

America also spent the better part of this administration alienating itself from the one allied nation with extensive drone combat experience.


Admittedly, this is the interesting part. Ukraine, via its leader, apparently did try to reach out to the US in exchange for money but, and here the stories get confused, was ignored. I have to wonder if Trump has some actual fixed winners table in his mind (because he does not seem to follow the most optimal path).

This came across as so confident that I had a moment of doubt.

It is most definitely an attacker's world: most of us are safe not because of the strength of our defenses, but because of the disinterest of our attackers.


There are plenty of interested attackers who would love to control every device. One is in the White House, for example.

I’ve just gone through 3 separate papers on the cognitive impact of GenAI, and the points being raised are far more nuanced than what you are assuming them to be.

I mean, you could read the papers themselves, they aren’t inimical to your position by nature.

For example, one of the more salient results is that the more confident you are in AI, the less likely you are to check the output.

When a new invention arrives on the scene, its properties need to be mapped.


That sounds correct and straight from The Ironies of Automation.

https://dl.acm.org/doi/10.1145/2448136.2448149

