Hacker News | fwip's comments

I don't think phones are really any cheaper than the mini PCs that businesses can already buy. Which makes sense, because a phone has to include a battery and a touchscreen, and is built under tighter space constraints.

An Android phone is $100 apiece; a mini PC is much more expensive.


One problem is that it's exceedingly difficult to tell, as a reader, which scenario you have encountered.

This is the strongest argument against it, I think. Sometimes you can't easily tell from the output whether someone thought deeply and used AI to polish, or just prompted and published. That adds another layer of cognitive burden to parsing text, which is frustrating.

But AI-generated content is here to stay, and it's only going to get harder to distinguish the two over time. At some point we probably just have to judge text on its own merits regardless of how it was produced.


My exposure to and usage of “AI” has been very limited so far. Hence that is what I am and have been doing all the time: reading the text mostly irrespective of origin.

I do note that recently I more often wonder what point the author wanted to make, only to then notice a lot of what seem to be the agreed-upon telltale signs of excessive AI usage.

Effectively, there was already a lot of spam before, so in general I don't mind so much. It is interesting to see, though, that the “new spam” often gets some traction and interesting comments on HN, which used to not be the case.

It also means that behind the spam layer there is possibly some interesting info the writer wanted to share, and for that purpose I imagine I'd prefer to read the unpolished prompt-input variant over the outcome. So far, though, I haven't seen any posts where both versions were shared, so there's been no way to test whether this would indeed be any better.


Yet more LLM word vomit. If you can't be bothered to describe your new project in your own words, it's not worth posting about.

Aren't LLMs lossy? You could make them lossless by also encoding a diff of the predicted output vs the actual text.

Edit to soften a claim I didn't mean to make.


LLMs are good at predicting the next token. Basically, you use the model to predict the probabilities of the next token being a, b, or c, and then use arithmetic coding to store which one actually occurred. The same LLM runs during both compression and decompression, so the decoder can reconstruct the exact same intervals.
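To make that concrete, here's a toy sketch of the scheme in Python. An adaptive character-frequency model stands in for the LLM's next-token predictor, and exact rationals replace a streaming bit-level coder (a real implementation would emit bits incrementally); the class and function names are illustrative, not from any library.

```python
from collections import Counter
from fractions import Fraction

class ToyModel:
    """Adaptive unigram model standing in for an LLM's next-token predictor."""
    def __init__(self, alphabet):
        self.counts = Counter({s: 1 for s in alphabet})  # Laplace smoothing

    def distribution(self):
        total = sum(self.counts.values())
        # Fixed symbol order so encoder and decoder agree on interval layout.
        return [(s, Fraction(c, total)) for s, c in sorted(self.counts.items())]

    def update(self, symbol):
        self.counts[symbol] += 1

def encode(text, alphabet):
    """Arithmetic-code `text` into a single Fraction in [0, 1)."""
    model = ToyModel(alphabet)
    low, width = Fraction(0), Fraction(1)
    for sym in text:
        cum = Fraction(0)
        for s, p in model.distribution():
            if s == sym:
                low += cum * width   # narrow to this symbol's sub-interval
                width *= p
                break
            cum += p
        model.update(sym)            # decoder will make the identical update
    return low + width / 2           # any point in the final interval works

def decode(code, length, alphabet):
    """Run the same model to recover the text exactly (lossless round-trip)."""
    model = ToyModel(alphabet)
    out = []
    for _ in range(length):
        cum = Fraction(0)
        for s, p in model.distribution():
            if cum <= code < cum + p:
                out.append(s)
                code = (code - cum) / p  # rescale into the chosen sub-interval
                model.update(s)
                break
            cum += p
    return "".join(out)
```

The key point is that the prediction model is only a guide for sizing the intervals: better predictions give narrower final intervals (fewer bits), but even a bad model still round-trips exactly, which is why the overall scheme is lossless even though the LLM itself is not.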

Yes, LLMs themselves are always lossy, unless their size/capacity is so huge that they can memorize all their inputs. Even if LLMs were not resource-constrained, one would expect lossy behavior from batching and the math of the loss function: training always rewards accurately approximating the majority of texts over approximating any single text with maximum accuracy.

The Dark Knight Rises (the Batman movie with Bane) seemed especially notable in this way, almost directly caricaturing the Occupy Wall Street protests that were happening at the time.

From looking at some new car options lately, it seems like you're lucky if you can get floor mats for $200. This doesn't take away from your point - I suppose I'm just griping.

The announced "under $20K" price included the now-cancelled $7,500 EV subsidy.

Having an LLM write your blog posts is also lazy, and it's damn tedious to read.

Well, they aren't trying to win your sympathies.

