There's nothing wrong with research that doesn't make it to the public. There is definitely something wrong with making false promises to the public, who buy tickets to your park based on attractions you advertised would be there, which never materialized.
Cars are inherently dangerous, though. They're multi-ton hunks of metal moving at high speeds. That's dangerous from literally any angle you can imagine.
There are ways to make them less dangerous, sure. But they're never 100% safe. Which makes them, by definition, inherently dangerous. That's... what those words mean.
So long as you’re also willing to label swimming pools, grapes, and crayons as, by definition, inherently dangerous, since none of them can be made 100% safe either, then I’ll at least grant you a level of consistency in your argument.
Swimming pools are absolutely inherently dangerous. Why do you think lifeguards are a thing?
Like, really, man? If you can't even recognize the danger in the one activity that famously requires someone specifically trained in saving lives to be standing by, then I'm happy to end this conversation right here. It's clearly just a waste of time all around. I just hope there's no one in your life depending on you to judge what's safe and what's not.
Comparing "not 100% safe" with the danger cars represent is so ridiculous that I have to question whether you're kidding. We're talking 40,000 people killed every year in the US alone in traffic accidents. And you're talking about grapes and crayons?
And swimming pools are pretty dangerous, though? There are around 4,500 drowning deaths per year in the US, so on the order of 10x fewer than from car accidents, but still quite a lot.
GP is the one who argued “not 100% safe” as evidence of inherently unsafe.
I agree with you that it’s a comically wrong threshold, which is why I offered that series of examples, each progressively safer but never 100% safe, as a counter to that line of reasoning.
Make the threshold "won't kill you 99.9% of the time, even if you have little to no training at that specific activity" then. Is that specific enough for you to engage meaningfully with the conversation at hand, and to show why you think driving is on the same side of this threshold as eating grapes or using crayons?
(I believe) OP's point is about a company being global relative to its number of users, not just their geography. If you have single-digit thousands of users or fewer, you still don't need those optimizations even if those users are located all around the world.
LLMs learned from human writing. They might amplify the frequency of some particular affectations, but they didn't come up with those affectations themselves. They write like that because some people write like that.
Those are different levels of abstraction. LLMs can say false things, but the overall structure and style is, at this point, generally correct (if repetitive/boring at times). Same with image gen. They can get the general structure and vibe pretty well, but inspecting the individual "facts" like number of fingers may reveal problems.
That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data's style and word choices.
> AI may output certain things at a vastly different rate than it appears in the training data
That’s a subjective statement, but generally speaking, not true. If it were, LLMs would produce unintelligible text & images. The way neural networks function is fundamentally to produce data that is statistically similar to the training data. Context, prompts, and training data are what drive the style. Whatever trends you believe you’re seeing in AI can be explained by context, prompts, and training data, and aren’t an inherent part of AI.
Extra fingers are known as hallucination, so if you mean a different phenomenon, then nobody knows what you’re talking about, and your own analogy to fingers doesn’t work. In the case of images, the tokens are (roughly) pixels or image patches, while in the case of LLMs, the tokens are roughly word fragments. Finger hallucinations are a lack of larger structural understanding, but they statistically mimic the inputs and are not examples of frequency differences.
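To make "statistically similar to the training data" concrete, here's a toy sketch. It's nothing like a real transformer, just a character-level bigram model, but it shows the basic idea of sampling from frequencies learned from a corpus:

```python
# Toy sketch only: a character-level bigram model, not a transformer.
# It learns which character tends to follow which, then samples from those
# frequencies, so its output is "statistically similar" to the corpus.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat. the dog sat on the log."

# Count how often each character follows each other character.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample(start="t", length=40):
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(sample())
```

The output is gibberish, but its character frequencies track the corpus. That's the sense in which generated data mirrors the training data, and why style and affectations carry over from human writing.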
- 12 ounces bread flour became just "12 ounce"
- Yield of 2 dozen cookies became just "2 servings"
Also, not really a bug per se, but the site already contains both metric and imperial measures, and I see that you only take the imperial units and convert them on demand. For some of the measures, like the butter, the difference is noticeable. Not huge, but enough to matter.
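As a hypothetical sketch of what I mean (the numbers below are made up for illustration, not taken from your site or any specific recipe):

```python
# Toy illustration: converting the imperial value on demand can drift from the
# metric value a recipe site already publishes, because published metric numbers
# are usually rounded independently.
OZ_TO_G = 28.3495

listed_imperial_oz = 4    # hypothetical: "4 oz butter" from the imperial column
listed_metric_g = 110     # hypothetical: the metric value the site itself lists

converted_g = round(listed_imperial_oz * OZ_TO_G)
print(converted_g, listed_metric_g, abs(converted_g - listed_metric_g))
# 113 vs 110: small, but noticeable for something like butter.
```

Preferring the site's native metric values whenever they're present would sidestep the rounding drift entirely.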
All that said: this is pretty great. I've actually been thinking about making a recipe app for quite a while now, and I see you've implemented the most interesting features I'd thought of: scraping recipes on demand, interactive timers, recipe scaling. Really cool!
The only idea I had that I don't see here is automatically estimating nutrients and calories. But that's probably for the best: when I tried making the aforementioned app, I started by diving right into that particular rabbit hole, and, well... I still don't have a recipe app to show for it :p Here be dragons!
I think the author's (ex) friend believes the same about the hair salon thing: that there is a hierarchy of power and a potential for intimidation between a worker and a client. E.g. the guy at the restaurant being flirty with the waitress.
I've said before that the age of an internet user can be estimated by how many free image hosting services they have seen come and go, like rings on a tree trunk.