Hacker News | new | past | comments | ask | show | jobs | submit | r00sty's comments

I imagine his opinions might have changed by now. If we were still in 2023, I would be inclined to agree with him. Today, in 2025, however, LLMs are just another tool being used to "reduce labor costs" and extract more profit from the humans who still have money. There will be no scientific progress if things continue in this manner.


I always spend more time fighting the bot and debugging the code it gives me than the output is worth.


In a more perfect world where we were just discussing the merits of the tech, I would be more inclined to agree. But I have to stress that the entire point of the tech is to do everything.

AI receives so much funding and support from the wealthy because they believe that they can use it to replace humans and reduce labor costs. I strongly suspect that AI being available to us at all is merely a plot to get us to train and troubleshoot the tech for them so it can more perfectly imitate us. Then, eventually, when the tech is "good enough" it will rapidly become too expensive for normal people to use and thus become inaccessible.

Companies are already mass-firing their staff in favor of AI agents even though those agents don't do a good job. Imagine how it will be when they do.


Oh, you strongly suspect a global cabal of wealthy elites is manipulating the world? What do your unsupported musings about global power have to do with AI use cases?


Wealthy people are manipulating the world in their favor, and frequently explicitly tell us so. They can do it without forming a global cabal because they're wealthy.


This is good info. Too many products make hyperbolic promises but ultimately fail operationally in the real world because they simply don't deliver.

It is important that this be repeated ad nauseam with AI, since there seem to be so many "true believers" who are willing to distort the material reality of AI products.

At this point, I am not convinced that it can ever "get better". These problems seem inherent to the technology, and while they could perhaps be mitigated to an acceptable level, we really shouldn't bother, because at that point we could just use traditional algorithms, which are far easier on compute and the environment, and far more reliable. There really isn't any advantage or benefit.


I mean, how would you feel if you coded a menu in Python with certain choices, but when you used it the choices were never the same or in the same order, sometimes there were fake choices, sometimes they were improperly labelled, and sometimes the menu just completely failed to open? And you, as the coder and as the user, have absolutely no control over any of those issues. Then, when you go online to complain, people say useful stuff like "Isn't it amazing that it does anything at all!? Give us a break, we're working on it bro."

That's how I see LLMs and the hype surrounding them.
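The menu analogy above can be sketched in a few lines of Python. This is purely illustrative (the function names and probabilities are made up, not from any real project): a conventional menu is deterministic and trivially testable, while a caricature of the LLM-backed version can shuffle, invent, or drop choices on every call.

```python
import random

# A conventional menu: same choices, same order, every single time.
def deterministic_menu():
    return ["Open", "Save", "Quit"]

# A caricature of the nondeterministic menu described above (illustrative only):
# choices may be shuffled, a fake choice may appear, or the menu may fail to open.
def flaky_menu(rng=None):
    rng = rng or random.Random()
    if rng.random() < 0.1:
        raise RuntimeError("menu failed to open")
    choices = ["Open", "Save", "Quit"]
    rng.shuffle(choices)          # order varies between calls
    if rng.random() < 0.2:
        choices.append("Frobnicate")  # a fake choice that does nothing
    return choices

# The deterministic version can be asserted against exactly;
# the flaky one can only be probed for partial properties.
assert deterministic_menu() == ["Open", "Save", "Quit"]
try:
    out = flaky_menu(random.Random(1))
    assert {"Open", "Save", "Quit"}.issubset(set(out))
except RuntimeError:
    pass  # even "it opened at all" is not guaranteed
```

The point of the contrast is that the first function admits a one-line test, while any test of the second has to hedge against every failure mode at once.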

