Hacker News — NateEag's comments

Yep - it honestly reads like an LLM's summary, which often misses critical nuances.

I know, especially with the bullet points.

The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.


> You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.

Anyone semi-literate can write down what they're feeling.

It's sometimes called "journaling".

Thinking through what they've written, why they've written it, and whether they should do anything about it is often called "processing emotions."

The AI can't do that for you. The only way it could would be by taking over your brain, but then you wouldn't be you any more.

I think using the AI to skip these activities would be very bad for the people doing it.

It took me decades to realize there was value in doing it, and my life changed drastically for the better once I did.


I think they do, and you missed some biting, insightful commentary on using LLMs for scientific research.

They're saying that if they completely refused to touch any system that has been touched by AI, they would be unable to find paying work.

Thus, they won't use it directly themselves, but are willing to work with people who do.


This is not wrong, but the comment you replied to implies its author already understood that perfectly.

Qt uses "slow as shit" JavaScript in its UI markup language:

https://doc.qt.io/qt-6/qtqml-javascript-expressions.html
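For anyone who hasn't touched QML: property bindings in the markup are plain JavaScript expressions, evaluated by Qt's JS engine. A minimal sketch (element and property names here are just illustrative):

```qml
import QtQuick

Rectangle {
    width: 200; height: 100

    // This binding is an ordinary JavaScript expression,
    // re-evaluated whenever area.pressed changes.
    color: area.pressed ? "lightblue" : "white"

    MouseArea {
        id: area
        anchors.fill: parent
    }
}
```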

Is your complaint with Electron, the "browser as local GUI app" framework that's been popular with SaaS vendors for their "native" apps?


Right, for small scripting, not for the majority of the app. All the backend interaction is in C++.

Like, Electron is fine, but it's orders of magnitude slower than it needs to be for the functionality it brings. Which is just not ideal for many desktop applications or, especially, the shell itself.

Ultimately people use Electron because they know HTML, CSS, and JS/TS. And, I guess, companies think engineers are too stupid to learn anything else, even though that's not the case. There is a strong argument for Electron. But not for Linux userland dev, where many developers already know Qt like the back of their hand.


As a longtime musician, I fervently believe in doing the best you can with the tools you have.

As a programmer with a philosophical bent, I have thought a lot about the implications and ethics of toolmaking.

I concluded long before genAI was available that it is absolutely possible to build tools that dehumanize the users and damage the world around them.

It seems to me that LLMs do that to an unprecedented degree.

Is it possible to use them to help you make worthwhile, human-focused output?

Sure, I'd accept that's possible.

Are the tools inherently inclined in the opposite direction?

It sure looks that way to me.

Should every tool be embraced and accepted?

I don't think so. In the limit, I'm relieved governments keep a monopoly on nuclear weapons.

The people saying "All AI is bad" may not be nuanced or careful in what they say, but in my experience, they've understood rightly that you can't get any of genAI's upsides without the overwhelming flood of horrific downsides, and they think that's a very bad tradeoff.

I agree with them.


The Corridor Crew [1] are luminaries in our field, and they are incredibly bullish on this tech.

They've made dozens of video essays and run tons of experiments showing that they think AI is going to be great for our field:

https://www.youtube.com/watch?v=DSRrSO7QhXY (scrub through the timelines to the end of these videos to see)

https://www.youtube.com/watch?v=iq5JaG53dho

https://www.youtube.com/watch?v=mUFlOynaUyk

https://www.youtube.com/watch?v=GVT3WUa-48Y

Listen to them.

Our entire industry pays attention to them, and they're right!

[1] https://en.wikipedia.org/wiki/Corridor_Digital


> The Corridor Crew [1] are luminaries in our field, and they are incredibly bullish on this tech.

They are literally "react" youtubers who have never worked a single day as professional vfx artists.

This is like saying Jake Paul is the heavyweight boxing champion of the world.


If a year ago nobody knew about LLMs' propensity to encourage poor life choices, up to and including suicide, that's spectacular evidence that these things are being deployed recklessly and egregiously.

I personally doubt that _no one_ was aware of these tendencies - a year is not that long ago, and I think I was seeing discussions of LLM-induced psychosis back in '24, at least.

Regardless of when it became clear, we have a right and duty to push back against this kind of pathological deployment of dangerous, not-understood tools.


Ah, so this was the comment for splitting hairs over the timeline, instead of over how AI safety should be regulated.

I think the good news about all of this is that ChatGPT would have actually discouraged you from writing that. In thinking mode it would have said "wow this guy's EQ is like negative 20" before saying "you're absolutely right! what if you ignored that entirely!"


My minimalist version has a better domain name:

http://endinter.net/


If you were a thoughtful, careful, law-abiding business, yes.

I submit the evidence suggests the genAI companies have none of those attributes.


If he (or his employees) are actually exploring genuinely new, promising approaches to AGI, keeping them secret helps avoid a breakneck arms race like the one LLM vendors are currently engaged in.

Situations like that do not increase all participants' level of caution.

