Hacker News | WhyOhWhyQ's comments

I disagree about your final conclusion. To add another aphorism to your collection, "The pendulum always swings back".

I'd love to hear a good argument for optimism if you've got one. I suppose the pendulum thing works sometimes on certain timescales, but for a physical analogy "shit rolling downhill" might be more accurate: it typically doesn't roll back up, and momentum builds. Just as "the rich get richer" and inequality accelerates, so "bullshit makes bullshit," and things begin to spiral if truth and earnest effort are not merely neutral but, as TFA argues, now a disadvantage. Small course-corrections seem pretty rare in history or in nature without revolutions or catastrophe.

Or, "the tide goes out, and reveals those who are skinny-dipping".

In this context, the crisis (brown-outs, natural disasters, political instability) will show who retains enough knowledge, hard-copy references, and resources to survive.


"I also think it takes a lot of the fun out of the whole thing."

Agreed. It's like replacing a complex and fulfilling journey with drugs.


I always state when I use AI because I consider it deceptive otherwise. Since I sometimes use AI when it seems appropriate, and only in direct, limited ways, this rule seems like it would force me to be dishonest.

For instance, what's wrong with the following: "Here's an interesting point about foo topic. Here's another interesting point about bar topic; I learned of this through use of Gemini. Here's another interesting point about baz topic."

Is this banned also? I'm only sharing it because I feel that I've vetted whatever I learned and find it worth sharing regardless of the source.


Lmao we're using webassembly to run 7b parameter llms on contact lenses in the future. What a world it'll be.

I think it's morally wrong to trust a child's well-being to an LLM over a trained medical professional, and I feel strongly enough about this to express that here.

I agree with you, and reading some of the takes in this thread is actually blowing my mind.

I cannot believe that the top-voted comment right now is saying not to trust doctors and to use an LLM to diagnose yourself and others.


You've never had "the experience".

I'll share mine.

Unusual, severe, one-sided eye pain. Go to regular doctor, explain, get told it's a "stye" and to do hot compresses.

Problem gets worse, I go to urgent care. Urgent care doc takes one look at me and immediately sends me to the ER saying it's severe and she can't diagnose it because she is unqualified.

Go to ER, get seen by two specialists, a general practitioner, and a gaggle of nurses. Get told it's a bad eye infection, put on strong steroids.

Problem gets worse (more slowly at least).

Schedule an urgent appointment with an ophthalmologist. For some reason the scheduling lady just, like, comprehends my urgency and gets me a same-day appointment.

Ophthalmologist does 5 minutes of exam, puts in some eye drops, and the pain is immediately gone. She puts me on a very serious steroid with instructions to dose hourly and visit her daily. It's the only reason I am seeing out of both eyes today.

As the top comment says, do not just "trust" doctors. About 70% of hospital deaths are due to preventable mistakes in the hospital. People who are invested in their own care, who seek second opinions, who argue (productively) with their doctor have the best outcomes by far.

Nobody said not to work with doctors, but blindly trusting a single doctor will seriously harm your outcomes.


>About 70% of hospital deaths are due to preventable mistakes in the hospital.

It's awful that you had a bad experience, but no. Nowhere near 70% of hospital deaths are from preventable mistakes.

I would also note that in your experience, you ended up trusting a different doctor (an ophthalmologist), not ChatGPT. Second opinions from other qualified professionals are a thumbs up from me.


I would add to your note that the person who was correct in their care was the actual expert. Doctors are experts in their fields, but until they saw an ophthalmologist, they hadn't seen the right practitioner.

Just as I wouldn't go to my podiatrist to treat a complex case of rosacea, urgent care and GPs aren't for specialized, complex, and rare cases.


I think ChatGPT can help you argue with doctors and get to the specialist faster.

The author doesn't say don't trust doctors, or trust ChatGPT. He says don't trust "a single doctor," and look for a second opinion whenever possible.

> About 70% of hospital deaths are due to preventable mistakes in the hospital.

Sure sounds like an asspull stat. Extraordinary claims require extraordinary evidence. Do you have a reference for that? Care to share?


At the high end, it's about 4.1%

https://pubmed.ncbi.nlm.nih.gov/31965525/


[flagged]


This is a completely unacceptable comment on HN. The guidelines make it clear we're looking for a higher standard of discourse here. These lines in particular are relevant:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Eschew flamebait. Avoid generic tangents. Omit internet tropes.

We have to ban accounts that continue to comment like this, so please take a moment to remind yourself of the guidelines and make an effort to observe them if you want to keep participating here.

https://news.ycombinator.com/newsguidelines.html


Everyone on this website has thoughts on LLMs, not just you.

LLMs are heavily biased by leading questions, which is not a good property in a doctor. They also tend to respond in a way that pleases the prompter, who might be a frustrated parent, rather than in a way that is impartial and based on good medical practice. They frequently speak with high confidence and low accuracy.


Maybe you should put this comment into ChatGPT and ask it if you should take a step away from the computer for a little bit.

>The problem is you're all dumb and getting dumber.

But yeah, you're the only smart person here. Clap clap.


>Of course, you and every other histrionic lemming won't take this to heart or even really read it. You'll skim it, pretend that's the same thing, look for the no-no words, hammer your keyboard, confabulate a response to something no one actually said, then pat yourself on the back for having slapped down yet another hand reaching out to lift you out of your own ignorance.

Like clockwork.


Of course I'm not going to read fucking paragraphs of pretentious writing calling me dumb.

Genius call.


This is misguided; the moral imperative is to get to the correct diagnosis and treatment by any means possible, usually an "all of the above" strategy. In my experience many doctors are not ideal, and "always consult a doctor" is legal boilerplate to avoid being sued.

Usually you've got less than a 50-50 chance of getting useful help from a doctor on any one visit.

In my case, multiple doctors' treatment of my hypertension was largely useless because it followed the "typical" approach. I only found what worked for me by accident.

In other cases my own research was far superior to a regular doctor's visit.

I can also quote the experience of a family member whose research into a major surgery paid huge dividends and if they had followed the advice of the first 2 doctors the surgery would very likely have failed.

The chances of getting correctly diagnosed and treated for a problem that is not obvious are surprisingly close to zero unless you have a very good doctor who has plenty of time for you.


[flagged]


>How does this line up with your religious belief that doctors are infallible and should be 100% trusted?

They did not say that in their comment.

Replying in obvious bad faith makes your original comment even less credible than it already is.


It's an incredibly credible comment. But if you have a need to have slavish trust in authority, be it LLMs or MDs, yeah, it'll look preposterous that someone might have an independent mind and (gasp) believe their own lying eyes before a supposedly arcane, but in fact incredibly banal and fallible priesthood.

But they aren't believing their eyes, they are believing a LLM. Wanna talk about a fallible priesthood? People are becoming adepts of the babble box.

They make an argument to always "verify," but if your first source is ChatGPT, where are you verifying next? And why not go there first?


If you actually slowed down and read what I wrote, you'd see that I said that people blindly believing in either LLMs or MDs are stupid (as did the OP).

And gee, what are people supposed to do when they encounter potentially unreliable information from a generic non-vetted source? The same thing you do with literally any other one:

https://usingsources.fas.harvard.edu/what%E2%80%99s-wrong-wi...

> If you do start with Wikipedia, you should make sure articles you read contain citations–and then go read the cited articles to check the accuracy of what you read on Wikipedia. For research papers, you should rely on the sources cited by Wikipedia authors rather than on Wikipedia itself.

> There are other sites besides Wikipedia that feature user-generated content, including Quora and Reddit. These sites may show up in your search results, especially when you type a question into Google. Keep in mind that because these sites are user-authored, they are not reliable sources of fact-checked information. If you find something you think might be useful to you on one of those sites, you should look for another source for this information.

> The fact that Wikipedia is not a reliable source for academic research doesn't mean that it's wrong to use basic reference materials when you're trying to familiarize yourself with a topic. In fact, the Harvard librarians can point you to specialized encyclopedias in different fields that offer introductory information. These sources can be particularly useful when you need background information or context for a topic you're writing about.

This isn't rocket science.


"AI always thinks and learns faster than us, this is undeniable now. "

Sort of a nitpick, because what's written is true in some contexts (I get it, web development is like the ideal context for AI for a variety of reasons), but this is currently totally false in lots of knowledge domains very much like programming. AI is currently terrible at the math niches I'm interested in. Since there's no economic incentive to improve things and no mountain of literature on those topics, unless AI really becomes self-learning / improves in some real way, I don't see the situation ever changing. AI has consistently gotten effectively a 0% score on my personal benchmarks for those topics.

It's just aggravating to see someone write "totally undeniable" when the claim is trivially denied.


> It's just aggravating to see someone write "totally undeniable" when the thing is trivially denied.

You've described AI hype bros in a nutshell, I think.


Well that took a surprising turn. (1) Friendly dunking on someone named todepond. (2) Interesting ideas about xhtml... looks like I'm going to learn something here. (3) Ideological conflict.

Is there a backstory here? Or is this just random venting?

Anyways, I reject the idea that loose programming is more "tolerant" in any sociological sense.


Humans design the world to our benefit, horses do not.

Most humans don't. Only the wealthy and powerful are able to do this.

And they often do it at the expense of the rest of us.


This is really cool and inspirational! Looking forward to studying this closer!

I personally never found Microsoft software anything but shoddy: Microsoft Word, Internet Explorer, PowerShell, Outlook, PowerPoint... these all make me shudder. I'll give them Visual Studio; that one is pretty good. I think Excel is fine. Skype was okay, I guess. Anyway, I don't think the shoddiness can really be the actual threat to Microsoft, given their track record.

When I joined my current employer a few years ago, one of the things I thought might be good was that, for the first time in a fairly long career, I'd be using Visual Studio.

Before that I'd been a vi or vim user for everything, for many years.

But after a few years of experience I'm not really impressed. It's too big and too slow. It has a few things I kinda like, and a lot of half-working things I'd love if they worked consistently (e.g. some things I work on can be debugged, some can't; experts might know why, but I don't), but as they are they're too unreliable to really change how I do things. Overall it's not enough to make me miss VS when I'm writing my own stuff, still in vim on Linux.

Actually, all of the Microsoft technologies I've run into were disappointing, with only two exceptions, which I'll get to. PowerShell felt like they hadn't really learned the right lessons from the Unix shells, for example, and Entra ID (called "Azure Active Directory" when I started caring) is a confusing mess.

Two exceptions: 1. C# is a pretty good language. Mostly it's a better Java. Is that amazing? Not really, but it's still pretty good: it delivers reasonable performance, there's a large ecosystem, and I don't hate writing it.

2. Azure itself has to have a way to "cut off" payments, because Microsoft sells a product where students can get a limited amount of credit. The student doesn't have any money, so if they had $50 of Azure credit and Microsoft let them spend $85 before turning off all the Azure systems that credit was funding, well, too fucking bad: Microsoft eats the $35 loss. Accordingly, Microsoft is better (not perfect, but better) than AWS or Google's offering at actually turning stuff off when you exceed what you asked to spend.


Skype was an acquisition.

And it became pretty bad afterwards.
