Challenges for Artificial Intelligence in Medicine (cardiogr.am)
120 points by brandonb on Oct 4, 2016 | 45 comments


(OP here)

We spend a lot of time thinking about how to make AI succeed in medicine. Given that so many efforts, including MYCIN, have been tried and failed before, one of the key questions to answer is "Why now?" In other words, what has changed in the world which will let AI succeed where it has failed before?

I'm curious: is anybody else here applying deep learning, or any other subfield of AI, to healthcare?

If so... do the challenges listed in this post resonate? Do you believe the shifts identified are the right ones to focus on?


MYCIN did not fail: https://www.ncbi.nlm.nih.gov/pubmed/480542

From the abstract: "MYCIN received an acceptability rating of 65% by the evaluators; the corresponding ratings for acceptability of the regimen prescribed by the five faculty specialists ranged from 42.5% to 62.5%." So, better than the human experts considered individually.

Expert systems are able to explain their reasoning, which is essential if they are to be used for diagnosis. Neural networks cannot.

Deep learning and similar approaches might be useful in two areas though: interpretation of images, e.g. what's on this X-ray or ultrasound scan, or what type of rash is this; and undiscovered associations, e.g. are patients who were given drug X combined with drug Y for disease P more likely to get disease Q later on in life?

Deep learning in medicine also has a downside even supposing it works: lots of patient records are required, and anonymous ones can be linked to the people they describe, so there's a confidentiality problem.

PS I notice that you appear to be agreeing with me in your article: "This was the MYCIN project, and in spite of the excellent research results, it never made its way into clinical practice." and even refer to the same paper, so we're really just using different criteria for success/failure. One of the problems of expert systems was getting them adopted by end users. I don't see how using neural networks will be any different in that regard.


Yep—as you point out, the first paragraph of the article cites the same 1979 MYCIN accuracy results you did. My criterion for success is enduring impact on the way medicine is practiced, so the rest of the article tries to answer the "What's different today?" question about adoption you raise in your second-to-last sentence.


I've used deep learning for segmenting brain anatomical scans, and I worked in a lab that used neural networks to detect cancerous tumors.

I suspect the first major hospital-facing implementations of machine learning will be in radiology, e.g.: http://suzukilab.uchicago.edu/, which has been diagnosing cancerous tumors in CT scans with neural networks since before it was cool (one reason you won't see the term 'deep learning' in that literature is that these were originally just 3-layer networks, built before the term was even coined). IIRC it outperformed the average radiologist.

I wonder if the label problem could be less difficult for some low-hanging fruit. The CT scan neural network required something like 40k labeled scans from a radiologist, but labels could come for free: many yes/no disease detections will eventually be resolved by your doctor anyway. If you had access, say, to every CT scan taken, plus electronic health records for the patients, your labels would be noisy and biased but would at least come at massive scale. The problem is (legitimately) restricted access to health records in the US. Maybe some European countries have better data access?
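To make the weak-label idea concrete, here's a minimal sketch (PyTorch, synthetic data; not the Suzuki lab's actual pipeline): a small CNN classifying CT slices as tumor / no tumor, where the yes/no labels would come from linked health records rather than radiologist annotation. Input size, label source, and hyperparameters are all assumptions.

    import torch
    import torch.nn as nn

    class SmallTumorNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 64 * 64, 1)  # assumes 256x256 slices

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))  # tumor logit

    model = SmallTumorNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    # One synthetic batch standing in for (CT slices, weak labels from the EHR).
    slices = torch.randn(8, 1, 256, 256)
    labels = torch.randint(0, 2, (8,)).float()
    loss = loss_fn(model(slices).squeeze(1), labels)
    loss.backward()
    optimizer.step()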

And the implementation problem will eventually disappear. I remember talking with a radiologist years ago, who remarked "some people in my field have no idea it's about to disappear". I'm not so sure there will be no more radiologists, but their role will definitely change. Hospitals would be okay with this, actually, since radiologists are expensive. Eventually radiology scans will probably be like ordering blood tests, where fewer and fewer MD's are required.


Many of the challenges with radiology are historical. Imaging technologies have been treated as devices for producing images on which measurements can be made, rather than as measurement devices from which images are formed. This has led to difficulties for quantitative imaging techniques, and so we continue to rely on the (albeit impressive) qualitative assessment of the radiologist.

This is changing (shameless plug for my company http://www.pulmolux.co.uk), but medicine moves slowly, preferring evolution over revolution. Having said that, I certainly detect that the scanner manufacturers (GE, Philips, Siemens, etc.), having reached saturation with radiologists, now have a thirst for disruption and see the referring physician as the next customer. MRI cardiology is something of an example.


My Dad is a consultant radiologist, recently retired but still doing some part-time work. When I've had conversations with him about whether he thinks his job might be automated, he says that the main problem he's seen with current ML systems is that they throw up far too many false positives. They also aren't great for unusual or corner cases. For example, in one scan he saw recently, the very corner of the picture was occluded because the radiographer had left something on the machine (not quite sure what). He said most trainees would have just ignored it, but he sent it back. Turns out it was hiding a tumour. Now I know this is just one example, but he said you'd be surprised by the number of weird things that turn up like that. Where he does see it having a place is in helping radiologists avoid missing really obvious things because they're tired. Most people don't realise how much concentration is required to just look at scan after scan for minuscule clues indicating a potential problem.


CT scans aren't really used to look for brain tumors. We use MRI for that mostly. CT is used for screening of stroke, trauma, and other things. Source: radiologist / me.

I work on radiology image segmentation also. And I agree it is solvable with machine learning. But even if software could do the job of a radiologist, it wouldn't replace one any more than your EKG-reading program replaced cardiologists.


> But even if software could do the job of a radiologist, it wouldn't replace one any more than your EKG-reading program replaced cardiologists.

I don't think the radiologist is going anywhere soon but the role is changing. Radiologists are increasingly having to deal with more and more derived information. They need to understand the algorithms being used as well as the biology, anatomy, physiology and disease being investigated. I can see a time when algorithmic specialists become a regular part of their multidisciplinary team.


> CT scans aren't really used to look for brain tumors.

I can see how my phrasing was confusing, but I didn't mean to suggest that CT scans are used to look for brain tumors. My work segmenting brain scans was not tumor-related, just gray/white matter segmentation.


I think there are some folks at UCSB that are working on modeling the health state of trauma patients based on incomplete and noisy data. Pretty fascinating stuff. I'm mostly familiar with it from talking to Dr. Bernie Daigle at the University of Memphis.

I'm curious, since you have a background in fraud detection, are there parallels or insights you brought to healthcare from that area? I'm currently working in fraud detection, and I'd like to move to healthcare.

Also, what are your thoughts on operations research for healthcare? That is, not modeling individual health of patients, but instead improving scheduling or other operational aspects of a hospital or clinic.


There are lots of parallels to fraud detection! Label imbalance is the obvious one: in both cases, you're looking for the proverbial needle in the haystack, so techniques like anomaly detection or unsupervised learning are really important.
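For anyone curious what that looks like mechanically, here's a minimal sketch on synthetic data (scikit-learn; the features and the ~0.5% positive rate are made up): score records with an unsupervised anomaly detector, and re-weight the rare class when labels do exist.

    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 20))              # mostly "normal" records
    y = (rng.random(10_000) < 0.005).astype(int)   # ~0.5% positives (the needles)
    X[y == 1] += 3                                 # positives drift in feature space

    # Unsupervised: anomaly scores without touching the labels at all.
    iso = IsolationForest(contamination=0.005, random_state=0).fit(X)
    anomaly_score = -iso.score_samples(X)          # higher = more anomalous

    # Supervised: re-weight the rare class rather than letting it be ignored.
    clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)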

I haven't given much thought to the operations research side of things. I think the questions I'd ask about any idea there are the same ones I'd ask about any B2B business: first, what's the hard ROI for the customer? How can you measure it? If you succeed, what makes this defensible over time?


I work for a large pharma, and the primary uses for AI that I've seen here are 1) biomarker development and 2) a more precise, accurate, and reproducible way to measure medical signals.

Biomarkers predict outcome, whether of drug effect or of toxicity/safety. When used preclinically (non-human models), biomarkers continue to be highly valued internally. If a computational model can reliably anticipate outcome, it can shorten trial time and cost. In clinical/human use, however, biomarkers seem increasingly fraught in recent years: the FDA seems to be less receptive to surrogate outcome predictors, and more demanding of concrete, quantifiable clinical adverse events (e.g. the LDL/HDL ratio vs stroke, BMD vs bone breakage, A1C vs retinopathy). As such, I'd be circumspect about developing biomarkers that lead directly to diagnosis. Like the PSA test, even a strong biomarker that also introduces false positives or uncertainty is likely to face opposition to adoption in standard medical practice.

As to the use of AI (esp. pattern recognition) to better measure drug response or toxin effect, this seems to be well received, at least for in-house use. Automation of signal acquisition or analysis, if it can reliably improve on the status quo, in my experience, is well received by my employer. As a drug development cost cutting measure or as a more reliable rater of symptom measurement, AI seems to be win-win. That doesn't mean I see a wholesale rush to adopt AI-related tech here, but the interest seems to be steady and positive. Presumably this should lead to greater use of AI by manufacturers of medical instruments, which frankly I have not seen (though Siemens certainly has hired its share of quants, presumably to serve such ends. However, these folks may well spend most of their time working their magic on external contracts.)

Often in pharma, I think AI, like math models, is perceived by biologists to be too synthetic and abstract, lacking the credibility of a well-trodden mouse model. Unless AI/quant models lead to a p < .05 t-test and a visible separation of error bars between groups, they're unconvincing statistically. And unless the AI can be tied _convincingly_ and directly to an underlying chemical mechanism, it's unconvincing biologically. Scientists are a tough audience.


I think you are confusing a few issues regarding 'success'. As pointed out, MYCIN performed well, and since de Dombal's work in the late 60s we have known that computers can perform better than experts in specific clinical domains. Similarly, Internist-1 and other systems performed quite well. The block for them was integration into clinical workflows. The biggest barrier was getting structured data that machines could use to run the algorithms, not a lack of performance.

Today, workflow integration issues still remain, and a lot of free text is still entered, etc. However, a more pervasive issue is the lack of outcome data against which to train. In other words, what are we optimising algorithms for? In many health care systems we capture raw data, e.g. observations and labs, but not patient outcomes that are meaningful (i.e. based on optimising patient utility rather than some more easily captured data).

The final issue for deep nets and ML is that these are descriptive models: they learn from experience, whereas we know there is huge variation in practice and outcomes. In medicine we may want normative models based on best evidence, or some combination. And then there's integration with individual patient utilities.


I'm not in this field directly, but I have spent a lot of time interacting with scientists and medical professionals regarding new approaches to the field. I'm a little surprised I didn't see more about resistance to unfamiliar technology from the medical field. The essay touched on this briefly, but I find there to be quite a lot of pushback in medicine against new methods in general.

In some cases, this is a simple problem of embedded traditions, or worse, resistance to something that could put you out of a job. In a lot of ways, though, the resistance to change in medicine is seen as an important safeguard; when you're dealing with people's lives, you hesitate before making any changes in procedure, because we know our current methods work okay. Even if there are others which seem to work much better, we should proceed cautiously. How worried are medical professionals about adopting the more opaque techniques of AI? Can you persuade people to accept a diagnosis from something that can't explain its reasoning? Are you worried about any "bugs" or undiscovered unusual behavior in edge cases?


What's strange is that although doctors have been reluctant to adopt AI like MYCIN, the pace of adoption for other innovations like new surgeries, drugs, and implantable devices is actually quite rapid. So I don't think doctors reject new methods in general.

I think the hidden factor here is the business model. A new surgery makes money for a hospital, which provides a countervailing force to the caution you mention.

A new diagnosis algorithm may lose the hospital money, so what is the force that is going to push for its adoption?


The fee-for-service model prevalent in healthcare today is certainly one reason. However, there is a general mistrust of algorithms. Here's a paper that documents what they call Algorithm Aversion: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2466040

Note that there are exceptions and this can be overcome.


I met with a company recently that started out on the clinical side doing fMRI brain imaging. They currently house the nation's largest private dataset and have some very compelling sub-datasets in various areas, such as Parkinson's, ADHD, Alzheimer's, and stroke.

They've hit the point of gaining enough training data and were moving onto the next phase, where they wanted to utilize deep learning to help augment the doctors' decision making. There's a ton of red tape here with the FDA, but that was the near-term goal: augment the doctors' decision making, not replace them.

I believe they'll succeed in doing this one way or another. We had also talked about pairing brain imaging data with genetic data (e.g. Alzheimer's and mutations in APOE). The critical things we talked about were how we would actually train the different models, what a sequential approach towards this type of software development would look like, etc. We believed that the most pertinent focus point would need to be a refined supervised deep learning model.

I can definitely sympathize with the complexities of deep learning in healthcare.


I was a bit surprised that you seem to consider the challenges to be "technical, political and regulatory". Why? Because obviously one word is missing and that's "science".

Maybe the lack of adoption of machine learning (I don't like to call something AI that really isn't) is due to the fact that medicine is more demanding on high quality scientific evidence than the IT industry.

Related, Gerd Antes from Cochrane once wrote a very interesting piece on the promises of "Big Data" (which is kinda related) and that they still need to hold up to scientific evidence: http://www.labtimes.org/editorial/e_654.lasso


Great post.

In addition to the lack of (labeled) data, the deployment and adoption challenges, and the fear around regulation, I would add another challenge: patient data is very complicated and highly heterogeneous: think doctor and nurse notes, machine-generated imaging, all kinds of measurements, patient habits, patient medical histories, etc.


> I'm curious: is anybody else here applying deep learning, or any other subfield of AI, to healthcare?

> If so... do the challenges listed in this post resonate? Do you believe the shifts identified are the right ones to focus on?

I'm applying deep learning in a healthcare application, and certainly the points you raise resonate and are generally the right areas to address for utilizing AI (for points 2 and 3, really for any innovative technology), but let me expand on them a bit.

#2: Deployment and the Outside-In Principle - Very often new technologies are coupled with business model innovation to bring about change. The complex and often perverse mechanics of the healthcare system in this country make this exceedingly difficult. I agree that models that more rationally couple risk with reward (e.g. Accountable Care Organizations, Bundled Payments, and payer/provider organizations like Kaiser) provide the right incentives to reduce cost while increasing the quality of care. This is an environment where technology can make a difference; not so in the fee for service model. I think the software "deployment" model is much less of an issue.

#3: Regulation and Fear - This is actually a significant challenge. The FDA has significant incentive to be very conservative with their approvals (lives may hang in the balance), and for them the risk of failure (i.e. a death caused by a device/test they approved, which likely makes the news headlines) is MUCH more traumatic and negative than the rewards of success (some costs are reduced for a specific treatment or diagnostic that almost no one will ever hear about). Additionally, the FDA, doctors, and medical administrators suffer the same fear of formulaic decisions that most people do. We simply don't trust algorithms to make decisions, and so we will resist them or hold them to significantly higher standards than we hold human-driven decisions. This means even if you get through the FDA, you'll potentially have resistance from doctors and patients.

If we extrapolate from some of the other posts about radiology, I can just imagine a patient's experience in a hospital. I'm sure a doctor or radiologist will share an anecdote like the one in this thread (the radiologist who saw an occlusion due to a poor image and detected a tumor) and scare a patient into thinking a radiologist is better than an algorithm at reading an MRI/x-ray, despite statistics that will (eventually) show that the deep learning algorithm is more consistent and accurate. To be clear, I'm not arguing that a radiologist shouldn't continue to be involved and review the diagnosis, but that it will be difficult for this technology to be established in high-stakes applications due to fear.


This is an insightful response! I'm curious, what AI application are you working on?


It's, perhaps unsurprisingly, in the diagnostics arena.


I'd like to apply ML to healthcare, I think it's the holy grail.

However, I feel like you're making it much simpler than it really is. You've got a cadre of PhDs at UCSF, how am I going to compete with your all-star team with all of your wealth of knowledge to release something truly innovative?


I read your question a few times but I'm sorry, I still don't understand. Can you rephrase it?


Our startup, CliniCloud, is currently looking at applying deep learning to respiratory recordings obtained from auscultation using our digital connected stethoscope. We've also partnered with teaching hospitals to try and obtain labelled and "clean" data samples, with the aim of using a semi-supervised approach for the detection of asthma and wheeze severity rating.
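Roughly the shape of that semi-supervised setup, sketched on synthetic audio (not our actual pipeline; the sampling rate, window sizes, and toy labels are assumptions): log-spectrogram features per clip, then label propagation from the small clinician-labelled subset to the rest.

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.semi_supervised import LabelSpreading

    def features(audio, fs=4000):
        f, t, sxx = spectrogram(audio, fs=fs, nperseg=256)
        return np.log1p(sxx).mean(axis=1)      # average log-power per frequency bin

    rng = np.random.default_rng(0)
    clips = [rng.normal(size=4000 * 10) for _ in range(200)]   # 10 s recordings
    X = np.stack([features(c) for c in clips])

    y = np.full(200, -1)                       # -1 = no clinician label yet
    y[:30] = rng.integers(0, 2, size=30)       # small labelled subset (1 = wheeze)

    model = LabelSpreading(kernel="rbf").fit(X, y)
    pseudo_labels = model.transduction_        # labels propagated to unlabelled clips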

To be honest, even if we were to stumble across a revolutionary algorithm with 99% sensitivity and specificity, I am sceptical about the dissemination and use of such a product in medicine for at least the next decade.


What makes you skeptical about dissemination of your product in particular?


I am applying ml to a healthcare domain, and one of the challenges is irrational faith in the outcome. My model says things like - based on your past diet, you must drink more coffee and eat fewer chicken wings and.... When I showed this to a nutritionist, she is alarmed and think people who use my app might switch to an all-coffee diet! Now the ml model is simply interpreting features to minimize the loss function. It doesn't know what is coffee, or what will happen to a human if he switches to an all-coffee diet in reality. One of the VCs said my app was like a GPS for the body. This is awesome and problematic in the same way - if your GPS tells you to turn right and it's pitch dark and you just do what the GPS says and fall off a cliff, is it the GPS's fault ? Perhaps you didn't pay the annual update fee so it's working off of the old maps.
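A toy illustration of that "just minimizing the loss" point, on made-up diet data: coffee happens to correlate with an unobserved healthy habit, so the fitted model effectively says "drink more coffee" without any notion of what coffee is.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 500
    coffee = rng.poisson(2, n)                 # cups/day
    wings = rng.poisson(3, n)                  # servings/week
    # Outcome (lower = "healthier"): wings hurt, coffee is neutral, but coffee
    # happens to correlate with exercise, which the model never observes.
    exercise = 0.5 * coffee + rng.normal(0, 1, n)
    outcome = 2.0 * wings - 1.5 * exercise + rng.normal(0, 1, n)

    model = LinearRegression().fit(np.column_stack([coffee, wings]), outcome)
    print(model.coef_)   # negative coffee coefficient -> "more coffee is better"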

However in the big picture I agree with you. Now is the time to be building these things.


1. Compute $/flop, see "cloud."

2. Tooling. Spark, Tensorflow, etc.

3. Policy. Folks who make decisions in hospitals are finally coming around to this whole "computer" thing. Slowly, to be sure. Eventually those who don't figure it out will be bought by those who do.


Props for mentioning MYCIN. It's a big bugbear of mine that very few people remember it and you never see it mentioned in articles about AI in medicine.


I work for a company that does machine learning on clinical notes. The challenges the author introduces are real, but he misses the mark on the last point "Only Partially a Problem: Regulation and Fear."

Actually, regulation and fear are the main reasons that machine learning hasn't taken off in clinical medicine. More precisely, the provider's fear of getting sued and the regulations that require a licensed practitioner to "have the final say." There is one more problem as well: machine learning doesn't solve a problem that providers think they have. It's lesson #1 from The Lean Startup or The Startup Owner's Manual. You may have the best EKG-reading software in the world (I have no doubt computers could surpass providers on this task), but if the providers don't feel they need it, it simply won't be adopted. This is the Watson situation at heart.

Conversely, here are some areas in medicine where machine learning has been adopted:

1. Medical billing code generation: Several companies have systems for reading notes using natural language processing and predicting billing codes using market-basket analysis (a toy sketch of the market-basket idea follows after this list).

2. Identifying bacterial cultures: Inpatient bacterial cultures are placed in a big incubator and constantly scanned for growth. When growth is suspected, there are emerging algorithms to automatically classify the bacteria. Similar work is being applied to other areas of pathology (see: http://www.nature.com/articles/ncomms12474)

3. Image-analysis in radiology: There are a few radiology companies that are demonstrating superior results by applying novel algorithms. While not "machine learning" per se, the existence of such algorithms is encouraging for future advancements in radiology, since it's a step beyond just viewing the image. Here's one such company that has gained FDA approval for their blood flow mapping technology: http://www.ischemaview.com/
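A toy sketch of the market-basket side of item 1 (made-up encounters and codes; a real system would combine this with NLP over the note text): mine "if code A was billed, code B usually was too" rules from past encounters, then use them to suggest companion codes.

    from itertools import combinations
    from collections import Counter

    encounters = [                      # billing codes per past encounter (made up)
        {"E11.9", "I10", "Z79.4"},
        {"E11.9", "Z79.4"},
        {"I10", "I25.10"},
        {"E11.9", "I10", "Z79.4"},
    ]

    pair_counts, code_counts = Counter(), Counter()
    for codes in encounters:
        code_counts.update(codes)
        pair_counts.update(combinations(sorted(codes), 2))

    min_confidence = 0.6
    for (a, b), n_ab in pair_counts.items():
        for lhs, rhs in ((a, b), (b, a)):
            confidence = n_ab / code_counts[lhs]
            if confidence >= min_confidence:
                print(f"{lhs} -> {rhs}  (confidence {confidence:.2f})")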


Many people might not understand just how busy physicians are, and how difficult it can be to integrate a new product into the clinical workflow.

The most pressing thing to understand is that clinicians spend the VAST majority of their time gathering all of the necessary information to make a diagnosis. In other words, they aren't puzzling over how to diagnose about 85% (made that up) of their patients.

Once the necessary information is gathered, an experienced doc doesn't usually spend more than about 10-15 seconds debating different diagnoses. Therefore, if your tool takes more than 10-15 seconds to launch, enter any necessary data, and get a result, you are slowing the clinician down and they won't use it. This is why automated EKG interpretations (which are very much a real thing used at hospitals across the country) print directly on the EKG printout - it doesn't cost the clinician more than about 2 seconds to read what the machine thinks and adjust their interpretation accordingly[1].

One of the major problems limiting adoption of "expert" computer systems is the amount of (very expensive) integration it takes to get them under that 10-15 second limit. One of the big reasons radiology is seeing a lot of buzz around machine learning and automated interpretation is that integration becomes a lot easier when you can just feed in an image and maybe 5 words about the indication for the study.

I would love to go on for a while about this stuff, but I'll stop there for now :)

[1] Some people here might be interested to learn that non-cardiologists generally don't have negative views about automated EKG interpretations. But we are also very well-aware that when we make decisions about a patient, those decisions have to be anchored to something a lot more substantial than "the machine told me to do it."


One way to think about AI's potential impact is less about replacing what physicians do well currently, and more about doing things they can't do at all.

Take ECGs -- it's true that in a hospital, an automated ECG interpretation doesn't buy you much. But what about the patient with a paroxysmal heart rhythm that doesn't show up when they're at the doctor's office?

I was at a patient conference recently, and people were describing the first time they felt atrial fibrillation (a common abnormal heart rhythm). Many times, by the time they got to the doctor, they were back in sinus rhythm and thus the ECG showed no abnormality. Some were told they were just feeling "anxious" or "going through menopause." It often took months of persistence just to get a diagnosis.

Now, if you have cheap sensors + AI analyzing the patient's whole heart history before they walk in the door, you can do a lot of good for real people.


To address your example directly - we already have holter monitors that would show a case of atrial fibrillation quite easily. They aren't terribly expensive, at least for something that has to have FDA approval, and they are frequently used. Heck, you don't even need "AI," in the sense of neural networks/machine learning/some other buzzword. Current systems will review a strip collected over several days and flag any abnormal rhythms.
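For a sense of how little machinery the flagging itself takes, here's a rough sketch (not any vendor's actual algorithm; the threshold and sampling rate are assumptions): find R peaks on a single-lead strip and flag windows where the beat-to-beat (RR) intervals are highly irregular, which is what atrial fibrillation looks like on a rhythm strip.

    import numpy as np
    from scipy.signal import find_peaks

    def flag_irregular_rhythm(ecg, fs=250, cv_threshold=0.15):
        # R peaks: at least 0.3 s apart, reasonably prominent relative to the signal
        peaks, _ = find_peaks(ecg, distance=int(0.3 * fs), prominence=np.std(ecg))
        rr = np.diff(peaks) / fs               # RR intervals in seconds
        if len(rr) < 5:
            return False                       # not enough beats to judge
        cv = rr.std() / rr.mean()              # coefficient of variation of RR
        return cv > cv_threshold               # highly irregular -> flag for review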

The problem comes with determining who to put on a monitor. In the case of the patients you described, it's actually quite likely that the doctors seeing these patients considered the possibility of afib. The symptoms, though, can be very vague, and they are seen nearly every day in the doctor's office. It's simply too expensive to put every patient on a holter monitor - the doc's office has to be paid to maintain the monitors (which people abuse at home), the nurses have to be paid to teach patients how to correctly wear them, the monitor company has to be paid for whatever absurdly expensive and proprietary review software they supply, and the prescribing doctor (oftentimes the prescribing cardiologist) has to be paid to review and confirm the machine's interpretation.

All of this for a transient rhythm which any second year medical student would easily recognize if presented the EKG from across the room.

The sad reality is that the patients you described were experiencing the system as it is "designed" (I use the term loosely) to work. The fact that someone is persistently seeking help for their problem dramatically raises the probability that something is truly wrong, and doctors actually recognize this and take it into account. This is one of the reasons it's considered best practice to establish a long term relationship with one doctor who knows you well, but it's harder and harder to do with insurance companies only reimbursing for 15 minute visits.


What kind of information do they gather, and can that be automated?


One of the challenges of medicine is that the information is gathered from so many sources and is so "fuzzy" in quality.

Building a "database" of information from which to make a diagnosis is unlikely to be easily automated. Take a straightforward case of a patient who comes to the emergency department after "fainting". Did they slowly kind of "melt" to the ground, or did they just BOOM fall? Were they confused after they woke up, or just a little sleepy? Was it a hot day or is it wintertime? Were they wearing a shirt and tie, or a t-shirt? Different answers to each of these questions will change the probability of each potential diagnosis. The signal:noise ratio is frequently very low, and there's not a great way to improve it without adding an extremely large amount of cost and time to an already expensive and slow healthcare system.

Good clinicians already have an idea of the top 2-3 most likely possibilities before they walk into a patient's room, based on epidemiology and a quick review of a patient's chart, but we try to be flexible enough to discard those preconceptions if new info becomes available. Sometimes clinicians fail to fully investigate what a patient is telling them, and that's where the real mistakes get made.


I'm working with a few people on ML applications for medical image segmentation, in Finland and south east Asia. I think ML aided diagnosis will be commonplace pretty soon.

Here in the UK, DeepMind has been doing interesting work on retinal and radiology images with NHS.

While I agree that large enough quantities of labeled data and legal access to it can be hard to get, interestingly, there is plenty of lower-hanging fruit in the medtech space that doesn't necessarily have anything to do with machine learning.

Take hospital IT software for instance. Doctors literally waste a double-digit percentage of their time wrestling with really bad legacy software.

Even the really expensive solutions, like Epic Systems, are horrible. I am hopeful that better options will become available and that future public health budgets won't get wasted on the kind of systems that exist now.


Saving time and disparate product integrations are definitely the main requests I see from them. We buy a ton of crap to do patient care and people want widgets that talk with everything. Then someone decides those products are out of date, and so the widgets need to be rewritten. Then there are budget constraints because the hospital's goal isn't to have very quick EMRs running on beefy hardware/infrastructure. But naturally if you show them how much time/money is wasted waiting on a slow database call, it gets ignored.

Google should write EMR software. They would probably be pretty good at it.


"Google should write EMR software. They would probably be pretty good at it."

They (Google DeepMind) are actually doing it for the NHS. I was surprised how much of their work with NHS seems to be UI/UX and "normal" backend/client engineering, rather than ML. It looks very good. I hope they are going to open source it.

https://youtu.be/KF1KhuoX2w4?t=25m36s


The interesting part is that a lot of effort is already being made to improve those systems. I even know a family doctor who was working in his spare time on improving IT infrastructure.


Could I connect with that family doctor you know? I love speaking with technology-minded individuals in healthcare. My contact info is hello@james.hu

Thanks!


AI again? Expert systems have been around to support medical doctors' decision making for 2+ decades. Studies demonstrated that doctors can use them to improve their decisions. Hardly anybody uses them in practice.

In real life, medical information often is stored as PDF or similar in the hospital information system. An interesting challenge for AI would be to encode these PDFs.


Yeah, I build a decision support product that uses an expert system/GOFAI. We parse PDFs, root around the EHR, read and analyze unstructured data, and so on. Parsing PDFs isn't that hard, unless you want to get things like EKG results; then you need to do OCR and some analysis on the now potentially garbled text.

We have some pretty active users with great results, but doctors are super busy. It's hard to get them to use anything that isn't in their standard toolkit or tied to payments. And that's understandable when you see 14+ patients a day. Getting into the workflow is the real challenge for AI, in my view.


I'm a bit disappointed in the straw-man assumptions in the first paragraph about AI + cats. There's an enormous amount of work being done applying AI and deep learning to healthcare. Enlitic is one example. The MLHC conference is entirely devoted to the topic. DeepMind's work with the NHS is also well known.


The real disruption is in giving power to the patient not the doctor. I want that power. I check online resources all the time about every sign and symptom I get, about every drug and medicine and about all procedures in order to avoid visits at all cost, only for surgery, only as last resource.

Yes, self-medication is wrong, at least right now, and that is exactly where the disruption lies. Give information to the patients as a first line of defense, then let doctors handle the special cases.


This is a hard one... you want a cautious doctor, but at the same time you need someone who will order the test when necessary and is not overworked. The balance is in self-advocating and not crying wolf. That is the problem AI needs to solve.



