This therapy pre-dates antibiotics, but it's harder to target at a specific pathogen. Russia has been using phages for decades.
Other options include natural antibiotics, like manuka honey, which is already being used (as "MediHoney") by hospitals in the U.S. when other antibiotics fail. My main question is: why is this a last-resort therapy, when it has few to no side effects and it's more difficult for bacteria to develop resistance to it (since it's a multifaceted natural compound rather than a single-strategy synthetic compound)?
Answer: profit. But it's time for us to evolve. And evolution in this case doesn't necessarily mean more high-tech -- we're low-tech organisms, so it's quite possible that the solution is also low-tech.
Actually you're quite mistaken. The reason no one uses phage therapy is that, for it to be effective, it has to be given in a cocktail, i.e. you're given 20 different phages at a time. The FDA/CDC strongly discourage cocktail usage as it promotes the development of resistance more than antibiotic abuse does.
MediHoney is used mainly as a wound dressing to prevent infection, not to treat existing infections (except perhaps infections at the wound site). The answer is not profit, as you seem to believe.
That's not the only reason no one uses phage therapy - there's also a strong cultural resistance in the West. I've worked on phage targeting various Salmonella serovars, and while you do require cocktails for the most effective treatment, phage evolve along with the pathogen. There's no inherent limit on the availability of phage like there is with our current suite of antibiotics.
With extensive sequencing-based environmental monitoring, as we're starting to see in many western countries, we can detect the evolution of new bacterial and phage strains in near real-time and isolate the phage to treat the pathogens. We can keep up with pathogen evolution. There's no equivalent system for generating new antibiotics. I reckon this kind of tech is only a few years away - all the pieces are coming into place.
No, but it's the main reason. If I had a nickel for every time someone told me we can do real-time sequencing I'd be rich. You're greatly overestimating our technical capabilities and underestimating the costs and challenges that would be involved. Theoretically we can identify a new phage a week after resistance is detected; factor in all of the other costs and time involved and it's even worse. The major downside of phage therapy is that there is no readily available phage if resistance develops, as there is with antibiotics (the manufacturing process for a new phage is not that short). If you find a methicillin-resistant infection, you can give a tetracycline or a cephalosporin.
Additionally, don't forget that phages are not all lytic and not all are effective (e.g. look at C. difficile), and that phages won't affect intracellular pathogens (e.g. Salmonella; to my knowledge phages don't work there, but please share, as you say you have experience with this).
So while phages can be useful (I never said they weren't), their use is limited and has its own drawbacks.
(Bacterio)phages target bacteria only by nature; there are several mechanisms that prevent them from targeting human cells. They recognize bacterial membranes and not human ones, their replication machinery doesn't work in human cells, and our cells have defenses against them. In short, given the several existing barriers, it would take a great amount of evolution, on the order of thousands of years at least, to make them target human cells.
Edit: Just in case someone calls me out on this 'thousands of years' is an educated guess on my part from studying evolution not something I've actually read.
How long did bacteria and penicillin coexist? Thousands of years? Then practically overnight the bacteria decided they needed to evolve...
I don't think evolution always works according to the popular "and then the next giraffe's neck was 1mm longer than its parents'" gradual change model.
In different places, so it's not an issue. It's only when penicillin is directly applied to a bacterial colony that there's any chance for bacterial evolution to take place.
> Then practically overnight the bacteria decided they needed to evolve
You mean, when humans applied penicillin to bacteria on a large scale for the first time? That changed the bacteria's environment: most died, except those that were naturally resistant. It's classic natural selection.
> I don't think evolution always works according to the popular "and then the next giraffe's neck was 1mm longer than its parents'" gradual change model.
One could just as well argue that evolution almost never works, on the grounds that the vast majority of mutations aren't adaptive. But that argument misses the point that some tiny fraction of mutations become the entire future species because of increased reproductive fitness (the toy sketch below makes this concrete).
Also, the "gradual change" you describe normally arises because of the odd beneficial mutation, which, apart from being very improbable, might require many thousands of years to manifest itself.
Fair enough, but I suspect "thousands of years" of evolution may come very quickly if phages were to be used half as widely as antibiotics are.
Besides that, not all of the cells bacteriophages could harmfully target are human cells. What about the billions of gut bacteria in our digestive tract?
You don't take phages back out of the person, so the source is not under evolutionary pressure to affect humans -- each batch is "seeing" humans for the first time.
The issue isn't just that antibiotics are used routinely for animals. The issue is that high volume (and low cost) animal rearing techniques require extensive antibiotic use to compensate for the awful conditions and diet of the animals. Legislating against antibiotic use in animals would definitely help though.
I think the right time to be scared for the future of pure science was in the 1970s. Pure R&D institutions started dropping like flies after that, and we almost lost our innovation edge.
I think the Internet brought us back from the brink -- except now it's not large institutions sponsoring R&D, but rather small companies and individuals. Pure science suffers in comparison to small tech (like apps and web stuff) because it's expensive, but I think that could change as we perfect crowd-funding mechanisms and as enough people accumulate enough resources to start privately funding pure science research.
I don't think traditional higher ed institutes are safe by any means -- but a PhD itself is still a worthwhile use of time, whether used for its intended purpose or as a pivot.
The problem with a PhD is that it requires a laser focus on a narrowly defined topic. I'm much more interested in the connections between different areas. Thus, no PhD so far. I also wouldn't want to put myself entirely under one advisor's control to the degree that PhD programs typically require.
I do think it would be an interesting way to re-structure my time, though.
At Purdue CERIAS, the philosophy and linguistics departments are official granters of its Information Security master's degree, along with the technology school.
I went to college more than 15 years ago at a top-25 ranked school. Unlike in this article, one of the defining characteristics of my college experience (and one of the reasons I chose that college) was that no one asked anyone how they did on their SATs or how they were doing in class.
There was no implicit academic ranking or pressure. It was each person's individual choice to work extremely hard, to work the required amount, or to slack off, and all three choices were viewed as valid.
I also didn't feel so much pressure to study something with the sole end goal of making money. I understand why that's changed for millennials, although I think it's unfortunate. Maybe a solution would be for schools to support more double liberal-arts/STEM majors?
On the good side, the career services described in the article are leaps and bounds ahead of the career services I received at college. If those have improved, it's a big step in the right direction.