
I don't disagree with you on any particular point. However, who do you suppose we should trust to decide what is a software-implemented weapon and what is not? Of course there are very clear black-and-white examples, but the in-between is where we should be concerned. Consider the USA's stance on the export of encryption as a historical example of where this can go terribly wrong.


Emission of CO2 on its own does not necessarily imply that there is a carbon footprint.

Aside from the carbon footprint of transporting food, the food we consume is carbon neutral, since the carbon in it comes from the atmosphere. Therefore the CO2 we exhale is mostly carbon neutral.

Cars are a different story, since their carbon originates from ancient reserves (oil and coal) that have long since been removed from the carbon cycle. That carbon is now being reintroduced into the carbon cycle, leading to a positive carbon footprint.

Put differently, if the food we eat consisted of carbon extracted from oil or coal, then we would be adding CO2 to the current carbon cycle, leading to a positive carbon footprint. However, this is not the case.


Oh but there's so much more energy in your food. There's the energy that went into producing fertilizer, energy to make pesticides, energy burned to run tractors, energy to process the harvest into what you get in the grocery store, potentially more energy involved in packaging. For plastic packaging, there's the oil used to make the carbon-rich polymers. Etc.

Just like with cars, all of that energy comes from burning fossil fuels. The only way you'll find carbon-neutral food is if you walk out into the woods, pick it off a wild plant, and eat it right there.

Generally, the total energy that goes into producing and transporting something is called "embodied energy." It doesn't map 1:1 onto carbon footprint, since the energy can hypothetically come from renewables rather than fossil fuels, but it's a closely related concept.

https://en.wikipedia.org/wiki/Embodied_energy


Most of what you just said would actually reflect pretty badly on vegan diets and rather favourably on paleo diets.

Vegan diets are the product of intensive agriculture basically anywhere in the world. Paleo, depending on the country, can be pretty low intensity; for instance, in Argentina cattle roam free in the grasslands.

While I readily concede that in places like most of the USA, where cattle are the product of intensive livestock farming, a paleo diet will produce a lot of CO2 compared to a vegan diet, that is actually not true in places like Argentina or, to a lesser degree, southern Europe.


Indeed, I did not take all of the above-mentioned things into account.


I don't see them claiming to have found the missing link. In any case, biologists are unlikely to use the term 'missing link', since it implies that evolution is discontinuous, which is a common misconception among laymen.


I have a small cluster of machines that I run experiments on. GNU parallel makes the dispatch of jobs on remote machines very easy.

In addition, I often use it to search for sequences by running grep in parallel. For example

$ parallel 'grep {1} haystack.txt' :::: many_needles.txt

Here {1} is replaced by each line of many_needles.txt in turn.


If you find yourself searching lots of haystacks, and your needles are just text and not a regex, a better approach is to stuff all the needles into some kind of index, then chop up the haystack into overlapping tiles (of variable width from the smallest needle to the largest), then search each tile against the index of needles. This effectively searches all the needles at once and turns the operation from O(n) where n is the number of needles to O(m) where m is the number of tiles in haystack.txt.

It may seem to be a trivial difference, but then you can search multiple haystacks at once fairly easily, and this approach scales to hundreds of millions of needles. The code for it isn't very difficult either; heck, you can just use an in-memory SQLite DB to get a searchable, temporary index and rely on some of the most tested software in history.
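For what it's worth, a minimal Python sketch of that idea might look like the following, using an in-memory SQLite table as the needle index and cutting overlapping tiles out of each haystack line. The file names are placeholders borrowed from the thread, and a real implementation would batch the lookups rather than issuing one query per tile.

    import sqlite3

    NEEDLES_FILE = "many_needles.txt"   # placeholder names from the thread
    HAYSTACK_FILE = "haystack.txt"

    # Build an in-memory index of all the needles.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE needles (needle TEXT PRIMARY KEY)")
    with open(NEEDLES_FILE) as f:
        needles = [line.strip() for line in f if line.strip()]
    con.executemany("INSERT OR IGNORE INTO needles VALUES (?)",
                    ((n,) for n in needles))

    min_len = min(len(n) for n in needles)
    max_len = max(len(n) for n in needles)

    # Cut overlapping tiles (widths from shortest to longest needle) out of
    # each haystack line and look every tile up in the index, so all the
    # needles are effectively searched at once.
    found = set()
    with open(HAYSTACK_FILE) as f:
        for line in f:
            line = line.rstrip("\n")
            for start in range(len(line)):
                for width in range(min_len, max_len + 1):
                    tile = line[start:start + width]
                    if len(tile) < width:
                        break
                    hit = con.execute("SELECT 1 FROM needles WHERE needle = ?",
                                      (tile,)).fetchone()
                    if hit:
                        found.add(tile)

    print("\n".join(sorted(found)))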


This works for sorting as well, where it is typically called Radix Sort or Bucket Sort.

Basically, you use distinguishing attributes of the data to divide and conquer on those attributes (e.g. for radix sort you make buckets based on the digits).
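As a toy illustration of the bucketing idea (not anything from the thread itself), here is an LSD radix sort on decimal digits in Python:

    def radix_sort(nums):
        """LSD radix sort for non-negative integers: bucket the numbers by
        each decimal digit in turn, least significant digit first."""
        if not nums:
            return []
        for d in range(len(str(max(nums)))):
            buckets = [[] for _ in range(10)]
            for n in nums:
                buckets[(n // 10 ** d) % 10].append(n)
            # Concatenating the buckets in order keeps each pass stable,
            # which is what makes the successive digit passes compose.
            nums = [n for bucket in buckets for n in bucket]
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]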


Unless those patterns are regexes, you should just be using

    $ fgrep -f many_needles.txt haystack.txt


How much faster is a plain text search really than a regexp without special characters? You'd think this would be quite easy to optimise for a regexp engine.

I admit I try to use -f all the time but your post suddenly made me realise I'd never actually measured the effect. :/

Edit: yes sorry I meant -F


There are a few things to clear up here. Namely, fgrep (equivalent to grep -F) is orthogonal to -f. fgrep/`grep -F` is for "fixed string" search, whereas -f is for reading multiple patterns from a file. So `fgrep -f file` means "do a simple literal (non-regexp) search for each string in `file`."

There are two possible reasons why one might want to explicitly use fgrep/`grep -F`: 1) to avoid needing to escape a string that otherwise contains special regex characters and 2) because it permits the search tool to avoid the regexp engine entirely.

(1) is always a valid reason and is actually quite useful because escaping regexes can be a bother. But whether (2) is valid or not depends on whether your search tool is smart enough to recognize a simple literal search and automatically avoid the regex engine. Another layer to this is, of course, whether the regex engine itself is smart enough to handle this case for you automatically. Whether these optimizations are actually applied or not is difficult for a casual user to know. I don't actually know of any tool that doesn't optimize the simplest case (when no special regex features are required and it's just a simple literal search), so it seems to me that one should never use fgrep/`grep -F` for performance reasons alone.

However, if you use the `-f` flag, then you've asked the tool to do multiple string search. Perhaps in this case, the search tool doesn't try as hard to do simple literal optimizations. Indeed, I can actually observe evidence in favor of this guess. The first command takes 15s and the second command takes 10s:

    LC_ALL=C grep -c -f queries /tmp/OpenSubtitles2016.raw.en
    LC_ALL=C grep -c -F -f queries /tmp/OpenSubtitles2016.raw.en
The contents of `queries`:

    $ cat queries 
    Sherlock Holmes
    John Watson
    Professor Moriarty
    Irene Adler
grep in this case is GNU grep 2.26. The size of /tmp/OpenSubtitles2016.raw.en is 9.3GB. The only difference between the commands is the presence of the -F switch in the second command. My /tmp is a ramdisk, so the file was already in memory and therefore isn't benchmarking the speed of my disk. The corpus can be downloaded here (warning, multiple GB): http://opus.lingfil.uu.se/OpenSubtitles2016/mono/OpenSubtitl...

Interestingly, performing a similar test using BSD grep shows no differences in the execution time, which suggests BSD grep isn't doing anything smart even when it knows it has only literals (and I say this because BSD grep is outrageously slow).

As a small plug, ripgrep is four times faster than GNU grep on this test and performs identically whether you pass -F or not.

(This is only scratching the surface of literal optimizations that a search tool can do. For example, a good search tool will search for `foo` when matching the regex `\w+foo\d+` before ever entering the regex engine itself.)
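To make that last point concrete, here is a toy Python sketch of literal prefiltering. The literal is picked by hand for this example; a real tool derives it from the compiled pattern automatically, and the prefilter would be a vectorized substring search rather than Python's `in`.

    import re

    # For the pattern \w+foo\d+, any match must contain the literal "foo",
    # so a cheap substring test can reject most lines before the regex
    # engine is ever invoked.
    pattern = re.compile(r"\w+foo\d+")
    required_literal = "foo"          # chosen by hand for this sketch

    def search(lines):
        for line in lines:
            if required_literal not in line:   # fast rejection path
                continue
            if pattern.search(line):           # full regex only on candidates
                yield line

    lines = ["barfoo123 matches", "nothing to see here",
             "xfoo9 also matches", "food but no digits"]
    print(list(search(lines)))
    # ['barfoo123 matches', 'xfoo9 also matches']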


This is a great comment and ripgrep deserves a more prominent plug, clickable: https://github.com/BurntSushi/ripgrep


BSD grep decides on a pattern by pattern basis which match engine to use. -F is unlikely to affect performance.


Oh dear, you appear to be correct. Adding additional queries to `queries` (while being careful not to increase total match count by much) appears to increase search time linearly. From that, it looks like BSD grep is just running an additional pass over each line for each query.


(Sorry, this is mildly off-topic.) Not sure if this fits your usecase, but you should check out codesearch if you haven't already: https://github.com/google/codesearch

(Russ Cox's excellent writeup is here: https://swtch.com/~rsc/regexp/regexp4.html)


Could someone explain to me how this is not a correlation implies causation fallacy?

I don't see where a causal relation is demonstrated.


Although not with this particular face recognition system, I have managed to fool face-recognition logins in the past with a simple photograph of the person. As you would expect, it works perfectly fine.


This is part of the reason Windows Hello won't use standard webcams for facial recognition. It requires a depth camera (a very unusual feature on laptops, usually marketed as Intel RealSense) so that a simple photograph isn't enough to fool the camera.

This is still far from perfect though; a truly determined bad actor could create a passable 3D model, or even a face mask, and probably still fool the sensor. It just takes more work. The whole point of passwords is that no one can know them but the person intending to use them. Using a publicly visible part of my body is just asking for trouble.


I am not sure that this constitutes a proof. It seems induction would be required to show that the property holds.


Most world-class athletes are already genetically superior by the measure of athleticism. Even though these favourable mutations or inherited traits have not been engineered, they still exist and do enhance the athlete's performance.


London does it. http://abstracts.aetransport.org/paper/index/id/2041/confid/... https://www.youtube.com/watch?v=w4oeMB0tYII

The video is a bit cheesy, but it gets the point across.


This seems relevant to your questions. Among other things, it discusses which crops are best suited to saline soil, how to improve conditions for crops in saline soil using potassium fertilisers, and, also relevant to your question, desalination strategies. http://www.fao.org/docrep/x5871e/x5871e04.htm

