How is the example of a tail activity like competitive powerlifting, and the anecdote of underperforming at it on a vegan diet, relevant?
I am pretty sure that gentlemen at the lumber yard do not have 300kg deadlift events and are not competing.
It is well known that diet has very little to do with progress for an amateur athlete. Consistency is the biggest factor in long-term success. If you want to eke the most out of your body, then you might want to optimize, but at that point you're already drifting into obsessive behavior.
Yeah, if you want to increase your squat by 20% short-term, you're going to have to eat more food and be obsessive about it, but if you're fine with consistent training over a span of 10-20 years, there's no reason to stress too much about diet.
When I hear people talking about diet and "optimal performance", I always wonder what the hell they are doing that they need to perform optimally. Like, what exactly is the optimal performance of a sedentary programmer who maintains Elasticsearch clusters? What, he feels tired sooner on a vegan diet than on something else?
That is a valid claim. The artificial bioreactors in question are rarely as efficient as the body of an animal. So these "reduction" claims come from estimating how much animal meat production is reduced by meeting the demand with fake meat, without taking the energy demands of fake meat production into account at all.
This review debunks the "bioavailability" and "quality" claims quite easily.
"bioavailability" (pdcaas diaas) is a contrived measure of protein source completeness made up to characterize protein issues of starving humans. It was also calibrated on rats and pigs, not humans. The "complete protein" misinformation is everywhere. Even on Wikipedia https://en.wikipedia.org/wiki/Complete_protein (edit: now I see that someone recently changed the statement to "adequate proportion" while previously it was just referencing that it contains all 9 essential amino acids).
All plants contain all essential amino acids, including green grass. When there is no starvation, completeness does not mean a thing. If you are eating 3000 kcal of wheat every day, protein is no longer a worry, despite wheat having a made-up 25% bioavailability score that was deduced by checking what happens to nitrogen in a live pig.
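To put rough numbers on that (using approximate nutrition values of my own, not anything from the thread: about 340 kcal and 13 g of protein per 100 g of whole wheat):

    # Back-of-the-envelope check with approximate nutrition values
    # (~340 kcal and ~13 g protein per 100 g of whole wheat).
    KCAL_PER_100G = 340
    PROTEIN_G_PER_100G = 13

    grams_of_wheat = 3000 / KCAL_PER_100G * 100             # ~880 g to hit 3000 kcal
    protein_g = grams_of_wheat / 100 * PROTEIN_G_PER_100G   # ~115 g of protein
    print(f"{grams_of_wheat:.0f} g of wheat -> {protein_g:.0f} g of protein")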
The protein "concentration" is just not true. Chicken breast does not have 90g of protein in 100g of chicken breast. 35g of protein in 100g of raw soybean. You will have to find a really rare steak of 100g to top that.
> Many ex-vegans have been consistently anemic, even while taking iron supplements, until they re-introduced meat.
This is not at all the conclusion of peer-reviewed articles; there is no increased occurrence of nutrient deficiencies in vegan populations. Some people get deficiencies on any diet, and that is also the case for diets built around low-calorie, high-volume foods like plants. Nothing special about that: people need to eat more volume, but rarely do so, and similarly they need to eat more calories. Talking about micronutrients in a world with no starvation is unnecessary. If you're eating a variety of plants and enough calories, it is certain you are consuming all of the necessary nutrients and protein. If you want to do powerlifting or marathon running, you will need to adjust your diet, whatever diet it is.
I think you misunderstood the protein concentration concerns. People are concerned about the ratio of calories from protein to calories from other macronutrients. In the case of beans like edamame, the concentration of fiber becomes relevant as well.
In contrast, chicken breast is nearly entirely calories from protein and very low fiber. Replacing some chicken breast with soy makes sense, but your fiber budget would cap out quickly and that would leave no room for other fiber rich foods like fruits and leafy green vegetables.
There was an explicit mention of protein quality and bioavailability. Both terms are standard and widely used. If OP meant something different by these terms, then fine. There's a minor loss of protein that is trapped in fibre for most plant foods that humans eat.
There is only a small amount of fiber in tofu, tempeh, and other soy derivatives, or in seitan.
The "protein concentration" was mentioned together with iron, as if it is a problem of deficiency, not in the context of obsessive compulsive dieting by tweaking your macros to achieve the best results.
I am not sure the fiber budget you mention is a real thing. There is no threshold beyond which your body stops working properly if you consume too much fiber.
Some people have significant health problems if they eat too much fiber. They are literally on a low-fiber diet as a medical matter.
The rest of us have lower GI problems of some sort if we go too much over the daily recommended amount. That can cause significant practical and social problems. But you are correct that we probably won't die. It could be painful in some cases, though, and that would be enough.
It's not true that, by weight, chicken breast has a higher concentration of protein than beans.
100 g of raw beans, or 100 g of raw lentils, has more grams of protein than a chicken breast. 100 g of raw soybeans has more grams of protein than the same amount of beef.
Of course, "raw" is the catch, because when cooked, beans and lentils fill with water; you might then claim the concentration has been lowered, but the amount of protein stays the same (nothing is lost by cooking).
The first transformer models still worked only with their training set.
Eventually they were extended to query an external data source. This is not a new thing; for example, image style transfer and some other image tasks attempted before the domination of NNs did the same (linear models would query a database for help and for guided feature extraction).
The biggest effect in transformers comes from the attention mechanism combined with self-supervised learning. Investigating self-supervised learning tasks (the article illustrates the one-word-gap task, but there are others) can result in superior models that are sometimes even easier to train.
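As a concrete illustration of the one-word-gap objective, here is a masked-language-model query; a sketch that assumes the Hugging Face transformers package and a pretrained BERT checkpoint, neither of which is necessarily what the article uses:

    # Masked language modelling is the "one word gap" task: hide a token and
    # ask the model to fill it in. Assumes `pip install transformers torch`.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")
    for guess in fill("The attention mechanism lets the model [MASK] to every token."):
        print(guess["token_str"], round(guess["score"], 3))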
As for SAT and optimization, graph neural networks might end up being more effective (due to the highly structured inputs). I'm definitely waiting for a traveling salesman solver or similar, guided by a NN, that solves things faster and reaches optimality more frequently than optimized heuristic algos.
> I'm definitely waiting for a traveling salesman solver or similar, guided by a NN, that solves things faster and reaches optimality more frequently than optimized heuristic algos.
There was a competition for exactly this at NeurIPS 2021.
> As for SAT, optimization, graph neural networks might end up being more effective
Learning from data is a different problem from optimization. For example, learning could help in the travelling salesman problem if facts about cities gave additional clues, beyond their location, about the optimal order, or if the cost of paths were only known implicitly through data examples.
Compare to how NNs can be used for data compression, for example upscaling images, by learning from photographs only the tiny subset of all possible images that is meaningful to humans. But that is not useful for general data compression.
Optimization is also data: given a local state, can you identify the sequence of transformations that will get you to a better state? The reward is instantly measurable and the goal is to minimize the total cost.
AlphaGo is local search guided by a learned heuristic, which is trained in a simulator of the game. The heuristic learns an approximation of the value of moves and board states, and is analogous to the "compute_cost_of_tour()" routine in TSP algorithms.
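For what it's worth, the TSP analogue of that heuristic-guided local search is something like 2-opt, where a learned value function would rank or prune the candidate moves instead of the exhaustive scan below. A minimal sketch (the function names are mine, not from any particular solver):

    # Plain 2-opt local search: reverse a segment of the tour whenever doing
    # so shortens it. A learned heuristic would rank or prune the (i, j)
    # candidates instead of scanning all of them.
    def tour_cost(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                    if tour_cost(candidate, dist) < tour_cost(tour, dist):
                        tour, improved = candidate, True
        return tour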
In the basic TSP (for example) there is no other data to learn from than "distances" between vertices, and anything learned from a single instance of the problem amounts to overfitting. This might still be useful - for example learning efficient sub-paths on a fixed map, rather than searching for them every time.
Self-organizing maps can be used as a neural approach to finding TSP solutions; in that case the network itself is the optimized solution. Think of it as roughly gradient-descent optimization for the TSP. Not sure if it is relevant in benchmarks. (I think it might amount to minimizing the sum of squared distances between hops, or a bound on that, not the total tour length; it favours many short hops over a few long ones.)
(If you want time-window constraints in LKH, IIRC, you can try adding the time-diff as penalties to your global cost function.)
LKH does support a lot of the things mentioned, but for practical use it does not work well. It's nice to leave it running and see what can be accomplished, but asking it to give you something back in 1 second, with a lot of constraints, yields solutions that are not feasible.
In the basic TSP there is a lot of data.
For example, the reason the minimum spanning tree works is that the algorithm makes use of the relationships between vertices. Similar techniques use alpha-nearness, Steiner trees, and direct modifications of the distance matrix to create relaxations of the TSP and improve the performance of local search (I believe most are implemented in LKH).
I am obviously not expecting NNs to be capable of doing something like that currently but I'm hoping they might be able to discover interesting instance patterns for something more constrained.
> But these do not stay the same between problem instances; anything you learn from solving one problem is not helpful when solving the next problem.
But nothing in ML stays the same between instances. The reason ML works is that there are redundancies in the training set, and I am pretty sure that, distribution-wise, the set of TSP instances still has a lot of redundancy.
You would want your model to learn to execute something like MST, to approximate alpha-nearness, or to remap the instance into a relaxation that, when solved by a simpler algorithm, results in a solution that, mapped back to the original, is feasible and optimal.
> I'm definitely waiting for a traveling salesman solver or similar, guided by a NN, that solves things faster and reaches optimality more frequently than optimized heuristic algos.
Just in case we are not being clear, let's be clear: bluntly, in nearly every practical sense, the traveling salesman problem (TSP) is NOT very difficult. We have had good approaches for decades.
I got into the TSP writing software to schedule the fleet for FedEx. A famous, highly accomplished mathematician asked me what I was doing at FedEx, and as soon as I mentioned scheduling the fleet he waved his hand and concluded I was only wasting time, that the TSP was too hard. He was wrong, badly wrong.
Once I was talking with some people in a startup to design the backbone of the Internet. They were convinced that the TSP was really difficult. In one word, WRONG. Big mistake. Expensive mistake. Hype over reality.
I mentioned that my most recent encounter with combinatorial optimization was solving a problem with 600,000 0-1 variables and 40,000 constraints. They immediately, about 15 of them, concluded I was lying. I was telling the full, exact truth.
So, what is difficult about the TSP? Okay, we would like an algorithm for some software that would solve TSP problems (1) to exact optimality, (2) on worst-case instances, (3) in time that grows no faster than some polynomial in the size of the input data. So, for (1), being provably within 0.025% of exact optimality is not enough. And, for (2), exact optimality in polynomial time for 99 44/100% of real problems is not enough.
In the problem I attacked, with 600,000 0-1 variables and 40,000 constraints, a real-world case of allocating marketing resources, I came within 0.025% of optimality. I know I was that close thanks to a bound from some nonlinear duality -- easy math.
So, in your
> reaches optimality more frequently than optimized heuristic algos.
heuristics may not be, and in nearly all of reality probably are not, reaching "optimality" in the sense of (2).
The hype around the TSP has been to claim that the TSP is really difficult. Soooo, given some project that is to cost $100 million, where an optimal solution might save $15 million, software based on what has long been known (e.g., from G. Nemhauser) that can save all but $1500 of that is deemed not of interest. Bummer. Nearly all of the $15 million wasted.
For this, see the cartoon early in Garey and Johnson where they confess they can't solve the problem (optimal network design at Bell Labs), but neither can a long line of other people. WRONG. SCAM. The stockholders of AT&T didn't care about the last $1500 and would be thoroughly pleased by the $15 million without the $1500. Still, that book wanted to say the network design problem could not yet be solved -- a statement that was true only in the sense of exact optimality in polynomial time on worst-case problems, a goal of essentially no interest to the stockholders of AT&T.
For neural networks (NNs), I don't expect (A) much progress in any sense over what has been known (e.g., Nemhauser et al.) for decades. And (B) whatever progress NNs might make promises to be in performance aspects other than getting to exact optimality.
Yes, there are some reasons for taking the TSP and the issue of P versus NP seriously, but optimality on real world optimization problems is not one of the main reasons.
Here my goal is to get us back to reality and set aside some of the hype about how difficult the real world TSP is.
When the TSP is mentioned today, unlike 50 years ago when the LK heuristic was published, I assume all of the popular and practical variants: time-window constraints, pickup and delivery, capacity constraints, maximum drop time after pickup, flexible route start, location-independent breaks (a break can happen anywhere in the sequence or within a particular time window of the day), etc. Some of the subproblems are so constrained that you cannot move around nearly as effectively as you can with the raw TSP.
Some of the subproblems need O(n) or O(n log n) work to evaluate the best local moves; generic solvers are even worse at handling that (Concorde's LP optimizations cannot cover it efficiently). When no moves are possible, you have to figure out which moves bring you back to a feasible solution and how many local changes you need to accomplish this.
For example, just adding time windows complicates most well-known TSP heuristics or makes them useless. Now imagine adding a requirement that pairs of locations be at most X time apart (picking up and then delivering perishable goods), that the route can start at an arbitrary moment, etc.
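To make the time-window point concrete: even checking feasibility of a single route is a forward pass over the whole thing, which is part of what makes naive local moves expensive. A minimal sketch (the tuple layout and names are assumptions for illustration, not from any solver mentioned here):

    # Feasibility check for one route with hard time windows.
    # stops: list of (location_id, window_open, window_close, service_time)
    # travel(a, b): travel time between two location ids.
    def route_is_feasible(stops, travel, start_time=0.0):
        t = start_time
        for k, (loc, open_t, close_t, service) in enumerate(stops):
            if k > 0:
                t += travel(stops[k - 1][0], loc)
            t = max(t, open_t)      # arrived early: wait for the window to open
            if t > close_t:         # window already closed: route is infeasible
                return False
            t += service
        return True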
I personally spent quite a lot of time working on these algorithms and I'd say the biggest issue is instance representation (is it enough to have a sequence of location ids?). For example, one of my recent experiments used zero-suppressed binary decision diagrams to traverse some of these constrained neighborhoods easily and maintain the invariants after local changes. Still too slow for some instances I handle (real world is 5000 locations, 100 salesmen, and an insane number of location/salesman constraints).
Amazing. Of course I've heard of Kernighan long ago, but this is the first I've heard of LKH.
I did a lot in optimization, in my Ph.D. studies and in my career, but I dropped it, decades ago -- my decision was made for me by my customers, essentially there weren't any or at least not nearly enough that I could find.
Actually, my summary view is that for applications of math in the US, the main customer is US national security. Now there are big bucks to apply algorithms and software to some big data, and maybe, maybe, there is some interest in math. But the call I got from Google didn't care at all about my math, optimization, statistics, or stochastic processes background. Instead they asked what was my favorite programming language, and my answer, PL/I, was the end of the interview. I'm sure the correct answer was C++. I still think PL/I is a better language than C++.
Early in my career, I was doing really well with applied math and computing, but that was all for US national security and within 50 miles of the Washington Monument.
Now? I'm doing a startup. There is some math in it, but it is just a small part, an advantage, maybe crucial, but still small.
There's quite a resurgence of need for optimization.
There are a lot of companies that want to provide an Uber/Lyft-like service for their own product, so you have a bunch of smaller problems that you want to solve as well as possible in ~1 second.
A lot of small companies with delivery fleets want to optimize too (pest control, Christmas tree delivery, cleaning, technical service, construction (coordinating teams that build multiple things at multiple locations at the same time), etc.).
On the other hand, not related to the TSP, the whole energy market in the US is very amenable to LP/ILP optimization and has a lot of customers (charging home batteries and car batteries, discharging when the price is high, etc.).
I would admit that the scientific field of discrete optimization is littered with genetic algorithms, ant colonies, and other "no free lunch" optimization algorithms that make very little sense from a progress perspective, so it does feel like the golden era ran from the 70s to the early 90s. I do not have a PhD but somehow ended up doing machine learning and discrete optimization for most of my career.
Improvements to ant colonies or genetic algorithms are not pushing the field forward. It becomes a benchmark game, and has been for the last 20 years, which many abuse: you can start from a previous best solution, leave your computer running for days, and just claim that your new algorithmic improvement found the new best solution. It's also quite common to never release your code.
If you look at the roots of discrete optimization, and at all of the approaches used in a solver like Concorde (developed in the open), there is nowhere near that amount of development and paradigm shifts happening in ant colonies, genetic algorithms, genetic programming, tabu search, annealing, and the like.
E.g., finding an efficient representation of time-windows + pickup-and-delivery + breaks + flexible-start-time that lets you efficiently update the solution and tell whether it is still feasible after a change and what the new cost is, is more progress than changing some recombination pattern in your genetic algorithm that results in an improvement on the instance set you are optimizing for (basically overfitting to the data).
Here's an example of a paper that lists various update/feasibility/cost diff algorithms and their time complexity for a bunch of subproblems on a list-of-location-ids representation. Any genetic algorithm that wants to be fast will need to deal with that too.
That's why I think that graph NNs might allow us to find a way to remap our simple representation to something more efficient that is easier to work with and that can apply local changes without much effort.
For example, what if you could transform a TSP instance with time windows, by adding new vertices or changing the distance matrix, to eliminate the time windows completely while still applying very efficient local changes that bring you close to the optimum fast (and do the same for pickup-and-delivery, flexible start time, etc.)? Similarly, an integer linear programming solver is used but the way the constraints of your instance are defined is hard to work with, and there is a local pattern you are not seeing that would allow simplification.
There have been attempts to learn the exploration strategies of ILP solvers with ML, but none made leaps forward (unlike AlphaFold, AlphaZero, AlphaGo, or even AlphaCode - competitive programming code generation). The biggest reason is that the current principled algorithms (30-60 years old) are insanely good on the fundamental problems.
I remember reading about a new set of constraints, nurse rostering (nurse scheduling), and once researchers applied the principled methods, all of the instances of interest got solved to proven optimality. The number of genetic algorithms, ant colonies, and who knows what else applied to these instances in the meantime was ridiculous and unnecessary.
Python-MIP is a great library that provides an interface to many algorithms like this. It's practical to use in scientific programming where appropriate, and if you read through the docs you can find the names of the specific algorithms it uses, with pointers to where to learn more.
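For anyone who hasn't seen it, this is roughly what the Python-MIP interface looks like; a minimal sketch with a toy 0-1 knapsack standing in for a real model (the data is made up, only the `mip` calls are the library's actual API):

    # Toy 0-1 knapsack, just to show the Python-MIP interface.
    from mip import Model, xsum, maximize, BINARY

    values, weights, capacity = [10, 13, 18, 31], [2, 3, 4, 7], 10
    m = Model()
    x = [m.add_var(var_type=BINARY) for _ in values]
    m.objective = maximize(xsum(v * xi for v, xi in zip(values, x)))
    m += xsum(w * xi for w, xi in zip(weights, x)) <= capacity
    m.optimize()
    print("picked items:", [i for i, xi in enumerate(x) if xi.x >= 0.99])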
Can look at the now old work of G. Nemhauser. His work was on combinatorial optimization generally, not just the traveling salesman problem (TSP).
E.g., there is: George L. Nemhauser and Laurence A. Wolsey, Integer and Combinatorial Optimization, ISBN 0-471-35943-2, John Wiley & Sons, Inc., New York, 1999.
Some approaches involve set covering and set partitioning. Soooo, for the FedEx fleet, first just generate all single-airplane feasible tours from the Memphis hub and back. Here can honor some really goofy constraints and complicated costing; can even handle some stochastic issues, i.e., the costs depend on the flight planning, and that depends on the loads, which are random, but it would be okay to work with just expectations -- we're talking complicated costing! Then, with all those tours generated, pick ones that cover all the cities to be served, i.e., partition the cities. Have a good shot at using linear programming, tweaked a little to handle the 0-1 constraints, to pick the tours.
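In code, that last "pick tours that partition the cities" step is just a 0-1 set-partitioning model. A minimal sketch with hypothetical data structures (written against the Python-MIP interface mentioned upthread, not anything from the actual FedEx work):

    # Set partitioning over pre-generated feasible tours.
    # cost[t] is the cost of tour t; covers[t] is the set of cities it serves.
    from mip import Model, xsum, minimize, BINARY

    def pick_tours(cost, covers, cities):
        m = Model()
        use = [m.add_var(var_type=BINARY) for _ in cost]
        m.objective = minimize(xsum(c * u for c, u in zip(cost, use)))
        for city in cities:
            # every city must be covered by exactly one chosen tour
            m += xsum(u for u, cov in zip(use, covers) if city in cov) == 1
        m.optimize()
        return [t for t, u in enumerate(use) if u.x >= 0.99]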
Then more generally for a lot of practical problems can write linear programming problems with some of the variables integer. Then can tweak the simplex algorithm of linear programming to handle some of such constraints fairly naturally in the algorithm. E.g., of course, can proceed with now classic branch and bound.
The TSP taken narrowly can be regarded as more specialized.
So, net, there is a big bag of essentially tricks, some with some math and some just heuristics.
Part of the interest in the issue of P versus NP was to do away with the bag of tricks and have just some one grand, fantastic algorithm and computer program with guaranteed performance. Nice if doable. Alas, after all these years, so far not really "doable", not as just one grand, fantastic .... And the question of P versus NP has resisted so much for so long that it has even a philosophical flavor. And there are serious claims that a technically good algorithm would have some really astounding consequences.
Sure, I have some half baked ideas sitting around that I hope will show that P = NP -- doesn't everyone? But my point here was simple: for several decades we have been able to do quite well on real problems. Oh, for the problem with 600,000 0-1 variables and 40,000 constraints, otherwise linear, I used nonlinear duality theory (which is simple) or, if you wish, Lagrangian relaxation -- it's one of the tricks.
Another old trick: For the actual TSP in any Euclidean space (sure, the plane but also 3 dimensions or 50 if you want), that is, with Euclidean distance, just find a minimum spanning tree (there are at least two good, that is, polynomial algorithms, for that) and then in a simple and fairly obvious way make a TSP tour out of that tree. That approach actually has some probabilistic bounds on how close it is to optimality, and it does better with more cities -- it's another tool in the kit.
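A minimal sketch of that trick in Python (Prim's MST plus a preorder walk of the tree; with distances satisfying the triangle inequality the resulting tour is also within a factor of 2 of optimal):

    # MST-based TSP approximation: build a minimum spanning tree, then visit
    # the vertices in a depth-first (preorder) walk of that tree.
    import math

    def mst_tour(points):                # points: list of coordinate tuples
        n = len(points)
        dist = lambda a, b: math.dist(points[a], points[b])
        in_tree, parent = {0}, {0: None}
        best = {v: (dist(0, v), 0) for v in range(1, n)}
        while len(in_tree) < n:          # Prim's algorithm
            v = min(best, key=lambda u: best[u][0])
            parent[v] = best.pop(v)[1]
            in_tree.add(v)
            for u in best:
                if dist(v, u) < best[u][0]:
                    best[u] = (dist(v, u), v)
        children = {v: [] for v in range(n)}
        for v, p in parent.items():
            if p is not None:
                children[p].append(v)
        tour, stack = [], [0]            # preorder walk of the tree
        while stack:
            v = stack.pop()
            tour.append(v)
            stack.extend(reversed(children[v]))
        return tour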
My main conclusion about the TSP, combinatorial optimization, and optimization more generally is that there are way, Way, WAY too few good customers. Whether there is 15% of project cost to be saved or not, the people responsible for the projects just do NOT want to be bothered. In simple terms, in practice, it is essentially a dead field. My view is that suggesting that a young person devote some significant part of their career to optimization is, bluntly, in a word, irresponsible.
These are really interesting maps, but I'm missing a few important data points.
For the cancer one, I feel like it's misleading. E.g. Russia and Central Africa might be low in cancer because of their low life expectancy, and Central Africa additionally because of its majority black population (which I assume to be less prone to skin cancer). As a white person, I believe it would be a mistake to move to Central Africa expecting to reduce one's skin cancer risk, as evidenced by the high skin cancer rates of white people in South Africa or Australia. So for that map it would be great to be able to filter it by age group and ethnicity.
The disability-adjusted life years (DALY) one seems much more useful. What stands out there is how much healthier people are on average in Western Europe compared to the U.S. Perhaps that is in part because of the more egalitarian societies and health care systems in European countries, which greatly improve the health of the less well-off. I wonder if that comes at the cost of the better-off, i.e. if say the top 10% in the U.S. are actually healthier in the U.S. than in Europe. It would be interesting to be able to filter by income level.
> the more egalitarian societies and health care systems
As someone from Romania, I believe that is true. But since joining the EU, we've also seen changes in food laws, which have a large impact on health.
For example, recently it became mandatory to label citrus fruit when it was treated with preservatives, to know whether the peel is edible. People used to consume the peel regardless; now more people are aware.
There is also an important difference between "flavorings" and "natural flavorings". For instance, artificial butter flavoring causes "popcorn lung".
Your last paragraph goes off the rails a bit: the diacetyl in ordinary butter is no better for your lungs than its synthetic equivalent. If there's a difference, it's the amount of exposure, not whether the identical molecule comes from in vitro synthesis rather than in vivo.
I figure some of those light-coloured countries have more people simply not getting a diagnosis. Additionally, going to Central Africa as a white person might increase your chance of skin cancer enough to offset any reduction in other cancers.
It's probably better to just go where people live the longest and adopt the local diet, rhythm, etc.
Using half-open intervals for std::lower_bound and std::upper_bound has an unexpected benefit. It allows the functions to return an insertion point that keeps the sequence sorted, even if that insertion point is at the end of the sequence.
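The same half-open convention shows up in Python's bisect module, which is an easy way to see the benefit (this is the Python analogue, not the C++ functions themselves):

    # bisect_left / bisect_right mirror std::lower_bound / std::upper_bound:
    # both may return len(xs), i.e. "insert at the end", and inserting at the
    # returned index always keeps the list sorted.
    import bisect

    xs = [10, 20, 20, 30]
    print(bisect.bisect_left(xs, 20))   # 1: first position not less than 20
    print(bisect.bisect_right(xs, 20))  # 3: first position greater than 20
    i = bisect.bisect_left(xs, 99)      # 4 == len(xs): insertion point at the end
    xs.insert(i, 99)                    # xs stays sorted: [10, 20, 20, 30, 99]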
The maxim is to "speak as you write and write as you speak". However, there are variations in how the rule is adopted: Croatians tend to write foreign words as they are written in the source language, while in Serbia you always follow the rule. So, for example, in Serbia it is Majkl Džordan while in Croatia it's just Michael Jordan.
Why would evaluation be faster with narrower layers? If there were fewer tokens it would definitely be faster, because transformers scale with tokens^2, but here "narrow" refers to the number of channels, for presumably the same number of tokens.
If I remember correctly, the fully connected layers after the attention block are [?, a*h] * [a*h, b*h] (for some scalars a, b and hidden size h), which means that transformers also scale with h^2. I don't know what fraction of the total FLOPs that part of the model accounts for at practical model sizes, but it would suggest that making the model narrower for the same number of params would reduce compute.
Oh yeah, I think you're right. If it's all fully connected (and parts of a transformer are), then more, thinner layers use fewer FLOPs than fewer, thicker layers for the same number of parameters. As long as the layers are wide enough to keep the GPU busy, it'll run faster.
It is notable that they got only 2.5 extra BLEU when translating text into English, and 6.2 extra when translating from English into another language.
Since the network will have seen far more English text than text in other languages, this suggests that performance improves more where training data is limited.
Actually no. Each layer requires the output from the previous one, which means sequential computation, while wider layers can utilize GPU parallelism better. This is a kind of trade-off between less memory (fewer parameters) and longer time.