The criticism that too much goes to fundraisers is meaningless without an explanation of how much would be optimal. If spending an additional $1m on fundraising meant an additional $1.1m in donations, then that's $100k more for Wikipedia that wouldn't have been there before.
If huge profit-making companies like Disney, Coca-Cola and McDonald's spend so much on marketing and sales, then it must be profitable. Similarly, there's no reason that fundraising spend wouldn't be financially advantageous to a non-profit.
If you support Wikipedia enough to donate, then it makes sense to want them to raise as much as they can. In which case you should enthusiastically support them running like a business-savvy organization.
Spending on advertising is wasteful and socially destructive. It's deceptive not to tell donors that only 10% of their donation goes toward the mission, while the other 90% is wasted money that could be directed elsewhere.
No, it's not wasted; that's my point. Why do you think Disney's investors don't demand a $7bn annual saving by cutting the "waste" on advertising?
The spend on fundraising does go towards the mission, because it increases the amount of money for the mission. You're making the incorrect assumption that donations are constant.
Imagine that a charity hiring a fundraiser for $50k garnered $150k of additional donations. Let's say without that fundraiser, they got $100k in donations. So with the fundraiser, they got $100k + $150k - $50k = $200k net to spend on the charitable purpose.
20% of all donations go to the fundraiser's salary while she's employed. But if the fundraiser is sacked, then 100% of donations go to the mission, yet the mission's funding drops by 50%.
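That worked example can be sketched as a quick sanity check (all numbers are the hypothetical ones above):

```python
# Hypothetical numbers from the example above.
base_donations = 100_000    # donations the charity gets anyway
fundraiser_cost = 50_000    # the fundraiser's salary
extra_donations = 150_000   # additional donations she brings in

net_with = base_donations + extra_donations - fundraiser_cost
net_without = base_donations

# The "overhead" framing: 20% of all donations pay her salary...
overhead = fundraiser_cost / (base_donations + extra_donations)

# ...but sacking her halves what the mission actually receives.
mission_loss = 1 - net_without / net_with

print(net_with, overhead, mission_loss)  # 200000 0.2 0.5
```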
I had a major epiphany deriving this myself as a teenager, because it was the first time I used algebra to calculate something "real". The equation was trivial, but the principle of mathematical modelling struck me like lightning. In an instant the world felt more comprehensible and tractable; I suddenly felt like Neo seeing the Matrix.
It’s the nexus between activist funding, friendly media channels, and the social promotion by like-minded folks interested in pressing a specific agenda that often causes some bad science and studies to leak out.
Where there is a large industry, there is a large anti-industry. To me, the point is that there is a large potential for bias in either direction so incentives to reproduce, validate, and otherwise test results need to be found and somehow added to the process. I would say that despite the size of this industry, this particular issue is the tip of the iceberg when you consider all the industries that have a significant impact on humans.
Good for you, but NAC has a host of effects so you couldn't say it was due to glutamate for sure.
For example, NAC mainly increases glutathione, the primary antioxidant your body uses, so it may reduce the level of oxidative stress and lower your body's stress response. It also increases the level of SAM-e in your body (as you can transform sulfur containing amino acids methionine and cysteine into each other). SAM-e is a crucial cofactor for many processes including creation of neurotransmitters such as dopamine and serotonin. And an increase in sulfur amino acids (including NAC) also modulates your gut microbiome.
I'm not sure that those really are the extremes. I think quite a lot of people believe that justice may or may not ever happen depending on how we choose to act as a civilization. Even further than that, many people believe injustice is an inevitable feature of existence.
No, 63.3% of all participants got a favourable outcome, so 13% cheated (suggesting 26% would cheat if necessary, as half got a favourable outcome without cheating).
The participants all had varying degrees of belief in a just world which were measured with a six-item test. The results of that test correlated significantly with the chance of a favourable self-reported coin toss.
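The back-of-envelope arithmetic in the parent comment can be sketched like this (an illustrative calculation, not the paper's actual analysis):

```python
# If 63.3% of participants reported a favourable toss of a fair coin,
# the excess over 50% is attributed to cheating.
favourable_reported = 0.633   # fraction reporting a favourable outcome
fair_rate = 0.5               # expected favourable rate for an honest coin

cheated = favourable_reported - fair_rate    # ~13% of all participants

# Only the ~50% who actually lost the toss had any reason to cheat,
# so the implied "would cheat if necessary" rate is double that.
would_cheat = cheated / (1 - fair_rate)      # ~26.6%

print(round(cheated, 3), round(would_cheat, 3))
```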
It is meaningless to draw inferences from comparisons to another study, as they were selected differently so there's no control, and it isn't necessary as this study contains a random and varied sample of participants.
Yes, there is a question over the representativeness of Mechanical Turk workers. But even though they're surely different on average from the average human, this study is controlled by comparing them to each other rather than to some pre-determined statistics, so the sampling bias should be largely cancelled out, barring second-order interactions.
> Yes, there is a question over the representativeness of Mechanical Turk workers.
There are more fundamental issues. Having run several studies either with MTurk or panel providers who use MTurk to source some/most of their respondents, I have trouble trusting any study with an MTurk sample that doesn't explicitly show 1. how they verified respondent location and demographics and 2. how they controlled for bots and mindless click-throughs.
Even though they used a convenience sample, issues like reading comprehension (which you can get from non-native English speakers VPN'ing through a US-based IP) and participants trying to get through the study as quickly as possible - or automating their responses altogether - absolutely matter.
The Bristol Four acquittal was not an act of jury nullification.
Under the Criminal Damage Act 1971, no offense is committed if there is a "lawful excuse" for the damage.
The defence argued that there was a lawful excuse on several grounds:
1. The defendants believed they were preventing a more serious crime (public indecency because of the statue's offensiveness).
2. The defendants believed the statue was owned by the citizens of Bristol (as stated on its plaque), who they believed consented to its removal.
3. Their right to freedom of expression and assembly under the European Convention on Human Rights.
The judge instructed the jury that they were allowed to consider questions such as the statue's offensiveness in deciding whether these excuses applied.
It's true that there are many mathematically equivalent ways to describe physical systems. But the important point is that some are more useful than others. For example, Lagrangian mechanics and Hamiltonian mechanics are equivalent to Newtonian mechanics, but they can give much better intuition for certain problems. Feynman diagrams are equivalent to grinding out the QFT algebra by hand à la Schwinger, but they give a completely different intuition for the underlying Physics.
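As a concrete illustration of that equivalence (a standard textbook example, not from the article): for a particle of mass m in a one-dimensional potential V(x), the Lagrangian and the Euler-Lagrange equation recover Newton's second law:

```latex
L(x,\dot{x}) = \tfrac{1}{2} m \dot{x}^2 - V(x),
\qquad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\;\;\Longrightarrow\;\;
m\ddot{x} = -\frac{dV}{dx}
```

Same physics, but the Lagrangian form generalizes far more gracefully to constrained systems and arbitrary coordinates.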
More importantly, though, they could use this NN on systems that have not yet successfully been modeled, perhaps complex dynamical systems, to discover good parameters and conserved quantities.
> For example, Lagrangian mechanics and Hamiltonian mechanics are equivalent to Newtonian mechanics, but they can give much better intuition for certain problems. Feynman diagrams are equivalent to grinding out the QFT algebra by hand à la Schwinger, but they give a completely different intuition for the underlying Physics.
I just read about Lagrangian and Hamiltonian mechanics. I didn't encounter those at all in my EE physics, and they are fascinating. Great examples! Are you a physics professor, or is this stuff undergrad physics majors learn?
It used to be covered in the third year of the major, under Classical Mechanics.
There's a good series of videos on YouTube titled Variational Calculus and the Euler-Lagrange Equation, on the channel Structural Dynamics. I've only seen the first few. The first video should give you the full playlist:
> More importantly, though, they could use this NN on systems that have not yet successfully been modeled, perhaps complex dynamical systems, to discover good parameters and conserved quantities.
That would only make sense to try if the model could do this for systems we already understand. By the sound of the article, it can't even do that. Despite many efforts the researchers couldn't even understand the second pair of parameters. That doesn't correspond to my understanding of "good parameters".
I'm not sure why you don't think it's Physics. It's about formulating laws that describe the behaviour of physical systems - that's the essence of what Physics is. I have a PhD in high energy theory and this really seems like Physics to me.
Sure, if the pseudoscientific description of factor analysis applied to images is correct, then it's physics.
As-is, it's pseudoscience. What happens when you do a factor analysis on images? You get some measure of the axes of geometrical variance across those images.
Are those axes "related" to any physical variables, sure -- but almost never directly. To suppose the system itself had these properties is to suppose, for example, constellations actually exist and cause your personality traits.
What we want to know is what physical properties of the system give rise to the observed consistent correlations in geometrical properties. *THAT* is physics.
Showing these geometrical properties exist and are consistent is just what we're trying to explain.
You cannot go from images to the domain of physics -- there are an infinite number of theories consistent with these images. And this is pseudoscience.
It's really easy to test whether or not it works - see how well the model predicts on out-of-training sample data. That wouldn't work with astrology.
There's no such thing as "physical properties of the system" other than measurable quantities that can be used to make predictions, which is what this does. There's no reason to be sure that temperature, for example, is a "real" physical property of a system rather than just one of many variables that would help us model it and understand it.
Do you think it's pseudoscientific because there's no theory-ladenness in the predictions?
It's pseudoscience because there's nothing in the geometrical properties of those images called "gravity", etc. One can generate those pixel patterns from an infinite number of theories with an infinite variety of causally efficacious parameters.
From the article, it doesn't work. They found that on known physics it gives 4.7 dimensions (4 is correct), of which only two are explicable; the others have no known physical interpretation. No surprises: those two are just the geometric properties of the system (angles), which are actually properties of the image. The others are pure bullshit.
Since, of course, the real physical parameters of the system we take to have generated those images are not present in them. The images are distal effects of those things.
Only in cases where the geometric properties of the target system are causally relevant to its actual causal properties will this work -- i.e., only when "angles matter".
Thinking you can infer laws of nature from images is pseudoscience, and these guys need to think more carefully about why we experiment in the first place.
E.g., consider that if mass is a relevant causal property, there'd be no way of inferring it from images: two objects can be visually identical whilst having radically different masses... making images *OBVIOUSLY* not a measure of mass...
This project almost defines the modern kind of schizophrenic pseudoscience born of this wave of AI.
It also incidentally underscores the amazing predictive powers of Noam Chomsky: he thinks he's describing something that common sense indicates is dumb, and then a few years later someone actually goes out and does it, in earnest and unironically, and tries to promote it as an actual advance:
So for example, take an extreme case, suppose that somebody says he wants to eliminate the physics department and do it the right way. The “right” way is to take endless numbers of videotapes of what’s happening outside the video, and feed them into the biggest and fastest computer, gigabytes of data, and do complex statistical analysis — you know, Bayesian this and that [Editor’s note: A modern approach to analysis of data which makes heavy use of probability theory.] — and you’ll get some kind of prediction about what’s gonna happen outside the window next. In fact, you get a much better prediction than the physics department will ever give. Well, if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it’s way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won’t get the kind of understanding that the sciences have always been aimed at — what you’ll get at is an approximation to what’s happening.
But he's saying it's bad to do it that way instead of doing traditional physics because you get no understanding, which is true. In this study, though, they're not using it as a "physics engine" to pilot aircraft or whatever; they're using it as a trick to generate novel hypotheses, which could then be theorised and investigated properly -- not as a replacement for theory.
Unless you have videos of experiments designed to observe measurement devices we have created, on systems we have designed, it's all useless.
The only useful thing in figuring out how nature works is creating truly novel experimental circumstances and measuring them with novel devices created for that purpose.
You cannot do science as a statistics of images; that's pseudoscience. And Chomsky is here only half-right; it's actually much worse than he's saying.
I think I understand your criticism: there is no inherent ground truth in the image. The mass example is great, since a 2D plane can't capture the quantity mass -- it's literally impossible; the dimensions don't work. At best a 2D plane could show you correlations of mass (mass vs. something, plotted out). Hence this is just modern AI, a.k.a. pattern-matching on steroids.
I think a counter-argument would be that if there is SOME signal in the photos AND there's enough training data carrying the correct ground-truth signal for the scientists to match up, then you can have SOME level of accuracy. If the training set can reasonably cover the space of possibilities we're interested in, then we can get reasonable interpretations.
However, in this case the insane number of physical phenomena will always be larger than any training set, so this approach should NEVER generalize: there will always be way too much noise, which is what the scientists have figured out here. So I agree with you that it's extremely limited, but I don't think I'd call it pseudoscience: there might be very limited domains where, for example, the only data we have available are images, and such a tool may be appropriate there.
I definitely share your frustration, though, since any halfway decent scientist should have just done a thought experiment and figured out that this wouldn't work well. This smells like BS academic marketing, where they always inflate their own impact and significance.
Who cares if there's no gravity? Gravity wasn't sent down to us from heaven on a stone tablet, it's just a concept that lets us make predictions. At school I was taught it was a force and at university I was taught it was a pseudoforce resulting from fixing a non-inertial frame. Both approaches give correct answers, even though they're conceptually very different. There's no objective way to say which is right; they're just different approaches to modelling.
And maybe the 4.7 is actually more correct? The 4-parameter model is an approximation that neglects friction and air resistance. Moreover, the double pendulum is a chaotic system, and chaotic systems sometimes have dynamics described by non-integer dimensions, such as the Lyapunov dimension. I'm just spitballing, but the point is that it's not a priori ridiculous.
It's definitely possible to estimate mass from images. How do you think we know the masses of asteroids and planets? No-one put them on a scale, we just record their motion and work out which value fits best.
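As a sketch of that inference: the standard Kepler's-third-law estimate recovers a central mass purely from recorded motion (round textbook values for Earth's orbit used here):

```python
import math

# Kepler's third law rearranged for the central mass:
#   M = 4 * pi^2 * a^3 / (G * T^2)
# No scales involved -- just the orbit's size and period.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11           # Earth's mean orbital radius, m
T = 365.25 * 86400     # orbital period, s

M_sun = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{M_sun:.3e}")  # ~2e30 kg, the mass of the Sun
```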
> Who cares if there's no gravity? Gravity wasn't sent down to us from heaven on a stone tablet, it's just a concept that lets us make predictions.
A concept that says gravity is the result of bending spacetime, with the speed of light being constant. It's not just a model; it's saying the universe is 4D spacetime, which explains why GR is so predictive.
It is just a model though! Everything in science is just a model. We better hope it's just a model, because it's incompatible with quantum field theory, which is another very accurate model. The only consistent model that bridges the two, superstring theory, says that spacetime and gravity could fundamentally be many things, from closed strings travelling between D3-branes to the holographic projection of a conformal theory - and you still get the same predictions.
I show someone a photo of a bowling ball and a styrofoam ball of the same shape and size. If someone thinks they can infer their masses from a simple visual scan of the scene (analyzing the factors you see), are they delusional?
Perhaps they could leverage their lifelong training set which correlates scenes that look like they have bowling balls with scenarios that have a high mass movable sphere.
Perhaps we could have a good laugh together by painting a bowling ball to look like styrofoam and painting styrofoam to look like a bowling ball- then we could watch the silly ai/human apply an incorrect mental model and fail to grasp the causal reality! Ohohoho
None of this works without astrology, since it was the guiding theory behind Brahe and Kepler's measurements. The out-of-sample training data that Newton used for confirmation was comet orbits. Would ML really have created an elegant, closed-form theory about the elliptical shape of orbits and the power-law dependence of the period? Without these insights, there would be no inverse-square law in the first place, and perhaps we would only have an effective theory.
The purpose of Brahe's measurements, and the reason he hired Kepler, was to gather data for astrological predictions. The principles of astrology led them to look for simple, basic principles in a way that a computer would not, unless directly programmed to do so. The astronomical measurements alone were not enough.
>> It's about formulating laws that describe the behaviour of physical systems - that's the essence of what Physics is.
I didn't see any attempt to formulate laws. The researchers trained a neural net model to predict the next event in a sequence. That is not a natural law, it's a maximum probability estimator.
To clarify, a natural law would be a formula with variables that one can plug numbers into, in order to predict the behaviour of a system. For example, Newton's law of gravitation is a natural law, Kepler's laws of planetary motion are natural laws, the laws of thermodynamics are natural laws. But a neural net model trained to predict the next frame in a video? How is that a "law"?
I don't see any fundamental difference. A deep neural network is a universal function approximator. It uses different language from what we're used to (weights and activations instead of analytic functions and calculus) but that's not a big deal. The point is that it uses only a handful of latent variables to describe the state of the system at a given time, and these can be used to predict the system's behaviour, which is fundamentally the same thing that a scientist would try to do.
So, to be clear on what you are saying, if I understand correctly you are saying that training a neural net to approximate a function is formulating a law, like for example a natural law? Is that right?
As a for instance, if I train a neural net to predict the motions of the planets, the trained model is a law of planetary motion, like Kepler's laws of planetary motion? Is that correct?
I would say it's essentially equivalent, especially if you choose a neural network architecture with a very low-dimensional layer in the middle with only a handful of variables.
Then the first half of the network (before the low-dimensional layer) will learn how to "encode" the state of the system in the video in as few variables as possible, such as the orientations and angular momenta of the double pendulum. This is equivalent to what humans do when we look at a messy physical system like the Solar System and model it with a few quantitative parameters.
The bottleneck layer will represent the handful of state variables, and then finally the other half of the network will learn the mathematical function that predicts the system's evolution. This is equivalent to what humans do when we work out physical laws and equations of motion.
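A minimal sketch of that encoder-bottleneck-decoder idea (untrained random weights and made-up layer sizes, purely to show the shapes; the paper's actual architecture will differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random, untrained weights -- this only illustrates the data flow.
    return rng.standard_normal((n_in, n_out)) * 0.1

# Encoder: a flattened video frame -> a handful of state variables.
enc1, enc2 = layer(4096, 64), layer(64, 4)   # bottleneck: 4 latent variables
# Latent dynamics: state at time t -> predicted state at time t+1.
dyn = layer(4, 4)
# Decoder: predicted latent state -> predicted next frame.
dec1, dec2 = layer(4, 64), layer(64, 4096)

def predict_next_frame(frame):
    z = np.tanh(np.tanh(frame @ enc1) @ enc2)   # "encode" the state
    z_next = np.tanh(z @ dyn)                   # "equation of motion"
    return np.tanh(z_next @ dec1) @ dec2        # render the predicted frame

frame = rng.standard_normal(4096)
print(predict_next_frame(frame).shape)  # (4096,)
```

The bottleneck forces everything the network knows about the system through those 4 numbers, which is what makes them candidates for interpretable state variables.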
OK, thanks for clarifying. I feel that your description of neural nets' inner workings is a bit idealised, and I'm not convinced that we have seen any evidence that they are as powerful in representing real-world phenomena as you suggest. But that's a big discussion, so let's leave it aside for a moment.
I can agree that a neural net can learn a model that can predict the behaviour of a system, to some extent, within some margin of error.
That's not enough for me to see neural net models as (scientific) "laws". For the sake of having a common definition of what a scientific law is, I'm going with what Wikipedia describes as a scientific law: a statement that describes or predicts some set of natural phenomena, according to some observations (paraphrasing from: https://en.wikipedia.org/wiki/Scientific_law). Sorry for not introducing this definition earlier on. If you disagree with it, then that's my bad for not establishing common terminology beforehand.
In that sense, neural net models are not scientific laws because, while they can predict (but not describe), they are not "statements". Rather, they are systems. They have behaviour, and their behaviour may match that of some target system, like the weather, say. But just as a simulation of the economy or an armillary sphere are not, themselves, "laws" (even though they are possibly based on "laws"), a neural net's model can't be said to be a "law", even if it's based on observations and even if it has an internal structure that makes its behaviour consistent with some (known or unknown) law.
There is also the matter of usability: neural net models are, as we know, "black boxes" that can't be inspected or queried, except by asking them to analyse some data. While useful, that's not a "law", because it does not help us understand the systems they model. If this sounds like a semantic quibble, it isn't. To me, anyway, it doesn't make sense to base scientific knowledge on a bunch of inscrutable black boxes. Scientific laws and scientific theories are not black boxes.
As an aside, neural nets fall short of what Donald Michie (father of AI in the UK) called "ultra-strong machine learning" [1]. That's the property of a machine learning system that improves not only its own performance, but that of its user, also. Current techniques aren't even close to that.
____________________
[1] Machine Learning: the next five years, Donald Michie, 1988
I see why you would say that: these neural networks probably have thousands or millions of weights while the equations of motion can probably be written on an index card.
But I would argue that this parsimony is illusory. There's a lot of implicit knowledge needed for the interpretation of physical laws. The laws are written using specialized mathematical notation, such as special functions and partial differential equations, within a conceptual framework such as Lagrangian mechanics. You need to understand the concept of abstracting and quantifying a dynamic system (most people wouldn't imagine you can do this), and then you have to learn all the tips and tricks of how to reformulate and solve systems.
For example, I could write a mathematical representation of quantum electrodynamics (the theory of how electrons and photons interact) on a single index card. However, I would need to dig into my two shelves of QFT textbooks to actually make any quantitative experimental predictions, on top of my degree, PhD and postdoc experience, which I need just to be able to read the textbooks (and I would still mess up the minus signs).
I think it's important to remember that these neural networks are doing all of that - not just finding the physics, but also all the abstraction, calculation and interpretation that is usually taken for granted but actually very non-trivial.
I sort of agree with both myself and yourself. My point technically must be qualified when interpreted like this, but I actually mean something slightly more subtle than just parameterization.
The tools of physics have a lot of implicit assumptions that guide the end result in ways that I would describe as parsimonious, in terms of how much the output state space must be reduced. Neural nets are much more free, which is why they can be amazing for some very hard shit, but proving they're behaving in an exactly "physical" way is very hard.
"Time is defined so that motion looks simple" is my favourite quote from MTW for this reason. It's intuitive and yet also very physically "rigorous", in a way that people don't necessarily realize is a thing in physics beyond just using mathematics.
Maybe we can just train the AI to do the maths for us, dunno, but I think currently this tabula rasa approach will inform the physics-y-ness. I still call it physics personally, but I don't really think it's interesting from a purely physical perspective.
There have been some works deriving conservation laws and so on from empirical motion, which I think is very impressive at scale, but I don't know what that does for physics as opposed to the applications of said physics.
I don't see why that's relevant: a video camera is just another instrument that records data, not essentially different from the detectors at the LHC, albeit completely un-optimized -- which is necessary for this experiment to work.
If all the AI is seeing of the world is a digital image, then they are likely to mistake the digital image for the world.
As far as I know, the article claims that the AI has discovered new physical variables, yet the researchers are unsure as to what they are. For all we know, these variables are the distance of objects from the edge of the image frame.