dlsa's comments

Yeah, that comment they don't like? Crazy people will falsely report an incident to get a SWAT team sent to your address. Other crazy people will call your employer and start a campaign of attacks to get you fired. There are plenty of examples of this out there. Some people, e.g. youtubers, get swatted multiple times. Police turn up multiple times. These aren't isolated cases.


At least they are finding nothing and confirming they are finding nothing. Less scrupulous operators might always be finding something even when it's not there. So that's a sign of good science.

Maybe there's something they haven't found because there's so much data? Mix things up and look again? Change assumptions?

Or, as someone I know likes to say to various smaller humans: have you looked around the couch? Really? Are you sure? Have you had a good look? This is how the TV remote usually reappears, because there is a difference between just looking and having a good look.

The best science also happens when you've looked, not found it, but now know where not to look. Even better science happens when you know exactly why it shouldn't have been there at all. Surprising science happens when you find the errors in your assumptions and discover it can sometimes be where it was previously not expected to be.


> Have you had a good look?

It's difficult to explain, but they[1] tried very hard.

For example, the electron has an electric charge, but it's also like a small magnet. For an ideal elementary particle, the value of the magnet would be 2 * something. For a real elementary particle the value is almost-2 * something, so they are measuring the almost-2, and it's called g [2].

For an electron, the measured value of g is

  2.00231930436256(35)
, that is, an uncertainty of about 2 parts in 10^13. The problem is that it agrees with the current theoretical prediction.

The muon is very similar to an electron, but the experimental g is [3]

  2.0023318416(13)

 and the current theoretical prediction is 

  2.00233183620(86)
  
It's a difference of about 3 parts in 10^9. Most people would be happy with that small a disagreement and forget about it. But they are happy precisely because there is a disagreement: perhaps they can use it to discover a new particle. It may still be a long-lived statistical fluke, but it has already survived many years. Another team claimed that there is a small error in one of the experimental numbers used in the theoretical calculation, but I'm not sure if they are geniuses or crackpots or something in between.
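
If you want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python (the values are copied from the numbers above; the sigma estimate uses only the experimental error bar and ignores the theoretical one, so treat it as rough):

  # Rough check of the muon g numbers quoted above.
  g_exp = 2.0023318416      # measured, uncertainty 0.0000000013
  g_thy = 2.00233183620     # Standard Model prediction, uncertainty 0.00000000086

  diff = g_exp - g_thy
  print(diff / g_thy)           # ~2.7e-09, i.e. about 3 parts in 10^9

  # Crude significance, using only the experimental error bar:
  print(diff / 0.0000000013)    # ~4 sigma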

And there are many other experiments. I like, for example, IceCube [4], which is just a giant chunk of ice in Antarctica used to detect neutrinos from stars. It hosts many experiments; in particular, some are useful for measuring the mass differences of the neutrinos, which is a not very well understood part of the Standard Model.

[1] Not my area of research. They live in the next corridor.

[2] https://en.wikipedia.org/wiki/G-factor_(physics)

[3] https://en.wikipedia.org/wiki/Muon_g-2

[4] https://en.wikipedia.org/wiki/IceCube_Neutrino_Observatory


There have been various amazing "tabletop" experiments looking for effects that would imply new particles/forces. Electric dipole moment of the electron, electric dipole moment of the neutron, "fifth force" (deviations from 1/r^2 gravity at very short ranges), neutron oscillation. The cleverness of some of these experiments is astounding.


gus_massa: Since you are likely an expert, could you recommend a resource that explains how you use the Lagrangian of the Standard Model [1] to actually compute a predicted value for the electron's g?

An elementary resource that goes through the basic steps for a computer scientist (not an expert in QFT) would be great. A simpler particle than the electron is also ok, but I'd love to understand how you mess with that equation.

[1] http://nuclear.ucdavis.edu/~tgutierr/files/stmL1.html


Sadly, not an expert in that area. I only took a course on Nuclear Physics for a Major in Physics [1]. So I can read and understand that stuff, but the fine details pass over my head.

Looking at a recent page of that course, the recommended books are

* F. Halzen, A. Martin, “Quarks and Leptons: An introductory course in modern particle physics” (Wiley 1984)

* D. Griffiths, “Introduction to elementary particles” (Wiley 1987)

(and a few more)

The calculation for g=2 is quite easy (for an advanced physics student). I remember the general idea, but not the details. I think I could reconstruct the details if necessary. It may be explainable in a blog post, skipping some details.

The first correction, g = 2 + α/π with α ≈ 1/137.036, is also humanly comprehensible, and can also be explained with some diagrams. It would be very hard for me, but with a week to search and rehearse it would be possible.

As the sibling comment says, the following corrections, of order (α/π)^2, (α/π)^3 and so on, get harder and harder, and there are many technical details and problems. I can look at the diagrams and get a shallow understanding, but how they are transformed into integrals and how to calculate all of them efficiently is too much for my knowledge.
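
For reference, the series being described has this shape (quoted from memory, so treat everything beyond the first term as my paraphrase):

  a_e = (g - 2)/2 = α/(2π) + c_2 (α/π)^2 + c_3 (α/π)^3 + ...,  with α ≈ 1/137.036

The first term, α/(2π) ≈ 0.00116, is Schwinger's famous one-loop correction; the higher coefficients c_2, c_3, ... require summing ever larger sets of Feynman diagrams.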

[1] I never finished my Major in Physics, but I finished the one in Math.


> Looking at a recent page of that course, the recomended books are * F. Halzen, A. Martin, “Quarks and Leptons: An introductory course in modern particle physics” (Wiley 1984) * D. Griffiths, “Introduction to elementary particles” (Wiley 1987)

It is telling that for a recent course the recommended books are over 35 years old. Consistent with the OP's proposition.


Rather like Jackson is still a standard electrodynamics text after 60 years. Classical EM is finished. But quantum field theory is not.


The complete list is

* F. Halzen, A. Martin, “Quarks and Leptons: An introductory course in modern particle physics” (Wiley 1984)

* D. Griffiths, “Introduction to elementary particles” (Wiley 1987)

* J.J. Sakurai, “Advanced quantum mechanics” (Addison Wesley 1967).

* P.E. Hodgson, et al., “Introductory nuclear physics” (Oxford 1997).

* H. Frauenfelder, E.M. Henley, “Sub-atomic Physics” (Prentice Hall 1992)

IIRC the Sakurai book is more about generic quantum mechanics, but he has two books, and I'm not sure whether this is the one with more particle physics. The other two are more modern, but I don't remember them. I also tried to keep the list short, because the main book of a course usually covers most of the topics.

Anyway, it's a mandatory undergraduate course for everyone who wants to be a physicist. If you want to learn cutting edge particle physics, you should take one or two elective courses on the topic, then write a one-year undergraduate thesis, then do a 5-year PhD, and then perhaps 2 years of postdocs. So the cutting edge is like 8 years away.


The paper describing the theoretical steps necessary to compute g for the muon is hundreds of pages of condensed math, theorems and approximations etc.

The SM Lagrangian is not directly computable, so a big part of theoretical physics is about finding tricks to actually compute things from it.

Incidentally this is why there is disagreement on the muon g-2 discrepancy, at least two theory groups have calculated different values using different approximations.


It should be noted that the anomalous electron g-2 is computable analytically (at least to a very good approximation), which makes the theoretical value much less controversial. The anomalous muon g-2, however, depends more heavily on interactions of quantum chromodynamics, which can only be computed using numerical lattice QCD simulations. This is notoriously hard and has only become practical in recent years, hence theorists don't yet fully agree on the value.

Also, computing even just one part of this value is basically on the level of a theoretical particle physics dissertation. Don't expect to be able to do this without several years of research experience in this specific field.


It may be worth first understanding why g=2 (if you haven't before). This can be done on the basis of special relativity + quantum mechanics, i.e., the Dirac equation:

https://en.wikipedia.org/wiki/Dirac_equation

The "g" is the Lande g factor:

https://en.wikipedia.org/wiki/Land%C3%A9_g-factor

(As I recall nonrelativistic QM gives g=1.)

PS Not a physicist, but learned some of this at some point. Only ever learned about electrons, though; don't know how any of this translates to other particles.


You mess with it by doing diagrammatic perturbation theory, that is, calculating Feynman diagrams. Zee or Weinberg could be good references. There's also lattice QFT, but you generally want to learn the perturbative methods first.


I have two recommendations you might find useful. The first is QED, a series of lectures by Richard Feynman. This text covers the qualitative nature of the perturbation theory used for quantum electrodynamics. The second is Quantum Field Theory for the Gifted Amateur by Lancaster and Blundell. It's nicely written and accessible at the advanced undergrad level, building up QFT from the basics.

Caveat-- I work in astronomy but have a PhD in physics and have taken graduate QFT.


You can look at “QFT in a nutshell” by Zee, a highly recommended and pretty accessible book (to the degree a book on QFT can be accessible), for the computation of g for the electron to one loop order. That calculation can also be found in “Quantum field theory and the Standard Model” by Schwartz in Chapter 17 (p. 321). I’m not aware of a textbook exposition of the calculations relevant for the muon g.


They haven't found nothing. They've found something, which is nothing.

They've looked, been able to rule out some hypotheses of what they might find, and have established some evidence against others. Progress achieved, and the search continues.


This.

This author, who should know better, is suggesting that the only "success" is a new discovery.

This is patent nonsense. Every time a hypothesis is ruled out, and every time a hypothesis is ruled out with greater confidence, the experiment has succeeded.

What is true is that discoveries drive public excitement and public support for additional funding. That is a political problem and it is solvable. If Western governments can find the public support for trillions in military expenditures, I am confident that it can be found for the comparably meager budgets of the scientific establishment.


The issue for particle physics specifically is that they _hope_ to find something that breaks the theory, but so far they only find confirmations of the current Standard Model. Successful experiments, yeah, but unfortunately doing little to push our understanding of the universe forward.

The reason why they want to break the standard model is, simplified, two-fold:

1. While the theory is incredibly powerful in its domain, we have been unable to unify it with gravity and other theories of matter. This is a problem because it's supposed to be a theory summarizing the fundamental building blocks of the universe and it should therefore describe _everything_.

2. The theory is ugly. It's a mess with many parameters and weird interpretations all shoved together. Physicists don't like this, not just for aesthetic reasons, but also out of experience. It reminds people of pre-relativity electrodynamics, for example. Lorentz had what was essentially a working theory of relativity, but it was a mess. People fear the Standard Model is the new Lorentzian relativity: essentially correct but missing some key insight that is needed to fix it.

Finding something that breaks the standard model could go a huge way to solving both these issues. But the standard model just keeps getting confirmed at higher and higher resolution.

In software terms: it's like you know there's a 1-in-1,000,000 bug _somewhere_ in the software, but every single test you write to try and find it passes.


There's a huge mismatch between people who are science fans and people who are doing physics anywhere near particle physics. It's quite hard to explain how the field is spinning its wheels when set against what people consider scientific progress.

Edison's "I found 100 things that didn't work" is a nice parable, but it doesn't work across an entire field.


(former PhD in Particle Physics in QCD here, far from an expert)

> While the theory is incredibly powerful in its domain, we have been unable to unify it with gravity and other theories of matter. This is a problem because it's supposed to be a theory summarizing the fundamental building blocks of the universe and it should therefore describe _everything_.

I think this is a misunderstanding of what the Standard Model is and of the scientific process that went into it. It is a model for describing electroweak and strong force interactions, and that's it. It is based on years of experimental data and on coming up with a consistent theory that fits the data. No one went out to come up with a "theory of everything", missed, and ended up with the Standard Model.

The Standard Model is clearly a low energy effective theory of something more, almost by definition. The problem is we have absolutely no data to drive predictions of higher order theories (which could also turn out to be low energy effective theories themselves). Without data, there is a very real chance that the standard model is the best model we're going to have for particle physics.

> the theory is ugly. It's a mess with many parameters and weird interpretations all shoved together. Physicists don't like this. Not just for aesthetic reasons, but also out of experience. It reminds people of pre-relativity electrodynamics for example. Lorentz had what was essentially a working theory of relativity but it was a mess. People fear the standard model is the new lorentzian relativity, essentially correct but missing some key insight that is needed to fix it.

Ugly is a subjective term. A lot of people talk about stuff like 'naturalness' problems with the Standard Model, but is that really a problem? Who are we to say which numbers are the natural order of things? Gravity is orders upon orders of magnitude weaker than all the other forces; is that 'natural'?

I think comparing it to Lorentzian aether is a little harsh. If you compare special relativity to Lorentzian relativity, special relativity is just a simpler model (it doesn't need aether). I think it's extremely unlikely at this stage that, given only the data we have right now, someone would be able to come up with a theory that is fully consistent with the Standard Model but simpler and doesn't predict new stuff. It's not impossible, but it is very unlikely.

Actually, I think the biggest problem with the Standard Model is how to go from the theory to real predictions. Formulating the Lagrangian of QCD is the easy bit; converting that to real predictions (either on the lattice QCD end at large alpha_s or perturbative QCD at small alpha_s) is extremely difficult. It's almost laughably absurd: it is not unheard of for calculations of single processes to take a decade or more.


I think a lot of commentary in this thread is losing sight of what the word "model" really amounts to in a scientific context.

It's an abstraction. A bunch of math that just-so-happens to result in accurate predictions. That's all it really is. How the universe really works (putting Tegmark aside) is a separate, ultimately philosophical question.

Much of particle physics is simply exploring the parameter space in which various models might be applicable. In the most exciting case, the model crumples in some new, unexplored region.

The value of bigger accelerators comes down to whether the higher energies, which we have not yet explored, are worth exploring relative to the cost of doing so. That is certainly debatable.

But it's not a "desert." Nobody knows what higher energies will reveal.


> It's an abstraction. A bunch of math that just-so-happens to result in accurate predictions. That's all it really is. How the universe really works (putting Tegmark aside) is a separate, ultimately philosophical question.

But philosophy is not knowledge, and it is in fact math that is the only form our knowledge can have in this area, whether we like it or not.


Physics is based on metaphor, not math. We take common experiences like space, distance, speed, temperature, "energy", quantify them with other stable experiences we can use as reference units, then select the operations on them which happen to have predictive value. The operations have become more abstract over time, but they're still more complex variations on the same underlying concepts - for example, generalising 3D Euclidean space to the abstract ideal of a set of relationships in a mathematical space defined by some metric.

There's nothing absolute about either the math or the metaphor. Both get good answers in relatively limited domains.

One obvious problem is that reality may use a completely different set of mechanisms. Physics is really pattern recognition of our interpretation of our experience of those mechanisms. It's not a description of reality at all. It can't be.

And if our system of metaphors is incomplete - quite likely, because our experiences are limited physically and intellectually - we won't be able to progress past those limits in our imagination.

We'll experience exactly what we're experiencing now - gaps between different areas of knowledge where the metaphors are contradictory and fail to connect.


This is all wrong, unfortunately, and that’s because it is based on a wrong premise. Experience and knowledge are two different things, and whether we are capable of experiencing certain aspects of reality or not, math is how we know things. In the areas we cannot experience directly the ability to form mathematical images and ideas can even be thought of, if you will, as an extension of our ability to “see.”


>Physics is based on metaphor not math. We take common experiences like space, distance, speed, temperature, "energy", quantify them with other stable experiences we can use as reference units, then select the operations on them which happen to have predictive value.

If you experience pushing an object that feels like it weighs 1 kg with a force that feels like 1 N, you are going to experience seeing it accelerate at 1 m/s^2.
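
Spelled out, that is just Newton's second law rearranged:

  a = F / m = (1 N) / (1 kg) = 1 m/s^2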


My point is that there is nothing magical about this model or that model.

HEP experimentation is just exploring parameter space and seeing if our models hold up.


I think we probably agree on the core issue; I just kept things a bit too brief.

There are people who feel like I described, and there are people who disagree to varying degrees (physicists, amirite?). But I do think we all kind of agree that we'd prefer to find experimental results that break the Standard Model rather than prove it right yet again, but it seems unlikely we're going to find that smoking gun anytime soon. The model is an attempt at fitting data, and like you said it works in the regime it was designed for, but it can't be _the_ theory of everything. It would be great if it broke somehow, so we could investigate _why_ and drive new avenues of research based on that, which might be more promising for resolving gravity and the other forces (or the anti-matter mystery, or shedding some light on what dark matter is)*

As the OP said, it's still good science if we prove that the current theory holds up, but no one is really happy with it at this point because everyone knows it's not going to be the final unified theory that we all want to see.

---- * Personally, I have a gut feeling those three are going to be resolved in rapid succession, if they're ever solved.


The author correctly reports a scientific debate happening inside science, among scientists.

Particle physicists can pretend this is just a political problem all they want, but if more and more other physicists are convinced the field is entering a desert, there will be no new accelerator. Maybe even more importantly, if students learn about the true state of the field, they will choose more interesting things to study.

Human time and effort are limited, and scientists don't go around devoting hundreds of thousands of person-years to ruling out random hypotheses. Effort at the LHC level is only committed because there was a very, very good reason to band together to get it done, one that convinced many other scientists (who in turn helped convince funding bodies). The LHC has been a huge success on its own terms, but its results are simultaneously a massive problem for particle physics as it stands right now.

Not a problem for science, just a problem for the field of particle physics, which will need to adjust to the current reality rather than holding out for more data.


The expectation is that a new collider, 10x as powerful and expensive, would not be anywhere near big enough to test current hypotheses.

There is plenty left to investigate in solid-state and superfluid physics. You don't need (much of) a collider for those.


The problem isn't that "finding nothing" isn't progress. The problem is that "finding nothing" is terrible progress-per-dollar.

If you're still having trouble with that concept, peer into the alternate universe where the LHC actually provided enough data to nail down the Theory of Everything. Now that would be some progress-per-dollar to celebrate.

There's a contingent of people who just don't want to think about "how much" progress something is making and want to live in a fantasy world where building a multi-billion dollar particle collider that finds nothing is exactly the same as a $50,000 experiment that finds nothing. I don't know that I'm terribly interested in trying to argue y'all out of that belief. But I can say with great confidence that no matter how good it may make you feel, if you go on to argue about how vital it is to spend another 5x as much money to build another particle collider that we have no reason to believe will find anything new, you will continue to be marginalized and find your influence waning to apparently no effect.

But in the faint hope of maybe convincing you, consider that there is no infinite money fountain, and even if you just can't process that fact, there certainly isn't an infinite number of physicists. What is so vital about another particle accelerator that we must dedicate thousands of professional careers to it despite the lack of solid reason to hope anything will come of it? Why not let them do something else? I submit it's all the Availability Heuristic. You see and apprehend the particle accelerator, so it must be a good idea. You don't see the thousands upon thousands of other things you're trading away for it, so they don't factor in.

But given the current big fat zero of rational reasons to build another, it is very easy to build a model in which those other experiments will actually be the ones that make the difference somehow. Probably by some long, convoluted chain we can't imagine now; I doubt there's a bench experiment that we just haven't done that will nail down quantum gravity. But there are a lot of other interesting paths. Quantum computers, for instance, just by their nature, tend to probe the limits of quantum theory in a way nothing else can. Something very interesting could come out of that. Dark matter detectors could produce something. Someone might actually work a theory down into something that can be tested.


I think these are two distinct things:

> The problem isn't that "finding nothing" isn't progress. The problem is that "finding nothing" is terrible progress-per-dollar.

> if you go on to argue about how vital it is to spend another 5x times as much money to build another particle collider that we have no reason to believe will find anything new, you will continue to be marginalized and find your influence waning to apparently no effect.

The first part is fine if by it you mean you think the physics-practitioner-theory of the collider advocates (a theory about what next research steps might be fruitful, not a theory of physics) is now implausible to you. On the other hand if you just think something like "We expect the future (of physics) to be 'like' the past (not making progress)", then that isn't an explanatory statement and is unrelated to whether we should fund a future collider. If you know what you're going to find in an experiment, you're not setting out to discover something new, so there is no such "future will be like the past" principle here.

The second really is an argument not to fund a future collider because it comes with an explanation: what good theory (of physics, this time) do we have that predicts we'll find new tests, or new problems? If there's no very good theory, new tests or new problems might come from other experiments instead, especially if they're a lot cheaper so we can do more of them. Personally I guess that it's a good argument you make here in this second part, but what do I know?


What are the alternatives? Better weapons, better ad-targeting systems, better gambling hidden behind a veneer of gaming on mobile? We can look at where our government and our society currently allocate money and find that the allocations look bad enough that even building a bigger particle accelerator that might not find anything is an improvement overall. As a singular species, I think we would be better off going down that route, given the average of what would be given up.

The problem is that humanity is not unified for our own betterment, so that ends up being a bad metric to judge actions by. I think you are right about the outcome: it would mean losing influence, and even if we get funding it'll likely be diverted from the areas we least want it diverted from. You're probably right, and I find that unsatisfactory.


Sorry, are you seriously proposing that either we fund new particle accelerators or we're just going to build weapons/ads/gambling systems, and there are no other choices?

I want to be clear that this is your claim before I spend any more time on it.


No. I'm pointing out that our current system is already spending money on far more wasteful things, thus it should be possible to fund accelerators by taking money away from the things that are an outright detriment to humanity rather than from the things that are, at worst, only useless.

I even point out that the reality is that if we do fund particle accelerators, the money will likely be diverted from places we don't want it diverted from, like other research spending.

>even if we get funding it'll likely be diverted from the areas we least want it diverted from

Note I even end by saying the poster is probably right, for as much as I don't like that they are (not meant as a negative to the poster, but to how humanity currently allocates our resources).


Unfortunately the "weapons are a waste" opinion has taken a severe hit since February.


That feels like a false dichotomy.

This is pushing forward research into theory; even with highly positive results, it's completely unknown whether any of those results would actually translate into progress for the human race beyond knowledge itself, and at a base cost of €21 billion that knowledge comes with a huge opportunity cost.

We face so many tangible risks right now, and €21 billion invested elsewhere would likely produce meaningful advances on our problems, so the question 'is spending this much money disproving philosophical arguments justifiable right now?' should rightly be asked.


Isn't the false dichotomy that if we spend €21 billion on a particle accelerator then we must take it from other research into advancing humanity, instead of taking it from other areas that don't provide benefit to humanity as a whole (though they do provide benefit to some groups, at equal or greater cost to others)?

>'is spending this much money disproving philosophical arguments justifiable right now?' should rightly be being asked.

In light of all the expenditures we are already making elsewhere, I don't see how many of those can be justified but this one not.


Okay, we need to take that money from somewhere. There is only so much labor on the planet, and that is what the money is buying in the end. (I'm including corruption in labor here) Some labor is more valuable than others, and we can debate how much we want to spend, but in the end if we have someone do X they could do Y instead. Sometimes Y is sit around doing nothing, sometimes it is valuable.

The problem here is that we don't know what will be discovered or whether it will be useful. Cheap science fiction FTL without all the time dilation: very valuable. Adding half a decimal point to our models: probably can't be used for anything, and so less valuable than a game. I have no idea, I just picked two unlikely extremes.


Better to just create trillions out of nothing and use them to buy financial assets, now that's a good use of money!

(https://www.ecb.europa.eu/mopo/implement/app/html/index.en.h...)


False dichotomy.

Just because you can name some other bad use of money doesn't make this one good.


You're talking about opportunity costs; it's not a false dichotomy at all. Spending trillions on financial assets means they are not spent on other things.


Well, if you spend the 21 billion on health research and life extension, you can live long enough to see 21 billion spent on physics research.


Huge fleets of space telescopes

Multiple gravitational-wave detectors


These I'm afraid will only confirm what we already know.


Really? What about quark stars/planets, long bursts?

Telescopes give us lots of confirmations, but they, especially the largest ones, also constantly feed us data about new, unknown things.


It's like going treasure-hunting and demonstrating to everyone's satisfaction that there is definitely no treasure where you looked. It doesn't tell you very much about 1) if the treasure you're hunting really exists (there's many more places it could be), or 2) what exactly the treasure consists of.

It's technically more information, but it's not very much information.

E.g., what did we learn from the underwater hunt for MH370? Not a lot; millions were spent and we still have no clue where the thing is. It's not just political to say that the hunt failed in an important way.


My take is we were asking the wrong questions, and now we know that, so hopefully we can figure out the right questions.


> This is patent nonsense. Every time a hypothesis is ruled out, and every time a hypothesis is ruled out with greater confidence, the experiment has succeeded.

The problem is, as far as I know, that there is an effectively infinite space of supersymmetry hypotheses. Ruling one of those out is a pretty worthless success.


> If Western governments can find the public support for trillions in military expenditures, I am confident that it can be found for the comparably meager budgets of the scientific establishment.

Sadly, that achievement -- public support for trillions in military expenditures -- belongs to a not-so-Western government invading a wanna-be-Western government.


>That is a political problem and it is solvable. If Western governments can find the public support for trillions in military expenditures, I am confident that it can be found for the comparably meager budgets of the scientific establishment.

Is it solvable? Humans are notoriously bad at certain things and investing in things that aren't showing interesting results is one of them. How many companies will cut something that prevents problems because they don't see problems?

If you want to solve this, you would need to do it the same way the MIC has solved military funding: by ensuring that continued funding of science is necessary for politicians to be re-elected. But that borders close enough to corruption that I'm not sure the scientists who need the funding would be agreeable to it, to say nothing of the difficulty of engineering it.


> If Western governments can find the public support for trillions in military expenditures, I am confident that it can be found for the comparably meager budgets of the scientific establishment.

We just need, occasionally, a belligerent something to do something to remind us of why experimental particle physics is needed in its equivalent of peace-time.


A new discovery over some time period is a reasonable expectation. For example, if we discover nothing in the next 1000 years we would have to conclude that there is no longer any point in trying.


This is indeed progress, but as I understand the situation, it is progress in the shape of taking another step up the ladder only to realize it looks like it was the final step, after decades and billions of dollars spent searching.

Maybe, scary thought, the theory of general relativity and the Standard Model are pretty much entirely self-contained and cannot, by their design, extend to encompass the quantum world?

Other theories maybe can, but then we would need to look at even what we already have from a completely different angle?

Like how we have colors. By watching a rainbow we can see them all. There can be nothing more. Until you realize they are mere wavelengths in the optical spectrum and there is so much more. But that is a quantum leap in viewing things. However, maybe this is what is needed?

Maybe this is just philosophical garbage though. :D


Sure, GR may well be self-contained and unable to encompass the quantum world, in the same sense that Newtonian mechanics is and can't. But Nature does not have such boundaries, especially at such a fundamental level, and it is "self-contained" as a whole. So, while it may be impractical to try and create a unified theory of some aspects of reality, say, quantum mechanics and linguistics or economics, theories concerning fundamental aspects of Nature naturally are, and have been, subject to unification.


Note that the colors we can perceive aren't all in the rainbow. There's no pink in the rainbow. That's because pink is what is left when you take white light and remove the green part. It's minus-green. There's no wavelength that corresponds to pink.


So what? A mix of wavelengths is just as real as a single wavelength. More to the point, colors are merely perceptions (artifacts of consciousness) which can be caused by a variety of factors.


The problem IS that they have found nothing. We know the Standard Model, as good as it is, is either incomplete or incorrect, and without new physics somewhere we have no indication of how to fix it.


> without new physics somewhere we have no indication of how to fix it

This is a bit harsh. We have loads of unexplained, well measured phenomena. More clues help, but they're not conclusively necessary.


Idk, there's been no major progress since the 60s-70s after QCD. String theory is a complete dead end, and there aren't any great candidate theories out there. So the lack of findings certainly hasn't helped.


This "nothing" is valueable information nonetheless.

Science is just as much (often more) about ruling out hypotheses as it is about confirming them. Sometimes that means ruling out all existing hypotheses, meaning new ones have to be formulated to be tested in turn.


The problem here is that the most favoured hypothesis currently is "there is nothing there that can be discovered with any accelerator that can be built using less than 80% of the world's GDP over the next 50 years." And all the valuable "nothing" we currently find just supports that hypothesis, suggesting there is no point in formulating additional ones.


I'm curious: what are the experiments we could run if we had, for example, 80% of the world's GDP?


Lots of Grand Unified Theory candidates become interesting at extremely high energy levels and many of them assume the various fundamental forces will merge. You can test these theories at sufficiently high energy levels.
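
As an illustration (my own sketch using standard one-loop textbook numbers, not something from the parent comment), you can watch the three Standard Model gauge couplings drift toward each other as the energy scale rises:

  import math

  # One-loop running of the inverse gauge couplings 1/alpha_i(mu).
  # Rough inputs at the Z mass; b_i are the SU(5)-normalized one-loop
  # beta coefficients of the Standard Model.
  inv_alpha_mz = [59.0, 29.6, 8.5]   # 1/alpha_1, 1/alpha_2, 1/alpha_3
  b = [41/10, -19/6, -7]
  MZ = 91.19                         # GeV

  for mu in (1e3, 1e9, 1e15):        # energy scale in GeV
      t = math.log(mu / MZ)
      print(mu, [round(ia - bi * t / (2 * math.pi), 1)
                 for ia, bi in zip(inv_alpha_mz, b)])

Around 10^15 GeV the three values get close, though in the plain Standard Model they famously don't quite meet; that near-miss is part of why people keep proposing new particle content.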


> Have you had a good look?

Are you aware of how hilarious it is, in the context of high energy physicists verifying the Standard Model, to ask if they had a "good look"? I don't think a collective effort to look any harder has ever existed in the history of humankind.


I've received feedback from some very smart people who laughed out loud and knew exactly what I meant by having a good look. Two of them are physicists and many more are engineers. They said they have found many metaphorical couches and there's a lot of nothing. They've also found quite a few interesting metaphorical paperclips and other debris. But there are so many more places to look. They also think there are a bunch of metaphorical couches they still haven't found. It's especially hard to look under something when you can't even recognise what it is you should look under. That's part of the difficulty.

They've also assured me they'll let me know when they finally find another metaphorical TV remote.


Yeah I assumed your comment was in good faith, and as a physics drop-out I'm well aware of how most people have no idea of the sheer scope of these research projects, so I didn't mean it as a jab against you.


I didn't take it as a jab. I also received a bunch of messages from people poking fun at me elsewhere for my comment. In the past, several physicists have been a source of fine wine when I've won a bet.

There will probably be photos of couches in my office next week when I get back.


Ok, then where should they look if you're so smart? You think people who have dedicated their lives to studying this subject haven't considered the concept of looking everywhere possible, and that you're adding something to the discussion by trotting out this 'clever' metaphor?


It's probably something they can't measure now that will be discovered next; that's why they need to keep looking.


Bike shedding for the senses


And sometimes there just isn't anything there to find.

But we keep looking because the idea of nothing is simply too abhorrent.

There is a tremendously difficult and disturbing aspect to Peer Gynt (the original story, not the music based on it).

Searching for his "self", he peels off layer after layer like an onion, getting ever deeper, seemingly towards a real self. But at the last layer he experiences the horror of finding nothing more. What he "is" is constituted by the sum of the layers.

After that, the entire nature of the "search for meaning" has to change.


One thing to notice is that during the experiments they keep only a small percentage of the total data they could record due to limitations in storage and processing capabilities. There’s a lot of fuss inside the scientific community about what to keep and what to disregard.


Just based on my experience in academia, there are probably plenty of people with good hunches about where to go next. Unfortunately, the system (grants system especially) is actively discouraging them from trying anything too new that would undermine the status of the incumbent experts.

If you banned every single influential scientist who hasn't contributed a major discovery in the last 10 years from participating in academia, we'd have colonised the galaxy decades ago.


I've always thought that a huge limiting factor is that we can only really observe from our current point in time. We are time-limited, which is a big hindrance, just as it would be to only observe things from a single physical position (which is also sort of true, but at least we can send probes and whatnot out there).


There is probably a hard limit on how many elementary particles you can find in the first place. I don't know if there are further theories. Are there X or Y bosons? Low-fat quark antiparticles?


There are many proposed extensions of the Standard Model: https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo... Some of them propose new particles, some of them make small changes to the particles we know.


> Or as someone I know likes to say to various smaller humans: have you looked around the couch? Really? Are you sure? Have you had a good look? this is how the tv remote usually subsequently reappears as there is a difference between just looking and having a good look.

Dealing with larger humans in a social setting - the only method I've found that works for finding the remote is addressing one of them on the couch and saying "The remote is UNDER YOU!"

Then they *actually look*.


That's happening. I have a friend who did a physics + data science PhD for analyzing existing data in a new way.


The best science is found by accident.


[flagged]


Hey, can you please not fulminate on HN? It sounds like you might know quite a bit about the field (or some of it), but commenting like this degrades discussion and evokes worse from others. If you would make your substantive points thoughtfully, and share some of what you know, that would be much better.

https://news.ycombinator.com/newsguidelines.html


Apologies, I did get carried away. And thanks for keeping folks like us in check!


the more posts are banned and shadowbanned the more like reddit this website becomes, aka a boring place for boring people to say boring things. interesting people don’t like censorship


I think if the GP had written something relevant about their personal experience in a scientific field, that would have been much more interesting, as well as not breaking the site guidelines.

The trouble with your argument "interesting people don't like censorship" isn't that you're wrong—it's that there are also a lot of people (many more, actually) who post dreck. If you want a forum that doesn't get overrun with internet dreck, you have to have some strategy for countering it. HN's strategy is to have a clear organizing principle (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...) and clear rules to support it (https://news.ycombinator.com/newsguidelines.html). That barely works, but it's a lot better than nothing.


Be more interesting. Ok, is it worth me writing a rebuttal to all this? Seems I struck a chord.

Edit: reading some of the comments and seeing venom and vitriol. They're the opposite of the feedback I've gotten from elsewhere.


It's the opposite. The moderation is good, is very much needed, and helps keep discussion quality/signal-to-noise high, making this a site worth visiting. The thoughtless flamewars you can find everywhere else on the web are what bore me, personally. The rules of conduct on this site aren't even a high bar, and no one is preventing anyone from being a jerk if they want to, merely from doing it in this forum.


Paperclips are important! We need to optimise paperclip production!

some time later

The entire universe is one giant paperclip constructor.


You may find the Universal Paperclips game amusing.

https://www.decisionproblem.com/paperclips/index2.html


I already did.


This reminds me of a joke you recently told.


Good idea. It's also needed as part of a disaster recovery or continuity plan. E.g., every business should have one.

For those who use a domain with a catchall on it for various purposes: do you have a plan for dealing with what happens if you die and all those many, many aliases inadvertently get handed to some new domain owner?


The fact you couldn't find that reason yourself implies you could not have made an informed choice. You were effectively compelled. That isn't permission by consent; that was permission by coercion. Being forced to consent isn't ethical.


Trust is earned. I can reasonably have a default level of low trust for practically any app. This isn't some blind anger stance. It's not blanket mistrust or some kind of ignorance. It is healthy scepticism, totally reasonable in today's environment.

Plenty of app developers have muddied the water enough that apps should have a lower level of trust given to them. Stealing data is a reasonable fear now. It is not reasonable to assert otherwise.

It is therefore reasonable for a person to wonder why geolocation is requested, and then to be suspicious when the app doesn't seem to need it, regardless of the underlying technical reason. Those technical reasons are part of informed consent. If I don't have informed consent, do I really have consent?

It's also not a sad state at all. It's healthy. It's part of the modern landscape that someone can be suspicious, and rightly so. If not, you're setting people up for misfortune. Is there some reason you want people to blindly trust like this? That seems almost abusive to me.


Musk should dig a pit. A pit. An office in a pit. In the ground. In a pit. Pit office. Office pit. Yes. In a pit.


It's all very interesting as an idea. But in reality, when things go fully tracked, you'll be fined for not having location data for every trip. It will impact insurance. Your registration. Even basic road usage. People have this idea right now that public roads will always stay free. This is not guaranteed.

Even the coming road taxes are all about constant tracking. How far / which road did you drive? They'll demand the answer. And if you put up a Faraday cage to stop this? Suspicious. The default assumption is going to be fraud. Criminal behavior.

All EVs will have some variety of tracking sooner or later, so that's basically all new vehicles. The alternatives will too, whatever they are. Hydrogen?

Transport privacy is likely dead already.


Well maybe the auto manufacturers will push the cell carriers to finally have service everywhere then... That'd be nice.


I might need to learn APL. It seems like I'm missing out on something. Lisp was a gift which helped with my C and Elixir. That was a mini-revelation in itself.


APL, K, or J; I've learned APL and was able to use it effectively, but things only really "clicked" when I learned K. And despite several attempts, J never did click for me; not sure why, probably the associations I have for the ASCII characters are too strong to break.

But yes, highly recommend learning members of this family. It will help your C for sure as well, though possibly not in ways you'd expect.

After using APL and K for a while, I realized 95% of the "abstraction" features in C++ / C# and friends are useless; I now avoid C++. My C code became way shorter, way more efficient, way simpler, but also much less orthodox.


What do you recommend to learn APL and K?


Try “Learning APL” - https://xpqz.github.io/learnapl

Disclaimer: I’m the author


Thanks. Will check it out.


From https://en.wikipedia.org/wiki/K_%28programming_language%29

Here's their example:

The following expression sorts a list of strings by their lengths:

x@>#:'x

Ooooooookay. Personally, I recommend a lot of alcohol. This will be a very wild and geeky ride. Bring extra pizza.


Personally, I recommend less alcohol.

Pronounced, this would be "x at grade-down of count of each x", and indeed in "q" (a syntactic sugar for K that uses words instead of symbols), this is almost how you would write it (rather: "x at downgrade count each x"). It's basically algorithmic math notation: in math you say "b^2-4ac" rather than "b squared minus 4 times a times c"; in K you say "x@>#:'x".
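
For anyone more comfortable with a mainstream language, here is a rough Python transliteration of that idiom (my own sketch; K's monadic > is grade-down, so this sorts longest first):

  # x@>#:'x : index x by the grade-down of the count of each item
  x = ["apple", "be", "cat"]
  counts = [len(s) for s in x]                                  # #:'x  (count each)
  order = sorted(range(len(x)), key=lambda i: counts[i],
                 reverse=True)                                  # >  (grade down)
  print([x[i] for i in order])                                  # x@ -> ['apple', 'cat', 'be']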

It's not just a matter of symbol/syntax familiarity - there's also idioms, etc. But the "scariness" is similar to the "scariness" of a language like Japanese or Arabic, which uses different graphic elements, syntax, vocabulary, and idioms. You might not like it, but it's not because there's something weird or wrong about it -- it's just foreign.


Sounds like a recipe for alcohol addiction and obesity. Otherwise maybe I'd try that. ;)


I learned APL on a mainframe over 30 years ago, which is not something I'd recommend (also, you're unlikely to have access to one), but rumor says Dyalog has good tutorials, such as "Mastering Dyalog APL", see links here https://www.dyalog.com/getting-started.htm

As for K, Stefan Kruger's text seems like a good introduction https://xpqz.github.io/kbook/Introduction.html


Thanks. Will check these out.


I’ve written a few things on APL and K for the learner. Links here:

https://xpqz.github.io/about/


If the office is so great, pay me extra for the commute.


Many companies try to! The problem is that paying extra for the commute necessarily implies that people who end up working remote are penalized, which they understandably aren't happy about. As the article describes, when companies like Spotify try to come up with an employee-friendly policy, they inevitably conclude that remote employees should be paid the same as in-office ones.


I think just wiping the fees/charges off commuting would be a good start. Public transport / fuel costs are still a thing, and they continue to increase. Not everyone is using an EV. Yet.

