You’re confusing immediately useful with eventually useful. Pure maths has found very practical applications over the millennia - unless you don’t consider it pure anymore, at which point you’re just moving goalposts.
No, the confusion isn't mine. The biggest advancements in science are the result of applying leading-edge pure math concepts to physical problems. Newtonian physics, relativistic physics, quantum field theory, Boolean computing, Turing's notion of devices for computability, elliptic-curve cryptography, and electromagnetic theory all derived from the practical application of what was originally abstract math play.
Among others.
Of course you never know which math concept will turn out to be physically useful, but clearly enough pan out that it's worth buying conceptual lottery tickets along with the rest.
Just to throw in another one: string theory was practically nothing but a basic/pure research program, unearthing new mathematical objects that drove physics research and vice versa. And unfortunately for the haters, string theory has borne real fruit with holography, producing tools for important predictions in plasma physics and black hole physics, among other things. I feel like the culture hasn't caught up to the fact that holography is now the gold-rush frontier in physics, with many excited that it might be our next big conceptual revolution.
There is a difference between inventing/axiomatizing new mathematical theories and proving conjectures. Take the Riemann hypothesis (the big daddy among the pure math conjectures), and assume we (or an LLM) prove it tomorrow. How high do you estimate the expected practical usefulness of that proof?
That's an odd choice, because prime numbers routinely show up in important applications in cryptography. To actually solve RH would likely involve developing new mathematical tools which would then be brought to bear on deployment of more sophisticated cryptography. And solving it would be valuable in its own right, a kind of mathematical equivalent to discovering a fundamental law in physics which permanently changes what is known to be true about the structure of numbers.
Ironically, this example turns out to be a great object lesson in not underestimating the utility of research based on an eyeball test. But research shouldn't even need an intuitively plausible payoff to be justified. The whole point is that even if a given research paradigm completely failed the eyeball test, our attitude should still be that it very well could have practical utility; there are many historical examples to this effect (the other commenter already gave several, and the right thing to do would have been to acknowledge them). Besides, I would argue such research still has the same intrinsic value that any and all knowledge has.
> To actually solve RH would likely involve developing new mathematical tools which would then be brought to bear on deployment of more sophisticated cryptography.
It already has! The progress made thus far involved developing new ways to probabilistically estimate the density of primes, which in turn have already been used in cryptography for secure key generation, based on a deeper understanding of how to quickly and efficiently find large prime numbers.
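For what it's worth, the "quickly and efficiently find large primes" part rests in practice on probabilistic primality tests like Miller–Rabin: sample random odd candidates of the desired size and keep the first one that passes. A rough sketch (illustrative only, not production crypto code):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness that n is composite
    return True  # no witness found in `rounds` tries: probably prime

def random_prime(bits: int = 512) -> int:
    """Sample odd candidates with the top bit set until one passes."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand
```

Each round that a composite survives has probability at most 1/4, so 40 rounds makes a false positive astronomically unlikely; real libraries add sieving and use vetted RNGs on top of this.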
Expect Mistral to keep getting large cash infusions until they get competitive.
Managed services weren’t needed because big tech was bending to EU regulations and buying out alternatives. The services aren’t rocket science; plenty of euro devs participated and still participate in building them, they’re just on US big tech payrolls. Expertise is there, money isn’t, yet.
the same folks that presided over catastrophic global warming, animal cruelty at industrial scales, and human inequality, i presume. maybe we can wind those back now that they're gone?
Global warming is the only net new thing on the list and it pays for itself if we get to fusion or planet scale solar. If we don’t, we’re back to the stone age either way.
> LLMs can generate code quickly. But there's no guarantee that it's syntactically, let alone semantically, accurate.
This has been a non-issue with self-correcting models and in-context learning capabilities for so long that saying it today reveals badly out-of-date priors.
You're referring to tools that fetch content from the web, read my data on disk, and feed it to the models?
I can see how that would lead to a better user experience, but those are copouts. The reality is that the LLM tech without it still has the same issues it has had all along.
Besides, I'll be damned if I allow this vibe coded software to download arbitrary data from the web on my behalf, scan my disk, and share it with companies I don't trust. So when, and if, I can do so safely and keep it under my control, I'll give it a try. Until then, I'll use the "dumb" versions of these models, feed them context manually myself, and judge them based purely on their actual performance.
The 'copouts' are what the frontier models are designed to do. If you aren't using the tool as they're intended to, you'll get poor results, obviously.
No, you were being arrogant and presumptuous, providing flawed analogies and using them as evidence for unfounded and ill-formed claims about the capabilities of frontier models.
Lack of knowledge is one thing, arrogance is another.