That's how I read it too, but the record is only for hottest June 24th of all the June 24ths since 1888. So yeah it's 1/number of years or so but we get 365 attempts at that a year to have this headline.
I started using an AI calorie tracking app. For non-branded foods, the calorie counting is pretty inaccurate and sometimes very off. However, having an app that can track pictures and create short descriptions that someone else (my trainer) can easily review has been pretty helpful.
I'm in your boat. I tried TikTok out a few times, including making a new account, but it never showed me good content. I had maybe one or two longer sessions, but never felt the need to go back, like I (unfortunately) do with Reddit or YouTube. I could never understand why it was so popular, but maybe I'm just a curmudgeon.
I think that's part of why it's always been a little bit of a head scratcher for me — I didn't really go into it curmudgeonly, I was genuinely interested in it, people seemed to like it, and I was interested in something new. It just never worked out at all for me.
I even had people telling me in all seriousness "I must secretly like the content", as in the algorithm knows better than I do what I like. Which is kind of a weird and maybe even disturbing idea to buy into if you think about it.
I was told to keep at it, which I did. I'd put it aside for a long time, go back to it, and repeat the process over and over again. Eventually I just gave up. I always felt like it was targeting some specific demographic by default and never got out of that algorithmic optimization spot for me.
>It gives me hope that teenagers are watching his videos and becoming inspired to go into infrastructure.
Sadly, that was me ~10 years ago, but the lure of FAANG money was too strong and I went into EE/CS after 1 year as a civil engineering major. I wonder if one day we will really start feeling the effects of this talent reallocation, and civil engineering will become a higher paying profession.
I think they are correct; do you have a source? To my knowledge, the only other components are the fully connected networks, which are not big contributors.
It's quadratic, because of the dot product in the attention mechanism.
You can use KV caching to get rid of a lot of the quadratic runtime that comes from redundant matrix multiplications, but after you have cached everything, you still need to calculate the dot products k_i * q_j, with i and j being the indices of the tokens. With n tokens, you get O(n*n).
But you have to remember that this is only n^2 dot products. It's not exactly the end of the world at context sizes of 32k, for example. It only gets nasty in the hundred thousands to millions.
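To make that concrete, here's a toy sketch in C of where the n*n comes from (made-up sizes, single attention head, no softmax or value aggregation): even with every key cached, each query still has to be dotted against every key.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy illustration of why attention scores are O(n^2): even with all
   keys cached, every query is still dotted against every key.
   The sizes here are made up. */
int main(void) {
    const int n = 1024;   /* tokens in the context        */
    const int d = 64;     /* per-head embedding dimension */

    float *q      = calloc((size_t)n * d, sizeof *q);      /* queries, n x d      */
    float *k      = calloc((size_t)n * d, sizeof *k);      /* cached keys, n x d  */
    float *scores = calloc((size_t)n * n, sizeof *scores); /* score matrix, n x n */
    if (!q || !k || !scores) return 1;

    long multiplies = 0;
    for (int i = 0; i < n; i++) {          /* each query...        */
        for (int j = 0; j < n; j++) {      /* ...against every key */
            float dot = 0.0f;
            for (int c = 0; c < d; c++) {
                dot += q[i * d + c] * k[j * d + c];
                multiplies++;
            }
            scores[i * n + j] = dot;
        }
    }
    printf("n=%d tokens -> %ld multiplies (n*n*d)\n", n, multiplies);

    free(q); free(k); free(scores);
    return 0;
}
```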
For small values of n, the linear terms of the transformer dominate. At the end of the day, a double layer of 768*2048 is still north of 3.1 MM flops/token/layer.
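Rough back-of-envelope on that crossover, using the same ballpark numbers (d_model = 768, d_ff = 2048) and ignoring the QKV/output projections and value aggregation: the feed-forward pair costs about 2 * 768 * 2048 ≈ 3.1M multiply-adds per token per layer no matter the context, while the attention scores for one new token cost roughly n * 768, so the quadratic part only starts to dominate around n ≈ 4k.

```c
#include <stdio.h>

/* Back-of-envelope comparison (toy numbers, not from any particular model):
   per-token, per-layer multiply-adds of the feed-forward pair versus the
   attention-score dot products at a given context length n. */
int main(void) {
    const long d_model = 768;   /* assumed hidden size */
    const long d_ff    = 2048;  /* assumed FFN width   */

    long ffn = 2 * d_model * d_ff;  /* up + down projection, ~3.1M */

    for (long n = 1024; n <= 131072; n *= 4) {
        long attn = n * d_model;    /* new token's query dotted with n cached keys */
        printf("n=%6ld  ffn=%ld  attn=%ld  (%s dominates)\n",
               n, ffn, attn, attn > ffn ? "attention" : "ffn");
    }
    return 0;
}
```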
I hardly think it's fair to say that they were 'taking advantage'. If Google says unlimited, should the typical person really expect that to be taken away? That's a really bad look for Google. "If we offer something that's good value to you, expect it to be taken away suddenly in the future." Those are not the actions of a company I would want to rely on.
With modern connected devices, I absolutely do expect for features (or even total functionality) to be removed on a whim by the manufacturer. Cloud services are no different.
Lesson being: do not rely on devices or services that rely on a third party. They absolutely will screw you; it’s only a matter of time.
If you do not believe this, then I would say that you have not been around this industry long enough. There may be rare exceptions, but this should be your rule if you care about the longevity of your software and data.
Of course it is not acceptable. But I imagine every company out there that has skin in this game would boldly taunt in reply: “whachha gunna do ‘bout it?”
Sure, we could boycott. How often is that ever effective in this modern age? Is it ever? I don't think I've heard of a successful one outside of history books.
Our lawmakers are corrupt and wholly in the pocket of those who would stand to benefit from perpetuating this shameful status quo. From my perspective, there ain't a damn thing we can do to fix it (or a thousand other problems), unless we are ready to start holding them accountable. I think that will take putting some of their heads on pikes.
Most unlimited services operate like all you can eat buffets. There is some secondary constraint that keeps usage bounded. I.e. the person's ability to eat food.
> unlimited, should the typical person really expect that to be taken away?
Absolutely. The word "unlimited" has been misused by so many companies (especially ISP and mobile) that anyone who has their eyes open should expect it to mean limited.
Also, if there is a deal that is exceedingly better than the other options, don't be surprised when the rules change later.
I never really got how proofs are supposed to solve this issue. I think that would just move the bugs from the code into the proof definition. Your code may do what the proof says, but how do you know what the proof says is what you actually want to happen?
A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction, and (hopefully) it will be proven that its invariants always hold. (This is a separate step from proving that the model corresponds to the ultimate deliverable of the formal development process, be that source-code or binary.)
Bugs in the formal spec aren't impossible, but use of formal methods doesn't prevent you from doing acceptance testing as well. In practice, there's a whole methodology at work, not just blind trust in the formal spec.
Software developed using formal methods is generally assured to be free of runtime errors at the level of the target language (divide-by-zero, dereferencing NULL, out-of-bounds array access, etc). This is a pretty significant advantage, and applies even if there's a bug in the spec.
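To give a flavour of what 'free of runtime errors' means (a hand-rolled C illustration, not SPARK or any particular tool, and the function is hypothetical): the conditions marked by the asserts below are exactly the kind of thing the prover has to discharge statically, so they can never fire at runtime for any input permitted by the preconditions.

```c
#include <assert.h>
#include <stdio.h>

/* Hand-rolled illustration: the asserts mark the runtime errors a prover
   would have to rule out statically -- NULL dereference, out-of-bounds
   access, divide-by-zero. In a verified development these become proof
   obligations rather than checks that can fire in the field. */
static double average_of_window(const double *samples, int len,
                                int start, int count) {
    assert(samples != NULL);                     /* no NULL dereference */
    assert(count > 0);                           /* no divide-by-zero   */
    assert(start >= 0 && start + count <= len);  /* stay in bounds      */

    double sum = 0.0;
    for (int i = start; i < start + count; i++) {
        sum += samples[i];          /* provably in bounds        */
    }
    return sum / count;             /* provably non-zero divisor */
}

int main(void) {
    double data[5] = {1.0, 2.0, 3.0, 4.0, 5.0};
    printf("%.2f\n", average_of_window(data, 5, 1, 3));  /* prints 3.00 */
    return 0;
}
```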
> A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction
This is the fallacy people have when thinking they can "prove" anything useful with formal systems. Code is _already_ a kind of formal specification of program behavior. For example, `printf("Hello world");` is a specification of a program that prints hello world. And we already have an abundance of tooling for applying all kinds of abstractions imaginable to code. Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness. For example, Rust has pretty much done so for a large class of bugs prevalent in C/C++.
The mathematician's wet dream of applying "mathematical proof" to computer code will not work. That said, the approach of inventing better abstractions and making it hard if not impossible for the programmer to write the wrong thing (as in Rust) is likely the way forward. I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plague C/C++.
Of course, as long as the programming language is Turing Complete you can't make it impossible for the programmer to mistakenly write something they didn't intend. No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world". Computers _already_ "do what I say", and "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intentions (by this point you'd have to wonder whether the intention is the conscious one or the subconscious ones).
> thinking they can "prove" anything useful with formal systems
As I already said in my reply to xmprt, formal methods have been used successfully in developing life-critical code, although it remains a tiny niche. (It's a lot of work, so it's only worth it for that kind of code.) Google should turn up some examples.
> Code is _already_ a kind formal specification of program behavior.
Not really. Few languages even have an unambiguous language-definition spec. The behaviour of C code may vary between different standards-compliant compilers/platforms, for example.
The SPARK Ada language, on the other hand, is unambiguous and is amenable to formal reasoning. That's by careful design, and it's pretty unique. It's also an extremely minimal language.
> `printf("Hello world");` is a specification of a program that prints hello world
There's more to the story even here. Reasoning precisely about printf isn't as trivial as it appears. It will attempt to print Hello world in a character-encoding determined by the compiler/platform, not by the C standard. It will fail if the stdout pipe is closed or if it runs into other trouble. Even a printf call has plenty of complexity we tend to just ignore in day to day programming, see https://www.gnu.org/ghm/2011/paris/slides/jim-meyering-goodb...
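For what it's worth, taking those failure modes seriously looks something like this (a sketch, not taken from the linked slides): check printf's return value, and flush stdout explicitly so a broken pipe or a full disk is actually detected before exit.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    if (printf("Hello world\n") < 0) {
        fprintf(stderr, "printf failed: %s\n", strerror(errno));
        return EXIT_FAILURE;
    }
    /* printf succeeding only means the bytes reached stdio's buffer;
       the write to the underlying fd can still fail at flush time. */
    if (fflush(stdout) == EOF || ferror(stdout)) {
        fprintf(stderr, "write to stdout failed: %s\n", strerror(errno));
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```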
> Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness
You've roughly described SPARK Ada's higher 'assurance levels', where each function and procedure has not only an ordinary body, written in SPARK Ada, but also a formal specification.
SPARK is pretty challenging to use, and there can be practical limitations on what properties can be proved with today's provers, but still, it is already a reality.
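SPARK itself is an Ada subset, but to give a flavour of "a formal specification per function" in C terms, here's roughly what the same idea looks like with ACSL annotations (the contract language used by Frama-C, a different tool). The function is hypothetical and I haven't run it through a prover; the prover's job would be to show the body meets the ensures clause and can't hit a runtime error under the requires clauses.

```c
/*@ requires n > 0;
  @ requires \valid_read(a + (0 .. n-1));
  @ assigns \nothing;
  @ ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
  @*/
int max_of(const int *a, int n)
{
    int best = a[0];
    /*@ loop invariant 1 <= i <= n;
      @ loop invariant \forall integer k; 0 <= k < i ==> best >= a[k];
      @ loop assigns i, best;
      @ loop variant n - i;
      @*/
    for (int i = 1; i < n; i++) {
        if (a[i] > best)
            best = a[i];
    }
    return best;
}
```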
> Rust has pretty much done so for a large class of bugs prevalent in C/C++
Most modern languages improve upon the appalling lack of safety in C and C++. You're right that Rust (in particular the Safe Rust subset) does a much better job than most, and is showing a lot of success in its safety features. Programs written in Safe Rust don't have memory safety bugs, which is a tremendous improvement on C and C++, and it manages this without a garbage collector. Rust doesn't really lend itself to formal reasoning, though; it doesn't even have a proper language spec.
> The mathematician's wet dream of applying "mathematical proof" on computer code will not work
Again, formal methods aren't hypothetical.
> I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plagues C/C++.
It is not. Safe languages offer rock-solid guarantees that certain kinds of bugs can't occur, yes, and that's very powerful, but it is not equivalent to full formal verification.
It's great to eliminate whole classes of bugs relating to initialization, concurrency, types, and object lifetime. That doesn't verify the specific behaviour of the program, though.
> No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world"
That comes down to the question of how you get the model right. See the first PDF I linked above. The software development process won't blindly trust the model. Bugs in the model are possible, but in practice it seems uncommon for them to go unnoticed for long, and they are not a showstopper for using formal methods to develop ultra-low-defect software.
> "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intention
It's not clear what your point is here. No software development methodology can operate without a team that understands the requirements, and has the necessary contact with the requirements-setting customer, and domain experts, etc.
I suggest taking a look at both the PDFs I linked above, by way of an introduction to what formal methods are and how they can be used. (The Formal methods article on Wikipedia is regrettably rather dry.)
I think the reason that formal proofs haven't really caught on is because it's just adding more complexity and stuff to maintain. The list of things that need to be maintained just keeps growing: code, tests, deployment tooling, configs, environments, etc. And now add a formal proof onto that. If the user changes their requirements then the proof needs to change. A lot of code changes will probably necessitate a proof change as well. And it doesn't even eliminate bugs because the formal proof could include a bug too. I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread but it seems like a lot of those checks are already integrated in build tooling in one way or another.
Yes, with the current state of the art, adopting formal methods means adopting a radically different approach to software development. For 'rapid application development' work, it isn't going to be a good choice. It's only a real consideration if you're serious about developing ultra-low-defect software (to use a term from the AdaCore folks).
> it doesn't even eliminate bugs because the formal proof could include a bug too
This is rather dismissive. Formal methods have been successfully used in various life-critical software systems, such as medical equipment and avionics.
As I said above, formal methods can eliminate all 'runtime errors' (like out-of-bounds array access), and there's a lot of power in formally guaranteeing that the model's invariants are never broken.
> I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread
No, this doesn't accurately reflect how formal methods work. I suggest taking a look at the PDFs I linked above. For one thing, formal modelling is not done using a programming language.
You're mixing up the development problem with the computational problem.
If you skip formal proof where it is supposed to be necessary just because the user can't be arsed to wait, then the software project is simply not well conceived.
I don’t think that would necessarily favor small artists. That would just favor artists who are listened to by people who don’t use Spotify a lot.
Right now someone who only streams a few songs gets a very small “vote” (assuming pay is per stream). That would make it so that everyone had the same “voting” power. But I doubt there’s much correlation between people who use Spotify less and small artists. In fact that’s probably a negative correlation if anything, and this could end up hurting local artists.
I'm not sure about Spotify, but I have seen this problem with Netflix: I'm an adult paying full price for my Netflix subscription, and for that price I would love to have a few great and expensive movies every month for my age group.
But what I usually see is lots of movies made for teens who are binge watching Netflix even though they are paying the same. Netflix has some public presentations on their algorithm, and from those it looks like they are optimizing for watches without weighting by subscription revenue per watch.