Bad is relative. I usually don't bother with the mpv/yt-dlp combo (except for the rare local download for backup/convenience purposes) and just let it play in the browser. Not logged in, because I no longer have a Google account. It's smooth, and plays instantly when opened in a new tab. I let it keep its cookies, and don't erase them, so I mostly get the content I like. For things I'm unsure how they'd affect the algorithm, or that look like AI-slopped music, I just open them in a private window. Works for me with just uBO and some additional list subscriptions.
In principle any model can do these. Tool use just means detecting something like "I should run a DB query for pattern X", and structured output is even easier: just reject output tokens that don't match the grammar. The only questions are how well the model is trained for it, and how well your inference environment takes advantage of that.
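The "reject tokens that don't match the grammar" idea can be sketched in a few lines. This is a toy illustration, not a real inference engine: the "grammar" is just a finite set of allowed strings, and the "model" is a made-up scoring function. Real systems mask logits against a grammar automaton in the same spirit.

```python
# Toy sketch of grammar-constrained decoding (assumption: fake scores,
# not a real model). At each step, candidate tokens are ranked by the
# model's preference, and any token that would make the output stop
# being a valid prefix of the grammar is rejected.

ALLOWED = {"yes", "no", "maybe"}  # the "grammar": a finite set of outputs

def valid_prefix(s: str) -> bool:
    """True if s could still grow into an allowed string."""
    return any(a.startswith(s) for a in ALLOWED)

def constrained_decode(score, vocab, max_len=10) -> str:
    out = ""
    while out not in ALLOWED and len(out) < max_len:
        # rank candidates by the (toy) model score, highest first
        candidates = sorted(vocab, key=lambda t: score(out, t), reverse=True)
        for tok in candidates:
            if valid_prefix(out + tok):  # reject grammar-breaking tokens
                out += tok
                break
        else:
            break  # no valid continuation exists
    return out

# toy "model": blindly prefers tokens later in the alphabet
vocab = list("abcdefghijklmnopqrstuvwxyz")
toy_score = lambda ctx, tok: ord(tok)

print(constrained_decode(toy_score, vocab))  # prints "yes"
```

Even though the toy model would happily emit "zzz...", the grammar mask forces every step onto a valid prefix, so the output is always well-formed. That is why structured output works regardless of how the underlying model was trained.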
In just a couple of years, we went from a single closed-source LLM outputting tokens slower than one can read, to dozens of open-source models, some specialized, able to run on a mobile device and output tokens faster than one can read.
The gap between inference providers and running at the edge will always exist, but it will become less and less relevant.
What OpenAI did is like offering accelerated GPUs for 3D gaming before anyone could set one up at home.
Are people still buying a better gaming experience by renting cloud GPUs? I recall some companies, including Google, offering that. It took a few years for investors to figure out that people would rather just run games locally.
We aren't dealing with gamers here, but I think the analogy is valid.
What about markdown do you feel limits you in your writing process?
The beauty of markdown is that it's standardized. If you find yourself midway through the book and feel a need to change formats, it's easy enough to parse and reformat.
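The "easy enough to parse and reformat" claim is easy to demonstrate. Here is a minimal sketch, handling only a tiny subset of Markdown (ATX headings, bold, italics) and emitting HTML; a real conversion would use a proper CommonMark parser, but the point is how little structure there is to deal with.

```python
# Minimal sketch: re-emit a tiny subset of Markdown as HTML.
# Assumption: only ATX headings, **bold**, and *italics* are handled.
import re

def md_to_html(md: str) -> str:
    html_lines = []
    for line in md.splitlines():
        # ATX headings: "# Title" -> <h1>Title</h1>
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            html_lines.append(f"<h{level}>{m.group(2)}</h{level}>")
            continue
        # inline emphasis: bold first, then italics
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
        html_lines.append(f"<p>{line}</p>" if line else "")
    return "\n".join(html_lines)

print(md_to_html("# Chapter 1\nIt was a **dark** and *stormy* night."))
```

Swapping the output format is a matter of changing a few string templates, which is exactly why drafting in Markdown keeps your options open.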