This post is for beginners: the fresh graduates and junior developers who are just starting to think about performance. These are the things I wish someone had told me early in my career. Consider it a crash course in the fundamentals — practical, opinionated, and grounded in real-world software.
"When I was a child in Sri Lanka, I ended up memorizing the landline numbers of all my close relatives. To this day I remember them. The moment I got a phone where my contacts could be saved, I stopped remembering numbers."
If we look at it rationally, phone numbers were an extra complexity layer introduced by technology. Smartphones solved that problem, and rightly so: you just have to remember the person's name.
> If we look at it rationally, the phone numbers were an extra complexity layer introduced by technology
I don't think that's any more rational than suggesting smartphones supplanted memory training; the phone number is an implementation detail, while the general case is the lack of practice in memorizing important information. Smartphones created a dependency on themselves and solved problems that mostly weren't problems, or were tertiary optimization problems at best, whereas phone technology actually solved a fundamental problem: high-latency communication via mail.
If I ask myself whether I'm generally better off having my contact numbers in my smartphone versus before (itself a fictitious premise, since mobile phones stored contacts before they got smart), the answer is definitely "no". The distribution of people I call isn't so varied as to make memorizing their numbers difficult, but my lack of inclination to do even that means I don't remember even the most common case, and I always need to have the phone or I'm screwed.
It's hilarious to watch this play out with drivers who are entirely dependent on a Maps app for directions in their own city. They don't remember basic routes or address blocks, and sometimes can't even manage without the phone speaking the directions to them. It wasn't really a problem before: you'd just figure it out most of the time, or ask someone for an approximation.
I would use Deep Research mode outputs. Sometimes I run several of these in parallel on different models, then compare the results to catch hallucinations. If I wanted to publish that, I would also double-check each citation link.
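Part of that comparison can be done mechanically. A toy sketch of the idea, with made-up sample "reports" standing in for real model output (the file names and the heuristic that single-model citations deserve extra scrutiny are my own assumptions):

```shell
#!/bin/sh
# Toy sketch: cross-check citation URLs across deep-research runs
# from two different models. Sample "reports" stand in for real output.
printf 'claims X, cites http://example.com/a http://example.com/b\n' > report_model1.txt
printf 'claims X, cites http://example.com/a http://example.com/c\n' > report_model2.txt

# Extract and normalize the URLs each report cites.
grep -oE 'https?://[^[:space:]]+' report_model1.txt | sort -u > cites1.txt
grep -oE 'https?://[^[:space:]]+' report_model2.txt | sort -u > cites2.txt

# URLs both runs cite corroborate each other; URLs only one run
# cites are the first candidates for a manual link check.
comm -12 cites1.txt cites2.txt                 # shared citations
comm -3  cites1.txt cites2.txt | tr -d '\t'    # cited by one model only
```

This only catches disagreements between runs, of course; a link that every model hallucinates identically still needs the manual double-check.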
I think the idea is sound; the potential is a much larger AI Wikipedia than the human one. Could it cover all known entities, events, concepts, and places? All scientific publications? It could get 1000x larger than Wikipedia and be a good pre-training source of text.
When covering a topic, I would not make the AI agent try to find the "Truth", but just analyze the distribution of information out there: what are the opinions, and who holds them? I would also test a host of models in closed-book mode and include an analysis of how AI covers the topic on its own; that's useful information to have.
This method has the potential to create much higher quality text than the usual internet scrape, in large quantities. It would be comparative-analysis text connecting across many sources, which would be better for the model than training on separate pieces of text. Information needs to circulate to be understood better.
Their software was so bad. I had purchased a license, but when I started using it, there was a setting where clicking a certain button/menu option would reset the license key. It happened (because I'm curious, right?), and then my copy was no longer valid. I called their customer support, and that person treated me like a criminal trying to lie the whole way through. I explained numerous times and sent them purchase receipts and screenshots. After some attempts they issued a new license key (they might have verified their own stupid mistake). On top of that, the version I bought did not include a delete option; I had to purchase that feature. This was a long time ago and I no longer use this sh*.
I used to license Acronis as well, and I share your view of the software quality. Since then I've moved on to Macrium Reflect: better UI, faster, fewer glitches. It paid off about 9 months ago when a 1TB 970 Pro NVMe failed; it restored flawlessly in about 50 minutes.
I can also attest that Acronis is crap and Macrium's software is very high quality. Too bad, though, that the Reflect Free edition has been discontinued and the paid edition now locks you into a yearly subscription.
And even if you're scared of them removing the ability to restore for free, they released code to handle the new mrimgx format under the MIT license: https://github.com/macrium/mrimgx_file_layout
Is it just me, or do others also find that the 2020-2025 category icons were always the best looking compared to the post-2025 ones, except for the podcasts icon? (And I don't give a damn about podcasts :D )
[Because linking to cool zsh features seems to be a hobby of mine…]
If you're using zsh for this you can access terminfo directly¹, instead of hardcoding escapes (for example, "echoti cup $snowflakes[$i] $i"²) or shelling out to tput. You can also use zselect to replace sleep. Extra points for "echoti smcup" to use the secondary screen when available, instead of wiping the screen.