I'm working on a large (at least 300k+ loc) Django code base right now and we have 32 direct dependencies. Mostly stuff like lxml, pillow and pandas. It's very easy to use all the nice Django libs out there but you don't have to.
I was talking about total deps, not direct. By installing something like Celery, you get 8-10 extra dependencies that, in turn, can also have extra deps. And yeah, extra deps can conflict with each other as well.
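Those transitive chains are easy to inspect. pipdeptree does this properly; below is a rough stdlib-only sketch (the package name "pip" is just an example, and the requirement-string parsing is deliberately simplified):

```python
# Rough sketch: walk an installed distribution's declared requirements
# recursively to count its transitive dependencies. Parsing of requirement
# strings is crude; a real tool (pipdeptree) handles markers properly.
from importlib import metadata

def transitive_deps(name, seen=None):
    """Recursively collect (lowercased) names of packages a distribution pulls in."""
    seen = set() if seen is None else seen
    try:
        reqs = metadata.requires(name) or []
    except metadata.PackageNotFoundError:
        return seen  # not installed locally, nothing more to walk
    for req in reqs:
        if "extra ==" in req:
            continue  # skip optional extras
        # crude parse: "requests (>=2.0) ; python_version ..." -> "requests"
        dep = req.split(";")[0].split("[")[0]
        for ch in " <>=!~(":
            dep = dep.split(ch)[0]
        dep = dep.strip()
        if dep and dep.lower() not in seen:
            seen.add(dep.lower())
            transitive_deps(dep, seen)
    return seen

# pip usually vendors its dependencies, so this set is often small or empty:
print(sorted(transitive_deps("pip")))
```

Running it on something like celery (if installed) makes the 8-10 direct requirements and their own requirements visible at a glance.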
That is obviously true but doesn't mean as much as you seem to think. Washing laundry is also not much work, but it adds up to a lot over the years, especially if you skip a few weeks every once in a while. That is not an excuse not to do it.
The answer is the same in both cases: acquire some discipline and treat maintenance with the respect it deserves.
It is easy, and people tend to do what is easy. It takes more effort to minimise dependencies. Your boss or your client will not even notice.
Obviously there are some dependencies that you cannot easily avoid (like the things you mention). On the other hand there is a lot of stuff used that is not that hard to avoid - things like wrappers for REST APIs are often not really necessary.
I've tried Greptile and it's pretty much pure noise. I ran it for 3 PRs and then gave up. Here are three examples of things it wasted my time on in those 3 PRs:
* Suggested silencing an exception instead of letting it crash and burn, for "style" (the potential exception was handled earlier in the code, but it did not manage to catch that context). When I commented that silencing the exception could lead to uncaught bugs, it replied "You're absolutely right, remove the try-catch", a try-catch I of course never added
* Flagged our use of Python 3.14 as a logic error because "python 3.14 does not exist yet"
* "Review the async/await patterns
Heavy use of async in model validation might indicate these should be application services instead." Whatever this vague sentence means; I'm not sure if it is suggesting we change the design pattern used in our entire code base.
Also the "confidence" score added to each PR being 4/5 or something due to these irrelevant comments was a really annoying feature IMO. In general AI tools giving a rating when they're wrong feels like a big productivity loss as then the human reviewer will see that number and think something is wrong with the PR.
--
Before this we were running Coderabbit, which worked really well and caught a lot of bugs / implementation gotchas. It also had "learnings" that it referenced frequently, so it did not keep commenting on intentional things in our code base. With Coderabbit I found myself wanting to read even the low-confidence comments, since they were often useful (so: too quiet rather than too noisy). Unfortunately our entire Coderabbit integration just stopped working one day and since then we've been in a long back and forth with their support.
--
I'm not sure what the secret sauce is but it feels like Greptile was GPT 3.5-tier and Coderabbit was Sonnet 4.5-tier.
Took things from "pure noise" to a world where, if you say there's a bug in your patch, people's first question will be "has the AI looked at it?"
FWIW in my case the AI has never yet found _the_ bug I was hunting for but it has found several _other_ significant bugs. I also ran it against old commits that were already reviewed by excellent engineers and running in prod. It found a major bug that wasn't spotted in human review.
Most of the "noise" I get now just leads me to say "yeah I need to add more context to the commit message". E.g the model will say "you forgot to do X" when X is out of scope for the patch and I'm doing it in a later one. So ideally the commit messages should mention this anyway.
I am a member of the CodeRabbit tech support team. Would you be able to provide me with the ticket number you have open with us? I'd be happy to get this escalated internally so we can get this resolved for you ASAP.
> I tried watching a 15 min yt video without adblock and it had 5 ad breaks with some unskippable ads.
Yeah - I watch most of my YouTubes on the Apple TV and the ads are a pestilence. Sometimes it'll be a 50s pre-roll[1] with multiple 30-50s breaks for a 10m video.
Luckily there exist[0] many fine technologies that let you view them without ads via something like Infuse with a DLNA server if you're that way inclined.
[0] Currently. YT-DLP is fighting the good fight but I don't know how much longer they'll be able to stay ahead. But then I'll just stop watching YouTube, really, because it's a horror show without adblock/circumventions.
[1] The video doesn't appear in your history until the pre-roll has finished. So if you can't be arsed sitting through a 50s pre-roll just that second and - at least on the Apple TV - you've not clicked on the video from your homepage / subscriptions, good luck finding it again unless you remember the name + channel etc. (which it also won't properly show you until after the pre-roll!)[2]
Riksbanken has been pushing for cash payments too. Personally I think it's too little, too late. The culture in Sweden has already shifted to purely digital.
Sweden has also run multiple pilots of a digital currency issued by the state. This might be an interesting alternative that doesn't give up control of our currency and privacy to banks and cc companies. It's also supposed to work offline. https://www.riksbank.se/globalassets/media/rapporter/e-krona...
That used to be semi-common for smaller transactions in Sweden but was made illegal. Not sure why, probably to fight tax avoidance.
At this point the cost of handling cash is way higher than handling cards, and as hardly anyone in Sweden ever uses cash it's no longer relevant anyway. Many shops (maybe even most?) now don't accept cash at all, to avoid the handling cost.
There are numerous things still missing in terms of async support. Most notable for me is DB transaction support, the lack of which leads to most non-safe endpoints running on the shared sync_to_async thread, and to me having to separate my code into one async function calling another sync function wrapped in sync_to_async.
In fact, if you look at the source, there are a lot of async methods in the framework itself that just straight up call sync_to_async, e.g. caching. This doesn't matter as much, as hopefully proper async support will be added eventually. But I think believing your requests won't block just because you're using async is a bit naive at this point in Django, and the async implementation effort has been going on for years.
Not to mention that the majority of third party libraries lack support for async so you'll probably have to write your own wrappers for e.g. middleware.
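For what it's worth, the split described above looks roughly like the sketch below. Django's asgiref.sync.sync_to_async does essentially this thread-hop; to keep the example self-contained I use the stdlib asyncio.to_thread instead, and the view/function names are made up:

```python
# Sketch of the async-view-calls-sync-function pattern (names are
# illustrative, not real Django API).
import asyncio

def update_order(order_id):
    # In real code this would open transaction.atomic() and touch the ORM;
    # here it just simulates the blocking, transactional work.
    return f"order {order_id} updated"

async def order_view(order_id):
    # The async "view" can't run ORM transactions natively, so it hops to a
    # worker thread for the sync part and awaits the result. asgiref's
    # sync_to_async wraps the function the same way, up front.
    result = await asyncio.to_thread(update_order, order_id)
    return result

print(asyncio.run(order_view(42)))  # order 42 updated
```

The cost is that every such request still occupies a thread for the duration of the transaction, which is exactly why async alone doesn't stop requests from blocking.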
> But I think believing your requests won't block just because you're using async is a bit naive at this point in Django
TBH personally I have yet to work on any professional async Python project (Django based or not) which did not have event loop pauses due to accidental blocking IO or CPU exhaustion.
I take your point fully though that a lot of Django's "async" methods are really using a thread pool. (True for much closed source async code as well!)
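A minimal illustration of such an accidental event-loop pause, using only the stdlib: one task blocks the loop with time.sleep, so a concurrent ticker's 50ms timer fires hundreds of milliseconds late.

```python
# One coroutine doing blocking IO (simulated with time.sleep) stalls every
# other coroutine on the same event loop.
import asyncio
import time

async def ticker(ticks):
    for _ in range(3):
        await asyncio.sleep(0.05)  # wants to tick every 50ms
        ticks.append(time.monotonic())

async def accidentally_blocking():
    time.sleep(0.3)  # blocks the loop: nothing else runs meanwhile

async def main():
    ticks = []
    start = time.monotonic()
    await asyncio.gather(ticker(ticks), accidentally_blocking())
    # The first tick lands after roughly 0.3s instead of ~0.05s because the
    # blocking call monopolised the loop.
    return ticks[0] - start

print(asyncio.run(main()))
```

The same stall happens with a synchronous ORM query or a CPU-heavy loop inside a coroutine, which is why these pauses are so easy to introduce by accident.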
I think one reason the eReader market isn't big is that the devices are so good and long-lived. I bought my Sony PRS-T2 in 2012 and am still using it to this day. It has battery life measured in weeks, storage space for 100+ books, and works just as well as when I bought it. It's really hard for me to justify buying a new one when the only interesting "new" tech is backlight, and I guess it's the same for most eReader owners.
The ~90e I paid for it back in 2012 was for sure good value!
I had the same issue with my Kobo, which I've had for about 8 years. I was looking at the replacements, but they're quite a lot of money for...what? Slightly faster page turns? A screen that might look a little bit better?
I added this to personal instructions to make it less annoying:
• No compliments, flattery, or emotional rapport.
• Focus on clear reasoning and evidence.
• Be critical of the user's assumptions when needed.
• Ask follow-up questions only when essential for accuracy.
However, I'm kinda concerned about crippling it by adding custom prompts. It's hard to know how to use AI efficiently. But the glazing and random follow-up questions feel more like the result of some A/B-tested UX research than of improving the model's output.
I often ask Copilot about phrases I hear that I don't know or understand, like "what is a key party", where I just want it to define the term, and it will output three paragraphs ending with some suggestion that I am personally interested in it.
It is something the local models I have tried do not do unless you are being conversational with them. I imagine OpenAI gets a few more pennies if they add open-ended questions to the end of every reply, and that's why it's done. I get annoyed when people patronize me, so too I get annoyed at a computer.