Git is only better in the same sense that Windows is better than Linux: I really need the tools I use, and those only run on Windows. I prefer Linux for a lot of things, including daily web development, but the experience of developing Unreal Engine on Linux is lacking. I love Fossil and the many features it provides by itself and would use it for everything, except that it doesn't have a (working) IntelliJ plugin for integration, a good GitHub alternative, etc.
I use Udemy courses all the time; they're great for compliance, game engine training, and insightful soft-skills training. Good instructors have insight and comprehensive coverage that questioning an LLM does not give you.
Two's a coincidence, three's a pattern; I guess we will have to wait until next month to see if it becomes one. Was there a particular aspect of React Server Components that made it easy for this problem to appear? Would it have been caught or avoided in another framework or language?
Thanks for this! I've been looking for a good guide to an LLM based workflow, but the modern style of YouTube coding videos really grates on me. I think I might even like this :D
This one is a bit old now, so a number of things have changed (I mostly use Claude Code now, dynamic context (Skills), etc.), but here's a brief TL;DR I did early this year: https://www.youtube.com/watch?v=dDSLw-6vR4o
I'm curious what people think of quotes like this. It makes an explicit, falsifiable prediction, and that prediction is false; there are so many reasons why someone could have predicted it would be false. Is it just optimistic marketing speak, or do they really believe it themselves?
Everybody knows that marketing speak is optimistic, which means that if you give realistic estimates, people are going to assume those are optimistic too.
Perhaps that was the ideal when it was laid out, but the reality of the common implementation is that planning is dispensed with. It gives some management a great excuse to look no further than the next Jira ticket, if that.
The ideal implementation of a methodology is only relevant for the small number of managers who would do well with almost any methodology, because they take the initiative to improve whatever they are doing. The best methodology for wide adoption is the one that works okay for the largest number of managers, the ones who struggle to take responsibility or initiative.
That is to say, the methodology that still requires management to take responsibility in its "lowest energy state" is the best one for most people, because they will migrate to the lowest energy state. If the "lowest energy state" allows management to do almost nothing, then they will do almost nothing. If the structure allows being clueless, a lot of managers will migrate to pointy-haired Dilbert-manager cluelessness.
With that said, I do agree with getting products to clients quickly, getting feedback quickly, and being "agile" in adapting to requirements; but having a good plan based on actual knowledge of the requirements is important. Strict adherence to any extreme methodology is probably going to fail in edge cases, so the judgement of when to apply which methodology is a characteristic of good management. You've got to know your domain, know your team, and use the right tool for the job.
> the reality of the common implementation is that planning is dispensed with. It gives some management a great excuse to look no further than the next Jira ticket, if that.
Maybe I'm just lucky, but I've never experienced this. If anything, the companies I've worked for didn't do anything particularly agile, and were often deliberately trying to change habits and workflows to be more agile. The non-agile habits often came down from engineering managers who wanted to know how the whole project was going to go on the day it started, so they could report upwards with a delivery timeline.
I hear you. I feel like my personal experience has definitely influenced my view. I've seen managers who want a timeline and a deadline from day 1, but who don't want to put any effort into thinking through how they could allocate resources to make that happen, or what was required of them. So they just ask someone else for a calendar and ask if someone's already made a ticket.
Such great case studies of how LLM coding will make all of your employees 1000x more productive at coding, design, and UX. They really are leading the way showing us into the brighter future of AI software /s
Those are more recent examples, but I think Germany is still a more visceral example for a lot of Western nations because Germany was a high tech, educated industrial nation that was hit with such massive problems from government policy. It's closer to home. Other countries are (wrongfully) easier to dismiss as being just too different from our wealthy and enlightened selves.
Also, Germany was a great power seriously challenging the greatest power of the time, the British Empire. They fell a lot further than those other examples.
People get confused: the hyperinflation in Germany was right after the end of WWI, when Germany's economy was collapsing. Turns out you can't fix that with monetary policy.
A point: economists like people to believe that hyperinflation led to Hitler, when it was the austerity policies at the start of the Great Depression, ten years later, that led to the Nazis winning in '33. The same austerity policies in the US led to FDR and the Democrats winning.
Unfortunately we're looking more like Hogan's Heroes than Germany. Both Zimbabwe and Argentina (and, quite frankly, Venezuela) were well developed before they went down the road of disastrous policies.
Servers I set up on OpenBSD just keep working, and the patch/upgrade process is easy. Servers I set up on Ubuntu break and have weird patching issues. Maybe it's something I'm doing, but I sure do like that OpenBSD seems a lot easier to just keep solid and working indefinitely.
Debian (provided you don't just dump in a bunch of 3rd-party repos) just upgrades cleanly; we have hundreds of servers that run unattended-upgrades and get upgraded to the new Debian version every 2 years.
I used to have this Debian box (which was a PowerMac G4) in my hallway. It had a 1000+ day uptime, back when this kind of uptime was still cool, or at least I thought it was. At some point it was two major versions behind, and I decided to dist-upgrade it. To my amazement, the upgrade went flawlessly, and the system booted without problems afterward. Debian is just great like that.
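For anyone who hasn't done one, the usual dance is roughly this (a sketch with example codenames; the release notes for each Debian version are the authoritative steps, and officially you're only supposed to jump one release at a time):

    # point apt at the next release (codenames here are just examples)
    sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
    apt update
    apt upgrade        # first upgrade what can be upgraded without removals
    apt full-upgrade   # the actual release jump (same as apt-get dist-upgrade)
    apt autoremove     # drop packages the new release no longer needs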
Not the Grand Poster, but we use the Debian package "unattended-upgrades" to install security updates automatically on our servers, and send an email if a reboot is required to complete the process (kernel upgrade).
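For reference, the knobs involved live in /etc/apt/apt.conf.d/ and look roughly like this (a sketch, not necessarily the exact setup described above; the exact security origin pattern differs between Debian releases):

    // 50unattended-upgrades (excerpt)
    Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
    };
    Unattended-Upgrade::Mail "root";
    Unattended-Upgrade::Automatic-Reboot "false";  // keep reboots manual; the report mail notes when one is pending

    // 20auto-upgrades: turns on the daily run
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";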
Unattended upgrades could be configured to install more than the security release. Even with the stable release, one can add the official APT source for the Debian backports.
Back to OpenBSD... realize that it has no "unattended upgrades" capability. Until syspatch(8) appeared in 6.x you had to download patches and rebuild kernel and userland to get security fixes. Today, you could run syspatch(8) in a cron job but that only covers the base system. You'd need to handle any installed packages separately. And only the current and immediately previous release are supported at all. There are two releases a year, so you have to upgrade every ~6 months to stay in the support window.
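If you wanted to approximate unattended upgrades anyway, something like this in root's crontab is about as far as it goes (a sketch; kernel patches still need a reboot to actually take effect, and pkg_add -u only helps while the release is within its support window):

    # nightly: apply base-system patches, then update installed packages
    30 3 * * *  /usr/sbin/syspatch && /usr/sbin/pkg_add -u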
Fortunately, with the introduction of the syspatch(8) and sysupgrade(8) utilities this is much simpler than it used to be. And, release numbers are just sequential with one point number, i.e. 7.0 was just the next release after 6.9, nothing more is implied by the "major" number ticking up.
Just curious, how do you manage service restarts? Just restart as the update finishes?
I think I'm a bit scarred from when a Docker upgrade took my entire stack down because of an API mismatch with Portainer, so I'm trying to be present during upgrades.
Edit: I’m talking about Debian of course. I’m not familiar with OpenBSD.
Debian still has security fixes, and point releases. unattended-upgrades is the package that automates their install.
I think you can also do unattended release upgrades by using the 'stable' release alias in sources. That will probably result in some stuff breaking since there will be package and configuration churn.
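Concretely, it's just a question of which suite name the sources line tracks (codenames here are examples):

    # follows whatever is currently stable; jumps to the new release on release day
    deb http://deb.debian.org/debian stable main
    # pinned to one codename; only moves when you edit it
    deb http://deb.debian.org/debian bookworm main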
Well, I would recommend using a better Linux distribution than Ubuntu.
I run just lighttpd these days; I used to run httpd before they decided the configuration must become even more complicated. I don't have any issues with lighttpd (admittedly only a few people use it; most seem to use nginx now).
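For the curious, a minimal config is about this much (a sketch against lighttpd 1.4.x; the directive names are real, the paths and types are just example values):

    server.document-root = "/var/www/htdocs"
    server.port          = 80
    index-file.names     = ( "index.html" )
    mimetype.assign      = ( ".html" => "text/html", ".css" => "text/css", ".js" => "text/javascript" )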
Ubuntu seems to have a trend of taking something that works under Debian and somehow messing it up. Upgrades are one thing, but for a while we had separate instructions on how to make YubiKey tokens work under each version of Ubuntu (we used them as smartcards for SSH key auth), while the Debian instructions stayed the same...
Updates were also hit and miss on users' desktop machines; for a while Ubuntu had a nasty habit of installing new kernel upgrades... without removing old ones, which eventually made /boot run out of space, and the poor user usually had to give the machine to the helpdesk to fix.
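The usual cleanup once /boot fills up is along these lines (a sketch; recent Ubuntu releases handle this better and autoremove is normally enough now):

    # see which kernel packages are installed, then let apt drop the obsolete ones
    dpkg -l 'linux-image-*' | grep '^ii'
    sudo apt autoremove --purge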
Tho tbh, most of the package problems in any distro boil down to "a user installed a 3rd-party repo that doesn't have well-structured packages and it got messy".
I have used lighttpd in the past, but I've been using nginx largely because other people chose it and I got used to it.
Now I'm more in a position to pick for myself, and I wondered how you feel about the pros and cons of lighttpd? I remember quite liking its config at the time.