Lee is a marketer (not in title but in truth) for Cursor. He wrote a post to market their new CMS/WYSIWYG feature.
We spend ~$120/month on our CMS which hosts hundreds of people across different spaces.
Nobody manages it, it just works.
That’s why people build software so you don’t need someone like Lee to burn a weekend to build an extremely brittle proprietary system that may or may not actually work for the 3 people that use it.
Engineers love to build software, and marketers working for gen AI companies love to point at a sector and say "just use us instead!" In the end it's just shuffling monthly bills around.
But after you hand-roll your brittle thing that never gets updates (but for some reason uses NextJS), and it's exploited by the nth bug, and the marketer who built it has moved on to the next company, suddenly the cheap managed service starts looking pretty good.
Anyway, it’s just marketing from both sides, embarrassing how easily people get one-shot by ads like this.
Send this in an e-mail to tcook@apple.com. He has a team that reads his inbox for stuff like this and can magically fix issues.
I've had to do it before, also for a gift-card-related problem (different from yours), and I was contacted by a member of the Apple executive escalations team a couple days later.
Remix v3 isn’t just a new direction, it’s a full reboot. They’ve scrapped React, forked Preact, teased their own UI library, and are positioning themselves as a zero-dependency, fully self-owned framework. It’s not evolution, it’s erasure.
Let’s not forget how Remix got here. It gained traction primarily by merging into the React Router repo, inheriting 11M dependents and a decade of credibility it didn’t earn. From there, it became a thin abstraction over React Router before going dark entirely with the now-famous “taking a nap” announcement.
Now it’s back. Not with stability, not with iteration, but with a declaration, “We’re doing our own thing.” No React. No compatibility. A fresh stack, built from scratch.
Why? That’s the real question.
Shopify reportedly acquired Remix for ~$40M. Maybe this is them pushing for something big and new to justify the investment. Or maybe it’s the founders wanting full-stack ownership and long-term lock-in. Or maybe, with the pressure off post-acquisition, this is just Rich People™ messing around with legacy projects instead of supporting their users.
The leaked version of this pivot was even more aggressive, it openly criticized React and dubbed the shift a “Declaration of Independence.” The blog post toned that down, but the core direction hasn’t changed. It’s still a move away from the ecosystem that gave them a user base, toward something inward-looking and self-defined.
They claim to still be maintaining React Router. But it’s the same team now split across two projects. That means both will slow down. One is legacy support mode, the other is vaporware until proven otherwise.
Meanwhile, others in the space are embracing the now. They’re solving the hard problems of 2025 head-on. They’re building with the current stack, improving developer experience, shipping usable features, and making the future better, not by erasing the past, but by working with it.
This isn’t thoughtful evolution, it’s a hard reset with unclear motives, launched by a team with less to lose than their users.
Trust is easy to lose in devtools. This is how you lose it.
Nah, if you want a heavy backend, go with Go/Ktor/C#, not Node. If you want a light backend, use something like Hono or H3. If you want to primarily produce HTML, use Remix or Next.
Adonis/Redwood/Nest is something you will regret in a few years, because it locks you into "their" ways of doing things instead of something with replaceable components.
Admittedly, Adonis looks the most sensible of these 3. Redwood is poisoned with needless GraphQL, and Nest is written like 2008 Java.
In Adonis you can at least pick the db layer.
But even Adonis locks you into their validator instead of Zod or its cousins, uses its own Request/Response classes instead of the platform ones, has yucky inheritance and annotation magic, etc.
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
If one says "we don't use an ORM", one will incrementally create helper functions for pulling the data into the language, tweaking it, and building optional filters, and will thus have an ad hoc, informally-specified, bug-ridden, slow implementation of half of an ORM.
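That accretion usually starts innocently. A minimal stdlib-only sketch of the "accidental ORM" (the schema and helper are hypothetical, and the helper already carries the classic bug: it interpolates table and column names straight into the SQL, informally specified indeed):

```python
import sqlite3

def fetch_rows(conn, table, filters=None, order_by=None):
    """The kind of helper that accretes once a team swears off ORMs:
    optional filters, optional ordering... half an ORM, informally specified.
    (Classic bug included: table/column names are interpolated, not validated.)"""
    sql = f"SELECT * FROM {table}"
    params = []
    if filters:
        clauses = []
        for col, val in filters.items():
            clauses.append(f"{col} = ?")
            params.append(val)
        sql += " WHERE " + " AND ".join(clauses)
    if order_by:
        sql += f" ORDER BY {order_by}"
    return conn.execute(sql, params).fetchall()

# Hypothetical schema, just for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
conn.executemany("INSERT INTO users (name, active) VALUES (?, ?)",
                 [("ada", 1), ("grace", 1), ("bob", 0)])

rows = fetch_rows(conn, "users", filters={"active": 1}, order_by="name")
print([r[1] for r in rows])  # ['ada', 'grace']
```

Six months later this grows joins, eager loading, and a cache, at which point you have rewritten the part of the ORM you were trying to avoid.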
There is a time and place for direct SQL code and there is a time and place for an ORM. Normally I use an ORM that has a great escape hatch for raw SQL as needed.
Folks, if you have problems doing async work, and most of your intense logic/algorithms is a network hop away (LLMs, etc.), do yourself a favor and write a spike in Elixir. Just give it a shot.
The whole environment is built for async from the ground up. Thousands and thousands of hours put into creating a runtime and language specifically to make async programming feasible. The runtime handles async IO for you with preemptive scheduling. Ability to look at any runtime state on a production instance. Lovely community. More libraries than you might expect. Excellent language in Elixir.
> Anything you buy from AliExpress for the cost of a couple of pints is bound to be a bit crap.
This line kinda got me down because, well, last night I went out for a few pints and paid €16 for two drinks. Here we have a miracle of modern technology shipped to your door for about the same price as just going out and doing the thing people have done when socializing for the last 1,500 years.
We're subsidizing the costs of all this modern tech by heavily taxing ourselves on the things once taken as nearly the bare minimum lifestyle.
I've found that keeping my CLAUDE.md minimal (under 100 lines) yields the best results. I focus mainly on these areas:
- Essential project context and purpose
- A minimal project structure to help locate types, interfaces, and helpers
- Common commands to avoid parsing package.json repeatedly.
Regarding the specific practices mentioned:
Implementation Flow: I've noticed Claude Code often tries to write all the tests at once, then implements everything when an import fails (not true TDD). To address this, I created a TDD-Guard hook that enforces writing one test at a time, making sure each test fails for the right reason, and implementing only the minimal code needed to make the test pass.
Code quality: I've had good success automating these with husky, lint-staged, and commitlint. This gives deterministic results and frees up the context for more important information.
When Stuck: I agree that developer intervention is often the best path. I'm just afraid the specific guidance here might be too generic.
Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies, magically installed into a temporary virtual environment without you having to think about them at all.
Claude 4 actually knows about this trick, which means you can ask it to write you a Python script "with inline script dependencies" and it will do the right thing, e.g. https://claude.ai/share/1217b467-d273-40d0-9699-f6a38113f045 - the prompt there was:
Write a Python script with inline script
dependencies that uses httpx and click to
download a large file and show a progress bar
I used to think authoring code was the bottleneck. It took a solid decade to learn that aligning the technology with the business is the actual hard part, even in an extreme case like a B2B SaaS product where every customer has a big pile of custom code. If you have the technology well aligned with the business needs, things can go very well.
We have the technology to make the technology not suck. The real challenge is putting that developer ego into a box and digging into what drives the product's value from the customer's perspective. Yes - we know you can make the fancy javascript interaction work. But, does the customer give a single shit? Will they pay more money for this? Do we even need a web interface? Allowing developers to create cat toys to entertain themselves with is one realistic way to approach the daily cloud spend figures of Figma.
The biggest tragedy to me was learning that even an aggressive incentive model does not solve this problem. Throwing equity and gigantic salaries into the mix only seems to further complicate things. Doing software well requires at least one person who just wants to do it right regardless of specific compensation. Someone who is willing to be on all of the sales & support calls and otherwise make themselves a servant to the customer base.
I work for about 2k users; they do not give a shit about reactivity. Build a monolith, make it comfy, embrace page refresh (nobody gives a fuck about that in the real world), and get shit done.
I can't say anything on how work will be organised, but there's a thought I had the other day:
The rate of adoption of new APIs will slow down considerably.
LLMs only really know what they're taught, and when a new API comes out, the body of learning material is necessarily small. People relying on LLMs to do their jobs will be hesitant to code new things by hand when an LLM can do the same using older APIs much faster.
Who is going to do it then? Well, someone has to or else the API in question won't see widespread adoption.
I'm the author of this library! Or uhhh... the AI prompter, I guess...
I'm also the lead engineer and initial creator of the Cloudflare Workers platform.
--------------
Plug: This library is used as part of the Workers MCP framework. MCP is a protocol that allows you to make APIs available directly to AI agents, so that you can ask the AI to do stuff and it'll call the APIs. If you want to build a remote MCP server, Workers is a great way to do it! See:
As mentioned in the readme, I was a huge AI skeptic until this project. This changed my mind.
I had also long been rather afraid of the coming future where I mostly review AI-written code. As the lead engineer on Cloudflare Workers since its inception, I do a LOT of code reviews of regular old human-generated code, and it's a slog. Writing code has always been the fun part of the job for me, and so delegating that to AI did not sound like what I wanted.
But after actually trying it, I find it's quite different from reviewing human code. The biggest difference is the feedback loop is much shorter. I prompt the AI and it produces a result within seconds.
My experience is that this actually makes it feel more like I am authoring the code. It feels similarly fun to writing code by hand, except that the AI is exceptionally good at boilerplate and test-writing, which are exactly the parts I find boring. So... I actually like it.
With that said, there are definite limits on what it can do. This OAuth library was a pretty perfect use case because it's a well-known standard implemented in a well-known language on a well-known platform, so I could pretty much just give it an API spec and it could do what a generative AI does: generate. On the other hand, I've so far found that AI is not very good at refactoring complex code. And a lot of my work on the Workers Runtime ends up being refactoring: any new feature requires a bunch of upfront refactoring to prepare the right abstractions. So I am still writing a lot of code by hand.
I do have to say though: The LLM understands code. I can't deny it. It is not a "stochastic parrot", it is not just repeating things it has seen elsewhere. It looks at the code, understands what it means, explains it to me mostly correctly, and then applies my directions to change it.
I love that the only alternative is a "pile of shell scripts". Nobody has posted a legitimate alternative between the complexity of K8s and the simplicity of Docker Compose. It certainly feels like there's a gap in the market for an opinionated deployment solution that works locally and in the cloud, with less functionality than K8s and a bit more complexity than Docker Compose.
Docker Swarm is a good idea that sorely needs a revival. There are lots of places that need something more structured than a homemade deploy.sh, but less than... K8s.
The reason I've heard repeatedly is that a shocking percentage of folks who aren't technology producers can't separate visual quality from the "doneness" of a project. If you show some business folks something that looks like it works, they'll mentally update the project to "Nearly done!", and everything after that becomes "unreasonable delays."
Here's one of many example use cases we found for GPT4 API:
Our sales people request invoices from a potential customer. On those invoices are our competitor's services and price. Invoices can come in PDF, png, jpeg, excel, csv, email formats. Content formatting can come in random forms. Pricing breakdowns are also non-standard across invoices. We have matching services and our own prices.
The goal is to find similar services where we charge less. In the past, our sales people would spend hours combing through those invoices. We wrote a prompt for GPT4, fed in our services and prices, and asked it to find services we could potentially replace, as well as our profit margin. It took us a day to write this prompt, and the results were outstandingly accurate. We even asked it to package everything up in a PDF for us to send to the potential customers. On a Monday morning, we started on the prompt. By Tuesday morning, we had it working well enough that we were confident shipping it to a few of our sales people to test.
This will save our company hundreds of thousands each year and we can get back to the potential customer much faster than before - increasing the likelihood of a sale.
If we had to program this like normal software, it'd probably take months to get it right with dedicated engineering resources to account for new invoice edge cases. Chances are, engineering would never even prioritize this feature for our sales people because there is simply no economical way to account for so many different invoice edge cases.
I believe we're just getting started. If we get GPT6 in two years and massive improvement in inference cost and context size, it's going to change everything we do. Heck, even GPT4 with 100x context size and 100x lower cost per inference would be transformative.
If this is a bubble, I'd like to live in it. I believe that many businesses have found use cases similar to the impact of ours. But they're just not broadcasting them to the internet in order to keep it a business advantage.
You'll never see true support for horizontal scalability in Postgres, because doing so would require a fundamental shift in what Postgres is and the guarantees it provides. Postgres is available and consistent. It cannot truly be partitionable without sacrificing availability or consistency.
When an application grows to such a scale that you need a partitionable datastore it's not something you can just turn on. If you've been expecting consistency and availability, there will be parts of your application that will break when those guarantees are changed.
When you hit the point that you need horizontally scalable databases, you must update the application. This is one of the reasons NewSQL databases like CockroachDB and Vitess are so popular: they expose themselves as a SQL database but make you deal with the availability/consistency trade-offs on day 1, so as your application scales you don't need to change anything.
Context:
I've built applications and managed databases on tens of thousands of machines for a public SaaS company.
Same here. .tk was the only one back then that allowed you to have your own domain name without subdomains. My memory is that:
1. freeserver.com/~username <- This was the first URL you could have, sometimes nested inside another directory (freeserver.com/users/u/~username).
2. username.freeserver.com <- This wasn't that bad but it didn't look professional. Tripod used to do this.
3. username.fs.com <-- A service with a short domain that provided free subdomains. This was similar to 2 but shorter. Some of them allowed you to choose the domain part.
4. username.tk <-- Among all the free options, this was the best one by far.
Then we grew up a bit and started paying for domains :')
There are several tools in this space now: nix is maturing and people are realizing how useful it can be for dev envs.
* [devenv](https://devenv.sh) - I am using it and loving it, but worried that development is not moving forward
* [devbox](https://www.jetpack.io/devbox)
* [daytona](https://www.daytona.io/docs/usage/workspaces/)
* [devshell](https://github.com/numtide/devshell)
* [bob.build](https://bob.build) - more focused on the build
I am glad that flox is pushing its development forward. Does flox have a way to run services? With `devenv` I run my database with `devenv up`.
20 years ago i discovered a project by a university in germany that implemented a well thought through object storage with connections to all sorts of messaging protocols.
it was used as a platform to research collaboration models.
fortunately, at the time some german academic institution offered grants to universities for publishing their projects as Free Software or Open Source, and so this platform was released under the GPL.
what is interesting about the platform is that it not only stores objects like, say, mongodb, but it also implements user and group management, a hierarchical access control system down to the object level, and messaging. further, messaging implements pretty much all communication protocols out there: IRC, XMPP, SMTP, IMAP, and of course HTTP, FTP and more. and no, to get these it does not require a bridge to independent implementations of those protocols; they are implemented natively in the server, and SMTP or IMAP or the web interface will all directly access the same stored object.
what's more, the platform allows you to upload your own modules to extend it, which are stored in its object storage and can be updated at runtime. it has been used by the university to host various courses, with more than 10000 users all handing in assignments at the same time.
i have been using this platform for my own websites pretty much ever since.
the university stopped development on the project more than a decade ago, but i forked it, and eventually added a REST API and implemented multi domain hosting on it so that i could serve multiple websites from the same service.
the code is old and needs updating. TLS support is outdated, which needs to be fixed before the project can be recommended to anyone else. the built-in web templating system uses XSLT, which should be replaced with easier-to-use alternatives (or simply ignored, as i do, by building all websites as SPAs using the REST API instead). the REST API, too, should be updated or replaced by GraphQL.
but aside from these problems the platform is usable like Backend As A Service. ever since adding the REST API, i have not done any custom backend coding, as the platform already provides any features i have ever needed.
the challenge throughout all this time has been to get other developers interested in using such a platform, instead of building yet another CRUD backend from scratch.
i don't currently have time to do any work on it (even its website is down), as i need to focus on paid engagements, but i have not given up hope to be able to revive it and make it popular some day.
MobX is how you get spaghetti code and state changes everywhere. That is why we (our company) moved away from the observer pattern that MobX and RxJS use.
I'm always surprised by the creativity of these personal websites. The web is so utilitarian and marketing oriented that I forget that a web page is a blank canvas ready for artistic expression.
I love React but I use no state management, apart from useState locally within components.
State management in React is a major source of pain and complexity, and if you build your application using events then you can eliminate state entirely.
Most state management in React is used to fill out props and get the application to behave in a certain way - don't do it - too hard, drop all that.
Here is how to control your React application, keep it simple, and get rid of all state except useState: drive behaviour with custom events instead of threading props around.
Trust me - once you switch to using custom events you can ditch all that crazy state and crazy usage of props to drive the behaviour of your application.
This looks really nice, and it's also the first time I hear about Good Enough. Big fan of the Basecamp-ish design with "real" large buttons. I was considering https://micro.blog/ in the past but Pika looks a bit more polished, especially the simple editor.
If someone were to move their Hugo blog to Pika, do you offer a way to import existing blogs, or for example set redirect URLs?