Been saying this for years about frontend environments too. My genx.software does the same thing with declarative HTML attributes instead of imperative JavaScript config. Zero setup, zero sync bugs between what you declare and what you get.
Preventive measure: get Scrum Master certified yourself. The training can even be fun with a good instructor.
Then when professional managers come sniffing around muttering about Scrum, you say: "I am a certified Scrum Master. Our process is already 100% Scrum."
Interesting strategy. I've thought about getting certified just to have the credibility. The irony of a certified Scrum Master saying "we don't need full ceremony right now" would make it harder to dismiss as "doesn't understand Agile."
The key to success in the world of coding assistants is to be a good manager. The AI is a very fast, but also very stupid, programmer. It will make a ton of architectural mistakes, and more often than not it will pick the most mid solution possible. If you are a good code architect, and if you can tell the difference between a mid pattern and a good one, and force the AI to do things right, you will rise to the top.
You describe what you want, not the 47 imperative steps to get there. Zero chance to call methods in the wrong order or manage intermediate build state that should never be your problem anyway.
IMHO All libraries should use declarative interfaces.
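A minimal sketch of the contrast, with made-up names (TableBuilder, build_table); nothing here is a real library, just the imperative-vs-declarative shape:

```python
class TableBuilder:
    """Imperative style: the caller must invoke methods in the right order."""
    def __init__(self):
        self.name = None
        self.columns = []

    def set_name(self, name):
        self.name = name

    def add_column(self, col):
        if self.name is None:
            raise RuntimeError("set_name() must be called before add_column()")
        self.columns.append(col)

    def build(self):
        return {"name": self.name, "columns": list(self.columns)}


def build_table(spec):
    """Declarative style: the caller hands over a description;
    ordering and intermediate state are the library's problem."""
    return {"name": spec["name"], "columns": list(spec["columns"])}


# Imperative: three calls, one wrong ordering away from a RuntimeError.
b = TableBuilder()
b.set_name("users")
b.add_column("id")
imperative = b.build()

# Declarative: one call, no order to get wrong.
declarative = build_table({"name": "users", "columns": ["id"]})

assert imperative == declarative
```

The declarative version can't be misused halfway: there is no intermediate state for the caller to corrupt.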
This is exactly right. Client-side processing for sensitive data like contacts eliminates the trust problem entirely. Most migration tools want you to upload your entire contact database to some random server when the extraction can happen locally in milliseconds.
Pure client-side tools like this should be an option for any personal data processing.
This is what I have done for my users: an option to import their emails to an in-browser db for guaranteed privacy.
Building a production app on Turso now. No bugs or compatibility issues so far. The SQLite API isn't fully implemented yet, so I wrote a declarative facade that backfills the missing implementations and parallels writes to both Turso and native SQLite: it gives me integrity checking and fallback while the implementation matures.
IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust. Turso's Limbo announcement says exactly that: they couldn't confidently make large architectural changes without access to the tests. The rewrite lets them build Deterministic Simulation Testing from scratch, which they argue can exceed SQLite's reliability by simulating unlikely scenarios and reproducing failures deterministically.
Having seen way too many "we're going to rewrite $xyz but make it BETTERER!!" efforts, I don't give this one much chance of success. SQLite is a high-quality product with a quarter-century of development history and a huge amount of testing, both by the devs and via public use. So this let's-reinvent-it-in-Rust effort will have to beat an already very good product that's had a staggering amount of development and testing put into it. If the devs do manage to get through it all, they'll end up with about the same thing as the existing product, but written in a language that most of the SQLite targets don't support. I just can't see this going anywhere outside of hardcore Rust devotees who want to use a Rust SQLite even though it still hasn't got past the fixer-upper stage.
I needed SQLite as a central system DB but couldn't live with single-writer. So I built a facade that can target SQLite, Postgres, or Turso's Rust rewrite through one API.
The useful part: mirroring. The facade writes to two backends simultaneously so I can diff SQLite vs Turso behavior and catch divergences before production. When something differs, I either file upstream or add an equalizing shim.
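A rough sketch of the mirroring idea, assuming plenty: two in-memory SQLite connections stand in for the two real backends, and the class and method names (MirrorDB, execute, diff) are mine, not the actual facade:

```python
import sqlite3


class MirrorDB:
    """Applies every write to two backends and diffs their query results."""

    def __init__(self):
        # In a real setup these would be different engines (e.g. SQLite
        # and Turso); two in-memory SQLite databases keep the sketch runnable.
        self.primary = sqlite3.connect(":memory:")
        self.shadow = sqlite3.connect(":memory:")

    def execute(self, sql, params=()):
        # Writes go to both backends so they stay in lockstep.
        self.primary.execute(sql, params)
        self.shadow.execute(sql, params)

    def diff(self, sql, params=()):
        # Rows present in one backend but not the other.
        # An empty list means the backends agree for this query.
        a = self.primary.execute(sql, params).fetchall()
        b = self.shadow.execute(sql, params).fetchall()
        return [r for r in a if r not in b] + [r for r in b if r not in a]


db = MirrorDB()
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
db.execute("INSERT INTO kv VALUES (?, ?)", ("a", "1"))
assert db.diff("SELECT * FROM kv ORDER BY k") == []  # backends agree
```

When `diff` comes back non-empty in a real deployment, that divergence is what gets filed upstream or papered over with an equalizing shim.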
Concurrent writes already working is a reasonable definition of success. It's why I'm using it.
How do you want to define success for this project relative to SQLite? Because they already have concurrent writes working in their Rust implementation. It's currently marked experimental, but it does already work. And for a lot of people, that's all they want or need.
> IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust.
I don't understand this claim, given the breadth and depth of SQLite's public domain TCL Tests. Can someone explain to me how this isn't pure FUD?
"There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed." - https://sqlite.org/testing.html
SQLite's test suite is infamously gigantic. It has two parts: the public TCL tests you're referencing, and a much larger proprietary test suite that's 100x bigger and covers all the edge cases that actually matter in production. The public tests are tiny compared to what SQLite actually runs internally.
It allows the code to be fully public domain, so you can use it anywhere, while very strongly discouraging random people from forking it, patching it, etc. Even still, the tests that are most applicable to ensuring that SQLite has been built correctly on a new compiler/architecture/environment are made open source (this is great!) while those that ensure that SQLite has been implemented correctly are proprietary (you only need these if you wanted to extend SQLite's functionality to do something different).
This allows for a business model for the authors to provide contracted support for the product, and keeping SQLite as a product/brand without having to compete with an army of consultants wanting to compete and make money off of their product, startups wanting to fork it, rename it, and sell it to you, etc.
It's pretty smart and has, for a quarter century, resulted in a high quality piece of software that is sustainable to produce and maintain.
The test suite that the actual SQLite developers use to develop SQLite is not open-source. 51445 open-source test cases is a big number but doesn't really mean much, particularly given that evidently the SQLite developers themselves don't consider it enough to provide adequate coverage.
That's like giving you half the dictionary and then saying it's ironic that, if there really weren't any letters after "M", you wouldn't be complaining.
This reminds me of my own experience in the early days of iPhone.
I used to scan for new apps with excitement and expectation. At a certain point the cost of discovery became too high and I just lost interest. The long tail of apps simply became a graveyard of obscurity.
Ironically, this article is itself selection pressure. Once 'excessive bold' becomes a known LLM tell, it gets trained out. We're in an arms race: identify a style marker, watch it disappear from the next model generation.