About a year ago I did the same migration: Obsidian to self-hosted Directus.
My main reasons were:
- Straightforward queries. I have a lot of structured data in my notes and lesson plans, and being able to work with SQL was ideal.
- A web app was much more reliable than Obsidian's third-party sync platforms.
- I could extend Directus to do all sorts of other things. I eventually built my wedding planner and website backend on the same Directus instance that holds my notes.
(I also built a set of scripts on my Hackberry Pi that let me write text files on the go and save them to Directus.)
The biggest disadvantage is that the writing and saving experience isn't as fluid.
You're the first person I've heard of that has gone this route. Cool to know I'm not alone.
And yeah, I have the same gripes about the writing experience. I prefer Vim, so I've been looking into ways to keep my notes as local markdown files that sync to Directus on save, while keeping the Directus editor for edits on my mobile device.
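A minimal sketch of the save hook I have in mind, assuming a notes collection keyed by the file's slug and a static access token (names and schema are hypothetical):

    # sync_note.py: push a saved markdown file into Directus
    import os
    import pathlib
    import sys

    import requests

    DIRECTUS_URL = "https://directus.example.com"  # your instance
    TOKEN = os.environ["DIRECTUS_TOKEN"]

    def push(path):
        p = pathlib.Path(path)
        # Directus updates a single item via PATCH /items/<collection>/<id>
        requests.patch(
            f"{DIRECTUS_URL}/items/notes/{p.stem}",  # assumes the file stem is the item key
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"body": p.read_text()},
            timeout=10,
        ).raise_for_status()

    if __name__ == "__main__":
        push(sys.argv[1])

Wired to a Vim autocmd like "autocmd BufWritePost ~/notes/*.md silent !python3 sync_note.py %", every :w would land in Directus.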
I compared the dimensions of the Slate with my '06 Pontiac Vibe hatchback, and it's only a few inches longer. I suspect the Slate + Fastback kit will be pretty close to a hatchback in size and function.
I'm quite excited about this. Ticks all my boxes for "low" tech, simple, moddable, useful, and cheap. I'm hoping my aging Pontiac Vibe holds out long enough to upgrade to one of these, if they succeed. I put in a preregistration!
The problem is, the kind of person who cares about those things, as valid as they are, buys 0-1 cars per 20 years, and the market is driven (ha ha) by people who buy 2-3 cars every 2 years.
Very true. This truck appeals to me very much. My wife and I have a 2010 Accord and a 2014 CR-V. We could afford newer and/or fancier cars, but we just don't care about those things.
We're thinking of buying a newer car at some point, but between interest rates and, now, tariffs, we're not in any hurry.
My 30-year-old daughter is still driving the Toyota version, the Matrix, also a 2008, which we bought in about 2013. She loves the thing. If she didn't have it, I'm sure I would still be driving it.
I find it hilarious that it's a limited-edition M-Theory model. It has a badge glued to the dash that says "1926 of 5000." For a Toyota econobox.
At work we have two applications in production with Directus, a CMS and a CRM, both highly specialized (~35 custom extensions) for our use cases.
We've had our teething issues, mostly with migrations and the UX in some areas, but overall it has saved us a ton of dev time and been a great force multiplier for us.
I also use it at home to manage my notes, tasks, and such as structured data.
Tell me what I'm missing here: my project requires a download of N resources over N separate requests. Now, in theory, I host all my dependencies myself: CSS files, font files, images, and JS files. Assuming I turn on gzip at my web server (nginx, let's say) and set cache headers such that every downloaded resource doesn't expire for a year, the files are downloaded once, on the initial page load. That first hit is not unlike downloading a decent-sized image file, which we all do without complaint on a daily basis.
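For reference, the nginx side of that can be tiny; a sketch, assuming it sits inside the server block and covers the usual static-asset extensions:

    # gzip plus one-year cache headers for static assets
    gzip on;
    gzip_types text/css application/javascript font/woff2;

    location ~* \.(css|js|woff2|png|jpg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }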
Sure, I get it: a whole bunch of stuff has to come down before window.onload fires, and during that time, from a JS perspective, the user is looking at nothing. What I like to do is set up the default screen as a fixed-position blocking layer with my company logo on it, and the last thing my onload handler does is remove it. On the initial load they see it for 1.4 seconds. On subsequent reloads from the browser cache it's up barely long enough to notice.
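The splash layer itself is trivial; a minimal sketch (IDs and styling are arbitrary):

    <div id="splash" style="position:fixed; inset:0; background:#fff; z-index:9999">
      <img src="/img/logo.svg" alt="logo">
    </div>
    <script>
      // the last thing onload does: drop the blocking layer
      window.onload = () => document.getElementById("splash").remove();
    </script>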
So the initial-load problem is easily managed, and honestly not that big a problem in the first place. Simply set up caching on the web server to persist the files as long as possible (I think a year is the max in most places now) and problem solved, right?
Now, occasionally I run across some browser on some platform that's greedy about caching and refuses to pay proper attention to cache headers, so cached files don't get replaced with newer versions when they come down. It's a little more of a pain, but it's still well worth it to simply add a fingerprint to each remote resource fetched, along these lines (path hypothetical):
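    <script src="/js/app.js?v=FINGERPRINT"></script>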
Then at build or deploy time I incorporate a little script that grabs the current time in seconds and uses sed to replace FINGERPRINT with it (e.g. 1238882388).
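A minimal version of that step (file names hypothetical, GNU sed):

    STAMP=$(date +%s)                                  # current time in seconds
    sed -i "s/FINGERPRINT/${STAMP}/g" dist/index.html

so the browser now sees something like:

    <script src="/js/app.js?v=1238882388"></script>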
This is a unique URL and forces the browser to pull down the new asset. It's easy to do with resources pulled in from the <head> section and more problematic with images; however, you can just change the image name from "image" to "image_v2" on edits or changes and the problem goes away. If you really want to get tricky about it, it's easy enough to iterate over the image directory, bumping the *.jpg files to a new version number and making the same replacements in the HTML and JS files; a rough sketch follows.
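Something like this, assuming GNU sed and hypothetical paths (the substitution isn't idempotent, so treat it as a one-shot):

    # rename every image to _v2 and rewrite all references to it
    for f in images/*.jpg; do mv "$f" "${f%.jpg}_v2.jpg"; done
    sed -i 's/\.jpg/_v2.jpg/g' *.html js/*.js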
Now new files are fetched once and only once per device, and the page weight of a particular JS file becomes practically irrelevant.
> Subsequent reloads from browser cache it's up barely long enough to notice.
Are you measuring the time on your personal machine, or on a machine that represents what your typical visitor is using? If you're using a recent MacBook, that's going to have very different performance characteristics than, say, an old Android phone. Something that's instantaneous on a MacBook could take ages on an old Android.
> Now, new files are fetched one and only once per device and the page weight of a particular JS file becomes practically irrelevant.
It's not just download speed -- it's the parsing and execution of the script that takes CPU and memory. 115KB of JS is much heavier than e.g. 115KB of JPEG.
These "because why not" projects are my favorite sort. I'd love to build something like this with four keys representing bit pairs (00, 01, 10, 11) where you could type bytes by repeatedly pressing keys. Bonus points for mounting it on the back of a phone for one handed typing.
I've used GNOME for so long that anything other than Nautilus feels clunky to me. Though to be fair I almost never search for anything, so those pain points don't bother me. I just want a file manager that gets out of my way.
As I see it, the cause isn't industry-specific; it's just incredibly apparent because of how fast our industry repeats the cycle of:
- Make thing
- Thing becomes ubiquitous and widely used
- Thing has warts and limitations
- Make new thing on top of old thing, because old thing is too ubiquitous to rework without major consequences. Repeat.
If you've worked in any job with reasonably complex processes, there are likely a few procedures that no one really understands, and when people try to change them or build on them, things only break or get more confusing.
Not sure what can be done about it, but I have a hard time blaming anyone or anything.