What’s your point? Technologies come and go. People built Ruby apps that did their job and now good luck finding maintainers for them. It’s just how our market works. We’re lucky if our stack lasts 10 years.
I agree with your larger point, but FWIW your example does not match my experience.
I've never had any special difficulties hiring Ruby/Rails devs. Quality is high, availability is reasonable (hiring is always something of a struggle!). My first Rails hire was in 2003 (rails-0.8, IIRC) and my most recent was about 6 months ago.
That's 17 years, and counting.
OTOH, I've never had a good time hiring people to maintain EOLed apps. Ruby or otherwise. "Sustaining Engineering" is a (misnamed) trade that isn't as broadly attractive as new development.
Yeah, ok, but maybe you’re glossing over the immense difference between what could be done on a mainframe and what a Lambda allows, including all the complementary services like per-second billing, granular permissions, and logging. Now tell me that this is just the same thing as “mainframes” from the 80s.
I can't figure out exactly how it knows which chunk to download. Does it always download the whole index first? Or does it include it in the built JS file itself?
Both the index and the table data are B-trees. The root node sits at some known location (offset) in the file, referenced by the file header and metadata. As SQLite traverses the tree, it encounters new descendants it wants to visit, identified by their byte offset in the file, which is all that's needed for this VFS magic to issue a suitable range request.
- SQLite opens the file and reads 4 KiB worth of header -> range request for bytes 0-4095
- header/metadata refers to an index with its root node at offset 8192
- user issues SELECT * FROM t WHERE name = 'foo', where name is indexed
- SQLite reads the index's root node from the file (range request starting at offset 8192)
- root node indicates the left branch covers 'foo'; left branch node at offset 12345
- fetch left branch (range request starting at offset 12345)
- new node contains an index entry for 'foo': row 55 of the data page at offset 919191
- SQLite reads the data page (range request starting at offset 919191)
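The steps above can be mocked up in a few lines. Everything here is invented for illustration (the offsets, the page contents, the `RangeClient` class); the point is just that a point lookup touches a handful of pages, not the whole file:

```python
# Toy model of the walkthrough above: a "remote database file" read via
# simulated byte-range requests, fetching only the pages a lookup touches.
PAGE = 4096

class RangeClient:
    """Pretends to be an HTTP client issuing Range requests against one file."""
    def __init__(self, blob: bytes):
        self.blob = blob
        self.requests = []  # log of (offset, length) "range requests"

    def read(self, offset: int, length: int) -> bytes:
        self.requests.append((offset, length))
        return self.blob[offset:offset + length]

# A fake 1 MiB "database file": page 0 = header, page 2 = index root,
# page 3 = index leaf, page 224 = data page (layout chosen arbitrarily).
blob = bytearray(1024 * 1024)
blob[0:16] = b"fake-db-header\x00\x00"
client = RangeClient(bytes(blob))

client.read(0, PAGE)          # 1. read header -> learn index root offset
client.read(2 * PAGE, PAGE)   # 2. read index root -> points at a leaf
client.read(3 * PAGE, PAGE)   # 3. read index leaf -> row lives on page 224
client.read(224 * PAGE, PAGE) # 4. read the data page itself

print(len(client.requests))                # 4 range requests total
print(sum(l for _, l in client.requests))  # only 16384 bytes transferred
```

Four requests, 16 KiB transferred, for a 1 MiB file; that ratio only gets better as the database grows.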
SQLite has runtime-pluggable VFS support: you give it a struct of functions for opening a file, reading some bytes, writing some bytes, synchronizing file contents, and closing a file. This project provides such a VFS module that, because it actually runs in the browser, performs HTTP requests to read data. Emscripten provides a way to run C/C++ code in the same environment as JavaScript code inside the browser. The reason SQLite has this pluggable VFS support is to properly support embedded systems, different locking APIs, and things like database encryption.
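The shape of that idea can be sketched without any SQLite at all: the "engine" code is written against a read interface, and the backend decides where the bytes come from. The class and method names below are invented; SQLite's real VFS is a C struct (`sqlite3_vfs` / `sqlite3_io_methods`) with many more entry points:

```python
import io

class MemoryVFS:
    """Backend that serves reads from an in-memory blob (stand-in for a local file)."""
    def __init__(self, blob: bytes):
        self._f = io.BytesIO(blob)

    def read(self, offset: int, length: int) -> bytes:
        self._f.seek(offset)
        return self._f.read(length)

class LoggingVFS:
    """Backend wrapping another backend, e.g. to count 'network' reads."""
    def __init__(self, inner):
        self.inner, self.count = inner, 0

    def read(self, offset: int, length: int) -> bytes:
        self.count += 1
        return self.inner.read(offset, length)

def read_header(vfs) -> bytes:
    # "Engine" code: written against the interface, not any particular backend.
    return vfs.read(0, 16)

vfs = LoggingVFS(MemoryVFS(b"SQLite format 3\x00" + b"\x00" * 100))
print(read_header(vfs))  # b'SQLite format 3\x00'
print(vfs.count)         # 1
```

An HTTP backend would implement the same `read(offset, length)` method by issuing a Range request, and the engine wouldn't know the difference.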
That's part of SQLite. It has been optimised to reduce disk reads, because those can be slow on spinning hard drives. Coincidentally, this translates well into an optimised algorithm that minimises the amount of HTTP range requests to make.
A B-tree is balanced by construction. So if you run a query that uses an index, SQLite fetches a logarithmic amount of data from the index and then a constant amount of data from the table.
In the example, the wdi_data table is 300MB and one of its indexes is 100MB. That index has a tree depth of 4, which means SQLite has to read exactly 4 pages (4 KiB each) to get to the bottom of it and find the exact position of the actual row data.
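Back-of-the-envelope on why depth 4 is enough for an index that big: with 4 KiB pages, each level multiplies the number of reachable entries by the page fanout. The fanout of 100 below is an assumption (the real number depends on key size), but it shows the scale:

```python
# Rough B-tree depth estimate: how many levels are needed to index n_rows,
# if each page can point at `fanout` children/entries? (fanout=100 is an
# assumed ballpark for 4 KiB pages; real values depend on key size.)
def depth(n_rows: int, fanout: int = 100) -> int:
    d, capacity = 1, fanout
    while capacity < n_rows:
        capacity *= fanout
        d += 1
    return d

print(depth(100))         # 1
print(depth(10_000))      # 2
print(depth(1_000_000))   # 3
print(depth(50_000_000))  # 4 -> 4 pages, i.e. 16 KiB read per index lookup
```

So even tens of millions of rows resolve in 4 page reads, which is exactly what the 100MB example index shows.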
You can check the depth of the B-trees with `sqlite3_analyzer`.
Everything in SQLite is stored in B-trees, data and indexes alike. So you don't need to download the whole index first; you only need to download the pages of the trees required to answer the query, whether they're part of an index or actual data.
It does download at least the information describing what tables and indexes there are and where to find them, and then it fetches what it needs once a query runs, just like SQLite would if running from a local file on disk.
It has to download some parts ("pages") of the index as query execution proceeds, plus some header/schema pieces before execution starts.
It's not that those commits are in any way useful; it's that this "fix the error" commit might land 3 commits after the commit it belongs to. If there are no conflicts, a `fixup!` commit (applied with `git rebase -i --autosquash`) will handle it for you. But since this is the real world, you're likely going to waste huge amounts of time resolving conflicts that help no one.
My solution is:
- small-scoped PRs
- attempt to keep sub-commits readable, but don't waste time on them
Squashed commits also link to the PR, where you not only get the original commit list but also the whole discussion around the code.
I’ll add: fresh Big Sur install on the fastest 16” MacBook in the store right now, and after changing the selected DOM element it takes several seconds to show its CSS in the right sidebar.
It feels like every dev tool sucks for me, though. Chrome regularly froze the whole tab and crashed the tools on my old computer. Firefox I don’t remember exactly, but it was also a PITA.
I think it generally depends on the market and on the route. I got some cheap flights in South East Asia booking within the week I flew, with prices lower than average. That led me to postpone “what’s next” decisions to the last possible moment.
Yeah, I think it applies mostly to long-distance flights. You can see the effect you just mentioned in the graph in the article! The price in the ten days immediately before a trip, when it's steadily creeping up, can still be below the long-term average, cheaper than almost any other period beyond the 30-60 day window.