Are you asking how to deal with offboarding, or the overall "fill the shoes" problem after the CTO leaves?
If it's the former, it looks like Hephaestus wanted to leave a "way back" to the org by not having his account deactivated. Who is the CTO now, you or Koalemos?
> Koalemos has done what he was allegedly instructed to _not_ do
Instructed by a person who is leaving the company? I'm not surprised Koalemos did it their way. Unless there was a really strong reason not to do it, why can't that person do the job their own way?
> redirected all of Hephaestus's emails to his own inbox
What's bad about it? It's actually good that somebody will be receiving the emails after Hephaestus's departure from the company; it's standard practice when a vital member of the team is leaving.
> it looks like Hephaestus wanted to leave a "way back" to the org
100% sure no on that one.
> Who is the CTO now, you or Koalemos?
Technically - neither. He's "Head of tech" but people have started going around him and coming to me directly instead, because he always says he doesn't know and has to talk to me on essentially every topic.
> I'm not surprised Koalemos did it their way. Unless there was a really strong reason about not doing it, why that person cannot do the job their way?
From my understanding, because it did exactly what Hephaestus foresaw: things breaking.
> What's bad about it?
Another option could have been to reset the password for the old CTO and then occasionally check their inbox.
Seems like there was a miscommunication; there are 3 people in the org who could be perceived as the CTO.
1. You
2. The old CTO that wants to keep their inbox
3. The guy hired for the job.
You need to talk to the founder and let them decide.
Another thing is - if you hire new people and they haven't finished onboarding yet, things will break; that's the circle of life.
If the new guy is expected to know these things already, or he keeps repeating the same mistakes - then he is not meeting expectations and should be let go.
I think you three need to have a candid conversation about the expectations, maybe include a founder who can be a tie breaker.
Definitely, such a mess in so small an org drastically increases the potential for failure.
Logdy author here, thanks for calling out the project! Kubetail is probably the best fit for k8s, while Logdy leans more toward the Unix philosophy of being a self-contained tool you can tailor to your needs, whether that's tailing files, pumping logs through a TCP socket, or a REST API.
I have plans to add SQLite storage so Logdy can be used in environments where persistent storage is needed.
It's open source; there's a full-featured docker-compose setup to host it on your own.
Or you can use their hosted service, which is highly available and reasonably priced.
$30/month for our 380 GB + 30k time series. Perfect.
> The mistake many teams make is to worry about storage but not querying. Storing data is the easy part. Querying is the hard part. Some columnar data format stored in S3 doesn't solve querying. You need to have some system that loads all those files, creates indices or performs some map reduce logic to get answers out of those files.
That's a nice callout. There's a lack of awareness in our space that producing logs is one thing, but if you do it at scale, this stuff gets pretty tricky. Storing for effective querying becomes crucial, and this is what the most popular OSS solutions seem to forget; their approach seems to be: we'll index everything and put it into memory for fast and efficient querying.
I'm currently building a storage system just for logs[1] (or any timestamped data, since you can store events too, whatever you like that is written once and indexed by a timestamp) which focuses on data compression and query performance. There's just so much to squeeze out if you think things through carefully and pay attention to detail, and that can translate to massive savings. Seeing how much money is spent on observability tools at the company I'm currently working for (probably well over $500k per year on Datadog, Sumo Logic, New Relic, Sentry, and Observe) for approximately 40-50TB of data produced per month just amazes me. The data could easily be compressed to 2-3TB and stored for pennies on S3.
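To give a feel for why that kind of reduction is plausible, here's a toy sketch in Node/TypeScript. Repeating one identical line is an artificially best-case input, so the ratio it prints is wildly optimistic, but it shows the mechanism: structured log lines share most of their bytes, and compressors exploit that.

```ts
import { gzipSync } from "node:zlib";

// Toy example: identical repeated lines are the best case for any codec.
// Real log streams vary more, but columnar layouts plus zstd still get
// large multiples over raw JSON lines.
const line =
  '{"ts":"2024-01-01T00:00:00Z","level":"info","msg":"GET /api/v1/items 200 12ms"}\n';
const batch = Buffer.from(line.repeat(100_000));
const packed = gzipSync(batch);

console.log(
  `raw: ${batch.length} bytes, gzip: ${packed.length} bytes, ` +
    `ratio: ${(batch.length / packed.length).toFixed(0)}x`
);
```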
This is a really cool project — love the simplicity and the TUI approach, especially the timeline histogram and remote-first design. I had similar pain points on my end when dealing with logs across many hosts, which led me to build Logdy[1] — a tool with a slightly different philosophy.
Logdy is more web-based and focuses on live tailing, structured log search, and quick filtering across multiple sources, also without requiring a centralized server. Not trying to compare directly, but if you're exploring this space, you might find it useful as a complementary approach or for different scenarios. Although we still need to work on adding the ability to query multiple hosts.
Anyway, kudos on nerdlog—always great to see lean tools in the logging space that don’t require spinning up half a dozen services.
The optimizations listed in the article are common pitfalls of all serverless databases. Unless you are super diligent about writing your queries, it's going to be costly. The only real applications I've found so far are small projects where fewer than 5 tables are needed and no JOINs are required. That makes projects like page visitor counts, mailing lists, and website pageview tracking a perfect fit for serverless databases.
I used Mongo serverless a few years ago when it was first released. I didn't know how the pricing worked, so I wasn't aware how much those full table scans would cost me, even on a small collection with 100k records...
For example, on logdy.dev[1] I'm using D1 to collect all of the things listed above, and it works like a charm with Cloudflare Workers.
Just last week I published a post on how to export D1 and analyze it with Metabase[2]; for the next post I think I'm going to describe the whole stack.
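For anyone curious what that looks like, a minimal sketch of a Worker writing pageviews to D1 (the table, schema, and `DB` binding name are made up for illustration, not the actual logdy.dev code):

```ts
// Assumes a D1 binding named DB and a table created with:
//   CREATE TABLE pageviews (path TEXT, ts INTEGER);
// (The D1Database type comes from @cloudflare/workers-types.)
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const path = new URL(request.url).pathname;
    // One parameterized single-row INSERT per request: no JOINs, no scans,
    // so the rows_written cost stays predictable.
    await env.DB
      .prepare("INSERT INTO pageviews (path, ts) VALUES (?1, ?2)")
      .bind(path, Date.now())
      .run();
    return new Response("ok");
  },
};
```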
They're common issues with any SQL RDBMS, period. Don't UPDATE the PK if you don't need to, don't paginate with OFFSET, batch your INSERTs. The other problem shown (multiple JOINs with GROUP BY having a relatively high ratio of rows read to rows sent) is more a lack of understanding of relational schemas and query execution than anything else, and could have been solved with CTEs instead of multiple queries.
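To make the OFFSET point concrete, a hedged sketch against the same hypothetical D1 `pageviews` table as above: keyset pagination seeks past the last row served, so the engine reads only the rows it returns, while OFFSET scans and discards everything before the offset.

```ts
// Keyset pagination: rows_read stays at ~50 per page no matter how deep
// you paginate, because the rowid index seeks straight to the start.
async function pageAfter(db: D1Database, lastSeenRowid: number) {
  return db
    .prepare(
      "SELECT rowid, path, ts FROM pageviews WHERE rowid > ?1 ORDER BY rowid LIMIT 50"
    )
    .bind(lastSeenRowid)
    .all();
}
// The OFFSET version ("... ORDER BY rowid LIMIT 50 OFFSET ?1") reads and
// throws away every row before the offset, so page N costs roughly N times
// what page 1 does on a billed-per-rows-read database.
```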
This line [1] will become a source of many problems and wasted hours of debugging time.
You could at least return a sample document based on a random offset, or scan, let's say, 1000 docs and infer their shape. MongoDB is schemaless, so there's no guarantee all documents look the same.
In fact, the comment is imprecise:
> Handler for reading a collection's schema
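A hedged sketch of that sampling idea, using MongoDB's aggregation `$sample` stage (the database and collection names are made up):

```ts
import { MongoClient } from "mongodb";

// Sample up to 1000 random documents and union their top-level keys.
// This is only a heuristic: with a schemaless store there is no guarantee
// that documents outside the sample share this shape.
async function inferShape(uri: string): Promise<string[]> {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const docs = await client
      .db("app")
      .collection("events")
      .aggregate([{ $sample: { size: 1000 } }])
      .toArray();
    const keys = new Set<string>();
    for (const doc of docs) {
      for (const key of Object.keys(doc)) keys.add(key);
    }
    return [...keys];
  } finally {
    await client.close();
  }
}
```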
There's also Logdy (https://github.com/logdyhq/logdy-core) that works with raw files and comes with a web UI as well, all in a single precompiled binary, so there's no need for installs or setups. If you're looking for a simple solution for browsing log files with a web UI, this might be it!
(I'm the author)
Heyo, I've noticed Logdy come up a few times on HN now, and was curious whether you've explored making it a proper desktop application instead of a two-part UI and CLI application. Did you rule that out for some reason?
I'm not ruling that out, however there has been no user feedback suggesting that use case, honestly. So far users love that they can just drop a binary on a remote server and spin up a web UI. Similar with the local env. The nature of Logdy is that it's primarily designed to work in the CLI.
What would be the use case for a desktop app?