njpatel's comments | Hacker News

I'm not sure the log-message vs. structured-event comparison makes sense - most log and trace events are batched and compressed before being transported, so the size difference isn't really an issue. On the receiving side, most services use some kind of column store as their main datastore, so there's no issue there either. The benefits of structured logging are worth it.
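
For illustration, a minimal sketch (in TypeScript, with hypothetical field names) of the same event as a flat message versus a structured one - once events are batched, gzip collapses the repeated key names, and a column store can index each field directly:

    // Flat message: everything packed into one string, parsed later (if ever).
    console.log("user 42 checkout failed in 318ms region=eu-west-1");

    // Structured event: explicit keys map straight onto column-store columns,
    // and the repeated key names compress away once events are batched.
    console.log(JSON.stringify({
      level: "error",
      msg: "checkout failed",
      user_id: 42,
      duration_ms: 318,
      region: "eu-west-1",
    }));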

Regarding pricing - there's an option for every kind of usage out there; it's mostly a solved problem once you accept you can pick two of: large scale, low cost, low latency. For instance, we serve large scale and low cost.


We use KQL for Axiom too (well, a version of it) - it's a great query language and very flexible for both unstructured and structured data.
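
For anyone unfamiliar, a rough sketch of what that Kusto-style syntax looks like (the dataset and field names are made up), held in a TypeScript string:

    // A Kusto-style query over a hypothetical 'http-logs' dataset:
    // filter to server errors, then count them in 5-minute buckets.
    const query = `
      ['http-logs']
      | where status >= 500
      | summarize errors = count() by bin(_time, 5m)
      | order by _time desc
    `;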


I was the lead developer of Unity and the creator of Ubuntu Netbook Remix, and can also confirm the article isn't even slightly true.


(Co-founder here.) Try axiom.co - we support Splunk-like query syntax, dashboards, monitors, unlimited sources/hosts/etc., and you get 500GB/mo ingest + 30 days retention on the free plan.


And that results in compromises such as high sampling rates, ignoring a facet of the data altogether because "we won't need it if something goes wrong...", and/or using some kind of log stream processor to divert large amounts of data into S3 instead of allowing it to be queried whenever you want.


In the case of Axiom & Vercel, you'll only pay for ingest (and only if you need more than 2GB/day), with no bandwidth costs. I believe the latter is true of all of Vercel's observability partners (and of most services providing log drains).

It's a fair point, though... at some point, with enough TBs of traffic, it could get expensive to provide log-drain support. One thing we've been working on (more on this in the coming weeks) is putting together a benchmark of log-drain mechanisms to work out the most efficient combination of compression/CPU/memory for log-drain providers.
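
As a toy version of that benchmark, a sketch (assumed setup: Node's built-in zlib and a synthetic NDJSON batch) comparing gzip levels on a drain-sized payload:

    import { gzipSync } from "node:zlib";

    // A synthetic batch of NDJSON log lines standing in for a drain payload.
    const batch = Array.from({ length: 10_000 }, (_, i) =>
      JSON.stringify({ ts: Date.now(), level: "info", req_id: i, path: "/api/health" })
    ).join("\n");

    // Compare compressed size and wall-clock cost at a few gzip levels.
    for (const level of [1, 6, 9]) {
      const start = process.hrtime.bigint();
      const out = gzipSync(batch, { level });
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      console.log(`gzip level ${level}: ${out.length} bytes in ${ms.toFixed(1)}ms`);
    }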

Our hope is we can make it efficient enough to encourage more services to provide log drains (and therefore let us build great experiences around those logs!).


Neil here, CEO of Axiom, glad it’s working out for you! Would love to hear what custom alerts you’ve set up.

Our goal is to provide some out-of-the-box alerting tuned to Vercel deployments as we learn more about how devs are using Axiom & Vercel!


Who watches the Watchmen?


> The solution goes - Splunk is too expensive, let's ditch logs altogether and hope we capture some metrics and plot them in nice dashboards. Instead of dumping the logs highly compressed into cheap s3 and running some Snowflake or Spark on it later.

I don't usually promote on HN, but this is exactly why we built https://axiom.co! We've been working on this problem for some time: essentially schema-less/index-free ingest, S3-based storage in a highly efficient format, and querying with a Splunk-like (specifically Kusto-inspired) language via serverless functions.
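
To make "schema-less/index-free ingest" concrete, a hedged sketch - the endpoint shape, dataset name, and fields below are illustrative, not our exact API:

    // Schema-less ingest is just POSTing a batch of JSON events as-is;
    // no schema is declared up front, and new fields can appear at any time.
    await fetch("https://api.example-ingest.co/v1/datasets/my-logs/ingest", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.INGEST_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify([
        { _time: new Date().toISOString(), level: "warn", service: "checkout", latency_ms: 412 },
        { _time: new Date().toISOString(), level: "info", service: "checkout", cache: "hit" },
      ]),
    });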

We built it because we also realised we would either avoid logging or overthink it (cost, scaling, retention, etc.), which led to compromises either in our monitoring or later, when we wanted to dive in and draw insights/analytics from that kind of data.


It's harder to find Discord communities than it was IRC channels. You only had to join a few IRC servers to get huge lists of often-related channels.

