chanks's comments | Hacker News

You can, but that won't protect you against invalid dates like an actual date type will.
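
For example, with a Sequel connection to Postgres (a minimal sketch; the connection name `DB` and the table and column names are made up):

    # A text column will happily store garbage; a real date column rejects it.
    DB.create_table?(:events) do
      String :happened_on_text
      Date   :happened_on
    end

    DB[:events].insert(happened_on_text: '2015-02-30')  # accepted, garbage stored
    DB[:events].insert(happened_on: '2015-02-30')        # raises Sequel::DatabaseError (date out of range)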


I'm very curious how well this is working for you in practice, since I've been thinking about what it would look like to share a single Rust UI implementation across a webapp and native apps.


So far so good!

Putting the UI in a canvas element has some distinct drawbacks (https://github.com/emilk/egui/tree/master/crates/eframe#prob...), but for us it is definitely worth it. Having one unified codebase for our web app and native app, and having it all in Rust, is just amazing.

We're currently working on a 3D renderer based on wgpu (https://github.com/gfx-rs/wgpu) that we will likewise use for both web and desktop.



If you have a table that undergoes frequent updates and this is a concern of yours, look into setting a fillfactor on it. That leaves free space in each page, which helps Postgres keep an updated row on the same page (a HOT update), meaning it doesn't need to touch any indexes as long as none of the updated columns are indexed.
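
Rough sketch of what that looks like via Sequel (the table name and the value 70 are just examples):

    # Leave ~30% of each page free so updates can land on the same page (HOT),
    # skipping index maintenance when no indexed column changed.
    DB.run "ALTER TABLE accounts SET (fillfactor = 70)"
    DB.run "VACUUM FULL accounts"  # optional: rewrite existing pages with the new setting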


Looking at your PR for it (https://github.com/LuaJIT/LuaJIT/pull/149) it sounds as if there are a few issues remaining, but I'm not enough of an expert to know how problematic they are in practice. Do you think that this feature is stable enough for production use?


A small tree-based router for Ruby apps.

https://github.com/jeremyevans/roda
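
A tiny sketch of what the routing tree looks like (the routes and responses are made up):

    require 'roda'

    class App < Roda
      route do |r|
        r.root do                 # GET /
          "home"
        end

        r.on 'posts' do           # branch for everything under /posts
          r.get Integer do |id|   # GET /posts/123
            "post #{id}"
          end
        end
      end
    end

    # config.ru: run App.freeze.app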


That is, indeed, what Sequel does. You can query the database object directly (for example, `DB[:posts].where{comments_count > 5}.exclude(poster_name: "Bob").order(Sequel.desc(:comments_count)).limit(10).all`) and get back an array of hashes.

Sequel's model layer is also faster than ActiveRecord's, because much of the additional functionality that ActiveRecord piles onto all records (dirty tracking, single table inheritance, etc.) is available in Sequel via a plugin system. You can enable the plugins you want to use and not pay the overhead of all the others. You can even enable specific plugins for only the models where you'll actually want to use them.
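
A sketch of what that looks like (the plugin names are real Sequel plugins; the model is made up):

    Sequel::Model.plugin :timestamps        # enabled for every model

    class Post < Sequel::Model
      plugin :dirty                         # dirty tracking only where it's needed
      # plugin :single_table_inheritance, :kind   # or STI, only where you want it
    end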

It also has a lot less magic (no association proxies, unless you enable the plugin for them :)), is ridiculously customizable (many more options for associations, custom eager loading logic, and so on), and has an implementation of its Postgres adapter written in C for performance.
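
For example, a customized association plus eager loading (the options are Sequel's; the models and columns are made up):

    class Post < Sequel::Model
      one_to_many :recent_comments, class: :Comment, key: :post_id,
        order: Sequel.desc(:created_at), limit: 5
    end

    # Loads posts and their five newest comments without N+1 queries:
    Post.where(published: true).eager(:recent_comments).all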

Highly, highly recommended.

Edit: Oh, and an issue tracker that is almost always at zero, with a very fast response time. As someone who has contributed a patch to ActiveRecord, I can't tell you how nice that is.


Yeah, that does sound like exactly what I want. Still has the "conventions" problem I mentioned, but maybe worth it, and common enough that it isn't that big a deal.

To your edit: Ha, I have a PR against arel that is similarly neglected[0]. Maybe an advantage of using a less popular library is that the maintainers aren't so overwhelmed that things get lost in the shuffle!

[0]: https://github.com/rails/arel/pull/320


(I'm the author of Que, the job queue discussed in the post most extensively)

This isn't terribly surprising to me, since I have an appreciation for what long-running transactions will do to a system, and I try to keep transactions as short-lived as possible on OLTP systems. I realize this should be explicitly mentioned in the docs, though; I'll fix that.

I'll also note that since the beginning Que has gone out of its way to use session-level locks, not transaction-level ones, to ensure that you can execute long-running jobs without the need to hold open a transaction while they work. So I don't see this so much as a flaw inherent in the library as something that people should keep in mind when they use it.

(It's also something that I expect will be much less of an issue in version 1.0, which is set up to use LISTEN/NOTIFY rather than a polling query to distribute most jobs. That said, 1.0 has been a relatively low priority for much of the last year, due to a lack of free time on my part and because I've never had any complaints about locking performance before. I hope I'll be able to get it out in the next few months.)
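
For the curious, a very rough sketch of the LISTEN/NOTIFY pattern using Sequel's postgres adapter (this is not Que 1.0's actual implementation, and work_one_job is a made-up method):

    # Worker: block until a notification arrives instead of polling on a timer.
    DB.listen(:new_job, loop: true) do |_channel, _pid, payload|
      work_one_job(payload)   # payload might carry the new job's id
    end

    # Producer, inside the same transaction that inserts the job
    # (the NOTIFY is only delivered if the transaction commits):
    # DB.notify(:new_job, payload: job_id.to_s)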


> I'll also note that since the beginning Que has gone out of its way to use session-level locks, not transaction-level ones, to ensure that you can execute long-running jobs without the need to hold open a transaction while they work. So I don't see this so much as a flaw inherent in the library as something that people should keep in mind when they use it.

+1! I tried to clarify in the "Lessons Learnt" section that this isn't so much a problem with Que as something that should be kept in mind for any kind of "hot" Postgres table (where "hot" means lots of deletions and lots of index lookups). (Although many queues are more vulnerable due to the nature of their locking mechanisms.)

But anyway, thanks for all the hard work on Que. The performance boost upon moving over from QC was nice, but I'd say that the major win was that I could eliminate 90% of the code where I was reaching into QC internal APIs to add metrics, logging, and other missing features.


Thank you!


Isn't LISTEN/NOTIFY basically useless for queues, since NOTIFY will wake up every single consumer, causing a polling stampede?


The major benefit of putting your queue in your RDBMS, which isn't commonly brought up in these discussions, is that it lets you protect your jobs with the same ACID guarantees as the rest of your data. This is very valuable for some use cases.

I have a Postgres-based job queue that uses advisory locks to get around some of the drawbacks he mentions (job lock queries don't incur writes or block one another like SELECT FOR UPDATE would). Feedback is welcome: https://github.com/chanks/que
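
Roughly, the idea is something like this (a sketch, not Que's exact queries; 12345 stands in for a job's id):

    # Session-level advisory lock: no transaction held open, and no row written
    # just to mark the job as "being worked on".
    if DB.get{pg_try_advisory_lock(12345)}     # true/false, never blocks
      begin
        # ... run the job ...
      ensure
        DB.get{pg_advisory_unlock(12345)}
      end
    end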


You can use messaging to distribute the work, but let all workers access the same RDBMS; that way you pretty much get the same ACID properties you are used to.


We are currently building a system that does this, but it feels inefficient.

First, write the job to the DB, then put a message on the queue to notify the job processor(s) that there's work to be done. The processor then updates the DB to indicate that the work is done.

The goal is, of course, to prevent polling the DB via repeated queries for jobs. But wedging in an entirely new layer/API/messaging server just feels like overkill for the simple task of initiating a job.
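
Roughly the flow described above (the queue client, table, and column names are all made up):

    # Producer: record the job, then nudge a processor over the message bus.
    job_id = DB[:jobs].insert(kind: 'resize_image', status: 'pending')
    queue.publish('jobs', job_id.to_s)

    # Processor: after receiving the message and doing the work.
    DB[:jobs].where(id: job_id).update(status: 'done')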


There are two answers I have to that.

First, scaling often involves some things that feel unnecessary or inefficient.

Second, once you have a messaging system, you find lots of good use cases for it (other kinds of notifications, logs, statistics, integration with other systems, ...).


ACID can be done with messaging systems as well, within the realm of messaging, that is. The two rules are "don't use a messaging system as a database" and "don't use a database as a messaging system." In a product like WebSphere MQ (I don't have a lot of experience with the open source messaging systems), not losing messages, atomicity, etc. are important use cases.


My point is that the only way to wrap your jobs in the same transactions as the rest of your data is to have your job queue in your RDBMS. If you don't have that, you can't guarantee that they are consistent, that your backups have snapshots of each at the same time, etc.

Inconsistency between jobs and the rest of your data isn't a problem for many (or even most) use cases, but there are certainly times when you need it.
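
For example, a sketch of enqueueing a job atomically with a data change (the table, columns, and job class are made up; Que jobs are enqueued with JobClass.enqueue):

    DB.transaction do
      DB[:accounts].where(id: account_id).update(activated: true)
      SendWelcomeEmail.enqueue(account_id)   # rolled back along with the update if anything fails
    end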


Use a two-phase commit to go into and out of your database when your message needs to be turned into data. There is no reason your data in the database can't be consistent with your messaging system.


You can provide the ACID guarantees by using something like Redis.


It can only be consistent with the rest of your data if the rest of your data is also in Redis.


I'd recommend ZooKeeper or Consul for orchestration, distributed locks, two- and three-phase commit state, etc. instead of Redis if you want your transaction state to be highly available. They both use proven consensus protocols (Raft for Consul, ZAB for ZooKeeper) instead of a home-grown mostly-works algorithm (see Aphyr's work with Jepsen).


Hi, I'm the author of Que. It's true that you can't really completely solve the idempotence problem for jobs that write to external web services (unless those web services provide ways for you to check whether you've already performed a write - see the guide to writing reliable jobs in the /docs directory), but that's a limitation that'll apply to any queuing system. I'd definitely say that Que, being transactional and backed by Postgres' durability guarantees, does give you better tooling for writing reliable jobs than a Redis-backed queue would in general.
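
A hedged sketch of that "check before you act" pattern for external writes (PaymentAPI and its methods are hypothetical; the Que::Job structure is Que's real API):

    class ChargeCustomer < Que::Job
      def run(order_id)
        order = DB[:orders].first(id: order_id)

        # Idempotence: skip the external write if we can see it already happened.
        key = "order-#{order_id}"
        return if PaymentAPI.charge_exists?(idempotency_key: key)

        PaymentAPI.charge(amount: order[:total], idempotency_key: key)
      end
    end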

I'm happy to answer any questions you or anyone else might have.

