
SQLite is fine for production as long as it doesn't catastrophically break under load, but only with a single client; traditional database engines can handle multiple applications connecting to them at once.

I'm using SQLite for a production application as well, but its load will be in the range of - at best - thousands of requests per day. The only complexity is that there has to be a secondary 'fallback' server; the current application just copies the whole .db file over to the secondary server after certain events.



You can have several clients. If you do not have long-lived write transactions, performance will be fine.
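A minimal sketch of what that looks like, assuming WAL mode and a throwaway temp database (all names here are made up for the demo): two connections share the file, and as long as each write commits immediately, neither blocks the other for long.

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database path for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

def connect(db_path):
    conn = sqlite3.connect(db_path, timeout=5.0)  # wait up to 5s if the db is locked
    conn.execute("PRAGMA journal_mode=WAL")       # in WAL mode, readers don't block the writer
    return conn

writer = connect(path)
writer.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")
writer.execute("INSERT INTO events (msg) VALUES (?)", ("hello",))
writer.commit()  # keep write transactions short: commit right away

reader = connect(path)  # a second client, reading concurrently
print(reader.execute("SELECT msg FROM events").fetchall())
```

The `timeout` argument makes a blocked connection retry for a few seconds instead of failing instantly, which covers the brief moments a short write transaction holds the lock.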


Thousands per day, let's be generous and say 9,000 queries per day. At 24 h × 3,600 s/h = 86,400 s, that's ~0.1 req/s. Are you saying that SQLite cannot deal with 1 request every ~10 seconds? Or were you referring to 900,000 queries when you said "thousands"?

FWIW the typical front-page article on HN sees around 10-30k visits in a single day.


If I have a simple Flask or Django app, backed by sqlite, how do I make sure I only have one client? If I use a prerolled Apache or Nginx frontend (which may or may not be set up with multiple threads or whatever), and have very very low traffic, how do I make sure I'm not corrupting it with multiple writes?


It's probably fine to leave Apache or Nginx with multiple threads (probably even a good idea), but you need to make sure that uWSGI, or whatever Python server you're using on the backend, only runs one process.

If you can somehow architect things so a single process is doing writes (but leave multiple processes doing reads) that's likely even better.

However, I would suggest doing a lot of testing. It's possible multiple write processes aren't the end of the world below some threshold.
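One way to sketch the single-writer/multiple-readers idea is to funnel all writes through one dedicated thread via a queue; everything here (paths, table, queue sentinel) is hypothetical, just to show the shape:

```python
import os
import queue
import sqlite3
import tempfile
import threading

# Hypothetical throwaway database path for the demo.
db = os.path.join(tempfile.mkdtemp(), "app.db")
write_q = queue.Queue()

def writer_loop():
    """The single thread that owns all writes."""
    conn = sqlite3.connect(db)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS hits (path TEXT)")
    conn.commit()
    while True:
        item = write_q.get()
        if item is None:          # sentinel: shut down
            break
        conn.execute("INSERT INTO hits (path) VALUES (?)", (item,))
        conn.commit()             # commit immediately, keep the write lock short
    conn.close()

t = threading.Thread(target=writer_loop)
t.start()
write_q.put("/index")             # any thread can enqueue a write
write_q.put(None)
t.join()

# Any number of readers can open their own connections.
reader = sqlite3.connect(db)
print(reader.execute("SELECT path FROM hits").fetchall())
```

Since only one connection ever writes, there's no write contention at all; readers in WAL mode see a consistent snapshot without blocking the writer.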


SQLite will lock the db while you are writing to it, which prevents this from happening.
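You can see that locking behavior directly; a minimal sketch with a throwaway temp database, using `timeout=0` so the second connection fails fast instead of retrying:

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database path for the demo.
path = os.path.join(tempfile.mkdtemp(), "lock.db")

a = sqlite3.connect(path, timeout=0)
a.execute("CREATE TABLE t (x INTEGER)")
a.commit()

a.execute("BEGIN IMMEDIATE")          # take the write lock and hold it
a.execute("INSERT INTO t VALUES (1)")

b = sqlite3.connect(path, timeout=0)  # second writer, no retry wait
try:
    b.execute("BEGIN IMMEDIATE")      # raises "database is locked" while a holds the lock
    locked = False
except sqlite3.OperationalError:
    locked = True

a.commit()                            # release the lock
b.execute("INSERT INTO t VALUES (2)") # now the second writer succeeds
b.commit()
print(locked)
```

With a nonzero `timeout` (the default is 5 seconds), the second writer would simply wait and retry instead of erroring, which is why brief locks are usually invisible in practice.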



