
Just some thoughts on your list of downsides (I'm a dev, so opinionated in one direction :) )

* At least on my 64-bit Linux box, the mmap page size is 4k. So whether you have to read 100 bytes or 4k, in the end I don't think it's a big difference. From the drive's point of view, 100 bytes and 4k are basically the same, and with the OS file cache it'll most likely cache the whole 4k anyway. (A quick way to check the page size is sketched after this list.)

* The JavaScript support is very optional (only db.eval and $where; a $where example follows below). When it is used, it uses very little memory, because nothing is really stored in it: it's basically processing one record at a time, and we've written a very efficient wrapper. Also, V8 just released a 64-bit version, so we may look into switching back to it if it turns out to be a lot faster than TraceMonkey.
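
A minimal sketch of checking the page size yourself (4096 is typical on x86-64 Linux, but the value is architecture-dependent):

  import mmap
  import os

  # The granularity at which mmap (and the OS page cache) operates.
  # Typically 4096 bytes on x86-64 Linux, but not guaranteed.
  print(mmap.PAGESIZE)              # e.g. 4096
  print(os.sysconf("SC_PAGESIZE"))  # same value via sysconf(3)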
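
And to show where the JavaScript engine comes in at all, here is the optional $where operator through pymongo (a sketch only; it assumes a mongod running on localhost and a hypothetical "products" collection):

  from pymongo import MongoClient

  client = MongoClient()  # assumes a mongod on localhost:27017
  db = client.test

  # $where ships a JavaScript predicate to the server, which evaluates
  # it one document at a time; db.eval is the only other JS entry point.
  for doc in db.products.find({"$where": "this.price > 2 * this.cost"}):
      print(doc)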

Why do you think its green and small? Its actually been used in production for over 18 months (www.businessinsider.com for example)



First off, sorry if it reads as overly negative. The reason for my post is that I got sucked into the hype around the project and ended up wasting a few days on something that didn't work out. That's not the MongoDB team's fault, of course, but the comments I was reading here had no counter-balance.

  > At least on my 64-bit Linux box, the mmap page size is 4k. So whether you
  > have to read 100 bytes or 4k, in the end I don't think it's a big
  > difference. From the drive's point of view, 100 bytes and 4k are basically
  > the same, and with the OS file cache it'll most likely cache the whole 4k
  > anyway.
I beg to differ: 4k is 40 times 100 bytes, and that's a lot. I could understand a 512-byte sector cache, but not a full 8 sectors on every disk read, especially for metadata.

  > The JavaScript support is very optional (only db.eval and $where). When it
  > is used, it uses very little memory, because nothing is really stored in
  > it: it's basically processing one record at a time, and we've written a
  > very efficient wrapper.
My bad. I'm very wrong on this point, then.

  > Also, V8 just released a 64-bit version, so we may look into switching
  > back to it if it turns out to be a lot faster than TraceMonkey.
Yes, and V8 will probably land on other architectures soon, too. This sounds good.

BTW, I couldn't find a stable release of SpiderMonkey that includes TraceMonkey, but that could entirely be my fault.

  > Why do you think it's green and small? It's actually been used in
  > production for over 18 months (www.businessinsider.com, for example)
It's just a personal impression based on who is using it (http://www.mongodb.org/display/DOCS/Production+Deployments) and on recent bugs. Requiring 64-bit mode for data files over 3.3-4GB is a bit strange for a database, even if it follows from the memory-mapped storage: a 32-bit process only has 4GB of virtual address space to map data files into.

Again, I think it has a lot of potential and a fantastic team, and maybe some hard testing and a few presentations could dispel [my opinion on] memory usage. As I said in my previous comment, take my opinion with a grain of salt!

[Edit: formatting, a little rephrasing]


Nitpick:

> and with the OS file cache it'll most likely cache the whole 4k anyway

Usually a high-end database on Linux uses direct I/O (O_DIRECT), precisely so it can bypass the page cache and manage its own buffering.
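
A minimal Linux-only sketch of what that looks like from Python (it assumes a file named "datafile" of at least 4096 bytes; O_DIRECT requires block-aligned buffers, and an anonymous mmap is conveniently page-aligned):

  import mmap
  import os

  # O_DIRECT bypasses the OS page cache entirely; the database then
  # manages its own buffer pool. The flag is Linux-specific.
  fd = os.open("datafile", os.O_RDONLY | os.O_DIRECT)

  # Reads must use buffers aligned to the logical block size, sized in
  # multiples of it; an anonymous mmap is page-aligned, which suffices.
  buf = mmap.mmap(-1, 4096)
  nread = os.readv(fd, [buf])  # 4096 bytes straight from the device
  os.close(fd)
  print(nread)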



