Heroku Postgres Basic Plan (heroku.com)
109 points by mattsoldo on July 16, 2012 | 38 comments


I always thought that limiting by the "size" of the database in megabytes was unintuitive with the 5MB/25MB plans (how many bytes are in a row?); pricing per row is much more developer-friendly.

I wonder how they'll handle someone doing something awful like storing images or blobs in the database, though.


We've generally found that this sort of abuse rarely happens; developers use our services as they are intended. And it is much easier to release new services and watch for abuse than to proactively restrict them beforehand.

There is nothing wrong with storing blobs in the database - it is one of the data types. But a database is not a replacement for AWS S3 either.

If we did find someone abusively using the database - especially if it was affecting the overall quality of service - we would reach out to them directly to address it.


While I agree that the old plans were unintuitive, the new plan seems somewhat abusable. How big can a blob be in PG?


1 GB. You can see the other limits here:

http://www.postgresql.org/about/


Actually, a large object (lob) can go up to 2GB; byteas are limited to 1GB.

I store images and other binary data in the db all the time, when the data is closely associated with other data in the db and needs to be maintained together. Usually these files are stored once and retrieved seldom. For practical purposes, though, I would not recommend going above a few megs in a bytea field. Lobs are easier because you can seek in them, but access will never be as fast as it would be directly from the filesystem.
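
For what it's worth, a minimal sketch of the bytea approach in Python with psycopg2 (the table, column, and file names here are invented for the example):

  import psycopg2

  conn = psycopg2.connect("dbname=myapp")  # hypothetical connection string
  with conn, conn.cursor() as cur:
      # a made-up table holding small binary attachments as bytea
      cur.execute("""
          CREATE TABLE IF NOT EXISTS attachments (
              name  text PRIMARY KEY,
              image bytea
          )
      """)
      with open("logo.png", "rb") as f:
          cur.execute(
              "INSERT INTO attachments (name, image) VALUES (%s, %s)",
              ("logo.png", psycopg2.Binary(f.read())),
          )
      cur.execute("SELECT image FROM attachments WHERE name = %s", ("logo.png",))
      image_bytes = cur.fetchone()[0]  # a buffer/memoryview over the stored bytes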


A row can be 400GB, so both size and row metrics make sense. I think that the size metric is likely helped tremendously by sane alerts.

http://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size...


http://www.postgresql.org/about/ even quotes 1.6TB/row. Guessing that comes from multiplying the two maximums: 1,600 columns times 1GB max per field = 1.6TB (though the two may not actually be achievable together).


I disagree. By restricting the number of rows I think you're just going to end up with folks creating sloppy schemas specifically to avoid going over the limit.

  post.categories (varchar) = "1,51,78,84,100"
and other wonderful non-normalized approaches.


Hm. If you must do that, remember Postgres is guilt-free SQL, so don't use a varchar; use the array type.

http://www.postgresql.org/docs/9.1/static/arrays.html
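
For example, something along these lines (a psycopg2 sketch; the table and column names are made up, and psycopg2 adapts Python lists to integer[] columns on its own):

  import psycopg2

  conn = psycopg2.connect("dbname=myapp")  # hypothetical DSN
  with conn, conn.cursor() as cur:
      cur.execute("""
          CREATE TABLE IF NOT EXISTS posts (
              id         serial PRIMARY KEY,
              categories integer[]   -- instead of a comma-joined varchar
          )
      """)
      cur.execute("INSERT INTO posts (categories) VALUES (%s)",
                  ([1, 51, 78, 84, 100],))
      # ANY() answers "which posts are in category 51?" without string parsing
      cur.execute("SELECT id FROM posts WHERE 51 = ANY(categories)")
      print(cur.fetchall())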



I like that you guys pointed out more efficient ways to do stupid things :)


Well, if the time it takes you to contort your application to work around our row limits is worth less to you than $9/mo, you're probably in need of both technical and financial advice.


I don't disagree, nor do I use Heroku, so it's moot for me.

I was just commenting that row limits are a very NoSQL way of enforcing data limits, and inappropriate for an RDBMS.


Oh, you don't need to use Heroku to take advantage of Heroku Postgres -- it's available as a standalone service.


People willing to go to extremes to save a few bucks are probably not hosting on Heroku to begin with. This doesn't seem like a scenario worth worrying about.


10 million rows though is a high enough limit to make that unnecessary.

Of course any limit can be engineered against. When I was in college, the deli used to have a deal where you could get a baked potato with whatever toppings you wanted for $1.50 or something. One of the toppings was chili, so I would fill the container up with chili until it was more like potato chili... After a while they started charging by weight, because apparently this was a popular approach, especially for college kids who would rather cut food expenses and spend the money on other things.


Make a horrible schema just to avoid ten dollars a month? Surely the time of most developers is worth more than that.


Give a person a limit and they will try to overcome it, no matter how silly it is.


I find over-normalizing databases to be just as much of a problem. "Perfectly normalized" databases require extremely nasty join statements, which can cause just as much havoc as poorly normalized ones, especially once an ORM is thrown into the mix, which most people will be using.


I actually agree completely. I was looking at hosting on Heroku and I ended up just figuring out how big the biggest possible row would be for each table and basically enforcing a row limit by myself. Having a predefined row limit saves me from re-computing this number when I change my schema.
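
If anyone wants to do something similar, here is the rough kind of check I mean, sketched in Python with psycopg2 (it leans on the statistics collector's estimates, so the count is approximate; the DSN is a placeholder):

  import psycopg2

  conn = psycopg2.connect("dbname=myapp")  # hypothetical DSN
  with conn, conn.cursor() as cur:
      # n_live_tup is the stats collector's estimate of live rows per user table
      cur.execute("SELECT coalesce(sum(n_live_tup), 0) FROM pg_stat_user_tables")
      total_rows = cur.fetchone()[0]
      print("approximate row count:", total_rows)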


This is almost certainly not the right place for this kind of question but perhaps someone might be willing to point me in the right direction.

I am currently paying $9.50/mo for a Webfaction plan with 100GB disk, 600GB bandwidth, 256MB RAM + unlimited MySQL and PostgreSQL databases + various other services (webmail, SSH access, etc). What would the use case be for switching to, say, a Heroku plan with 1 Web Dyno and a 10M row PostgreSQL database? If I'm reading correctly, 1 Web Dyno will cost me $0/mo + $9/mo for the basic plan = $9/mo, which is comparable to the Webfaction plan, price-wise.


If you run on the free 1-dyno plan, your web app is not always resident, so you will get occasional multi-second load times on requests.


The use case would be if you want to use the Heroku toolchain, which is really nice. A VPS is always going to be cheaper. Even among PaaS options, Heroku is on the mid-to-upper end of the cost range.


Don't forget the continuous protection backups...


This sounds great, but I'm wondering how long we can expect the "brief" beta period to be. I've got a client who does not need the $200/month Ronin database, but would be perfectly suited for Crane or this new basic plan. However, I'm reluctant to choose plans that warn of decreased stability. My client's website launches in one month. Is it safe to choose one of these plans?


No. There is a risk.


That is a sweet spot: lots of people need a full-service but smaller PostgreSQL database.

Off topic, but what I would really like to see: I have several long-term tiny web apps hosted for free at Heroku. I understand that their cost for hosting these is minimal because unpaid web apps get swapped out, and thus there is a several-second loading time when they are 'woken up.'

I would love an inexpensive paid 1-dyno plan positioned between the free 1-dyno plan and the $35 two-dyno paid plan. A paid 1-dyno plan at about $15/month (or maybe it should be half of $35?) would, I bet, be popular. I would like all of my apps to be always on, even the little toy/side projects.


I love how Pagoda Box (a PaaS for PHP) solved this: you can "caffeinate" your application for $8 per month to keep it from sleeping, instead of adding another instance (comparable to Heroku's dyno).

http://help.pagodabox.com/customer/portal/articles/438692


This is a bit hacky, but you can set up Pingdom (or New Relic) to ping your (toy/side project) apps every few minutes. This essentially ensures that you never have to wait for Heroku's spin-up time.

cf. http://stackoverflow.com/questions/5480337/easy-way-to-preve...
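
If you'd rather not lean on an external service, a tiny script run from cron every few minutes does the same job (the app URL below is a placeholder):

  import urllib.request

  APP_URL = "http://my-toy-app.herokuapp.com/"  # hypothetical app

  try:
      # any successful request keeps the free dyno from idling out
      urllib.request.urlopen(APP_URL, timeout=10)
  except Exception as exc:
      print("ping failed:", exc)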


Hmmm... now they need to explain how the current plans map to the new ones. Perhaps the dashboard will report both the size of the DB and the number of rows used.


That's a very attractive price point. I'm really considering switching Tehula[1]'s backend. The previous "basic" plan was too pricey for our needs right now. But this new option makes using Heroku Postgres really attractive.

[1] http://tehula.com


Q: if you add a $10/month basic plan to a free 1-dyno web app, does that web app count as paid for, and thus stay always active?


I don't believe so, no.


Does this plan support PostGIS?


I used to think that PostGIS was not available on Heroku, but this seems to indicate otherwise: https://devcenter.heroku.com/articles/is-postgis-available

The article does suggest a simple way of finding out if the plan supports PostGIS.
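
One way to just try it, sketched in Python with psycopg2 (this is my own sketch and may not be the exact check the article describes; the DSN is a placeholder):

  import psycopg2

  conn = psycopg2.connect("dbname=myapp")  # hypothetical DSN
  try:
      with conn, conn.cursor() as cur:
          cur.execute("CREATE EXTENSION IF NOT EXISTS postgis")
      print("PostGIS is available on this plan")
  except psycopg2.Error as exc:
      print("PostGIS not available:", exc)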


No, neither of the starter tier plans (dev and basic) support PostGIS.


For $57.60 per month (calculating a month as 30 days), I can get a small Ubuntu Linux instance on EC2 with 1.7GB of RAM and room for a lot more than 10,000 rows.

http://aws.amazon.com/ec2/instance-types/

After that, installing a Firebird database is easy if I choose an Ubuntu instance.

Next you can add nginx/Django just for fun. For backups there are many Python scripts that back up to S3/EBS snapshots.
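
For example, a rough pg_dump-to-S3 sketch (the bucket, key, and database names are placeholders; this assumes the boto library, AWS credentials in the environment, and pg_dump on the PATH):

  import subprocess
  import boto
  from boto.s3.key import Key

  # custom-format dump of a hypothetical "myapp" database
  dump = subprocess.check_output(["pg_dump", "-Fc", "myapp"])

  s3 = boto.connect_s3()
  bucket = s3.get_bucket("my-backup-bucket")
  key = Key(bucket, "backups/myapp.dump")
  key.set_contents_from_string(dump)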


Let me rephrase: "I can spend a fair amount of time doing things myself, or I can pay a bit more and not have to do those things."

Depending on who you are, and what you're doing, you may value your time more than a small amount of money. Or, you may value that amount of money more than your time. Neither is right or wrong.




