Siddarth, any word on when the 2015 edition of CTF will be out? I check the Stripe blog every week or so hoping for some mention of a timeline. I loved CTF3. You guys do a fantastic job on these. THANK YOU.
Not the OP, but it had great docs including links to interesting papers, referenced real-world problems (git, cryptocurrencies, consensus), had a set of levels organised around the theme of consensus and nicely graded from easy to quite hard, and was doable in about a day of solid effort. Even if you didn't complete it, it probably felt like a learning experience for most people. Also, it was fun. Finally, the meetup afterward run by Greg explaining their architecture and the issues they encountered was interesting.
This link is specifically for the Chrome extension (which is great), but Sourcegraph's main site is also worth checking out: https://sourcegraph.com/
Their in-browser code analysis is kinda amazing. They index most of the Go/Python/Node libraries on GitHub and make the code browsable as if it were in a local IDE. For example, here's their representation of Flask's app module:
I've found it super helpful as a way to explore new libraries and see how they are used by other developers. Props to the Sourcegraph team. Excited to see what else is coming down the road.
Hahah, fair point. I should have hashtagged the post with #historical :) I loved this look back on the early days of networking and getting a little glimpse of Tom Lane's time at CMU. Reminds me of MIT's coffee cam[1].
Wow, talk about hiding the ball. Heroku Postgres 2.0 is changing the cost structure in a dramatic way. Gone is the 1TB of storage on all production plans (now the "standard" tier). Instead, you are limited to 64GB of storage on Heroku's cheapest $50/mo plan. As hoddez mentions above, you'll now need to spend $2000/mo to get the 1TB of storage space that you were able to achieve on yesterday's $50 plan.
What's additionally frustrating is that they have made pricing much less granular. Instead of 8 pricing levels based on your RAM requirements, you now only have 5. The old price points of $100/$400/$800/$1600 have all been eliminated, and now you are stuck choosing between $50/$200/$750/$2000. Those are steep price jumps between each level.
I understand that Heroku wants to highlight the new features here, but they bury the pricing at the bottom of the post and even include language like this:
"For those already familiar with our pricing our new standard tier is very similar to our now legacy production tier. For some of you this means migrating could actually provide over 45% in cost savings on your production database."
All of the old prices still fully exist; customers are in no way required to choose the new plans. Where the specs are equal, the new plans are indeed lower in price in many places. We've documented all of the legacy plans within Dev Center: https://devcenter.heroku.com/articles/heroku-postgres-legacy.... If there's a way we can make this clearer, we would love to hear about it at postgres at heroku.com.
In regards to the storage limits, these were actually put in place to prevent users from shooting themselves in the foot. As part of this process we examined all current users (and connection limits as well), looked at what limits are actually used today, and looked at the other problems created when people went over certain thresholds. As it exists today, you would hit these limits and have a clear understanding of why, versus the other problems that previously arose as a result of having them set so high.
Thanks Craig! I missed the legacy pricing page and edited my post to reflect this. To make this more clear and transparent, I would include a link to this directly on your blog post. It certainly would've helped me.
If you wouldn't mind me asking, why the removal of the Kappa/Fugu/Zilla price points in the new tiers?
BART goes directly into the SFO airport. If you are taking a flight from the international terminal, it is a direct walk to the check-in counters. Otherwise you connect to the AirTrain, which is an escalator ride up from the BART platform.
I've found it extremely convenient for getting to/from flights from San Francisco, although I'm lucky to be within walking distance of a BART station from my apartment.
One thing I wish GitHub would do is allow for a more granular permission structure. It would be fantastic if we could allow people without GitHub accounts to do things like submit issues, or give certain users (e.g. non-technical staff) access only to the wiki and issue tracker.
Right now the permissions are centered around what you can do with repositories; it's very developer-centric. I think there is a lot that GitHub could do here to expand their service to be more applicable to an entire organization.
Long-time user of boto[1] here. It has been the go-to library for hooking your Python code into AWS and has a fairly active following on GitHub[2].
One API point that I've found lacking in boto is a "sync" command for S3. Take a source directory and a target bucket and push up the differences à la rsync; that's the dream. Boto gives you the ability to push/get S3 resources, but I've had to write my own sync logic.
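For the curious, a minimal sketch of that kind of roll-your-own sync with boto might look something like this (bucket name and prefix are placeholders, and the comparison leans on the fact that an S3 ETag equals the file's MD5 only for non-multipart uploads):

    import hashlib
    import os

    import boto

    def sync_dir_to_s3(local_dir, bucket_name, prefix=''):
        """Push files under local_dir to the bucket, skipping unchanged ones."""
        conn = boto.connect_s3()
        bucket = conn.get_bucket(bucket_name)

        # Map existing key names to their ETags (MD5 for non-multipart uploads).
        remote = dict((k.name, k.etag.strip('"')) for k in bucket.list(prefix=prefix))

        for root, _, files in os.walk(local_dir):
            for name in files:
                path = os.path.join(root, name)
                key_name = prefix + os.path.relpath(path, local_dir).replace(os.sep, '/')

                with open(path, 'rb') as f:
                    local_md5 = hashlib.md5(f.read()).hexdigest()

                # Upload only if the key is missing or its contents differ.
                if remote.get(key_name) != local_md5:
                    key = bucket.new_key(key_name)
                    key.set_contents_from_filename(path)

It doesn't handle deletions or multipart ETags, which is exactly the kind of edge case that makes a first-party sync command welcome.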
So, the first thing I went digging into is the S3 interface of the new CLI, and to my surprise, they've put a direct sync command on the interface[3], huzzah! Their implementation is a little wacky though. Instead of using computed hashes, they are relying on a combination of file modtimes and file sizes. Weird.
Anyways, glad to see AWS is investing in a consistent interface to make managing their services easier.
That is good news. I too wrote a sync layer to sit above boto for a previous project. My use case is a little different in that I sync from S3 to Rackspace Cloud Files (CF) as a backup. I just use the file name (object name) as the key because I know that files never change (though they are added and removed). I create a complete object listing of S3 and a complete object listing of CF, diff the two, and then sync.
One disappointing issue is that the listing process on CF is an order of magnitude faster than on S3.
    CF: real 2m7.628s
    S3: real 14m15.680s
Keep in mind that this is all being run from an EC2 box, so really, S3 should win hands down.
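The diff itself is the easy part; schematically it's just set operations on the two listings (the Cloud Files side is hand-waved here as a plain set of names, since that bit depends on which Rackspace client you use):

    import boto

    def s3_object_names(bucket_name):
        """Full listing of object names in an S3 bucket (boto pages through for you)."""
        conn = boto.connect_s3()
        return set(k.name for k in conn.get_bucket(bucket_name).list())

    def plan_backup(s3_names, cf_names):
        """Diff the two listings: what to copy over to CF, what to delete from CF."""
        to_copy = s3_names - cf_names
        to_delete = cf_names - s3_names
        return to_copy, to_delete

Building those two listings is what dominates the run time, hence the numbers above.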
The rsync command uses a combination of file modtimes and file sizes as its default algorithm. It's very fast and efficient. I agree, though, that like rsync, it would be good to add a --checksum option to the s3 sync command in the AWS CLI. Feel free to create an issue on our GitHub repo https://github.com/aws/aws-cli so we can track that.
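For anyone curious about the trade-off, it roughly comes down to this (a simplified local-file illustration of the two checks, not the actual aws-cli code):

    import hashlib
    import os

    def differs_quick(src, dst):
        """rsync-style quick check: flag a difference if size or mtime don't match."""
        s, d = os.stat(src), os.stat(dst)
        return s.st_size != d.st_size or int(s.st_mtime) != int(d.st_mtime)

    def differs_checksum(src, dst):
        """What a --checksum-style mode would do instead: compare content hashes."""
        def md5(path):
            with open(path, 'rb') as f:
                return hashlib.md5(f.read()).hexdigest()
        return md5(src) != md5(dst)

The quick check only needs two stat calls, while the checksum version has to read both files end to end, which is why rsync leaves it off by default.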
Good to hear this feedback. I work for AWS; I will pass this to the team.
Feel free to shoot me an email: simone attt amazon do0tcom if you have more comments.
Please don't apologize! I don't know how many times I've googled for this type of thing, only to end up with a handful of Stack Overflow articles that don't even scratch the surface. This is one of the best reads I've seen on HN in a long time. Thank you!!
Agreed, this was confusing to me as well, especially since there is no real concept of "development" on the Heroku platform. You can't separate your dev/test apps from your production instances in Heroku; they are all just treated as "production". It might be helpful to clarify this on the status page at some point.