
The switch from Node to Go seems quite popular right now, and it honestly makes me think there's something wrong with the general perception of Node.

We are currently in a world where almost every web app with even modest success handles a huge amount of traffic, yet we still make the mistake of picking a technology that seems "good enough" instead of a "great" one, just because the great one looks slightly harder to manage/learn/deploy. I know that in the early stage pace is very important, and Rails or Node are easier and faster to handle compared to Scala or Erlang, but sometimes the other technology at the beginning would save a lot of headaches and night calls. We still fail at the very early stage to choose the right technology, but nobody is afraid to admit it and switch, and I find this amazing.



Where you see failure, I see natural evolution. You don't know how/where/when your application will fail, so it's pointless to try to prevent it.

People use Node, Rails or whatever because it's easier to develop with. Then, as you come to understand your problem, you optimize the bottlenecks away from your core.

I don't think this way of handling things will ever change.


I think it makes sense: early on you're still "figuring it out", so a flexible framework that's easy to dive into is advantageous.

I don't see anything wrong with planning a rewrite X months into a product, since 90% of things don't make it to month X.


A code rewrite means time spent working on stuff that isn't delivering features, which means your business could be stagnating, making customers dissatisfied, giving competitors an opening.

As an example, I recently migrated a Node app from MongoDB to Postgres. This ended up taking two and a half weeks, due to rewriting a fair portion of the server-side code. That's a long time to go without delivering new features or fixes. We justified it because we had inexplicable data loss (never pinpointed to MongoDB, but a poor reputation is a hard thing to remedy) and our data model did not suit a document store. But then you have to accept it when the business folks say "well, why didn't you get it right the first time? Aren't you supposed to be the expert?".
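To give a sense of what that rewrite involved: most of it was mechanical, but it touched nearly every query path. Roughly this kind of change, repeated everywhere (a sketch with hypothetical table/field names, using the node-postgres "pg" client):

    const { Pool } = require('pg');
    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // Before (Mongoose/MongoDB, roughly):
    //   Order.find({ userId: id, status: 'open' }, callback);

    // After (Postgres, with a parameterized query):
    function openOrders(userId, callback) {
      pool.query(
        'SELECT * FROM orders WHERE user_id = $1 AND status = $2',
        [userId, 'open'],
        (err, result) => callback(err, result && result.rows)
      );
    }

Each one is trivial on its own; two and a half weeks is what hundreds of them add up to, plus the schema design and data backfill.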

As technologists, of course we find it fun to try new technologies. But outside of the main tech hubs, a large proportion of developers aren't working for tech companies whose main product consists of web services/APIs, in which case we always need to consider the costs/benefits of any tech switch. If it's not justified, you're stuck supporting flaky apps until you can move on to the next gig / learning experience.


I understand where you're coming from with all of those points, but I take a different view on most of them.

For example, I would argue that going a couple of weeks without delivering new features is far from a long time. If your features are so simple that they can all be added in that sort of time frame, and if failing to do so is enough for a competitor to cause serious damage to your business, then it seems unlikely that you had a strong business model/value proposition in the first place.

Likewise the idea of sitting on a known and unexplained data loss bug indefinitely is horrifying. Again, if you don't consider removing that kind of liability a priority, it seems like a matter of time before something disastrous happens. Depending on the nature of your work and the data involved, this might even amount to negligence and give legitimate grounds for affected customers or regulators to take legal action.

Finally, if you have management who expect everyone technical to be an expert on all technical tools and make perfect choices, then they are both ignorant of how technical fields work and extremely poor at managing risk on a project. Once again, with that kind of person at the helm, you are already doomed.

Personally I favour tried-and-tested over new-and-shiny for most projects. My experience has been that many new and trendy tools have a good sales pitch but don't stand the test of time. After using them for real for a while, developers often start to understand why things were done the way they were done before and discover that this week's silver bullet comes with limitations or risks of its own.

In any case, whether you're using time-tested tools or expecting that newer really is better, building up technical debt to unmanageable levels will kill any software project. You learn as you go along, and at some stage your greater experience may suggest that a different approach would give much better results, and then you have a cost/benefit question of whether and when it's worth doing that work.


As a counter-example, I recently read that Twitter was built on Rails about six months after Rails was released. Once the concept was proven out and the production app was failing like crazy (fail whales everywhere), they rewrote their entire stack in Scala on the JVM.

Now, should Twitter have spent the first six months of its life building out the perfect infrastructure with proven tools, spending the little money it had mainly on engineering? Or was it justified in pushing that technical debt down the road to focus on other things?

It seems to me, for most startups, that the marginal cost of building it "right" today is much higher than rewriting when you can/if you need to.


Twitter was also originally built as a side project to amuse some friends. It wouldn't surprise me if someone took the view that they were writing a stupid little throwaway thing which everyone would probably get bored of, so why not try learning this hot new web framework everyone's talking about.


"A code rewrite means time spent working on stuff that isn't delivering features, which means your business could be stagnating, making customers dissatisfied, giving competitors an opening."

On the flip side, over-engineering before launch means time spent NOT MAKING ANY MONEY because you haven't launched yet. It means pouring engineering effort into something whose success is still hypothetical.

If the biz folks want to know "why didn't you get it right the first time", ask them why they didn't become millionaires at their first jobs.


I absolutely think there's value in prototyping to explore business opportunities, or to evaluate choice of technology. I think problems arise when you try to do both at once.

Also, once the business gets hold of a prototype, it can sometimes be hard to convince them to pay to replace the prototype with a new codebase which seemingly does exactly the same thing; after all, what they've got works, right? (For some definition of "works".)

It can also be difficult to work out exactly what needs to be kept from the prototype; how do you tell what functionality is intended, and what is just a non-essential byproduct of the implementation? Yes, you can formalise the specification, but this takes time, and customers will complain about any changes to functionality.

It will stand you in good stead to pick a tech that can support all the normal boring stuff a robust web app needs, while also providing rapid development capabilities.


Great point. In fact, this is another big topic. I can't imagine a bank switching technology so easily for components that may be critical.

A technology switch can have multiple sides. Sometimes the success is way beyond expectations and the current implementation doesn't fit the real requirements, making the switch look more like a success than a failure. On the opposite side, it's very common to have the exact problem you described: a new trending technology is picked, it hits its limits, and you switch to an older, more robust solution. That is definitely a failure, and it's something managers don't like.

I faced a similar problem with Mongo two years ago and made the same switch to PostgreSQL.


The problem with older technologies is that their scaling capabilities are limited. My company is currently trying to use Microsoft's OLAP tooling for data analysis when it should be using something more scalable for the quantity of data we have.


I think you're wrongly equating "old" with more undesirable epithets like "enterprise" and "proprietary". There are a ton of old technologies that scale marvelously, and using software age as a heuristic is probably a really poor idea when you should be studying internal qualities instead.


I don't see anything wrong here either, I kinda like the fact that as engineers/developers/coders we still have the opportunity to admit that an early stage decision doesn't fit our needs anymore and we can then change.


You can make a bunch of money with something like Rails - if you're charging people directly, rather than relying on low-margin activities like advertising. See: Bingo Card Creator, Basecamp, and a bunch of other stuff. I'm a big Erlang fan, but objectively, Rails does a lot more to 1) get you up and running quickly and 2) let you iterate until you find some kind of product/market fit.

Timely and relevant quote: https://twitter.com/patio11/status/587769019261829120


> The switch from Node to Go seems quite popular right now

TJ Holowaychuk's desertion was a strong hit to the Node troops' morale.

https://news.ycombinator.com/item?id=7987146


TJ had already written every module possible in Node so he moved on to rewrite them all again in Go. He'll do the same once he's exhausted all Go modules there are to write.

Suffice it to say, his exit hasn't changed anything; his efforts have been taken over by others, and NPM is still growing at a fast rate.


When I read your post I thought "What? I'm using Koa right now and it's maintained by TJ", but the post says it's the only one he's still maintaining, haha.

Did any of this change after io.js was forked?


> Rails or Node are easier and faster to handle compared to Scala or Erlang

I don't know about Scala, but if one were to start from a tabula rasa, I'd argue Erlang is easier to handle than Node. You have a standard set of patterns embedded in OTP and a well-defined process model pervading the entire language, whereas Node, being at its core a reactor-based event loop, bombards you with a variety of concurrency patterns that are all lacking in some way. And that's besides all the advantages of location-transparent distributed nodes, the Eshell in general, multi-core scaling (modulo Amdahl's law), and a pattern matching engine like no other.
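To make the "variety of concurrency patterns" point concrete: the same small task can legitimately be written at least three ways in Node, and real codebases mix all of them (a sketch; note fs.promises only exists in newer Node versions):

    const fs = require('fs');

    // 1. Error-first callbacks (the core style)
    fs.readFile('config.json', 'utf8', (err, data) => {
      if (err) return console.error(err);
      console.log('callback:', data.length);
    });

    // 2. EventEmitters / streams
    fs.createReadStream('config.json')
      .on('data', (chunk) => console.log('stream chunk:', chunk.length))
      .on('error', (err) => console.error(err));

    // 3. Promises
    fs.promises.readFile('config.json', 'utf8')
      .then((data) => console.log('promise:', data.length))
      .catch((err) => console.error(err));

In OTP, by contrast, the answer to nearly all of these is the same: a process and a message.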

Rails isn't an adequate comparison, as that's a framework. Nitrogen/N2O + BossDB, or Chicago Boss, would be closer competitors.


> Chicago Boss would be close competitors.

I have commit access to Chicago Boss, and while it's a really cool effort, and is well suited to some niches like the one I'm using it for, it is very, very far from being a competitor in terms of the flexibility and the oodles of gems you get out of the box.


I'm actually using it myself currently. There are certainly rough edges (in fact, I think I'll be submitting a patch for boss_mail soon to tweak some of the default gen_smtp options that bit me). Migrations aren't of the same caliber, or as essential, as in Rails; there is no asset pipeline; and there isn't a scaffolding generator, so that has to be filled in manually. But it's still a more-or-less complete experience.

I don't know about "flexibility". It's much less rigid and opinionated than Rails, and it's easier to maintain modified versions of the source code in your project due to how Rebar handles dependencies. So I'd say it's pretty flexible.

Default library support, yes. But it's not that bad. Most major tasks are covered. The module system and resulting encapsulation (on top of runtime + OTP guarantees) mean that using abandoned libraries is easier and more reliable, so I had no qualms with reading and integrating, e.g., a wrapper around GraphicsMagick that serializes output to native Erlang proplists for batch image uploading.

The ETS session engine, the not strictly OO data mapper, functional tests, inbound mail server, in-memory MQ and model event watchers plus first-class WebSocket gen_servers are all nice perks.

My main concern is that commit activity has been dwindling and the present lead maintainer (danikp, I think?) seems to be only sporadically active. But it can still be salvaged; I'd wager most of the serious users have private forks.

EDIT: By the way, out of curiosity, what "niches" are you using it for?


I'm entirely in favor of that. At least in a startup context, you can't possibly know what the correct great technology is until later. Great technologies are great because they are optimized for some particular problem, which naturally means they're not as good at other things.

That all successful apps have a lot of traffic doesn't matter, because most apps are not successful. Building for scale from day 1 wastes resources better spent on making something people actually want, because that's what increases the chance of getting to where scale actually matters.

I think the real trick people should learn is to stop building monoliths, so that when item X is problematic it can be easily swapped out. But even that only makes sense if the cost of a more modular approach is relatively low. It would be great to see good early-stage toolkits that encourage novices to build more modularly.
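As a sketch of what I mean (hypothetical names): if the rest of the app only ever talks to a narrow interface, swapping out the problematic piece later is a local change, not a rewrite.

    // store.js - the only contract the rest of the app sees
    function createMemoryStore() {
      const data = new Map();
      return {
        save(id, doc) { data.set(id, doc); return Promise.resolve(); },
        get(id) { return Promise.resolve(data.get(id)); },
      };
    }

    // Later, a Postgres- or Redis-backed store with the same save/get
    // shape can replace this without touching any caller:
    //   const store = createPgStore(pool); // instead of createMemoryStore()

    module.exports = { createMemoryStore };

The discipline costs almost nothing up front, which is exactly when you can't yet afford the "great" technology.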



