Our Development Workflow (zenpayroll.com)
89 points by martindale on Nov 7, 2014 | 45 comments


I feel it's slightly concerning that the same text contains:

"There’s no room for error."

and

"Someone on the product team will usually play with the feature on our staging environment trying their best to break things."

The first sentence seems to imply that they try to maintain the highest quality possible. Yet the second sentence does not really fit that idea. The article pretty much only lists the tools they use.

But on a scale from Random PHP site to NASA space shuttle software, where is zenpayroll located?


To be fair, the "play" sentence comes after a whole bunch of description of spec writing, test coverage, code reviews, and automated test suites including using anonymized real data. Doing a final human "check it out" step doesn't invalidate all that, and could be helpful.


One presumably talks about the production environment, the other about staging environment.


If you truly have "no room for error" (i.e. you are shipping code to a hundred-billion-dollar space vehicle), you might have:

- Very strict coding guidelines

- Rigorous code review

- An extremely thorough test suite

- Multi-stage approval through a QA process involving many pairs of eyes and thorough, formally defined and scientifically rigorous test procedures

- Static analysis and possibly even algorithms proved correct in Coq

Simply playing with it in staging for a while and determining that everything looks good is NOT the kind of testing that you do when there's "no room for error."


No room for error probably means one thing to people working on a $10^11 space vehicle and something else to webdevs. Does that surprise you?


Webdevs (particularly on HN) are highly vocal about TDD/automated testing in general and there is a great deal of tooling around testing Rails and Django. Code review is also part of the fairly dated Joel Test and widely regarded on HN as a good idea.

These are substantially higher bars than "play with it for a while and see if you can break anything."


As a statement, it is absolute.


As a statement, it is meaningless.


Even if software is formally verified, and even assuming the specification is both totally accurate and complete, there is plenty of room for issues at deployment time in a multi-component system such as this.


Even so, imho on a staging environment everything should be ready to deploy, barring either functional acceptance or very, very unforeseen technical issues that didn't show up on systems that mimic production just a tad less closely.

Staging = "ready to roll", not "still needs testing to see if things break". Things shouldn't break on staging.

Of course you can forgo manual QA if your automated testing is so perfect that staging is just a formality. I'm guessing ZenPayroll has that covered, given that they deploy to production "several times a week".

So I assume "trying their best to break things" is just a flippant description of final acceptance, not QA testing.


Testing after merging with develop is a very, very bad practice. And there are plenty of tools to help you avoid that: just kick off a build to run all your tests, which will add a tick or a comment to the branch's PR. And never merge anything without that tick or comment.


Totally agree. Testing should be done BEFORE merging, like this article explains: http://www.yegor256.com/2014/07/21/read-only-master-branch.h...


Point 2 (Branch off development) confuses me.

At my workplace we have branches for dev, staging, and production. We're working on feature branches. Now if my coworker merges feature A into dev, and I merge feature B into dev, and feature A is done and ready for release, but feature B needs more work, and my coworker checks out a new feature branch C from dev, he'll be unable to merge into production without merging in my unfinished feature B.

We branch off of production so this won't happen. Are we doing it wrong?
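The entanglement described above can be reproduced with a few git commands (a minimal sketch with hypothetical branch and feature names, not anyone's actual workflow):

```shell
# Reproduce the problem: branching off dev picks up unfinished work.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b production
git commit -q --allow-empty -m "initial release"

git checkout -q -b dev
git checkout -q -b feature-a           # coworker's finished feature
git commit -q --allow-empty -m "feature A"
git checkout -q dev
git merge -q --no-ff feature-a -m "merge feature A"

git checkout -q -b feature-b           # your unfinished feature
git commit -q --allow-empty -m "feature B (unfinished)"
git checkout -q dev
git merge -q --no-ff feature-b -m "merge feature B"

git checkout -q -b feature-c dev       # coworker's next branch, cut from dev
git log --oneline | grep "feature B"   # B's unfinished commits are in C's history
```

Any merge of feature-c into production now drags the unfinished feature B along with it, which is exactly the problem branching off production avoids.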


Couldn't you merge the features that are done to your staging branch and from there merge to production? Your staging branch would be the 'release branch' in this model: http://nvie.com/posts/a-successful-git-branching-model/


Thanks for the link.

We merge feature into dev when we want to show a feature to a fellow developer.

We merge feature into staging when a feature is ready to be tested by non-developers and to be included in documentation.

We merge feature into production when it's ready for all users.

Has been working out well so far.
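The three merge targets described above can be sketched as plain git commands (a minimal, hypothetical example using the branch names from the parent comment):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b production
git commit -q --allow-empty -m "initial"
git branch -q staging
git branch -q dev

# Feature branches are cut from production, so unfinished work sitting
# in dev or staging never blocks a release.
git checkout -q -b feature-x production
git commit -q --allow-empty -m "feature X"

git checkout -q dev        && git merge -q --no-ff feature-x -m "show to devs"
git checkout -q staging    && git merge -q --no-ff feature-x -m "ready for QA"
git checkout -q production && git merge -q --no-ff feature-x -m "release"
```

The same feature branch is merged into each environment branch independently, so each environment only ever contains features that have reached that stage.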


We have a similar process; it allows for more releases. IMO there are things that require more testing than others, and they shouldn't hold up the process of releasing code.


> We merge feature into dev when we want to show a feature to a fellow developer.

Wouldn't you just show them the feature branch?


It only makes sense to branch off production for hotfixes IMO.

In your example feature-B shouldn't have made it to dev.
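A hotfix cut from production might look like this (hypothetical branch names; the fix is merged back into both production and dev so it isn't lost on the next release):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b production
git commit -q --allow-empty -m "release 1.0"
git branch -q dev

git checkout -q -b hotfix-login production   # branch off production, not dev
git commit -q --allow-empty -m "fix login bug"

git checkout -q production && git merge -q --no-ff hotfix-login -m "hotfix: login"
git checkout -q dev        && git merge -q --no-ff hotfix-login -m "hotfix: login"
```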


Why would we need a dev branch at all then, and not just work on our local branches and merge with staging?


We might be discussing semantics, but try to see staging as a pre-production environment. You want to test the changes prior to deployment in an environment that is as close to production as possible, clean and accurate, while develop stays lean and dirty.

For some projects, though, this is a bit overkill, and if you don't think it's important, chances are you can get away with something more akin to the branching model rorykoehein posted.


In theory yes, a feature that requires more work really shouldn't make its way into dev.


But I don't always know it will require more work.


We have the same setup, except that you never merge your feature branches anywhere but development. Then dev is merged to staging and deployed for testing and then, when that's all ready to go (potentially after multiple dev->staging merges to fix issues), staging is merged to production and deployed.

It seems really odd to me to branch from master for a feature branch, then push that to master, then push the feature branch to staging, and then push the feature branch to production. Is that what you are doing?


Why do you have a branch for staging? Staging, to me, means deployment testing on an environment that matches (but is not actually) production. That means code should be considered ready for production before it lands on the staging server. Otherwise it's just another development server with a different name.

edit: spelling


My team feature-branches off master, which automatically merges into production if all tests pass on CI.


This looks like a very sensible workflow in general. The one thing that worries me is long lived feature branches. This workflow is fine as long as features never take more than a few weeks. If it takes longer you end up with bugs that have been fixed on the development branch still appearing on various feature branches. Also if you do a lot of refactoring on a feature branch while development goes on in parallel there will be difficult merges. It is especially problematic when work on a feature stops for some reason and an old branch sits around for months.

In general code should be merged to the development branch as soon as it is working, tested and reviewed, even if not all the functionality of the feature is done.


Can't this be managed by feature branch owners regularly merging from development? If I was coming back to a feature branch that had been dormant for a while, the first thing I'd do is pull from development to catch up. Yeah, it might break the feature branch, but better to break it there than actually on development.
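Catching a dormant feature branch up, as described above, is just a merge in the other direction (hypothetical branch names; conflicts surface on the feature branch rather than on development):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b development
git commit -q --allow-empty -m "base"

git checkout -q -b my-feature
git commit -q --allow-empty -m "feature work"

git checkout -q development                  # meanwhile, development moves on
git commit -q --allow-empty -m "bug fix"

git checkout -q my-feature                   # coming back: catch up first
git merge -q development -m "sync with development"
```

After the merge, the bug fix from development is present on the feature branch, and any breakage shows up there rather than on development.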


Did you miss the part about rebasing?


It looks like rebasing is only done after the feature is complete. Maybe that was just a simplification.


Why does it matter? All the fixes will already be there after rebase anyway.


The point is that if you're not continually merging development down into your feature branch, then rebasing becomes overly difficult. Merging can introduce bugs if not done correctly.

At my company, we eschewed feature branches about a year ago. We have two branches - development and main, with a branch ("tag" in git speak, we use TFS) created for each release of our product (we're a software company, not a service company). Development happens in the development branch, and is merged up into the main branch after code freeze.

This does not preclude us from releasing often. With judicious use of feature toggles, it's simple to release with a feature that's not done.

Feature branches would just create a mess for us as there's necessarily many communicating parts between teams, and the number of feature branches required to keep them all in sync would be exponential.
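The feature-toggle idea above can be sketched in a few lines of shell (a hypothetical config-file toggle, not the commenter's actual mechanism): the code for an unfinished feature ships on the mainline but stays switched off.

```shell
set -e
features=$(mktemp)                   # hypothetical toggle store
echo "new_onboarding=off" > "$features"

feature_enabled() {
  grep -q "^$1=on$" "$features"
}

if feature_enabled new_onboarding; then
  echo "new onboarding flow"
else
  echo "legacy onboarding flow"      # toggle off: the old path runs in production
fi
```

Flipping the toggle in config enables the feature without any branch gymnastics, which is what makes releasing with unfinished work safe.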


1. "Don’t optimize for the short-term" vs "Keep it simple and straightforward".

Often the simplest thing may not work in the long run. But it's better to err on the side of keeping things simple.

2. Code reviews + off mainline development is a potential disaster in waiting unless the review process is fast. The fastest review process I have seen is automated testing and pairing when someone is working on something that cannot be caught easily by automated tests like synchronization.


>> 1. "Don’t optimize for the short-term" vs "Keep it simple and straightforward".

>> Often the simplest thing may not work in the long run. But it's better to err on the side of keeping things simple.

IMO, the best reason for simplicity is that simple things can be torn out and replaced more easily than complex things. So the simplest thing isn't necessarily right, but you haven't wasted much resources by choosing it, so you've left your options open. You always want to have options.


Ah, that's what I meant by "erring on the side of keeping things simple". It is okay if you make decisions that are not optimal in the long run, but are simple since they can be torn apart.


Nothing special. It's just the normal workflow when you use common sense, no? Except for the development branch, because master is the new development branch.


Reading things like this make me long for the simplistic development environment of a web startup. Almost none of this would fly in an "Enterprise IT" environment, except in the very rare case where the team is legitimately building (and owning) a product rather than creating business process optimization crutches.


Dedicated QA team? Person? Feels like they are missing a crucial piece: non-devs testing and accepting features. Only my opinion, but in my experience developers have a different definition of done than non-developers, so having both QA and even some user acceptance testing goes a long way.


Also this article is a year old. Can we get a title change? and maybe someone from ZenPayroll can chime in how their process has changed, if it has?


And THIS is how you write a job posting. Explaining the setup and perhaps some of the culture lets potential candidates know right away if that's an environment matching their style. (And it doubles as an interesting article too.)

Disclaimer: satisfied ZenPayroll user here.


This is the first time I've ever seen a write up of a development/deployment process that makes any sense. Good on ZenPayroll for having their fundamentals straight.


This was an infomercial in the form of a blog... disappointing


> The name of this test server is Leeroy, who does all this with the help of Jenkins.


No usability testing?


Presumably this happens at the spec stage. At least it's implied.


Error 1008 for me.



