> While you notice immediately that something is on time, you'll also notice really quickly that something is of bad quality, unreliable, inconsistent, low-performing, etc.

Not in my experience. Our team has a couple of core perf metrics with alarms on them, like page load time and missed frames, but it's easy to do really bad things that won't trigger those alarms. Or to do them in a way where the automated tests exercise different content than the real users who will see the commit weeks later. E.g. someone commits a change to feature X that locks up the screen, but the test user pool never uses feature X, or never puts content into it and only ever sees the empty-state screen.
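To make that blind spot concrete, here's a minimal sketch (hypothetical names, not our actual harness): if every account in the test pool is empty, the smoke test only ever measures the cheap empty-state path, and a regression in the populated branch never trips an alarm.

    // Hypothetical sketch of the blind spot: the synthetic test pool has no
    // content in feature X, so only the empty-state branch is ever exercised.
    data class TestAccount(val items: List<String>)

    class FeatureXScreen(private val items: List<String>) {
        fun render(): String =
            if (items.isEmpty()) "empty-state"        // cheap path the tests always hit
            else items.joinToString("\n")             // real-user path, never measured
    }

    fun smokeTestFeatureX(account: TestAccount) {
        val rendered = FeatureXScreen(account.items).render()
        check(rendered.isNotEmpty()) { "feature X failed to render" }
    }

    fun main() {
        // Every account in the test pool looks like this, so the test passes
        // no matter how badly the populated path regresses.
        smokeTestFeatureX(TestAccount(items = emptyList()))
    }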

It's also quite common for developers here to write stuff that works for 99% of users but falls over for the rest. Like today, I fixed an issue where tapping a button on one screen to go to another really fast, within about a second of the screen appearing, crashes because of a race condition. Testers aren't going to notice that; it just shows up in the company's overall crash rate, which is spread across 4000 developers. Automated UI tests did catch it, but the responsible team had just filed the crash stack trace as a JIRA ticket in their backlog and left it to sit for months. Similarly, today we had a production issue because someone wrote code that only works for users who had already accepted a certain terms-of-service screen.
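For what it's worth, that kind of bug usually has a simple shape (a minimal sketch with hypothetical names, assuming a Kotlin codebase, not our actual code): a tap that lands before async screen setup finishes touches uninitialized state, and the fix is to drop or defer early taps.

    // Sketch of the race and its guard: taps arriving before setup completes
    // are ignored instead of dereferencing half-initialized state.
    class ScreenController(private val navigate: () -> Unit) {
        @Volatile
        private var ready = false       // flipped by the async setup callback

        fun onSetupFinished() { ready = true }

        fun onTap() {
            if (!ready) return          // tap within the first ~second: drop it, don't crash
            navigate()
        }
    }

    fun main() {
        val controller = ScreenController(navigate = { println("navigated") })
        controller.onTap()              // early tap: silently dropped
        controller.onSetupFinished()
        controller.onTap()              // normal tap: navigates
    }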

Shipping a feature is rewarded heavily. Not screwing up the app for edge cases, for perf, and for the people who have to implement the next feature after you? Good test coverage? Not rewarded at all. If you dare to give an estimate that includes full test coverage, PMs will just take you off the project and pick a developer who doesn't.


