
Builds on https://agilemanifesto.org/, maybe overly so.

I disagree with some of these points:

- Minimal Viable Products over prototypes: depends on the use case. Prototypes within a timeboxed evaluation are helpful and don't carry the overhead of delivery. Maybe better: prototypes during discovery.

- APIs over databases: nah, use both.

- Clever use of computation over convenient assumptions: if the assumptions are well founded, calibrated against external references, etc., then there's no issue. For example, you don't need to do original research to know how many joules it takes to heat a gram of water by one degree Celsius.

- Dashboards over reports: depends on the use case. Dashboards generally limit user choice.

- Validation, scrutiny and repeatability over convention and ad verecundiam: reasonable (though "argument from authority" is the more common name than ad verecundiam).
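The water-heating example above is just the textbook specific-heat formula Q = m·c·ΔT; a minimal sketch, using the standard reference value of ~4.184 J/(g·°C) for water:

```python
# Specific heat of water: a well-established reference value, no original
# research needed -- exactly the kind of "well founded assumption" meant above.
SPECIFIC_HEAT_WATER = 4.184  # joules per gram per degree Celsius

def joules_to_heat_water(mass_g: float, delta_c: float) -> float:
    """Energy in joules to raise mass_g grams of water by delta_c degrees C."""
    return SPECIFIC_HEAT_WATER * mass_g * delta_c

# Heating 1 kg of water by 1 degree C takes roughly 4184 J.
print(round(joules_to_heat_water(1000, 1)))
```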



I lean towards your viewpoint as well. Their assumptions (axioms, postulates?) are highly controversial, while the actual principles seem quite sound to me.

The only issue I can see is with #5. I would argue that for decision making you absolutely need a single metric; otherwise the process collapses into bickering over which measure matters more at the moment (often for political or interpersonal reasons). The point is also vague about what exactly is being evaluated (product quality, which means what?). For launching products or running A/B tests, aim for a single metric as your decision framework. If you must have more than one, then be explicit about the tradeoffs in a flowchart: e.g., "if x > 0, we launch; if x <= 0 but y > 2%, we launch; otherwise no launch".
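That flowchart can be made completely unambiguous by writing it down as code. A minimal sketch, where x and y are hypothetical placeholders for the two launch metrics (y expressed as a fraction, so 2% == 0.02):

```python
# Encode the explicit-tradeoff flowchart as a single deterministic rule,
# so there is nothing left to bicker over at decision time.
def launch_decision(x: float, y: float) -> bool:
    """Launch if x > 0; if x <= 0, launch only when y exceeds 2%."""
    if x > 0:
        return True
    return y > 0.02

print(launch_decision(0.5, 0.0))    # primary metric positive -> launch
print(launch_decision(-0.1, 0.03))  # fallback metric above 2% -> launch
print(launch_decision(-0.1, 0.01))  # neither condition met -> no launch
```

The value is less in the code itself than in forcing the tradeoff to be stated before the results come in.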



