
> I start with the lowest-level, simple operations. I then build in layers on top of the prior layers.

I was discussing this with a colleague this week: starting with the lower-level details vs starting from the more abstract/whole API. I prefer to start by writing a final API candidate and its integration tests, and only write/derive specific lower-level components/unit tests as they become required to advance the integration test. My criticism of starting bottom-up is that you may end up leaking implementation details into your tests and API because you have already defined how the low-level components work. I have even seen cases where the developer ends up making the public API more complex than necessary due to the nature of the low-level components he has written. Food for thought!
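A minimal sketch of what that flow can look like in practice (pytest assumed; UrlShortener and its methods are hypothetical names, not anything from the thread):

    # Step 1: write the integration test against the public API we want.
    # Step 2: write only as much lower-level code as the test demands.

    class UrlShortener:
        # Derived from the test below; internals stay private, so the
        # test never leaks how codes are generated or stored.
        def __init__(self):
            self._urls = {}

        def shorten(self, url):
            code = str(len(self._urls))
            self._urls[code] = url
            return code

        def resolve(self, code):
            return self._urls[code]

    def test_shorten_then_resolve_roundtrip():
        service = UrlShortener()
        code = service.shorten("https://example.com/a/long/path")
        assert service.resolve(code) == "https://example.com/a/long/path"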



This is the approach suggested in the book "Growing Object-Oriented Software, Guided by Tests". You start with an end-to-end integration test and then write code to make it pass, testing each component with unit tests along the way. I find it useful myself, although I don't always adhere to its recommendation to start off with a true end-to-end test (sometimes I can identify two or more layers that can/should be worked on separately).


Your point of view resonates with me.

Over the years, I've learned to test "what a thing is supposed to do, not how it does it". Usually this means writing high-level tests, or at least drafting what using the API might end up looking like (be it a REST API or just a library).

This approach comes with the benefit that you can focus on designing a nice-to-use, modular API without worrying about how to implement it from the start. And it tends to produce designs with fewer leaky abstractions.
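A sketch of that distinction (pytest assumed; Cart is a hypothetical example, not from this thread):

    import pytest

    class Cart:
        def __init__(self, tax_rate):
            self._tax_rate = tax_rate
            self._items = []

        def add(self, price):
            self._items.append(price)

        def total(self):
            return sum(self._items) * (1 + self._tax_rate)

    def test_cart_total_includes_tax():
        # Exercises only the public surface ("what"), so the internal
        # representation ("how") is free to change without breaking it.
        cart = Cart(tax_rate=0.1)
        cart.add(price=100)
        assert cart.total() == pytest.approx(110)

    # A test like `assert cart._items[0] == 100` would instead pin the
    # implementation and break on any internal refactor.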

Of course YMMV.


I just updated our “best practices” documentation to include the recommendation that tests be written against a public API/class methods and the various expected outcomes rather than testing each individual method.

I think the latter gives an inflated sense of coverage (“But we’re 95% covered!”) but makes the tests far more brittle. What if you update a method and the tests pass but now a chunk of the API that references that method is broken, but you only happened to run a test for the method you changed?
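A hypothetical sketch of how that failure mode plays out (pytest assumed):

    def normalize(key):
        return key.strip().lower()

    def lookup(table, key):
        # Public API: callers expect case/whitespace-tolerant lookups.
        return table.get(normalize(key))

    # A per-method unit test such as `assert normalize(" A ") == "a"`
    # stays green even if normalize() is later changed to also strip
    # hyphens, which silently breaks lookup() for hyphenated keys.
    # A test written against the public API catches it:
    def test_lookup_tolerates_case_and_whitespace():
        table = {"foo-bar": 1}
        assert lookup(table, "  Foo-Bar ") == 1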

I like to think I’m taking a more holistic view but I could also be deluding myself. =)


Why doesn't your best practices document include a requirement to run the whole test suite?


Ah, I guess that wasn’t clear. Typically, yes, all tests would be run, though, as someone else mentioned, sometimes during development a smaller subset may be run for the sake of time (but all tests would still be run prior to QA and final release).


In some contexts there may be valuable tests that take long enough to run they should be run out-of-band. That said, I don't see where the parent says they don't run the whole test suite.


> now a chunk of the API that references that method is broken, but you only happened to run a test for the method you changed


Ah, yeah, not sure how I missed that. Maybe they meant the test for the broken method isn't run because it doesn't exist, but I very much agree that that's not the most natural interpretation.


Sorry, this wasn’t totally clear and I thought “run all tests” was implied. What I was trying to get at was the difference between “we have a suite that includes individual tests for every single separate method so we have great coverage” vs “we have a suite of tests that run against public APIs that still manage to touch the methods involved”. The former may test “everything” but not in the right way, if that makes sense.

I should have said “you’ve only written a test” rather than only having run a test.


I agree with this approach. It seems that one of the consequences of "Agile" is a general reluctance to invest a fair bit of time upfront thinking about interfaces. It's as if the interface design process somehow became associated with documentation, regardless of whether any non-code documentation is ever created.


I'm not convinced this is a consequence of agile so much as a consequence of bad pragmatism or expectation management. In terms of setting expectations, you are not doing anybody any favours by rejecting all up-front thinking in favour of rolling with the punches. And in the same way, you are not being pragmatic by cutting every corner you possibly can - pragmatism is just as often about deciding when to spend time up front to save time later on, not just sacrificing everything for the short term (which I think is a common and valid criticism laid against poorly implemented agile workflows).

That said, I don't think TDD is a good way to figure out your API. It has its value for sure, but not if you take a purist or dogmatic attitude towards it. In any case, it seems to assume that more tests are always better (hence you start every new piece of code with a corresponding test), when I'd argue that you want a smaller suite of focussed and precise tests.


> That said, I don't think TDD is a good way to figure out your API.

There are plenty of ways to do it, and one that has worked well for me is practicing TDD. So it is indeed one good way of doing it.

There are also people who are inexperienced with TDD or who misunderstand it and implement it poorly. There are people who are just not ready to design APIs yet who are forced to learn on the job. None of those things invalidate what I said.

If TDD doesn't work for you then you'll have to find what does and I hope that you test your software in a sufficient manner before deploying it.

> In any case, it seems to assume that more tests are always better

I think you're projecting your own opinions onto the matter. There's nothing about TDD that prescribes what kind of tests one should write or how many. You could write acceptance tests and work from there if that suits you better. It's test-driven development: the operative word is what we should focus on. Testing is a form of specification, and we should be clear about what we're building and verify that it works as intended.

It comes from an opposition to other forms of verification that used to be popular: black-box/white-box testing, where the specifications were written long before the development phase began and the testing happened long after the software was developed.

That's where it's most useful: as a lightweight, programmer-oriented, executable specification language.
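Read concretely, "executable specification" can be as simple as tests whose names and bodies state the behaviour (hypothetical Account example, pytest assumed):

    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

    # Each test reads as a line of the spec and is runnable in CI.
    def test_new_accounts_start_with_zero_balance():
        assert Account().balance == 0

    def test_deposits_increase_the_balance():
        account = Account()
        account.deposit(50)
        assert account.balance == 50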


> There are plenty of ways to do it and one way that has worked well for me is in practicing TDD

Unless you're talking about the default state of failing tests when no implementation exists, I'm a bit confused as to how TDD helps with interface design.


The best of both worlds is to design top-down and build bottom-up. By design I don't mean something abstract; the design should be translated into code sooner rather than later.
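One hypothetical way to translate that into code early: make the top-level interface real and runnable immediately, with stubs marking where the bottom-up work will land (all names here are illustrative):

    class ReportService:
        # The top-down design, captured as code on day one.
        def generate(self, month):
            rows = self._fetch_rows(month)    # built (bottom-up) first
            summary = self._summarize(rows)   # then this layer
            return self._render(summary)      # then this one

        def _fetch_rows(self, month):
            raise NotImplementedError

        def _summarize(self, rows):
            raise NotImplementedError

        def _render(self, summary):
            raise NotImplementedError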



