In my experience when you do this you end up with anaemic tests that are basically useless. As the page comes close to admitting, all you do is write the same code twice. The result is a lot of wasted effort on unit tests that never fail.
In an app designed like this the integration tests are the useful ones, because they're more contentful and test assumptions that are more likely to be wrong. The article flatly asserts that code written in this extremely decoupled style will require fewer integration tests, but I see no evidence for this. And you still need to figure out how to make them run fast.
This has been my experience as well. It is far more useful to test scenarios, which generally involves some amount of integration. This also ensures the most important level of decoupling, that of the test from the system. The author's approach to TDD results in tests that are heavily coupled to implementation. This may be fine during design but they become a huge liability over the long run.
My experience is that automated testing should test the system not the implementation.
The _unit_ test for the service object might be anemic, but it's far from useless IMO. The unit test helps you design the object under test and it is written first if you are practicing TDD.
The argument for having fewer integration tests goes like this: if it's hard to write isolated unit tests for some classes, you end up testing many more execution paths in integration tests.
> The _unit_ test for the service object might be anemic, but it's far from useless IMO. The unit test helps you design the object under test and it is written first if you are practicing TDD.
But it doesn't help when your test is solving the same problem. If your test says "this object calls collaborator1 and collaborator2, then passes both results to collaborator3" then that's exactly the same design problem you'd have if you were to just write the object straight-up.
This is why people talk about BDD rather than simply TDD. It's not about having tests, it's about identifying the user-facing behaviour that you want, testing that, and then writing your objects to conform to it. This really does help your design, because it lets you start with the appropriate interface on the user side and work down from there, and the resulting object boundaries can be better than the ones you'd have thought of if you started with design. But if you restrict your test to a single class and mock its collaborators then that implies you've already decided what your class boundaries are; in effect you've already done the most important part of the design.
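To make that concrete, here is a sketch (with hypothetical class and collaborator names, not anything from the article) of the kind of service object being discussed. The "unit test" for it can only assert the same wiring the method body already spells out:

```ruby
# Hypothetical service object whose only job is coordinating collaborators.
class ReportBuilder
  def initialize(fetcher, parser, formatter)
    @fetcher, @parser, @formatter = fetcher, parser, formatter
  end

  def build(id)
    raw  = @fetcher.fetch(id)
    data = @parser.parse(raw)
    @formatter.format(data)
  end
end

# Hand-rolled stubs standing in for RSpec doubles. Note that the only thing
# a unit test can check here is "fetch, then parse, then format" - exactly
# the design decision the method body already made.
Fetcher   = Struct.new(:result) { def fetch(_id); result; end }
Parser    = Struct.new(:result) { def parse(_raw); result; end }
Formatter = Struct.new(:result) { def format(_data); result; end }

builder = ReportBuilder.new(Fetcher.new("raw"),
                            Parser.new({ a: 1 }),
                            Formatter.new("report"))
```

Asserting that `builder.build(7)` returns `"report"` verifies the plumbing, but the expectations mirror the implementation line for line, which is the duplication being complained about.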
>The argument for having fewer integration tests goes like this: if it's hard to write isolated unit tests for some classes you end up testing many more execution paths in integrations tests.
True enough - if there are things you can't test in a unit test then you need to test them in an integration test. So if you have a complex piece of logic that you want to run several tests of, it's best if you can test that piece of logic in isolation - this is exactly what unit tests are good at.
But you still need to test all your integration pathways. All too often a class doesn't contain any logic as such, particularly when you design the way the article suggests - it's just an integration between its collaborators. For such a class there's no point in unit testing, because the only thing the class actually does is integration.
I'd argue that even a test which is a copy and paste of the implementation method - literally the same code, twice - has value, because you can then refactor the implementation as much as you like in future and know that you're covered. Nothing says that because they are the same now, they will be forever.
As someone who has personal experience making this mistake, I disagree with you. You lose the most important purpose of the test: The confidence that your code works. If you write the code twice all you know is that your code hasn't changed.
And in my personal experience, you CAN'T refactor, because the test is so tightly aligned with the production code. E.g. moving a variable declaration up or down a line will break a test even though nothing actually changed.
Pluggable systems (loose coupling) are the easiest way to create a system that is cheap and easy to maintain. I've found that a lot of Rails developers in particular don't see the benefits of this and they really, really like tightly coupled systems because of the convenience of it all.
Benefits like fast, easy testing, easier maintenance, and swappable external pieces (databases, queues, external APIs) are apparently not enough to move people away from Rails.
I wrote a pattern (http://obvious.retromocha.com), based on ideas from Uncle Bob, Alistair Cockburn, and Kent Beck, that I think makes writing pluggable systems a lot easier. But as with similar structures like Hexagonal Architecture, Rails devs seem not to see the value in it until their app is a complete mess, at which point they'd rather add more Rails code than fix the underlying problems.
For a lot of developers, now that rails is popular, rails IS the architecture and things that go off the rails are either a waste of energy or too difficult.
Rails is what Kent Beck calls a connected architecture and as his incredibly scientific graph describes, the cost of change spikes dramatically over time compared to a modular architecture: http://www.threeriversinstitute.org/blog/wp-content/uploads/...
Until developers adopt modular or at least more service-oriented designs, nobody should be surprised by slow tests, features taking too long to ship, or the cost of shipping code rising dramatically over time. It's mostly our own fault.
Here's Uncle Bob talking on his version of the idea extensively: https://www.youtube.com/watch?v=WpkDN78P884 The main idea is to separate the system into entities, interactors and boundaries, which correspond roughly to model, controllers and interfaces+DTOs (Data Transfer Objects), but with the difference that model classes are not exposed through the interfaces. Rather, the DTOs are used to communicate the changes in the data.
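Roughly, that split might look like the following in Ruby (my own illustrative sketch with made-up names, not code from the talk):

```ruby
# Entity: a pure domain object that never crosses a boundary.
class Account
  attr_reader :id, :balance

  def initialize(id, balance)
    @id, @balance = id, balance
  end

  def deposit(amount)
    Account.new(id, balance + amount)
  end
end

# DTO: a dumb struct that carries data across the boundary instead of the entity.
AccountDto = Struct.new(:id, :balance)

# Interactor: one use case behind a boundary, speaking only DTOs to the outside.
class DepositFunds
  def initialize(repository)
    @repository = repository
  end

  def call(account_id, amount)
    account = @repository.find(account_id)       # repository returns an entity
    updated = account.deposit(amount)
    @repository.save(updated)
    AccountDto.new(updated.id, updated.balance)  # the entity never leaves
  end
end
```

Whatever sits above the boundary (REST layer, CLI, tests) only ever sees `AccountDto`, which is what keeps the model classes from leaking through the interfaces.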
We have been using this approach for a year now, with REST on top, and so far it has worked beautifully. The entities map naturally to REST resources, and the DTOs make nice request and response classes.
Armed with CQRS (now I sound enterprise-y), the data storage is easy to work with using the same DTOs (the data storage and its interface do not know anything about the entities either). At the DB implementation level, we work on the DTOs, which are trivial to store to the database (even without an ORM, although one does remove much of the boilerplate). Only one class - the entity repository - is needed to convert the DTOs to entities, and this only happens on create, update, and delete. Reads (due to CQRS) go directly to the DB without instantiating any entities - neat. We also use the 'unit of work' pattern to abstract the notion of a transaction on create, update, and delete operations, which simplifies the interactor code and hides the nature of the data storage (making it swappable in this respect too).
But to stay on topic, our tests are fast too as the interactors can be tested by injecting an in-memory database as a data storage to test the business logic, and the interactors have no dependencies on any frameworks, web or otherwise. Testing the interfaces makes BDD feel natural and also supports clear separation of the REST layer (the communication channel) and the interactors (business logic).
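As a hypothetical illustration of that testing setup (my own sketch, not the poster's actual code), an interactor that only sees a storage interface can be exercised against a trivial in-memory store:

```ruby
# The interactor depends only on a storage interface, so tests can inject
# a plain in-memory store instead of a real database.
UserDto = Struct.new(:id, :email)

class InMemoryUserStore
  def initialize
    @rows = {}
  end

  def insert(dto)
    @rows[dto.id] = dto
  end

  def find(id)
    @rows[id]
  end
end

class RegisterUser
  def initialize(store)
    @store = store
  end

  def call(id, email)
    raise ArgumentError, "already registered" if @store.find(id)
    @store.insert(UserDto.new(id, email))
  end
end

# No framework, no database - exercising the business logic stays in milliseconds:
store = InMemoryUserStore.new
RegisterUser.new(store).call(1, "a@example.com")
```

Swapping `InMemoryUserStore` for a SQL-backed implementation with the same `insert`/`find` interface is the "swappable data storage" part of the argument.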
Where I work, we just had a long internal debate about the usefulness of monolithic frameworks such as Rails and Symfony. I argued against the frameworks, but much of the team is committed to using them. The convenience of having everything in one place apparently is important to a lot of people, despite all the problems that are caused by that. I posted the debate here:
It's not actually Rails that's the problem, often: it's a cultural issue. You can build a Rails app that has a very thin 'rails' layer and has a super well decoupled domain model layer.
Absolutely, the Rails app I'm working on right now is just like this. Still feels like Rails but the complex logic and interactions can run without it.
I would be interested in more details on how you architect things - we've been in the process of moving our app's business logic into service objects, as 2000-line models have become a bit unmanageable :D
May I commend to you the recorded mutterings of a gentleman named Fred George, who advocates "Programmer Anarchy" and "micro-web-services" (the latter of which is a nice idea).
Surprisingly, even though the original version was built in ruby, people are taking the structure and using it in .NET, Java, etc. A lot of things we had to build in ruby already exist as things like interfaces in other statically typed languages.
In a language like Scala Obvious is literally just a way you would organize your program, not so much a library. It already has everything else you need, even things like immutability (if you're into that kind of thing). I just wish it compiled faster.
Obviouscasts is built on Obvious. I built a cool pluggable newsletter tool where you could trivially plug in Mailchimp, Mailgun, Sendgrid, or just your boring SMTP setup. It was a Heroku-deployable newsletter, so you didn't have to pay monthly fees to a company like AWeber or Mailchimp just to collect emails for a list that won't be used very often.
I'm working on building out some new things in go with Obvious architecture behind them.
At my old job, some of the Obvious structure ended up powering the ruby services behind StBaldricks.org, which does $30+ million in donations and millions of page views a year.
I honestly don't know of many more examples beyond that, but it's a very small project and it was open sourced in January.
I looked at the source and saw that you basically built a way to declare a method which asserts a contract at runtime.
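The idea could be sketched roughly like this in plain Ruby (my own approximation of the concept, not Obvious's actual API):

```ruby
# A method wrapper that checks a declared contract on inputs and outputs
# at runtime. Hypothetical names; illustrative only.
module Contract
  def contract(name, takes:, returns:)
    original = instance_method(name)
    define_method(name) do |*args|
      # Check each argument against its declared type.
      args.zip(takes).each do |arg, type|
        raise TypeError, "expected #{type}, got #{arg.class}" unless arg.is_a?(type)
      end
      result = original.bind(self).call(*args)
      # Check the return value as well.
      unless result.is_a?(returns)
        raise TypeError, "expected #{returns} return, got #{result.class}"
      end
      result
    end
  end
end

class Calculator
  extend Contract

  def add(a, b)
    a + b
  end
  contract :add, takes: [Integer, Integer], returns: Integer
end
```

Calling `Calculator.new.add("1", 2)` then fails fast with a `TypeError` instead of producing a confusing result downstream, which is the main payoff of runtime contracts in a dynamic language.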
I like the idea of an object or collection of objects having a generic interface that can be mapped to all sorts of things, such as a command line or an HTTP POST, etc. I am also at a job where we are struggling with the ol' "monolithic codebase", and we're investigating solutions such as this architecture.
I was mostly with the author right up until he shows an example of the worst possible unit test: testing the implementation and not the intent.
This feels like how Spring must have been born. Trying to think of an alternative, I think I would be happy with the fat controller example originally given. It's simple enough and straightforward.
Perhaps it's just bad examples all the way around...
I'd be happier with the fat model version than the fat controller. At least that way is amenable to using the same code via multiple different user interfaces: the normal web site, a mobile app, a data import, an admin path, and something else that hasn't been thought of yet. In general, designing things as though there will be more user interfaces in the future is helpful, and tends to push logic out of the controllers.
I see a lot of unit tests like the ones described in this article and have some doubts about their effectiveness for doing much more than spotting syntax errors. Integration testing is still important if the goal is avoiding defects getting out to production.
I think this was only a poor example because the method under test was a "director" style method (not sure what the name of it would actually be) that simply forwards a bunch of messages/commands to other objects and doesn't do much actual logic/computation.
Tests for these "director" style methods must by nature assert that all the right messages were forwarded to the right objects under the right circumstances, so yes, you do end up with tests that look a lot like what they're testing.
Agreed that ideally you are only testing ins and outs though, but I think this case was exceptional.
It is better not to have a test at all than to have a test that looks a lot like what it is testing. I never understood the crazy push for 100% unit test coverage that some advocate. Unit tests are just one tool among many; use them with care and where appropriate.
As I mention in the post, I'd also be perfectly happy with the original if the rest of the application was small and simple. It's hard to come up with an example that's both meaningful and not too long for a blog post.
As for the testing issue, we have a controller-like object there, and a controller's job is to coordinate sending messages between collaborators, so I don't think a unit test for this object should test anything other than these interactions. This also might be a consequence of using a simple example.
Well what version of JEE are you using? Compare that to writing unit tests for the version of JEE that existed when Spring was created. A lot of what makes EJBs easier to test today is because of influence by dependency injection frameworks (like Spring) from yesterday.
I don't know (and don't care) whether this makes your unit tests faster; you shouldn't write code like that. He took something that should be simple and made it really complicated. The execution time of your tests shouldn't dictate how you design and write your application, especially if it makes things harder to maintain and understand.
Edit: I love tests and they should be fast so that you can run them every time before pushing your changes but as I said they shouldn't dictate how you design and write your application.
The point here is not that the tests' execution time should dictate the design. On the contrary, the idea of this post was to stress the importance of good design, and that fast tests are just a side effect of it. Now, you might disagree with me on what constitutes good design, but that's a different issue.
Don't get me wrong, I agree with the idea that you should look at the root cause of why your tests are slow instead of trying to "fix" the tests. I just feel that what you demonstrated is bad design. So you ended up with bad design and fast tests.
Isn't part of the problem the fact that Ruby is ridiculously slow? I'm not even trolling - I am a Ruby programmer and have used the language for about 5 years. Seeing friends of mine (C programmers) run their 200+ file test suites in under half a second, I can't help but think how nice it must be to run your tests whenever you want without thinking of the consequences. Yes, I understand comparing C and Ruby is ludicrous; I'm just pointing out that Ruby is still extremely slow and would very much benefit from a good speed bump eventually.
Because slow tests as a symptom pretty much mean that if you're not an excellent programmer, then tough sh*t: testing will be slow and painful, which ultimately drives people away from those good practices.
Also, I find it ironic that Rails (a framework that is supposed to empower people and make them efficient) is, according to many comments in this thread and various "tech pundits", conducive to the kind of tight coupling and dependency that leads to bad design choices and slow tests.
The slowest part of most test suites is the integration and acceptance tests. Regardless of how decoupled my actual domain logic is, these will usually be the bottleneck, so I can't really agree that slow tests are always a symptom of some underlying architecture problem.
1: Added a users resource beneath the mailing list, i.e.

    resources :users
    resources :mailing_lists do
      resources :users
    end
add_foo is almost always a sign that a new nested controller should be made, and there is almost always a need for a delete as well - in this case users should be able to remove themselves from a mailing list.
2: Put the add method on the mailing list and not on the user, because that is where I'd expect the least logic to be, given that:

3: I'd put the actual mailing list mailer logic in a separate class for its configuration.
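Point 2 might sketch out like this in framework-free Ruby (hypothetical names, no ActiveRecord), with the nested controller actions reduced to one-line delegations:

```ruby
# Membership logic lives on MailingList, not on User.
class MailingList
  attr_reader :subscribers

  def initialize
    @subscribers = []
  end

  def add_subscriber(user)
    @subscribers << user unless @subscribers.include?(user)
    self
  end

  def remove_subscriber(user)
    @subscribers.delete(user)
    self
  end
end

# The nested UsersController actions then reduce to delegations, e.g.
#   create:  list.add_subscriber(current_user)
#   destroy: list.remove_subscriber(current_user)
```

Keeping add and remove as a pair on the model also makes the "users should be able to remove themselves" case from point 1 a natural DELETE on the nested resource.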
After reading Sandi Metz's POODR[1], I tend to prefer the approach in the post too. However, I must also say that the time it takes to execute unit tests, even with factories, DB access, etc., is still an order of magnitude shorter than running feature tests with Capybara. Those are by far the biggest drain on test time (and resources), and they are also the tests most difficult to write and maintain. Way too often, when we decide to change some functionality, there's a whole battle to get the Capybara tests working again.
Perhaps I'm showing my lack of knowledge or experience with Capybara, or with integration testing in general, but that's one aspect where I wish there were better tools and approaches. Having to dig through CSS/XPath selectors to click on elements or perform interactions is still a major PITA. If anybody has suggestions to share, I'd love to learn more!
I think the key here is not to try to aim for anything like full coverage with capybara, just enough to be effective. Martin Fowler has some words on this here: http://martinfowler.com/bliki/TestPyramid.html
Thanks for the link. We're pretty much using this pyramid approach intuitively, but just as a single data point - one feature test takes around 15% of the execution time of our entire test suite (of around 850 tests). This one test is quite comprehensive and touches the core functionality, but it nevertheless suffers from all the negative aspects Martin Fowler mentions.
Were we to make our unit tests run even faster, as the post suggests, the difference in execution time would become much more pronounced: several orders of magnitude.
I think feature/UI testing is still an interesting space for better tools, but of course the problem is very difficult to solve.
It's worth stating that this is the MailingListsController and there is no mention of a MailingList. It seems to me like some functionality in the "before" example could live in the MailingList class.