Unit tests are more commonly written to future-proof code against issues down the road, rather than to discover existing bugs. A code base with good test coverage is considered more maintainable: you can make changes without worrying that they will break something in an unexpected place.
I think automating test coverage would be really useful if you needed to refactor a legacy project — you want to be sure that as you change the code, the existing functionality is preserved. I could imagine running this to generate tests and get to good coverage before starting the refactor.
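A hand-rolled version of the same idea is a characterization ("golden master") test: record what the current code returns for a spread of inputs, then assert that the refactored code still returns the same thing. A minimal sketch, assuming a hypothetical legacy function `price_order` in a module `legacy_pricing` (all names made up):

```python
# Characterization-test sketch: pin down the *current* behavior of legacy code
# before refactoring. `legacy_pricing.price_order` is a hypothetical function
# standing in for whatever is about to change.
import json
import pathlib

import pytest

from legacy_pricing import price_order  # hypothetical legacy module

GOLDEN = pathlib.Path("tests/golden/price_order.json")

CASES = [
    {"sku": "A-100", "qty": 1, "coupon": None},
    {"sku": "A-100", "qty": 10, "coupon": "SAVE10"},
    {"sku": "B-200", "qty": 0, "coupon": None},
]


def test_price_order_matches_recorded_behavior():
    if not GOLDEN.exists():
        # First run: record the current outputs as the baseline, then fail so
        # the recorded file gets reviewed and committed deliberately.
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_text(json.dumps([price_order(**c) for c in CASES], indent=2))
        pytest.fail("Recorded new golden file; review and commit it.")
    expected = json.loads(GOLDEN.read_text())
    assert [price_order(**c) for c in CASES] == expected
```

None of the assertions care how `price_order` works internally, so during the refactor they only fail if observable behavior actually changes.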
>Unit tests are more commonly written to future-proof code against issues down the road, rather than to discover existing bugs. A code base with good test coverage is considered more maintainable: you can make changes without worrying that they will break something in an unexpected place.
The problem is that a lot of unit tests could accurately be described as testing "that the code does what the code does." If future changes to your code also require you to modify your tests (which they likely will), then your tests are largely useless. And if tests for parts of your code that you aren't changing start failing when you make code changes, that means you made terrible design decisions in the first place that left your code too tightly coupled (or with too many side effects, or with something like global mutable state).
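To make "testing that the code does what the code does" concrete, here is a minimal sketch (all names hypothetical) of that kind of test: it mocks out the collaborators and asserts the internal call sequence, so any reorganization of the internals breaks it even when the observable behavior is unchanged.

```python
# Sketch of an implementation-coupled test (hypothetical names). It asserts
# *how* the result is produced, not *what* the result is, so it fails as soon
# as the internals are rearranged, even when behavior is identical.
from unittest.mock import patch

from orders import place_order  # hypothetical module under test


def test_place_order_calls_repository_and_mailer():
    with patch("orders.OrderRepository") as repo, patch("orders.Mailer") as mailer:
        place_order(customer_id=42, sku="A-100", qty=1)

        # These assertions encode the current wiring, not the spec: rename a
        # method, batch the save, or move the email into a queue and the test
        # fails with no actual regression.
        repo.return_value.save.assert_called_once()
        mailer.return_value.send_confirmation.assert_called_once_with(42)
```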
Integration tests are far, far more useful than unit tests. A good type system and avoiding the bad design patterns I mentioned handle 95% of what unit tests could conceivably be useful for.
I disagree. In my experience, poorly designed tests test implementation rather than behavior. To test behavior, you must know what is actually supposed to happen when the user presses a button.
One of the issues with getting high coverage is that tests often need to be written against the implementation rather than the desired outcomes.
Why is this an issue? As you mentioned, testing is useful for future-proofing a codebase and making sure that changing the code doesn't break existing use cases.
When tests look for desired behavior, this usually means that unless the spec changes, all tests should pass.
The problem is when you test the implementation - suppose you do a refactoring, a cleanup, or extend the code to support future use cases, and the tests start failing. Clearly something must be changed in the tests - but what? Which cases encode actual important rules about how the code should behave, and which ones were just tautologically testing that the code did what it did?
This introduces murkiness and diminishes the value of tests.
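For contrast, a behavior-level test encodes a rule from the spec rather than the current wiring; a minimal sketch with hypothetical names:

```python
# Sketch of a behavior-level test (hypothetical names): it asserts the rule
# the spec cares about, not how the code computes it, so it keeps passing
# through refactors and only needs to change when the spec itself changes.
import pytest

from pricing import price_order  # hypothetical module under test


def test_save10_coupon_takes_ten_percent_off():
    full_price = price_order(sku="A-100", qty=10, coupon=None)
    discounted = price_order(sku="A-100", qty=10, coupon="SAVE10")
    assert discounted == pytest.approx(full_price * 0.9)
```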