> It's probably still true, though, says formal methods expert
Seems like click bait. The thesis is predicated on the idea that people claim this is the result of some study. I’ve never once heard it presented that way. It’s a rule of thumb.
> The thesis is predicated on the idea that people claim this is the result of some study. I’ve never once heard it presented that way. It's a rule of thumb.
Code Complete cites eight sources to support the claim that the average cost to fix a defect introduced during requirements is 10-100x if it's not detected until after release. My qualm with Hillel's original assertion is that "They all use this chart from the 'IBM Systems Sciences Institute'" (emphasis added). I haven't personally vetted Steve McConnell's citations, but I am skeptical that they all share this common origin.
Laurent Bossavit’s The Leprechauns of Software Engineering looks into this claim (and several others) and finds that, yes, many of these studies do share a common origin, and often misquote/misrepresent it.
I'm quite sure that, over the years, I've seen this claim presented many times with a citation or at least reference pointing at a study somewhere; can't find any particular example right now, unfortunately.
(This claim sits in my memory adjacent to things like "fixed number of bugs per 1000 lines of code", in a bucket labeled "seen multiple times, supposedly came out of some study on software engineering, something IBM or ACM or such".)
If you have impressively low error rates in mind, I think they are coming from "They Write the Right Stuff":
> Consider these stats: the last three versions of the program - each 420,000 lines long - had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
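For a sense of scale, here is a rough back-of-the-envelope on the quoted figures (purely illustrative; the numbers come straight from the quote, and the per-version reading is my assumption):

```python
# Defect densities implied by the "They Write the Right Stuff" figures
# (illustrative only; assumes the counts are per 420 KLOC version).
loc = 420_000

shuttle_defects = 1          # "just one error each"
commercial_defects = 5_000   # "commercial programs of equivalent complexity"

print(shuttle_defects / (loc / 1000))     # ~0.0024 defects per KLOC
print(commercial_defects / (loc / 1000))  # ~11.9 defects per KLOC
```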
I don't recall that. What I had in mind was this result being used to argue for higher-level programming languages. The argument, as I remember it, went: when teams wrote the same thing in multiple languages (IIRC Java, and either Assembly or C, were involved), the number of bugs per KLOC was about the same in each case, but the number of features implemented in the same number of lines was obviously much greater in the higher-level, more expressive languages, therefore it's better to use high-level languages.
I do buy the general idea (more expressive language -> some class of bugs inexpressible by design + fewer lines of code for bugs to hide in), but I'm suspicious about the specific result showing a constant bugs/KLOC ratio in all tested languages; it feels more like a lucky statistical artifact than some deep, fundamental relationship.
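To make the shape of that argument concrete, here is a toy calculation; the densities and lines-per-feature figures are made up for the example, not taken from any study:

```python
# Toy illustration of the "constant bugs/KLOC favours expressive languages" argument.
# All numbers below are invented for illustration.
bugs_per_kloc = 15          # assumed roughly constant across languages
lines_per_feature = {
    "assembly": 2_000,      # hypothetical lines needed per feature
    "c": 500,
    "java": 150,
}

for lang, loc in lines_per_feature.items():
    bugs_per_feature = bugs_per_kloc * loc / 1000
    print(f"{lang}: ~{bugs_per_feature:.1f} bugs per feature")
```

If the bugs/KLOC figure really is constant, the whole benefit shows up in how few lines a feature takes, which is the part of the claim I find plausible even while doubting the constancy itself.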