Because software is not something you can just throw more money and people at and expect it to get better. In fact, that is the surest way to make a piece of software worse.
Software, despite all the attempts to come up with ways to commoditize its production, does not scale.
But we are talking about reading bug reports, not writing software. Also, I would argue that finding and fixing bugs actually does scale, as opposed to designing and writing new systems.
How do you recognize 100,000 reports of the same bug with different manifestations? What if it's an architectural issue (and many bugs are)? What if you are already working on a total rewrite that's going to ship two releases from now? What do you do with bugs found internally?
Also remember that many bugs require much more than changing a couple of lines of code; they require cross-module changes and re-testing, which can itself lead to more bugs.
You need the small subset of people who actually understand how the software works to fix the bugs, lest you just end up playing whack-a-mole and introducing new bugs.
Ideally, it wouldn't be like this, but we don't live in an ideal world, and there's always some unintended consequence when you start pulling on the string of anything nontrivial. And there's never sufficient automated testing, especially for the things that haven't been a problem yet.
Fixing the bugs is one thing, but identifying that user-reported bugs are in fact bugs is an entirely separate issue. The latter takes a ton of time, requires no developers, and is what everyone above you in this thread is talking about.