It's not that I don't want to give you a sunny solution which makes the problem go away forever, but this is an extremely difficult problem to solve, especially when the offender may be located in a foreign country with ineffective law enforcement.
Facebook has been making it harder for random strangers to contact people under a certain age, so that may well help, and we'll see if it does. We could also teach teenagers how to stay safe on the internet, and give them the support they need so they don't become too emotionally reliant on it. That might get you part of the way.
You could run TV advertisements to raise awareness of how harmful abuse is, to try to dissuade people from doing it, but that might just make the general public more scared of it (the chance that their family specifically will be affected is remote), and more inclined to try to "regulate" their way out of the problem.
You could try to take more children away from their families on the off-chance they may have been abused, but what if you make the wrong call? That could be traumatizing to them.
You could go down the road of artificial child porn intended to compete with the real thing, or robots which look like children, but I don't think the predators are actually interested in those, are they? And that comes with serious ethical issues and is politically impossible.
We can't just profile "whoever looks suspicious" on the street, because people who are mentally ill tend to behave erratically: they have only a slightly elevated chance of being guilty, but a dramatically higher chance of being harassed by police.
Getting out of the COVID pandemic may also help. Child abuse is said to have risen by a factor of four during the lockdowns and the other measures put in place to contain the virus; it's possible that stress from the pandemic, and perhaps increased opportunity to commit the crime, contributed to this. But this is an international problem: even if the pandemic were to vanish in the U.S., it would still exist overseas.
Security by obscurity has never been particularly effective, and there are some articles which allege that detection algorithms can be defeated fairly easily.
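To give a sense of why that claim is plausible, here is a minimal sketch of a generic perceptual "average hash". It is not PhotoDNA or NeuralHash, and the file names are hypothetical; it just illustrates how fragile fuzzy image matching tends to be:

```python
# Minimal sketch of a generic perceptual "average hash" (aHash), NOT PhotoDNA
# or NeuralHash -- only an illustration of why fuzzy image matching is fragile.
# Assumes Pillow is installed; file names below are hypothetical.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A crop, re-encode, or slight brightness shift moves pixels across the mean
# and flips bits; an attacker can keep editing until the distance from the
# original exceeds whatever match threshold the detector uses.
# print(hamming(average_hash("original.jpg"), average_hash("edited.jpg")))
```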
There have certainly been busts in the media, including depraved individuals who blackmailed teenagers into sending them images; one of those cases set the dangerous precedent of a tech company developing an exploit and refusing to disclose it after the fact.
It isn't terribly surprising that a platform like Facebook, which has a lot of children on it, would end up attracting predators who seek to prey on them. Fortunately, Facebook has been deploying a number of tools to improve their safety over the past few years which don't rely on surveillance or even censorship.
There have been a number of arrests which were a product of those efforts, although I don't have much info on them; someone else may.
The real question is whether it is worth sacrificing everyone's privacy, so that a few people can be arrested.
I can imagine iCloud being a lower-risk platform than Facebook: you can't really groom someone into uploading photos to their own iCloud account, although the existence of such images there is still very much condemnable.
It's well known that this kind of algorithm doesn't have a perfect matching rate. It would be easy to presume that any false positives come from the error rate of the underlying algorithm rather than from erroneously tagged images in the source database, i.e. to assume that all the images were tagged correctly in the first place. Who would know?
IIRC Wired reported that "PhotoDNA" worked around 99% of the time a number of years ago; however, newer algorithms may be fuzzier, this is not the same algorithm, and even PhotoDNA appears to have changed over time.
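To see what a figure like "99%" would even mean at this scale, here is some back-of-the-envelope arithmetic with entirely made-up numbers; the real scan volume and false-positive rate are not public, and the Wired figure may well describe the detection rate rather than false positives:

```python
# Back-of-the-envelope base-rate arithmetic with made-up numbers, only to show
# why a "99%" figure says little on its own at iCloud/Facebook scale.
photos_scanned      = 1_000_000_000   # hypothetical photos scanned per year
false_positive_rate = 0.01            # reading "works 99% of the time"
                                      # pessimistically, as if 1% of benign
                                      # photos were wrongly flagged

wrongly_flagged = photos_scanned * false_positive_rate
print(f"{wrongly_flagged:,.0f} benign photos flagged for human review")
# 10,000,000 -- and that is before asking whether the source database itself
# contains erroneously tagged images.
```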
I doubt the reviewers of such content are at liberty to discuss what they do or don't see with anyone here. Standard confidentiality agreements.
The FBI doesn't even have the resources to review all the reports they already get (we learned that in 2019), and yet they want to intrude on everyone's rights to generate even more reports to investigate (which they won't).
As many, many people have pointed out, building a mechanism to scan things client-side creates something which could easily be extended to encrypted content, and perhaps is intended to be extended to encrypted content at a moment's notice, if they see an opportunity to do so.
It's like having hundreds of nukes ready for launch, as opposed to being a year away from the first launch.
If they wanted to "do it as all major companies do", they could have done it on the server side, and there wouldn't have been a debate about it at all, although server-side scanning is still extremely questionable as far as privacy is concerned.
Moving the scanning to the client side is clearly an attempt to move towards scanning content which is about to be posted to encrypted services; otherwise they could have done it on the server side, which is "not categorically unprecedented".
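To make the architectural point concrete, here is a deliberately oversimplified sketch. It is not Apple's actual protocol, and every function in it is a hypothetical stand-in; the point is only where the scan sits relative to encryption:

```python
# Deliberately simplified sketch (not Apple's actual design); every function
# here is a hypothetical placeholder.

def encrypt_for_transport(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)   # placeholder "encryption"

def scan_for_known_hashes(data: bytes) -> bool:
    return False                            # placeholder matcher

def server_side_flow(photo: bytes) -> bool:
    # The provider can only scan what reaches its servers. If uploads become
    # end-to-end encrypted, this scan sees only ciphertext and learns nothing.
    ciphertext = encrypt_for_transport(photo)
    return scan_for_known_hashes(ciphertext)

def client_side_flow(photo: bytes) -> bool:
    # The scan runs on the device against the plaintext, before any
    # encryption happens, so the same hook keeps working no matter how the
    # content is encrypted or where it is sent afterwards.
    matched = scan_for_known_hashes(photo)
    encrypt_for_transport(photo)            # encryption only after the scan
    return matched
```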