The problem that strikes me here (as a professional pentester) is that these vulnerabilities are so pathological. There is no realistic series of errors that would lead you to actually design an app with hardcoded SQLi-looking credentials, or to return a cookie with a malformed header, let alone one that somehow automatically authenticates the user.
I am extremely on board with the idea that pentesting as a whole has a lot of scanner jockeys (and a lot of my current work is finding people who aren't). But you're not testing for vulnerabilities that would actually show up in the real world, and it is hurting your analysis.
I suspect these would be realistic vulnerabilities introduced by freelance or inexperienced devs hired by a startup, for example.
They might have hardcoded an admin-level password for debugging, then forgotten about it.
They might have mistyped HTTP headers like 'Set-Cookie' after manually writing a lot of auth & session management that should really have been done with well-vetted libraries instead.
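Something along these lines is the kind of hand-rolled login I have in mind (a purely hypothetical sketch, not the app from the article), with both mistakes in one place:

    # Hypothetical hand-rolled login handler, illustration only:
    # a leftover debug credential plus a mistyped header name.
    from wsgiref.simple_server import make_server
    from urllib.parse import parse_qs

    DEBUG_ADMIN_PW = "letmein123"  # "temporary" hardcoded password, never removed

    def app(environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        params = parse_qs(environ["wsgi.input"].read(length).decode())
        user = params.get("user", [""])[0]
        password = params.get("password", [""])[0]
        if password == DEBUG_ADMIN_PW:
            # typo: "Set-Cookei" instead of "Set-Cookie", so the client never
            # receives a properly formed session cookie header
            start_response("200 OK", [("Set-Cookei", f"session={user}; HttpOnly")])
            return [b"logged in"]
        start_response("401 Unauthorized", [("Content-Type", "text/plain")])
        return [b"denied"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()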
Respectfully, you are incorrect. Delegating session and cookie management to the framework or (in PHP's case) the language is so much simpler that I have seen manual implementations of this behavior only 2-3 times in my career. And the idea that logging in incorrectly would somehow return a malformed yet still effective session-setting header is, again, pathological.
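To be concrete about what "delegating" looks like: in most stacks the framework owns the cookie entirely, so there is nothing to mistype. A rough sketch with Flask (route and credential check are made up):

    # Rough Flask sketch (made-up route and check): the framework constructs
    # the Set-Cookie header itself, so the developer never writes it by hand.
    from flask import Flask, request, session

    app = Flask(__name__)
    app.secret_key = "replace-me"  # placeholder; a real app loads this from config

    def credentials_ok(user, password):
        return False  # stand-in for the application's real check

    @app.route("/login", methods=["POST"])
    def login():
        if credentials_ok(request.form.get("user"), request.form.get("password")):
            session["user"] = request.form["user"]  # Flask emits a well-formed cookie header
            return "ok"
        return "denied", 401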
Most of the time, these types of profound errors come from simply taking the path of least resistance provided in the framework. Sometimes they do come from a complex homebrewed solution to a simple problem.
Back in my startup days, I once worked with a poor guy who was dynamically generating an individual ID for every element on a page, along with the corresponding CSS, for each page. He was unaware that CSS also had _classes_, which perfectly encapsulated the behavior he was trying to create. His work was complex and full of errors, but it had a certain logic to it: it was the path of least resistance that he saw available to him.
These don't look like that. They just look like weird errors designed to avoid showing up on, or triggering false positives on, DAST tools. I think testing people's skills without those tools is a good goal; I just think the methodology here is wrong.
I can see both of your viewpoints here. I wouldn't make the blanket statement that OP is incorrect. The fact that you have only seen this a handful of times in your career is not surprising; this sort of silly bullshit is far less common nowadays. However, as OP has stated, these things were somewhat more common a while back. Mind you, much of this very poorly written software is still being used in dusty corners of large companies. You should keep an open mind when testing and not dismiss these things as outright impossible, or else you're going to miss a lot of bugs :P
Some of the most interesting issues seem pathological at first blush. Can you really think of no scenario where an SQL injection string would end up as a password? Perhaps some try-hard came along before you and attempted SQL injection on the user-creation form, which resulted not in injection but in the password being set to the literal string they typed.
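A sketch of what I mean (hypothetical, sqlite just for illustration): the signup form uses a parameterized insert, so the attempted injection is stored verbatim and quietly becomes the real credential:

    # Hypothetical signup flow: the parameterized insert stores the attempted
    # injection string verbatim, so it simply becomes the account's password.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")

    attempted_injection = "' OR '1'='1"  # what the try-hard typed into the password field
    conn.execute(
        "INSERT INTO users (username, password) VALUES (?, ?)",
        ("tryhard", attempted_injection),
    )

    # Later, "injecting" the same string on the login form just works, because
    # it literally is the password (a real app would hash it, but the point stands).
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND password = ?",
        ("tryhard", attempted_injection),
    ).fetchone()
    print(bool(row))  # True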
I realize that in this scenario it is literally designed-in, but I understand the point the author is trying to prove. If a "scanner jockey" gets results that tell him or her there is SQL injection, a competent tester will try to verify what their tool is telling them. A tester doing that in this case will find that other injection strings don't work, (hopefully) start looking under the hood to see what is going on, discover this pathological hardcoded password, and be able to tell the client about it. Maybe it's the work of a malicious dev?
I agree that a pentester with a lot of experience will have skills that are honed to find common bug patterns, but it's nice to be able to find these seemingly bizarre issues and have an explanation for the client. It shows you really understood the app and what it's doing.
Given this is your area of expertise, I'm genuinely interested in how/why you believe these vulnerabilities wouldn't show up in the real world. Generally speaking, isn't the software developer community littered with developers who aren't adequately skilled, who copy-paste from Stack Overflow, etc.?