No, not really. If a single HR person is fired, there are likely others to pick up the slack. And others will likely learn something from the firing, and adjust their behavior accordingly if needed.
On the other hand, "firing" an AI from an AI-based HR department would likely paralyze the department completely, so it's closer to "let's fire every single low-level HR person at once" - something very unlikely to occur.
The same goes for all other applications too: firing a single nurse is relatively easy. Replacing an AI system with a new one is a major project that can easily take dozens of people and millions of dollars.
If the system is built externally by a vendor, you can change the vendor, and that creates pressure on the vendor ecosystem not to ship bad systems.
If it is built internally, you need people responsible for creating reliable tests and someone to lead the project. In a way it's not very different from when your external system is bad or crashing: you need accountability in the team. Google can't fire "Google Ads", but that doesn't mean they can't expect Google Ads to reliably make them money, with specific people responsible for maintaining its quality.
You cannot punish an AI for transgressing - it has no sense of ethics or morality, nor a conscience. An AI cannot be made to feel shame.
A person can be held responsible, even when it's indirect responsibility, in a way that serves as a warning to others, to avoid certain behaviors.
It just seems wrong to allow machines to make decisions affecting humans, when those machines are incapable of experiencing the world as a human being does. And yet, people are eager to offload the responsibility onto machines, to escape responsibility themselves.
Humans making the decision to use AI need to be responsible for what the AI does, in the way that the owner/operator of a car is responsible for what the car does.
Humans can be held accountable when they discriminate against groups of people. Try holding a company accountable for that when they're using an AI system.
I don't think humans actually can be held accountable for discrimination in resume screening. I've only ever heard of cases where companies were held accountable for discriminatory tests or algorithms.
If you mean that a human can be fired when they overlook a resume, an AI system can be similarly rejected and no longer used.