
> Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it

The article says this like it's a new problem. Automated resume screening is a long-established practice at this point. That it'll be some LLM doing the screening instead of a keyword matcher doesn't change much, although it could be argued that an LLM would better approximate an actual human looking at the resume... including all the biases.
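The pre-LLM version is roughly this kind of thing (a minimal sketch; the keyword set and threshold are invented, not taken from any real ATS):

    import re

    # Naive keyword screener of the kind that predates LLM-based screening.
    # REQUIRED and MIN_HITS are illustrative, not from any real system.
    REQUIRED = {"python", "kubernetes", "aws"}
    MIN_HITS = 2

    def screen(resume_text: str) -> bool:
        words = set(re.findall(r"[a-z]+", resume_text.lower()))
        hits = len(REQUIRED & words)
        return hits >= MIN_HITS  # True = pass to a human, False = auto-reject

    print(screen("Senior engineer: Python, AWS, Terraform"))  # True (2 hits)

An LLM screener replaces that set-membership test with a model call, which is exactly where the "approximates a human, biases included" part comes in.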

It's not like companies take responsibility for such automated systems today. I think they're used partly for liability CYA anyway. The fewer actual employees that look at resumes, the fewer that can screw up and expose the company to a lawsuit. An algorithm can screw up too, of course, but it's a lot harder to show intent, which I think can affect the damages awarded. Of course, IANAL, so this could be entirely wrong. Interesting to think about, though.



> The article says this like it's a new problem.

I suspect, though, that there might be something different today in terms of scale. Bigger corporations perhaps did some kind of screening (I am not aware of it, though; at Apple I was asked to personally submit resumes for people I knew who were looking for engineering jobs, so perhaps there was automation in other parts of the company). I doubt the restaurants around Omaha were doing any automated resume screening. That probably just got a lot easier with the pervasiveness of LLMs.


> It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online.

More frightening, I think, is the potential for it to make decisions on insurance claims and medical care.


I know someone in this space. The insurance forms are processed first-pass with AI or ML (I forget which). Then the remainder are processed by humans in Viet Nam. This is not for the USA.

I've also vaguely heard of a large company that provides just this as a service -- basically a factory where insurance claims are processed by humans here in VN, in one of the less affluent regions. I recall they had some minor problems with staffing as it's not a particularly pleasant job (it's very boring). On the other hand, the region has few employment opportunities, so perhaps it's good for some people too.

I'm not sure which country this last one is processing forms for. It may, or may not, be the USA.

I don't really have an opinion to offer -- I just thought you might find that interesting.


There is an underlying assumption there that is certainly incorrect.

So many stupid comments about AI boil down to "humans are incredibly good at X, we can't risk having AI do it". Humans are bad at all manner of things. There are all kinds of bad human decisions being made in insurance, health care, construction, investing, everywhere. It's one big joke to suggest we are good at all this stuff.


The fear is that by delegating to an AI, there's no recourse if the outcome for the person is undesirable (correctly so or not).

What is needed from the AI is a trace/line of reasoning showing how a decision was derived, like a court judgement, which has explanations attached. This should be available (or be produced as part of the decision documentation).
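Something like a structured decision record would cover it. A minimal sketch, assuming a hypothetical claims pipeline; none of these field names come from a real system:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical decision record: every automated decision carries its
    # inputs, outcome, and reasoning, so an appeal or an audit has
    # something concrete to review. Field names are made up.
    @dataclass
    class DecisionRecord:
        case_id: str
        inputs: dict           # the facts the system was given
        outcome: str           # e.g. "approved" / "denied"
        reasoning: list[str]   # step-by-step justification
        model_version: str     # which model or rule set produced this
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = DecisionRecord(
        case_id="claim-123",
        inputs={"claim_amount": 4200, "policy": "P-9"},
        outcome="denied",
        reasoning=["policy P-9 excludes water damage",
                   "claim cause recorded as 'flooding'"],
        model_version="rules-v2")
    print(record.outcome, record.reasoning)

The point isn't the exact schema; it's that the justification gets captured at decision time rather than reconstructed later.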


But also an appeals process where it can be escalated to a real person. There should be nothing where any kind of AI system can make a final decision.

I think the safest would be for a reviewer in an appeals process to have no access at all to the AI's decision or reasoning, since if the incorrect decision was based on hallucinated information, a reviewer might be biased into thinking it's true even though it was imagined.


> There should be nothing where any kind of AI system can make a final decision.

This would forbid things like spam filters.


You never look into your spam folder?


A huge amount of spam never makes it into spam folders; it’s discarded silently.


If you use Microsoft Outlook, most of the spam never makes it into the spam folder because it goes straight to the inbox.

Do you have a source for your somewhat unbelievable claim?


Mail server op here: mail exchangers (mine included) absolutely silently drop insane amounts of email submissions without any indication to the sender or the envelope recipient. At most there's a log the operator can look at somewhere that notes the rejection and whatever rules it was based on.


I run a family email server. I've worked with a few mail sysadmins trying to diagnose why some of our mail doesn't get through. Even with a devoted and cooperative admin on the other side, I can absolutely believe that general logging levels are low and lots of mail is silently discarded, as we've seen in targeted tests where we both wanted to see the mail and the associated log trail.


With no access to logs the claim is just handwaving.


But I do have access to logs, which is why I can describe to you how minimal the logging of email rejections is.


The spam folder (or the inbox, in your Outlook example) isn't for messages known to be spam; it's for messages that the filter thinks might have a chance of not being spam.

Most spam is so low-effort that the spam rules route it directly to /dev/null. I want to say the numbers are like 90% of spam doesn't even make it past that point, but I'm mostly pulling this from recollections of various threads where email admins talk about spam filtering.


I run "gray listing" software on my email servers. Some spam is blocked by that. There are logs, but I never look at them, other than when I do an upgrade. Mail that makes it through will hit a secondary spam filter which will flag it as "spam" and dump it into a different folder (which I look at slightly more frequently.)


Tell me you've never run a mail server without telling me you've never run a mail server.

It's pretty standard practice for there to be a gradient of anti-spam enforcement. The messages the scoring engine thinks are certainly spam don't reach end users. If the scoring engine thinks it's not spam, it gets through. The middle range is what ends up in spam folders.
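In sketch form (the thresholds are made up, but SpamAssassin-style score filters work on this principle):

    # Sketch of the scoring gradient described above; thresholds are
    # illustrative, not taken from any particular filter.
    DROP_ABOVE = 15.0   # near-certain spam: silently discarded
    FLAG_ABOVE = 5.0    # ambiguous middle range: spam folder

    def route(spam_score: float) -> str:
        if spam_score >= DROP_ABOVE:
            return "drop"         # the end user never sees it
        if spam_score >= FLAG_ABOVE:
            return "spam_folder"
        return "inbox"

    for score in (20.0, 8.5, 1.2):
        print(score, "->", route(score))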


You need to read through Wikipedia's article on anti-spam techniques! There are whole categories of techniques that require no intervention from humans.

https://en.m.wikipedia.org/wiki/Anti-spam_techniques


First paragraph, with references:

> by 2014, it comprised around 90% of all global email traffic

https://en.wikipedia.org/wiki/Email_spam


The EU has already addressed this in the GDPR. Using AI is fine, but individuals have a right to demand a manual review. Companies and government agencies can delegate work to AI, but they can't delegate responsibility.

https://gdpr-info.eu/art-22-gdpr/


A manual review will just reject it as well. Some HR person who doesn't know the difference between Java and JavaScript isn't going to make better decisions. The problem is, and has always been, top-of-funnel screening.


That's also a silly idea. Companies get sued all the time. There is recourse. If you run a bank that runs an AI model to make decisions on home loans and the effect is no black people get a loan, you are going to find yourself in court the exact same way as if you hire loan officers who are a bunch of racists. There is no difference.

The trace is available to the entity running inference: every single token, and all the probabilities of every choice. But not every decision made at every company is subject to a court hearing. So no, that's also a silly idea.
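For what it's worth, capturing that trace is straightforward for anyone hosting the model themselves. A minimal sketch with Hugging Face transformers ("gpt2" is just a stand-in; the same idea applies to any model the operator runs):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Sketch of retaining per-token probabilities during inference.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The claim is", return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        return_dict_in_generate=True,
        output_scores=True,  # keep the logits for every generated step
    )

    # Log-probability of each chosen token: the trace an operator
    # could retain for audit, if they chose to.
    scores = model.compute_transition_scores(
        out.sequences, out.scores, normalize_logits=True)
    for token, logprob in zip(out.sequences[0, inputs["input_ids"].shape[1]:],
                              scores[0]):
        print(repr(tok.decode(token)), float(logprob))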


Companies pretty much never get sued by anyone who is from a low-income or otherwise disadvantaged group. It has to be pro bono work, or a big class-action suit.


Completely wrong. Companies get sued by various government agencies all the time. The FTC, DOJ, SEC, EPA, CFPB, probably others I can't think of.

Even if you were right, AI doesn't change any of it - companies are liable.


First, to the extent these checks worked, they are being destroyed.

Second, have you ever looked at that space? The agencies that did this were already weak, the hurdles you had to overcome were massive, and the space was abused by companies to the maximum.


So when the corporations and the state decide to be nasty, that's OK?


I've been getting a real "well, they deserved it" vibe off this site tonight. Thanks for not being that way.


The user base for this site is incredibly privileged. Not meant as a judgement, just a stated observation.


Speaking of which, I more accurately should have stated that even people from the higher-earning working class are not going to sue. Suing requires a commitment not only of cash but of time. People who work don't have the time, especially if they have children. And it's a gamble: what if you lose?

Litigation is mainly a form of sport available to and enjoyed by the rich. And I mean serious litigation, like taking on some corporation with deep pockets; not pick-on-someone-your-own-size litigation, as in your neighbor cut down a tree that fell onto your toolshed.


Taking the example of healthcare, a person may not have time to sue over an adverse decision. If the recourse is “you can sue the insurance company, but it’s going to take so long you’ll probably die while you’re waiting on that”, that’s not recourse.


Right, this is the bedrock upon which injunctive relief is made available; viz., when money after the fact would not cancel out the damages caused by a party doing the wrong thing. Unfortunately you can't get that relief without having an expensive lawyer, generally, so it doesn't end up being terribly equitable for low income folks.


I would bet on AI well before humans when it comes to “which of these applications should be granted a home loan?” and then tracking which loans get paid as agreed.


The danger is that AI can be biased, so if we don't know how it's making its decisions, it could be stupid.

I mean, imagine that making an insurance claim with a black-sounding name results in a 5% greater chance of being rejected. How would we even know if this is the case? And, how do we prevent that?

Now, of course humans are biased too, but there's no guarantee that the biases of humans are the same as the biases of whatever AI model is chosen. And humans we can hold accountable, at least to some degree. We need that accountability with AI agents.
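One answer to "how would we even know" is a paired name-swap audit: run each claim through twice, changing only the name, and compare rejection rates. A minimal sketch, where model_decision is a hypothetical stand-in for the system under test (the names echo the classic resume audit studies):

    import random

    def model_decision(claim: dict) -> bool:
        """Return True if the claim is rejected (random stub for illustration)."""
        return random.random() < 0.20

    def name_swap_gap(claims: list[dict], name_a: str, name_b: str) -> float:
        # Same claim twice, only the name differs; a systematic gap in
        # rejection rates is evidence of bias on the name alone.
        rej_a = sum(model_decision({**c, "name": name_a}) for c in claims)
        rej_b = sum(model_decision({**c, "name": name_b}) for c in claims)
        return (rej_b - rej_a) / len(claims)

    claims = [{"amount": random.randint(100, 5000)} for _ in range(10_000)]
    print(f"rejection-rate gap: {name_swap_gap(claims, 'Emily', 'Lakisha'):+.3f}")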


"A computer can never be held accountable, therefore a computer should never make a management decision"

Quote from an IBM training manual from 1979

Seems just as true, and even more relevant, today than it was back then.


That's why you hold the humans that run the computer accountable.


That's sort of the whole point though

The computer allows the humans a lot of leeway. For one thing, the computer represents a large diffusion of responsibility. Do you hold the programmers responsible? The managers who told them to build the software? How about the hardware manufacturers, or the IT people who built or installed the system? Maybe the executives?

What if the program was built with all good intentions and just has a critical exploit that someone abused?

It's just not so straightforward to hold the humans accountable when there are so many humans that touch any piece of commercial software.


That's fair, but I was referring to the humans who delegate "business decisions" to computers, which is what I thought the context was...

For example, if American Airlines uses a computer to help decide who gets refunds, they can't then blame the computer when it discriminates against group X, or steals money, because it was their own "business decision" that is responsible for that action (with the assist from a stupid computer they chose to use).

This is different from when their landing gear doesn't go down because of a software flaw in some component. They didn't produce the component, and they didn't choose to delegate their "business decisions" to it, so as long as they used an approved vendor etc. they should be OK. Choosing the vendor, the maintenance schedules, etc.: those are the "business decisions" they're responsible for.


> For example, if American Airlines uses a computer to help decide who gets refunds, they can't then blame the computer when it discriminates against group X

If American Airlines uses a computer to automatically decline refunds, which human(s) do we hold accountable for these decisions?

The engineers who built the system?

The product people who designed the system, providing the business rules that the engineers followed?

The executives who oversaw the whole thing?

Sometimes there is one person you can pin the blame on, who was responsible for "going rogue" and building some discrimination into the system.

Often it is a failure of a large part of the business. Responsibility is diffused enough that no one is accountable, and essentially we do in fact "blame the computer".


> which human(s) do we hold accountable for these decisions?

Personally I'd be satisfied holding the company as a whole liable rather than a single person.


What does it mean to hold "a company" liable?

All that does is create a situation where decision-makers at companies can make the company behave unethically or even illegally and suffer no repercussions for it. They might not even still be at the company when the consequences are finally felt.


> What does it mean to hold "a company" liable?

It means that the company is sued and is responsible for damages.

> decision makers at companies can make the company behave unethically or even illegally and suffer no repercussions for this

But now you've just argued yourself back to the "which human(s) do we hold accountable for these decisions?" question you raised that I was trying to get you out of.


Looking at the current state of AI models that assist in software engineering, I don't have much faith in it being any better; quite the contrary.


Bad decisions in insurance are, roughly speaking, on the side of over-approving.

AI will perform tirelessly and consistently at maximizing rejections. It will leave no stone unturned in the search for justifications for why a claim ought to be denied.


This has an easy public policy fix through something like a national insurance claim assessment agency, with an impartial prompt, which AI will make reasonably cheap to fund. It's always been perverse that insurance companies judge the merits of their own liabilities.


That isn't what the incentives point to; in a free market, insurers are aligned with accuracy. An insurer with a reputation for being unreasonable about payouts won't have any customers: what is the point of taking out a policy if you expect the insurer to be unreasonable about paying? It'd take an odd customer to sign up for that.

If they over-approve, they will be unprofitable because their premiums aren't high enough. If they under-approve, their customers will go elsewhere.


What if it isn't actually a free market?

Secondly, the reasonableness or unreasonableness of payouts is linked to premiums.

In other words, one way that the parsimonious insurer would still have customers is that they offer low premiums compared to the liberal insurers.

Even people who know about the bad anecdotes from reading online reviews will brush that aside for the better deal. (Hey, reviews are biased toward negativity and miss the other side of the story; chances are that wouldn't happen to me.)

The free market doesn't optimize for quality. Firstly, it optimizes for the lowest price at a given level of quality. But the price optimization has a second-order effect of downward pressure on quality.

If you're selling something whose margin is already optimized (it's about as cheap as it can be), what you can do is reduce quality by some epsilon and make a corresponding decrease in price. It still looks like about the same quality to someone not using a magnifying glass and a fine-toothed comb, and you have a temporary price edge against competitors. That triggers a kind of "gradient descent" of declining quality, which bottoms out at some minimum level below which the market no longer finds the thing acceptable.


But I expect my insurer to be unreasonable about paying today.

It’s just that A) I didn’t choose this insurer, my employer did and on balance the total package isn’t such that I want a new employer and B) I expect pretty much all my available insurance companies to be unreasonable.


I did say "free market"; that is sort of the standard disclaimer to show I'm not talking about madness that is the US healthcare system. Or if you have your employer choosing insurance for you and it isn't health-related then that is kinda weird and I'm not surprised it is going badly.


> An insurer who has a reputation of being unreasonable about payouts won't have any customers

If it's health insurance, it's not a free market. You don't have a choice. It's employer-provided, so suck it up, buttercup.

It's just socialized medicine but implemented in the private sector and, like, 100x more shit.

UHC, one of the largest insurers in the US, has a claim denial rate somewhere in the 30-percent range, if I remember correctly. Well... that sucks.


Medical insurance companies have their profits capped by the federal Medical Loss Ratio calculation, which requires them to spend a minimum percentage of premiums (80-85% under the ACA) on care and related activities. That means the more care they approve, the more premium they can collect, if they're currently near the cap.
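To make the incentive concrete (a toy calculation using the 85% large-group floor):

    # With a Medical Loss Ratio floor, profit + admin is capped as a
    # fraction of premiums, so the only way to grow absolute profit is
    # to grow premiums - which means pricing in (and approving) more care.
    MLR_FLOOR = 0.85  # ACA large-group figure; 0.80 for individual plans

    def max_profit_and_admin(premiums: float) -> float:
        return premiums * (1 - MLR_FLOOR)

    print(max_profit_and_admin(1e9))  # 150M of headroom on 1B in premiums
    print(max_profit_and_admin(2e9))  # 300M of headroom on 2B in premiums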


"Human's aren't perfect at X, so it doesn't matter if an unaccountable person who's plausibly wrong about everything does it or an AI". What's the difference, anyway?


AI will be used to justify the existence of bad decisions. Now that we have an excuse in the form of "AI", we don't need to fix or own our bad decisions.


I know of people who died because of AI algorithms years ago. States implemented programs with no legal oversight, governed only by an algorithm.


> An algorithm can screw up too of course, but it's a lot harder to show intent, which can affect the damages awarded I think.

I personally think it would be easier to show intent, or at least willful negligence, with an algorithm; it would also magnify the harm. An employee might only make a mistake on occasion, but an algorithm will make it every single time. The benefit of an algorithm is that it does not need to be reminded to do or not to do something, and its actions are easier to interrogate than a human's.


Good point: like when a killing is done by a machine that wasn't directly operated, the perpetrator might be found guilty, but of manslaughter rather than murder?



