
> Does anyone else have a better explanation for why there was such a visceral response?

I can't speak for lawyers in general or what everyone's motivations would be, but my initial reaction was that it seemed like a somewhat unethical experiment. I assume the client agreed to it, or would otherwise have represented themselves, but even then -- legal advice is tricky precisely because it's advice -- it feels unethical to tell a person to rely on something that is very likely to give them sub-par legal representation.

Sneaking it into a courtroom without the judge's knowledge feels a lot like a PR stunt, and one that might encourage further legal malpractice in the future.

I assume there are other factors at play -- many lawyers likely felt insulted or threatened -- but ignoring that, it's not an experiment I personally would have lauded, even as a non-lawyer who wishes the legal industry were, well... less of an industry. The goal of automating parts of the legal industry and improving access to representation is a good goal that I agree with. And maybe there are ways AI can help with that, sure. I'm optimistic, I guess. But this feels to me like a startup taking advantage of someone who's in legal trouble for a publicity stunt, not like an ethically run experiment with controls and with efforts made to mitigate harm.

Details have been scarce, so maybe there were other safety measures put in place; I could be wrong. But my understanding was that this was planned to be secret representation where the judge didn't know. And I can't think of a faster way to get into trouble with a judge than pulling something like that. Even if the AI was brilliant, it apparently wasn't brilliant enough to counsel its own developers that running experiments on judges is a bad legal strategy.



From what I've read recently, the legal profession is the one most at risk of adverse financial effects from AI. Not the court appearances or the specialized work, but the run-of-the-mill boilerplate legal writing that is the bread-and-butter profit center of most firms. You bet they are threatened and will push back.

Now the question is this. If an AI is doing something illegal like practicing law, how does one sanction an AI?

Edit: found this:

https://jolt.richmond.edu/is-your-artificial-intelligence-gu...

"A person is presumed to be practicing law when engaging in any of the following conduct on behalf of another"

Every state seems to use the word "person" in their rules.

An AI is not a person, and therefore can't be sanctioned for practicing law - my take anyway.

If non-persons can be prosecuted for illegally practicing law, then those non-persons must have the right to get a license. IMHO.


> Now the question is this. If an AI is doing something illegal like practicing law, how does one sanction an AI?

As far as I'm aware, no LLM has reached sentience and started taking on projects of its own volition. So it's easy - you sanction whoever ran the software for an illegal purpose or whoever marketed and sold the software for an illegal purpose.


Lots of legal software is marketed and sold.


And legal software is very, very careful to avoid constituting legal advice, as opposed to merely legal information.


You cannot sanction the seller of software, any more than you can sanction the seller of a gun for a murder committed with it.


People have been trying exactly this, though?

https://apnews.com/article/sandy-hook-school-shooting-reming...

https://www.gov.ca.gov/2022/07/12/new-california-law-holds-g...

The second link feels much closer to direct government action.


> An AI is not a person, and therefore can't be sanctioned for practicing law - my take anyway.

"Personhood" in a legal sense doesn't necessarily mean a natural person. In this case, the company behind it is a person and is practicing law (so no pro se litigant using the company to generate legal arguments). In addition, if you want something entered into court, you need a (natural person) lawyer to do it, who has a binding ethical duty to supervise the work of his or her subordinates. Blindly dumping AI-generated work product into open court is about as clear-cut an ethical violation as you can find.

To your larger point, law firms would love to automate a bunch of paralegal and associate-level work; I've been involved in some earlier efforts to do things like automated deposition analysis, and there's plenty of precedent in the way the legal profession jumped on shepardizing tools to rapidly cite cases. Increased productivity isn't going to be reflected by partners earning any less, after all.


The legal profession is at the least risk of adverse financial effects from anything, because the people who make the laws are largely lawyers, and will shape the law to their advantage.


Automating boilerplate seems like a great use for AI if you can then have someone go over the writing and check that it's accurate.

I'd prefer that the boilerplate actually be reduced instead, but... I don't have any issue with someone using AI to target tasks that are essentially copy-paste operations anyway. I think this was kind of different.

> If an AI is doing something illegal like practicing law, how does one sanction an AI?

IANAL, but AIs don't have legal personhood, so it would be kind of like trying to sanction a hammer. I don't think the AI was being threatened with legal action over this stunt; DoNotPay was.

In an instance where an AI just exists as open source, and there is no party at fault beyond the person who decides to download and use it, then as long as that person isn't violating court procedure there's probably no one to sanction? It's likely a bad move, but :shrug:.

But this comes into play with stuff like self-driving as well. The law doesn't think of AI as something that's special. If your AI drives you into the side of a wall, it's the same situation as if your back-up camera didn't beep and you backed into another car. Either the manufacturer is at fault because the tool failed, or you're at fault because you didn't have a reasonable expectation that the tool wouldn't fail, or you used it improperly. Or maybe nobody's at fault because everyone (both you and the manufacturer) acted reasonably. In all of those cases, the AI doesn't have any more legal rights or masking of liability than your brake pads do; it's not treated as a unique entity -- and using an AI doesn't change a manufacturer's liability around advertising.

That gets slightly more complicated with copyright law surrounding AIs, but even there, it's not that AIs are special entities whose legal status bars them from owning copyright; it's that (currently, though we'll see if that precedent holds) US courts rule that using an AI is not a sufficiently creative act to generate copyright protections.


This is different from self-driving or software dev apps.

Law is different because the bar has a legally enforced monopoly on doing legal work.

DoNotPay was being threatened. But they weren't practicing law - they were just providing legal tools.

My point is that we're in uncharted legal territory. Perhaps ask the AI what it thinks ;)


Actually, by telling a client what specific arguments to make in court, they were giving big-L Legal Advice, and thus literally practicing law.


> Law is different because the bar has a legally enforced monopoly on doing legal work.

I don't see how this would decrease DoNotPay's liability.

Regardless of how you feel about the bar, I don't think that changes anything about who they would sanction or why. Having a legal monopoly means they're even less likely to go along with a "the AI did it, not me" explanation than a normal market would be.

I mean, no matter what, they're not sanctioning the AI. They don't recognize the AI as a person, they recognize it as a tool that a person/organization is using to perform an action.


> Now the question is this. If an AI is doing something illegal like practicing law, how does one sanction an AI?

It's not, and you don't.

When a legal person (either a natural person or corporation) is doing something illegal like unauthorized practice of law, you sanction that person. The fact that they use an AI as a key tool in their unauthorized law practice is not particularly significant, legally.


The AI is a tool, belonging to a person, who is using that tool to sell advice.


That's a different situation than what I am discussing - where the defendant is directly using the AI.


> they felt ... threatened

I'm going to sit on that particular hill and see what happens. Even if DoNotPay's AI is not ready to do the job, the idea that AI could one day argue the law by focusing on logic and precedent instead of circumstance and interpretation is exceedingly threatening to a lawyer's career. No offense intended to the lawyers out there, of course. Were I in your shoes, I'd feel a bit fidgety over this, too.


I feel like lawyers will be able to legally keep AI out of their field for a while yet. They have the tools at their disposal to do so and a huge incentive.

Other fields like journalism, not so much.


> I feel like lawyers will be able to legally keep AI out of their field for a while yet. They have the tools at their disposal to do so and a huge incentive; other fields like journalism, not so much.

That was my initial response too.

Artists, programmers, musicians, teachers are threatened... but shrug and say "that's the future, what can you do". If lawyers feel "threatened" by AI, they get it shot down.

I suddenly have a newfound respect for lawyers :)

Yet if we think about it, we all have exactly the same tools at our disposal - which is just not playing that game. Difference is, while most professions have got used to rolling with whatever "progressive technology" is foisted on us, lawyers have a long tradition of caution and moderating external pressure to "modernise". I'm not sure Microsoft have much influence in the legal field.


When you're poor, your choice is between an AI that may work and defending yourself. Legal assistance is almost as unobtainable as a dentist these days.


> When you're poor, your choice is between an AI that may work and defending yourself.

This is a thing that lots of people say about unethical businesses, and I'm a little skeptical about it at this point. A couple of objections I have:

- You have a constitutional right to legal representation when accused of a crime by the US government, and while we don't want to abandon people who are suffering now because of some theoretical future fix, we also don't want to normalize the idea that constitutional rights only exist when a private market accommodates them. That's explicitly a bad direction for the country to go.

- Saying "well, this works here and now, and people don't have access to anything better" is in my mind only a really effective argument when we know that the thing here and now actually works. But we don't know that this works, which changes a lot about the equation.

- Is sneaking an AI into a courtroom through an earpiece really a cost-effective accessible strategy for poor people? Nothing about this screams "accessibility" to me.

Summing up the last two points: if the AI were proven to actually work in a court of law, and were an accessible option, then sure, at that point I think the argument would have a lot more weight. It wouldn't be ideal; it would still be a bad state for us to be in, because your constitutional rights should not depend on an AI. But I could see a strong argument for using the AI in the meantime.

But that doesn't mean that DoNotPay should do unethical things right now to get to that point. The way that your choice is being phrased is begging the question: it assumes that the AI is the only choice other than no representation, that it does work, and that it will produce better outcomes.

But we don't actually know if the AI does work in a court of law, and DoNotPay's decision was to "move fast and break things"; it was to start releasing it into the wild without knowing what would happen. We don't know if asking people to represent themselves with a secret earpiece is a good legal strategy or if it's accessible. We don't know what happens when something goes wrong. We don't know that this actually is a working solution. But they were putting someone's legal outcome on the line anyway.

I think there's a big difference between making an imperfect solution available to poor people because we don't have anything better to offer, and using poor people as experimental fodder to build an imperfect solution that might not work at all. There's a lot of assumption here that using their AI would be better than representing yourself, and I don't know that's true. A judge is not going to be pleased with being used as an experiment. And I've been hearing people say that the AI subpoenaed the officer involved in the ticket? That's not a good legal strategy.

The proper way to build a solution like this is to make sure it works before you start using it on people, and I think it's unethical to give someone bad legal advice and to try and justify it because giving that person bad legal advice might allow the company to help other people down the line. A lot of our laws around legal representation are predicated on the idea that legal advice should be solely focused on the good of the client, and not focused on the lawyer's career, or on someone else the lawyer wants to help, or on what the lawyer will be able to do in the future. Based on what we know about the state of the AI today, it doesn't seem like DoNotPay was thinking solely about the good of the person they were advising. We really don't want the legal industry to be an industry that embraces "the ends justify the means."


Yeah, I feel like you're right on the money re: the ethics of using someone who is in legal trouble and will have to live with the results. It's not as sexy, but they should just build a fake case (or use an already settled one, if possible) and play out the scenario. No reason it wouldn't be just as effective as a "real" case.


I'd have no objections at all to them setting up a fake test case with a real judge or real prosecutors and doing controlled experiments where there's no actual legal risk and where everyone knows it's not a real court case. You're right that it wouldn't be as attention-grabbing, but I suspect it would be a lot more useful for actually determining the AI's capabilities, with basically zero of the ethical downsides. I'd be fully in support of an experiment like that.

Run it multiple times with multiple defendants, set up a control group that's receiving remote advice from actual lawyers, mask which group is which to the judges, then ask the judge(s) at the end to rank the cases and see which defendants did best.

That would be a lot more work, but it would also be much higher quality data than what they were trying to do.
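For concreteness, here's a rough sketch of how the scoring in that kind of masked trial could be tallied (Python; the defendant count, the randomization, and the rankings are all made up for illustration -- in a real trial the rankings would come from the blinded judges, not a simulation):

    # Hypothetical sketch: randomly assign defendants to the AI arm or the
    # lawyer-advised control arm, collect blinded rank-order judgments
    # (rank 1 = best outcome), then compare mean ranks between arms.
    import random
    from statistics import mean

    def score_masked_trial(num_defendants=20, seed=0):
        rng = random.Random(seed)

        # Random assignment to the two arms.
        defendants = list(range(num_defendants))
        rng.shuffle(defendants)
        ai_arm = set(defendants[: num_defendants // 2])

        # Placeholder for the judges' blinded rankings; in a real trial
        # these would come from the judges, not from shuffling.
        order = list(range(num_defendants))
        rng.shuffle(order)
        rank_of = {d: i + 1 for i, d in enumerate(order)}

        ai_ranks = [rank_of[d] for d in ai_arm]
        control_ranks = [rank_of[d] for d in rank_of if d not in ai_arm]
        print(f"mean rank, AI arm:      {mean(ai_ranks):.1f}")
        print(f"mean rank, control arm: {mean(control_ranks):.1f}")

    score_masked_trial()

With enough cases you could run a proper rank-sum test on those two lists, but even eyeballing the mean ranks would beat a single unblinded anecdote.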


And in some ways it’s less work! The risks of using a real court case are massive if you ask me. We are a wildly litigious country. No amount of waivers will stop an angry American.


> Run it multiple times with multiple defendants, set up a control group

And also

> That would be a lot more work, but it would also be much higher quality data

I don’t know much about the field of law, but anecdotally it doesn’t strike me as particularly data driven. So I think, even before introducing any kind of AI, the above would be met with a healthy dose of gatekeeping.

Like the whole sport of referencing prior rulings based on opinions at a point in time doesn't seem much different from anecdotes to me.

I'd love to be proven wrong, though.


It's about volume. A fake case would be expensive to run and running dozens of them a day would be hard.

That said, the consequence of most traffic tickets is increased insurance and a fine. Yes, these do have an impact on the accused, but they are the least impactful legal cases, so it would make sense to focus on them as test cases.


Is this not what moot court is? Seems like a great place to test and refine this kind of technology. The same place lawyers in training are tested and refined.



