
What makes humans special? Because they can marry and go to jail? That's not relevant, and what is relevant is the difference in outcome. If something takes in information, learns from it, and spits out information as a result of its learning, then why does the implementation matter at all? So what if it's software? That's the point. You're speaking as if it's a given that the issue comes down to software not being people, therefore society as a group should make software do whatever it wants even if it doesn't own that software. Not everyone is going to agree with that because it's not clear why, if something is wrong for software to do, it isn't also wrong for a human brain.


Humans are special for two reasons: you can’t clone them infinitely, and they are time-bound.

If I learn to write news articles by reading the NYT, I’m not then able to duplicate myself infinitely and fill every role at every news publisher. I am but one human, producing one human’s output and consuming one human’s space within whatever pursuit I undertake. Critically, I still leave room for others to do the same thing.

Eventually I also die and then someone else can come along to replace me. I’m finite, AI is not. It doesn’t get used up or retire.

If you consider that there’s a fixed amount of societal capacity for whatever undertaking is in question (news journalism, art generation, etc.) then as a human, I take up only a certain amount and only for a certain amount of time. I will never arbitrarily duplicate to produce the work of 10, 100, 1000, etc. humans. I will also shuffle on after about 50 years and someone else, having potentially learnt from me, can now gainfully exist within the world in my stead.

The capacity for infinite commoditisation that AI brings is necessarily a critical distinction from humans when it comes to considering the two performing equivalent functions. They must be treated differently.


> If I learn to write news articles by reading the NYT, I’m not then able to duplicate myself infinitely and fill every role at every news publisher. I am but one human, producing one human’s output and consuming one human’s space within whatever pursuit I undertake. Critically, I still leave room for others to do the same thing.

This is a luddite argument that can equally apply to any automation. A robotic arm can be trained to do the same thing a human line worker does, but the robotic arm can be copied infinitely and work 24/7 leaving zero room for other humans to do the same thing. Should we ban robotic arms?


We have already banned robotic arms in this case. It’s illegal to make a robot that mass-manufactures someone else’s IP. It’s considered copyright infringement, and that’s well-trodden law; the introduction of a machine in the middle doesn’t magically launder the copyright infringement.


Like I say to my toddler, there is no need for rudeness to make a point.

Nowhere did the poster say that is “sufficient” reason to ban AI. They were clarifying how software is different from humans, and only that. You need to go up a couple of comments and combine this explanation with the copyright-infringement concerns to see why the “whole” thing is concerning for the news industry.


> This is a luddite argument

And that is an empty statement.


Personally, I find those criteria irrelevant. If people were immortal and infinitely replicable, what they're allowed to read/learn/speak shouldn't change! Ditto for the machine counterfactual (limited AI reproduction + mortality). Maybe I'm just being unhelpfully sci-fi/abstract here.

If humans are contextually special here, a "passing the torch" argument seems unconvincing, to me.


If you ask that, may I ask:

What is the purpose of laws at all if humans aren't special?

> If something takes in information, learns from it, and spits out information as a result of its learning, then why does the implementation matter at all?

Yeah, so who cares if the implementation is human and if that implementation breaks?

I really don't want to troll you; I believe it is worth pointing out the absurdity of the "humans aren't special" argument this way.

Humans are not machines and machines don't have human rights.


The law is written for the benefit of humans, American law specifically for Americans. It is not written to benefit software. Humans are special.


The massive body of corporate law shows how false this statement is.


Corporations are made up of people. LLMs are merely programs.


> The law is written for the benefit of humans

And humans using AI as a tool changes this fact?


> If something takes in information, learns from it, and spits out information as a result of its learning, then why does the implementation matter at all?

Legally, the implementation may not matter at all, but the scale does.

The precedent is absolutely clear in legislation, for just about any category of crime or civil tort you can think of.

Just one example: if you get caught with a single joint (in a place where it isn't legal, of course), you're looking at a fine ... maybe. Most places have had exemptions for possession of a single joint.

If you get caught with 225 tons of weed, all processed and packaged in a warehouse, you're going to jail!

You need to justify why you believe that, in the case of LLMs and AIs, an exception should be made so that the scale is not considered.

I haven't seen any justification why the justice system should make an exemption for LLMs when it comes to scale.

Scale matters, and has mattered in every modern first-world jurisdiction going back hundreds of years.

You want to overturn that? Provide an argument other than "If it doesn't matter when a single article is used to learn, it shouldn't matter when billions of words are used in the learning."


> What makes humans special?

If we were so eager to give personhood to corporations, and now to software, can we finally give it to other animals as well?


> You're speaking as if it's a given that the issue comes down to software not being people, therefore society as a group can make software do whatever it wants even if it doesn't own it.

Yes. It's an object or writing. Not a person. You're arguing for giving personhood to software right now and that's crazy.


> You're arguing for giving personhood to software right now

I'm not sure anyone in the thread is actually arguing that. I think what they are saying is that we should look at what behaviour is considered acceptable for humans in order to help us decide what behaviour is acceptable for the tools humans use.


My first reply was rebutting that kind of equivalence, then the reply to me was saying that there's no special difference between humans and software. The reply you've replied to is me saying that's crazy.


But all that stuff about "software can't marry" etc doesn't change the point that we need to make decisions about what behaviour is acceptable for software, and it makes sense to base those decisions on what is already considered acceptable by the humans using the software. I just don't see how personhood comes into it and I feel like that's a hyperbolic interpretation of what they're saying.


The article is about people (as corporate legal entities) being sued for things people did when creating something. ChatGPT, the software, is not being sued. It can't be sued. It's software. It's not a person that can be taken to court. It can't be held liable.

(I heavily edited this comment after realizing I could make the point in far fewer words. Sorry.)


I still think you're getting too far into the weeds here. If we decide that a certain kind of usage of software shouldn't be considered acceptable, then we could sue the user who used it, or the developers who created it, or something. I don't see why software personhood is the only resolution here.


> I don't see why software personhood is the only resolution here.

It's not. That was the point of my replies. The idea that it's time to assert software personhood is crazy.

> If we decide that a certain kind of usage of software shouldn't be considered acceptable, then we could sue the user who used it, or the developers who created it, or something.

We can already do that. That's what this article is about. The people are being sued. That's what all of my replies are about. I don't understand why you are replying to my comment with a re-summary of my comments as if it's a rebuttal to them.


But I don't think anyone here is asserting that. I'm not sure how to make my point any differently so that you see what I mean. I simply don't think that basing our judgement about what's acceptable for software around what's acceptable for humans necessarily implies anything about software personhood like you are saying it does. We don't need to be able to sue a piece of software in order to make judgements about what kinds of software behaviours are acceptable.


> I simply don't think that basing our judgement about what's acceptable for software around what's acceptable for humans necessarily implies anything about software personhood like you are saying it does.

It doesn't. And all of my comments are about that. Like I just said in the previous reply. You're replying to my comment where I also just said this. Please stop replying to me saying that I'm saying that.

The article is about people (well, companies) being sued. Not about software being sued. Software can't be sued.

Whether or not there are additional laws written about what's acceptable behavior for software (whatever that means? It's assuming software can make decisions) is irrelevant. You can't sue software. People are being sued because the plaintiffs think that people broke people laws and are liable for damages. Software can't break laws and can't be held liable.

I'm having to reword this over and over because you keep replying to me. I think you might be replying to me repeatedly just to have the last word.


If we give personhood to software, wouldn't it mean that you cannot shut down or delete it, ever? That you cannot destroy the equipment it is on, as clearly that would be murder?

What would be your financial responsibility to keep AI running?


These are famously the types of questions surfaced in countless sci-fi books. And as long as humans don’t destroy themselves first, it is likely that we will have to address them eventually. In most stories it generally happens too late after some terrible war/conflict, so it wouldn’t be unreasonable to tackle them proactively. And then maybe it’s not so weird to think about these concepts even if their realization isn’t imminent. Working backwards in such a framework would probably give much better laws for today.


This has nothing to do with personhood of software. Restricting the freedom of human beings, which includes the ones that run companies, based on the tool they choose to use, without the basis of obvious direct harm, is questionable. The fact that AI can operate autonomously is a side tangent; they are created by humans and, so far, their only proximal purpose is to serve humans.


Corporate personhood is a thing in the US, and you are allowed to shut down your company just fine.


> Yes. It's an object or writing. Not a person. You're arguing for giving personhood to software right now and that's crazy.

No. Arguing that a human using an artificial brain instead of their own biological brain for learning and derivative creation is an implementation detail of little relevance, and dismissing personhood-related arguments like "software can't marry", has nothing to do with arguing for the personhood of software. Explain how one leads to the other, because I'm not seeing it; that's not at all what I was attempting to communicate.


You're saying that the software itself should be held liable, instead of the people that created it. Meaning that the software would need legal status as a person (or equivalent) so that it can be taken to court, instead of the people that created it.

There is a possibility that you're not saying that, but it's the only interpretation of your comment I could come up with. Because your comment consists entirely of comparisons of software to human brains about whether or not something should be considered legal, and this only makes sense if the software itself can be held liable.


> You're saying that the software itself should be held liable, instead of the people that created it.

Respectfully, I don't know how you're interpreting it that way. Until we demonstrate that the current generation of AI is genuinely intelligent, rather than just clever algorithms, a piece of software is no more or less liable than an individual firearm is after it's been fired at someone. My observation is that your argument appears to be that there is something special about humans learning and creating derivative works from that learning, versus humans using a tool that does the learning and creates the derivative works.

> There is a possibility that you're not saying that, but it's the only interpretation of your comment I could come up with.

That's fair. I just don't get it.

> Because your comment consists entirely of comparisons of software to human brains about whether or not something should be considered legal, and this only makes sense if the software itself can be held liable.

Human brains and software are both tools. The question I'm invoking is what is it about a person doing the learning and the derivative creation that's different from a person (since, as you say, software itself has no personhood) using an artificial brain to learn and perform derivative creation.

I think the disconnect here may be that I'm operating from the assumption that of course there are human beings liable for the software, but your interpretation of what I'm saying is that software in a vacuum should effectively have personhood applied to it. These are two different things. I'm referring to both humans/brains and software as the interchangeable variable in the question of why the choice of tool means applying entirely different legal principles.

Sorry if I wasn't clear or still am not being clear here. I wanted to make sure I was being understood correctly, but if all we can do from here is agree to disagree, that's fine, and I'd offer to just shake hands.


The way your comment was phrased made it seem, to me, like you were rebutting what I was saying and claiming that regular human things are all irrelevant to whether or not something is a person.

There is one other way I have figured out to read your comment: that it doesn't matter how software or a brain functions, since only the outcome matters. But this is not really a relevant statement, regardless of whether or not you agree with it, because the article is about a lawsuit and liability. A group of people, acting as a company, is suing other groups of people as companies. And software is not a person, and can't be held liable. For that to change, the software would need to be made into a person, or equivalent. The fact that software and brains are or are not similar is irrelevant, because software is not a person and cannot be held liable.



