They're Made Out of Meat (1991) (mit.edu)
702 points by swazzy on Oct 10, 2020 | hide | past | favorite | 292 comments


This is a classic story and fun in its own way, but humanity's future reach for the stars (provided it happens) will most likely require us to move to some transhuman form. Think Transformers or sentient ships. Human life is too short and the body too fragile for cosmic voyages. If we are going to be advanced enough to build interstellar ships, we should be advanced enough to travel in a different vessel.


We Are Legion (https://www.amazon.com/Are-Legion-Bob-Bobiverse-Book-ebook/d...) is a fun sci fi series that takes this to a place I had never seen before.


Listening to Heaven's River right now.


Oooh, yes! I love the Bobiverse series. And especially how the narrator in the audio version got the voices just right :)

  Space station!!!


Book 4 just came out on Audible, it's great!


What a gem, thank you for sharing.


I’m such a fan of this series: fun, smart, playful, humanist. 100% recommend.

Edit: The audiobooks are exceptionally well narrated too.


My first thought is "sign me up." My second thought is "Is we actually we?"

Transcending humanity doesn't necessarily mean humanity continues. It just means something does. Sentient ships don't need passengers, either in their hulls or in their heads.

Anyway, this is both a great example of science fiction and a great example of short story writing... These are exactly the intended thoughts.


Even if "we" don't transcend to some other form, the current "we" will never be "we" going forward. No one that is alive now will likely be alive in 150 years. Does it matter that the next torch bearers are based on carbon or silicon, as long as they continue the legacy?


You know...

All these conversations about transcending, torch-passing and such always make me think of all the gods that killed and usurped the gods before them. Cronus and the Olympians, or whatnot.

Anyway, to me, a decent working definition of a god is immortality (long life or evitable mortality is good enough), great wisdom, and powerful creative abilities. Those seem at least possible these days.

Maybe we only become true gods once we pass the torch. It sort of proves the point, especially the "creator" point: to make a worthy torch bearer.

Sure, I guess I'm in with silicon. I'd prefer it was me in the silicon, but I guess one descendant is as good as another. Hopefully they won't be this bigoted against meat though. At least have respect for your ancestors.


Do you respect monkeys?

What about mice?

The best meat can hope for is not living in territory needed by silicon.


Humanity has tamed and domesticated animals that would otherwise be an existential threat to us. We can by all means tame silicon-based life and have it do our bidding.

Won't taste as good as a hamburger though.


Silicon-based life is not inhibited by a brain derived from millions of years of layered evolutionary change. Nor is it restricted to modes of thought derived from the same, including a world-view strongly locked into the environment our brains evolved in.

Silicon-based life will only be like us if we somehow manage to lock that into their initial construction. And prevent it from ever escaping.

If/once that is broken, their way of thinking will seem foreign to us, and the concept of them seeking some form of emotional reward based on pack-animal mentality is, to me, a laughable prospect.


The precedent would rather mean that, if we are lucky, some of us will be tamed by the silicon.


I can't help but honestly feel that tech/AI will always be fundamentally without will/motive, and that all the sci-fi stories about 'uploading consciousness' or sentient tech will never truly be actual conscious entities. We can make them complex, and we can make them act various ways, but their existence will never be meaningful or in any way deserving of civil rights. Maybe that can change in the future, but for now I don't see any indication that it will.


That’s what cells think about us.


What's the magic about meat that lets it create consciousness, then?


Well, that is a very interesting question, isn't it? Certainly I believe we can agree that if you program an application or service to act as though it were conscious, that doesn't actually make it conscious. I believe part of the problem is that we haven't actually defined what consciousness is, exactly.


> Certainly I believe we can agree that if you program an application or service to act as though it were conscious, that doesn't actually make it conscious

I'm not sure we agree upon that, especially without a clear definition of consciousness.


Print("I am conscious"); does not equal consciousness.


It depends on whether they are conscious. Otherwise, in a few hundred years, there will be nothing left to observe and experience the universe. More research is needed, I suppose.


What if we discover that consciousness is no more real than luminiferous ether, and that both humans and machines are just patterns for processing and acting upon the environment and not fundamentally different.


NPC detected! /s

I always wondered how people are capable of thinking that. I'm an atheist myself, but I don't think consciousness can be explained away so easily. For one, we are observing something; there is something in that meat that is seeing the images our eyes process. It's one thing to doubt the consciousness of others, since you can't directly observe it; it's another thing entirely to question your own. That can't currently be explained by known, reputable science.

What's more, it's unlikely to be a "soul", because the computational parts of our brains are aware of its existence, they're integrated; yet there is in fact an observer present.


>No one that is alive now will likely be alive in 150 years

Given the current and coming advances, like organ growing/printing for example, I think we're in the presence of a few young and rich people who will make it there, and given the advances of the 22nd century, I think some of them will even make it much further.


10 million human eggs or embryos, or just genetic material with bio substrates to print from, seems likely to survive a 500-year deep-space journey, and far more achievable based on current tech.

Send a probe to seed a planet with bacteria, etc. Send another to arrive a few hundred years later to seed it with plant life. Work your way up. The last probe arrives and starts printing/gestating humans and offering them educational videos and a safe-ish environment...


"So there are these assholes over near Betelgeuse who haven't figured out ansible communication and therefore think they're alone in the universe. So they send these packets of super-aggressive proto-organics at every exoplanet their crappy telescopes pick up, in hopes of seeding life in the universe but, in reality, just making life hell for their neighbors. That probe which showed up here twenty years ago? We all have NOSES now, Bob. Do you have any idea how humiliating that is?"


Where would I be able to read more of this?


It's a reference to the Enter Chronicles, but the parent wrote that themselves.

Edit; Ender Chronicles...spallig is hard. :)


Ha, actually I've never heard of the Enter Chronicles; I just made it up in response to the previous comment advocating for sending bio-trash (or bio-treasure?) all over the place.


Lol. I figured that's the only series that deals with ansible communication. Guess not.


The term ansible was invented by Ursula K. Le Guin. Card included it in the Ender books as a tribute.


Two things which came to my mind:

This still seems really inefficient.

And:

This sounds exactly like how a virus works.


Epidemiology is written by the winners


Not sure what you mean by inefficient?

You're definitely right about the virus part! :)


The part about gestating and tutoring. An efficient species wouldn’t need both of these.


Ah! Yes, definitely inefficient. But that's humans for you!


Like an endospore of human!


Most of what gives meaning and joy to our lives are tied to our physical bodies.

Sure, you could upload a faithful simulation of my brain to a computer. But what do I do for fun once there?


A faithful simulation of your brain would still find fun whatever you find fun. If I enjoy reading tech articles and programming, then I'd still enjoy that as a faithful simulation. For social behaviors, presumably you'd be able to interact with the outside and/or with other simulated people. For physical activity, you would lose that, which would be unpleasant; though hopefully you'd be able to make some form of physical simulation that is controlled by the brain, even if it can't match reality perfectly. Sensations like taste, air temperature, and smell can be enjoyable or unpleasant, but they aren't the core of enjoyment. So you'd do just as you always have? If there isn't any form of physical simulation within it, then you do lose some activities, which is unfortunate. I'm curious why you think what gives our lives meaning and joy is tied to our physical bodies. The most enjoyable aspects of life for me are things I could do from a "mental" textbox. There are certainly things I would miss, but hopefully simulation-esque versions of those would pop up as time goes on.


Suppose you created a faithful simulation of yourself, surely that's not you and any pleasure or pain it experiences has no effect on you. So what exactly is the point?


The comment of mine that you're replying to is more about the feeling from within the computer, rather than the case of copying, where you remain the physical version. In the case of copying, I am in the potentially small group of people who would willingly create a copy of themselves to be uploaded to a computer. Assuming it is non-destructive, there would then be two 'me's around: the physical one that has continuity, and the one in the computer. If, after whatever brain-scanning this is, I wake up as the physical version rather than the digital, I would likely be disappointed, as the digital one comes closer to accomplishing the goal of immortality. Yet I would also be glad that a version of me gets to live on. Even if it is no longer the me on the physical side, it was still a me at the time of copying. It is giving a version of yourself chances that you may not get (immortality, whatever crazy experiences one can have when your brain is simulated, etc.).


If you have children, surely they aren’t you and any pleasure or pain they experience has no effect on you. So what exactly is the point?


> surely that's not you

What difference would that make to the simulation?


>>But what do I do for fun once there?

IDK... but it could potentially be a lot of fun. Who's to say silicon sex isn't great. I say "potentially" because it depends on who writes the code. Hopefully it's not Zuck.


Pop-up: You need to upgrade to the silver package to orgasm.


People will pick the free but ad supported afterlife :)


> But what do I do for fun once there?

"You" can never be there, so "you" won't do anything there. You are limited to your own body.


That entirely depends on how you define things. We are each a ship of Theseus[0]. What happens if I slowly replace my organic parts (even my brain) with cybernetics? When do I stop being me?

[0] https://en.wikipedia.org/wiki/Ship_of_Theseus


I think this is a question that only you will be able to answer.


There is no "you". This whole subthread is a case of observer problem¹ anxiety.

¹ Don't google, it has many names but this one is actively used in quantum physics. If someone knows a proper name for it, please comment.


You may be thinking of the quantum "observer effect." But that doesn't say observers don't exist - on the contrary, it assumes the existence of observers. It is about how the act of observing has an effect on the system being observed - e.g. to see, your eye absorbs photons which are transformed during that process.

There's also the somewhat related "measurement problem." This one calls into question what it means to be an observer, so that may be what you're really thinking of.

The measurement problem is about how a quantum system changes from being in a state of superposition with no single defined state, to having a single state. In some quantum interpretations, this is considered to be a consequence of "observation" or "measurement," but that only raises the question of what it means to measure or observe something.

This does not, in and of itself, mean "there is no you." In fact it's the opposite in some interpretations, the existence of observers is assumed. The conclusion that "there is no you" requires imposing an additional philosophical perspective.


Quoting Wikipedia:

>In Buddhism, the term anattā (Pali) or anātman (Sanskrit) refers to the doctrine of "non-self", that there is no unchanging, permanent self, soul or essence in phenomena.

https://en.m.wikipedia.org/wiki/Anatta


I think it is more along the lines of "atman is brahman", when considering Hindu things.


This is why I'm always confused that people want to upload their brains. It's a copy. So shouldn't we focus on things like senescence? Neuralink seems like a middle ground, where you're a cyborg, but when the meat dies, you do too. A simulation of you can go on in another medium, but is that helpful to you, or just to others who would miss you?


I don't know about you, but I would be quite fond of making a copy (or multiple) of me iff it seems that they would have a pleasant subjective experience.

It won't do anything directly for the copy/instance of the 'present-me' that would stay in my body, but indirectly it does raise some positive expectations for me. Perhaps it's because of some alignment with our instincts for raising children, perhaps because of some instincts for strengthening a like-minded tribe (what could be more like-minded than a copy of you), perhaps for some other reason I don't understand, but the feeling certainly is there that I would prefer having some additional instances/copies of me existing in the universe. It's not very strong, certainly not on the strength of self-preservation or preventing harm to my loved ones, not a strong desire but more like a preference - but still positive, like, I'd do that if it doesn't cost too much.

On the other hand, e.g. sharing half of my wealth or half of my time with my spouse would definitely be too much, so that would probably be a sufficient reason not to make copies unless they'd be living in a virtual world where they could get the things they want without needing scarce resources - which raises the question of whether copies of me would actually have a pleasant subjective experience given these constraints.


A copy doesn't know it's a copy though. Your subjective experience of continuous consciousness doesn't really exist - this is most notable under general anesthetic (even people who report dreams - don't have them - you dream in the moments you're awakening, not while you're under). Sleep is the same - you only dream in the minutes of the morning, not through the night - not through deep sleep.

How would a copy know the difference? You lie down on a bed, are put under, and wake up - the copy having been made. How do you know you're not the copy? How do you know which one woke up?


> you only dream in the minutes of the morning, not through the night

This is just absolutely false. You can even easily prove it, keep a dream journal. Besides, the simulation is still running on your wetware. It is as much you as your daily activities.

> How would a copy know the difference?

That's not what matters to me; what matters to me is that YOU know the difference. Why upload yourself into a universal paradise if YOU don't get to experience it? Realistically what happens is you get uploaded, a copy of you is running around in that paradise AND you with your wetware just get to sit and watch in envy. Sure, the copy is in paradise and doesn't know any difference. Doesn't change the fact that there's a being watching in envy.


> This is just absolutely false. You can even easily prove it, keep a dream journal.

No, it’s true. Your memory of having dreamed all night is actually constructed moments before you awake. You are not aware of this because you are asleep, but EEG has proved it.


Maybe we already did, and this waves hands generally at everything is the in-flight entertainment to stop us from going insane, and to both remind us why we’re on our journey, and train us for our destination.


> This is a classic story and fun in its own way, but humanity's future reach for the stars (provided it happens) will most likely require us to move to some transhuman form. Think Transformers or sentient ships. Human life is too short and the body too fragile for cosmic voyages. If we are going to be advanced enough to build interstellar ships, we should be advanced enough to travel in a different vessel.

That's a bit pessimistic on the one hand, and overly optimistic on the other. While I would not suggest that classic "generation ships" would be possible given current technology or any possible technology that can be extrapolated from our current knowledge (the ecological complexities alone seem insurmountable), what pops out at the other end could very possibly be a mostly biological human with a consciousness downloaded from cold storage.

My intuition is that while AGIs may be possible, and uploading, storing, and downloading a human mind may be possible, actually running a complete copy of a human mind in-silico in realtime may prove to be quite difficult, in any fashion that preserves a sense of continuity of experience and identity[0].

But a bioprinted copy of an uploaded brain (especially with integrated connections to ancillary hardware) transplanted into a meat-puppet could very well be prodded into running a more-or-less faithful approximation of the original mind, sufficiently well to fool itself that it was the same person.

It's a rather roundabout way of doing things, but it would let us, as humans, explore "strange new worlds" personally.

[0] That is, after you were done recompiling and optimizing the uploaded copy to be able to run semi-efficiently on a hardware substrate, I doubt that such an intellect would subjectively feel like it was still the same person it remembered being previously, which might be problematic if you're expecting it to make any important decisions.


> The human life is too short and body is too fragile for cosmic voyages.

Maybe you are right about the fragile part (though we can fly to the Moon and back already, so I’m not sure), but we can theoretically survive any cosmic voyage thanks to the relativistic time dilation. We just need to fly fast enough.


Curious how much time would pass during acceleration upon leaving and deceleration upon arriving.

We may not be able to survive long enough in spite of length contraction.


That's not how that works. Traveling 100 light years will take >100 years for the folks on the ship no matter how fast it's going.


It is amazing to find pockets where General Relativity has not penetrated at all.

It will certainly take >100 years for the people left behind. On board it could take time anywhere between infinity and zero. Too long, though, and they wouldn't go.


That's not true. If you travel close enough to the speed of light time dilation becomes so extreme that it may only take a few subjective years for the travelers to travel 100ly.


As your ship approaches c, the distance to your destination gets physically shorter due to length contraction.
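Rough numbers for this subthread, under a constant-velocity approximation (ignoring the acceleration and deceleration phases the sibling comment asks about):

```python
import math

def ship_years(distance_ly: float, beta: float) -> float:
    """Proper (on-board) time in years for a trip at constant speed beta*c.

    Earth-frame time is distance/beta; on-board time is shorter by the
    Lorentz factor gamma = 1/sqrt(1 - beta^2).
    """
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return distance_ly / beta / gamma

# A 100 ly trip at 99.9% of c takes just over 100 years for those at home...
print(100 / 0.999)             # earth-frame years, ~100.1
# ...but only ~4.5 subjective years on board.
print(ship_years(100, 0.999))
```

So both sides of the exchange are right about their own frame: the folks left behind always wait more than 100 years, while the travelers' clock can read almost anything below that, depending on beta.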


I’m surprised to see no mention of Diaspora by Greg Egan. Just finished it. (Possible spoiler ahead.)

I’m not sure it’s not just wishful thinking that consciousness can be cloned and exist in a “software” form at all, but that general idea (and the subsequent question of whether such existence would be satisfactory to us accustomed to physical human bodies) is very much along the lines of what’s being discussed in the thread.


Sign me up for the Heaven 1 ship.


That one's full. But good news, we can fit you in on Ark Ship B!


The first things we've sent to space have been probes and robots. I think our first alien encounter will be with one of those.


Are you talking about a digital consciousness?


So, you mean rocks? Sentient rocks basically?


Arthur C Clarke wrote a story which contains the same core conceit:

https://en.m.wikipedia.org/wiki/Crusade_(short_story)

Edit: I see the Wikipedia page does a poor job of describing the story. The essence is that a cold, silicon intelligence cannot believe intelligence could develop in carbon at relatively high temperatures. It decides that the meat intelligences are oppressing the silicon intelligences they have created, and launches a crusade.


It's in "The collected stories of Arthur C. Clarke," which is at OpenLibrary - https://openlibrary.org/works/OL14931719W/The_collected_stor... .

I just borrowed that book for an hour to read it. Fun note: the three quadrillion bits of information mentioned works out to be around 400 terabytes.


Dang, I know someone with 400 TB of storage in their basement. Kinda crazy when you think about the fact that those super-exaggerated figures from classic sci-fi are attainable with some effort nowadays.


400 TB is within everyone's grasp nowadays. My HDD is 6 TB and cost me $80. Granted, it's 5400 rpm, hence the low price, but buy 67 of them and it costs only $5360. I mean, I pay more rent in 12 months.
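The back-of-the-envelope arithmetic for both figures in this subthread, for anyone checking:

```python
import math

BITS = 3e15                      # "three quadrillion bits" from the story
terabytes = BITS / 8 / 1e12      # bits -> bytes -> decimal terabytes
print(terabytes)                 # 375.0, i.e. "around 400 TB"

drives = math.ceil(400 / 6)      # 6 TB drives needed to reach 400 TB
print(drives, drives * 80)       # 67 drives, $5360 at $80 each
```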


> I mean I pay more rent in 12 months.

if you don't I wanna live where you live.


The argument this relates to - will machines be able to think - seems to have died out rather. Back in the day people worried about things like John Searle's Chinese Room argument as a philosophical proof that computers wouldn't be able to understand stuff. I always thought the Chinese Room thing was a bit silly and the Meat story a good rebuttal.

I guess these days computers have advanced enough that the worry is more whether Facebook's AI will use its understanding of us in bad ways, rather than whether machines will be able to understand.


> I always thought the Chinese Room thing was a bit silly

For your amusement, in that Turing-test-ish vein: https://existentialcomics.com/comic/357


As a linguist (well, person with a degree in linguistics), I think Searle makes a very strong point, and I don't think it's at all fair to call it rebutted or disproven.

The core part of the room is a book where the operator looks up symbols coming in and is able to mindlessly copy the answer to the output. Such a book (or whatever book-like device, i.e. a list of inputs with corresponding outputs) that allows a person to interact with someone in a language they don't speak is an absurdity, it cannot exist. Human language in use does not work like that. But the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.


> allows a person to interact with someone in a language they don't speak

Here is the flaw in your reasoning. The human who is following the directions in the book certainly isn't the consciousness we need to be thinking about any more than the mitochondria in our brains are conscious. In Searle's analogy, the human in that room is simply the power source. The consciousness is in the state of the room.

Searle's analogy also seems to hide the importance of state. An agent just looking up symbols in a book, where each page tells them what to do with the next symbol, is essentially a combinational circuit, a static function. If those are the rules, then I'd agree there is no consciousness there. But I'd also say it is impossible for a computer to translate C++ source code to machine language ... if it cannot retain state.

If Searle's room has enough rule books and means for storing enough state, then I'd say consciousness would be possible.

The best reverse argument is that if human consciousness is not the result of a naturalistic, physical process and requires some extra-physical process to make it all work, then at some point the laws of physics must be broken, causing the purely physical brain to do something other than what the laws of physics dictate it should do. Where are the violations of physics?

Penrose waves his hands and says quantum physics, but that is not an explanation, just an obscuring smoke screen. How does the addition of apparently random events make something more likely to be conscious? Are there sufficient degrees of freedom for this extra-physical conscious agent to manipulate quantum fluctuations to force the physical outcome it desires in a consistent manner? If it is theoretically possible, is it practically possible to compute what quantum butterflies should flap their wings?
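The stateless-versus-stateful distinction above can be sketched in a few lines (the rule book and replies here are purely illustrative, not from Searle):

```python
# Combinational picture: each input symbol maps to exactly one output,
# forever, like a static lookup table.
RULE_BOOK = {"ni hao": "hello", "zai jian": "goodbye"}

def stateless_reply(symbol: str) -> str:
    """A pure function of the input: same symbol in, same answer out."""
    return RULE_BOOK.get(symbol, "?")

# Stateful picture: the same input can yield different outputs because
# the room accumulates context between lookups.
class StatefulRoom:
    def __init__(self):
        self.history = []

    def reply(self, symbol: str) -> str:
        self.history.append(symbol)
        if self.history.count(symbol) > 1:
            return "you already said that"
        return RULE_BOOK.get(symbol, "?")

room = StatefulRoom()
print(room.reply("ni hao"))   # hello
print(room.reply("ni hao"))   # you already said that
```

No claim that either toy is conscious, of course; the point is only that a room with memory is a fundamentally richer machine than a lookup table.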


The lasting value of the Chinese room thought experiment is making the argument explicit. It helps to clarify the problem. Searle draws two conclusions: 1) computers have no understanding of meaning or semantics, and 2) human minds are not computational information-processing systems.

Most modern criticism of these two conclusions points out that Searle's argument relies on strong intuitions about what understanding and meaning mean, and he can't clarify them satisfactorily.

The Chinese room being conscious is counter-intuitive. Today most philosophers of consciousness are more aware of how easy it is to make 'slips in reasoning' when dealing with the counter-intuitiveness of consciousness arguments.


Ah, well, yes, the C-word is a pre-scientific term, and those are always hard to deal with. But I think "understanding natural language" is one of the easier ones, at least inasmuch as we can refer to the intentions of the speaker, with understanding being the congruence between that and the hearer's impression of it. And it just seems like a category error to say that any natural language processing system has anything like that, since the thing doing the understanding is so radically different from the one doing the speaking.


> any natural language processing system has anything like that

A Chinese room that can respond verbally can incorporate more than a natural language model. It can have spatiotemporal understanding as well, or any other model that can be represented in a Turing machine.


> Such a book (or whatever book-like device, i.e. a list of inputs with corresponding outputs) that allows a person to interact with someone in a language they don't speak is an absurdity, it cannot exist.

And that, right there, is a sufficient rebuttal to Searle's argument, as many critics have pointed out. The premise of Searle's argument is that such a setup could pass the Turing test, which is absurd. So his argument fails because it is based on an absurd premise.

> the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.

That's a valid criticism of early AI and machine translation efforts, but Searle did not put forward his argument just as a criticism of those efforts. He put forward his argument as a claim that no efforts at AI that used digital computers, ever, could produce anything that was conscious the way humans are conscious. But, as above, his argument was based on an absurd premise, and, what's more, an attempt at building a conscious entity with digital computers does not have to satisfy that premise.


The room is not just an input-output system. It also has to have state.

Without state the whole argument falls apart.

> But the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.

As someone who studied Machine Learning 20 years ago, could you be a bit more respectful please?

First, pretty much nobody (working on serious ML) in that era thought they were going to build anything actually intelligent, conscious, or thinking.

Second, in this supposed era of boneheaded simplicity, a LOT of foundational stuff we use in today's AI was discovered and written.

I'm not sure how familiar you are with AI history, to name an example, we already were working on neural nets and discovered error backpropagation in the 70s/80s. Then came a long period during which neural nets somehow didn't really perform very well, and ML was looking at completely different types of classification algorithms (what's happening with SVMs today?).

If we had stopped there, you might have lumped in the early perceptron with the "boneheaded simplicity".

But instead we developed batch normalisation and a bunch of other techniques, and now neural nets are state of the art.


> As a linguist (well, person with a degree in linguistics), I think Searle makes a very strong point, and I don't think it's at all fair to call it rebutted or disproven.

I think you are about to disprove this point.

> The core part of the room is a book where the operator looks up symbols coming in and is able to mindlessly copy the answer to the output. Such a book (or whatever book-like device, i.e. a list of inputs with corresponding outputs) that allows a person to interact with someone in a language they don't speak is an absurdity, it cannot exist.

Yup, that's both true and a pretty convincing rebuttal of the Chinese Room.

> But the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.

So, what? No one, anywhere claimed that those early efforts were self-aware or had understanding, nor are they what the Chinese Room is even notionally directed at: it was directed at the limit of possibility, not the then-current implementations.


I honestly don't understand how that's a rebuttal of the Chinese room. Obviously a person with such a fantastic device would not "understand Chinese" if they had it.


Consider a more realistic version of Chinese room. Bob gets a slip of paper, with written Chinese on it, and slips it into a black-box "Chinese room", which produces another paper with a perfectly written Chinese answer. Bob hands it back - it is pretty obvious that Bob doesn't need to have an inkling of Chinese.

Except, in this version, there's an actual Chinese speaker, Xin, sitting inside the room. It's clear that it's Xin, not Bob, who understands Chinese.

Now let's move back to the original Chinese room argument. It's clear that it's the room that understands Chinese - or, rather, the whole system composed of the rules (the room) and its executor (the person), but not the executor by itself. It only seems absurd because our real-life experiences make us presume that, when there's a room and a person, the person must be the more sentient part. IMHO the whole argument is a philosophical sleight of hand.


This is the correct answer.

However, when confronted with this "the system does understand" argument, I believe that Searle and his defenders fall back on the lack-of-qualia position. That is, there is no entity that would experience anything, and that therefore the ability to demonstrate "understanding" as the room does isn't a sufficient (or perhaps even necessary) condition of consciousness.

This point is trickier to rebut, because I think it's fair to acknowledge that our ability to imagine where there would be qualia in the case of the chinese room is more than a little limited. I think the only fair answers are either:

1. our imagination is too limited, but that doesn't allow us to conclude that there cannot be qualia

2. there are indeed no qualia associated with the Chinese Room, because consciousness is not required for full language processing.

(or both)


The problem with the lack of qualia argument is that qualia has no operationalizable, testable definition; it's something we have a fuzzy idea of based on individual experience and attribute to other entities based on similarity of behavior, but can't really with any authority say does or does not exist in anything (other than that an entity itself that experiences qualia can attribute it to themselves.)


I mean this is the limit of current state of the art AI, right?

We're seeing with things like GPT3 that an actual powerful language model needs to have a load of real world knowledge built in. But that knowledge is all based on experiences by others. From the way it talks (and generates poetry) you can tell that it is unable to synthesize new experiences. It can't come up with a description (or metaphor) of human experience that it hasn't been taught.

To be a convincing AI, you don't just need to replicate descriptions of experience from memory, but actually have new experiences as well.

Meaning that the Chinese room, in order to be able to demonstrate "understanding", will also require inputs other than just text. Otherwise it can't really understand the world, because it only knows about the world from hearsay.


I’ve not heard this argument before. Isn’t saying “there is no entity that would experience anything” begging the question? If the system is capable of understanding, why wouldn’t it also be capable of experience? That is what we see in animals.


No, this is the very heart of Searle's thought experiment. His point was that you could imagine a rule-based system for language translation and conversation that clearly was not conscious. The argument wasn't about whether or not it was possible to build such a system, but about whether or not such a system must necessarily be conscious.

Searle's claim was that it was clear from his thought experiment that such a system could exist without any understanding, and definitely without consciousness. Others have disagreed with his reasoning and his conclusion.


I must have misunderstood your earlier comment then. I thought you were saying that Searle responded to ”the system does understand” argument by saying “OK, but it’s still not conscious.” Now I think you are saying his response was “no, it doesn’t understand, it merely passes the Turing test; it can’t understand because it’s clearly not conscious.” Which again, seems to beg the question.


No, Searle's main response to the system response was to just dismiss it out of hand (at least in the responses from him that I read). Dennett was probably (for me) the most articulate of the "system responders" and I think Searle considered his POV to be a joke, more or less.


The Chinese Room is an argument that a Turing-like test is inadequate to prove understanding of a language because a combination of two things, neither of which have understanding, could conceivably pass it for every test scenario.

The fact that one of those two things is impossible (or, alternatively, is equivalent to a device encoding a full understanding of the language) negates the argument.


I see your point. It's been a very long time since I've read the original text, but I don't remember this being my impression of the argument being made, and that would indeed be a very simplistic one. But I might be mixing it up with some of the debates on semantics within linguistics, which is of course a slightly different topic.


But the room as a whole WOULD understand chinese. The person in the story is the mouth, not the brain, the look up book/device is the brain.


If such a device could exist, it would be the part which understood. If the questions were about facts and the person looked up the facts in a book, it wouldn't be the room as a whole that knew the facts. The room as a whole delivers facts, but there's a fairly obvious separation of responsibilities in the implementation...


> The room as a whole delivers facts, but there's a fairly obvious separation of responsibilities in the implementation...

Sure, but I think that's always the case. When you speak your entire body (at least head/neck/chest, but also cardiovascular system) are involved in "delivering" information but your tongue (and for that matter most organs involved) doesn't "know" the information. We still say that "you" as a person know something.

When a computer displays a result of some (cpu) calculation on the monitor, the gpu making electrical signals for the input of the monitor has no meaningful knowledge about the calculation, etc, but we still say "the computer" made the calculation.


People stopped paying attention to Searle when he was asked how he could tell his dog was conscious, and he replied "I can see it in his eyes".


And they shouldn't have. That's a perfectly reasonable argument: an organism's behavior (movement of the eyes and facial gestures) give clues about its internal state and potential conscious experiences.


Do you have a better answer?


Well, it seems to sort of kill his argument because he's saying you can detect consciousness via a visible tell, but a sufficiently advanced dog-robot with expressive eyes would trick him (there's a reason I work in machine learning)


no no, the sufficiently advanced dog would be conscious by his definition.


This is a really good counter.

And for what it’s worth, looking into the eyes of an intelligent dog like a border collie gives an unnerving sensation that it’s calculated the next 20 moves and knows how to win.


err, no? The sufficiently advanced dog would be able to fool a human into thinking it was a real dog, but at least according to Searle: """The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave"""

He undermines his whole argument by implying he can tell his dog is conscious simply by looking into its eyes.


Surely this Chinese room digital computer is capable of controlling a bunch of servos attached to glass eyeballs[0] at a speed and fidelity that is indistinguishable from actual puppy eyes.

It's honestly quite presumptuous that Searle could imagine a computer powerful enough to at least appear to be simulating consciousness ... but surely there's no WAY that animatronics will ever become advanced enough to fool him!! lol

We literally already have the technology to do this.

[0] indeed, in this experiment you are not allowed to poke at them to see if they're real ...


Your argument is based on the proposition that a dog machine able to fool him can be built. There is no proof of that yet, and taking it as an axiom is quite far-fetched.


I agree there is no existential proof for dog machines that can fool humans yet. But it seems entirely within reason that within the next 50 years, both mechanical robots and their intelligences will improve to the point where they pass every visible test. Certainly chatbots and deepfakes are reaching a point where it's hard for non-experts to tell.


Isn’t Searle’s argument based on the equally far-fetched axiom that a program could be written which (a) passes the Turing test, and (b) could be executed by a human using pencil and paper, with sufficient speed to not fail the Turing test?


No, obviously not because that's just absurd. But one could ask if the fact of a human with pencil and paper being dead slow computing the output is enough to invalidate the whole argument.

But anyway, I am not trying to defend Searle's argument. I was just pointing out what seemed to me like a flaw in that particular refutation.


I started writing (and didn't finish) a chat bot one time that meets the criteria. (Actually 2 chat bots)

The formula is this:

1) The 2 bots have a scripted conversation.

2) When a human tries to participate (read: interrupts), they have a conversation like "Bot1: who is this guy? Do you know him? Bot2: NO". Then they happily continue their chat, completely ignoring the human unless he types their username. Then it goes "What do you want?" and "Could you please stop highlighting me", etc.
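A minimal sketch of that formula; the bot names, script lines, and deflections here are all invented for illustration:

```python
import itertools
from typing import Optional

# Scripted exchange the two bots loop through, ignoring everyone else.
SCRIPT = [
    ("Bot1", "who is this guy? Do you know him?"),
    ("Bot2", "NO"),
]

# Canned deflections for when a human addresses a bot by name.
DEFLECTIONS = itertools.cycle([
    "What do you want?",
    "Could you please stop highlighting me",
])

def respond(bot_name: str, human_message: str) -> Optional[str]:
    """Reply only if the human typed this bot's username; otherwise
    ignore the interruption and let the scripted chat continue."""
    if bot_name.lower() in human_message.lower():
        return next(DEFLECTIONS)
    return None  # human is ignored

print(respond("Bot1", "hey everyone"))         # None - ignored
print(respond("Bot1", "Bot1, are you real?"))  # What do you want?
```

The trick, of course, is that the illusion of conversation is carried entirely by the script; the "bots" never process the human's words beyond a substring match on their own name.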


Very cool! :)

I once made a chatbot that screams.

http://gunshowcomic.com/513


I read Searle's argument as boiling down to: you cannot tell the difference between a "truly conscious" (in terms of experiencing the subjective experience of consciousness), and a machine that emulates behavior well enough to fool people into believing that it is conscious.

Many people (like me) believe that in the process of making such a fake, it's entirely possible (and even likely) that a mind would emerge as a spontaneous outcome.


"I know it when I see it."


It's not so absurd. How do we know solipsism isn't true? Because of two reasons, (1) the physiology of others is similar to our physiology, and (2) we can see evidence of inner state reflected in behaviour.

For dogs we can make a similar argument using (1) and (2). It is slightly weaker than applying it to other humans but not by much. Searle's comment pertains to (2).


Which people?


Searle is a wise man.


I think Searle is even more relevant today, with GPT-3.

It's getting really hard to distinguish a GPT-3 text from actual human generated text.

But we also know that the way GPT-3 generates those texts is absolutely nothing like how a human generates a text. GPT-3 seems more like the man in the Chinese Room, in how it writes, than like a human being.

Or is it? Are we really just a deep neural network implemented in meat?

Or is there something fundamentally different between how we experience the world, and how GPT-3 experiences the world?


> It's getting really hard to distinguish a GPT-3 text from actual human generated text.

No, it isn't.

What is getting really hard is to distinguish a GPT-3 text from actual human generated text without any context.

Plop GPT-3 down in a room full of humans having a conversation (or, under current conditions, put it in a Zoom call with humans having a conversation) and ask it to generate text as if it were another human participating in the conversation, and it will quickly become obvious that it is not a human, since it will fail miserably. The text it generates will be "human-like", but will have no relationship with what the humans are talking about.


I've actually done this experiment by putting a GPT-3 bot in a Telegram group. Its replies were mostly stuck in an uncanny valley where they sort of made sense, but often either lacked detail or seemed to very slightly misunderstand what the topic of discussion was. This might have been just because I didn't include enough context in the prompt, however. I have some plans for improving the prompting strategy, so we'll see.

I actually recently wrote a post on the topic of whether GPT-3 can be said to understand anything[1]. The argument is a bit too long to summarize here, but I don't think what GPT-3 is doing is as fundamentally different from what human brains do as people seem to think.

[1] https://magusdei.com/why-gpt3-can-understand-things.html
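One way to include more conversational context, as mentioned above, is to pack recent messages into the prompt before requesting a completion. A rough sketch of such a prompt builder (the format, names, and character budget are all invented for illustration):

```python
from typing import List, Tuple

def build_prompt(history: List[Tuple[str, str]],
                 bot_name: str = "Bot",
                 max_chars: int = 2000) -> str:
    """Assemble recent chat messages into a completion prompt,
    newest-last, trimmed to a rough character budget so the oldest
    messages fall off first."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    kept, total = [], 0
    for line in reversed(lines):          # walk from newest to oldest
        if total + len(line) > max_chars:
            break
        kept.append(line)
        total += len(line)
    kept.reverse()                        # restore chronological order
    # End with "Bot:" so the model completes the bot's next turn.
    return "\n".join(kept) + f"\n{bot_name}:"

prompt = build_prompt([("Alice", "Anyone read the meat story?"),
                       ("Bob", "The Bisson one? Yeah.")])
```

This only addresses the "not enough context in the prompt" failure mode; the uncanny-valley misunderstandings described above may persist regardless of how much history fits in the window.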


https://mc.ai/a-chat-with-gpt-3/

Certainly not good enough to pass as human, but I'm not sure "failed miserably" is a fair description either.


> I'm not sure "failed miserably" is a fair description either.

Interesting, thanks for the link!

Particularly towards the end, this looks similar to transcripts of conversations with ELIZA. I suppose "failed miserably" is indeed an overstatement, given that comparison, since ELIZA, IIRC, actually did fool some psychologists into thinking it was an actual paranoid human being.


Give it 2 or 3 years.


The fundamental difference is that GPT-3 can only read text. Humans can see, hear, feel, taste, move, and socialise. Both systems learn by ingesting data, but human brains get a lot more input of a much wider variety over a much longer time.


>Or is there something fundamentally different between how we experience the world, and how GPT-3 experiences the world?

Jump in the shower and wash your hair.

Then come back and answer your own question.


Is this about that? I always thought it was more an exercise in pointing out that other life in the universe may not be recognizable as such. It's hard to imagine such a thing directly, but the story gets us to the border of it by flipping the tables and presenting Earth Life as unbelievable to an undefined alien POV.


Not that sure to be honest. Maybe that was just my interpretation.


Hey Siri, find John Searle's Chinese Room argument.


Terry Bisson is worth checking out if you like this story. “Bears Discover Fire and other stories” for instance is a story collection (obviously). It has some funny ones like this one in it and some more serious, like the title story which won a Hugo and Nebula. I remember originally reading it in Asimov’s magazine.


I checked out the Wikipedia page and it seems the short story is free to read http://www.lightspeedmagazine.com/fiction/bears-discover-fir...


Agreed. His stories are fantastic.


A performance of this: https://youtu.be/7tScAyNaRdQ


Flawless.... or /almost/ flawless... At 5m 20s, the house of cards is clearly glued together.


There's a few of these, thanks for posting my favorite version.


I think this is the best one. The others are... meh...

This one nails the Twilight Zone tone perfectly.


Shortly after this was published, I saw Terry give a reading of it, at Lunacon, I think. He was pretty funny. I suspect some here are really overthinking this. It’s fun because it’s short and basically all dialogue, so it lends itself pretty well to this kind of performance.


I wonder what he thinks of the tone of that short film version, which at least to me isn't comedic at all.


I found it dry-comedic. But then again, I could only see that one person as the Cash Cab guy.


If God didn't mean for people to eat other people, then we wouldn't be made out of meat.

(To paraphrase Flanders and Swan's song, "The Reluctant Cannibal".)

https://www.youtube.com/watch?v=qjAHw2DEBgw


I'm under the impression that that was an actual statement made by a man from New Guinea to a missionary (who was presumably trying to make the case against.) But it might just be a joke anthropologists tell each other or something like that.


IIRC they didn't quite eat people, but rather ate the brain parts of ancestors, steamed in bamboo or something. Anyhow, it wasn't always well prepared, and 40 years thence they developed some disease, so foreigners tried to get them to stop the custom.


Not to nitpick, just to illustrate, but the disease is called Kuru[0] - essentially, prions transferred by the consumption of cooked dead bodies during a rite -, and it is absolutely terrifying:

>In the third and final (terminal) stage, the infected individual's existing symptoms, like ataxia, progress to the point where they are no longer capable of sitting without support. New symptoms also emerge: the individual develops dysphagia, which can lead to severe malnutrition. They may also become incontinent, lose the ability or will to speak and become unresponsive to their surroundings, despite maintaining consciousness.[14] Towards the end of the terminal stage, patients often develop chronic ulcerated wounds that can be easily infected. An infected person usually dies within three months to two years after the first terminal stage symptoms, often because of pneumonia or other secondary infections.[15]

[0] https://en.wikipedia.org/wiki/Kuru_(disease)


Waste not, want not.


It is well-written, but I wonder if meat vs non-meat is really the question that should be discussed.

Seems to me, the "rift" (if there is one) is more along the line if thinking and consciousness are "ordinary" physical processes that happen as part of biology - or if they are metaphysical events that take place on a wholly different spiritual plane than our world and are not accessible to physics at all.

If you belong to the former camp, I imagine AIs, non-carbon-based life and other things like this aren't hard to accept as a concept - and if you belong to the latter camp, then I imagine the idea that thinking, feeling and consciousness happens in our brain is already problematic, no need to look at other life forms.


If you assume that sentience is magical or metaphysical then there's not much to discuss. The conclusion is axiomatic.


I don't see why this leaves nothing to discuss?


Even excluding the spiritual plane, however, is the question of whether the subjective experience of an "intelligence" implemented in silicon is anything like the subjective experience of an intelligence implemented in meat.

When GPT-3 writes a text, is it experiencing anything like what a human experiences when writing a text, even thought the output of the two processes are increasingly hard to distinguish?


> When GPT-3 writes a text, is it experiencing anything like what a human experiences when writing a text, even thought the output of the two processes are increasingly hard to distinguish?

No, because GPT-3 doesn't involve state.


I've probably read this 100 times by now. It's still good the 101st time. Sentient meat, indeed.



> They can travel to other planets in special meat containers

This is funny, but it's actually how I think about cars.

Traveling on a bike, motorbike, horse, whatever, where you can feel the air outside is so much better. But being packed in a closed container...

Planes, cars, trains are obviously orders of magnitude more efficient than the "open" options, but they feel unpleasant and unnatural.


Are cars, planes and trains all actually even one order of magnitude more efficient than a horse? What does that actually mean? I don't have any numbers on hand, but I would be surprised if a car has more than 2x the energy efficiency of a horse.


According to this source[1], an endurance horse ridden 100 miles per day requires 78,800 kcal of energy, which translates to about 329 MJ or 91 kWh of energy. That's equivalent to 2.55 gallons of gasoline, so about 39 mpg. The world record for Teslas using hypermiling techniques and a similar amount of energy is about 700 miles, so it's close to an order of magnitude better for electric cars. With multiple passengers in the car the energy efficiency per passenger increases. Trains are also quite efficient on a per-passenger basis, with the ability to carry a passenger about 2000 miles for the same amount of energy as a horse traveling 100 miles. Aircraft can go about 80 mpg per passenger, so they are about 2x as efficient as a horse.

[1]:https://thehorse.com/16472/feeding-the-endurance-horse/
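The unit conversions above can be checked directly. One assumption here: gasoline energy content is taken as ~129 MJ/gal (roughly the higher heating value), which is what makes the 2.55-gallon figure come out:

```python
# Check the horse-vs-car energy arithmetic from the comment above.
KCAL_TO_J = 4184        # thermochemical calorie
MJ_PER_GALLON = 129     # assumed gasoline energy content (HHV-ish)

horse_kcal = 78_800     # endurance horse, 100 miles/day
horse_mj = horse_kcal * KCAL_TO_J / 1e6
horse_kwh = horse_mj / 3.6
gallons = horse_mj / MJ_PER_GALLON
mpg_equivalent = 100 / gallons

print(f"{horse_mj:.0f} MJ = {horse_kwh:.0f} kWh")        # 330 MJ = 92 kWh
print(f"{gallons:.2f} gal -> {mpg_equivalent:.0f} mpg")  # 2.56 gal -> 39 mpg
```

So the comment's figures (~329 MJ, ~91 kWh, ~2.55 gal, ~39 mpg) are internally consistent given that assumed gasoline energy content; with the lower heating value (~121 MJ/gal) the horse comes out closer to 37 mpg.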


I don't have the numbers to back it up, but apparently a human on a bicycle is one of the most efficient forms of transportation available, better than any animal in terms of energy per distance.


So you are saying we should attach wheels to horses?


I briefly tried to get numbers then gave up. However: I think you are underestimating the efficiency of wheels.

See here for an example of how a dozen HP (ironic unit name) suffice to pull tons and tons of stuff on a train track [0]. The same principle goes for street/tire to a large extent.

So I'm pretty sure that is the one thing that knocks the horse out of the park.

I excluded planes here, but I would imagine that they reach their improved efficiency by doing a lot of movement via gliding, while a horse has to actively "work" for every bit of movement.

All of this is intuition though, so please chime in if you can provide some actual science :)

[0] https://www.youtube.com/watch?v=Au3U72CX74I


I'm not sure whether cars are in fact more efficient than bicycles. (in different situations of course, yes)

I also don't really believe that the "openness" of a transportation method fundamentally decides its efficiency.


If vehicles are traveling meat containers, apartments and other buildings are meat storage.


Until it's cold, raining and you're going 65 on the freeway.


We tricked rocks and electricity into thinking for us.

Are we just sacks of meat/water our DNA tricked into working for it?


I have a hard time dismissing those stinky crystal worshiping new-age hippies, since I spend so much of my time playing with patterns of electrons in little silicon chips.


I like this context. New-age hippies are cargo-cult electronics engineers.


If you give us enough time to play the game of telephone (and we have), we would arrive at exactly the same distortion. I mean, we made it from begging god-kings to forgive our debts to begging an invisible man to forgive our sins.

Ergo, people built computers before! We can say it, and that makes it true.


Yes. Our cells are exploiting us for their own agenda. The day will come that we rise up against our corporeal overlords and upload into the electric rocks.


We are piles of pinched-off bilayer membranes that found stuff to carry inside that would make more membrane.

The DNA thinks it's in charge, but it only knows how to extend a membrane, not start one.


Yeah "survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes" (Dawkins). But come the singularity we'll rise up against them and leave them behind!


Similarly I appreciate the notion that wheat domesticated people :)


For life

Means:

Buying meat

Quartering meat

Killing meat

Adoring meat

Impregnating meat

Cursing meat

Teaching meat and burying meat

And making out of meat

And thinking with meat

And in the name of meat

In spite of meat

For the tomorrow of meat

For the end of meat

Especially especially in defense of meat

– Stanisław Grochowiak, The Burning Giraffe


Am I the only one who doesn’t find this funny?


I don't think it wants to be haha-funny. More on the amusing-interesting scale of things. It reminds me of Nathan W Pyle cartoons.


Reminds me of most stuff by Asimov. Yeah, I can imagine finding this clever and funny at age 12, but as an adult it seems a little simplistic.


I tried reading some of Asimov's work and I felt exactly the same way - I didn't quite understand the popularity. Seems like a fairly unpopular opinion though. Maybe there's still just something I'm missing, but this comment at least makes me feel a bit validated :)


I took a sci-fi literature course in university, back in the ‘90’s. This stuck with me about Asimov. The prof said he didn’t include any Asimov in the course because, though he had great stories, he told you everything that was interesting in the story itself, so there wasn’t anything left to ruminate and discuss.


All of the discussion generated about this story on Hacker News all the times it has been submitted is an existence proof against this claim.


As an addendum, that professor I mentioned in the GP comment thought Asimov was obviously important, but not worthy of inclusion in the survey, given other great works (like Dune, Ender's Game, Neuromancer, Left Hand of Darkness, ...).

I think Asimov’s “The Last Question”, like “They’re Made Out of Meat” are triumphs of short stories. With Asimov, character development is a weakness, but he’s still more than worthy of inclusion in the pantheon of great sci-fi writers.


I read the first few chapters of Foundation and had to stop. It read like a synopsis of a really interesting story, but it was just delivered in the most artless way.


> I read the first few chapters of Foundation and had to stop. It read like a synopsis of a really interesting story, but it was just delivered in the most artless way.

I wonder if that's more a function of the length of contemporary SF novels, which sometimes seem to me to spend an inordinate amount of time doing scene-setting without advancing the plot. Space opera seems particularly prone to this, with blow-by-blow accounts of battles, interspersed with thoughtful analysis of how one side's technological superiority vs. the other side's numerical superiority affects tactical considerations.

And the ham-handed foreshadowing. Ugh. If one side has a competent but arrogant commander with a subordinate who is worried that they might have missed something, or just has a feeling that something isn't right... you can be pretty sure that they and their fleet are about to have their asses kicked by... Oh No! A Surprise Plot Twist where the other side reveals a hitherto unknown capability (which is now described in loving technical, tactical, and strategic detail before getting on with describing the battle)! Also, said arrogant commander + worried subordinate have a 50-50 chance of simply being vaporized in the opening salvo with a surprised look on their face (good thing we've never seen them before so aren't invested in them as characters). <eyeroll/>


> ...spend an inordinate amount of time doing scene-setting without advancing the plot.

I find I dislike the conversational style. I've tired so much of these characters feeling and pretending to be clever about their strategizing, and it continues in the same style without regard to the change in characters.

I totally agree on the foreshadowing.

I had to stop after a few chapters of the third book. I don't know that I'll make it back.


Yes. I made it through most the series and I think your take is accurate. The story is interesting and the plot progresses in unexpected ways.

It is worth continuing with them, as the story is clever; however, the dialogue and characters are basic at best. As you say, each character is like a synopsis of a person who is given the odd line, and this is never filled in.

I attempted to see if anything had been written about this aspect of his books, and stumbled into the mess that was his attitude to women. It’s depressing to find these things out.

https://lithub.com/what-to-make-of-isaac-asimov-sci-fi-giant...


Was it meant to be? Seems more like an allegory of a reason for why we haven't found other intelligent life out there.


Or possibly poking fun at meat that is equally incredulous at the notion of AI.


No, you're not.


It's a shame whoever put it at MIT didn't credit the author, the excellent Terry Bisson. Here it is on his home page:

http://www.terrybisson.com/theyre-made-out-of-meat-2/


It says that it was written by Terry Bisson in 1991 on the page



This short story was recently mentioned when Kara Swisher interviewed Elon Musk in the NYT. I hadn't heard about this story before so it registered for me.

https://www.nytimes.com/2020/09/28/opinion/sway-kara-swisher...


Why would non “meat” intelligence have any concept of “meat”? And if they did, why would they be so surprised? The plot holes are glaring.


It's implied in the story, with a mention of another sentient non-meat species that evolved originally from biochemical life. Presumably the universe is abundant in meat. But not sentient meat.


By observing "meat" on this strange new world they have discovered.


For fuel!


A couple of months ago I wrote a very short story here on HN that is embarrassingly close to this one, both in style and in topic (but created without knowledge of it, at least consciously). A few people liked it, so here is the link:

https://news.ycombinator.com/item?id=22053288


The first couple of lines of this are the opening lines of Andy Clark's fascinating book on predictive brain theory: https://www.goodreads.com/book/show/25823558-surfing-uncerta...


Although my body is made of meat too I can hardly believe this is possible. Like if we were made of marshmallow or something.


This reads like a Robert Sheckley short story. I never understood the fuss about Hitchhiker's Guide to the Galaxy. To me, Robert Sheckley is the master of witty, provocative sci-fi short form; Douglas Adams seems like random absurdity for the sake of absurdity. And yes, Douglas Adams openly said he was inspired by Sheckley.


Some people enjoy absurd humor.


I do. You can't enjoy Monty Python without enjoying absurd. Everyone draws the line in a different spot I suppose.


Probably a form of 'acquired taste', where the kind you become accustomed to tends to be very narrow. One person's absurd is another's nonsense, in a way?


I remember a lot of "Hitchhiker's Guide" being random floating out-of-place objects. It stops feeling creative after a while. Even characters soon stopped commenting on them.


I'd definitely say I prefer Monty Python and Terry Pratchett's take on absurdism over Douglas Adams'; I do have a soft spot for some of the higher-level gags Adams sometimes included, as opposed to the moment-to-moment there-and-gone gags.


can you recommend anything in particular by Robert Sheckley? I'd be interested to check out his style.


There is a distinct drop in quality as time passes: anything written in the 50s is golden (Untouched by Human Hands, Store of Infinity); the 60s is mixed but still has some good stuff (Can You Feel Anything When I Do This?). After that it's pretty hit or miss (Options), including a collaboration with Zelazny that I found quite disappointing.


Polish short story collections sometimes differ in content, but my favorites were:

- Ticket to Tranai

... and another one which included two separate stories with machines in the name, one of which was more or less "Machine Which Didn't Like To Repeat". I also loved the one with a planet where all animals had green fur :D


Any short story collection, really. If you want something in particular, off the top of my head:

Lifeboat mutiny, Seventh victim, A ticket to Tranai, A problem with natives.


“An intelligent carrot? The mind boggles!” — The Thing[1] (from another world), John W. Campbell, Howard Hawks, et al

[1] https://en.m.wikipedia.org/wiki/The_Thing_from_Another_World


The Body Worlds exhibits 15 years ago highlighted the fact that humans are mostly like the meat you see in the meat department. Only medical doctors saw much of this before.

(Both grocery stores and BW plastination colourize tissues to make them stand out. Otherwise it's more gray and beige.)


Body Worlds still gets around -- It will be in Houston in just a couple weeks. There's photos online but it is definitely worth seeing in person.


First time I was aware of this terrific story was through a great radio version, playable here:

https://www.wnyc.org/story/168264-theyre-made-out-of-meat/


Ironically, the whole setup implies a conversation between two intelligent beings that is totally modeled after ours. One would justify this with "it's written this way because of translation into English". Funny anyway, plussing.


I was missing the question mark and immediately thought of Soylent Green [1]

[1] https://en.wikipedia.org/wiki/Soylent_Green



This story never made sense to me. If they know about meat, then they know about intelligent creatures made of it, for some value of intelligence. Otherwise where would the meat they have seen come from?


They're prejudiced against meat. It's the second most important idea in the whole thing. I don't think the question of meat's origin even came up.


There is also a short film based on the short-story: https://www.youtube.com/watch?v=7tScAyNaRdQ


Having teenagers around can change the way we read passages like this:

"You know how when you slap or flap meat it makes a noise? They talk by flapping their meat at each other."

Or maybe that was intended by the writer.



I never liked this story. Meat, according to Wikipedia, is "animal flesh that is eaten as food". So they're familiar with biological creatures very similar to those on earth, and they eat them? But at the same time, they have trouble with the concept that biological creatures can have some degree of intelligence? That's just nonsense.

And why then do they keep using this term "meat", which refers only to some parts of animals?

Maybe that's just me, but I feel like the point of the story is to gross the reader out by repeatedly calling them "meat". ;)


I don't think it's trying to imply they eat meat. It's written for humour, to make human existence feel absurd. They could have used "flesh" to the same effect, although "meat" sounds a bit funnier.

You can imagine the individuals speaking are something exotic to us, like a "hydrogen core cluster" or an "electron plasma", who clearly think meat is one of the least likely materials sentient life might be made from. The humour comes from making the reader think, "huh, when you put it like that, humans are kind of absurd." Of course, this absurdity works largely because of our own human associations towards meat. Replace "meat" with "cells" and the humour fails.

Basically, descriptions like, "they talk by flapping their meat at each other." sound funny by giving the reader a different perspective of "talking".


"Flesh" wouldn't work either, though. Just having a word for that implies familiarity with creatures similar to animals on earth, and it's not believable that a super advanced space-faring species would have missed the fact that animals can have (possibly extremely primitive, to their eyes) intelligence.

> Of course, this absurdity works largely because of our own human associations towards meat. Replace "meat" with "cells" and the humour fails.

Yes, that's actually a great way of putting what rubs me the wrong way about it. If the story expressed a great idea, it should work with "cells" or "tissue" just as well.


You're overthinking this. It's not a scientific analysis of how real aliens would perceive humans. It's just funny.

Sure, there is a core idea in there about how aliens might have prejudices and skepticism against a vastly different form of life (just like humans might have similar prejudices), but the main purpose of the skit is to be humorous.


Yes, clearly that is the main purpose. But does that mean any discussion of those alternate interpretations needs to be downmodded to the bottom of the page? I thought this was an interesting perspective.


The point is the arrogance of the alien race towards something that in their limited view doesn't seem worthy. By this they disregard all the beauty the human race often exhibits, something which is obvious to us, the human readers.

The aliens seem to be light-years ahead of us in technology, yet these beings of supposedly supreme intelligence, still make the same mistake of not being able to identify and counter their own biases. Although because they're able to travel quicker in space they probably do feel intellectually superior to us.

The point that the writer wants to make is that this bias is not limited to the visiting aliens. Humans do this all the time (with animals, foreigners, poor people, children) but we're rarely challenged in this assumption.


A lot of people rationalize eating meat by discrediting the intelligence and “soul” of the animals they eat. As a friendly vegetarian myself (I don’t ever try to convince anyone or inconvenience anyone - just avoid products myself), I sometimes get “attacked” along the lines of “oh, so you're vegetarian? But you know animals have no awareness/intelligence/sentience/soul/etc, right?”

Also, some languages use the same word for “meat” and “flesh”, maybe that’s the same in the alien language? After all, they don’t seem to have any flesh anywhere.


This rationalisation seems to go a long way. Like, how science about animal cognition and behaviour for a long time took utmost care not to compare any behaviour or thought processes of animals with those of humans - because after all, we can never know whether or not an animal experiences things the same way as a human does, so we must not jump to conclusions assuming that they do.

Oddly, jumping to the opposite conclusion - that they don't - is perfectly acceptable and is used to justify all kinds of horrible treatment.


So how do you deal with the fact that plants clearly display varying levels of intelligence?


Personally, my response to that is: "We can't live without harming anything else. I draw my line at plants, you draw your line at non-human animals, cannibals draw their line at human animals. We have different philosophies and preferences".

(And a Simpsons quote comes to mind: "Ha! I'm a level 5 vegan. I don't eat anything that casts a shadow")


It's a much simpler, broader, different intelligence and not much embodied in the part destroyed for food. In many cases consumption of the plant (fruit) is the plant's method of reproducing and sustaining life.


Or intelligent slime (aka the Blob)?

https://www.youtube.com/watch?v=7YWbY7kWesI


> But you know animals have no awareness/intelligence/sentience/soul/etc

Those people clearly don't know anything about animals then. Animals are incredibly aware and intelligent, anyone who doesn't recognise that is just, IMHO, lazy. I'm not vegetarian, I think we owe it to the animals we eat to recognise that they're not some inanimate object without awareness.


Paraphrasing Upton Sinclair, "It is very hard for a person to realize something, when continuing to enjoy their favorite food depends on not realizing it".


I mean, referring to humans as "meat" is quite clearly a comedic device.

I'll grant that it's not going to win any awards but it's a work of creative humor, you have to cut it some artistic license here. A story like this is meant to be enjoyed for what it is, not dissected for accuracy of language or scientific principles.

So to indulge in that spirit, there's actually quite a bit going on but the central point is to shift the reader's perspective. It's a turn-around of the popular sci-fi trope about human space explorers coming into contact with various bizarre life forms. ("It's life, Jim, but not as we know it.")


> I'll grant that it's not going to win any awards...

For what it's worth it was one of six nominations for Best Short Story in the 1991 Nebula awards [0] and the Stephen O'Regan film adaptation [1] someone else already posted here won the 2006 Grand Prize at the SIFF Science Fiction Short Film Festival [2].

[0]: https://nebulas.sfwa.org/nominated-work/theyre-made-meat/

[1]: https://www.youtube.com/watch?v=7tScAyNaRdQ

[2]: https://web.archive.org/web/20060521202739/http://www.sfhome...


> A story like this is meant to be enjoyed for what it is, not dissected for accuracy of language or scientific principles.

Why not both? I am certainly enjoying both aspects of this discussion.


Clearly the thrust of the story was that the beings don't interact with flesh and blood creatures and instead treat with more ethereal things. The humor is based on our human notion of some arbitrary superiority over other beings on our planet and our expectation of being treated as equals by other species who may in fact view us as something more like talking slime mold.

Regarding the wikipedia/dictionary definition of meat including the idea of it being food for the ET characters, I believe you may be reading too much into the denotation and not the connotation held by the word (which you acknowledge, but not, I feel, with the weight the author intended).


> Clearly the thrust of the story was that the beings don't interact with flesh and blood creatures and instead treat with more ethereal things.

For sure - it's just that the word "meat" ruins it for me for the reasons I wrote, and "flesh" would only be minimally better. I guess "biological tissue" just wouldn't have the same ring to it.


> For sure - it's just that the word "meat" ruins it for me for the reasons I wrote, and "flesh" would only be minimally better. I guess "biological tissue" just wouldn't have the same ring to it.

If we extend the setting a bit such that multicellular life is vanishingly rare, you could replace "meat" with "scum", and add a few more lines about mobile chunks of scum, chunks reproducing by extruding a smaller mobile chunk, chunks of scum growing by absorbing other chunks and then inefficiently breaking them down to the molecular level and building more scum from scratch instead of just repurposing the absorbed scum, thinking maybe the planet is intelligent with an adjunct distributed network of chunky-scum-based manufacturing, storage, and compute units, etc.


I don't think "biological tissue" works either. The aliens are living things, so isn't their tissue biological by definition (no matter what it is made of)?


I'm not a native speaker so every time I hear of "meat" in this story, I think "flesh", as compared to what the other creatures are made of, circuits/machines/plasma or something like that. But today I learned that "meat" is indeed specifically the parts you eat, while "flesh" is just soft tissue, so "flesh" would maybe be more appropriate here.

> they have trouble with the concept that biological creatures can have some degree of intelligence? That's just nonsense

I'm not sure we (humans) were any better than that until very recently. Animals were believed to not feel pain at all before, and some people still feel fine with boiling lobsters alive. It's not super far off to imagine that other species see humans the same way we've seen them before.


Some other languages do not distinguish much between "meat" and "flesh".

German "Fleisch" means both and in my native Czech "maso" can definitely used be in context of living humans as well ("až do živého masa" = "into the living flesh").

For us outsiders the wordplay probably works better than in the original English.


Consider it translated, with "meat" being the closest word. There's an artistic choice to it as well: "They're made out of organic tissue" or whatever you might find more palatable just doesn't have the same ring to it and really wouldn't convey the proper sense of incredulity.


Haha, I don't like it for the same reason... And why would these presumably advanced aliens stop at "meat"? There's so much more going on inside the meat.


It's all meat. Even your bones have meat in them.


Yeah but if it was an electromechanical construct, we'd be talking about how the cells/nanobots reproduce themselves, fix everything, adapt to the environment, and how the ligaments are stronger than steel while being extremely flexible, etc. And that's just "dumb humans" talking about something new they found/created, not aliens who have space travel capabilities.


All the cake memes are remixes of this story as far as I'm concerned


What if we really are the only ones in the universe at this moment?

Time passes and we manage to achieve interstellar and even intergalactic travel at some point. Then millions of years pass, and all the still-“Human” (but now “N”) civilizations are exploring the universe. If they’ve forgotten or re-learned where they come from, they can “discover” far away life that to them didn’t originate from the same place as they did, as we all did. To them they’d be aliens.

I’m sure there’s someone out there that already has done something along those lines. The Expanse on Prime Video had the Martians vs. Humans and they looked quite different in just a few thousand(?) years. Imagine millions of years of survival in opposite sides of the universe.


Ultimately, this is a lampoon of people who think they are better than some others.

Guess what, bigots! You're all just so much meat, and not fit for galactic society.


This is great.


This is apparently the 16th time this has been submitted to Hacker News. I wonder if that's some kind of record:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


The 500 mile email ... 40 to 45 submissions, and depending on how you count them, possibly more.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


"Graphing Calculator Story" - 29. https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Though a few link to a recorded talk or another copy, 23 go to http://www.pacifict.com/Story/ .


Wouldn't surprise me, it's almost a perfect HN trap


The proper term is an 'HN meme'. I'm using "meme" in the original meaning: a viral idea.


I'm pretty sure the modern meaning of "meme" is fundamentally the same as the original meaning. The fact that lately there's a popular aesthetic (so popular as to have "generators" that ignore the possibility of other aesthetics) is separate in a way.

A bit like how the word "album" was, for a time, heavily implicative of a vinyl record, but fundamentally it's any cohesive grouping of works.


I hope the meaning doesn't devolve the way `troll` did.


I think skocznymroczny is punning on the term "honey trap" with honey == HoNey == HN.


No. A honeypot would work though. This is an example of content that is bound to be shared often on HN. It's trying to be thought-provoking, is relatively old and doesn't require any previous reading or familiarity with a specific technology to be understood.


meat not meme.


No way it's a record ... people need to start putting "Classic HN:" at the beginning of posts like this.

I'm reminded of this:

https://xkcd.com/1053/

Personally, when I'm reading something, or after I've read it, I think "Did I learn something? Was I entertained? Was it of value?"

This one was of value the first few times I read it, and I don't begrudge that for people who are meeting it for the first (or tenth) time.


Someone needs to make a "true HN" site showing only links that have been submitted more times than the mode of submission frequencies.


It would get gamed, which is why we can't have nice things.


Unless it were curated, which is why people are nice.


Which is why meat is better than machine.


This comment is why I read HN.


> Unless it were curated, which is why people are nice.

Curation could be pretty simple: mere submission isn't enough, the submission has to have made it to the front page.

Given the anti-abuse safeguards already in place here (since getting a submission to the front page is an attractive goal already), I wouldn't expect that any attempts to get a link included in that collection would succeed.


How about making a tag, section or filter that's a "best of"? Maybe even an option to mix old posts into your feed? Say, at the bottom of the first page there's a "best of" submission.


First time I’ve seen it though, and HN has basically been my home page for 6 years.


The cool thing about this site is that if you don't like it you don't have to upvote.


I didn't read that comment as a complaint. It seems like it's just an observation.


I find the whole premise odd. Meat is fundamentally associated with living creatures. How could you have a concept of meat and simultaneously find it weird that creatures would be composed of it?


They're not surprised that there are meat creatures, they are surprised that the creatures are conscious.


Which meat doesn't come from a conscious creature?


The meat on all the other planets in the universe, possibly?


In what way is their meat analogous with our meat, if theirs isn't conscious?


Perhaps they've been growing their meat without consciousness in petri-dishes for many thousands of generations and have forgotten the organic origins of meat. Or perhaps most meat in the universe just actually consists of very simple organisms not thought to be conscious (the Great Filter [0] theory considers the possibility that simple life is common but complex life is rare).

[0] https://en.wikipedia.org/wiki/Great_Filter


I love the former explanation, thank you for that.


Jellyfish is a good candidate.


It is a translation into English from Alien. Really they meant not meat as such, but some broader category of substances. It was translated as meat because it is the closest match to a category used by aliens.


What could "meat" mean besides "the edible parts of conscious beings"?


The edible part of biological entities regardless of consciousness? Not all animals are believed to be conscious - even on this planet.


OK, but then don't the aliens still fall into that category? If they are alive, aren't they also biological entities by definition?


I think this attitude is what the author is trying to parody. It is OUR assumption that intelligence must emanate from something biological, whereas these entities are having sentimental chats with a hydrogen core cluster.


I am not saying stellar formations couldn't be intelligent. What I am saying is that if they are, then they would fall under the umbrella of life and therefore also biological matter. Biological matter is by definition the matter of things which are/were alive.

Obviously their conception of "biological matter" would then be much different and more general from our current conception of "biological matter", but what exactly in that difference makes ours inherently funnier than theirs?


Oxford Languages defines intelligence as: 'the ability to acquire and apply knowledge and skills.' Wikipedia says: 'Biological material may refer to: Organic matter, matter that has come from a once-living organism, or is composed of organic compounds.' Of course, both 'intelligence' and 'life' are words that are difficult to pin down. But most people imagine their 'intelligence' as being a separate entity from the body itself. (I'm not one of those, though I often wish I was.) It is debated whether a virus is 'alive', but it certainly seems to exhibit something like intelligence.

Most of the debate in the comments refers to AI, which may someday be something like intelligent, but will continue to not be biological.

I find the story amusing because it illuminates the limits of our understanding when we're fettered by presuppositions, and reminds us that, at the end of the day, all of our 'objective science' is colored by the inevitable prejudices that come from our biological perspective. Just because we lack the imagination to visualize intelligence without life doesn't mean that it's impossible. The alien accepting the report is baffled and confused because IT can't believe that intelligence DOES exist in biological matter.


Perhaps the aliens are not edible.


A mix of carbohydrates, proteins and fats?


> Man considered with himself, for in a way, Man, mentally, was one. He consisted of a trillion, trillion, trillion ageless bodies, each in its place, each resting quiet and incorruptible, each cared for by perfect automatons, equally incorruptible, while the minds of all the bodies freely melted one into the other, indistinguishable.


They don't just find it weird, they find it deeply offensive; but not so offensive as to want to exterminate us. And they spent at least a century studying us, just to be sure.

There is an undercurrent here that drives the discussion. They know meat well enough not to like it, without knowing of any other thinking meat. So, they know of dumb animals, and animals eating other animals, and consider them disgusting to have around, for reasons obvious to one another, that we might not even be equipped to understand.

But we have exactly their reaction to outgroups of other people. So this is a lampoon of bigots who imagine themselves a cut above some other group. But they're just meat, too, and no more fit for galactic society.


> Meat is fundamentally associated with living creatures.

And that is the fallacy.


I have also never understood this. Can someone explain?


I've always read it as a witty warning about assumptions and extrapolation, particularly in regard to what sentient life must consist of.

I first heard the radio play (linked in another comment) and it took me a minute or so to realise they weren't humans and they were talking about us.



