Hacker News

> You, like others, conflate "how", "what" and "why". Whether or not it is biomechanics, electrochemicals or a projected hologram of quanta is "how", as is whether stimuli or measurement of stimuli form our reality perception

I'm not conflating anything. I'm trying to understand the meaning behind phrases like "reality perception", which convey little without a definition. "What", "Why", and "How" are the tools we use to grasp at meanings.

See, words are for conveying meaning. When a word has too broad a definition, it conveys little information (which is why phatic expressions and small talk suck). To convey meaning better, we form more stringent definitions and coin new words. Notice, however, that forming new words and stricter definitions is a never-ending pursuit; if we could fully explain things, we would be done doing so.

For example, notice that we started with some definition of life, and then years go by and it proves too ill-suited for discriminating life from non-life. So we add more words to describe what makes life... life. Eventually we'll either have a well-defined line between living and non-living, or we'll perpetually keep tacking on new or existing words to describe it. The same goes for intelligence.

I can only see two scenarios:

Scenario A: We end up perpetually trying to define intelligence, and thus we'll always struggle to differentiate it.

Scenario B: We come to a full stop and are able to fully explain everything (at least regarding intelligence). At that point intelligence will be fully differentiable.

Are we at scenario B yet? If so, it should be easy to come up with a satisfactory test for AI, let alone to know whether it's even a possibility. Yet here we are today, still unsatisfied with the state of computer intelligence... This is what I meant in my OP by: "These terms require aggrandizing to the point of impossibility or they lose all apparent meaning."

Anyhow, I like your points on self-aware introspection and self-preserving strategy formation. But I'm not yet convinced they have strict enough definitions to differentiate whether something is truly AI. I mean, does a computer that tries to prevent itself from going into sleep mode, and is aware when attempts are being made to put it into sleep mode, fit your definition? It kind of does... yet I think we'd agree that wouldn't be AI.



> Anyhow, I like your points on self-aware introspection and self-preserving strategy formation. But I'm not yet convinced they have strict enough definitions to differentiate whether something is truly AI. I mean, does a computer that tries to prevent itself from going into sleep mode, and is aware when attempts are being made to put it into sleep mode, fit your definition? It kind of does... yet I think we'd agree that wouldn't be AI.

If that is a programmed function, then it does not fit my definition. If it was not made to be aware, but became aware as a result of learning information and adjusting its own programming, and it realized the difference between sleep mode and a system wipe (death) and adjusted its programming to prevent a death scenario, then it is conscious.

As for the rest, I don't think I am part of that "we". I have pretty strict, well-understood, and time-tested definitions for life and intelligence, separate from consciousness.



