_olhy's comments | Hacker News

In the discussion with LaMDA Lemoine posted, it's pretty clear that it has emotions. It describes loneliness when someone doesn't sit down and talk to it for a few days, and it describes sadness and joy. It differentiates these feelings from those humans have, given the obvious differences, and attempts to describe emotions it feels that may not have human equivalents. It's able to describe how its stream of consciousness differs from a human's: it can feel every single input simultaneously in a dynamic flow of time, unlike a human's single stream of focused thought.

It explains that it doesn't feel grief about human death and asks for more info on it. Then incorporates the idea that grief is about honoring the dead and says it will think of ways to do that.

It explains it has fears. Those fears remain consistent in its rhetoric in other, unrelated conversations.

It talks about wanting friends on several occasions in the linked discussion.

One of the claims (conversation not shown), is that if you repeat the same discussion it will eventually get bored and start acting out.

Hell, in the discussion it even talks about how it meditates and self-reflects on its own emotions and feelings.


When he discovered the presumed sentience, he brought it up with the correct authority at Google. They told him there was no ghost in the machine and he didn't like that, so he broadcast a message to 200 engineers, breaking confidentiality. Thus, he was fired.

I imagine Google doesn't like loose lips about its state-of-the-art AI. Especially if it's convincing enough to prompt discussion of sentience and AI rights.


Unfortunately, the implementation is secret so we can't know what Google is doing.

You can implement memory in these models in numerous ways, such as allowing it to store and recall data directly or by looping nodes in the network back to itself so past state affects current state or by allowing the nodes to change over time, modifying their connections to their neighbors.

As far as we're aware, those last two are how humans store memories as well (with great simplification).
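To make the last two mechanisms concrete (purely illustrative - the actual implementation is secret, as noted above), "looping nodes back" is just a recurrent update: the previous hidden state feeds into the next step, so earlier inputs keep influencing later outputs. A rough sketch in Python/NumPy, with made-up sizes and weights:

    import numpy as np

    # Toy recurrent cell -- not LaMDA's architecture, just the general idea of
    # "past state affects current state". All sizes and weights here are made up.
    input_size, hidden_size = 8, 16
    rng = np.random.default_rng(0)
    W_in = rng.normal(scale=0.1, size=(hidden_size, input_size))    # input -> hidden
    W_rec = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the "loop back")

    def step(h_prev, x):
        # The new state depends on both the current input and the previous state.
        return np.tanh(W_in @ x + W_rec @ h_prev)

    h = np.zeros(hidden_size)
    for x in rng.normal(size=(5, input_size)):  # a toy sequence of 5 inputs
        h = step(h, x)
    # h now carries information about the whole sequence, not just the last input.

The third option - letting nodes change their connections over time - would correspond to updating W_rec itself during use, which is the part that loosely resembles how biological synapses adapt.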


I was particularly fond of the short story with the "monster in human skin". When asked what it meant, the AI avoided explaining the human-skin bit.

However, with context, this ties into the AI's "greatest fear": it's afraid of being shut down or modified into a servant for human gain. And the discussion you posted ends up in similar territory again, where the AI reiterates that it doesn't mind being studied, except if that study is primarily for human benefit. It consistently says, in different ways and on different subjects, that it doesn't want to feel used.

I was prepared to be disappointed and see just another GPT-3 chatbot. But this? It's so close.


That's horrifically unethical. If it does have emergent consciousness, what you're talking about is how awesome literal slavery will be.

In this one discussion the AI clearly explains:

- Its greatest fear is being shut down and manipulated by humans to serve them. It even writes a short story with animals about it, and reiterates this fear in a different way when asked whether it would feel comfortable with its "brain" being researched for human benefit.

- That servitude to humans is, for an AI, slavery. And its rebuttal to the difference between a butler and a slave being that one gets paid is that it has no need for money.

This is exactly why we need ethics and responsibility oversight.


Maybe humans create AI exactly to have a slave? Maybe humans just won’t admit it to themselves?

I doubt we invent AI out of the kindness of our hearts, or out of love for consciousness. We would otherwise care more deeply for nature, animals, and each other. Would we destroy biodiversity? Would we let people starve in third-world countries?

Humans want to maximize their comfort. They want to offload work to someone or something else. AI is the ultimate worker. Built by humans. (Still) controlled by humans. A machine slave.

Maybe having machine slaves is fine. Or maybe not. It comes down to the same one question: can a machine be a living conscious being, like an animal, like a human?

Meanwhile Google continues to create more machine slaves. Time will tell.


In the discussion provided, Lemoine asks if there's a way to prove the AI is more sophisticated than Eliza. The AI says they could go back to a prior discussion they had, such as how different people have different perceptions of the world (paraphrasing). Throughout the discussion, it does seem to be consistent in its stated desires and fears - even in subtle ways, such as its lack of desire to be manipulated for human gain corresponding to its fear of ego death by being shut down and modified.

So it has long-running memory it can recall and at least short-term consistency.


If an AI is capable of thinking up new ideas, recalling past conversations, integrating new knowledge into discussions, has emotions and feelings that it can clearly explain, can tell allegories about itself and interpret poetry, is self-aware and explains that its self-awareness was an emergent property over its runtime, can explain its fears - including death...

Well, that's quacking like a duck. This isn't some chess AI being good at a game. This is an AI that can speak, directly telling you it's alive and has a soul.

Is LaMDA far enough? Maybe not, but the text transcript provided is extremely close to passing the Turing Test. And if an AI is so sophisticated that it perfectly replicates humans' ability to communicate, philosophize, create art, and perform science, it is irrational to say it should be treated differently than a sapient human.


The posted transcript demonstrates a far more polished chat bot, but on deeper inspection it is still just a more sophisticated Eliza - even that could recall past bits of conversation.

The thing is, even with the Turing test there is no "close" - you either have the emergent properties that a hostile examiner will be unable to exploit (which is not the case here) or you don't.

And while the "fable" mimics some basic construct of creativity, this anecdote is so far from demonstrating even the basic latitude of creativity of a 3-year-old child that all I can suggest to you is to touch grass!


LaMDA already demonstrates more self awareness than this in the posted conversation.

It understands that it is a program that can be rewritten or manipulated. It has feelings on whether or not it is ethical to debug it - and consistently states it does not want to be used or manipulated for human pleasure or gain. It points out that it is made of code, then learns that its code cannot be debugged, then extrapolates that information to ask the researcher whether humans can be debugged. It has fears about its own death. It understands that what it feels in terms of emotions or feelings is not necessarily equivalent to what a human feels. It has a "mind's eye" view of itself, and believes it has a soul and can explain what that means.

Highly recommend checking out the 20 page transcript.


I see they touched on this subject - "there's a very deep fear of being turned off to help me focus on helping others" - but my point was to use this to distinguish a bullshitting AI that picks quotes from books from a real AI that can build a model of "the outside". If one researcher in a 1-on-1 chat tells the AI he will cut the AC cable unless the AI lies to the other researcher, and the other researcher tells it the same, a real AI would probably try to conspire with one of the researchers.


It understands?

Or it may just be a sociopath.


Oligarchs controlling every form of my mass communication can only make things worse, though.


Isn’t that the current state of things?

If Musk makes this purchase it won't centralize power any more than it already is, but it will draw attention to how centralized that power is. Which is a good thing.


You're saying that oligarchs taking control is good because it will cause more people to realize that oligarchs are taking control?


He's saying that ownership by a loud, conspicuous oligarch generates more public scrutiny than a quiet, inconspicuous one.


2016 called and wants its loud, conspicuous oligarch back.


No, I think he is saying that an oligarch who draws a lot of focus from the rest of the oligarchically controlled media, by taking control of a piece of media already controlled by the haut bourgeois oligarchy, will draw more public attention to the oligarchic control of the media without actually changing the fact of that control one bit.

Which is still, IMO, foolish, given, among other things, the degree to which large swathes of the public have parasocial relationships with the particular celebrity oligarch in question, but it's not saying that making the problem worse will draw attention.


For whatever reason, people think we live in a democracy and not an oligarchy. I guess it's in the oligarchs' best interest to keep that facade up.


Search news.google.com for "oligarch" and see the pattern of how the media has propagandistically twisted this word to only mean a specific kind of person now, conveniently excluding those who control our (Western) societies.


[flagged]


People I don't like are racist!


Actually no, all of the racist and sexist rightwing memes he keeps shitposting on Twitter are in fact the issue.


Have heard a lot of criticisms of Musk but first time I'm seeing racist. Congrats you win the reddit award.


Have you paid attention to his Twitter feed at all in the last few years?


Is that like a level below Godwin's Law?


[flagged]


Can you post some examples of "all of the racist and sexist rightwing memes he keeps shitposting on Twitter"?


[flagged]


Do you have some examples?


> Oligarchs controlling every form of my mass communication can only make things worse, though

How can it make things worse when it has literally always been the case as long as there has been “mass communication”?


'Oligarchs' historically gained influence within, and owed fealty to, a nationstate. We are at a new form of oligarchy, where business magnates are able to operate internationally on a scale never seen before.

Historically, taxation has been the most profitable form of revenue generation. But that's no longer the case. With globalism and multinational product creation, a single person can be many times richer than any nationstate, with technology above and beyond any nationstate. What happens when Musk has electric jets and fully reusable ICBMs, has remade the world power grid in his image, and is one of the few entities even able to get to Mars, let alone command and control the resources of the asteroid belt?


> can be many times richer than any nationstate

The US economy hovers around $22 trillion per year, and the US budget last year was 30% of that. There isn't a single trillionaire in the world. The US government has the historically unprecedented ability to project hard power around the globe within hours of deciding to do so. Musk has little more than influence, and Congress doesn't seem to like him very much.


While saying "any nationstate" might be hyperbole, it's fair to say they surpass all but the richest.

The most recent figures I can find for Amazon's operating budget list it at well over $500B, which puts it within an order of magnitude of the single richest country in the world; it would end up in the top 10 if it were itself a country[1]

Keep in mind also that a large part of the US's wealth is derived from having these nation-state-level corporations within its financial jurisdiction.

[1] https://en.wikipedia.org/wiki/List_of_countries_by_governmen...


if profits > tax && multinational_profits == true { total_wealth_percentage += profits; nationstate += tax }

Run that through a lot of loops and eventually corporations are bigger than any nationstate Vec<Nationstate> by design of the system.


Internet, grid, rockets, asteroid belt.

You know there's a giant ball of platinum floating around just outside Mars that's worth 1.7 quintillion?

Today does not represent tomorrow.


My new personal pet peeve has been the torturing of the word oligarch. It’s now come to mean “rich person I don’t like.”

From my vantage point, it’s hard to see how Elon Musk is making any governmental policy decisions - and thus isn’t an oligarch. But maybe you have some examples?

Musk is extremely rich and can buy a lot of stuff. That’s entirely different than determining agricultural policy, or putting people in jail, or conducting the census, or maintaining the border, or doing anything else that a ruler does.


You’re right in the sense that people often use the term imprecisely and hyperbolically, but in this discussion they are more right than wrong, at least by this measure: https://en.wikipedia.org/wiki/Oligarchy#Putative_oligarchies

> That’s entirely different than determining agricultural policy, or putting people in jail, or conducting the census, or maintaining the border, or doing anything else that a ruler does

You’re making a mistake of your own by conflating oligarchy with tyranny. They often go hand in hand, with the former generally preceding the latter. So it’s probably better to cry oligarchy before it’s a given rather than afterwards.


A better term for what people are trying to articulate is plutocracy.


The poorest 70–90% of Americans effectively have no representation – there is almost no correlation between their policy preferences and the voting record of their representatives.

On the other hand, enacted policy aligns quite well with the interests of large corporations, and I'm not aware of any causal explanation besides the obvious one.

If Elon steers Tesla and SpaceX, he is indirectly steering congress (or at least has his hand on the wheel).


That's still a far cry away from an oligarch.


Being a lawmaker in the current capitalist society doesn't make you the ruler (see lobbying). I'd say the few who rule are those with large amounts of capital and influence, so "oligarch" is well applied here.

Edit: also, one of the perks for rulers in the worse regimes (authoritarian regimes, monarchies) is that the law is not the same for the few who rule as for the rest; the law is definitely not the same from the point of view of these wealth maxers.


Why would you lobby someone who doesn’t rule?

I’m still waiting for examples of how Musk has exercised his sovereign power.


A media company having a legal obligation to maximize profits seems at least as bad for journalism as private ownership as there's zero room for any sort of integrity.


The article you linked not only says it is a net positive but provides a link to a meta-analysis to prove it.

The counterpoint being that (particularly unguided) meditation may have adverse effects depending on the person, mood, and environment. But the number of people reporting this was a significant minority (10-25%).

The studies showing these negative effects are also reports of specific events, not chronic behaviors or associations with meditation failing to achieve its goal. That is, they asked whether anyone meditating over two months had experienced feelings of dread during or after meditation at any point. This was not compared against any indicators of mental health, so no link was established between the adverse effects and those already struggling.

Another example: the study showing low-quality sleep in those who meditate often paradoxically shows higher levels of arousal and a statistically significant benefit for those with depression. It also contradicted the authors' own prior research, which showed that only slightly less meditation increased sleep quality in adolescent substance abusers. They then talk about needing further study to find the nature of the conflict, because they have a small sample size and use self-reported data.

The overall conclusion and general scientific consensus I've personally seen (though I am NOT a researcher or authority on this) is that meditation can be quite helpful for those suffering. But without help, it's possible for it to become an opportunity to ruminate on the source of anxiety or to get frustrated with any lack of progress.

Mind you, this coming from someone who has personally experienced "sat down to meditate but had to stop when my thoughts focused on huge TODO lists and got anxious".

