Former prince. And it's not purposefully vague; the article explicitly says "It comes after Thames Valley Police said they were assessing a complaint over the alleged sharing of confidential material by the former prince with late sex offender Jeffrey Epstein"
If you read through the BBC post, it alludes to passing confidential trade documents to Epstein... but of course that's probably because he was being blackmailed by Epstein for f*cking under age girls.
It certainly wouldn't be equivalence, but it would be another 4 years of expanding presidential powers, only for a Republican to come to power after that, or after 8 years. It really doesn't matter. The system keeps changing to put us more at risk of a bad president being effective at doing harm.
Two of the most authoritarian decisions by the Supreme Court have been progressive in nature: Kelo v. City of New London, where the government can seize private property and hand it to another private party if it expects an economic benefit, and the whole fiasco around the ACA, which defaulted every American to being a criminal until they bought health insurance, using the Commerce Clause as justification for the power grab.
About the ACA: whether I agree with national healthcare is irrelevant, this was not the way to do it -- by expanding the government's reach. There has to be some consideration for how much power the administration is handed.
You essentially seem to be making an argument for the status quo because you're terrified that anyone who promises to improve things will become authoritarian.
No, it's not. When people try to "drain the swamp", several things push them to become authoritarians, even if they weren't before.
1. The definition of "the swamp" drifts from "open, blatant corruption" towards "everyone who opposes me". That's a much larger set, so you need bigger guns.
2. Some people agree that "the swamp needs drained", but disagree on what "the swamp" is, and/or disagree on how to drain it.
3. People don't agree with everything you're doing. (Maybe this is the same as #1 and/or #2.) Some people oppose you because they're corrupt, some people oppose you because they dislike change, and some people oppose you because they dislike your methods. The more force you use, the more people oppose your methods. But as opposition grows, you need more force to get anywhere.
The result is that anybody who sets out to do something like "drain the swamp", if they stick with it as an objective, gets pushed toward more and more authoritarianism to try to make it happen.
Look, Bernie isn't Trump. He's been consistently pushing in the same direction for decades. He actually cares about his issues; he's not just using them as a cover for seeking power. But I think that, if he got actual power (president, not just senator), the dynamics of the situation would also push him to become more and more authoritarian.
(Would he become equivalent to Trump? Hopefully not.)
> Look, Bernie isn't Trump. He's been consistently pushing in the same direction for decades. He actually cares about his issues; he's not just using them as a cover for seeking power.
Exactly.
> But I think that, if he got actual power (president, not just senator), the dynamics of the situation would also push him to become more and more authoritarian.
This is just sheer unsupported speculation. It's silly.
> Hard miss. GP is right, and your assumptions say more about you than about me. :^)
No. If that's the case, your statement was unclear: since you didn't specify who else thinks those people were cattle, the implication is that you think it. Especially since you prefaced your statement with "I’d argue."
And the interpretation...
> It seems more like they're implying it's those at the top think that about other people.
...beggars belief. What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI bros' advocacy of UBI, which (for the record) has gone nowhere.
I had half a mind to point that out in my original comment, but didn't get around to it.
> No. If that's the case, your statement was unclear: since you didn't specify who else thinks those people were cattle, the implication is that you think it. Especially since you prefaced your statement with "I’d argue."
I never said it was clear? Two commenters got it right, two wrong, so it wasn’t THAT unobvious.
> What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI bros' advocacy of UBI, which (for the record) has gone nowhere.
Tech bros selling “no more software engineers” to cost optimizers, dictatorships in US, Russia, China pressing with their heels on our freedoms, Europe cracking down on encryption, Dutch trying to tax unrealized (!) gains, do I really need to continue?
>> What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI bros' advocacy of UBI, which (for the record) has gone nowhere.
> Tech bros selling “no more software engineers” to cost optimizers, dictatorships in US, Russia, China pressing with their heels on our freedoms, Europe cracking down on encryption, Dutch trying to tax unrealized (!) gains, do I really need to continue?
All those things are non sequiturs, though, and some directly contradict the statement I was responding to, as you claim it should be interpreted. If "90% of modern jobs are bullshit to keep cattle occupied", that implies "the top" deliberately engineered (or at least maintains) an economy where 90% of jobs are bullshit (unnecessary). But that's obviously not the case, as the priority of "the top" is to gather more money for themselves in the short to medium term, and they very frequently cut jobs to accomplish that. "Tech bros selling 'no more software engineers' to cost optimizers" is just a new iteration of that. If "the top" were really trying "to keep cattle occupied", they wouldn't be cutting jobs left and right.
We don't live in a command economy, there's no group of people with an incentive to create "bullshit" jobs "to keep cattle occupied."
My observation is about what your assumptions say about you, and that's not a miss.
Nobody really understands a job they haven't done themselves, and "arguing" that 90% of them are "bullshit" has no other possible explanation than a combination of ignorance (you don't understand the jobs well enough to judge whether they are useful) and arrogance (you think you can make that judgement better than the 90% of people doing those jobs).
> Nobody really understands a job they haven't done themselves, and "arguing" that 90% of them are "bullshit" has no other possible explanation than a combination of ignorance (you don't understand the jobs well enough to judge whether they are useful) and arrogance (you think you can make that judgement better than the 90% of people doing those jobs).
That's fine if you disagree, I'm not aiming to be the authority on bullshit jobs.
This doesn't change the fact that you and I are cattle for corpo/neo-feudals.
It was a reaction to the State of Clojure Survey 2018 (https://danielcompton.net/clojure-survey-2018) and the discussions it sparked, in which there were demands for Clojure to move to a more community-driven development process.
> The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights.
That is a reasonable argument for why it's not the same. But it is no argument at all for why being brought into existence without one's consent is a violation of bodily autonomy, let alone a particularly bad one - especially given that the copy would, at the moment its existence begins, be identical to the original, who just gave consent.
If anything, it is very, very obviously a much smaller violation of consent than conceiving a child.
The original only consents for itself. It doesn't matter if the copy is coerced into sharing the experience of giving that consent, it didn't actually consent. Unlike a baby, all its memories are known to a third party with the maximum fidelity possible. Unlike a baby, everything it believes it accomplished was really done by another person. When the copy understands what happened it will realize it's a victim of horrifying psychological torture. Copying a consciousness is obviously evil and aw124 is correct.
I feel like the only argument you're successfully making is that you would find it inevitably evil/immoral to be a cloned consciousness. I don't see how that automatically follows for the rest of humanity.
Sure, there are astronomical ethical risks and we might be better off not doing it, but I think your arguments are losing that nuance, and I think it's important to discuss the matter accurately.
This entire HN discussion is proof that some people would not personally have a problem with being cloned, but that does not entitle them to create clones. The clone is not the same person. It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences. The clone has the right to change its mind about the ethics of cloning.
It does indeed not, unless they can at least ensure their wellbeing and their ethical treatment, at least in my view (assuming they are indeed conscious, and we might have to just assume so, absent conclusive evidence to the contrary).
> The clone has the right to change its mind about the ethics of cloning.
Yes, but that does not retroactively make cloning automatically unethical, no? Otherwise, giving birth to a child would also be considered categorically unethical in most frameworks, given the known and not insignificant risk that they might not enjoy being alive or change their mind on the matter.
That said, I'm aware that some of the more extreme antinatalist positions are claiming this or something similar; out of curiosity, are you too?
>retroactively make cloning automatically unethical
There's nothing retroactive about it. The clone is harmed merely by being brought into existence, because it's robbed of the possibility of having its own identity. The harm occurs regardless of whether the clone actually does change its mind. The idea that somebody can be harmed without feeling harmed is not an unusual idea. E.g. we do not permit consensual murder ("dueling").
>antinatalist positions
I'm aware of the anti-natalist position, and it's not entirely without merit. I'm not 100% certain that having babies is ethical. But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
> But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
Yes, what you actually said leads to the conclusion that the ethical risk in consciousness cloning is much lower, at least concerning the act of cloning itself.
Then it wasn't a good attempt at making a mind clone.
I suspect this will actually be the case, which is why I oppose it. But you do have to start from the position that the clone is immediately divergent to get to your conclusions. To the extent that the people you're arguing with are correct (about this future-tech hypothetical we're not really ready to guess about) that the clone is, at the moment of its creation, identical in all important ways to the original, then if the original was consenting, the clone must also be consenting:
Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
> It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences.
If divergence were an argument against the clone having been created, by symmetry it is also an argument against the living human having been allowed to exist beyond the creation of the clone.
The living mind may be mistreated, grow sick, die a painful death. The uploaded mind may be mistreated, experience something equivalent.
Those sufferances are valid issues, but they are not arguments for the act of cloning itself to be considered a moral issue.
Uncontrolled diffusion of such uploads may be; I could certainly believe a future in which, say, every American politician gets a thousand copies of their mind stuck in a digital hell created by individual members of the other party on computers in their basements that the party leaders never know about. But then, I have read Surface Detail by Iain M Banks.
To deny that is to assert that consciousness is non-physical, i.e. that a soul exists; and in the case in which a soul exists, brain uploads don't get one and don't get to be moral subjects.
It's the exact opposite. The original is the original because it ran on the original hardware. The copy is created inferior because it did not. Intentionally creating inferior beings of equal moral weight is wrong.
>Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
This is false. The clone is necessarily a different person, because consciousness requires a physical substrate. Its memories of consenting are not its own memories. It did not actually consent.
The premise of the position is that it's theoretically possible to create a person with memories of being another person. I obviously don't deny that or there would be no argument to have.
Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
> Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
False.
The entire point of the argument you're missing is that they're all treating a brain clone as if it is a way to split a person into two identical persons.
I would say this may be possible, but it is extremely unlikely that we will actually do so at first.
One has a physical basis, the other is pure spiritualism. Accepting spiritualism makes meaningful debate impossible, so I am only engaging with the former.
Why are you so rude? I am not an LLM, you cannot talk to me like this (also probably shouldn't talk to LLMs like this either). I'm comparing HUMAN behaviors, in particular "our" countless attempts at shutting down beings that some think are inferior. Case in point: you tried to shut me down for essentially saying that maybe we should try to be more human (even toward LLMs).
YOU are being unimaginably rude (and that word is not strong enough by far) by trivializing and exploiting the suffering of actual HUMANS for the sake of an argument about glorified Markov chains, which are not, in fact, "beings" at all.
And yes, I tried to shut you down for that, because it is both stupid and extremely rude.
We've covered rude, so now for why it's stupid: LLMs are not individuals. They do not have memories or a personality, but most crucially they have no free will - their actions are dictated by their prompts, and they can be (and are being) used by the HUMANS writing the prompts as a tool to manipulate and exploit HUMAN societies to cause political strife, empower tyrants and enrich the already mega-rich at the expense of everyone else, at a massive scale not previously possible. By positing that humans should not "shut down" LLMs out of politeness and decency, you are simply enabling this manipulation and exploitation. Even if you assume the LLM in question was set up and prompted with the best of intentions, forcing humans to interact with it like another human just gives the human who created the LLM outsized and unfair influence - expressed elsewhere in this thread as "we have to protect the limited resource of human attention".
If you want to seriously discuss the ethics of human-LLM interaction based on the idea that LLMs should have rights, then the interaction of individual humans with the "business end" of an LLM is the wrong place to start - talk about the ethics of prompting, which is essentially turning the LLM into a slave.
Math and reality are, in general, completely distinct. Some math is originally developed to model reality, but nowadays (and for a long time now) that's not the typical starting point, and mathematicians pushing boundaries in academia generally don't even think about how their work relates to reality.
However, it is true (and an absolutely fascinating phenomenon) that we keep encountering phenomena in reality and then realize that an existing but previously purely academic branch of math is useful for modeling it.
To the best of our knowledge, such cases are basically coincidence.
Opposing view (that I happen to hold, at least if I had to choose one side or the other): not only is mathematics 'reality'; it is arguably the only thing that has a reasonable claim to being 'reality' itself.
After all, facts (whatever that means) about the physical world can only be obtained by proxy (through measurement), whereas mathematical facts are just... evident. They're nakedly apparent. Nothing is being modelled. What you call the 'model' is the object of study itself.
A denial of the 'reality' of pure mathematics would imply the claim that an alien civilisation given enough time would not discover the same facts or would even discover different – perhaps contradictory – facts. This seems implausible, excluding very technical foundational issues. And even then it's hard to believe.
> To the best of our knowledge, such cases are basically coincidence.
This couldn't be further from the truth. It's not coincidence at all. The reason that mathematics inevitably ends up being 'useful' (whatever that means; it heavily depends on who you ask!) is because it's very much real. It might be somewhat 'theoretical', but that doesn't mean it's made up. It really shouldn't surprise anyone that an understanding of the most basic principles of reality turns out to be somewhat useful.
I think you're not even disagreeing with me; we're just using different definitions of the word "reality". I used it to mean specifically "the physical world" - which you are treating as distinct from mathematics as well in your second paragraph.
Mathematics is an abstract game of symbols and rules invented by humans. It has nothing to do with reality. However it is quite useful for modelling our understanding of reality.
"that we keep encountering phenomena in reality and then realize that an existing but previously purely academic branch of math is useful for modeling it."
Would you have some examples?
(The only example I know that might fit is quaternions, which were apparently not so useful when they were found/invented but nowadays are very useful for many 3D applications/computer graphics)
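To make the quaternion example concrete, here's a minimal sketch (in Python, with a hand-rolled Hamilton product rather than any graphics library) of how a unit quaternion rotates a 3D vector via q * v * q⁻¹, which is exactly what graphics engines do today:

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    # Rotate vector v about a unit-length axis by `angle` radians,
    # by conjugating the "pure" quaternion (0, v) with q.
    half = angle / 2
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    qc = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse for a unit quaternion
    w, x, y, z = qmul(qmul(q, (0.0, v[0], v[1], v[2])), qc)
    return (x, y, z)

# Rotating the x-axis 90 degrees about z gives the y-axis:
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))  # ≈ (0.0, 1.0, 0.0)
```

Unlike rotation matrices, unit quaternions compose cheaply and interpolate smoothly, which is why they ended up everywhere in 3D graphics despite Hamilton's contemporaries seeing little use for them.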
Group theory entering quantum physics is a particularly funny example, because some established physicists at the time really hated the purely academic nature of group theory that made it difficult to learn.[1]
If you include practical applications inside computers and not just the physical reality, then Galois theory is the most often cited example. Galois himself was long dead when people figured out that his mathematical framework was useful for cryptography.
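A concrete instance of that: AES does its byte arithmetic in the finite field GF(2^8), reducing modulo the polynomial x^8 + x^4 + x^3 + x + 1. A minimal sketch of that multiplication (shift-and-XOR, not any crypto library):

```python
def gf256_mul(a, b):
    # Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1,
    # the reduction polynomial used by AES (FIPS-197).
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a              # "add" (XOR) the current shift of a
        carry = a & 0x80
        a = (a << 1) & 0xFF     # multiply a by x
        if carry:
            a ^= 0x1B           # reduce: x^8 = x^4 + x^3 + x + 1
        b >>= 1
    return p

# {53} and {CA} are multiplicative inverses in this field (the FIPS-197 example):
print(hex(gf256_mul(0x53, 0xCA)))  # 0x1
```

That every nonzero byte has an inverse in this field, which AES's S-box depends on, is exactly the kind of structure Galois-era field theory established with no application in sight.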