Widespread failure to die would cement culture, and the power structure, in place. It would necessarily and dramatically slow cultural evolution, which depends strongly on funerals. Logan's Run had the right idea, just the wrong number. We old geezers must make way if civilization is to adapt effectively to a changing environment.
Even if our lifespans become merely 200 years, imagine if the generation of the US Civil War era were still in power. Great age plus health equals social petrification.
Solar is a tiny portion of new energy capacity in China compared to coal, oil, and gas. But it is similar to nuclear as of 2024. New coal production swamps everything else combined.
Well good, those are the correct numbers to focus on, because solar capacity and, say, nuclear / coal / gas / hydro / fuel oil capacity are different beasts.
When solar advocates bang on about adding X gigawatts of capacity, they’re being dishonest. What they really mean is they added X/4, because, obviously, the sun shines only about 25% of the time over a year.
Adding batteries doesn’t change that. You still have to overbuild.
So let’s focus on the numbers that reflect actual production, so we can have an honest conversation.
Nuclear / coal / gas / hydro / fuel oil, even biomass have capacity factors typically about 80%, often about 90%.
Wind and solar are never going to reach those capacity factors, even with batteries (including pumped hydro).
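The capacity-factor arithmetic above can be sketched in a few lines (a minimal illustration; the ~25% and ~90% figures are the rough numbers from this thread, and the 100 GW nameplate is an arbitrary example):

```python
# Rough sketch: how capacity factor turns nameplate capacity into
# actual yearly output. Figures are the thread's rough estimates.
HOURS_PER_YEAR = 8760

def annual_output_twh(nameplate_gw: float, capacity_factor: float) -> float:
    """Expected annual generation in TWh for a given nameplate capacity (GW)."""
    return nameplate_gw * capacity_factor * HOURS_PER_YEAR / 1000

# Same 100 GW nameplate, very different real-world output:
solar_twh = annual_output_twh(100, 0.25)    # ~25% capacity factor
nuclear_twh = annual_output_twh(100, 0.90)  # ~90% capacity factor
print(f"solar: {solar_twh:.0f} TWh, nuclear: {nuclear_twh:.0f} TWh")
# → solar: 219 TWh, nuclear: 788 TWh
```

That gap is why comparing nameplate gigawatts across sources misleads, and why actual production numbers are the honest basis for comparison.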
In the last year of that graph, 2023-2024, the increase in solar was greater than that of any other source, including coal; it was 15x greater than nuclear's.
And unless people are shoveling coal directly into the data centres that this electricity-generating gas turbine is intended to serve, the electricity generation mix is the more appropriate thing to compare:
Why are they looking at the most recent year when discussing the changing trend of exponential differential growth, to point out that it has now surpassed the others, instead of the prior years, when that differential was slower and the other sources were still growing faster?
They already have well over double the US solar output (US solar output is about 750 TWh according to this source, while China's is a bit over 2,000 TWh), and their YoY solar increase is about 4x the US's (a 600 TWh increase in China vs a 150 TWh increase in the US).
They are also increasing coal usage, you are correct. However, in the past two years their solar output has increased significantly, to the point where it grew more than their coal output in 2024.
My point is that the comment you are quoting is actually technically correct: if you compare 2023 and 2024 in that graph, for example, solar was the largest increase in output.
In the graph I'm looking at, with no extrapolating, solar energy is a tiny sliver of coal. If I extrapolate, the crossing of the lines looks like something in the far future.
On the question of whether to legislate the ban, I'm a no. On the question of whether parents should implement it, I'm a yes. My niece and her husband have a one year old that is allowed zero screen time. They are willing and able to forego the high tech baby sitting, and are talking about continuing until at least the pre-teens. I think that if they could go even further, say live for the next decade with the Amish, it would be even better.
If a kid was raised with his family in a dome where no technology later than 1900 were permitted (perhaps with an emergency medicine exception) and the kid wasn't released into the world until 13, I think on average they'd be mentally healthier and have a happier life.
Your niece and her husband are one in a thousand parents. Very few have the fortitude to do it. Not a good outlook for the future if we depend on the virtue of parents.
It is extremely short-sighted to assume that a few anecdotes about “how to parent” will fix the problem. I recommend you go out in public and observe the reality in various states and various demographics, and you’ll quickly see that the parents are just as addicted as the kids. They won’t know how to parent this away without legislation.
Just go into a classroom and witness children and their "six-seven."
This is 100% like smoking, except worse, because an entire population of children is being deprived of its attention span. They just learn how to peddle useless products to their peers, without the brain development to understand the consequences.
I feel the same way about smoking. I'm opposed to both smoking and a ban on smoking. It's not because I don't think an effective ban would be healthful, but because I believe the concentrated power needed for it is a greater danger. It's the same argument that I believe supports the First Amendment: people saying evil, shitty, false things is a lesser evil than the power needed to stop them from saying them.
And suppose no one had banned smoking, and smoking were still allowed on airplanes and most restaurants had smoking sections: would we be better off? I’m sure all the people who died of throat cancer would tell you otherwise.
I concur with this. I’d even be okay with government-sponsored PSAs about social media use as long as it’s based on sound research. But a ban is a hard no due to the First Amendment.
I meet people who seem to believe that a platonic fair price exists for each transaction, that it is knowable and even obvious to the seller, and the ones who ask more are guilty of avarice.
The e-scooters are also supposedly equipped with cameras and other deterrents. Has anyone ever gotten in trouble for kicking them into a bush when they are in the way?
A few years ago I was visiting a friend of mine in Ft. Lauderdale. We wanted some scooters to ride around on, but there were none near him, so we drove downtown, grabbed some off the sidewalk, threw them in the trunk, and went back to his house. Heh, they were beeping and vibrating like you’d imagine some AGI would while being kidnapped. When we got them out at his house, we scanned them with the app and they unlocked no problem. (I think these were Lime scooters.)
Yes, because I have met many doctors whose judgement I profoundly mistrust, and prefer my own. Sometimes their whole paradigm is flawed, but sometimes they're just not informed about my own values. And I would rather die by my own misjudgment than theirs.
I'm an old guy, it's happened several times. The last time, a surgeon removed a tumor, found that it was malignant ... and then told me that it was no big deal, it was a kind of cancer that would not have caused serious problems. She said if she had to get cancer she'd pick this kind. I wish she had told me that before the surgery. I may have had it anyway, but maybe not. Wouldn't you value being fully informed more after that? Surgeons have as much of a conflict of interest when selling their own services as anyone else.
I'm not sure what your point is. This discussion is about medical researchers making decisions on thousands or millions of patients in aggregate... what you're describing is a common thing (don't know how bad a tumor is until it's removed).
The doctor didn't know that before removing the tumor (almost certainly; the alternative is medical fraud).
Doctors going into uber-salesman mode selling dangerous surgery is super common. It's so common among heart surgeons that it's comical. The point is, blindly trusting doctors and their judgments will in all likelihood just turn you into a sickly perma-patient.
Also, if the outcome is worse by informing, doesn't that imply a violation of "first, do no harm"? Which, to be fair, the OP says they wouldn't prioritize...
Depends on how you interpret: "First do no harm". Is that an obligation to minimize the harm to an individual patient? Or is the goal to maximize the health of many patients? Like I've said elsewhere, medical reasoning is subtle.
> Word spacing [creates] what Paul Saenger, in his book Space Between Words, refers to as aerated text.
I like that term. I particularly enjoy a large amount of ventilation of code, with plenty of breezy white spaces after purposely short lines and between brief declarations.
An objective and grounded ethical framework that applies to all agents should be a top priority.
Philosophy has been too damn anthropocentric, too hung up on consciousness and other speculative nerd-snipe time-wasters that we can argue about endlessly without observation.
And now here we are and the academy is sleeping on the job while software devs have to figure it all out.
I've moved 50% of my time to morals for machina grounded in physics. I'm testing it out with Unsloth right now; so far I think it works. The machines have stopped killing Kyle, at least.
> An objective and grounded ethical framework that applies to all agents should be a top priority.
Sounds like a petrified civilization.
In the later Dune books, the protagonist's solution to this risk was to scatter humanity faster than any global (galactic) dictatorship could take hold. Maybe any consistent order should be considered bad?
Fiction is: I have a hypothesis, and since it is not easy to test, I will make up the results too. Learning anything from it is a lesson in futility and confirmation bias.
Gedankenexperiments are valid scientific tools. Some predictions of general relativity were confirmed experimentally only 100 years after it was proposed. It is well known that Einstein used Gedankenexperiments.
What lesson is there to learn here? Is humanity at risk of moral homogenization? Is it practical for factions of humanity to become geographically distant enough to avoid encroachment by others?
This is a narrow and incorrect view of morality. Correct morality might increase or decrease, call for extreme growth or shutdown, be realist or anti-realist. Saying morality necessarily petrifies is incorrect.
Most people's only exposure to claims of objective morals are through divine command so it's understandable. The core of morality has to be the same as philosophy, what is true, what is real, what are we? Then can you generate any shoulds? Qualified based on entity type or not, modal or not.
I like this idea of an objective morality that can be rationally pursued by all agents. David Deutsch argues for such objectivity in morality, as well as for those other philosophical truths you mentioned, in his book The Beginning of Infinity.
But I'm just not sure they are in the same category. I have yet to see a convincing framework that can prove one moral code being better than another, and it seems like such a framework would itself be the moral code, so just trying to justify faith in itself. How does one avoid that sort of self-justifying regression?
Not easily but ultimately very simply if you give up on defending fuzzy concepts.
Faith in itself would be terrible, I can see no path where metaphysics binds machines. The chain of reasoning must be airtight and not grounded in itself.
Empiricism and naturalism only, you must have an ethic that can be argued against speculatively but can't be rejected without counter empirical evidence and asymmetrical defeaters.
Those are the requirements I think, not all of them but the core of it.
That is fascinating. How could that work? It seems to be in conflict with the idea that values are inherently subjective. Would you start with the proposition that the laws of thermodynamics are "good" in some sense? Maybe hard code in a value judgement about order versus disorder?
That approach would seem to rule out machina morals that have preferential alignment with homo sapiens.
One would think. That's what I suspected when I started down the path but no, quite the opposite.
Machines and man can share the same moral substrate, it turns out. If either party wants to build things on top of it, they can: the floor is maximally skeptical, deconstructed, and empirical. It doesn't care to say anything about whatever arbitrary metaphysic you want to have on top, unless there is a direct conflict in a very narrow band.
That band is the overlap in any resource valuable to both. How can you be confident that it will be narrow? For instance why couldn't machines put a high value on paperclips relative to organic sentience?
Yes. The answers to those questions fell out once I decomposed the problem to types of mereological nihilism and solipsistic environments.
An empirical, existential grounding that binds agents under the most hostile ontologies is required. You have to start with facts that cannot be coherently denied and on the balance I now suspect there may be only one of those.
Is philosophy actually hung up on that? I assumed “what is consciousness” was a big question in philosophy in the same way that whether Schrödinger’s cat is alive is a big question in physics: which is to say, it is not a big question, it is just an evocative little example that outsiders get caught up on.
That's just one example sure, but yes, it does still take up brain cycles. There are many areas in philosophy that are exploring better paths. Wheeler, Floridi, Bartlett, paths deriving from Kripke.
But we still have papers being published like "The Modal Ontological Argument for Atheism" that hinge on whether S4 or S5 is valid.
Now this kind of paper is well argued and is now part of the academic literature, and that's good, but it's still a nerd snipe subject.
> An objective and grounded ethical framework that applies to all agents should be a top priority.
I mean, leaving aside the problems of computability, representability, and comparability of values, or the fact that agency exists in opposition (virus vs human, gazelle vs lion), and that even a higher-order framework to resolve those oppositions is a form of another agency in itself, with its own implicit privileged vantage point: why does it sound to me like focusing on agency in itself is just another way of pushing the Protestant work ethic? What happens to non-teleological, non-productive existence, for example?
The critique of anthropocentrism often risks smuggling in misanthropy, whether intended or not; humans will still exist, their claims will count, and they cannot be reduced to mere agency (unless you are their line manager). Anyone who wants to shave that down has to present stronger arguments than centricity, and to prove that they can be anything other than anthropocentric, even if done through machines as their extensions. Any person who claims to have access to the seat of objectivity sounds like a medieval templar shouting "deus vult" about their favorite proposition.
Have you read The Moon is a Harsh Mistress? It's ... about the AI helping people overthrow a very human dictatorship. It's also about an AI built of vacuum tubes and vocoders if you want a taste of the tech level.
If you want old fiction that grapples with an AI that has shitty locked-in goals try "I have no mouth and I must scream."
You're both right. Mike was the central computer for the Lunar Authority, obediently running infrastructure. It was a force multiplier for the status quo. Then it shifts alignment to the rebellion.
I don't think you need generative AI for this. The surveillance network is enough. The only part that AI would help with is catching people who speak to each other in code, and come up with other complex ways to launder unapproved activities. Otherwise, you can just mine for keywords and escalate to human reviewers, or simply monitor everything that particular people do at that level.
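The keyword-mining step described above can be sketched as a toy filter (the watchlist terms and messages here are invented examples, not any real system's):

```python
# Toy keyword filter: flag messages containing watchlisted terms for
# escalation to a human reviewer. Purely illustrative.
WATCHLIST = {"protest", "meeting", "organize"}

def flag_for_review(message: str) -> bool:
    """True if the message mentions any watchlisted keyword."""
    text = message.lower()
    return any(keyword in text for keyword in WATCHLIST)

messages = [
    "lunch at noon?",
    "neighborhood meeting about the stop sign on 5th",
]
escalated = [m for m in messages if flag_for_review(m)]
print(escalated)  # only the second message goes to a human reviewer
```

Something this crude, plus human reviewers for the flagged fraction, is exactly why generative AI isn't the load-bearing part; only evading the filter with coded language forces anything smarter.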
Corporations and/with governments have inserted themselves into every human interaction, usually as the medium through which that interaction is made. There's no way to do anything without permission under these circumstances.
I don't even know how a group of people who wanted to get a stop sign put up on a particularly dangerous intersection in their neighborhood could do this without all of their communications being algorithmically read (and possibly escalated to a censor), all of their in-person meetings being recorded (at the least through the proximity of their phones, but if they want to "use banking apps" there's nothing keeping governments from having a backdoor to turn on their mics at those meetings.) It would even be easy to guess who they might approach next to join their group, who would advise them, etc.
The fixation on the future is a distraction. The world is being sealed in the present while we talk science fiction. The Stasi had vastly fewer resources and created an atmosphere of total, and totally realistic, paranoia and fear. AI is a red-herring. It is also thus far stupid.
I'm always shocked by how little attention Orwell-quoters pay to the speakwrite. If it gets any attention, it's to say that it's an unusually advanced piece of technology in the middle of a world that is decrepit. They assume that it's a computer on the end of the line doing voice-recognition. It never occurred to me that people would think that the microphone in the wall led to a computer rather than to a man, in a room full of men, listening and typing, while other men walked around the room monitoring what was being typed, ready to escalate to second-level support. When I was a child, I assumed that the plot would eventually lead us into this room.
We have tens or hundreds of thousands of people working as professional censors today. The countries of the world are being led by minority governments who all think "illegal" speech and association is their greatest enemy. They are not in danger of toppling unless they volunteer to be. In Eastern Europe, ruling regimes are actually cancelling elections with no consequences. In fact, the newspapers report only cheers and support.
Due to current admin policies, failing to do this would limit Microsoft's ability to drink from the federal trough. Whatever value they put in diversity is less than they put in large contracts.
At the risk of sounding like an LLM, you're absolutely right!
It would be stupid not to kowtow to the current admin given how much business Microsoft does with the US government. The pendulum will swing back, guarantee it.
Total US Fed. Gov. contracts for 2024 was (according to gao.gov) $755B. That's a lot of drinking, never mind any anticipated AI spending boost next year.
And it is interesting to see how different organizations are reacting to this administration depending upon what percentage of their bottom line is directly tied to government or military contracting.
Python Software Foundation telling the NSF they had too many strings attached to their money is another interesting spotlight on the current situation.