well we can stick planes in the upper atmosphere and sprinkle sulfur around to get some cooling, but it'll get worse before then. gonna be interesting.
So I've been on a journey of discovering basically this - limits to growth - for the last few years. It's been ... an emotional roller coaster as someone living in the developed world. I'm following the work of Nate Hagens and others in the space, but The Dread still ebbs and flows.
How do you hold this dispassionately? How do you get to a point of wanting to reproduce, or even wanting to continue, as an act of radical hope? Absurdism? Pure interest in watching it all unfold? I'm pretty aware that we are going to have constraints forced on us as like, a thermodynamic function, but ... how to cope? Go back to the tragedy?
Just don't do things that are absolutely not sustainable...
"Not sustainable" meaning: if everyone did this, we'd need five planets...
The good news is that with technology there will be fewer and fewer of those...
But if you really want to minimize / lead by example, you could live in a small apartment in a big city... It's the most sustainable way to live. Besides that, help improve / maintain the common infrastructure:
Libraries
Swimming pools
Toolsheds / Makerspaces
Schools
Etc etc
A tiny garden at your home < a big park and shared city vegetable plots
Electric bicycle for 80% of your commutes and a shared / rental car when needed.
Getting rid of stuff you no longer need (helps with living in a small place as well)
Countless little big things.
Also:
- buying second hand phones
- investing in solar projects
-...
Throughout human history entire families, tribes, villages, and cities were on the edge of death, whether it was by disease, famine, or invaders. This is nothing new. Don't buy into the people selling fear.
So acquiring immunity to a lower-risk version of the service before it's ramped up? E.g. jumping on FB now as a new user is vastly different from doing so in 2014 - so while you might go through the same noob patterns, you're doing so with a lower-octane version of the thing. Likewise, the risk of AI psychosis has probably gone up for new users, like the risk of someone getting too high since we started optimizing weed for maximum THC?
He speaks in the present tense, so I assume so. This guy seems detached from reality, calling [the AI] his "most important relationship". I sure hope for her sake she runs as far as she can away from this robot dude.
I do sometimes wonder if we will get "detailed enough" vector embeddings in LLMs to bring the grain of resolution down below human perception - like having enough bits to fully capture what's on tape in audio world. Maybe this is never possible, and (I hope) some details are unresolvable, but it will be interesting to see.
LLMs are already used in signal processing so the idea is explored.
Simply put, anything that can be encoded is a language, so you just need sensors to capture and classify the incoming data and build that into a model. The real question is post-training the model to behave correctly, since these domains are far less explored than things at the human scale. RLHF may be a poor choice because the models may pick up on real patterns that humans can't perceive, and humans will discount them as incorrect.
I suspect the curse of dimensionality makes this an optimization dead end. You hit prohibitive latency limits on retrieval long before the resolution approaches human perception. Even with current dimensions, the trade-off between index size and query speed is already the main constraint for production systems.
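The "curse of dimensionality" intuition here can be sketched in a few lines of stdlib Python. This is a toy demonstration of distance concentration (not a claim about any particular production index): as dimension grows, the nearest and farthest random neighbors become nearly equidistant from a query, which is part of why high-resolution retrieval gets hard.

```python
import math
import random

random.seed(0)

def relative_contrast(dim, n=200):
    """Ratio (max_dist - min_dist) / min_dist from one random query
    to n random points in the unit cube [0,1]^dim.
    In low dimensions this ratio is large (neighbors are clearly
    distinguishable); in high dimensions it collapses toward 0."""
    query = [random.random() for _ in range(dim)]
    dists = []
    for _ in range(n):
        point = [random.random() for _ in range(dim)]
        dists.append(math.sqrt(sum((q - p) ** 2 for q, p in zip(query, point))))
    return (max(dists) - min(dists)) / min(dists)

# Contrast is orders of magnitude smaller at dim=1000 than at dim=2.
print("dim=2   :", relative_contrast(2))
print("dim=1000:", relative_contrast(1000))
```

When the contrast ratio is near zero, "nearest neighbor" stops being a sharply meaningful query, and approximate indexes have to trade recall for speed, which matches the index-size vs. query-latency tension mentioned above.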
>When I was a child I used to ask my mother—of course—all sorts of ridiculous questions that every child asks, and when she got bored with my questions she would say, “Darling, there are just some things we’re just not meant to know.” I said, “Will we ever know?” She said, “Yes, of course, when we die and go to heaven, God will make everything plain.” So I used to imagine on wet afternoons in heaven, we’d all sit around the throne of grace and say to God, “Well, now, why did you do this?” and “How did you do that?” and he would explain it to us. “Heavenly father, why are the leaves green?” And he would say, “Because of the chlorophyll.” And we’d say, “Oh.”
>But in the Hindu universe, you would say to God, "How did you make the mountains?" And he would say: well, I just did it. Because what you're asking me for—when you ask me how did I make the mountains, you're asking me to describe in words how I made the mountains, and there are no words which can do this. Words cannot tell you how I made the mountains any more than I can drink the ocean with a fork. A fork may be useful for sticking into a piece of something and eating it, but it's of no use for imbibing the ocean. It would take millions of years. So it would take millions of years, and you would be bored with my description long before I got through it, if I put it to you in words. Because I didn't create the mountains with words, I just did it. Like you open and close your hand. You know how to do this, but can you describe in words how you do it? But you do it. You are conscious, aren't you? Don't you know how you manage to be conscious? Do you know how you beat your heart? Can you say in words, explain correctly, how this is done? You do it, but you can't put it into words! Because words are too clumsy, and yet you manage this expertly for as long as you're able to do it.[1] (Text sourced from https://www.organism.earth/library/document/out-of-your-mind...)
It's such a wonderful thing to be reminded of how silly it is to take language seriously. IMO it's prickles and goo[1] all the way down - and the prickles help us share meaning and exchange information, but there is no project of exactitude to be completed.
The hubris it takes to maintain the view that we can just keep figuring things out if we are rational enough is also sometimes overwhelming to me. It's not that we can't understand things better through analysis, just that it sometimes seems foolish to me to try to get all of it through system-2 type behavior. We will always miss something crucial[2].
An algorithm written in a well-specified language with precise semantics can still have bugs. A "logical" argument made in natural language is orders of magnitude less precise.
What I've always wondered, though, is whether that lack of precision is what allows for meaning to arise in the first place. In the gap between language and - this.