It's still good for asking questions; I'm not aware of many other places where friendly subject-matter experts hang around (IRC, Mathstodon, Reddit?).
It would be better if closing questions cost 1000 reputation. That's one advantage AI has over it: it will at least try to answer your question every time, and not just randomly shut you down for its own (wrong) reasons.
OS-level sandboxes are far too coarse-grained to achieve a good "hollowing out" of the attack surface. The principle of least privilege should extend down to (or start at) the individual language-library level, because that is where the actual trust boundaries are, or even finer, to the individual function or code-segment level (providing maximum control), rather than being limited to larger domains.
Most software today relies on many (imported, third-party) libraries, so the security architecture should provide primitives/abstractions to manage rights at that level, which requires programming languages to implement the ability to sandbox code (to manage its effects). If they did this with lightweight, portable virtual machines like WebAssembly, that could work.
The vast majority of code out there should be limited to pure computation, with no ability to access anything external at all (and otherwise, only what it actually requires). Yet most languages are simply incapable of providing any such guarantees. If programmers cannot get ironclad assurances, they cannot in turn provide them to their users.
I'm not saying that OS-level sandboxing isn't good, just that it doesn't go far enough. And depending on the setup, it may not sufficiently limit the effects of compromised elements, and it provides no "monitoring in the small". It's also neither convenient nor efficient to have an entire OS instance for every single system component. Compartmentalized microkernel operating systems like Genode do it better, imo.
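As a toy illustration of what function-level least privilege could look like: in the object-capability style, code receives only the effects it needs as explicit arguments instead of having ambient authority. The names here are hypothetical, and plain Python can't actually enforce this; a capability-safe runtime or a Wasm sandbox would be needed for real guarantees.

```python
# Hypothetical sketch of least privilege via object capabilities.
# A function gets exactly one narrow capability, not the whole OS.

class ReadOnlyFile:
    """A capability granting read access to exactly one path."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def word_count(cap):
    # Pure computation plus one explicit capability: by convention,
    # this function cannot write files, open sockets, or read any
    # other path. An enforcing runtime would make that a guarantee.
    return len(cap.read().split())
```

In plain Python this is only a discipline; the point of the comment above is that languages should make it an enforceable boundary.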
"Mathematics is a field of study that discovers and organizes methods, theories, and theorems that are developed and proved for the needs of empirical sciences and mathematics itself."
In order to understand mathematics you must first understand mathematics.
Two of the most fascinating open questions about the Game of Life are in my opinion:
1. What is the behavior of Conway's Game of Life when the initial position is random? Paraphrasing Boris Bukh's comment on the post linked below: the Game of Life supports self-replication and is Turing-complete, and therefore can support arbitrarily intelligent programs. So, will a random initial position (tend to) be filled with super-intelligent life forms, or will chaos reign?
There are uncountably many possible initial configurations from which a random one may be drawn, which makes this more difficult (a particular infinite grid configuration can be represented as the binary digits (fractional part) of a real number, reading the cells in a spiral outward from a given center cell: 0.0000... represents an empty infinite grid, 0.1111... a fully alive one).
2. Relatedly, does a superstable configuration exist? One that continues to exist despite any possible external interference pattern on its border? Perhaps even an expanding one?
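The spiral encoding mentioned in question 1 can be sketched concretely. This is a minimal version of my own; the exact traversal order is one arbitrary choice among many:

```python
# Sketch: enumerate grid cells in an outward spiral from (0, 0), so
# that the k-th binary digit of a real number's fractional part
# decides whether the k-th cell starts alive.

def spiral(n):
    """Yield the first n cell coordinates, spiraling outward from (0, 0)."""
    if n <= 0:
        return
    x = y = 0
    dx, dy = 1, 0
    leg = 1
    yield (0, 0)
    count = 1
    while count < n:
        for _ in range(2):          # two legs per ring, then the leg grows
            for _ in range(leg):
                x, y = x + dx, y + dy
                yield (x, y)
                count += 1
                if count >= n:
                    return
            dx, dy = -dy, dx        # turn 90 degrees
        leg += 1

def initial_cells(bits):
    """Map a bit string like '1011' onto spiral cells; '1' = alive."""
    return {cell for cell, b in zip(spiral(len(bits)), bits) if b == "1"}
```

So "0000..." yields the empty set (an empty grid) and "1111..." makes every enumerated cell alive, matching the endpoints described above.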
Your first question is discussed in the book The Recursive Universe by William Poundstone (1984).
One of the chapters asks "what is life?". It considers (and rejects) various options, and finally settles on a definition based on von Neumann-style self-replicating machines using blueprints and universal constructors, and explains why this is the most (only?) meaningful definition of life.
Later, it talks about how one would go about creating such a machine in Conway's Game of Life. When the book was written in 1984, no one had actually created one (they need to be very large, and computers weren't really powerful enough then). But in 2010 Andrew J. Wade created Gemini, the first successful self-replicating machine in GoL, which I believe meets the criteria, and hence is "alive" according to that definition (but only in the sense that, say, a simple bacterium is alive). And I think it works somewhat like how it was sketched out in the book.
Another chapter estimated how big (and how densely populated) a randomly initialized hypothetical GoL universe would need to be for "life" (as defined earlier) to appear by chance. I don't recall the details, but the answer was mind-bogglingly big, and also very sparsely populated.
All that only gives you life though, not intelligence. But life (by this definition) has the potential to evolve through a process of natural selection to achieve higher levels of complexity and eventually intelligence, at least in theory.
One problem is that, even though it is Turing-complete, many practical operations are very difficult. Patterns tend toward chaos or toward fading out, neither of which is a good property for useful computation. Simply moving information from one part of the grid to another requires complex structures like spaceships.
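A minimal sparse-set implementation of the rules makes this concrete: the glider below is the classic spaceship, the simplest structure that carries information across the grid (the coordinates are my own arbitrary choice of origin).

```python
from collections import Counter

def step(alive):
    """One Game of Life generation on a sparse set of live (x, y) cells."""
    neigh = Counter((x + dx, y + dy)
                    for x, y in alive
                    for dx in (-1, 0, 1)
                    for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {c for c, n in neigh.items()
            if n == 3 or (n == 2 and c in alive)}

# The classic glider: after 4 generations it reappears shifted by
# (1, 1), carrying its "information" diagonally across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

Five cells and four generations just to move one diagonal step: a small illustration of why useful computation in GoL takes such elaborate machinery.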
You might have better luck with other variants. Reversible cellular automata have a sort of 'conservation of mass' where cells act more like particles. Continuous cellular automata (like Lenia) have less chaotic behavior. Neural cellular automata can be trained with gradient descent.
‘Random’ configurations are going to be dominated by fixed-scale noise at roughly 50% density, which will have very common global evolutionary patterns; it's almost homogeneous, so there's little opportunity for interesting things to occur. You need to start with more scale-free noise patterns, so there are more opportunities for global structures to emerge.
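One crude, purely illustrative way to contrast the two (this construction is my own sketch, not a standard scale-free noise recipe): uniform 50% noise flips each cell independently, while a multi-scale fill stamps random blocks of several sizes, so density varies across scales.

```python
import random

def uniform_noise(n, rng):
    """Each cell alive with probability 0.5, independently: homogeneous."""
    return [[rng.random() < 0.5 for _ in range(n)] for _ in range(n)]

def multiscale_noise(n, rng, scales=(1, 4, 16)):
    """Stamp random square blocks at several sizes onto an n x n grid,
    giving clumps and voids at multiple scales (illustrative only)."""
    grid = [[False] * n for _ in range(n)]
    for s in scales:
        for _ in range(n * n // (4 * s * s)):  # fewer, larger stamps
            x, y = rng.randrange(n), rng.randrange(n)
            for i in range(y, min(y + s, n)):
                for j in range(x, min(x + s, n)):
                    grid[i][j] = True
    return grid
```

The multi-scale version produces large empty regions and large dense clumps, the kind of inhomogeneity the comment above argues is needed for interesting structures to emerge.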
> the Game of Life supports self-replication and is Turing-complete, and therefore can support arbitrarily intelligent programs.
I think people will disagree about whether “Turing-complete” is powerful enough for supporting intelligence but let’s assume it does.
> So, will a random initial position (tend to) be filled with super-intelligent life forms, or will the chaos reign?
Even if it doesn't, it might take only one intelligent life form for the space to (eventually) get filled with it (the Game of Life doesn't have energy constraints that make it hard to travel over long distances, so I don't see a reason why it wouldn't; on the other hand, maybe my assumption that all intelligent life would want to expand is wrong), and in an infinite plane it's likely (certain?) that one will exist.
On the other hand it’s likely more than one exists, and they might be able to exterminate each other.
> it might take only one intelligent life form for the space to (eventually) get filled with it
It wouldn't need to be intelligent to do this; it could be a self-replicating machine with no intelligence at all - which is orders of magnitude simpler and therefore more likely.
Chaotic initial state -> self-replicating machine -> intelligence is much more likely than chaotic initial state -> intelligence.
(See my other reply to the GP comment about The Recursive Universe, where all this is discussed.)
Something related to these questions from physics: there was an interesting discussion on Scott Aaronson's blog a few years ago about why the universe should be quantum mechanical. One idea brought up there is closely related to the open questions you name here.
Here's an excerpt from a comment of Daniel Harlow (a prof at MIT):
> In order for us to be having this discussion at all, the laws of physics need to have the ability to generate interesting complex structures in a reasonable amount of time starting from a simple initial state. Now I know that as a computer scientist you are trained to think that is a trivial problem because of Turing completeness, universality, blah blah blah, but really I don’t think it is so simple. Why should the laws of physics allow a Turing machine to be built? And even if a Turing machine is possible, why should one exist? I think the CS intuition that “most things are universal” comes with baked-in assumptions about the stability of matter and the existence of low-entropy objects, and I think it is not so easy to achieve these with arbitrary laws of physics.
Scott replies:
> Multiple people made the case to me that it’s far from obvious how well (1) stable matter, (2) complex chemistry, (3) Lorentzian and other continuous symmetries, (4) robustness against small perturbations, and (5) complex structures being not just possible but likely from “generic” initial data, …can actually be achieved in simple Turing-universal classical cellular automaton models.
In an infinite plane, if we keep adding random points (similar to the sun continuously giving the earth low-entropy energy), we will eventually get intelligent life forms, which are very efficient at converting low-entropy energy into high-entropy energy.
The thing that blows my mind is: say you start filling the plane with the digits of pi. Pi is widely believed (though not proven) to contain every finite sequence. That would mean that somewhere in the plane is a full physics simulation of YOU, in the room you are in right now.
Does that you exist any less fully because it's not currently in the memory of a computer being evaluated?
Depending on the infinite grid-filling scheme, even these properties may not be sufficient to guarantee that every two-dimensional pattern is initially generated, because the grid is two-dimensional but the number property is "one-dimensional". A spiral pattern, for example, may always line the digits up in a way such that certain 2D patterns are never generated.
Since it's not provable with pi, we'd have to take a more circuitous route to proving that every finite pattern occurs. Inspired by Champernowne's constant, I propose a Pontifier Pattern that is simple and inefficient, but provably contains every finite pattern.
Starting at the origin, mark off rows of squares. The Nth row would contain 2^(N^2) squares of size N x N. Each square would be filled, in left-to-right reading order, with successive binary numbers, the most significant digit at the top left.
Somewhere in that pattern is the physics simulation of you reading this comment :)
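The counting fact behind such a construction: row N must tile all 2^(N^2) binary fillings of an N x N square, so every N x N pattern appears somewhere in that row. A tiny sketch enumerating them (for N = 2):

```python
# Sketch: enumerate every n x n binary pattern, the set a row of the
# proposed construction would have to tile.
from itertools import product

def all_squares(n):
    """Yield every n x n binary pattern as a tuple of row tuples."""
    for bits in product((0, 1), repeat=n * n):
        yield tuple(tuple(bits[r * n + c] for c in range(n))
                    for r in range(n))

# e.g. all 2**(2*2) = 16 possible 2 x 2 patterns
squares = list(all_squares(2))
```

The count grows doubly exponentially (2^(N^2)), which is why the construction is "simple, inefficient, but provably" exhaustive.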
Yes, sounds like it! Though I'm thinking that the relative arrangement of patterns would also make a difference. I wonder if such a thing as "all (infinitely many) possible arrangements of all patterns" can exist.
That "feature" is so egregiously bad. I regularly consume content in three languages, and hearing the wrong language coming from my speakers is so jarring. It is a uniquely awful experience that I had never encountered before, nor even imagined.
While we're at it, can we also fire the person who made it so that you now have to click the channel's mini thumbnail to open it, EXCEPT when the channel is live, when clicking the thumbnail takes you to the live video, where you have to click the thumbnail again?
Oh, are we talking about bad YouTube UX? How about the "feature" where the right and left arrows seek the video 5s forward and back, while the up and down arrows increase and decrease the volume? That is, unless the last UI element you've touched was the volume bar, in which case the side arrows will also change the volume, and you'll have to use the mouse to clear the focus away from that volume bar to be able to seek the video again. I still wonder how they managed to break this despite it having had a sane, consistent, defined behavior for probably over a decade before that point.
> That is, unless the last UI element you've touched was the volume bar, in which case the side arrows will also change the volume, and you'll have to use the mouse to clear the focus away from that volume bar to be able to seek the video again
That is a feature (of the browser). The volume bar is selected, so it takes over the left/right controls (that's just what a horizontal slider does). You can also select the volume button and mute/unmute with the spacebar (the spacebar triggers the focused UI element's action, like clicking a button). You can tab around the buttons under the video, select options, etc., all with the keyboard. If a control doesn't support an action, it gets propagated up to the parent, which leads to the jarring feeling that the controls are inconsistent (and also inconsistent effects: left/right just adjusts the volume, while up/down also plays an animation).
It's the usual low quality Google product, but it does make sense why it is so.
Oh, I know why it works that way. The point is that overriding parts of the normal focus behavior makes sense within video players: not only does every function already have a keyboard shortcut (reducing the need for tabbing through every button), but some shortcuts can only work if the player understands what can and can't be overridden. Dropping focus from the volume bar for the sake of keeping the arrow-key assignments consistent is how it was done. Many other video platforms have figured this out, and so did YouTube many years ago, but during one of their endless player redesigns they seem to have simply forgotten that basic behavior. I have no idea how it was allowed to remain this way for years, given all those extremely intelligent and well-paid engineers working on what is probably the world's most popular video player UI.
Googlers are obviously mentally challenged by the concept that there might be anybody in the world who has learned English as a second language.
Bet the idea of forcing outdated TTS, whose robotic droning is the pinnacle of annoyance, on every single user who speaks more than one language was worth a nice bonus.
I agree. But for the benefit of others struggling with this: I haven't found a way to disable them as a user setting, but you can at least turn them off on a per-video basis by changing the video language in the playback settings (the little gear icon).
They could at least try to vaguely match the voice and maybe the cadence of the original. AFAIU it's one of those things that would have been too hard ten years ago but is fairly easy now. Too computationally expensive, probably.
Yeah ElevenLabs had this over a year ago where you could just upload a 30 second clip of someone's voice in another language and hear what it was like in English and it worked really well.
Same for auto-translated titles. It's like they can't fathom the idea of people speaking more than one language. At least give an option to turn it off!
I was playing a game with a friend and the chat was increasingly full of angry people complaining about cheaters easily obtaining very hard to get items. He asked what I thought about it....
Well, the game is clearly very important to these people, it is increasingly visible. They are clearly very emotionally engaged. I'd say things are going really well!
YouTube was once a miraculous technical website running circles around Google Video. I'm told they used a secret technology called Python. Eventually Google threw in the towel and didn't want to compete anymore. They were basically on the ground in a pool of bodily fluids when the referee counted all the way to 1.65 billion.
Some time went by, and now you can just slap a <video> tag on an HTML document and call it a day. Your website will run similar circles around the new Google Video, only much, much faster.
The only problem is that [even] developers forgot <s>how</s> why to make HTML websites. I'm sure someone remembers the anchor tag, and among those, some even remember that you can put full paths in there that point at other websites, which could [in theory] also have videos on them (if they knew <s>how</s> why).
If this was my homepage I would definitely add a picture of Dark Helmet.
I've been having fun with the following AI prompt recently:
> You roleplay as the various Ancient Roman (Year 0) people I encounter as an accidental time traveler. Respond in a manner and in a language they would actually use to respond to me. Describe only what I can hear and see and sense in English, never translate or indicate what others are trying to say. I am suddenly and surprisingly teleported back in time and space, wearing normal clothes, jeans, socks and a t-shirt into the rural outskirts of Ancient Rome.
I think this is a fun way to learn languages too.
Sounds like a really interesting story, but the reviews of the English edition by Dutch and German speakers leave me wondering: is there a better English translation available? It's hard to tell from the reviews if there's only one.
I can't believe that blog uses a simulated video. How hard is it, Microsoft, to have literally someone talking into a mic connected to two different laptops? Seriously.
Can a Bluetooth mic connect wideband to one laptop and normal to another at the same time? Regardless, the simulation is very accurate IME. It is, after all, all digital anyway.
It's that when you have legal agreements with guilds and unions, even produced promotional material can be considered a production requiring minimum staff (i.e., makeup, camera technician, etc.). On productions, any person wearing multiple hats is tightly controlled.
A cartoon I watched growing up ran into this when they needed to insert live action, so they deliberately recorded at 1 FPS for that episode to make it ineligible for budget reasons (https://phineasandferb.fandom.com/wiki/Tri-Stone_Area).
If you’re ever wondering why a company can’t do something simple and obvious, it’s probably due to a legal agreement.
The trick I'm using (at least on laptops; I can't do this on phones, AFAICT) is to change the input device to the laptop's own microphone, so my earphones don't switch to HFP (Hands-Free Profile) and instead stay in a better-quality codec (AAC, LDAC, aptX, SBC, whatever your devices agree upon).
Sound quality on both sides of my calls improved dramatically! Since discovering this, I tell all my colleagues in our Zoom meetings to switch microphones, and it's immediately better for everyone on the call (not just the person who was using HFP).
This is because if you use the Hands-Free Profile, it encodes your voice at a terribly low bitrate, and even worse, the sound you hear in the headphones drops to a terribly low bitrate too.
They should finally fix the HFP (Hands-Free Profile) spec, as it's literally impacting call quality for billions of people.
Edit:
Apparently LE Audio is a thing, but device support is still terrible.
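Rough numbers make the HFP problem concrete. These are nominal sample rates (narrowband CVSD at 8 kHz and wideband mSBC at 16 kHz for HFP, 44.1 kHz stereo SBC for A2DP music); actual quality also depends on each codec's bitrate and processing, so treat this as a back-of-the-envelope sketch.

```python
# Back-of-the-envelope: why HFP voice sounds so much worse than
# A2DP music playback on the same headphones.

codecs = {
    "HFP CVSD (narrowband)": 8_000,    # Hz sample rate, mono voice
    "HFP mSBC (wideband)":  16_000,    # Hz sample rate, mono voice
    "A2DP SBC (music)":     44_100,    # Hz sample rate, stereo
}

# Nyquist: usable audio bandwidth is at most half the sample rate.
bandwidth = {name: rate // 2 for name, rate in codecs.items()}
```

So narrowband HFP tops out around 4 kHz of audio bandwidth (old-telephone quality) versus roughly 22 kHz for music playback, which is why falling back to HFP is so audible.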
I use this same technique, but typically with an external mic mounted to my desk. Another benefit beyond higher-fidelity audio is that it also reduces the latency before other people hear your audio, by about 100-250 ms.
> And using the headset mic is probably better if the room is loud or has poor acoustics.
Not to mention the combination of "microphone in the laptop body + person who doesn't turn off their microphone when they're not speaking + person who seems to never stop typing during a call" tends to be distracting at best.
Sure, but janky $30 wireless gaming headsets have even lower latency, with better audio quality, than Bluetooth handsfree, so it's still sad that anything still uses it.
The problem with this trick is that it's very important for your callers to hear you clearly, and laptop mics usually suck, and pick up fan noise.
Maybe not a problem with Macs, but call quality on most laptops using the built in mic is bad enough that people on the other side will have a bad impression of you.
I have a friend who works in sales and business development. He was fighting with his Bluetooth headset and his laptop all the time. I told him to just get a simple USB podcast microphone. You can get a decent one for next to nothing. Problem solved. Those are designed to make you sound good. And if you do sales, you should want to sound amazing.
I actually told him that many salespeople get this completely wrong and sound like an absolute Muppet on their expensive headsets without even realizing it, and explained that anything Bluetooth is basically never going to sound amazing. There's a lot of snake oil in the market. I got some nice Sony earbuds recently; tried them once and I was apparently barely audible. That's supposedly a high-end option. It's OK, I got them for music and podcasts and wasn't expecting much for calls, but they managed to underwhelm me even on that front. The weakness is Bluetooth and the standard codecs supported on Mac/Windows. You are basically screwed no matter what BT headset you use. For phones, it depends.
Apple fixes this with AirPods by doing a proprietary codec and probably quite a bit of non-trivial sound processing. None of that is part of the Bluetooth standard, and what little is supported in some newer codecs typically does not work in Windows/Mac. So it will still fall back to that low-bitrate codec that distorts your voice and makes you sound like a Muppet.
If you need to use a phone, getting a USB-C headset can be an alternative. Not that many wired headsets these days, sadly. Even Apple now uses USB-C. And both Android and iOS support most USB-based sound equipment.
I take most calls on my Mac. I configured an aggregate device with the Audio MIDI Setup tool so that my headset doesn't hijack the microphone. Nice little hack if you have some decent BT headphones. On a Mac, the microphones in the laptop are likely way better than those in the vast majority of headsets. And that's before you consider the latency and heavy compression Bluetooth adds to the mix.
> Apple fixes this with AirPods by doing a proprietary codec and probably quite a bit of non-trivial sound processing. None of that is part of the Bluetooth standard
Do you have any sources for that claim?
As far as I understand (and based on what I've seen in some Bluetooth debugging menus at least a few macOS versions back), for HFP they just use regular mSBC.
That's an optional codec for HFP (while SBC is mandated for A2DP), and a step above absolute potato quality G.711/PCM u-law, but still part of the regular Bluetooth HFP specs.
Yes, see my comment about their proprietary codec being exclusive to Apple platforms. It won't work on Android, Linux, or Windows. It only works with Apple headphones paired with Apple phones/laptops.
For a real quality improvement, meaning 48 kHz stereo plus mic, you'll also need GMAP (Gaming Audio Profile) support on both the BLE adapter and the headset.
I've tried multiple combinations with my WH-1000XM6 and WF-1000XM5, but nothing works stably on Windows. Linux requires hand-patching BlueZ and friends, which also failed for me. Android does not support GMAP, and when just using LE, a lot of messengers are unable to detect the mic properly (Google Meet works; Telegram and Viber do not).
I finally gave up on that idea. Just thinking about the fact that we cannot use duplex wireless audio in 2025 pisses me off so much, tbh.
Worse yet, I got a new Bose headset with USB-C audio support, and the microphone doesn't work at all, over either USB or Bluetooth, while USB-C is playing audio!
My WH-1000XM5 set broke, and it was going to cost more to repair than to simply buy a new pair. So I decided to check out the cheaper end of the market and bought a pair of Edifier W830NB.
They are pretty decent (a notable downgrade in most aspects; you do get what you pay for, but they're good enough for my daily needs). But I was very happy to discover that when plugged in via USB-C, the microphone works over USB at full quality. That's one thing my WH-1000XM5s couldn't do, nor can the newer XM6s.
It's been difficult for me to find headphones with LE support. And I've also seen some of them announce support, only to remove it later because the firmware behaved so badly.
I haven't checked in a while, so I don't know if there's something reasonable now that doesn't cost $500 or so.
Classic and LE are completely different protocols, from physical layer and up. It must be that it doesn't make a lot of sense for manufacturers to invest substantial effort in it.
> Classic and LE are completely different protocols, from physical layer and up.
Which makes sense when you know it started life[1] as a separate protocol called Wibree by Nokia, which was specifically designed[2] to be usable by Bluetooth RF gear:
A major tenet of their design was that “it can be deployed with minor effort into devices already having Bluetooth, e.g. cellphones” with the added requirement that a “common RF section with Bluetooth must be possible”.
Yes, very frustrating...
I was on the lookout for new headphones that "just work" and LE Audio / LC3 support was a must for me.
One of the more frustrating tech shopping experiences I've had so far.
Landed on the JBL Tour One M3, they sound okay and support LE Audio.
They have some interface problems (Auto-Pause and automatic speech detection are way too sensitive for me), but you can tweak them so they do "just work" (mostly).
> Landed on the JBL Tour One M3, they sound okay and support LE Audio.
I recently got the Tour One M2 and was pretty disappointed with the audio quality (both normal Bluetooth and LE Audio). Noticeably worse than my previous wired headphones, which were also cheaper. The touch controls are also terrible, and I don't like that noise cancelling is on by default with seemingly no way to change the default setting.
The WF-1000XM5's beta Bluetooth LE support is pretty good in the latest firmware update. Even though it is listed as beta, I use it all the time. And they are pretty decently priced at the moment.
Yeah, this is a dealbreaker. Same as when I found out my Sennheiser headphones gave me something like 500 ms of latency on audio cues. I get that it was an older Bluetooth protocol, but yeah... no, I'll stick to wired for my PC.
Oh yeah, I also LOVE Teams and Meet completely breaking my mic, forcing me to use some other mic because they don't work with the one on my headphones half the time.