At least on the HomePod side, Intercom is at best a half-baked feature and at worst an infuriation machine. It uses a cumbersome voice trigger ("Siri, tell <room>…") to begin recording audio, with no clear indication of when recording began and no way to know for sure that the audio was directed where you wanted it to go.
Responding is similarly cumbersome, and soon you give up completely. I can only assume it was designed by someone whose parents were killed in an intercom-related disaster and who has sworn revenge.
I bought a mini for my office with this purpose in mind, but it has been a total waste.
Losers and the clueless never had to be productive, just be scapegoats. But now losers don't get that buffer window to try to become sociopaths; they just don't get hired at all.
Nope, still doesn't. I have two Macs on my desk and no simple way to connect them to a single Apple display! It's a glaring hole that, to me, suggests they have no idea who the market for these is.
Unlike many of those approaches which concern themselves with delivery of human-designed static UI, this seems to be a tool designed to support generative UIs. I personally think that's a non-starter and much prefer the more incremental "let the agent call a tool that renders a specific pre-made UI" approach of MCP UI/Apps, OpenAI Apps SDK, etc for now.
that's not quite what the parent was talking about, which is: don't just use one giant long conversation. resetting "memories" is a totally different thing (which might still be valuable to do occasionally, if they still let you)
Actually, it's kind of the same. LLMs don't have a system for forming new memories; they're like the guy from Memento. They have context memory, plus long-term memory from the training data, but they can't make new memories from the context.
(Not addressed to parent comment, but the inevitable others: Yes, this is an analogy, I don't need to hear another halfwit lecture on how LLMs don't really think or have memories. Thank you.)
Context memory arguably is new memory, but because we abused the metaphor of "learning" for trained model weights, rather than something more like shaping inborn instinct, we have no fitting metaphor for what happens during the "lifetime" of an interaction with a model via its context window as the formation of skills/memories.
i'm genuinely curious about how you made the jump from "here's a single regulation" all the way down the slippery slope to "you can't regulate away ALL parenting". does this one regulation cross that threshold? how'd you get there?
in an ideal world, parents would also prevent their kids from smoking, but the fact that in many places minors aren't allowed to purchase tobacco sends a social signal and actually does seem to put a speed bump in place deterring casual use.
is it not _also_ ideal to have some of these regulations in place? does it not help parents make the case to their kids?
it does help. i think this is a step in the right direction.
but there's still a lot of stuff that only parents can do. for example, screen time in the home. you can't really write a law that says no screens for anyone under the age of X, because there will be exceptions (movie night, homework, etc.).
Screen-time limits help, but they don't really solve the problem. Kids still see the exact same content shared by friends at school, and 15 minutes a day is enough to do damage.
I think for enterprise it’s going to become part of the subscription you’re already paying for, not a new line item. And then prices will simply rise.
Optionality will kill adoption, and these are absolutely things you HAVE to be able to play with to discover the value (because it's a new and very weird kind of tool that doesn't work like existing tools).