In general I agree, but The Verge in particular tends to just say exactly what the press release says with less detail. If we're going to do a non-press-release source it should be because they're offering context and information that the company would not willingly choose to provide themselves.
Yeah, also agree with you in general, if it's the same, doesn't really matter :)
But at least the last paragraph seems to be adding something, although the rest of the article is indeed just a re-hash of the press-release.
> Meta recently signed a multi-year deal with EssilorLuxottica, the parent company behind Ray-Ban, Oakley, and other eyewear brands. The Meta Ray-Bans have sold over two million pairs to date, and EssilorLuxottica recently disclosed that it plans to sell 10 million smart glasses with Meta annually by 2026. “This is our first step into the performance category,” Alex Himel, Meta’s head of wearables, tells me. “There’s more to come.”
> “They are all my children and will all have the same rights! I don’t want them to tear each other apart after my death,” he said, after revealing that he recently wrote his will.
I agree that it's pretty cringe to refer to all of them as his children when he's literally a sperm donor, but he definitely did call them that.
That depends a lot on your definition of "children" and "father".
Many people with uninvolved biological fathers would disagree with you that the guy who impregnated their mother counted as their father, especially if they were raised by another man who actually did stick around — and that's for dads who at least conceived the child directly. Sperm donation takes this even further, because he claims to have donated anonymously [0], meaning he was as uninvolved as he possibly could have been in the process of being a father.
Many or most of these kids have real men who were actually there helping to raise them through their childhood who they refer to as "father", and it's pretty disrespectful of Durov towards those men to attempt to usurp that title on the grounds of what was supposed to be an anonymous donation.
I'll grant that Durov is more likely than most sperm donors to have some of these kids actually claim him as their father, but that's in no small part because there's now a substantial amount of money tied to them identifying him as such. Cynically I wonder if that's a major motivator for him doing this, because he knows that the kids wouldn't otherwise know or care who he is.
What term would you prefer he use? "Offspring"? "Biological children"? I agree that he is in no way a father in the same sense as others who have actually helped raise children, but I also don't think he's claiming to be, and his phrasing makes sense to me. He is literally their father (in the most uninvolved way possible), and they are literally his children.
As an adopted person, "biological <insert title>" has worked well for the parents that had sex to make me. For a donor parent, I'd probably just use "donor <insert title>". I'd advise not worrying too much about the language, though. Being kind and thoughtful is far more important than selecting the correct words. A snap-judgment selection of proximal words is sloppy, but it's impractical to pause and select exactly the right language in all cases for all statements. Even so, with something this sensitive it might be good to slow down a little.
My main point in the last comment is that inserting himself into their lives at all is disrespectful. He doesn't need a word for them because he has no relationship to them: he was an anonymous donor to enable their actual parents to have kids that they wouldn't have otherwise been able to have.
Agreed, it's highly unlikely. But they have a choice, it's not forced by anyone. If they do nothing (maybe their parents never tell them, or they don't read news, or facial recognition never informs them of similarity), there's no inheritance. If they take consensual action to make a claim via DNA paternity test, the inheritance can be claimed.
To be fair there's not really a good word standardized for what you're describing ("biological progeny without parental relationship"). People are going to use shorthand if they don't have a good term.
Right before LLMs broke into the scene we had a few techniques I was aware of:
* Personality Forge uses a rules-based scripting approach [0]. This is basically ELIZA extended to take advantage of modern processing power.
* Rasa [1] used traditional NLP/NLU techniques and small-model ML to match intents and parse user requests. This is the same kind of tooling that Google/Alexa historically used, just without the voice layer and with more effort to keep the context in mind.
Rasa is actually open source [2], so you can poke around the internals to see how it's implemented. It doesn't look like it's changed architecture substantially since the pre-LLM days. Rhasspy [3] (also open source) uses similar techniques but in the voice assistant space rather than as a full chatbot.
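As a rough illustration of the rules-based approach (the patterns and responses here are made up for the sketch, not Personality Forge's actual syntax), an ELIZA-style bot is essentially an ordered list of regex rules with response templates:

```python
import re

# Hypothetical rule set: first matching pattern wins, ELIZA-style.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Echo captured groups back into the canned response.
            return template.format(*match.groups())
    return "Please go on."  # fallback when no rule matches

print(respond("I feel tired today"))  # → Why do you feel tired today?
```

Modern rules engines layer state, memory, and priorities on top of this, but the core pattern-to-template dispatch is the same; intent-based systems like Rasa replace the regexes with a trained classifier.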
Google Glass was a display that sat up and to the right of where you wanted to be looking.
I don't know about everyone, but I found it pretty hard to use. Caveat, I didn't get them fit to me, I was supervising an intern working on a speculative Glass project, and they were fit to him.
AR would be neat, but voice interfaces are achievable at an approachable cost. I'm not one to talk to a computer, and I wear prescription lenses, so these glasses don't appeal to me, but I can see there's a market there; I'm just not sure how big it is or whether Meta can capture it.
The camera to capture 'what you see' seems like using the form factor pretty well.
Mic and speakers, too.
Glass attempted a display, but IMHO, it was unusable, so I understand why you would try the same thing with no display. Or the same thing, but mounted on your wrist (Google Wear).
If you can ask "Hey Meta, ..." while holding a golf club and unable to touch a button (which the promo video [0] shows you can) then the mic is always on. It may not always be beaming data to Meta, but that's a matter of trust, which I don't have much of for Meta given their history.
The camera may or may not be always on, but it can be turned on by software activated by the always-on mic (again, demonstrated by the promo video), so it would be best to treat it as though it is.
The “Hey *” (Meta, Siri, Alexa) is typically handled by a simpler mechanism on a short buffer that triggers the proper recording and speech recognition workflow in order to save battery. But if you’re not going to trust the company, then the fact that it responds to Hey Meta shouldn’t make any difference because it could still be quietly recording. The fact that it responds to a wakeup prompt changes nothing.
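The gating pattern being described can be sketched like this (names, frame counts, and the detector are all illustrative, not how any real device implements it):

```python
from collections import deque

BUFFER_FRAMES = 16  # a real device keeps roughly 1-2 s of audio

def cheap_wake_word_check(frames) -> bool:
    # Stand-in for a tiny always-running on-device model;
    # here we just look for a sentinel token in the buffer.
    return "hey-meta" in frames

def process_audio(stream):
    ring = deque(maxlen=BUFFER_FRAMES)  # old frames fall off the end
    for frame in stream:
        ring.append(frame)  # the mic feeds this buffer continuously
        if cheap_wake_word_check(ring):
            # Only now does the expensive recording/ASR path start.
            return "start full speech recognition"
    return "idle"

print(process_audio(["noise", "noise", "hey-meta", "turn", "on"]))
```

The point of the short ring buffer is battery: the cheap check runs constantly, while the costly pipeline only wakes on a hit — but note that the mic itself is still always feeding the buffer.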
I'm aware of the mechanism, but that mechanism relies on a mic that is always on.
I agree that the primary issue is that it's a software-controlled microphone with no off switch controlled by software written by Meta. I only emphasized the wake word listening in response to OP's claim that it's not always on when it must be.
I responded to that above. If the mic is always on and controls the camera (both of which are demonstrated in the promo video), any reasonable approach to infosec needs to treat the camera as always on as well.
Maybe, but that doesn’t mean that the camera is always on. It’s like saying a person holding an empty gun and a magazine is holding a loaded weapon because they can quickly reload it. It doesn’t really change the effect but it’s still an error.
Whether an empty gun and a magazine counts as a loaded gun varies state-by-state, so the distinction is not as clear-cut as you make it sound. New York State penal code defines a loaded gun as follows:
> 15. "Loaded firearm" means any firearm loaded with ammunition or any firearm which is possessed by one who, at the same time, possesses a quantity of ammunition which may be used to discharge such firearm.
So I guess I'm using the New York definition of an always-on camera.
and you trust meta with this? i don’t mean to be crass but that would be crazy.
they have proven over and over and over and over again they are absolutely not trustworthy.
at some point we have to come to grips with the fact that people like zuck, elon, andreessen, and other tech monarchs openly despise us when we ask for anything remotely resembling transparency from their companies, while they repeatedly abuse us and openly scoff at our privacy.
the fact that we collectively don’t understand the repercussions of this really is a bad sign.
i very well may have misunderstood your meaning, tho. i hope so.
I've been totally breaking Linux installs trying to get Nvidia to work for 15 years now, and that's on X11. On the other hand I recently did the first OS upgrade that I've ever done successfully without breaking Nvidia and that was running Wayland.
Nvidia is just really really bad on Linux in general, so it's always a coin toss if you'll be able to boot your system after messing with their drivers, regardless of display server.
The [flagged] indicator on a submission usually indicates user flagging. Moderators and algorithms just quietly downweight submissions without any visible indicator. So this isn't an HN moderator position, the question to resolve is why users would flag it.
In this case, I'd have flagged them too if I saw them. The "long live" post is an aggressive tirade that reflects poorly on the author and led to a poor-quality discussion. The second is a link to a git commit history, which is weird in its own right and provides no explanation, and the context provided in the comments shows that a generally dislikable figure with extreme political views is now leading a fork of X11 that has yet to prove itself viable. So I'd probably have flagged that one too as pointless drama until proven otherwise.
Given that we're apparently discussing an entire k8s 2.0 based on HCL that hardly seems like a barrier. You'd have needed to write the HCL tooling to get the 2.0 working anyway.
> And if you restrict it, they'll just fork your code and overwrite whatever they need.
More power to them. They can take responsibility for that code and maintain it and I don't have to worry about breaking them when I release a new version. Everyone's happy.
This talk is different from his others because it's directed at aspiring startup founders. It's about how we conceptualize the place of an LLM in a new business. It's designed to provide a series of analogies, any one of which may or may not help a given startup founder break out of the tired, binary talking points they've absorbed from the internet ("AI all the things" vs "AI is terrible") in favor of a more nuanced perspective on the role of AI in their plans. It's soft and squishy rhetoric because it's not about engineering, it's about business and strategy.
I honestly left impressed that Karpathy has the dynamic range necessary to speak to both engineers and business people, but it also makes sense that a lot of engineers would come out of this very confused at what he's on about.
I get that, motivating young founders is difficult, and I think he has a charming geeky way of provoking some thoughts. But on the other hand: Why mainframes with time-sharing from the 60s? Why operating systems? LLMs to tell you how to boil an egg, seriously?
Putting my engineering hat on, I understand his idea of the "autonomy slider" as a lazy workaround for a software implementation that deals with one system boundary. He should inspire people there to seek out unknown boundaries, not provide implementation details for existing ones. His MenuGen app would probably be better off using a web image search instead of LLM image generation. Enhancing deployment pipelines with LLM setups is something for the last generation of DevOps companies, not the next one.
Please mention just once the value proposition and responsibilities when handling large quantities of valuable data - LLMs wouldn't exist without them! What makes quality data for an LLM, or personal data?
Just looking through the settings, they have stuff with no obvious automation angle: the reduce-transparency option, for example, the sound recognition feature that alerts you to fire alarms, pairing with hearing aids, etc. Tim Cook said himself in an interview that they provide accessibility features regardless of ROI.
That's not the initial point. The point is that UI tests are so much easier and less brittle if they're going to be using the accessibility tree, rather than relying on a few selectors.
click("Enable Frobzinator") is infinitely more maintainable than click(button().index(7)), and if you eventually remove the accessibility description, your test fails, immediately letting you know what's wrong: either the test is no longer up to date, or you broke something.
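The difference can be shown with a toy widget list (the buttons and helper functions are hypothetical, just to contrast the two lookup styles):

```python
# Toy "page" with two buttons.
buttons = [
    {"label": "Save", "enabled": True},
    {"label": "Enable Frobzinator", "enabled": True},
]

def click_by_label(label: str) -> dict:
    # Accessibility-style lookup: fails loudly if the label disappears.
    for button in buttons:
        if button["label"] == label:
            return button
    raise LookupError(f"no button labelled {label!r}")

def click_by_index(i: int) -> dict:
    # Positional lookup: silently clicks whatever now sits at index i,
    # even after someone inserts a new button above it.
    return buttons[i]

print(click_by_label("Enable Frobzinator")["label"])
```

Real frameworks expose the same idea through the accessibility tree, e.g. Playwright's `page.get_by_role("button", name="Enable Frobzinator")`.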