I've looked over the code, and some things seem a little odd to me.
The article starts by mentioning how insecure the browser is, to the point that apparently even cookies aren't secure. But then the API to talk to the BFF uses... a server-side session tracked via a client cookie. If the BFF is holding the OAuth credentials, then someone could steal that client cookie and make requests to the BFF to do whatever it can do.
It's not impossible to secure the browser against having credentials stolen from inside it, but it can be tricky to ensure that the credential doesn't leak somewhere along the way when the browser sends it with a request.
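For what it's worth, the baseline mitigation is making that session cookie as hard to exfiltrate as possible. A minimal sketch, assuming an Express-style BFF (the endpoint and cookie names are my own, not from the article):

```typescript
// Sketch: hardening the session cookie on the browser<->BFF hop.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();

app.post("/login", (_req, res) => {
  // Placeholder: a real BFF would create the session server-side and
  // associate the OAuth tokens with it there.
  const sessionId = randomUUID();

  res.cookie("bff_session", sessionId, {
    httpOnly: true,     // page JS can't read it via document.cookie
    secure: true,       // only ever sent over HTTPS
    sameSite: "strict", // not attached to cross-site requests
    path: "/api",       // scoped to the BFF API
  });
  res.sendStatus(204);
});

app.listen(3000);
```

HttpOnly stops script from reading the cookie outright, but injected script can still make same-origin requests that carry it, which is the theft/riding problem above.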
There's some irony here: OAuth now has DPoP, which can reduce the usefulness of stolen in-flight credentials, but it can't be used in this BFF setup because the browser client needs to hold the private key to sign the requests.
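To make the DPoP point concrete, here's roughly what the browser side looks like with WebCrypto; a sketch using the RFC 9449 claim names, with helper names my own. The key is generated non-extractable, which is the whole appeal: it can sign proofs but can never leave the browser, and a BFF holding the tokens server-side can't sign on its behalf.

```typescript
// Base64url-encode bytes without padding (JWS style).
function b64url(data: ArrayBuffer | Uint8Array): string {
  const bytes = data instanceof Uint8Array ? data : new Uint8Array(data);
  let s = "";
  for (const b of bytes) s += String.fromCharCode(b);
  return btoa(s).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// Generate once per session and keep (e.g. in IndexedDB); inline here
// for brevity. extractable=false means the private key cannot be exported.
async function makeDpopKeys(): Promise<CryptoKeyPair> {
  return crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["sign", "verify"],
  );
}

// Build a DPoP proof JWT for a single request.
async function dpopProof(keys: CryptoKeyPair, htm: string, htu: string) {
  const jwk = await crypto.subtle.exportKey("jwk", keys.publicKey);
  const header = { typ: "dpop+jwt", alg: "ES256", jwk };
  const payload = {
    jti: crypto.randomUUID(),           // unique id per proof
    htm,                                // HTTP method being protected
    htu,                                // target URI being protected
    iat: Math.floor(Date.now() / 1000),
  };
  const enc = new TextEncoder();
  const input =
    b64url(enc.encode(JSON.stringify(header))) + "." +
    b64url(enc.encode(JSON.stringify(payload)));
  const sig = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keys.privateKey,
    enc.encode(input),
  );
  return `${input}.${b64url(sig)}`;
}
```

A stolen access token alone is then useless to an attacker who can't also produce these signatures, but none of that helps when the thing being stolen is the BFF session cookie itself.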
Properly securing the browser content on a login page, or on the subdomain handling authentication credentials, is definitely a challenge, and many don't like having to eliminate or audit every piece of third-party JS they include on the page. I can see the appeal of a solution like this, but the trade-off isn't great.
Self-hosted FreshRSS, NetNewsWire on Mac, Fluent Reader on Linux/Windows/iOS. Any reader compatible with the Google Reader API works with FreshRSS, and Fluent Reader has the nicest UI I've seen (it hasn't been updated recently, but I don't need new features).
I was able to make a uWebsockets adapter for NestJS pretty easily. It's a bit of a sensitive library to integrate, though: a single write after the connection is gone and you get a segfault, which means a lot of checking before writing if you've yielded since you last checked. This was a few years ago; perhaps they've fixed that.
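For anyone curious, the guard pattern looked roughly like this (uWebSockets.js API from memory, so details may differ; `buildReply` is a stand-in for whatever async work you do):

```typescript
import uWS from "uWebSockets.js";

// Track sockets that have closed; writing to one of these used to segfault.
const closed = new WeakSet<object>();

uWS.App()
  .ws("/*", {
    close: (ws) => {
      closed.add(ws); // from here on, any write is fatal
    },
    message: async (ws, msg) => {
      const reply = await buildReply(msg); // we yielded here...
      if (closed.has(ws)) return;          // ...so re-check before writing
      ws.send(reply);
    },
  })
  .listen(3000, () => {});

async function buildReply(msg: ArrayBuffer): Promise<string> {
  return "echo: " + new TextDecoder().decode(msg);
}
```

The annoying part is that every await between receiving a message and writing the reply is a window for the close to land.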
I was under the impression that the underlying net/http library uses a new goroutine for every connection, so each websocket gets its own goroutine. Or is there somewhere else you were expecting goroutines in addition to the one per connection?
That's just an arms race. The kid will find a new favorite website to play games on; there seems to be no end to them, and there are endless websites out there more appealing than doing homework. I have a very locked-down network, and there's always some new website with games of some sort to play.
If schools are going to provide these things, they should whitelist the sites the kids might need to access and block everything else. Telling parents to try to block things is not realistic.
With SSDs costing under $50/TB now, it's hard to see why you couldn't put everything the kids need onto the laptop itself. The entirety of Wikipedia with pictures is 110 GB. Throw in a selection of reference books, videos, and software, and there's essentially no reason to have it go online. Provision it with the full year's worth of material at the beginning of the year and that's it.
Definitely agree this is possible and a great idea, but I think one challenge might be if you need online access on the school laptop to do the majority of the homework. Not sure if that's the OP's case.
I self-host Immich and it's definitely my favorite web photo system. One thing about Ente that aligns more with Mozilla's approach to data, however, is end-to-end encryption, which Ente has but Immich doesn't. So I can see why Mozilla funded this option instead.
I personally wish self-hosting were a more reliable and simpler process for the average person, so that simpler and more powerful software like Immich would be the best choice for everyone.
Self-hosted Immich doesn't need end-to-end encryption, and the lack of it enables a number of very useful server-side features. If your end-to-end encryption has not undergone a security audit, it's as good as having no encryption at all.
Yes, that was kind of my point. Self-hosting negates the need, but most people can't self-host... so that leaves end-to-end encryption as the best intermediate step.
The mix package manager for Elixir has a release option which compiles and bundles everything into a single binary. It appears possible to use Gleam libraries/code with mix, which should allow one to compile it all down to a single binary as well (though I haven't attempted this myself).
notably though, a release isn’t itself runnable in the same way a go binary artifact is, for example. there are a couple of projects like burrito that create runnable artifacts but in my (limited) experience with them they can be a little finicky.
I don't think that's true, or at least I'd be surprised, since I've never heard of it; my understanding is that due to the multilayered nature of running Elixir, this is actually difficult to do. I know about Burrito, but that's not the same thing.
I've been self-hosting a lot of things on a home Kubernetes cluster lately, though via GitOps using Flux (apparently this genre is now called home-ops?). I was kind of expecting this article to be along those lines, using the fairly popular GitOps starter template cluster-template: https://github.com/onedr0p/cluster-template
I set one of these up on a few cheap ODROID-H4s and have quite enjoyed having a fairly automated (though of course quite complex) setup that has centralized logging, metrics, dashboards, backups, etc., by copying/adapting other people's setups.
I really wish The Lounge supported something like a PostgreSQL/MySQL backend. Having to keep state in files on a persistent volume is a pain for any app; it's so much nicer when I can just connect to a DB _elsewhere_. The *arr media apps recently added support for PostgreSQL.
TIL about Talos (https://github.com/siderolabs/talos, via your github/onedr0p/cluster-template link). I'd previously been running a k3s cluster on a mixture of x86 and ARM (RPi) nodes, and frankly it was a bit of a PITA to maintain.
Talos is great. I'd recommend using Omni (from the same people) to manage Talos. I was surprised how easy it was to add new machines with full disk encryption managed by remote keys.
Practically, it's not a problem, as you can always create a privileged container and mount the root filesystem into it. I have an alias I use for exactly such things.
I remember when I saw a presentation by the macaroon authors a few years back, there were pending patents Google had filed around them. While the authors claimed Google wouldn't sue anyone, I'm always a bit skeptical about such claims. I thought macaroons would be helpful for some of my use-cases, but since I now knew about the patents, using them would be wilful infringement, so I didn't bother.
I can't find the patents now, so perhaps they were rejected or withdrawn. I had assumed that was why macaroons hadn't caught on more widely.
There are so many stupid patents out there covering everything we could possibly work on that it's actually reassuring to see some of them assigned to Google, rather than to some storefront in Marshall, Texas.
What does a "open pledge" like that realistically mean, in case they someday broke that pledge? Would the court-case 100% surely get thrown out? Am I legally protected because of this pledge?
I guess if one looks at those, it'll look like they won't sue you?
But my hypothetical is: what if they do sue you? Does this actually protect you or not? Does a "pledge" carry enough weight to change the outcome of a court case?
That's my take on the death of "don't be evil". DoubleClick was the worst kind of company, and the merger with Google seems to have diluted enough evil into Google that Google's immune system failed to kill it.
maybe this is similar to Google's patenting of dropout for neural networks? you can never know, but so far there haven't been many adverse effects, and they claim that they patent it so others can't maliciously patent and enforce it.
That was what the authors claimed when I asked them about the macaroon patent. It'd be nice if Google attached a legal document to patents they never plan to enforce, or one spelling out the constraints on when they might enforce them (e.g. only against patent trolls), that a company could rely on.
I don't disagree, but maintaining an arsenal of defensive patents is enterprise IP legal 101. All the big companies do this. The goal is to avoid litigation through mutually assured destruction. At least that's what they tell you. Many projects grant you patent rights as part of, for example, an OSS project's license.
It's not unusual to include a clause that voids any such grant if you litigate over your own patents, for example, hence the mutually assured destruction. Can you imagine what would happen if Google and Facebook tried to duke out some dumb software patent in court? A waste for all parties.