Based on what I take to be the author's position here ("Flatpak on top of immutable distros is the future of Linux"?), I can see how the author produced the text.
As an Aussie, I'd say instant is never good; it's the minimum acceptable coffee. If your coffee is worse than instant (yes, you, LAX: how do you make coffee taste like literal dirt water?!), then you should learn to make coffee properly!
The checksums are verified automatically, based on a key bootstrapped by the original install (which could be, though likely isn't, verified by other means). As happened with xz, you either get everyone or no-one.
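A minimal local sketch of the checksum step (file names here are illustrative; in a real repo the checksum list itself is signed with the key bootstrapped at install time):

```shell
# Simulate the repo side: publish a checksum list for a "package".
echo "package contents" > pkg.deb
sha256sum pkg.deb > SHA256SUMS      # in a real repo this list is GPG-signed

# Simulate the client side: verify before installing.
sha256sum --check SHA256SUMS        # prints "pkg.deb: OK" on success
```

The point is that the client never has to trust the download path, only the key that signed the checksum list.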
I'm not sure how someone is supposed to use attestations if PyPI refuses to support the forge they use? And I'm not sure how this prevents a package from being maliciously uploaded via GitHub Actions? To me, this is going to lead to another bincode incident, because it conflates trust in the maintainer with trust in the platform.
Is it 99% of users? Of the Linux (desktop/laptop) users I know, the majority use X-forwarding over SSH at least occasionally, and the non-Linux (desktop/laptop) users do too (this is in an academic context). So while this may be an improvement for a subset of Linux desktop/laptop users, across the whole Linux user base (excluding both Android, which does not use Wayland, and embedded, which I understand does use Wayland), it's not.
I don't think I have used X-forwarding in the last 10 years except to check whether it's still there. Most of the time it was, but running a browser, even on a nearby machine, was not a pleasant experience. Running Emacs was less bad, but the only things that actually worked well were probably xlogo and xload.
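For anyone who hasn't seen it, the whole feature is one flag (host name hypothetical; needs a local X server running):

```shell
# -X enables X11 forwarding (-Y for "trusted" forwarding, fewer restrictions);
# xlogo runs on the remote machine but renders on the local display.
ssh -X somehost xlogo
```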
It does do a fair amount of filtering of submissions, and it's a long-term archive (i.e. for the next 100+ years). I suspect both (with the former dominating) are the issue.
everyone has a first time they see a thing and don't yet know what it is.
Using a brand as a filter, where you have to already know what it means for the filter to work, is exactly the opposite of what a brand is supposed to achieve.
Consider the most exclusive (successful) brands that exist. Even there, where exclusivity is a brand goal, none of them have this property of being obscure on first contact.
You usually get introduced to it by your academic supervisor or collaborators as a master's or PhD student. If you're a solo researcher who has made a significant contribution at the frontier of science, I'm sure you'll be able to understand how arXiv works as well, because I assume you've had some conversations with other experts in the field. If you're a full-on autodidact with no contact with any other researchers in the field, well, maybe it's better if you chat with some other people in that field first.
It's reasonable to have a trade-off here to avoid cranks and, now, AI-psychosis slop. You can still post on ResearchGate, academia.edu, or your own GitHub page or web hosting.
Is an SSH jump server a VPN (or is forwarding a port from another machine a VPN)? I'd suggest neither is, because both are connection-based rather than setting up a network (with routing etc.). Absent a network, it's a proxy (which can be used like some deployments of a VPN).
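To make the comparison concrete, the three SSH modes in question (host names hypothetical) all operate per-connection rather than installing routes or network interfaces:

```shell
# Jump host: one TCP connection to "internal", relayed through "jump".
ssh -J jump internal

# Local port forward: one local port mapped to one remote service.
ssh -L 8080:internal:80 jump

# Dynamic (SOCKS) proxy: the most VPN-like mode, but still per-connection.
ssh -D 1080 jump
```

In none of these does the OS routing table change, which is the distinction I'm drawing.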
I see your point, but I think that might label many uses of WireGuard in Tailscale "not a VPN", because they use imaginary network devices that exist only inside the tailscale process. Saying that would feel very wrong. On the other hand, if process internals can be the deciding factor, then optimizing the code one way or the other could change whether a system is a "VPN" or "not a VPN" even though it looks exactly the same from the outside. That doesn't feel great either.
And do we even know if Opera uses internal network addresses for its "VPN"?
I think I'm willing to say that routing all internet traffic from a program through a tunnel can be called either a VPN or a proxy.
I'm not up to date with the internals of Tailscale, but my impression was that they run additional services on top of the actual VPN (that is their "value-add" over WireGuard), some of which are actual proxies, which hence blurs that line in the minds of users (along with some so-called "VPN" providers who are just providing proxies).
In the modes I'm talking about, there's a real WireGuard VPN that your local tailscale process is participating in. But instead of attaching it to a TUN device, there's a whole virtual networking stack inside the tailscale process.
You could treat it like running a normal VPN app inside a virtual machine. Surely that's still a VPN, or the distinction gets weird. But if we do agree it's a VPN, a couple of examples built on this one will force the distinction to get weird anyway. The line between VPN and not-VPN is surprisingly blurry.
Really none of these VPNs are VPNs either since they don't establish a virtual private network. They are just tunnels for your internet access. Tailscale is actual VPN software. It simulates a private network.
I have a Bachelor of Science (first) in computer science, and I'm currently doing the dissertation for a master's in cyber security, en route for a first, though that might change depending on the mark for this dissertation.
My experience with the bachelor's was that despite my project being derailed by the bullshit around formatting the document, doing "research" by searching the library for peer-reviewed papers that backed up my claims, etc., etc., I got an excellent mark. In short, I set out to make something and, due to the academic processes, failed to make anything, but because I was able to critically reflect on it, I got a good mark. A waste of time, unless all you were after was a good mark.
For my masters I know the project doesn't matter, I'm concentrating on the academic nonsense because that's where the marks are.
The work you were given in your undergraduate and master's was not research, it was homework. The task was critical reflection, which is repeatable and achievable for students; whereas research is expensive, one-off, generally out of reach for undergrads, and requires intensive oversight by an experienced researcher.
The waste of time would be for a professor to train you up to be a researcher before you’ve proven you are ready, hence the homework assignments.
If that's the case, and research is way above master's level, then how does anyone get onto a PhD? Genuine question. If everything I've done to date is a pale imitation of the real thing, how can I make a fair assessment of whether I want to pursue a PhD?
You don’t, really, which is why a lot of people become researchers only to discover they hate it. But that’s true of all things.
I think the way to know if you want to be a researcher is more along the lines of: do you like finding the answers to questions no one has thought to ask, let alone answered? If so, then it doesn’t really matter what training you’ve had or how much of the field you’ve experienced; you can focus on that bit as your guiding force.
No, it's not about whether it's a master's or a PhD; it's about whether you did something new (the novelty aspect). It sounds to me like you did a coursework master's of some kind, which gave you some basic literature-analysis projects. That is like the first month of any research project, and exists so you understand the context of the project. The actual work is doing the novel thing, and dealing with the repeated failures.
My suggestion is do a summer research project, and see if you enjoy it. If no-one will take you on, reflect on why that is (and to me that's a strong reason not to do it).
Where do you think they get the training data from ;) Galaxy Zoo has been used to train ML models for at least a decade; it's a standard dataset for intro-to-ML courses.