Hacker News | roblabla's comments

> The best solution is skin-in-the-game, for-profit enterprise coupled with rigorous antitrust enforcement.

Don't we have enough examples showing that this simply cannot work long-term, because the for-profit enterprises will _inevitably_ grow larger than the government can handle through antitrust? And once they reach that size, they become impossible to rein in. Just look at all the stupidly large American corporations that can't be broken up anymore, because the corporation has the lobbying power and media budget to make any attempt to enforce antitrust a career killer for a politician.

I think it's very myopic to say that corporate structure is the "best solution".


> to make any attempt to enforce antitrust a career killer for a politician

Any example of a politician whose career was killed by an attempt to enforce antitrust?


Biden.

Him putting Lina Khan in charge of antitrust enraged the tech oligarchs, who then all went MAGA and bought Trump the election.


> went MAGA and bought Trump the election

Didn’t Harris actually raise and spend more than Trump on that election?


Yeah but the tech spend was way more effective. Elon took over Twitter.

It seems like you have an unfalsifiable belief. If one side raises more money and wins, it's because of the money. If one side raises more money and loses, it's still the money, because the other side spent it more effectively.

You almost got it. We all lose as long as money determines power in social relations.

So no, they didn't "buy Trump the election".

And the fact that a 3rd party supports an opponent does not kill any politician's career. Biden retired by himself, following his own party's pressure. And Harris is still around, I believe.


Of course they did. They used their capital to influence democracy. That's capitalism baby!

Not a bad thing if you desire corporate power to eventually become the main force shaping the world :)

To be fair, that seems to be where some of the AI lawsuits are going. The argument goes that the models themselves aren't derivative works, but the output they produce can absolutely be - in much the same way that reproducing a book from memory could be a copyright violation, trademark infringement, or generally run afoul of the various IP laws.


> Traits? Nope. We need some way for code reuse.

Says who? You can totally do code reuse using manually-written dynamic dispatch in "rust without traits". That's how C does it, and it works just fine (in fact, it's often faster than Rust's monomorphization approach, which results in a huge amount of code bloat that is often very unfriendly to the icache).

Granted, a lot of safety features depend on traits today (Send/Sync, for instance), but traits are a much more powerful and complex feature than you need for all of this. It seems to me that it's absolutely possible to create a simpler language than Rust that retains its borrow checker and thread-safety capabilities.

Now whether that'd be a better language is up to individual taste. I personally much prefer Rust's expressiveness. But not all of it is necessary if your goal is only "get the same memory and thread safety guarantees".
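To illustrate the point, C-style code reuse without the trait system can be sketched in today's Rust with plain function pointers. This is only an illustration; the `Writer` struct and function names are invented, not from any real codebase:

```rust
// Hypothetical sketch: manually-written dynamic dispatch, the way a C
// program would do it with a struct of function pointers, expressed in
// Rust without touching the trait system.
struct Writer {
    // Hand-rolled "vtable": one function pointer per operation.
    write: fn(&mut Vec<u8>, &[u8]),
}

fn write_plain(buf: &mut Vec<u8>, data: &[u8]) {
    buf.extend_from_slice(data);
}

fn write_upper(buf: &mut Vec<u8>, data: &[u8]) {
    buf.extend(data.iter().map(|b| b.to_ascii_uppercase()));
}

fn main() {
    let writers = [Writer { write: write_plain }, Writer { write: write_upper }];
    let mut buf = Vec::new();
    for w in &writers {
        // A single shared call site: dispatch goes through the pointer,
        // with no monomorphized copies of the calling code.
        (w.write)(&mut buf, b"hi ");
    }
    assert_eq!(buf, b"hi HI ");
}
```

The trade-off is exactly the one described above: one copy of the calling code (icache-friendly), at the cost of an indirect call and no compiler-enforced contract on what the function pointers must do.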


> Says who? You can totally do code reuse using manually-written dynamic dispatch in "rust without traits". That's how C does it, and it works just fine.

Rust can monomorphize functions when you pass in types that adhere to specific traits. This is super-handy, because it avoids a bounce through a pointer.

The C++ equivalent would be a templated function call with concept-enforced constraints, which was only well-supported as of C++20 (!!!) and requires you to move your code into a header or module.

Zig can monomorphize with comptime, but the lack of a trait-based constraint mechanism means you either write your own constraints by hand with reflection or rely on duck typing.

C doesn't monomorphize at all, unless you count preprocessor hacks.
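The distinction can be shown in Rust itself (a minimal sketch; the two function names are invented for illustration):

```rust
use std::fmt::Display;

// Generic, trait-bounded: the compiler emits a separate (monomorphized)
// copy of this function for each concrete T it is called with, so the
// Display method call is direct -- no bounce through a pointer.
fn describe_mono<T: Display>(x: T) -> String {
    format!("value: {x}")
}

// Dynamic dispatch: one shared copy of the function; the Display
// implementation is reached through a vtable pointer at runtime.
fn describe_dyn(x: &dyn Display) -> String {
    format!("value: {x}")
}

fn main() {
    assert_eq!(describe_mono(7), "value: 7");     // instantiates describe_mono::<i32>
    assert_eq!(describe_mono("hi"), "value: hi"); // instantiates describe_mono::<&str>
    assert_eq!(describe_dyn(&7), "value: 7");     // one shared function for all types
}
```

Both calls look identical at the source level; the `T: Display` bound is what lets the compiler specialize the first one per type, which is the "super-handy" part (and also the source of the code bloat mentioned upthread).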


At what point did they make it _worse_? Tailwind didn't remove any existing functionality here. What they did was refuse to merge a PR while they're trying to figure out how to navigate a difficult financial problem, all while being fully transparent about what's going on, and saying that they're open to merging the PR if/when they manage to get things together.

This is very different from, say, the minio situation, where they were actively removing features before finally closing development down entirely. Whether Tailwind will end up going down this route, time will tell. But as of right now, I find this reading to be quite uncharitable.


It's not even functionality in the library code; it's a PR to their docs. If you just want optimized docs for your LLM to consume, isn't that what [Context7](https://context7.com/websites/tailwindcss) already has? Why force this new responsibility onto the maintainers?


C and C++ as defined by their current standards are memory unsafe. You may argue that some specific implementations manage to stay as memory safe as they can get away with, but even then, features like unions prevent a fully memory-safe implementation.


> C and C++ as defined by their current standards are memory unsafe.

I don’t think the spec says one way or another (but please correct me if you find verbiage indicating that the language must be memory unsafe).

It’s possible to make the whole language memory safe, including unions. It’s tricky, but possible.

Someone else mentioned Fil-C but Fil-C builds on a lot of prior art. The fact that C and C++ can be memory safe is no secret to those who understand language implementation.


By definition, C and C++ are memory safe as long as you follow the rules. The problem is that the rules cannot be automatically checked and in practice are the source of innumerable issues, from straight-up bugs to subtle standards violations that trigger the optimizer to rewrite your code into what you didn't intend.

But yes, Fil-C is a huge improvement (AFAIK, though, it doesn't solve the UB problem - it just guarantees you can't have a memory safety issue as a result).


> By definition, C and C++ are memory safe as long as you follow the rules.

This statement doesn't make sense to me.

Memory safety is a property of language implementations, which is all about what happens when the programmer does not follow the rules.

> The problem is that the rules cannot be automatically checked and in practice are the source of unenumerable issues from straight up bugs to subtle standards violations that trigger the optimizer to rewrite your code into what you didn’t intend.

They can be automatically checked and Fil-C proves this. The prior art had already proved it before Fil-C existed.

> But yes, fil-c is a huge improvement (afaik though it doesn’t solve the UB problem - it just guarantees you can’t have a memory safety issue as a result)

Fil-C doesn't have UB. If you find anything that looks like UB to you, please file a GH issue.

Let's also be clear that you're referring to nasal demons specifically, not UB generally. In some contexts, like CPU ISAs, UB means a trap, rather than nasal demons. So let's use the term "nasal demons".

C and C++ only have nasal demons because:

- Policy decisions. For example, making signed integer addition have nasal demons is because someone wanted to cook a benchmark.

- Lack of memory safety in most implementations, combined with a refusal to acknowledge what happens when the wrong kind of memory access occurs. (Note that CPU ISAs like x86 and ARM are not memory safe, but have no nasal demons, because they do define what happens when any kind of memory access occurs.)

So anyway, Fil-C has no nasal demons, because:

- I turned off all of those silly policy decisions for cooking benchmarks.

- The memory safety means that I define what happens when the wrong kind of memory access occurs: the program gets killed with a panic.


First, let me say that I really respect the work you’re doing in fil-c. Nothing I say is intended as a knock and you’re doing fantastic engineering work moving the field forward and I hope you find success.

That’s good to know about nasal demons. Are you saying you somehow inhibit the optimizer from injecting a security vulnerability due to UB, à la https://www.cve.org/CVERecord?id=CVE-2009-1897 ? I’m kinda curious how you trick LLVM into not optimizing through UB, since its UB model is so tuned to the C/C++ standard.

Anyway, Fil-C currently only works on (a lot of, but I think not yet all of?) Linux userspace, while C and C++ as standard language definitions span a lot more environments. I agree the website should call out Fil-C as memory safe, but I think it’s also fair to say that Fil-C is more an independent dialect of C/C++ (e.g. you do have to patch some existing software) - IMHO it’s too confusing for communicating out to say that C/C++ is memory safe, and I’d rather it say something like "Fil-C is memory safe" or "C/C++ code running under Fil-C is memory safe".

> Memory safety is a property of language implementations, which is all about what happens when the programmer does not follow the rules.

By this argument no language is memory safe, because every language has bugs that can result in memory safety issues. rustc certainly has soundness issues that haven’t been fixed, and I believe this is also true of Python, JavaScript, etc., but I think it’s an unhelpful bar or framing of the problem. The language itself is memory safe, and any safety issues within the language spec or implementation are a bug to be fixed. That isn’t true of C/C++, where there will always exist environments where it’s impossible to even have a memory-safe implementation (e.g. microcontrollers), let alone mandate one in the spec. And Fil-C does have a performance impact, so some software may never be a good fit for it (e.g. video encoders/decoders). For example, a non-memory-safe conforming implementation of JavaScript is not possible. The same goes for safe Rust, Python, or Java. By comparison, that isn’t true for C/C++.


At a certain point, it's a trade-off. A systems language will offer facilities that can be used to break encapsulation and abstractions, and access memory as a sequence of bytes. (Anything capable of file I/O on stock Linux can write to /proc/self/mem, for example.) The difference from (typical) C and C++ is that these facilities are less likely to be invoked by accident.

Reasonable people will disagree about what memory safety (and type safety) mean to them. Personally, bounds checking for arrays and strings, some solution for safe deallocation of memory, and an obviously correct way to write manual bounds checks are more interesting to me than (for example) no access to machine addresses and no FFI.

Regarding bounds checking, GNAT offers some interesting (non-standard) options: https://gcc.gnu.org/onlinedocs/gnat_ugn/Management-of-Overfl... Basically, you can write a bounds check in the most natural way, and the compiler will evaluate the check with infinite precision (or almost, to improve performance). With standard semantics, you might end up with an exception in some corner cases where the check should pass. I wish more languages would offer something like this. Among widely used languages, only Python offers this capability, because it uses infinite-precision integers.
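The pitfall in a "naturally written" bounds check can be sketched in Rust (an illustration of the underlying problem, not of GNAT's mechanism; function names and values are invented):

```rust
// Sketch: why a naively-written bounds check is wrong in fixed-width
// arithmetic, and one way to fix it without infinite precision.
fn in_bounds_naive(base: usize, len: usize, limit: usize) -> bool {
    // Models C-style / release-mode semantics, where the addition
    // silently wraps: an out-of-bounds range can appear to pass.
    base.wrapping_add(len) <= limit
}

fn in_bounds_checked(base: usize, len: usize, limit: usize) -> bool {
    // checked_add returns None on overflow, so the wrapped case is
    // rejected instead of silently passing the comparison.
    match base.checked_add(len) {
        Some(end) => end <= limit,
        None => false,
    }
}

fn main() {
    let base = usize::MAX - 1;
    assert!(in_bounds_naive(base, 4, 100));    // wraps to 2: bogus "in bounds"
    assert!(!in_bounds_checked(base, 4, 100)); // overflow correctly rejected
}
```

What the GNAT options (and Python's integers) buy you is that the first, natural spelling is simply evaluated as if with unbounded integers, so you never have to remember to write the second form.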


> Are you saying you somehow inhibit the optimizer from injecting a security vulnerability due to UB ala https://www.cve.org/CVERecord?id=CVE-2009-1897 ? I’m kinda curious how you trick LLVM into not optimizing through UB since it’s UB model is so tuned to the C/C++ standard.

Yes that is inhibited. There’s no trick. LLVM (and other compilers) choose to do those stupid things by policy, and the policy can be turned off. It’s not even hard to do it.

> Fil-C is more an independent dialect of C/C++ (eg you do have to patch some existing software)

Fil-C is not a dialect. The patches are similar to what you’d have to do if you were porting a C program to a new CPU architecture or a different compiler.

> By this argument no language is memory safe because every language has bugs that can result in memory safety issues.

You rebutted this argument for me:

> any safety issues within the language spec or implementation are a bug to be fixed

Exactly this. A memory safe language implementation treats outstanding memory safety issues as a bug to be fixed.

This is what makes almost all JS implementations, and Fil-C, memory safe.


The standard(s) very often say that a certain piece of C code has undefined behavior. Having UB means that there is behavior that is not necessarily explainable by the standard. This includes e.g. the program seemingly continuing just fine, the program crashing, or arbitrary code running as part of an exploited stack buffer overflow.

Now, certain implementations of C might give you more guarantees for some (or all) of the behavior that the standard says is undefined. Fil-C is an example of an implementation taking this to the extreme. But it's not what is meant when one just says "C." Otherwise I would be able to compile my C code with any of my standard-compliant compilers and get a memory-safe executable, which is definitely not the case.


Question: why is a union memory unsafe?

My meager understanding of unions is that they allow data of different types to be overlayed in the same area of memory, with the typical use case being for data structures that may contain different types of data (and the union typically being embedded in a struct that identifies the data type). This certainly presents problems with the interpretation of data stored in the union, but it also strikes me that the union object would have a clearly defined size and the compiler would be able to flag any memory accesses outside of the bounds of the union. While this is clearly problematic, especially if at least one of the elements is a pointer, it also seems like the sort of problem that a compiler can catch (which is the benefit of Rust on this front).

Please correct me if I'm wrong. This sort of software development is a hobby for me (anything that I do for work is done in languages like Python).


A trivial example of this would be a tagged union that represents variants with control structures of different sizes; if the attacker can induce a confusion between the tag and the union member at runtime, they can (typically) perform a controlled read of memory outside of the intended range.

Rust avoids this by having sum types, as well as preventing the user from constructing a tag that’s inconsistent with the union member. So it’s not that a union is inherently unsafe, but that the language’s design needs to control the construction and invariants of a union.
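Roughly, the safe-Rust counterpart of the tagged union looks like this (a sketch with invented names):

```rust
// Sketch: a Rust enum is a tagged union whose tag can never be set
// independently of the payload, so the tag/member confusion described
// above cannot be constructed in safe code.
enum Value {
    Ptr(Box<i64>), // "pointer" variant
    Int(i64),      // "integer" variant
}

fn read(v: &Value) -> i64 {
    // The only way to reach the payload is to match on the tag;
    // reading Ptr's payload as an integer (or vice versa) won't compile.
    match v {
        Value::Ptr(p) => **p,
        Value::Int(i) => *i,
    }
}

fn main() {
    assert_eq!(read(&Value::Ptr(Box::new(42))), 42);
    assert_eq!(read(&Value::Int(7)), 7);
}
```

The compiler, not the programmer, writes and checks the tag, which is exactly the "control the construction and invariants" point.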


Canonical example:

    union {
        char* p;
        long i;
    };
Then say that the attacker can write arbitrary integers into `i` and then trigger dereferences on `p`.


The standard does not assign meaning to this sequence of execution, so an implementation can detect this and abort. This is not just hypothetical: existing implementations with pointer capabilities (Fil-C, CHERI targets, possibly even compilers for IBM i) already do this. Of course, such C implementations are not widely used.

The union example is not particularly problematic in this regard. Much more challenging is pointer arithmetic through uintptr_t, because it's quite common. It's probably still solvable, but at a certain point, changing the sources becomes easier, even at scale (say, if something uses the %p format specifier with sprintf/sscanf).


> The standard does not assign meaning to this sequence of execution, so an implementation can detect this and abort.

Real C programs use these kinds of unions, and real C compilers ascribe bitcast semantics to this union. LLVM has a lot of heavy machinery to make sure that the programmer gets exactly what they expected here.

The spec is brain damage. You should ignore it if you want to be able to reason about C.

> This is not just hypothetical: existing implementations with pointer capabilities (Fil-C, CHERI targets, possibly even compilers for IBM i) already do this

Fil-C does not abort when you use this union. You get memory safe semantics:

- you can use `i` to change the pointer’s intval. But the capability can’t be changed that way. So if you make a mistake you’ll end up with an OOB pointer.

- you can use `i` to read the pointer’s current intval just as if you had done an ptrtoint cast.

I think CHERI also does not abort on the union itself. I think storing to `i` removes the capability bit so `p` crashes on deref.

> The union example is not particularly problematic in this regard. Much more challenging is pointer arithmetic through uintptr_t because it's quite common.

The union problem is one of the reasons why C is not memory safe, because C compilers give unions the expected structured assembly semantics, not whatever nonsense is in the spec.


He's talking about Fil-C.


Procmon won't show you every type of resource access. Even when it does, it won't tell you which entity in the resource chain caused the issue.

And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access and then try to write to the file, you end up with permission errors during the write (and not the open), and end up debugging for hours on end only to discover that some shitty security product is doing stupid stuff...

Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.


Especially when the permission issue is up the chain from the application. Sure it is allowed to access that subkey, but not the great great grandparent key.


Shitty security products being inscrutable isn't limited to Windows. "Disable SELinux" anyone?


While that's true, Linux _tends_ to follow the rules a bit better, and not change how APIs work from under your feet. For instance, on Linux, permission checks are done when you open a handle. An LSM like SELinux can only allow or deny your right to open the handle at the permission level you requested; that's it. It cannot allow the handle to be opened but with fewer privileges than requested, nor can it do permission checks at operation time. So once your open is successful, you can be pretty sure that you've cleared the permission-check bar and are good to go.

This makes writing robust code under those systems a lot easier, which in turn makes debugging things when they go wrong nicer. Now, I'm not going to say debugging those systems is great - SELinux errors are still an inscrutable mess, and writing SELinux policy is fairly painful.

But there is real value in limiting where errors can crop up, and how they can happen.

Of course, there is stuff like FUSE that can throw a wrench into this: instead of an LSM, a Linux security product could write its own FS overlay to do these kinds of shenanigans. But those seem to be extremely rare on Linux, whereas they're very commonplace on Windows - mostly because MS doesn't provide the necessary tools to properly write security modules, so everyone's just winging it.
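The open-time model described above is visible from ordinary application code. A minimal sketch (the temp-file name is invented; the hedged claim is only that the access-control decision happens at `open`, not that later writes can never fail for other reasons like a full disk):

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Sketch: request write access up front. On Linux, the permission
// check (including any LSM decision) happens when the handle is
// created -- if open() succeeds at this access level, later writes
// won't be failed by the same permission machinery.
fn demo() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("open_time_check_demo");
    let mut f = OpenOptions::new()
        .create(true)
        .write(true)
        .open(&path)?; // access check happens here
    f.write_all(b"ok")?; // may fail for I/O reasons, not permissions
    std::fs::remove_file(&path)?;
    Ok(())
}

fn main() {
    demo().expect("open-time permission demo failed");
}
```

This is the invariant the comment above relies on: error handling can be concentrated at the `open` call instead of being smeared across every subsequent read and write.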


At this point you're just arguing for the sake of bashing on Microsoft. You said it yourself, that's not related to ACL, so what are you doing, mate? This is not healthy foundation for a constructive discussion.


Linux (well, more accurately, X11) has had a SAK for ages now, in the form of CTRL+ALT+BACKSPACE, which immediately kills X11, booting you back to the login screen.

I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.


That's not a SAK, you can disable it with setxkbmap. A SAK is on purpose impossible to disable, and it exists on Linux: Alt+SysRq+K.

Unfortunately it doesn't take any display server into consideration, both X11 and Wayland will just get killed.


There are many ways to disable CTRL+ALT+DEL on Windows too, from registry tricks to group policy options. Overall, SAK seems to be a relic of the past that should be kept far away from any security consideration.


There shouldn't be any non-privileged ways to disable ctrl-alt-del.


The "threat model" (if anyone even called it that) of applications back then was bugs resulting in unintended spin-locks, and the user not realizing they're critically short on RAM or disk space.


This setup came from the era of Windows running basically everything as administrator or something close to it.

The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.


In this context, it's talking about Internet Research Agency: https://en.wikipedia.org/wiki/Internet_Research_Agency


The founder of the agency is not with us anymore.


the rest of the organization sure is, though


haha, also couldn't understand how the Irish IRA was in any way relevant. Makes a lot more sense now.


Ha all good. I almost put the full name, elected not to for some reason, and here we are


Wouldn't it make sense for a remote control to need access to the local network & devices? Like, without this permission, the only way the controller would work is through a cloud service, so I would personally be pretty happy to discover the app requests this permission, as it would likely mean the app will keep working when LG inevitably shuts down their cloud servers...


You're giving a lot of charity to LG. They're probably trying to fingerprint people with the extra permissions


If you're that paranoid, you _can_ just choose not to fly.

The bigger problem is if the UK has an extradition treaty with the country you live in.

