Hacker News | genrilz's comments

There is plenty of research on social media outcomes. I've looked through some of it before by just searching semanticscholar.org, and the general consensus is that it has both positive and negative effects.

I don't want to have to do a literature review again, and sharing papers is hard because they are often paywalled unless you are associated with a university or are willing to pirate them.

Luckily, The American Psychological Association [0] has shared this nice health advisory [1] which goes into detail. The APA has stewarded psychology research and communicated it to the public in the US for a long time. They have a good track record.

[0]: https://en.wikipedia.org/wiki/American_Psychological_Associa...

[1]: https://www.apa.org/topics/social-media-internet/health-advi...


One non-obvious strategy is that the number of mines that are left on the field is known. Especially near the end, this can break a tie between two patterns of mines.
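As a toy illustration (my own construction, not from any particular game): suppose the visible number clues leave two candidate mine layouts, but the remaining-mine counter rules one of them out.

```rust
// Toy end-game: both candidate layouts satisfy the visible number clues,
// but only one matches the known number of mines left on the field.
fn main() {
    // Candidate layouts, each a set of cell indices that might hold mines
    let candidates: Vec<Vec<usize>> = vec![vec![0, 3], vec![0, 2, 3]];
    let mines_left = 2; // the mine counter shows 2 mines remaining

    let viable: Vec<&Vec<usize>> = candidates
        .iter()
        .filter(|layout| layout.len() == mines_left)
        .collect();

    // The mine count breaks the tie: only one layout is still possible
    assert_eq!(viable.len(), 1);
    assert_eq!(*viable[0], vec![0, 3]);
}
```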


I thought of that but it hasn't been enough for me. Or I'm bad at counting


There isn't a formal definition of how the borrow checking algorithm works, but if anyone is interested, [0] is a fairly detailed if not mathematically rigorous description of how the current non-lexical lifetime algorithm works.

The upcoming Polonius borrow checking algorithm was prototyped using Datalog, which is a logical programming language. So the source code of the prototype [1] effectively is a formal definition. However, I don't think that the version which is in the compiler now exactly matches this early prototype.

EDIT: to be clear, there is a polonius implementation in the rust compiler, but you need to use '-Zpolonius=next' flag on a nightly rust compiler to access it.

[0]: https://rust-lang.github.io/rfcs/2094-nll.html

[1]: https://github.com/rust-lang/polonius/tree/master
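To give a flavor of what "non-lexical" means (my own example, not from the RFC): NLL ends a borrow at its last use rather than at the end of its lexical scope, so code like this compiles today but was rejected by the old lexical checker.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];   // shared borrow of v starts here...
    println!("{first}"); // ...and under NLL ends at its last use
    v.push(4);           // OK with NLL; the old lexical checker rejected this
                         // because `first` was still in scope
    assert_eq!(v.len(), 4);
}
```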


Interesting. The change to sets of loans is notable. Datalog, a relative of Prolog, is not a language family I have much experience with, only a smidgen. As I recall, Datalog engines use some kind of solving, and they are great for certain types of problems and for explorative programming. Analyzing their performance is not always easy, but they are also often used for problems that are already exponential.

I read something curious.

https://users.rust-lang.org/t/polonius-is-more-ergonomic-tha...

>I recommend watching the video @nerditation linked. I believe Amanda mentioned somewhere that Polonius is 5000x slower than the existing borrow-checker; IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.

If I had to guess, that slowdown might be temporary as Polonius gets optimized over time, since otherwise the Rust compiler would end up with two solvers. It would be in line with some other languages if the worst-case complexity class is exponential.


> IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.

Indeed. Based on the last comment on the tracking issue [0], it looks like they have not figured out whether they will be able to optimize Polonius enough before stabilization, or if they will try non-lexical lifetimes first.

[0]: https://github.com/rust-lang/rust-project-goals/issues/118


Actually, the x86 'ADD' instruction automagically sets the 'OF' and 'CF' flags for you, for signed and unsigned overflow respectively [0]. So to panic, all you need to do is follow it with a 'JO' or 'JB' instruction jumping to the panic-handling code. This is about as efficient as you could ask for, but a large sequence of arithmetic operations is still going to gum up the branch predictor, resulting in more pipeline stalls.

[0]: https://www.felixcloutier.com/x86/add
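In Rust terms, the same shape is what 'checked_add' gives you: the sum plus an overflow signal, with a branch to the panic path. (A sketch of the idea, not the exact code the compiler emits.)

```rust
// Sketch: checked addition that panics on overflow. On x86 this can
// compile down to roughly ADD followed by JO into the panic path.
fn add_or_panic(a: i32, b: i32) -> i32 {
    match a.checked_add(b) {
        Some(sum) => sum,
        None => panic!("attempt to add with overflow"),
    }
}

fn main() {
    assert_eq!(add_or_panic(40, 2), 42);
    // add_or_panic(i32::MAX, 1) would panic here
}
```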


The reason '+ 1' is fine in the example you gave is that the length is always less than or equal to the capacity. If you follow 'grow_one', which is called earlier in the function to grow the capacity by one if needed, you will find that it leads to the checked addition in [0], which returns an error that [1] catches and turns into a panic. So using '+ 1' elides a redundant check in release mode while still checking in debug mode, in case future code changes break the 'len <= capacity' invariant.

Of course, if you don't trust the standard library, you can turn on overflow checks in release mode too. However, the standard library is well tested and I think most people would appreciate the speed from eliding redundant checks.

  [0]: https://doc.rust-lang.org/src/alloc/raw_vec.rs.html#651
  [1]: https://doc.rust-lang.org/src/alloc/raw_vec.rs.html#567


If you obscure the implementation a bit, you can change GP's example to a runtime overflow [0]. Note that by default the checks will only occur when using the unoptimized development profile. If you want your optimized release build to also have checks, you can put 'overflow-checks = true' in the '[profile.release]' section of your cargo.toml file [1].

  [0]: https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=847dc401e16fdff14ecf3724a3b15a93
  [1]: https://doc.rust-lang.org/cargo/reference/profiles.html
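For reference, the relevant cargo.toml fragment (per the profile documentation linked above) looks like this:

```toml
# Cargo.toml: keep overflow checks on in optimized release builds
[profile.release]
overflow-checks = true
```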


This is bad because the program behaves differently depending on build flags. What's the point of having a "use unsafe addition" flag, and of having it enabled by default?

Rust developers made a poor choice. They should have made a special function for unchecked addition and have "+" operator always panic on overflow.


The point I'm sure was to prevent the checks from incurring runtime overhead in production. Even in release mode, the overflow will only wrap rather than trigger undefined behavior, so this won't cause memory corruption unless you are writing unsafe code that ignores the possibility of overflow.

The checks being on in the debug config means your tests and replications of bug reports will catch overflows if they occur. If you are working on some sensitive application where you can't afford logic bugs from overflows but can afford panics/crashes, you can just turn on checks in release mode.

If you are working on a library which is meant to do something sensible on overflow, you can use the wide variety of member functions such as 'wrapping_add' or 'checked_add' to control what happens on overflow regardless of build configuration.
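For example, the overflow-handling family on the integer types behaves like this (these are real standard-library methods; the values follow from u8 arithmetic, where 250 + 10 wraps to 4):

```rust
fn main() {
    let x: u8 = 250;
    assert_eq!(x.wrapping_add(10), 4);            // wraps modulo 256
    assert_eq!(x.checked_add(10), None);          // overflow reported as None
    assert_eq!(x.saturating_add(10), u8::MAX);    // clamps at 255
    assert_eq!(x.overflowing_add(10), (4, true)); // wrapped value + flag
}
```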

Finally, if your application can't afford to have logic bugs from overflows and also can't panic, you can use kani [0] to prove that overflow never happens.

All in all, it seems to me like Rust supports a wide variety of use cases pretty nicely.

[0]: https://github.com/model-checking/kani


I think it is important to note that in 59nadir's example, the reason Rust gives an error and Odin doesn't is not memory safety. Rust uses move semantics by default in a loop, while Odin appears to use copy semantics by default. I don't really know Odin, but it seems to be a language without RAII. In that case, copy semantics are fine for Odin, but in Rust they could cause a lot of extra allocations if your vector held RAII heap-allocating objects.

That means you would need to be careful about how you use pointers in Odin, but the choice of moving or copying by default in a loop has nothing to do with memory safety. For reference:

Odin (from documentation):

  for x in some_array { // copy semantics
  for &x in some_array { // reference semantics
  // no move semantics? (could be wrong on this)
Rust:

  for x in vec.iter().copied() { // bytewise copy semantics (only for Copy types)
  for x in vec.iter().cloned() { // RAII copy semantics
  for x in &vec { // reference semantics
  for x in vec { // move semantics
C++:

  for (auto x : vec) { // copy semantics
  for (auto &x : vec) { // reference semantics
  for (auto &&x : vec) { // move semantics
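A small runnable sketch of the Rust variants above (my own example) shows why the distinction matters: borrowing leaves the vector usable afterwards, while moving consumes it.

```rust
fn main() {
    let v = vec![String::from("a"), String::from("b")];

    // Reference semantics: v is only borrowed, so it survives the loop
    let mut total = 0;
    for s in &v {
        total += s.len();
    }
    assert_eq!(total, 2);
    assert_eq!(v.len(), 2); // v is still usable here

    // Move semantics: each String is moved out, and v is consumed
    for s in v {
        drop(s); // we now own each element
    }
    // v can no longer be used here; the loop took ownership
}
```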


I think 0xdeafbeef is roughly recreating the first code snippet from the article (which is one of the things diath is complaining about) in C++ to show that the compiler should produce an error or else undefined behavior could occur on resize.


We get into a bit of a weird space, though, when they know your opinions about them. I'm sure there are quite a few people who could only build a billion dollar startup if someone emotionally supported them in that endeavor. I'm sure more people could build such a startup if those around them provided knowledge or financial support. In the limit, pretty much anyone can build a billion dollar startup if handed a billion dollars. So are these people capable or not capable of building a billion dollar startup?

EDIT: To be clear, I somehow doubt an LLM would be able to provide the level of support needed in most scenarios. However, you and others around the potential founder might make the difference. Since your assessment of the person likely influences the level of support you provide to them, your assessment can affect the chances of whether or not they successfully build a billion dollar startup.


EDIT: A summary of this is that it is impossible to write a sound std::vec::Vec implementation if NonZero::new_unchecked is a safe function. This is specifically because creating a NonZero value which is 0 is undefined behavior, which is exploited by niche optimization. If you created your own `struct MyNonZero(u8)`, then you wouldn't need to mark MyNonZero::new_unchecked as unsafe, because MyNonZero(0) is a "valid" value which doesn't trigger undefined behavior.

The issue is that this could potentially allow creating a struct whose invariants are broken in safe rust. This breaks encapsulation, which means modules which use unsafe code (like `std::vec`) have no way to stop safe code from calling them with the invariants they rely on for safety broken. Let me give an example starting with an enum definition:

  // Assume std::vec has this definition
  struct Vec<T> {
    capacity: usize,
    length:   usize,
    arena:    *mut T
  }
  
  enum Example {
    First {
      capacity: usize,
      length:   usize,
      arena:    usize,
      discriminator: NonZero<u8>
    },
    Second {
      vec: Vec<u8>
    }
  }
Now assume the compiler has used niche optimization so that if the byte corresponding to `discriminator` is 0, then the enum is `Example::Second`, while if the byte corresponding to `discriminator` is not 0, then the enum is `Example::First` with discriminator being equal to its given non-zero value. Furthermore, assume that `Example::First`'s `capacity`, `length`, and `arena` fields are in the same position as the fields of the same name in `Example::Second.vec`. If we allow `fn NonZero::new_unchecked(u8) -> NonZero<u8>` to be a safe function, we can create an invalid Vec:

  fn main() {
    let evil = NonZero::new_unchecked(0);
  
    // We write as an Example::First,
    // but this is read as an Example::Second
    // because discriminator == 0 and niche optimization
    let first = Example::First {
      capacity: 9001, length: 9001,
      arena: 0x20202020,
      discriminator: evil
    };

    if let Example::Second{ vec: mut bad_vec } = first {
      // If the layout of Example is as I described,
      // and no optimizations occur, we should end up in here.

      // This writes 255 to address 0x20202020
      bad_vec[0] = 255;
    }
  }
So if we allowed new_unchecked to be safe, then it would be impossible to write a sound definition of Vec.
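For contrast, the safe constructor `NonZero::new` makes the invariant unbreakable from safe code by returning an Option, and the niche is exactly what lets `Option<NonZero<u8>>` stay one byte (this is real standard-library behavior on current stable Rust):

```rust
use std::mem::size_of;
use std::num::NonZero;

fn main() {
    // The safe constructor simply refuses to create the invalid value
    assert!(NonZero::<u8>::new(0).is_none());
    let n = NonZero::<u8>::new(5).unwrap();
    assert_eq!(n.get(), 5);

    // Niche optimization: the all-zero bit pattern encodes None,
    // so Option<NonZero<u8>> needs no separate tag byte
    assert_eq!(size_of::<Option<NonZero<u8>>>(), size_of::<u8>());
}
```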

