I wonder if there's a (simple syntactical) way to unlink the constness of unique pointers and their targets. It's not a huge obstruction, but seems like an unnecessary restriction not found in the near-equivalent C++ construction.
Yes, you can do it, with Cell and RefCell, though the former is restricted to POD types and the latter comes with some minor runtime overhead.
Inherited mutability is not an unnecessary restriction from the point of view of memory safety. It's critical to Rust's ability to prevent iterator invalidation and related bugs at compile time. The fact that C++ doesn't do it is the source of many of its memory safety problems, such as dangling references and iterator invalidation.
Within context (the article is for C++ programmers) POD is a commonly used acronym.
If you're a C++ programmer and don't know about POD types (data types with zero implicit C++ behaviors), you should brush up on your fundamentals – it's essential to understand when and why C++ behaviors (i.e. behaviors above and beyond plain C) are invoked implicitly.
I believe it actually means that y can be made to reference something other than x.
    let x = 5;
    let mut y = &x;
    let i = 13;
    y = &i;
This is similar to C++'s const-pointers and pointers-to-consts, where either a pointer cannot be made to point at something different or the pointer cannot be used to change what it points at.
He means immutability when he says "const-ness". There are four possibilities for mutability of a single pointer:
1. The pointer is mutable, but its contents are immutable.
2. The pointer is immutable, but its contents are mutable.
3. The pointer is immutable and the contents are immutable.
4. The pointer is mutable and its contents are mutable.
Right now for owned pointers Rust gives us (3) and (4), but no obvious way to achieve (1) and (2). Although you might argue that the borrowing semantics give us these powers, just not directly with owned pointers - and that we shouldn't be using owned pointers directly if we're asking for that control; we should be lending them out in a well-controlled manner.
You can use privacy for (1); admittedly it's a little hokey, but I feel it's not worth the added complexity to add field-level immutability directly into the language since you can use other language features to effectively achieve it. `std::cell::Cell` and `std::cell::RefCell` give you (2).
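A rough sketch of case (2), in the pre-1.0 syntax used elsewhere in this thread (the `Counter` type is made up); the `Cell`/`RefCell` calls are the real `std::cell` API:

    use std::cell::{Cell, RefCell};

    struct Counter {
        hits: Cell<uint>,            // POD contents, updated through get/set
        log: RefCell<Vec<uint>>,     // non-POD contents, borrowed at runtime
    }

    fn main() {
        // `c` is not declared `mut`, yet its contents can change: case (2).
        let c = Counter { hits: Cell::new(0), log: RefCell::new(Vec::new()) };
        c.hits.set(c.hits.get() + 1);
        c.log.borrow_mut().push(c.hits.get());
        println!("{}", c.hits.get());
    }

Case (1) is the mirror image: keep the field private and expose only a getter, so the binding can be rebound but the contents can't be touched from outside.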
It is, but sometimes you have to do it for practicality. Usually this comes up when people use `Rc<T>`, as that only supports immutable types (so any mutability you want needs to be in the form of `Cell`/`RefCell`).
I don't think so, because then you'd get an overconservative iterator invalidation checker. For example:
    struct Foo {
        a: Vec<int>,
        b: Vec<int>,
    }

    let foo = Rc::new(RefCell::new(Foo::new(...)));
    for x in foo.borrow_mut().a.mut_iter() {
        for y in foo.borrow_mut().b.mut_iter() {
            // ^^^ FAILURE: a is already borrowed mutably
        }
    }
The failure happens because the RefCell is checking to make sure there are no two `&mut` references at the same time to `Foo`, to prevent iterator invalidation. But this is silly, because it only needs to prevent access to `a` while you're iterating over it, not both `a` and `b`. Changing the definition of `Foo` to this fixes the problem:
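A rough sketch of that change, mirroring the failing example above (each field carries its own `RefCell`, so they are borrowed independently):

    struct Foo {
        a: RefCell<Vec<int>>,
        b: RefCell<Vec<int>>,
    }

    let foo = Rc::new(Foo::new(...));
    for x in foo.a.borrow_mut().mut_iter() {
        for y in foo.b.borrow_mut().mut_iter() {
            // OK: `a` and `b` are tracked by separate RefCells now, so
            // iterating over `b` is not a second mutable borrow of `a`.
        }
    }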
Makes sense. Though Cells would really stand out in the code as a potential source of race conditions (unless you get a run-time failure?). Thanks for the insight.
There is a special bound--Share--that is required on thread-safe data structures and forbids Cell/RefCell. The thread-safe equivalents of Cell and RefCell are Atomic and Mutex, respectively.
I'm usually pretty happy that ~T works just like T in terms of ownership and mutability and move-semantics and whatnot. When would you want to unlink that?
It's what allows Rust to have memory safety without garbage collection, which is something that systems programmers still care quite a bit about in 2014 AD.
I'm not sure what you're trying to say. Perhaps I can try to sum up my view (and what I think is the view of many other people on this thread):
If you start from the premise that garbage collection is not an option in some circumstances, and you want to maintain memory safety, then I think you inevitably end up in a situation where programmers must make their memory access patterns explicit. Rust achieves this through its type system.
My guess is that you're disagreeing with the above premise.
How could the compiler achieve safe memory management without any annotations, runtime overhead, or garbage collection? I would be interested in answers to this question, but they are not obvious to me after years of thinking about this problem :)
The short replies from ExpiredLink led others to misinterpret his response as pro-GC, hence the downvotes. But now that I've read all his other comments in this thread, I think I've pieced together what his actual thesis is: without using any runtime GC, a sophisticated enough compiler should have enough knowledge from analyzing the source code to "automatically" know the type of memory management classification for any variables. The programmer shouldn't have to manually annotate them.
I'm not a compiler theory expert but my hunch is that ExpiredLink's premise is wrong. I'm guessing that any non-trivial program requires a priori knowledge from the programmer's mind to properly inform the compiler which variables do what and when.
To use an analogy in the spirit of ExpiredLink's thinking: we don't manually annotate statements such as "int [register_cx] bank_account = balance + deposit".
We don't have to "annotate" statements with micro-details such as specific x86 registers bx, cx, dx, etc. "It would be the wrong abstraction", to borrow his wording. The compiler assigns registers on our behalf, and has been doing so for decades. I'm guessing memory safety is a much more difficult problem: if perfectly optimal register allocation is already NP-complete, it seems safe to assume that having the compiler derive memory-safety annotations is at least as hard.
Therefore, the way to settle it for ExpiredLink would be a short source code example demonstrating that a priori knowledge is unavoidable if you want extra memory safety while keeping C/C++ performance. The only other option is some mythical compiler that runs for 10 hours analyzing 1,000 lines of source code, tracing all possible memory accesses, only to come up with a suboptimal solution to an NP-complete problem. And on top of that, would the compiler have to be so sophisticated as to create a VMware/VirtualBox virtual machine to simulate all memory accesses? I have no idea.
> a sophisticated enough compiler should have enough knowledge from analyzing the source code to "automatically" know the type of memory management classification for any variables. The programmer shouldn't have to manually annotate them.
This might be possible, but there are a few big problems with it:
1. We'd be throwing separate compilation out the window. What makes separate compilation work is that the types are fully deducible for each function without knowledge of the functions that call it. In our system, this requires some amount of manual annotation (see the sketch after this list).
2. Systems programmers often need to know whether allocation is happening; they don't want a compiler doing it behind their back. For example, in interrupt or async signal handling context, you can't allocate. A language that implicitly allocates is not usable for such contexts.
3. When you get an error relating to memory management—for example, a borrow check error—it's important that the programmer be able to diagnose what happened and fix it. Without explicit annotation, it's notoriously harder to report type errors—the programmer has to try to reconstruct what the compiler was trying to do.
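To illustrate the first point, here's a minimal sketch (the `pick` function is made up): the `'a` annotation is what lets the compiler check this function once, in isolation, instead of re-deriving the borrow relationships at every call site.

    // Without `'a`, the compiler would have to inspect every caller (or guess)
    // to know whether the returned reference borrows from `a`, `b`, or both.
    fn pick<'a>(a: &'a str, b: &'a str, flag: bool) -> &'a str {
        if flag { a } else { b }
    }

    fn main() {
        let s = pick("left", "right", true);
        println!("{}", s);
    }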
I think the problem with this is that even if the compiler can somehow semi-magically deduce that, it can still be something other than what the author really intended. So making memory ownership semantics explicit improves clarity and avoids ambiguity and unintended logic. Also, it helps communicate the intention to others who might read or maintain the code later. Obscuring this is not helpful at all.
> even if the compiler can somehow semi-magically deduce that, it can still be something other than what the author really intended.
Right, but to entertain what I think ExpiredLink is proposing: you have to reframe it as "there's no intention for ownership/lifetime for the programmer to even express"; therefore, there can't be a disconnect between what the compiler did and what the programmer thinks it did.
To use the x86 register analogy again, we don't explicitly annotate our intentions about which registers to use, and therefore we don't type a bunch of extra verbiage to express them. The compiler figures it out and it's opaque to us. There is never a conversation about how the compiler didn't match our intentions, because we're not working "at that abstraction level" at all.
But as I mentioned before, I believe that memory lifetime/ownership is a totally different class of automated analysis (on behalf of the programmer) than register allocation.
I think the only way to raise the level of discourse on why annotation is unavoidable is to show a short source code example that proves that ExpiredLink's premise is impossible for any compiler analysis to fulfill.
Also, Rust avoids taking control and clarity away from the programmer. Such automated memory management (which would automatically deduce ownership and lifetimes, for example) would make the result less predictable to the author, which is a downside. So even if it's feasible (which I'm not sure about), it isn't a good fit as a mandatory feature.
> I'm not a compiler theory expert but my hunch is that ExpiredLink's premise is wrong. I'm guessing that any non-trivial program requires a priori knowledge from the programmer's mind to properly inform the compiler which variables do what and when.
I also think the thesis is wrong, but I'd put it differently. I suspect that in many cases, a sufficiently smart compiler could analyse a program and assign plausible classifications to every variable. However, I suspect there will be some cases where it can't: programs for which no assignment of classifications conforms to the rules. Inevitably, those would be exactly the cases you actually hit when writing real programs. In other words, I think this ends up looking a bit like the halting problem.
As a thought experiment, imagine Rust, but without any of the memory management classifications on variables: no tildes, ampersands, or single quotes. You could compile and execute this with fairly conventional garbage-collected memory management semantics. But you could also try to assign memory management classifications to every variable, and then compile it as Rust. The number of possible assignments over the whole program is clearly finite - there are a finite number of variables, a finite number of pointer types, a finite number of lifetimes, and so a finite number of arrangements of all those. Large, but finite. So you could, in principle, enumerate them by brute force.
If you took an existing, correct, Rust program, erased all the classifications, and fed it into this brute force re-classifying compiler, it would eventually find a legal classification, and compile it. It might be the original classification, or it might not. This means that there isn't a requirement for "a priori knowledge from the programmer's mind". Any program which could be given a plausible classification can be given one automatically.
However, I think there is no such certainty that it would find a legal classification for an arbitrary program. Indeed, I suspect it wouldn't be too hard to find a counterexample: you just need a Rust program which doesn't compile, and can't be fixed purely by changing memory management classifications. Anyone want to give that a go?
I totally disagree; Rust is shaping up to be the best alternative to C++ by far. As for it being 2014, I'd argue that linear and substructural typing are pretty much at the cutting edge of language research.
I disagree. Even in 2014 we have a place for different kinds of new languages. If we didn't, the likes of C++ would live forever with all their shortcomings.
Yup. I might add that `shared_ptr` is thread-safe, while `Rc` isn't (and is enforced by the compiler to not migrate between threads). Because of that `Rc` tends to be much faster than C++ `shared_ptr`, since you only opt into the thread safety if you need it.
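A minimal sketch (pre-1.0 syntax): cloning an `Rc` only bumps a plain integer count, and the compiler is what keeps that sound.

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(3u);
        let b = a.clone();   // bumps a plain, non-atomic reference count
        // Neither `a` nor `b` can be sent to another task; the compiler
        // enforces that, which is why the count never needs atomic ops.
    }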
Instead of saying that thread safety is opt-in, I would say that thread safety (at least, avoiding data races) is mandatory in Rust and enforced by the type system... but you only pay for sharing if you need it.
C++ has a "pay for what you need" mentality, but do to weak aliasing guarantees, you have to pay for atomic ops on `shared_ptr` whether you need them or not. Just like you have to pay for using `string` because two strings can't safely share data without runtime checks: either you pay for COW (which is less common these days) or you pay for copying. In Rust, the safety of string aliasing is enforced by lifetime analysis.
> In Rust, the safety of string aliasing is enforced by lifetime analysis.
Still, I have seen a lot of examples where static strings already present in the data section have to be explicitly created on the heap just to be used somewhere else. Something like this (I'm writing pseudocode; I've never programmed in Rust):
return (put on heap)"some static string"
and it will probably be copied from the data section to the heap just for the content to be read (from the copy on the heap instead of the executable's data section) and then the copy on the heap discarded. That's how I understood it at least, please correct me if I'm wrong. Why not have some COW mechanism for such cases built into the language, avoiding unnecessary allocations, copying and deallocations?
In C++ this is a pretty common thing due to the fact that `const char *` is pretty annoying to work with, so people often use heap-allocated `std::string` instead. In Rust `&'static str` and `StrBuf` are on more equal footing, so it's not necessary to place objects on the heap to make it easier to work with them. Still, some people do it accidentally, so we've recently made some changes to make it harder to shoot yourself in the foot in terms of performance, most notably removing the `~"foo"` syntax for heap allocated string literals (which also had the side effect of making the language easier to read).
Can you please point me to some complete-enough example of the new "lighter" string return? Thanks.
Edit: I guess the key is in 'if you must return the string on the heap.' Why should it be common that anybody must return the string on the heap? Why shouldn't it be possible to program so that only the receiver must place something on the heap due to its own needs? Why shouldn't the one doing the return be able to always return "something"? Couldn't it be made possible in the language? The one doing the return shouldn't need to put something he knows is static on the heap, especially as most probably the receiver will just discard it as soon as it reads the content.
Just use `return "something"`. That returns a `&'static str`. If you must return a heap-allocated string, then you can use `return StrBuf::from_str("something")`.
Why is there a "must" for returning the string on the heap? Can't it be solved in the language to use the static string up to the point the reallocation is needed?
Rust has built-in types `&'static str` for strings that live for the entire runtime of the program and `~str` for strings that live in dynamically managed heap memory (from the more general `~T` for values that live there, with T = str). If the function always returns a static string it can return the former type and won't need a `~` or any allocations for that.
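For example (a minimal sketch), a function that only ever returns a literal needs neither `~` nor any allocation:

    fn greeting() -> &'static str {
        // The literal lives in the executable's data section for the whole
        // program, so returning a borrow of it never touches the heap.
        "some static string"
    }

    fn main() {
        println!("{}", greeting());
    }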
If you need something more dynamic... you could have a user-defined type that can hold either and remembers whether it's responsible for freeing the memory, but that doesn't fall out of the builtin types naturally.
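A rough sketch of such a type (made-up names, pre-1.0 syntax, assuming the `StrBuf::from_str` constructor); the standard library's MaybeOwned mentioned below plays the same role:

    // Holds either a borrowed static string or a heap-allocated one;
    // only the heap case has anything to free when it goes out of scope.
    enum Str {
        Static(&'static str),
        Heap(StrBuf),
    }

    fn describe(n: uint) -> Str {
        if n == 42 {
            Static("the answer")                        // no allocation at all
        } else {
            Heap(StrBuf::from_str("something else"))    // allocates only when needed
        }
    }

    fn as_slice<'a>(s: &'a Str) -> &'a str {
        match *s {
            Static(st) => st,
            Heap(ref buf) => buf.as_slice(),
        }
    }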
As far as I understand, the effective overhead of "free" could be practically zero if statically constructed strings carried some "nothing to deallocate here" info, as opposed to the bookkeeping that has to be maintained at run time for anything that is really allocated. Any non-trivial allocator has a bunch of checks to do anyway (e.g. different-sized small objects are typically allocated in separate blocks from the big allocations, etc.).
As soon as your language doesn't use C-like zero-terminated strings but something carrying more information, it's trivial to avoid these unnecessary copy-to-heap-just-to-use-and-deallocate steps.
As others have said: it can be solved in the library by defining a string type with the appropriate semantics (e.g. the MaybeOwned[1] type they refer to).
What's the archetypal use for Rc anyway? I don't see the point in having a reference-counted object with single-thread ownership.
Also you only pay to synchronise shared_ptr when you take a copy of the pointer... I can't think of anywhere in a sane, well-structured algorithm where this would bite.
`&`/`&mut` in Rust is equivalent to `const &`/`&` in C++, but with the addition that the compiler makes sure that you don't keep it around for longer than is valid.
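A minimal sketch of that last guarantee (this program is deliberately rejected by the compiler):

    fn main() {
        let r;
        {
            let x = 5i;
            r = &x;            // error: `x` does not live long enough
        }                      // `x` is gone here, so `r` would dangle...
        println!("{}", *r);    // ...which is why this does not compile
    }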
Well this is all very humbling. I'm a mostly "managed language" weenie at the moment, but I swear I've read the phrase "ownership semantics" in reference to something in this neck of the C++ woods. Rust is shaping up to pass the Alan Perlis test quite handily as a language worth studying.
Yes, see, in C++, you still need to think about ownership, the compiler just can't help you out. One way of thinking about Rust is "C++, but the compiler understands ownership and memory lifetime."
It's worth noting, however, that unique_ptr was introduced in C++11 because it relies on move semantics, which were not introduced until then. auto_ptr is not quite the same. See http://www.drdobbs.com/cpp/c11-uniqueptr/240002708
What were his arguments? Can you point to something? I spent about 5 minutes googling, but I could only come up with his explanations for when to use it, and when not to use it. For example: http://www.stroustrup.com/C++11FAQ.html#std-shared_ptr
Not sure why you are being downvoted. I agree with you, the whole thing seems archaic. Downvoters care to comment on why this is relevant to a modern programming language and try to convince us (and not the other way around)? Give me a good example of why I should care. Performance? Do you have numbers?
Garbage collection is not acceptable in some scenarios, because garbage collection implies unpredictable performance, slower performance, and comparatively heavy runtime dependencies. This is all fine for many applications, but not for stuff like operating systems, JITs, layout engines and other stuff generally called 'systems programming', where performance and lack of runtime dependencies are key. Languages for those domains generally cannot require garbage collection (although Rust does support it optionally) and therefore must provide memory management semantics.
Rust is a language that caters to the oft-neglected field of systems programming. It does so by solving many problems of C++ which is mostly used in that area, and introducing/combining many features newish to systems programming languages and programming languages in general. ExpiredLink is being downvoted for missing that point.
If someone who knows a lot about garbage collection stumbles onto this, I've always wondered if stop-the-world garbage collection is only a necessity for multithreaded code.
For example, imagine a single-threaded language like Go where the only way to communicate with any other threads/processes was through sockets, with no shared memory whatsoever. Would it still be necessary to stop the world? Or would it be possible to stop only the current thread periodically, and collect garbage with a known or fixed overhead?
Bonus points: if so, then would it be possible to abstract shared memory with something like a software transactional memory (STM) that uses only sockets and copy-on-write somehow to completely avoid the stop-the-world issue?
It just seems to me that if stop-the-world was tackled once and for all, so that we could always limit collection times, then so much of the complexity of low level programming would go away, and we wouldn't have to worry about reference counting, refcount loops, strong/weak references, etc etc etc..
Go (by itself without modifications) is a poor example because it has shared memory.
Rust, though, would be a good example of a language where such garbage collection could be added, because the only way data is transferred between tasks is through channels, over which only PODs or owned pointers may pass.
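A minimal sketch with the task and channel API of the time: the owned box moves into the channel, so after the send the sending task can no longer touch it, and the receiving task becomes the sole owner.

    fn main() {
        let (tx, rx) = channel();
        spawn(proc() {
            let msg = box 42i;     // uniquely owned allocation
            tx.send(msg);          // ownership moves into the channel here;
                                   // `msg` is unusable in this task afterwards
        });
        println!("{}", *rx.recv());
    }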
A linear type system really helps with coordinating multiple garbage collectors across many threads.
Repeatedly traversing a linked-list-like structure to find a free block for malloc is noticeably more expensive than never having to. Also, repeatedly putting freed blocks back into the allocator's structure for future use is noticeably more expensive than not having to. See? You can't get away with a simple "even if relatively rarely" - for some values of rarely, the cost would be lower.
Absolutely agreed :) That's why there are arena allocators for programs in which allocation performance is particularly sensitive (and they have other nice properties). Not having a mandatory garbage collector frees you up to try other approaches where they make sense. Nobody in this thread is disputing (I think?) that for many problems garbage collection is a good choice, because it clearly is; the dispute is only whether the one-size-fits-all nature of GC is appropriate in all situations.
malloc/free suck. They are strawmen for manual memory management though, because the vast majority of manual memory management (when done carefully) avoids malloc/free.
Instead, you embed allocations within each other when possible. I.e., every time you have a bunch of objects with (near) identical lifetimes, you allocate a single struct that embeds all of the allocations within it (and amortize the allocation cost).
Moreover, you use a slub allocator for that particular struct size. Often, you have hard limits on the allocation due to external reasons, meaning you can use a static array with a simple free-list to manage it.
Then, doing a singly-linked list insert/delete is extremely cheap. It is slightly more expensive than a GC pointer bump to allocate, but vastly cheaper to free. Also, it has far better cache locality.
Additionally, the avoidance of indirections and pointer-chasing incurred by doing lots of tiny allocations that indirectly refer to each other also helps cache locality significantly.
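A rough sketch of the embedding idea (the `Packet` type is made up, pre-1.0 syntax): everything with the same lifetime lives in one allocation instead of several boxed pieces.

    struct Packet {
        header: [u8, ..16],      // stored inline, no separate allocation
        payload: [u8, ..1024],   // also inline
        len: uint,
    }

    fn main() {
        // One allocation covers header, payload and len together, instead of
        // three separately boxed pieces chasing pointers to each other.
        let p = box Packet { header: [0u8, ..16], payload: [0u8, ..1024], len: 0 };
        println!("{}", p.len);
    }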
There are some rare cases where you want GC-style memory management, simply because the lifetimes are too complex, but in my experience, it is quite a rare situation. Even when you do, you still want to make use of allocations' aggregations which is not possible in most GC'd languages.
For the vast majority of cases, with manual MM via coarse-grained refcounting (where only large objects pay for a refcount) and slub allocators used with allocation embedding, you'll have far cheaper MM overhead as well as better cache behavior.
Lots of GC vs. manual MM comparisons do a disservice to manual MM by comparing GC to a strawman with lots of malloc/free.
IME, manual MM often embeds allocations within each other and/or places allocations on the stack. Both of these options make manual MM far cheaper than GC, rather than slightly cheaper as shown in these papers.
There is nothing that prevents a GCed language from allocating things on the stack. Also, I'm afraid using std::string in C++ causes more allocs/frees than typical use of Strings in Java/C#, because the former can't be safely shared and must be copied - on the heap.
A GCed language requires good stack escape analysis. Unless you expose it to the programmer for at least semi-manual MM, it is bound to have false negatives, where you pay for the allocation on the heap unnecessarily.
I totally agree about std::string.
I think C++ with the conventional libraries is actually an example of how to do manual MM badly.
Generally I agree that you don't want GC in all scenarios, but the paper you cite has several flaws which make its conclusions very far from the truth.
1. It is based on a non-production JVM and does not use the same algorithms that are used in production-level VMs. There is a huge performance difference between any GC from the 1990s and the modern generational GCs used in production-level VMs.
2. Part of the experiment involved simulation. You must be very careful with extrapolating simulation results onto the real world. Actually, someone on the Internet reran one of the experiments using a non-simulated VM and got totally different results, but now I can't find that blog post :(
3. It compares GC to "ideal" manual memory management, ignoring the fact that manual memory management is not free either. Things like heap fragmentation or the computational cost of running allocation/deallocation code do exist and may also cause some real problems. The costs lie elsewhere, but that doesn't mean they don't exist.
My experience with large applications is completely different than the conclusion of the paper. Unless you're doing something completely crazy like generating several GBs of garbage per second, modern generational GCs overhead is typically very small (<5%) with only 20-50% more memory than the live set size, not 3-5x as the paper claims.
The biggest pain point not yet completely solved is pauses, but researchers are actively working on it and things are getting much better.
> 3. It compares GC to "ideal" manual memory management, ignoring the fact that manual memory management is not free either. Things like heap fragmentation or the computational cost of running allocation/deallocation code do exist and may also cause some real problems. The costs lie elsewhere, but that doesn't mean they don't exist.
No, it doesn't; it uses malloc and free, counting the costs of allocation and deallocation using a traditional memory allocator. (In fact, if anything, that's unfair to traditional memory allocators, as high-performance code will often use things like bump-allocating arenas which significantly outperform malloc and free. It also uses dlmalloc, which is outpaced by jemalloc/tcmalloc these days, although if the benchmarks are not multithreaded the difference will be small.)
Heap fragmentation exists in GC systems as well, and fragmentation in modern allocators like jemalloc is very small.
Ok, point taken; however, their cost analysis is based on "simulated cycles", which is extremely simplified. With modern CPUs doing caching, prefetching and out-of-order execution, I seriously doubt it's accurate. malloc/free typically have a tendency to scatter objects around the whole heap, while compacting GCs allocate from a contiguous region - so a properly designed research experiment would take that into account. Hans Boehm did experiments on real programs and found that using compacting GCs actually sped up some programs because of better cache friendliness.
As for heap fragmentation - it does not exist in some GC systems, like G1 or C4. And fragmentation is also extremely workload dependent - it might be "very small" for most cases and for some might be as much as 5x (Firefox struggled a lot with this).
> modern generational GCs overhead is typically very small (<5%) with only 20-50% more memory than the live set size, not 3-5x as the paper claims.
Another set of GC tests mentioned by the famous Drew Crawford article [1] said 4x to 6x was the sweet spot. A followup commenter wanted to clarify that the "best GC algorithms" worked within 2.5x. Whether it's 2.5x or 4x, it's a counterpoint to the claims of only 50% more memory. Perhaps there are drastically different workloads skewing the tests. (I didn't thoroughly read both cited papers.)
This rant is about GCs in JS engines on mobile devices, which are nowhere near the state-of-the-art generational GCs used for the JVM or CLR.
This is all very much dependent on the garbage production rate. If the application is producing garbage like crazy, then it is possible to require even 2.5x more memory, but this is a rare edge case, just as needing 2.5x more memory due to fragmentation is. Any performance-aware programmer will avoid dynamic allocation in tight loops, regardless of the type of memory management.
The blog post's topic was JavaScript GC, but the cited paper about GC behavior was measuring desktop Java/JVM. The 2.5x to 6x memory sweet spot was about state-of-the-art JVMs, not JavaScript. However, he's extrapolating that the difficulties left unsolved by the best GC algorithms in Java are the same technical challenges facing GC research and progress in JavaScript.
Those GC challenges point back to why Rust's focus on memory management via different types of pointers is valid. The best GC algorithms haven't solved many of the performance problems that Rust's approach can address.
This is the same paper I mentioned above, not "another" research paper. It is based on simulation, non-production JVM, doesn't use state of the art GC algorithms and rerunning the same benchmarks with Oracle JVM gives completely different results.
Also stating that GC CPU/memory overhead is X is generally incorrect because it depends on far too many things, like garbage production rate, eden space size, survivor rates, GC parallelism level etc.
I do have at least two computationally heavy applications running on the JVM (one is doing a lot of sparse LU decompositions, the other is evolutionary optimization; both do some allocation in their tight loops) and neither of them requires 2x memory to run at top speed; they require much, much less. Everything depends on how it is coded. If you keep garbage production rates sane, and most objects die in the first minor GC, the overhead of GC is extremely small, both CPU- and memory-wise. I don't know what they did to get 6x overhead - this looks extremely fishy. Even memory-heavy apps like databases don't reserve 6x more heap than the live set size.
Of course, there are some use cases where GC definitely sucks (pauses, huge heaps) and therefore Rust is a nice thing to have. I hope some of those techniques get adopted in the future by GCed languages in order to reduce memory pressure.
The GC argument is a straw man. You can have automatic resource management without GC (actually, GC-languages like Java and C# are at odds with automatic resource management) and without manual ownership semantics a la C++ and Rust.
To be fair, C++ has an approximation of it with move semantics/rvalue references—and I'm really glad that it does, since it makes unique pointers much more mainstream of a concept than it would have been otherwise. (Rust tries to avoid unfamiliar concepts with no analogues in other languages.) It's just not safe in C++; there are lots of ways to get undefined behavior via use-after-move.
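For contrast, a minimal sketch of the same situation in Rust, where the use-after-move is a compile-time error rather than undefined behavior:

    fn main() {
        let a = box 5i;
        let b = a;                 // ownership moves from `a` to `b`
        // println!("{}", *a);     // error: use of moved value: `a`
        println!("{}", *b);        // fine: `b` is the unique owner now
    }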
While the data and anecdotal experience certainly support that manual memory management is faster, I think focusing on that specifically misses the point a bit, because you don't just get performance by taking out the garbage collector.
For example, in systems programming memory is often not the only limited resource that needs to be cleaned up. In such situations, C++-style RAII is extremely valuable. While finalizers can be added to languages with garbage collectors, and indeed have been [1][2], it's not possible to predict when the resource destruction will occur, meaning that they end up requiring explicit (de)allocation of resources in practice [3][4]. And if you still don't think it's a hard problem, take a look at some of the discussion the D community is currently engaged in over the same issue [5].
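In Rust the same RAII pattern comes via the `Drop` trait; a minimal sketch (the `Guard` type is made up):

    struct Guard {
        name: &'static str,
    }

    impl Drop for Guard {
        fn drop(&mut self) {
            // Runs deterministically when the value goes out of scope,
            // like a C++ destructor, not at some future GC finalization.
            println!("releasing {}", self.name);
        }
    }

    fn main() {
        let _g = Guard { name: "file handle" };
        println!("doing work");
    }   // `_g` is dropped here, printing "releasing file handle"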
Another problem with GC that's already been brought up is the difficulty of using it in hard realtime systems. IBM has done a considerable amount of work in this area, including the development of a garbage collector designed specifically for realtime Java programming [6]. Some things to note about their solution include the heavy restrictions on the hard-realtime threads [7] (they are essentially disallowed from interacting with garbage collected objects, and in any case are not totally free of garbage collection), and that what are called realtime threads are still subject to nondeterministic pauses [8].
There are many more potential issues than this (e.g. signal handling [9]) but I hope you will agree that there is a clear need for languages without mandatory garbage collection. Given that such languages are necessary, it follows that it would be nice for there to be safe languages without mandatory garbage collection that can perform these tasks. While that doesn't necessarily mean you should care, it probably does, because even if you don't personally write programs with such requirements, you almost certainly use them.
[9] See, for example, Asynchronous Signals in Standard ML, http://www.smlnj.org/compiler-notes/90-tr-reppy.ps ; section 5.2 gives a good overview of why truly preemptive signal handling, often a necessary part of systems programming, is not in general possible in a system with garbage collection.