
> IMO, it's best to keep things that are "your fault" (e.g. produced by your editor or OS) in your global gitignore, and only put things that are "the repository's fault" (e.g. build artifacts, test coverage reports) in the repository's gitignore file.

Very well put. This should be in the gitignore manpage.


Was this translated automatically from C? I picked a spot totally at random and saw in https://github.com/Ragnaroek/iron-wolf/blob/main/src/act1.rs in place_item_type:

    let mut found_info = None;
    for info in &STAT_INFO {
        if info.kind == item_type {
            found_info = Some(info);
            break;
        }
    }
When typically in Rust this is just:

    let found_info = STAT_INFO.iter().find(|info| info.kind == item_type);
Now I want to go through and feng shui all the code to look more like idiomatic Rust, just to waste some time on a Saturday...

(equivalent C file: https://github.com/id-Software/wolf3d/blob/master/WOLFSRC/WL... )

> Was this translated automatically from C?

I'll note that when I convert code between languages, I often go out of my way to minimize on-the-fly refactoring, instead relying on a much more mechanical, 1:1 style. The result might not be idiomatic in the target language, but the bugs tend to be a bit fewer and shallower, and it assists with debugging the unfamiliar code when there are bugs - careful side-by-side comparison will make the mistakes clear even when I don't actually yet grok what the code is doing.

That's not to say that the code should be left in such a state permanently, but I'll note there are significantly more changes in function structure than I'd personally put into an initial C-to-Rust rewrite.

The author of this rewrite appears to be taking a different approach, understanding the codebase in detail and porting it bit by bit, refactoring at least some along the way. Here's the commit that introduced that fn; it doesn't look like automatic translation to me: https://github.com/Ragnaroek/iron-wolf/commit/9014fcd6eb7b10...


I actually find 1:1 to be helpful when learning a language.

How debuggable are the internals of the Rust lambda version?

I will often write the code so I can simply insert a break point for debugging versus pure anonymous and flow-style functions.

C# example:

    #if DEBUG
    const string TestPoint = "xxxx";
    #endif

    var filtered = items.Where(x =>
    {
        #if DEBUG
        if (x.Name == TestPoint)
            x.ToString(); // breakpoint lands here
        #endif
        .....
    });
vs

    var filtered = items.Where(x => ....);

As a non-Rust guy, I keep writing the example above. I didn't even know about the second option!

If you do that, please share a link so I can learn from you! This is awesome!


Look into Rust iterators and their associated functions for the Rust-specific implementation. Additionally, look into functional programming à la lambda calculus and Haskell for the extreme end of this style of programming, if you'd like to learn more about it.

Yes, the code is _very, very_ close to the C code. All over the place.

Sounds like something an LLM agent might be good at?

It probably would. But this port was mostly done to understand Wolfenstein 3D in detail, not for the source port itself. I could have generated big parts of the code, but I would have missed the learning by doing that.

Literally nothing to do with that distinction.

> The question is: Whose job is it to manage the nulls. The language? Or the programmer?

> These languages are like the little Dutch boy sticking his fingers in the dike. Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug. And so these languages accumulate more and more fingers in holes in dikes. The problem is, eventually you run out of fingers and toes.

I'm going to try my best to hide my rage at just how awful this whole article is, and try to focus my criticism. I can imagine that reasonable people can disagree as to whether `try` should be required to call a function that throws, or whether classes should be sealed by default.

But good god man, the null reference problem is so obvious, it's plain and simply a bug in the type system of every language that has it. There's basically no room for disagreement here: If a function accepts a String, and you can pass null to it, that's a hole in the type system. Because null can't be a String. It doesn't adhere to String's contract. If you try to call .length() on it (or whatever), your program crashes.

The only excuse we've had in the past is that expressing "optional" values is hard to do in a language that doesn't have sum types and generics. And although we could've always special-cased the concept of "Optional value of type T" in languages via special syntax (like Kotlin or Swift do, although they do have sum types and generics), no language seems to have done this... the only languages that seem to support Optionals are languages that do have sum types and generics. So I get it, it's "hard to do" for a language. And some languages value simplicity so much that it's not worth it to them.

But nowadays (and even in 2017) there's simply no excuse any more. If you can pass `null` to a function that expects a valid reference, that language is broken. Fixing this is not something you lump in with "adding a language feature for every class of bug", it's simply the correct behavior a language should have for references.


One thing that distinguishes macOS here is that the mach kernel has the concept of “vouchers” which helps the scheduler understand logical calls across IPC boundaries. So if you have a high-priority (UserInitiated) process, and it makes an IPC call out to a daemon that is usually a low-priority background daemon, the high-priority process passes a voucher to the low-priority one, which allows the daemon’s ipc handling thread to run high-priority (and thus access P-cores) so long as it’s holding the voucher.

This lets Apple architect things as small, single-responsibility processes, but make their priority dynamic, such that they’re usually low-priority unless a foreground user process is blocked on their work. I’m not sure the Linux kernel has this.


That is actually quite simple and nifty. It reminds me of the four priority levels RPC requests can have within the Google stack: from 0, "if this fails it will result in a big fat error for the user," to 3, "we don't care if this fails because we will run the analysis job again in a month or so."

IIRC in macOS you do need to pass the voucher, it isn’t inherited automatically. Linux has no knowledge of it, so first it has to be introduced as a concept and then apps have to start using it.


Being explicit is a good thing, especially for async threads: they may handle work for many different clients with different priorities, and may delegate work to other processes.

There is automatic priority donation across a handful of APIs.

This sounds like Solaris doors. The remainder of the time slice of the door client is given to the door server.


Vouchers are related to turnstiles, which are from Solaris.

This is also how binder works in android.

> Now, those 600 processes and 2000 threads are blasting thousands of log entries per second, with dozens of errors happening in unrecognizable daemons doing thrice-delegated work.

This is the kind of thing that makes me want to grab Craig Federighi by the scruff and rub his nose in it. Every event that’s scrolling by here, an engineer thought was a bad enough scenario to log it at Error level. There should be zero of these on a standard customer install. How many of these are legitimate bugs? Do they even know? (Hahaha, of course they don’t.)

Something about the invisibility of background daemons makes them like flypaper for really stupid, face-palm level bugs. Because approximately zero customers look at the console errors and the crash files, they’re just sort of invisible and tolerated. Nobody seems to give a damn at Apple any more.


Are you sure they don’t get sent to Apple as part of some telemetry / diagnostics implementation?


Oh they absolutely are. But Apple clearly doesn’t care enough to actually fix them. They seem to get worse every release.


You don't need them to be sent to Apple. And if errors in console get sent to Apple, it's surely filtered through a heavy suppression list. You can open the Errors and Faults view in Console on any Mac and see many errors and faults every second.

They could start attacking those common errors first, so that a typical Mac system has no regular errors or faults showing up. Then, you could start looking at errors which show up on weirdly configured end user systems, when you've gotten rid of all the noise.

But as long as every system produces tens of thousands of errors and faults every day, it's clear that nobody cares about fixing any of that.


I wouldn't call UBI a "game plan" so much as a thing people can point to to justify their actions to themselves. It helps you pretend you're not ruining people's lives, because you can point to UBI as the escape hatch that will let them continue to have an existence. It's not surprising that so many in the tech industry are proponents of UBI. Because it helps them sleep at night.

Never mind that UBI has never actually existed, it probably never will exist, and it's very, very likely that it won't even work.

People need to face the possibility that we will destroy people's way of life the way we're headed, and to not just wave their hands and pretend that UBI will solve everything.

(Edited to tone back the certainty in the language: I'm not actually sure whether AI will be a net positive or negative on most people's lives, but I just think it's dishonest to say "it's ok, UBI will save them.")


OK, maybe take it down a few notches?

I'm only "in the tech industry" in the literal sense, not in the cultural sense. I work in academia, making programs for professors and students, and I think the stuff "the tech industry" is doing is as rotten as you appear to.

UBI has never existed because the level of production required to support it has only just started to exist. (It's possible that we're actually not quite there, but that's something we can only determine by trying it out—and if we're not, then I'm 100% confident we can get there with further refinement of existing processes.) If we have the political will to actually, genuinely do UBI—enough to support people's basic needs of food, clothing, shelter, and a little bit of buffer, without any kind of means testing or similar requirements—then it's very, very likely that it will work. All the pilot programs give very positive data.

I'm not pushing UBI because I think it's a fix to the problem of automation. I'm pushing UBI because I think it's the fulfillment of the promise of automation.


There's no reason why UBI wouldn't work.

The reason why it doesn't exist is because, for all that those in positions of power love to talk about it, they very consistently shoot down any actual attempt to implement it. I mean, for starters, it'd mean much higher taxes, and especially higher taxes on those very people (who currently pay lower rates on capital gains than people who actually produce value pay on their wages). When was the last time you've seen one of the Big Tech luminaries advocate for higher capital gains taxes?


C's string handling is so abominably terrible that sometimes all people really need is "C with std::string".

Oh, and smart pointers too.

And hash maps.

Vectors too while we're at it.

I think that's it.


When I developed D, a major priority was string handling. I was inspired by Basic, which had very straightforward, natural strings. The goal was to be as good as Basic strings.

And it wasn't hard to achieve. The idea was to use length-delimited strings rather than 0-terminated ones. This means that a slice of a string is itself a string, which is a superpower. No more did one have to constantly allocate memory for a slice, and then keep track of that memory.

Length-delimited strings also greatly sped up string manipulation. One no longer had to scan a string to find its length. This is a big deal for memory caching.

Static strings are length delimited too, but also have a 0 at the end, which makes it easy to pass string literals to C functions like printf. And, of course, you can append a 0 to a string anytime.


Just want to off-topic-nerd-out for a second and thank you for Empire.


You're welcome!

One of the fun things about Empire is one isn't out to save humanity, but to conquer! Hahahaha.

BTW, one of my friends is using ClodCode to generate an Empire clone by feeding it the manual. Lots of fun!


I agree on the former two (std::string and smart pointers) because they can't be nicely implemented without some help from the language itself.

The latter two (hash maps and vectors), though, are just compound data types that can be built on top of standard C. All it would need is to agree on a new common library, more modern than the one designed in the 70s.


I think a vec is important for the same reason a string is: being able to properly get the length, and standardized ways to push/pop that don't require manual bounds checking and calls to realloc.

Hash maps are mostly only important because everyone ought to standardize on a way of hashing keys.

But I suppose they can both be “bring your own”… to me it’s more that these types are so fundamental and so “table stakes” that having one base implementation of them guaranteed by the language’s standard lib is important.


why not std::string?


You can surely create a std::string-like type in C, call it "newstring", and write functions that accept and return newstrings, and re-implement the whole standard library to work with newstrings, from printf() onwards. But you'll never have the comfort of newstring literals. The nice syntax with quotes is tied to zero-terminated strings. Of course you can litter your code with preprocessor macros, but it's inelegant and brittle.


Because C wants to run on bare metal, an allocating type like C++ std::string (or Rust's String) isn't affordable for what you mean here.

I think you want the string slice reference type, what C++ calls std::string_view and Rust calls &str. This type is just two facts about some text: where it is in memory, and how long it is (or equivalently where it ends; storing the length is often slightly faster in practice on real machines, so if you're making a new one, do that).

In C++ this is maybe non-obvious because it took until 2020 for C++ to get this type (WG21 are crazy), but this is the type you actually want as a fundamental, not an allocating type like std::string.

Alternatively, if you're not yet ready to accept that all text should use UTF-8 encoding (and maybe C isn't ready for that yet), you don't want this type; you just want byte slice references, Rust's &[u8] or C++'s std::span<char>.



If only WG14 added something similar to C.

Yes, SDS exists, however vocabulary types are quite relevant for adoption at scale.


It's a class, so it doesn't work in C.


Sure, but you can have a similar string abstraction in C. What would you miss? The overloaded operators?


Automatic memory accounting: construct/copy/destruct. You can't abstract these in C. You always have to call i_copied_the_string(&string) after copying the string, and you always have to call the_string_is_out_of_scope_now(&string) just before it goes out of scope.


This seems orthogonal to std::string. People who pick C do not want automatic memory management, but might want better strings.


Automatic memory management is literally what makes them better


For many string operations such as appending, inserting, overwriting, etc., the memory management can be made automatic in C as well, and I think this is the main advantage. Just automatic free at scope end does not work (without extensions).


You can make strings (or bignums or matrices) more convenient than the C default but you can never make them as convenient as ints, while in C++ you can.


Yes, but I do not think this is a good thing. A programming language has to fulfill many requirements, and convenience for the programmer is not the most important.


Empirically it is. All the most used languages are the most convenient ones.


The C++ std::string is both very complicated mechanically and underspecified, which is why Raymond Chen's article about std::string has to explain three different types (one for each of the three popular C++ stdlib implementations) and still got some details wrong resulting in a cycle of corrections.

So that wouldn't really fit C very well and I'd suggest that Rust's String, which is essentially just Vec<u8> plus a promise that this is a UTF-8 encoded string, is closer.
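A minimal sketch of that claim (the byte values are illustrative): `String::from_utf8` is just a checked conversion from `Vec<u8>`, invalid bytes are rejected up front, and the underlying bytes come back out for free.

```rust
fn main() {
    // String is essentially Vec<u8> plus a checked UTF-8 invariant.
    let bytes: Vec<u8> = vec![0x68, 0x69]; // the bytes of "hi"
    let s = String::from_utf8(bytes).expect("valid UTF-8");
    assert_eq!(s, "hi");

    // Invalid bytes are rejected rather than producing a broken string.
    assert!(String::from_utf8(vec![0xff]).is_err());

    // And the underlying Vec<u8> is recoverable without copying.
    assert_eq!(s.into_bytes(), vec![0x68, 0x69]);
}
```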



Yeah, WG14 has had enough time to provide safer alternatives for string and arrays in C, but that isn't a priority, apparently.


Add concurrency and you more or less came up with the same list C's own creator came up with when he started working on a new language.


And constructors and destructors to be able to use those vectors and hash maps properly without worrying about memory leaks.

And const references.

And lambdas.


Nit: please don’t push to my browser history every time I expand one of the sections… I had to press my browser’s back button a dozen or so times to get back out of your site.


You can also hold down the back button to get a menu of previous pages in order to skip multiple back button presses. (I still agree with your point and you might already know that. Maybe it helps someone.)


Thanks. I'll look into that. It was recommended exactly for backtracking, but I get that if you want to leave, it's a whole lot of backpedaling :-)


Use history.replaceState() instead of history.pushState() and you're all good.


Thanks. It makes sense. I'll switch.


Playing music doesn’t require unlocking though, at least not from the Music app. If YouTube requires an unlock that’s actually a setting YouTube sets in their SiriKit configuration.

For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.

Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.

