Imagine a comparison function that needs to call sort() as part of its implementation. You could argue that's probably a bad idea, but in that case it would be a problem for the thread-local approach.
(You could solve that with a manually maintained stack of contexts in a thread local, but you'd have to do that case by case.)
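Roughly what I have in mind, sketched around qsort() (the ctx_push/ctx_current helpers and the by_key context are made-up names for the example, not any real API):

    #include <assert.h>
    #include <stdlib.h>

    /* Per-case bookkeeping: a small thread-local stack of context pointers,
     * so a qsort() started from inside a comparator doesn't clobber the
     * outer call's context. (_Thread_local is C11.) */
    static _Thread_local void *ctx_stack[8];
    static _Thread_local int   ctx_top;

    static void  ctx_push(void *ctx) { assert(ctx_top < 8); ctx_stack[ctx_top++] = ctx; }
    static void  ctx_pop(void)       { assert(ctx_top > 0); ctx_top--; }
    static void *ctx_current(void)   { assert(ctx_top > 0); return ctx_stack[ctx_top - 1]; }

    /* Example context: sort an index array by the keys it points into. */
    struct by_key { const int *keys; };

    static int cmp_by_key(const void *a, const void *b) {
        const struct by_key *ctx = ctx_current();
        int ka = ctx->keys[*(const size_t *)a];
        int kb = ctx->keys[*(const size_t *)b];
        return (ka > kb) - (ka < kb);
    }

    static void sort_indices(size_t *idx, size_t n, const int *keys) {
        struct by_key ctx = { keys };
        ctx_push(&ctx);                           /* make the context visible to cmp_by_key */
        qsort(idx, n, sizeof idx[0], cmp_by_key);
        ctx_pop();                                /* restore whatever an outer call pushed */
    }

The push/pop pair is the annoying part: every call site that might re-enter has to remember to do it, which is exactly the case-by-case bookkeeping I mean.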
Anyway, the larger point is that a re-entrant general solution is desirable. The sort example might be a bit misguided, because who calls sort-inside-sort[0]? Nobody, realistically, but these kinds of issues are prevalent in the "how do I do closures" area... and in C every API handles it slightly differently, if the authors are even aware of the issues.
[0] Because no community likes nitpicking quite like the C (or C++) community, I figured I'd preempt that objection :). C++ has solved this, so there's that.
You can at least make sure it isn't being called recursively by checking that the thread local is nil before invocation.
> a re-entrant general solution is desirable.
I know what you mean, but I just don't know why you'd want to emulate that in C. There is a real problem of people writing APIs that don't let you pass in data alongside your function pointer; the thread-local method solves 99% of those cases without changes to the original API.
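For anyone who hasn't seen the trick, it looks roughly like this with qsort() (sketch only; the names are invented, and this simple form assumes the callback never re-enters the sort on the same thread):

    #include <stdlib.h>

    /* The usual workaround for APIs like qsort() that only take a bare
     * function pointer: stash the "captured" data in a thread local and read
     * it back inside the callback. The original API stays untouched. This
     * simple form breaks if the callback re-enters qsort() on the same thread. */
    static _Thread_local const double *sort_weights;

    static int cmp_by_weight(const void *a, const void *b) {
        double wa = sort_weights[*(const int *)a];
        double wb = sort_weights[*(const int *)b];
        return (wa > wb) - (wa < wb);
    }

    static void sort_ids_by_weight(int *ids, size_t n, const double *weights) {
        sort_weights = weights;                     /* "capture" the data */
        qsort(ids, n, sizeof ids[0], cmp_by_weight);
        sort_weights = NULL;                        /* lets a caller check it isn't re-entering */
    }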
But if you really want to do all kinds of first class functions with data, do you want to use C?
I can't speak for the parent poster, but for global function declarations, yes, absolutely.
It's infuriating when a type error can "jump" across global functions just because you weren't clear about what types those functions should have had, even if those types are very abstract. So early adopters learned to sprinkle in type annotations at certain points until they discovered that the top-level was a good place. In OCaml this pain is somewhat lessened when you use module interface files, but without that... it's pain.
> I think it's pretty widely agreed that requiring type annotations at the function level is a good thing anyway. Apparently it's considered good practice in Haskell even though Haskell doesn't require it.
In Haskell-land: at the global scope, yes, that's considered good practice, especially if the function is exported from a module. When you just want a local helper function for some tail-recursive fun, it's a bit of extra ceremony for little benefit.
(... though for Rust specifically, local functions aren't really a big thing, so it doesn't matter much there. In Scala it can be a bit annoying, but the ol' subtyping-makes-inference-undecidable thing rears its ugly head there, so there's that...)
Languages with local type inference can sometimes omit type annotations on lambdas, namely when the lambda is being returned or passed as an argument to another function. In those situations the expected type of the lambda is already known from context, so the annotations can be left off.
Yeah, that's true, and it's a nice convenience even if it's not full inference. In Scala's case the parameter types may often be required, but at least the return type can be omitted, so there's that.
I think it has been a commonly held opinion in security circles for at least 15 years that the Robustness Principle is generally counterproductive to security. It almost inevitably leads to unexpected interactions between different systems, which ultimately allow "weird machines" to be constructed.
An argument can be made that it was instrumental in bootstrapping the early Internet, but it's not really necessary these days. People should know what they're doing 35+ years on.
It is usually better to state up front, fully and formally, what input is acceptable, and to reject anything else out of hand. Of course some things do need dynamic checks, e.g. ACLs and such, but that's fine... rejecting "iffy" input before we get to that stage doesn't interfere with it.
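As a toy illustration of "reject anything else out of hand", here's what strictly parsing something as mundane as a port number might look like (made-up example, obviously):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* Strict parse of a decimal port number: exactly 1-5 ASCII digits with a
     * value in 1..65535; no leading whitespace, no sign, no trailing junk,
     * no best-effort guessing. */
    static bool parse_port(const char *s, unsigned *out) {
        size_t len = strlen(s);
        if (len == 0 || len > 5) return false;
        for (size_t i = 0; i < len; i++)
            if (s[i] < '0' || s[i] > '9') return false;
        unsigned long v = strtoul(s, NULL, 10);
        if (v < 1 || v > 65535) return false;
        *out = (unsigned)v;
        return true;
    }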
> I think it's been a commonly held opinion in security circles for at least 15+ years that the Robustness principle is generally counterproductive to security
Well yes, that's because people have been misapplying and misunderstanding it. The original idea was predicated on the concept of "assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect".
But then the "fail fast, fail often" stupidity started spreading like wildfire, and companies decided that the consequences of data breaches or other security failures were an acceptable cost of doing business (even if that's not always true) compared to the cost of actually paying dev and sec teams to implement things properly. People kinda lost the plot on it: they focused only on the "be liberal in what you accept" part, went "Wow! That makes things easy", maybe checked for the most common abuse/failure/exploit modes if they bothered at all, and only patched things retroactively as issues and exploits popped up in the wild.
Doing it correctly, like building anything robust and/or secure, is a non-trivial task.
No, LSPs return the name/metadata of a concrete type. Dependent typing means that the return type of any given function in your (static) program can depend on a runtime value, e.g. user input... in well-defined ways, of course.
So, you're saying it's outside the scope of an LSP to return information about a dependent type because it's... not a concrete type? That sounds wrong.
I can make literally any language that has structs, enums, switch, and assert support dependent types. You make a boxed type (tagged union, algebraic datatype, variant, discriminated union, fucking whatever) and just assert whenever you pack/unpack it on function entry/exit. I do this all the time.
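A C-flavoured sketch of what I mean (the shape type and helper names are just for illustration):

    #include <assert.h>

    /* Poor man's "type depends on a value": a tagged union where which member
     * is meaningful depends on the (runtime) tag, and every pack/unpack goes
     * through an asserting helper. */
    enum shape_tag { SQUARE, RECT };

    struct shape {
        enum shape_tag tag;
        union {
            struct { double side; } square;   /* valid iff tag == SQUARE */
            struct { double w, h; } rect;     /* valid iff tag == RECT   */
        } u;
    };

    static struct shape pack_square(double side) {
        assert(side > 0);                     /* "refinement" checked on entry */
        struct shape s = { .tag = SQUARE, .u.square = { side } };
        return s;
    }

    static double unpack_rect_width(const struct shape *s) {
        assert(s->tag == RECT);               /* unpack only the variant we claimed */
        return s->u.rect.w;
    }

    static double area(const struct shape *s) {
        switch (s->tag) {
        case SQUARE: return s->u.square.side * s->u.square.side;
        case RECT:   return s->u.rect.w * s->u.rect.h;
        }
        assert(0 && "unhandled tag");         /* fail loudly on malformed data */
        return 0;
    }

It's obviously not checked at compile time the way real dependent types are, but the asserts give you the same "this value can't exist unless the predicate held" discipline at every pack/unpack boundary.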
In plain English, my quip boils down to 'why do we tolerate network requests in our syntax highlighters, when we don't tolerate them in our compiler frontends?'
Magit[0] is so good that I haven't felt any real need to use jj... yet. I'm sure I'll switch if it gets Emacs integration on a level similar to Magit's, but the one I tried[1] isn't quite there yet.
I think the big thing (potentially, for me) is the ability to postpone conflict resolution during a rebase. That can be quite painful in regular old git, but git-mediate helps make that less painful in practice in my particular situation and workflow.
We'll see once better non-CLI UX appears. I'm low-key excited about what could be possible in this space.
I am excited too! It is probably too much to hope for, but I'm nonetheless hoping that Magit gets a jj backend before I have enough motivation or need to learn a new tool to do the same old stuff :D
The OCaml module system is great, but the module system in Scala isn't the usual Java package thing... it's traits. It's about as powerful as the OCaml module system on any axis I've ever used, but it's easy to miss how powerful it is. (Scala 3 added some ergonomics to make it easier to use, but it was all technically accessible in Scala 2 with "workarounds".)
"You want many folds!" We gottem!