There are old idioms in C where a null pointer is intentionally dereferenced to force a crash. It's not the best way to write that code: beyond being less explicit about intent, it also isn't guaranteed to work.
I was getting to a point in the code. I could tell by a log statement or some such. But I didn't know in what circumstances I was getting there - what path through the code. So I put in something like
char *p = 0;
*p = 1;
in order to cause a core dump. That core dump gave me the stack trace, which let me see how I got there.
But I never checked that in. If I had, I would have expected a severe verbal beating at the code review. More to the point, it never made it into a release.
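For what it's worth, the same effect can be had explicitly with std::abort(), which is guaranteed to terminate abnormally instead of relying on undefined behavior; a minimal sketch with a hypothetical function name:

    #include <cstdlib>  // std::abort

    void how_did_we_get_here() {
        // Deliberately terminate abnormally. With core dumps enabled this
        // leaves a core file whose stack trace shows the call path, same as
        // the null-pointer write, but with the intent spelled out.
        std::abort();
    }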
Early returns make the code more linear, reduce conditional/indent depth, and in some cases make the code faster. In short, they often make code simpler. The "no early returns" rule is a soft version of "no gotos". There are cases where it is not possible to produce good code while following those heuristics. A software engineer should strive to produce the best possible code, not rigidly follow heuristics even when they don't make sense.
There is an element of taste. Don't add random early returns if they don't improve the code. But there are many, many cases where they make the code much more readable and maintainable.
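A contrived sketch of the difference, using a hypothetical Request type:

    struct Request {
        bool is_valid = true;
        bool is_authorized = true;
    };

    int handle(const Request&) { return 0; }  // stand-in for the real work

    // Nested version: every precondition adds another level of indentation.
    int process_nested(const Request* req) {
        int rc = -1;
        if (req != nullptr) {
            if (req->is_valid) {
                if (req->is_authorized) {
                    rc = handle(*req);
                }
            }
        }
        return rc;
    }

    // Early-return version: preconditions are handled and dismissed up front,
    // and the main logic reads linearly at a single indentation level.
    int process_early(const Request* req) {
        if (req == nullptr)      return -1;
        if (!req->is_valid)      return -1;
        if (!req->is_authorized) return -1;
        return handle(*req);
    }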
What comparable alternative is available today? None of the European companies has a production 5th generation aircraft nor the integrated sensing capabilities. This is what is driving the incredible demand despite misgivings. You can't survive in a near peer combat environment without it.
Countries are buying it because it is the only game in town for certain high-value capabilities, not because they necessarily like the implications of there being a single seller of those capabilities. For better or worse, the US has been flying these for 30 years and has 6th generation aircraft in production. Everyone else is still figuring out their first 5th generation offering.
Closing that gap is a tall order. Either way, European countries need these modern capabilities to have a capable deterrent.
I'm no expert, but the narrative is that it really depends what you need them for. And keep in mind that joining the jet fighter programme also means joining its development, exerting a certain amount of influence through your funding. For example, it is conceivable that a sufficiently upgraded Gripen tailored to our needs (which aren't really dogfighting, as I understand it) would be just as effective, and cheaper.
Anyway we're all just crossing our fingers that the US is just temporarily insane and will eventually come to its senses. What else can you do.
In many regards, the F-35 was the first aircraft explicitly engineered for the requirements of drone-centric warfare. Its limitation is that this capability was grafted onto an older (by US standards) 5th generation tech stack that wasn't designed for this role from first principles. I think this is what ultimately limited production of the F-22, which is not upgradeable even to the standard of the F-35 for drone-centric environments.
The new 6th generation platforms being rolled out (B-21, F-47, et al) are all pure first-principles drone-warfare native platforms.
Drones were not discussed much when the requirements for the F-35 were formed.
The F-22 was considered very open and upgradable for its era. It's just so freakin' old that FireWire was the unproven new hotness at the time.
Current AF efforts do focus on drone and loyal-wingman concepts, but these don't have much material impact on avionics. There, everything the AF is talking about is agility in delivering capabilities through open systems architecture. That's why they're doing things like trying out k8s on military aircraft. It's not about drones specifically but about things like delivering new EW capabilities in days or hours instead of decades.
For a deeper dive into the latter, look into what Dr. Will Roper was talking about during his tenure.
My recollection is that it came down to two factors. Pragmatically, the pool of highly skilled C++ programmers was vastly larger and the ecosystem was much more vibrant, so development scaled more easily and had a lower maintenance risk. By 2005 they had empirical evidence that it was possible, albeit more difficult, to build high-reliability software in C++ as the language and tooling matured.
These days they are even more comfortable using C++ than they were back then due to improvements in process, tooling, and language.
There isn't much of a conversation to be had here. For low-level systems code, exceptions introduce a bunch of issues and ugly edge cases. Error codes are cleaner, faster, and easier to reason about in this context. Pretty much all systems languages use error codes.
In C++, which supports both, exceptions are commonly disabled at compile time for systems code. This is pretty idiomatic; I've never worked on a C++ code base that used exceptions. On the other hand, high-level non-systems C++ code may use exceptions.
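A minimal sketch of the error-code style, assuming a hypothetical Status enum and a build with exceptions disabled (e.g. -fno-exceptions):

    #include <cstdint>

    enum class Status : std::uint8_t { Ok, Overflow };

    // The return value carries the status and the result goes through an
    // out-parameter. There is no hidden control flow and no unwinding;
    // the cost of the error path is a compare and a branch.
    Status checked_add(std::uint32_t a, std::uint32_t b, std::uint32_t& out) {
        if (b > UINT32_MAX - a) return Status::Overflow;
        out = a + b;
        return Status::Ok;
    }

    // Callers are forced to spell out the error path:
    //   std::uint32_t sum;
    //   if (checked_add(x, y, sum) != Status::Ok) { /* handle it here */ }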
What you wrote is historically correct, but newer analysis shows exceptions are faster than error codes if you actually check the error codes. Of course, checking error codes is tedious, so often you don't. Also, in microbenchmarks error codes are faster; only when you do more complex benchmarks do exceptions show up as faster.
In my experience, the performance benefits of exceptions relative to other error-handling mechanisms are not borne out in practice; it doesn't replicate. But that is not the main reason to avoid them.
Exceptions have very brittle interaction with some types of low-level systems code because unwinding the stack can't be guaranteed to be safe. Trying to make this code robustly exception-safe requires a lot of extra code and has runtime overhead.
Using exceptions in these kinds of software contexts is strictly worse from a safety and maintainability standpoint.
From quickly glancing over a couple of pages, that looks sensible. Which makes me curious to see some exceptions to the "shall" rules. With a project of this size, that should give some idea about the usefulness of such standards.
Why would the modern environment materially change this? The up-front resource allocation reflects the limitations of the hardware. That budget is what it is.
I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.
The overwhelming majority of embedded systems are designed around a max buffer size and a known worst-case execution time. Attempting to balance resources dynamically in a fine-grained way is almost always a mistake in these systems.
Putting the words "modern" and "drone" in your sentence doesn't change this.
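A sketch of what that fixed budget looks like in code (hypothetical names and worst-case sizes):

    #include <array>
    #include <cstddef>

    // Everything is sized to its design-time worst case; there is no heap,
    // so there is no runtime allocation-failure path to reason about.
    constexpr std::size_t kMaxEntries = 256;  // worst-case simultaneous entries
    constexpr std::size_t kMaxMsgSize = 512;  // worst-case message length

    struct Entry { float x, y, z, vx, vy, vz; };

    static std::array<Entry, kMaxEntries>     g_entries{};
    static std::array<std::byte, kMaxMsgSize> g_msg_buf{};
    static std::size_t                        g_entry_count = 0;

    // Hitting the limit is an explicit design-level decision (drop, replace
    // by priority, etc.), not an out-of-memory surprise.
    bool add_entry(const Entry& e) {
        if (g_entry_count == kMaxEntries) return false;
        g_entries[g_entry_count++] = e;
        return true;
    }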
The compute side of real-time tracking and analysis of entity behavior in the environment is bottlenecked by what the sensors can resolve at this point. On the software side, you really can't flood the zone with enough drones etc. to make the software unable to keep up.
These systems have limits but they are extremely high and in the improbable scenario that you hit them then it is a priority problem. That design problem has mature solutions from several decades ago when the limits were a few dozen simultaneous tracks.
There are missiles in which the allocation rate is calculated per second and then the hardware just has enough memory for the entire duration of the missile's flight plus a bit more. Garbage collection is then done by exploding the missile on the target ;)
What you are actually doing here is moving allocation logic from the heap allocator to your program logic.
In this way you can use pools or buffers whose size you know exactly.
But unless your program uses exactly the same amount of memory at all times, you now have to manage memory allocations in your pools/buffers yourself.
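For instance, a fixed-size pool with a simple free list (a sketch, names hypothetical) makes that management explicit: acquire() can fail only by exhausting the pool, and you decide up front what happens then.

    #include <array>
    #include <cstddef>
    #include <new>

    // A pool of N slots managed by plain program logic instead of a heap
    // allocator. The worst case is known: acquire() fails exactly when all
    // N slots are in use.
    template <typename T, std::size_t N>
    class FixedPool {
    public:
        FixedPool() {
            for (std::size_t i = 0; i < N; ++i) free_[i] = &slots_[i];
            free_count_ = N;
        }

        T* acquire() {
            if (free_count_ == 0) return nullptr;  // pool exhausted
            void* slot = free_[--free_count_];
            return new (slot) T{};                 // construct in place
        }

        void release(T* p) {
            p->~T();
            free_[free_count_++] = p;
        }

    private:
        struct alignas(T) Slot { std::byte bytes[sizeof(T)]; };
        std::array<Slot, N>  slots_{};
        std::array<void*, N> free_{};
        std::size_t          free_count_ = 0;
    };

    // Usage with a hypothetical message type:
    //   static FixedPool<Message, 64> g_msg_pool;
    //   Message* m = g_msg_pool.acquire();
    //   if (m == nullptr) { /* budget exceeded: drop, reuse, or flag it */ }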
"AI" comes in various flavors. It could be a expert system, a decision forest, a CNN, a Transformer, etc. In most inference scenarios the model is fixed, the input/output shapes are pre-defined and actions are prescribed. So it's not that dynamic after all.
This is also true of LLMs. I’m really not sure of OP’s point - AI (really all ML) generally is like the canonical “trivial to preallocate” problem.
Where dynamic allocation starts to be really helpful is when you want to minimize your peak RAM usage for coexistence purposes (e.g. you have other processes running), or want to undersize your physical RAM requirements by exploiting temporal differences between different parts of the code (i.e. components A and B never use memory simultaneously, so either A or B can reuse the same RAM). It also simplifies some algorithms, and if you're ever dealing with variable-length inputs it can help you avoid having to reason about maximums at design time (provided you correctly handle allocation failure).
In general, are these good recommendations for building software for embedded or lower-spec devices? I don't know how to do preprocessor macros anyhow, for instance - so as I am reading this I am like "yeah, I agree..." until the "no stdio.h"!
The first time I came across this document, someone was using it as an example of how the C++ you write for an Arduino Uno is still C++ despite missing so many features.
The font used for code samples looks nearly the same as in "The C++ Programming Language" (3rd edition / "Wave") by Bjarne Stroustrup. Looking back, yeah, I guess it was weird that he used italic variable-width text for code samples, but used tab stops to align the comments!
My guess is that you're assuming all user defined types, and maybe even all non-trivial built-in types too, are boxed, meaning they're allocated on the heap when we create them.
That's not the case in C++ (the language in question here) and it's rarely the case in other modern languages because it has terrible performance qualities.
I think usefulcat interpreted "std::vector<int> allocated and freed on the stack" as creating a default std::vector<int> and then destroying it without pushing elements to it. That's what their godbolt link shows, at least, though to be fair MSVC seems to match the described GCC/Clang behavior these days.
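For reference, the snippet being discussed is roughly this (a reconstruction, not the exact godbolt link):

    #include <vector>

    // Default-construct a vector and destroy it without pushing anything.
    // A default-constructed std::vector<int> performs no heap allocation,
    // so modern GCC/Clang (and, per the comment above, current MSVC) fold
    // this function down to a plain "return 0".
    int make_and_drop() {
        std::vector<int> v;  // the vector object itself lives on the stack
        return static_cast<int>(v.size());
    }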
Well if you're using the standard library then you're not really paying attention to allocations and deallocations for one. For instance, the use of std::string. So I guess I'm wondering if you work in an industry that avoids std?
I work in high-scale data infrastructure. It is common practice to do no memory allocation after bootstrap. Much of the standard library is still available despite this, though there are other reasons to not use the standard containers. For example, it is common to need containers that can be paged to storage across process boundaries.
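The discipline usually looks something like this (a sketch, hypothetical names and sizes): everything that needs heap is reserved once during bootstrap, and the steady-state path only reuses that capacity.

    #include <cstddef>
    #include <string>
    #include <vector>

    struct ConnState {
        std::vector<char> read_buf;
        std::string       scratch;
    };

    // Called once at startup; all allocation happens here.
    ConnState make_state() {
        ConnState s;
        s.read_buf.reserve(64 * 1024);  // design-time worst-case message size
        s.scratch.reserve(4 * 1024);
        return s;
    }

    // The hot path reuses the reserved capacity and never grows it, so the
    // allocator is not touched after bootstrap (assuming len stays within
    // the reserved worst case).
    void on_message(ConnState& s, const char* data, std::size_t len) {
        s.read_buf.assign(data, data + len);
    }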
For hunting in a way you want? Not having to pay taxes? Raising your children in the nomadic hunter lifestyle? I think schooling (and lots of other things) is mandatory in the US as well. And child protective services etc. exist. So it might be easier in the US to cosplay as a forest nomad for some time (and I know some people have done it as hermits for a bit longer), but a real nomadic lifestyle means living together with other people in a tribe. That does not work (just the rule to move camp after 2 weeks prevents that).
It isn't common but it definitely happens in some parts of the US.
There are no taxes to pay if you aren't earning anything. It is legal, if inadvisable, to raise children this way in much of the US. There is a "live and let live" ethos around it, especially in the western US. The true nomads are probably most common in the mountain West of the US in my experience. While the rule is two weeks in one location, in many remote areas there is no enforcement and no one really cares. They sometimes have mutually beneficial arrangements with ranchers in the area. These groups tend to be relatively small.
Alaska is famously popular for groups of families disappearing into the remote wilderness to create villages far from modern civilization. It is broadly tolerated there. Often many years will pass between sightings of people that disappeared into the wilderness.
I always wondered what a high-resolution satellite survey of the Inside Passage of Alaska and the north coast of British Columbia would find in that vast and impenetrable wilderness. Anecdotally there should be dozens of villages hidden in there that have been operating for decades.
I think I did read about it and have met folks who are into that. I have never been to the US, though, but the main complaint I heard was pretty much that state laws make it impossible. But I am open to reading suggestions.
There’s what is explicitly legal, there is what you can get away with, and there is moving between jurisdictions before they even know you’re there.
The US is large, and if you keep your head down and homeschool to some level of competence, I bet you could go many generations, especially if you were willing to blend in as necessary.
The rule is likely speaking to this code.