From a quick glance over a couple of pages, it looks sensible, which makes me curious to see some of the exceptions to the "shall" rules. With a project of this size, that should give some idea of how useful such standards are.
Why would the modern environment materially change this? The resource allocation fixed at initialization reflects the limitations of the hardware. That budget is what it is.
I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.
The overwhelming majority of embedded systems are designed around a maximum buffer size and a known worst-case execution time. Attempting to balance resources dynamically in a fine-grained way is almost always a mistake in these systems.
Putting the words "modern" and "drone" in your sentence doesn't change this.
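For a concrete picture of that style (the element type and the 256-entry capacity here are made up for illustration), the worst case is baked in at compile time and overflow is just another handled return value:

    #include <array>
    #include <cstddef>

    // Hypothetical sensor sample; the fields are placeholders.
    struct Sample { float range; float bearing; };

    // Fixed-capacity ring buffer: the worst-case size is a compile-time
    // constant, so memory use and iteration cost are bounded by design.
    template <typename T, std::size_t Capacity>
    class StaticQueue {
    public:
        bool push(const T& value) {
            if (count_ == Capacity) { return false; }  // overflow is an explicit, handled case
            buffer_[(head_ + count_) % Capacity] = value;
            ++count_;
            return true;
        }
        bool pop(T& out) {
            if (count_ == 0) { return false; }
            out = buffer_[head_];
            head_ = (head_ + 1) % Capacity;
            --count_;
            return true;
        }
    private:
        std::array<T, Capacity> buffer_{};
        std::size_t head_ = 0;
        std::size_t count_ = 0;
    };

    // 256 is an assumed worst-case depth, fixed at design time.
    StaticQueue<Sample, 256> g_sampleQueue;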
The compute side of real-time tracking and analysis of entity behavior in the environment is bottlenecked by what the sensors can resolve at this point. On the software side, you really can’t flood the zone with enough drones and the like for the software to be unable to keep up.
These systems have limits, but they are extremely high, and in the improbable scenario that you hit them, it becomes a prioritization problem. That design problem has mature solutions dating back several decades, to when the limits were a few dozen simultaneous tracks.
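A rough sketch of that classic approach, with an invented capacity and priority scheme: when the fixed track table fills up, you drop the least important track instead of allocating more memory.

    #include <array>
    #include <cstddef>

    struct Track { int id; int priority; /* state vector, etc. */ };

    constexpr std::size_t kMaxTracks = 64;   // assumed design limit
    std::array<Track, kMaxTracks> g_tracks{};
    std::size_t g_trackCount = 0;

    // Insert a track; when the fixed table is full, evict the lowest-priority
    // entry if the new track outranks it. No allocation ever happens.
    bool addTrack(const Track& t) {
        if (g_trackCount < kMaxTracks) {
            g_tracks[g_trackCount++] = t;
            return true;
        }
        std::size_t lowest = 0;
        for (std::size_t i = 1; i < g_trackCount; ++i) {
            if (g_tracks[i].priority < g_tracks[lowest].priority) { lowest = i; }
        }
        if (t.priority > g_tracks[lowest].priority) {
            g_tracks[lowest] = t;
            return true;
        }
        return false;  // the new track is the least important; drop it
    }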
There are missiles in which the allocation rate is calculated per second and then the hardware just has enough memory for the entire duration of the missile's flight plus a bit more. Garbage collection is then done by exploding the missile on the target ;)
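As a sketch with invented numbers, that scheme is essentially a bump pointer over a region sized for the whole flight, and free() simply doesn't exist:

    #include <cstddef>

    // Assumed figures: peak allocation rate and maximum flight time.
    constexpr std::size_t kBytesPerSecond   = 8 * 1024;
    constexpr std::size_t kMaxFlightSeconds = 120;
    constexpr std::size_t kPoolBytes =
        kBytesPerSecond * kMaxFlightSeconds + 16 * 1024;  // "plus a bit more"

    alignas(alignof(std::max_align_t)) static unsigned char g_pool[kPoolBytes];
    static std::size_t g_used = 0;

    // Bump allocator: allocation is O(1) and nothing is ever freed.
    // "Garbage collection" happens when the flight ends.
    void* flightAlloc(std::size_t bytes) {
        // Round each request up to max alignment so the pointer is usable for any type.
        const std::size_t align   = alignof(std::max_align_t);
        const std::size_t rounded = (bytes + align - 1) & ~(align - 1);
        if (g_used + rounded > kPoolBytes) { return nullptr; }  // unreachable if the budget math holds
        void* p = &g_pool[g_used];
        g_used += rounded;
        return p;
    }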
What you are actually doing here is moving allocation logic from the heap allocator to your program logic.
This way you can use pools or buffers whose sizes you know exactly.
But unless your program uses exactly the same amount of memory at all times, you now have to manage allocations within those pools/buffers yourself.
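As a sketch of what that in-program management tends to look like (the Message type and the 128-slot size are placeholders): a fixed pool with a free list threaded through the unused slots, so "allocation" becomes your own bounded logic.

    #include <array>
    #include <cstddef>

    struct Message { unsigned char payload[32]; };

    // Fixed pool of Message objects; the "allocator" is now explicit program
    // logic: a singly linked free list stored in a parallel index array.
    template <std::size_t N>
    class MessagePool {
    public:
        MessagePool() {
            for (std::size_t i = 0; i + 1 < N; ++i) { next_[i] = i + 1; }
            next_[N - 1] = kNone;
        }
        Message* acquire() {
            if (freeHead_ == kNone) { return nullptr; }  // pool exhausted: a handled, bounded failure
            const std::size_t i = freeHead_;
            freeHead_ = next_[i];
            return &slots_[i];
        }
        void release(Message* m) {
            const std::size_t i = static_cast<std::size_t>(m - slots_.data());
            next_[i] = freeHead_;
            freeHead_ = i;
        }
    private:
        static constexpr std::size_t kNone = static_cast<std::size_t>(-1);
        std::array<Message, N>     slots_{};
        std::array<std::size_t, N> next_{};
        std::size_t                freeHead_ = 0;
    };

    MessagePool<128> g_messages;  // 128 is an assumed worst-case count

The exhaustion case is the important part: you chose the pool size at design time, so running out is a condition you decide how to handle, not an out-of-memory surprise.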
"AI" comes in various flavors. It could be a expert system, a decision forest, a CNN, a Transformer, etc. In most inference scenarios the model is fixed, the input/output shapes are pre-defined and actions are prescribed. So it's not that dynamic after all.
This is also true of LLMs. I’m really not sure what OP’s point is: AI (really, all ML) is more or less the canonical “trivial to preallocate” problem.
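To make "trivial to preallocate" concrete, here is a sketch with made-up layer dimensions: every buffer an inference pass needs is a statically sized array, because nothing about the shapes changes at run time.

    #include <array>
    #include <cstddef>

    // Assumed fixed shapes for one small dense layer: 64 inputs -> 16 outputs.
    constexpr std::size_t kIn  = 64;
    constexpr std::size_t kOut = 16;

    // Weights, input, and output all live in statically sized storage;
    // nothing about the model changes at run time, so nothing is allocated.
    static std::array<float, kOut * kIn> g_weights{};
    static std::array<float, kOut>       g_bias{};
    static std::array<float, kIn>        g_input{};
    static std::array<float, kOut>       g_output{};

    void denseForward() {
        for (std::size_t o = 0; o < kOut; ++o) {
            float acc = g_bias[o];
            for (std::size_t i = 0; i < kIn; ++i) {
                acc += g_weights[o * kIn + i] * g_input[i];
            }
            g_output[o] = (acc > 0.0f) ? acc : 0.0f;  // ReLU
        }
    }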
Where dynamic allocation starts to be really helpful is when you want to minimize peak RAM usage for coexistence purposes (e.g. other processes are running), or want to undersize your physical RAM by exploiting temporal differences between different parts of the code (i.e. components A and B never use memory simultaneously, so they can share the same RAM). It also simplifies some algorithms, and if you’re dealing with variable-length inputs it can spare you from reasoning about maximums at design time (provided you correctly handle allocation failure).
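The temporal-reuse case can also be handled without a general-purpose heap. Here's a rough sketch (the phase structs and sizes are invented) where two phases that never run concurrently share one scratch region, which is essentially what the dynamic allocator would be buying you:

    #include <cstddef>
    #include <new>

    struct PhaseAState { float samples[4096]; };
    struct PhaseBState { double accum[1024]; int counts[512]; };

    // One scratch region sized for the larger of the two phases' needs.
    constexpr std::size_t kScratchBytes =
        (sizeof(PhaseAState) > sizeof(PhaseBState)) ? sizeof(PhaseAState) : sizeof(PhaseBState);

    alignas(alignof(std::max_align_t)) static unsigned char g_scratch[kScratchBytes];

    void runPhaseA() {
        // Placement-new into the shared region; phase B is guaranteed
        // (by design, not by the compiler) not to be live at the same time.
        PhaseAState* a = new (g_scratch) PhaseAState{};
        // ... use *a ...
        a->~PhaseAState();
    }

    void runPhaseB() {
        PhaseBState* b = new (g_scratch) PhaseBState{};
        // ... use *b ...
        b->~PhaseBState();
    }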
In general, are these good recommendations for building software for embedded or lower-spec devices? I don't really know how to write preprocessor macros anyhow, for instance, so as I'm reading this I'm like "yeah, I agree..." until I hit the "no stdio.h" rule!
The first time I came across this document, someone was using it as an example of how the C++ you write for an Arduino Uno is still C++, despite missing so many features.
The font used for the code samples looks nearly the same as in "The C++ Programming Language" (3rd edition / "Wave") by Bjarne Stroustrup. Looking back, yeah, I guess it was weird that he used italic, variable-width text for code samples but used tab stops to align the comments!
https://www.stroustrup.com/JSF-AV-rules.pdf