The new operator in Java doesn't always heap-allocate: it depends on the JIT and escape analysis, and the object may be stack/register allocated if it's proven safe to do so.
* Series battery cells in a battery pack inevitably become imbalanced. This is extremely common and why cell balancing was invented.
* Dyson uses a very nice ISL94208 battery management IC that includes cell balancing. It only requires 6 resistors that cost $0.00371 each, or 2.2 cents in total for six.
* Dyson did not install these resistors. (They even designed the V6 board, PCB 61462, to support them. They just left them out.)
* Rather than letting an unbalanced pack naturally result in lower usable capacity, when the cells go moderately (300mV) out of balance (by design, see step 3), Dyson programmed the battery to stop working... permanently. It will give you the 32 red blinks of death and will not charge or discharge again. It could not be fixed. Until now.
It's largely down to NASA's "waterfall" development approach. Everything must be custom designed from the ground up, and everything has to be verified correct and safe before the first bolt is installed.
That's why a lot of the "new space" oriented programs emphasize "commercial-off-the-shelf" solutions so much. They're only just barely catching on to the relatively obvious bit that costs can be reduced a lot by relying more heavily on existing commercial solutions. Space isn't the most extreme environment compared to what a lot of industrial gear has to be able to deal with.
Some examples that come to mind: the cameras used to record video of Perseverance's landing, as well as most of the Ingenuity copter itself, were COTS parts. IIRC, as a result Ingenuity has more processing power on board than Perseverance's main computers. These were all low-cost, lower-priority components, but they did a great job of showcasing the usability of COTS parts.
Careful, careful. The COTS angle is a bit misleading, as a great deal of the total cost is in testing and certification. Some might be under the impression that JPL just took some COTS parts, stuck them on Ingenuity, and that's that.
This is not the case. Of course every component had to be tested for space worthiness and possible interference with other systems. All that takes time, money, and specialised facilities.
It's also important to keep in mind that the helicopter was a technology demonstrator, a proof-of-concept that played no critical part in the overall mission. Its job was to perform one flight to show it could be done. It makes a big difference whether your components only need to do their job once, or whether you need a guaranteed minimum endurance and the entire mission depends on them.
I've been using a simple ad-hoc unity-build build.bat/build.sh "system" for years now, and it works wonders (there's a minimal sketch after the list below).
YAGNI will serve you well: 99.99999(repeating)% of everyone's code will only ever be built and run on one, maybe two, platforms, so why bother with these insane monstrosities that we call "build systems"?
The few times I've needed to build for a new platform, I just wrote that build script then and there; it took a few minutes and that was it.
Modern machines can churn through tens, if not hundreds, of thousands of lines of C code in less than a second, so incremental builds aren't needed either (and if anything, with too many translation units you end up with linking being the bottleneck).
Single TU benefits:
- Global optimizations "for free".
- Make all functions static (except main) and you get --gc-sections "for free".
- Linking is blazingly fast.
- Don't have to bother with header files.
- No one has to download anything to build my code; I make it work on a default msvc/gcc/clang install (i.e. if you have gcc, cl, or clang in your path when running build.bat/build.sh, it will build).
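For illustration, a minimal self-contained sketch of the single-TU idea (file and function names here are made up; in a real project the "sections" below would be separate .c files pulled in with #include):

/* build.c - the whole program is one translation unit; other .c files
   would be #included here instead of being compiled separately and linked */
#include <stdio.h>

/* ---- would normally be: #include "math_util.c" ---- */
static int add3(int a, int b, int c) { return a + b + c; }

/* ---- would normally be: #include "app.c" ---- */
static void app_run(void) { printf("%d\n", add3(1, 2, 3)); }

int main(void)
{
    app_run();   /* only main is non-static, so the compiler can discard unused statics */
    return 0;
}

/* build.sh is then a single compiler invocation, e.g.: cc -O2 -o app build.c */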
* one TU with code that is fast to compile and modified often
* another TU with code that is slow to compile but modified rarely
I still use header files, but the fast-to-compile and modified-often code goes directly into headers, so I can still organize my code into separate files.
I got sick of juggling code that migrated from one category to the other, so I wrote a little script that deals with chopping up a large source file into multiple TUs before feeding them to the compiler.
>This would prevent integer operation reordering as an optimization, leading to slower code.
The sane way to address that is to add explicit opt-in annotations like 'restrict'.
#push_optimize(assume_no_integer_overflow)
int x = a + b;
// more performance orientated code
#pop_optimize
// back to sane C
#push_optimize(assume_no_alias(a, b), assume_stride(a, 16), assume_stride(b, 16))
void compute(float *a, float *b, int index)
{
    // here the compiler can assume a and b do not alias
    // and that it can always load 16 bytes at a time:
    // the programmer has made sure the buffers are aligned and padded so that,
    // for any index, there are always 16 bytes to load
    // so go on, use any vectorized SIMD instruction you want
#pop_optimize
// back to sane C
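For comparison, the closest thing C already has to such an opt-in annotation is restrict itself; a minimal sketch (the function and variable names are made up):

#include <stddef.h>
#include <stdio.h>

/* restrict is the programmer's promise that dst and src don't alias,
   which frees the compiler to reorder and vectorize the loop */
static void scale(float *restrict dst, const float *restrict src, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

int main(void)
{
    float in[4] = { 1, 2, 3, 4 }, out[4];
    scale(out, in, 4, 0.5f);
    printf("%f\n", out[0]);
    return 0;
}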
That’s a lot uglier and clunkier than just using ckd_add, ckd_mul, etc. for safe checked arithmetic. Plus, if an overflow occurs you still get an incorrect result, which you probably don’t want.
Or maybe I’m wrong? Do people actually want overflows to occur and incorrect results? If they’re willing to tolerate incorrect results, why would they also want optimizations disabled?
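For reference, a minimal sketch of the checked-arithmetic route, assuming a C23 compiler that ships <stdckdint.h>:

#include <limits.h>
#include <stdckdint.h>   /* C23: ckd_add, ckd_sub, ckd_mul */
#include <stdio.h>

int main(void)
{
    int a = INT_MAX, b = 1, c;

    /* ckd_add returns true if the mathematical result didn't fit in c */
    if (ckd_add(&c, a, b)) {
        fprintf(stderr, "overflow detected, result discarded\n");
        return 1;
    }
    printf("%d\n", c);
    return 0;
}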
The thing is, it's only ugly in the rare case where absolute performance is worth fighting for, and not ugly in the majority case where performance isn't in the top three most important things.
No, GP's proposal is ugly in the majority case. If you're going to make signed overflow defined behaviour then every time you write:
int c = a + b;
You have to assume it might overflow and give an incorrect result. So now you need to check everything, everywhere, and you don't get any optimizations unless you explicitly ask for them with those ugly #push_optimize annotations. I completely fail to see how this is an advantage.
The way C works right now, the assumption is that you want optimization by default and safety is opt-in. The GP's proposal takes away the optimization by default. It then makes incorrect results the default, but it does not make safety the default. To make safety the default you would have to force people to write conditionals all over the place to check for the overflows with ckd_add, ckd_mul etc. Merely writing:
int c = a + b;
Does not give you any assurances that your answer will be correct.
You misunderstood what I was saying. Those statements are from the perspective of the hypothetical language proposal I was critiquing. That proposal turns off all the optimizations by default and forces you to add annotations to turn them back on. At the same time, it does not actually give you anything useful for your trouble because it still doesn't solve the problem of signed overflow giving incorrect results.
The way C is now, you get the performance by default and safety is opt-in. That's the tradeoff C makes and it's a good one. Other languages give safety by default and make performance opt-in. The proposal I was responding to gives neither.
But why put in unreachable?
Doesn't make any sense to me.
If a branch is truly not supposed to ever happen, why have a branch at all? Just remove that code from the source entirely- that helps the optimizer even more, because the most optimal code is of course no code at all.
> But why put in unreachable? Doesn't make any sense to me.
Because sometimes you don't have a choice. E.g. say you have a switch/case: if you don't do anything and none of the cases match, it's equivalent to having an empty `default`. But you may want `default: unreachable()` instead, to tell the compiler that it needs no fallback.
> If a branch is truly not supposed to ever happen, why have a branch at all? Just remove that code from the source entirely- that helps the optimizer even more, because the most optimal code is of course no code at all.
Except the compiler may compile code with the assumption that it needs to handle edge cases you "know" are not valid. By providing these branches-which-are-not, you're giving the compiler more data to work with. That extra data might turn out to be useless, but it might not.
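A minimal sketch of that switch/case situation, assuming C23's unreachable() from <stddef.h> (older GCC/Clang have __builtin_unreachable()); the enum and function are made up:

#include <stddef.h>   /* C23: unreachable() */

enum color { RED, GREEN, BLUE };

/* the caller guarantees c is a valid enum color */
static int to_channel(enum color c)
{
    switch (c) {
    case RED:   return 0;
    case GREEN: return 1;
    case BLUE:  return 2;
    default:    unreachable();   /* no fallback path needs to be generated */
    }
}

int main(void) { return to_channel(GREEN); }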
That is something a decent type system should solve - make it impossible to pass 'incomplete values' on, so that any later code which depends on whether the error was handled expects the appropriate type, and the compiler errors at that call site.
An unused variable means you weren't passing it on anywhere, so there is no code which depends on its value - how can that be a bug?
future = x.do_async();
return;
should not error out because of 'unused variable'; it should give an error message about the lifetime of the future object.
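A rough C analogue of the idea, assuming a C23 compiler that honours [[nodiscard]] (the async_handle type and do_async function are made up): the diagnostic is driven by the return type itself rather than by a generic "unused variable" heuristic, which is part of what's being asked for here.

struct async_handle { int id; };

[[nodiscard]] static struct async_handle do_async(void)
{
    return (struct async_handle){ .id = 42 };   /* stand-in for kicking off async work */
}

int main(void)
{
    do_async();                            /* warning: nodiscard value is ignored */
    struct async_handle f = do_async();    /* assigning silences it, even if f is never consumed... */
    (void)f;                               /* ...which is exactly the gap a stricter type system would close */
    return 0;
}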
By making text flow to 100% of the width, everyone can simply resize their browser window to get their own perfect reading width.
I usually run my browser tiled to 50% of the screen, and the new design wastes a lot of space at that size (you have to shrink to a really narrow width before it enters 'mobile mode' and uses 100% of the width again - it's pretty insane: you can see more text at, say, 30% width, but at 40%-60% it pops in that useless white-space padding on the left, leading to a narrower effective text width).
Yeah, I just tried it out and it looks pretty good! (If I resize my window to be 1/3 the width of my monitor, that is.)
I totally agree with you: I like to use my browser at ~50% of screen width on a 27" screen, so it looks terrible there because of all the wasted space. If they made it work at 50% width like it does at 30% width, I'd have no complaints.
Politicians only start caring about people's problems if they are either personally affected or one of their close friends/benefactors is affected.