You know, I never understood this. Limits (how an expression behaves as some quantity, say dx, approaches zero) are much easier (for me) to understand than infinitesimals (an idea which obviously can't even exist on the real line), and with limits it's much harder to end up in a contradiction.
Yet, the idea of infinitesimals is very popular in applications. For example, in undergrad physics courses, when deriving formulas (essentially, doing math!) they keep talking about a "small area" or a "small volume" (over which a given function is assumed to stay constant).
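To make that concrete, a standard example (mine, not anything specific to those courses) is deriving a sphere's volume by summing thin shells, where the surface area is treated as constant across each shell's "small volume":

    dV = 4\pi r^2 \, dr, \qquad V = \int_0^R 4\pi r^2 \, dr = \tfrac{4}{3}\pi R^3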
When you're actually computing things, such as numerically approximating an integral, you do so by breaking it into small pieces and adding them together. Infinitesimals are the limit of that process as the piece size goes to zero. I think the key problem here is that students use the symbolic rules but never actually do numerical computations of this kind.
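Here's a minimal sketch of that process in Python (my illustration, names are mine): a midpoint Riemann sum for the integral of x^2 over [0, 1], where shrinking dx drives the sum toward the exact value 1/3.

    # Midpoint Riemann sum: approximate the integral of f over [a, b]
    # by summing f(midpoint) * dx over n small pieces.
    def riemann_sum(f, a, b, n):
        dx = (b - a) / n
        return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

    # As dx shrinks (n grows), the sum approaches the exact value 1/3.
    for n in (10, 100, 1000):
        print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))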
PS. Please don't take this as an endorsement of LabView. It has its place in non-CS fields with high turnover, where it's really bad if only the group's current programmer understands the code. I'm talking academia here; I probably wouldn't use it in a more stable environment.
I just had this problem in a GUI. There was an edit view, correctly created and initialized with a default value. The user could then type in a value, but whatever they entered would be ignored.
TLDR -- Event structures are my favorite new feature. Try them out and see if you still hate LabView nearly as much. I also suggest not actually using any of your GUI variables directly. Treat them like declarations and use local variables everywhere else. Otherwise the wires are impossible to trace and buggy to change.
You've probably fixed it by now, but with Event structures you can configure a block of code to execute when a value changes.
This structure is a little new; I started with LabView before it existed, so I've only been using it for the last year or two.
It lets you define code to execute on interrupt-style events, like "mouse release" on a button. In older versions I would wrap all my button-checking logic inside a big timed or while loop, because otherwise, as you've noted, the value wouldn't be re-read during the logic check.
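LabView is graphical, so there's no real textual equivalent, but as a rough Python/tkinter analogy (my sketch, not LabView code), the Event structure pattern amounts to "register a handler and let it run when the event fires":

    import tkinter as tk

    # Rough analogy to an Event structure: this handler runs only when
    # the "mouse release" event fires, with no polling loop anywhere.
    root = tk.Tk()
    button = tk.Button(root, text="Run")
    button.bind("<ButtonRelease-1>", lambda event: print("mouse release handled"))
    button.pack()
    root.mainloop()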
So my old standard style was a timed loop around each case structure, polling the buttons at 1 kHz against a local-variable copy of each button's boolean, with a sequence inside the case structure that always resets the button state to false at the end. Without the reset, it stops noticing button presses while one press is already being handled; without the while/timed loop, it never checks at all, since the program runs once and exits.
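For contrast with the sketch above, the old polling pattern looks roughly like this (again just a Python analogy; the hypothetical handle_press stands in for the case structure's body):

    import time

    button_pressed = False  # local-variable copy of the button's boolean

    def handle_press():
        # stand-in for whatever the case structure actually does
        print("button press handled")

    # Old pattern: a timed loop polls the flag at ~1 kHz and always
    # resets it after handling, so the next press can be seen.
    while True:
        if button_pressed:
            handle_press()
            button_pressed = False  # reset, or later presses are missed
        time.sleep(0.001)           # ~1 kHz polling rate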
LabView patterns are really not very clear, and my least favorite bit is that there are at least ten ways to do anything; only one or two are good, but the others all logically seem like they should work until you look really closely.
Edit: Or maybe my least favorite part is that despite being a system designed for instrumentation, controls, and data acquisition, my god is it a pain to plot anything.