One consequence of Noether's theorem is that if a system is symmetric under time translation, then energy is conserved. From a global perspective, the universe isn't time symmetric: it has a beginning and it expands over time. That symmetry is broken, so energy isn't conserved globally.
Please explain. Noether's theorem equates global symmetry laws with local conservation laws. The universe does not in fact have global symmetry across time.
You are making the same mistake as OP. Formal models and their associated ontology are not equivalent to reality. If you don't think conservation principles are valid then write a paper & win a prize instead of telling me you know for a fact that there are no global symmetries.
RL before LLMs could very much learn new behaviors; take a look at AlphaGo. It can also learn to drive in simulated environments. RL as used in LLMs is not learning in the same way, so it can't create its own behaviors.
It is well known that Rust's async model can lead to deadlocks. However, in the futurelock case I have come around to blaming the async locks. The issue is that they are not "fair", in that they don't poll the future holding the lock. There may be other tradeoffs that come with making the locks "fairer" in some way, but I think those are worth exploring.
> The issue is that they are not "fair", in that they don't poll the future holding the lock.
How would you change that? A section of code doesn't have access to its own future to call into.
Best I can think is that you can't just call `let guard = mut.lock().await` but instead have to do `mut.do_locked(fut).await`, so that other `do_locked` calls can poll `fut`. I think that would work, but it seems quite awkward. Then again, imho async mutexes are something that should be used quite sparingly, so maybe that's okay.
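Something like this hypothetical shape, purely as an illustration (not a real tokio API):

```rust
// Hypothetical sketch of that shape; this is not a real tokio API, just an
// illustration of "pass the critical section to the lock by value" so that a
// fancier implementation could, in principle, let other waiters drive it.
use std::future::Future;
use tokio::sync::Mutex;

struct DoLockedMutex<T> {
    inner: Mutex<T>,
}

impl<T> DoLockedMutex<T> {
    // Run `fut` to completion while holding the lock. Because the lock owns
    // `fut` for the duration, the guard can never escape the call; a "fair"
    // variant could also poll `fut` from other `do_locked` callers.
    // (A real API would presumably also give `fut` access to the guarded data.)
    async fn do_locked<R>(&self, fut: impl Future<Output = R>) -> R {
        let _guard = self.inner.lock().await;
        fut.await
    }
}
```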
Disclaimer - I'm not a Tokio dev so what I say may be very wrong. Some definitions:
Future = a structure with a method `poll(self: Pin<&mut Self>, ...) -> Poll<Self::Output>`. Futures are often composed of other futures, which they need to poll.
Tokio task = a top-level future driven by the Tokio runtime. These are the only futures the runtime polls directly; every other future only runs when its parent polls it.
My understanding is that Tokio async locks keep a queue of waiters. When the lock is unlocked, the runtime hands the lock to the waiter at the front of the queue and wakes its task. Futurelock happens when a task effectively tries to take the lock a second time: a sub-future of the top-level task already has (or has been handed) the lock, but the task is now polling a different future that tries to take it, and the sub-future holding the lock never gets polled again.
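For concreteness, a rough sketch of the kind of code that can hit this (my own reconstruction, not the RFD's exact example; needs tokio with "full" features):

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;

async fn briefly_lock(mutex: Arc<Mutex<()>>) {
    let _guard = mutex.lock().await;
    // Lock released when _guard drops at the end of this function.
}

#[tokio::main]
async fn main() {
    let mutex = Arc::new(Mutex::new(()));

    // Another task holds the lock for a while, so our futures queue behind it.
    let background = mutex.clone();
    tokio::spawn(async move {
        let _guard = background.lock().await;
        tokio::time::sleep(Duration::from_millis(100)).await;
    });
    tokio::time::sleep(Duration::from_millis(10)).await;

    let fut1 = briefly_lock(mutex.clone());
    tokio::pin!(fut1);
    tokio::select! {
        _ = &mut fut1 => {}
        _ = tokio::time::sleep(Duration::from_millis(10)) => {
            // fut1 is still queued for the lock but will never be polled
            // again. When the background task unlocks, ownership is handed
            // to fut1, so this second acquisition waits forever: futurelock.
            let _guard = mutex.lock().await;
        }
    }
}
```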
This situation should be detectable because Tokio tracks which task is holding an async lock. One improvement could be to panic when this deadlock is spotted. This would at least make the issue easier to debug.
But yes, I think you are right in that the async mutex would need to take the future by value if it is to have the capability of polling it.
> This situation should be detectable because Tokio tracks which task is holding an async lock. One improvement could be to panic when this deadlock is spotted. This would at least make the issue easier to debug.
That'd be a nice improvement! It could give a clear error message instead of hanging.
...but if they actually are polling both futures correctly via `tokio::join!` or similar, wouldn't it also cause an error where otherwise it'd actually work?
Oof, I think that you are right. The issue with Futurelock is a failure of liveness, where the Future holding the lock doesn't get polled. tokio::join! would keep it alive and therefore my suggestion would mistakenly panic.
Yeah, the true fix is probably some form of the fabled Linear types/Structured concurrency where you can guarantee liveness properties.
On third thought, maybe your detection idea would work. I think you're right that the tokio runtime knows the lock is owned by this task's future A, and that this task's future B is waiting for the same lock. So far that's arguably fine (if inefficient to try acquiring the lock twice in parallel from the same task).
I think it also should know that after future A has been awoken, the next call into the task's outermost future is returning `Poll::Pending` without polling future A, which is the suss part.
> Yeah, the true fix is probably some form of the fabled Linear types/Structured concurrency where you can guarantee liveness properties.
Maybe? I really don't know the details well enough to say a linear types thing could guarantee not only that the thing isn't dropped but also that it continues getting polled in a timely way.
Tokio's Mutex [0] passes ownership directly to the next future in the lock queue. If it were instead more similar to a futex [1], this problem could have been avoided.
My assumption is that tokio went with this design because simple Future subexecutors [2] tend to have very poor scheduling. Often they poll each of their child futures in turn regardless of which were actually woken. With an async lock closer in design to a futex, this could lead to subexecutor child futures being starved out.
If that was truly tokio’s reasoning for the design of their Mutex, I still kinda disagree with the choice; it shouldn’t be the lock’s job to fix tokio::select! being bad at scheduling.
[0]: We should be specific that we’re discussing tokio’s Mutex; this is one particular implementation of async locks.
[1]: wake next in queue, but don’t pass ownership; the woken task must CAS to actually acquire. (Rough sketch below the footnotes.)
[2]: think tokio::select! or futures_lite::future::Or. but not FuturesUnordered which does child wakeups properly.
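To make [1] concrete, a rough sketch of the futex-style alternative (hypothetical types, not tokio's implementation):

```rust
// Unlock wakes the head waiter but does NOT hand it ownership; whoever gets
// polled first still has to win a CAS to actually take the lock.
use std::sync::atomic::{AtomicBool, Ordering};

struct FutexishLock {
    locked: AtomicBool,
    // A queue of Wakers for waiters would live here; omitted for brevity.
}

impl FutexishLock {
    fn try_acquire(&self) -> bool {
        // Any woken waiter (or a newcomer) must CAS false -> true to own the lock.
        self.locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
        // Wake the waker at the front of the queue here. Crucially, ownership
        // is not transferred, so a waiter that never gets polled again cannot
        // end up silently holding the lock.
    }
}
```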
> Tokio's Mutex [0] passes ownership directly to the next future in the lock queue. If it were instead more similar to a futex [1], this problem could have been avoided.
...sometimes? Wouldn't we still have the problem if the future runs (actually getting the lock) but then awaits again while holding it? I think that's common—if you're not awaiting while holding the lock, then why didn't you just use a simple std::sync::Mutex?
futurelock [0] was special specifically because of this aspect where even a future which seemingly acquires and releases a lock in a single poll triggers a deadlock.
what you describe is just a standard async deadlock. much easier to spot when debugging. and one can reason about those deadlocks in pretty much the same way one would reason about deadlocks between threads.
> futurelock [0] was special specifically because of this aspect where even a future which seemingly acquires and releases a lock in a single poll triggers a deadlock.
Their description at the top doesn't seem to match that:
RFD> This RFD describes futurelock: a type of deadlock where a resource owned by Future A is required for another Future B to proceed, while the Task responsible for both Futures is no longer polling A. Futurelock is a particularly subtle risk in writing asynchronous Rust.
...and further on they describe lock acquisition as an example of the resource:
RFD> future F1 is blocked on future F2 in some way (e.g., acquiring a shared Mutex)
...so I think they meant it to be more general.
> what you describe is just a standard async deadlock. much easier to spot when debugging. and one can reason about those deadlocks in pretty much the same way one would reason about deadlocks between threads.
I think the not-being-polled aspect of it is a bit more subtle than between threads. More like thread vs signal/interrupt handler actually, except it's not as well-known that "branch taken after a `select!`" or "place where two futures exist and `join!`/`spawn` isn't being used" is such a special case for scheduling.
...and anyway, with a mutex that has an actual reason to be async, how can you have only an acquire bug but not also have a potential mid-holding bug? You can say the latter is a different class of bug so you've solved futurelock, but you still have a bug any time you would have had futurelock, so: semantics 1, working program 0.
If do_async_thing was implemented as below, I can’t imagine the futurelock post getting anywhere near this much attention. As for why you might use an async lock in a future which acquires and unlocks in the same poll, there still may be other actors which hold the lock across multiple polls (there is one in the RFD’s minimized example).
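Roughly this shape, i.e., holding the guard across another await (a hypothetical sketch of what I mean, not the RFD's actual code):

```rust
// Hypothetical version of do_async_thing that holds the guard across an
// await point. A hang here would just look like an ordinary async deadlock,
// nothing surprising.
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;

async fn do_async_thing(label: &str, mutex: Arc<Mutex<()>>) {
    let _guard = mutex.lock().await;
    // The guard is live across this await, so the lock stays held while the
    // task is suspended.
    tokio::time::sleep(Duration::from_millis(10)).await;
    println!("{label}: done");
}
```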
I would say, though, that for most programs any one of the most popular languages would do the trick. By this I mean Java, Go, C#, JavaScript, Python, or C++. All of those are general-purpose, multi-paradigm languages that you can code almost anything in.
That being said, some programs can only be written in one of them: browser code is JS-exclusive, low-level work needs C++, security-critical code should avoid C++, machine learning practically needs Python, and high-performance code can't use Python. Some Windows things need C#. Those are the obvious cases where there is basically no choice. Beyond those, it is mostly about the team.
My view of this is that it's closer to the basic two-lock deadlock:
Thread 1 acquires A.
Thread 2 acquires B.
Thread 1 tries to acquire B.
Thread 2 tries to acquire A.
In this case, the role of "A" is being played by the front of the Mutex's lock queue, and the role of "B" by Tokio's actively executing task.
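For reference, the classic two-lock shape with plain threads (a minimal sketch):

```rust
// Each thread grabs one lock and then blocks forever waiting for the other.
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(()));
    let b = Arc::new(Mutex::new(()));

    let (a1, b1) = (a.clone(), b.clone());
    let t1 = thread::spawn(move || {
        let _ga = a1.lock().unwrap(); // Thread 1 acquires A.
        thread::sleep(Duration::from_millis(50));
        let _gb = b1.lock().unwrap(); // ...then tries to acquire B.
    });

    let t2 = thread::spawn(move || {
        let _gb = b.lock().unwrap(); // Thread 2 acquires B.
        thread::sleep(Duration::from_millis(50));
        let _ga = a.lock().unwrap(); // ...then tries to acquire A.
    });

    // Both joins hang: neither thread can make progress.
    t1.join().unwrap();
    t2.join().unwrap();
}
```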
Based on this understanding, I agree that the surprising behavior is due to Tokio's Mutex/Lock Queue implementation. If this was an OS Mutex, and a thread waiting for the Mutex can't wake for some reason, the OS can wake a different thread waiting for that Mutex. I think the difficulty in this approach has to do with how Rust's async is implemented. My guess is the algorithm for releasing a lock goes something like this:
1. Pop the head of the wait queue.
2. Hand it the Mutex and wake the top-level tokio::spawn'ed task of the Future that now holds it.
What you want is something like this:

    For each Future in the wait queue (front to back):
        Poll the Future.
        If it acquires the lock:
            Break
    ???Something if everything fails???
The reason this doesn't work has to do with how futures compose. Futures compile to states within a state machine. What happens when a future polled within the wait queue completes? How is control flow handed back to the caller?
I guess you might be able to have some fallback that polls the futures independently then polls the top level future to try and get things unstuck. But this could cause confusing behavior where futures are being polled even though no code path within your code is await'ing them. Maybe this is better though?
Video processing is one of those things that needs caution when done serverlessly. This solution makes sense, especially because S3's durability guarantees aren't needed here.
First, I don't think the error term is contributing much to the solution. It almost never affects the high bit. In addition, it isn't fed back into updating the secret vectors, so I think an analysis can pretend it doesn't exist.
The nonlinear step where each value is squared looks questionable to me. Squaring can only produce quadratic residues (https://en.wikipedia.org/wiki/Quadratic_residue), which roughly halves the number of possible values (e.g., mod 17 the squares take only 9 of the 17 possible values).
So what this really boils down to is this:
You have a matrix G, a vector s, and a prime p. You repeatedly compute s' = (Gs) ⊙ (Gs) mod p, i.e., the Hadamard (element-wise) product of Gs with itself, which squares each component. Each step maps the space of p^N possible states into a subset of roughly (p/2)^N states, so my guess is that most orbits get trapped in short cycles.
Using your example values, after 10 iterations it gets to [9, 16, 13, 8]. This then repeats with a cycle length of 20. Given 4 values with p = 17 you could get up to 83,520 values before repeating.
In some random tests, 6 values with p=97 entered a cycle at iteration 3,802 even though there were 832,972,004,929 possible values.
6 values with p=271 entered a cycle at iteration 166,684 even though there were 396,109,944,105,121 possible values.
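A rough sketch of the kind of cycle test I mean, using placeholder values for G and s rather than the post's actual parameters:

```rust
// Iterate s' = (G*s) ⊙ (G*s) mod p and record when a state repeats; the
// matrix and starting vector below are arbitrary placeholders.
use std::collections::HashMap;

fn step(g: &[Vec<u64>], s: &[u64], p: u64) -> Vec<u64> {
    g.iter()
        .map(|row| {
            // (G s)_i mod p, then squared mod p (the Hadamard self-product).
            let dot = row.iter().zip(s).map(|(a, b)| a * b % p).sum::<u64>() % p;
            dot * dot % p
        })
        .collect()
}

fn main() {
    let p = 17u64;
    // Placeholder 4x4 matrix and starting vector.
    let g = vec![
        vec![3, 1, 4, 1],
        vec![5, 9, 2, 6],
        vec![5, 3, 5, 8],
        vec![9, 7, 9, 3],
    ];
    let mut s = vec![1, 2, 3, 4];

    // Map each state to the first iteration it was seen at; a repeat gives
    // both the tail length and the cycle length.
    let mut seen: HashMap<Vec<u64>, usize> = HashMap::new();
    for i in 0usize.. {
        if let Some(&first) = seen.get(&s) {
            println!("entered a cycle at iteration {first}, cycle length {}", i - first);
            break;
        }
        seen.insert(s.clone(), i);
        s = step(&g, &s, p);
    }
}
```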
Ah, but have they verified how far down the turtles go, and has that changed since they verified it?
In the mid-2000s, most of the conference call traffic started leaving copper T1s and moving onto fiber and/or SIP switches managed by Level 3, Global Crossing, Qwest, etc. Those companies combined over time into CenturyLink, which was later rebranded as Lumen.