Yeah, that's what I meant by foveated rendering. You can spend your samples where they are most valuable based on where the user is looking among other factors.
That doesn't really have all that much to do with what is considered finance. It's maybe a bit like saying an open issue in computer science is having a great keyboard.
It seems like the author has realised that a few common functional programming patterns (folds, maps) are also common ways of combining information and operating on data structures, and has drawn the parallel to operations we frequently want to perform within neural networks. A 'function' is simply a thing that takes another thing and produces a third thing. This doesn't seem that revolutionary or insightful - do these ideas give us any extra knowledge about neural networks, or is this just a nice parallel?
I think the theory is already pretty clear to people with a lot of experience with neural networks. The contribution is to write it down explicitly, clearly and concisely, for people to whom it wouldn't otherwise have occurred.
If this formalism works out then a few interesting directions spring to mind:
- We suddenly have a huge new toolbox from FP / category theory that we can apply to understanding and extending DNNs. E.g. what happens if we apply data-type differentiation (zippers) to these structures? I have no idea, but it might lead somewhere.
- The deep learning world gets a precise way to describe network architectures, which makes communication much easier and research much more reproducible.
- With a formal model you can automate building and optimising implementations.
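To make the parallel concrete, here's a minimal sketch (all names are my own, not from the article): a plain feed-forward network is literally a fold - you reduce a list of layer functions over the input by composing them left to right.

```python
from functools import reduce

def relu(x):
    # elementwise ReLU activation
    return [max(0.0, v) for v in x]

def make_dense(weights, bias):
    # returns a layer function: x -> W.x + b (pure-Python matvec)
    def layer(x):
        return [sum(w * v for w, v in zip(row, x)) + b
                for row, b in zip(weights, bias)]
    return layer

def network(layers, x):
    # the whole network is just a fold of its layers over the input
    return reduce(lambda acc, f: f(acc), layers, x)

layers = [
    make_dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    relu,
    make_dense([[1.0, 1.0]], [0.0]),
]
print(network(layers, [2.0, 1.0]))
```

Nothing deep here on its own, but once the architecture is "a value you fold over", you can manipulate it with ordinary list operations, which is exactly the kind of formal handle the points above are about.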
I've updated it with quadratic, cubic and sinusoidal modes. They all look good. The only problem is that, because the easing is no longer linear, the various points can move at different rates and so create 'kinks' in the curve. When it's linear, the 'line points' and their associated 'control points' always lie on a line locally tangent to the curve (I don't know the proper terminology, but hopefully you understand).
That shouldn't happen if you do the motion transform uniformly on all axes. It should just be equivalent to changing the speed of time, rather than actually changing the trajectories.
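A minimal sketch of that suggestion (function names are illustrative, not from the actual clock code): apply the easing to the shared time parameter before interpolating, rather than easing each point or axis independently. The path is unchanged; only the speed along it varies, so the tangency between points is preserved.

```python
def ease_cubic(t):
    # cubic ease-in-out on t in [0, 1]
    return 4 * t**3 if t < 0.5 else 1 - (-2 * t + 2)**3 / 2

def lerp(a, b, t):
    # componentwise linear interpolation between points a and b
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def animate(p_start, p_end, t):
    # eased time drives all axes together: same trajectory,
    # nonuniform speed, no per-axis 'kinks'
    return lerp(p_start, p_end, ease_cubic(t))

print(animate((0.0, 0.0), (10.0, 4.0), 0.5))
```

The key point is that `ease_cubic` reparameterises time once, globally, instead of being baked into each coordinate's motion.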
Hi all, I'm the author.
Thanks for all the feedback - really good to hear you like the clock. I've added some more animation easings as per your suggestions.
To clarify: when continual animation is off, each digit only animates for a specified amount of time. I set this at 20 seconds for all but the 'seconds' digits, which animate continually. I thought this looked cooler and they're inessential to reading the time.
I'll try to port it to Apple Watch / Android Wear when they release their proper watch-face SDKs.