I couldn't open this video in Firefox at first, but when I downloaded it and tried to play it in VLC, my open-source AMD driver crashed. I'm running Mint with kernel 5.15.0-56-generic. Be careful.
Got this: [drm:amdgpu_cs_ioctl [amdgpu]] ERROR Failed to initialize parser -125!
Had to kill the X server with Ctrl+Alt+Backspace to get it back.
There is also the embedded functional language Futhark [0], which compiles numerical kernels to optimized C, ISPC, OpenCL, and CUDA code. Parallelism is explicit, using familiar constructs (map, reduce, ...).
The documentation is excellent, and there is an SML-style type system, so you can rely on strong abstractions.
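Not Futhark syntax, just a rough Python sketch of the map/reduce style it builds on: a dot product written as map followed by an associative reduce, which is exactly the shape a compiler like Futhark can parallelize.

```python
# Plain-Python illustration of the map/reduce structure Futhark kernels use;
# this is not Futhark code, just the same shape expressed with the stdlib.
from functools import reduce
import operator

def dot(xs, ys):
    # map: elementwise multiply; reduce: combine with an associative
    # operator (+), which is what makes the reduction parallelizable.
    return reduce(operator.add, map(operator.mul, xs, ys), 0)

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```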
Depends on how much one is willing to suffer through the lesser development experience versus C# and VB, and Microsoft's expectation that it is mostly a community language that happens to ship with the .NET SDK.
Much smaller than you'd hope. HN is very much mainstream these days. I asked how one would represent a pure function in a language that doesn't support purity/immutability and got `fn f(a, b) { return a + b; }`, which entirely misses the point.
Perhaps the point was that syntactically representing something which can't actually be promised by the engine underneath is a false equivalence to a language which does support it?
Coding to yield() on a framework which doesn't support tail recursion might (for example) loop forever.
Coding to immutability which cannot be guaranteed means you may return incorrect results.
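To make that concrete, here's a small Python sketch (a hypothetical example, nothing from the thread): a function that looks exactly like the `fn f(a, b)` answer above, yet isn't actually pure, because the language can't stop it from depending on mutable state, so a runtime could never safely cache or parallelize it.

```python
# Looks like the "pure" f(a, b) from the comment above, but purity is only
# a convention here: nothing stops the body from reading mutable state.
scale = 1

def f(a, b):
    return (a + b) * scale  # hidden dependency on a mutable global

print(f(1, 2))  # 3
scale = 10
print(f(1, 2))  # 30 -- same arguments, different result, so not pure
```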
They presented two new languages for functional programming. Can we leverage existing popular languages that can already do FP just fine for the same goal, e.g. Python, JavaScript, Rust, etc.?
As an amateur Rust admirer looking at https://rise-lang.org/, I think it's maybe possible, because their language/tools Rise + Elevate look like they start out with a chain of iterators, something we see often in Rust, like zip, reduce, map.
If I understand it, they then automatically analyze the iterator chain and rewrite some of it using parallel versions of the code, for example changing map to mapPar:
"In Rise, low-level implementation choices such as performing
a computation in parallel are encoded with low-level patterns.
For example, the map pattern that applies a function to each
element of an array might be performed in a data-parallel
fashion as indicated by the mapPar variant of the pattern"
This reminds me a bit of how the Rayon crate in Rust works, where you can take an iter() and change it to par_iter() and instantly get parallelism in your program (depending on various details). But it seems like they have taken it "to the next level" and can output a CUDA program automatically, starting from their basic iterator chain.
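As a rough analogy in Python (not Rayon or Rise, and `work` is just a made-up CPU-bound function): the same "swap one call to go parallel" move, replacing the built-in map with a process pool's map, assuming the tasks are heavy enough to pay for the process overhead.

```python
# Python analogy for iter() -> par_iter(): swap map() for a pool's map().
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Stand-in for an expensive, independent computation.
    return sum(i * i for i in range(n))

data = [200_000] * 16

if __name__ == "__main__":
    sequential = list(map(work, data))         # ordinary map
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(work, data))  # "parallel map"
    assert sequential == parallel
```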
It's been a while since I used Python; have lambdas improved? I recall weird restrictions using them, but I can't quite remember what. This was pre-Python 3.
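If memory serves (and these are still true in current Python 3), the big one is that a lambda body must be a single expression with no statements, plus the classic late-binding closure surprise in loops:

```python
# A lambda body must be a single expression; statements are rejected:
#   inc = lambda x: x += 1            # SyntaxError in Python 2 and 3

# Closures capture variables, not values, so every lambda sees the last i.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])        # [2, 2, 2]

# Common workaround: bind the current value through a default argument.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])        # [0, 1, 2]
```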
Is there a definition of 'high performance'? Does a .NET language like F# qualify, or does it have to be compiled? I'm just not sure that Python wouldn't qualify.
The reality is that a general-purpose programming language with FP capabilities is what flies in practice. A pure FP language might be perfect, but it's hard to find coders for them; we will have to live with that.
For HPC and ML, Python is dominant. I would expect to enhance Python somehow with FP rather than anything else, then add C/C++/Rust/Fortran libraries (e.g. numpy) for intensive computing needs.
Python isn't popular for data only because it's imperative; there's also the vast numerical-processing ecosystem. Scala/Spark is an example where FP fits well.
My understanding is that even with the GIL removed today, Python is still not a performant language for CPU-intensive tasks, be it single-core or multi-core.
Python is better used as a glue language, leveraging numpy etc. to do the heavy lifting; numpy can use multiple cores just like C/C++/Rust code can.
If you really need to launch multiple Python processes, the multiprocessing module comes to the rescue.
While not ideal, I don't see a showstopper to using Python for HPC's heavy, parallel computation needs at all.
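A minimal sketch of that glue pattern (the sizes and the worker function are made up): numpy's compiled code does the heavy lifting inside each worker, and the multiprocessing module spreads independent chunks across cores.

```python
# "Python as glue": numpy handles the inner loops in compiled code,
# multiprocessing fans independent chunks out across cores.
import numpy as np
from multiprocessing import Pool

def chunk_norm(seed):
    # The expensive work runs inside numpy, not the Python interpreter.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(1_000_000)
    return float(np.linalg.norm(x))

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(chunk_norm, range(8))
    print(results)
```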
It's not even really a language as much as it is an interop layer of sorts. It's a collection of functional operations on numpy arrays, plus a capable JIT. Its main draw is honestly autograd, and if you're using that, you often can't use while or for loops effectively, can't update arrays in place without .at[], and should avoid side effects, so strangely it's neither its own language nor Python any longer.
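A tiny JAX sketch of those constraints (functional .at[] updates, no side effects, jit compilation), plus the autograd draw:

```python
# Tiny JAX sketch: arrays are immutable, so "in-place" updates go through
# .at[], functions stay side-effect-free, and jit compiles them.
import jax
import jax.numpy as jnp

@jax.jit
def bump_first(x):
    # x[0] += 1.0 would not work; the update is expressed functionally
    # and returns a new array instead.
    return x.at[0].add(1.0)

x = jnp.arange(4.0)
print(bump_first(x))         # [1. 1. 2. 3.]
print(jax.grad(jnp.sum)(x))  # the autograd part: d(sum)/dx is all ones
```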