That isn't a bad idea, but keep in mind there is always a usability aspect (I'll just call it the programmer-computer interface problem) to "what makes a programming language popular". For example, consider PL/I: https://en.wikipedia.org/wiki/PL/I#Implementation_issues
When people see, for example, * or + (or a, b, c), they may bring preconceptions about associativity from arithmetic (depending on what they were taught and what level of math they've reached), and those can be hard to break. If you have learned some college (abstract) algebra, the same symbols may mean something quite different. How about the = sign? Of course, a, b, c may be meaningless to someone whose native script isn't Latin, either. My point, I guess, is that these are just matters of convention: the commutativity or associativity is usually only implied, and it's all arbitrary.
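To make the "just convention" point concrete, here is a toy Rust sketch (the `Sym` type is mine, purely illustrative): the familiar `+` symbol can be bound to an operation that keeps associativity but drops commutativity, so none of the arithmetic "laws" a reader expects actually come with the symbol.

```rust
use std::ops::Add;

// A hypothetical type where `+` means string concatenation:
// associative, but NOT commutative, unlike arithmetic addition.
#[derive(Debug, Clone, PartialEq)]
struct Sym(String);

impl Add for Sym {
    type Output = Sym;
    fn add(self, rhs: Sym) -> Sym {
        // Concatenate the underlying strings.
        Sym(self.0 + &rhs.0)
    }
}

fn main() {
    let a = Sym("a".into());
    let b = Sym("b".into());
    // a + b != b + a: the familiar symbol, without the familiar laws.
    println!("{:?}", a.clone() + b.clone());
    println!("{:?}", b + a);
}
```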
Now, one interesting "quirk" with PL/I was that certain things looked similar "to what people were used to" (relative to, say, other PL/I code, or FORTRAN or COBOL) but worked differently, even within a small spatial area on a screen (two nearby blocks of code in an editor). A programmer's eye, seeing a block of code, might reflexively predict what the computation would do, depending on their experience. PL/I was an interesting experiment because of its lack of reserved keywords. This made it very expressive but very hard to understand in context. The classic example is that IF, THEN, and ELSE are all valid identifiers, so a statement like IF IF = THEN THEN THEN = ELSE; is legal PL/I. You are basically changing how the grammar of the language reads from one line to the next.
But on the other hand, everything is just a symbol, and this may not be completely unusual. Consider the diversity of the world's languages, how they are written, and how meaning is derived. Natural-language grammars may connote very different representations and transformations, but people learn because they see enough examples. Consider the differences between Han, Brahmic scripts, Arabic BiDi, various African scripts, cuneiform, emoji, whatever. Perhaps all computer languages are "overfit" to, for example, Chomsky's ideas and BNF (keep in mind Chomsky's ideas about morphology were quite different).
Now, let's consider mathematical notation. Depending on how much pure math (or, say, mathematical physics or other sciences) you consider, there may be more and more semantic overhead in the conventions of mathematical notation, and historically people have often just "Cartesianized" and "Euclideanized" things for convenience due to a lack of tooling (think of the sheet-of-paper metaphor: we've simply moved it over to a computer. It's a skeuomorph). Clearly we have better computer graphics now, so why haven't developer tools and languages changed along with them? Maybe with more immersive manipulation they will.
There is Oracle risk, to be sure, but people using the greater Java ecosystem are far less tied to the JVM itself than they used to be, because of cross-compilation to targets that help with FFI boundaries, JIT, IO, memory concerns, etc. That's true of both Kotlin and Scala.
Meanwhile the Oracle vs Google case is still going on.
If you know Haskell, Rust's traits are very similar to type classes, except Rust also has C++-like generics (monomorphized, like templates) and is primarily expression-based, like OCaml.
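A minimal sketch of the correspondence (the `Describe` trait is my own made-up example, not from any library): the trait plays the role of a Haskell class declaration, each `impl` block plays the role of an instance, and a trait bound on a generic function is monomorphized the way a C++ template would be.

```rust
// Roughly Haskell's: class Describe a where describe :: a -> String
trait Describe {
    fn describe(&self) -> String;
}

// Roughly: instance Describe Int where ...
impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the integer {}", self)
    }
}

impl Describe for bool {
    fn describe(&self) -> String {
        format!("the boolean {}", self)
    }
}

// Generic function with a trait bound; the compiler stamps out a
// specialized copy per concrete type, like a C++ template.
fn announce<T: Describe>(x: T) -> String {
    // Expression-based, OCaml-style: the last expression is the value.
    format!("got {}", x.describe())
}

fn main() {
    println!("{}", announce(42));
    println!("{}", announce(true));
}
```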
Simula 67 definitely was the original concept/implementation of OO. Smalltalk, C++, Java, etc. all derived different aspects from it:
- The main abstraction is the class (of objects).
- Concept of self-initializing data/procedure objects
- Distinguished an internal ("concrete") view of an object from an external ("abstract") one
- Differentiated between object instances and the class
- Class/subclass facility made it possible to define generalized object classes, which could be specialized by defining subclasses containing additional declared properties
- Different subclasses could contain different virtual procedure declarations
Note Java is actually explicitly based on Objective-C, not on Simula or C++. They just removed most of the Smalltalk bits to make it faster and less scary-looking to C++ programmers.
> Code converted from C to Rust seems much more voluminous.
I haven't seen that in practice. A good point of reference is implementations of things like Ruby, Python, or the Erlang VM in Rust compared to the C alternatives. This might be because Rust is also more expressive (probably from borrowing certain syntax/semantics from OCaml/Haskell), though the borrow checker does add back some verbosity.
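For a flavor of where that expressiveness comes from (a toy example of my own, not taken from any of those codebases): sum types plus exhaustive pattern matching, inherited from the ML family, replace the tag-enum-plus-union-plus-switch pattern C would need, along with its manually enforced invariants.

```rust
// An ML-style sum type; in C this would be a tag enum, a union,
// and discipline to never read the wrong variant.
enum Shape {
    Circle { r: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Exhaustive match: the compiler rejects a missing case,
    // something a C switch over a tag cannot guarantee.
    match s {
        Shape::Circle { r } => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect { w: 3.0, h: 4.0 }));
}
```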
Fuzzing is ROI-efficient (especially for time invested) even if you don't intend to find a segfault but just want to see how a program works or performs across different input states, either in or out of its usual domain (and you can direct the fuzzing in many ways: derandomizing it, constraining the search space, or using a virtualizer like QEMU). I like to think of it as "semantics engineering" with spare CPU cycles.
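A minimal sketch of that idea, as blind random fuzzing (everything here is hypothetical: the `parse_len` target is made up, and a hand-rolled LCG stands in for a real fuzzer's generator so the sketch is dependency-free and derandomized by its fixed seed). The point is not crashes but mapping how often inputs land inside the function's accepted domain.

```rust
// Toy target: a hypothetical length-prefixed parser to probe.
// Accepts the payload only if the declared length matches.
fn parse_len(input: &[u8]) -> Option<&[u8]> {
    let (&len, rest) = input.split_first()?;
    if rest.len() == len as usize { Some(rest) } else { None }
}

// Tiny deterministic LCG (fixed seed = reproducible runs).
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u32 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 32) as u32 // high bits mix better than low bits
    }
}

// Throw random inputs at the target and tally accept vs reject.
fn fuzz(trials: u32) -> (u32, u32) {
    let mut rng = Lcg(42);
    let (mut accepted, mut rejected) = (0, 0);
    for _ in 0..trials {
        let len = (rng.next() % 8) as usize;
        let bytes: Vec<u8> = (0..=len).map(|_| (rng.next() % 256) as u8).collect();
        match parse_len(&bytes) {
            Some(_) => accepted += 1,
            None => rejected += 1,
        }
    }
    (accepted, rejected)
}

fn main() {
    let (accepted, rejected) = fuzz(10_000);
    // The accept/reject ratio sketches the structure of the input domain.
    println!("accepted: {accepted}, rejected: {rejected}");
}
```

A real fuzzer adds coverage feedback and mutation on top of this loop; the payoff described above, learning a program's behavior across its input space, is already visible in the tallies.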
I think you can assume all sorts of analog/physical/digital/information-theoretic fingerprinting has probably been used to capture huge volumes of data and microtarget people without them knowing (especially in web browsers and on phones).
Though until the CCPA/GDPR, it was probably fairly legally nebulous. Still waiting for a privacy act in the US. Since it was a wide variety of actors, we may never know who captured what, or how.
I think companies like Apple, who tend to make their margin on the hardware and not the information, are probably to be commended for pushing a lot of privacy/extra-scrutiny requirements through the platforms they control (iOS, macOS, WebKit, etc.).