As the author of a JS-to-OCaml compiler [1], I must admit that Poe’s Law applies here [2]:
“Without a clear indicator of the author’s intent, any parodic or sarcastic expression of extreme views can be mistaken by some readers for a sincere expression of those views.”
Cool. Recursion in Python is a common bottleneck in competitive programming; I'll give this a try. I created a similar tool for recursion [1], but I ended up rewriting the AST and emulating the call stack. Pros: no need for an accumulator; cons: almost unusable in the real world.
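For a flavor of the stack-emulation idea, here is a hand-written sketch in Python (not what the tool generates, which rewrites the AST automatically; the function names are made up for illustration):

```python
# Illustrative only: replacing recursion with an explicit stack.
def depth_recursive(node):
    # node is (value, left, right) or None; blows past Python's ~1000-frame limit on deep trees
    if node is None:
        return 0
    return 1 + max(depth_recursive(node[1]), depth_recursive(node[2]))

def depth_iterative(root):
    # Same result, no recursion limit, but noticeably harder to read.
    best, stack = 0, [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if node is None:
            continue
        best = max(best, depth + 1)
        stack.append((node[1], depth + 1))
        stack.append((node[2], depth + 1))
    return best
```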
I'm working on a block-based visual programming environment for kids — a sort of Scratch alternative — but instead of inventing a new language, it's a subset of Elixir. I'm using Google’s Blockly to generate real Elixir code from the blocks.
Right now, I'm building a Space Invaders clone in Elixir with LiveView, and integrating Blockly so the game's core logic can be edited visually. Hoping it becomes a fun way to learn both functional programming and web dev.
The library was relying heavily on React internals for testing, rather than just on components' public API. That this approach was going to be unsustainable was already obvious around 2020. The question is: after you've invested a lot of work in a bad practice, how do you move to a better practice with the least amount of pain? Another, more philosophical, question is how a bad practice gains so much traction in the developer community.
eyJ... is the beginning of base64-encoded JSON. The fact that I recognize it on sight, and keep seeing it in standards, makes me feel bad.
I can't prove that this is wrong, but encoding one text format inside another text format and then wrapping it in a third text format seems wasteful and hacky. This applies not only to these specs, but to web development in general.
base64 is not just a text format, but one that's safely transmissible through, e.g., 7-bit-only mediums, embeddable in a URL (assuming URL-safe b64!), storable in a cookie, and able to survive janky middlewares, etc.
Further, base64 doesn't actually encode strings; it encodes bytes. If you're doing cryptography on something, you want a canonical byte representation. Canonicalizing JSON itself is error-prone, whereas decoding b64 gives you the same bytes every time.
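A quick illustration of the difference, as a rough Python sketch (the values are arbitrary):

```python
import base64
import json

# Two serializations of the "same" object differ byte-for-byte, so
# signing JSON directly requires a canonicalization scheme.
compact = json.dumps({"alg": "RS256"}, separators=(",", ":"))
spaced = json.dumps({"alg": "RS256"})
assert compact != spaced  # '{"alg":"RS256"}' vs '{"alg": "RS256"}'

# Base64url of fixed bytes round-trips to exactly the same bytes every time.
payload = bytes(range(32))
encoded = base64.urlsafe_b64encode(payload)
assert base64.urlsafe_b64decode(encoded) == payload
```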
The JWS/JWE compact and JSON encodings are text-based formats, safe for various internet protocol uses (such as embedding in an HTTP header, URL query parameter, or cookie).
The header is JSON and could potentially have used another encoding to save space. The payload and signature are both binary, so they needed a representation within the 66 or so characters that are safe across all of those uses. In that case, non-padded URL-safe Base64 encoding is the best option.
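Concretely, that encoding step looks something like this in Python (a rough sketch; the 32-byte signature is just a stand-in):

```python
import base64
import os

signature = os.urandom(32)  # stand-in for a real binary signature

# URL-safe alphabet ('-' and '_' instead of '+' and '/'), padding stripped,
# so the value can live in a URL, header, or cookie without escaping.
encoded = base64.urlsafe_b64encode(signature).rstrip(b"=").decode("ascii")
print(encoded)  # 43 characters for 32 input bytes
```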
Unfortunately, nested JWTs (signed and encrypted) as well as embedded binary data (such as public keys and thumbprints) in a JSON format also need to be base64 encoded. So there's a size penalty for including these in the message, which gives applications a design incentive to limit such binary data within the messages themselves.
There is COSE, which uses CBOR to be entirely binary, but CBOR is rather robust and library support isn't close to what we have for JSON support.
For JSON Web Proofs, the goal is to define the core primitives in terms of binary data, such that a CBOR encoding does not require reinvention.
Yes. Regarding JOSE (JWS/JWE/JWA/JWK/JWT), I consider that point one of my core design disagreements with JOSE, as it results in re-encode ballooning. That's separate from the fact that JSON was intended to be human-readable, and base64-encoding otherwise human-readable payloads robs JSON of that design goal. (If you're going so far as to base64-encode a human-readable payload, why not take advantage of efficient binary messaging standards instead of using JSON? The whole point of the "bloat" of JSON over binary forms is human readability.)
Not only is JOSE base64 encoded, but many situations have base64 payloads embedded in the JSON, meaning payloads are double encoded, and as far as I'm aware, JOSE headers are always base64 encoded, regardless of the outer (re-)encoding. Each round of base64 encoding adds overhead.
For example, starting with a 32-byte payload: the first round of base64 encodes it as 43 bytes, which is then inserted into JSON and base64 re-encoded as 58 bytes. The original 32 bytes balloon to nearly twice the size, at 58 bytes.
For a direct example from one of the RFCs, the header `{"alg":"RS256"}` becomes `"eyJhbGciOiJSUzI1NiJ9"`. That's 15 bytes of unencoded, human-friendly JSON versus 20 bytes of encoded, unfriendly base64.
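Both sets of numbers are easy to reproduce (a rough Python sketch; the 32-byte payload is arbitrary):

```python
import base64
import os

def b64url(data: bytes) -> bytes:
    # Unpadded URL-safe base64, as JOSE uses.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

inner = b64url(os.urandom(32))
print(len(inner))      # 43: one round of encoding on 32 bytes

outer = b64url(inner)
print(len(outer))      # 58: re-encoding after embedding in a JSON structure

header = b'{"alg":"RS256"}'
print(len(header))     # 15
print(b64url(header))  # b'eyJhbGciOiJSUzI1NiJ9', 20 bytes
```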
Further, there's a later RFC, 7797 ('JSON Web Signature (JWS) Unencoded Payload Option'), that was intended to address these complaints, but it too failed to address encode ballooning in the headers. From the RFC's example:
The encoded header there is 56 bytes, but unencoded, `{"alg":"HS256","b64":false,"crit":["b64"]}`, it is 42 bytes. There's no compelling reason for this design.
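The same arithmetic for that header, as a quick sketch:

```python
import base64

header = b'{"alg":"HS256","b64":false,"crit":["b64"]}'
encoded = base64.urlsafe_b64encode(header).rstrip(b"=")
print(len(header), len(encoded))  # 42 56
```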
[1] https://dev.to/denyspotapov/porting-is-odd-npm-to-ocaml-usin...
[2] https://en.wikipedia.org/wiki/Poe's_law