Hacker News | denys_potapov's comments

As the author of a JS-to-OCaml compiler [1], I must admit that Poe’s Law applies here [2]:

“Without a clear indicator of the author’s intent, any parodic or sarcastic expression of extreme views can be mistaken by some readers for a sincere expression of those views.”

[1] https://dev.to/denyspotapov/porting-is-odd-npm-to-ocaml-usin...

[2] https://en.wikipedia.org/wiki/Poe's_law


Cool. Recursion in Python is a common bottleneck in competitive programming, so I'll give it a try. I created a similar tool for recursion [1], but ended up rewriting the AST and emulating the stack. Pros: no need for an accumulator; cons: almost unusable in the real world.

[1] https://dev.to/denyspotapov/callonce-python-macro-for-unlimi...
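
Rough sketch of the stack-emulation idea, hand-rolled rather than via the AST rewrite the macro does (names are made up for illustration):

    import sys

    def count_down(n):
        # plain recursion: hits RecursionError long before n reaches a million
        return 0 if n == 0 else 1 + count_down(n - 1)

    def count_down_flat(n):
        # same computation, but with an explicit work stack instead of the call stack
        total, stack = 0, [n]
        while stack:
            m = stack.pop()
            if m > 0:
                total += 1
                stack.append(m - 1)
        return total

    print(count_down_flat(10**6))   # 1000000, no recursion limit involved
    print(sys.getrecursionlimit())  # usually 1000, which count_down() would hit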


Do you frequently use Python for competitive programming puzzles? I've done it a bit in the past, and everyone always used C++.


Always. You probably can't get into the top 100 with Python [1], but I like dicts with tuple keys, bigints, and all the list things.

[1] My best is around 1300th place in HackerCup.


I have to wonder how much better you'd do if they made PyPy an option.


I'm working on a block-based visual programming environment for kids — a sort of Scratch alternative — but instead of inventing a new language, it's a subset of Elixir. I'm using Google’s Blockly to generate real Elixir code from the blocks.

Right now, I'm building a Space Invaders clone in Elixir with LiveView, and integrating Blockly so the game's core logic can be edited visually. Hoping it becomes a fun way to learn both functional programming and web dev.


It's a 2024 webdev summary; nothing can be added:

A new React version made the lib obsolete; we used an LLM to fix it (1/5 success rate).


A lib was heavily relying on React internals for testing, rather than just on components' public API. That this approach was going to be unsustainable was already obvious around 2020. The question is, after you've invested a lot of work in a bad practice, how do you move to a better practice with the least amount of pain? Another, more philosophical, question is how a bad practice gains so much traction in the developer community.


eyJ... is the beginning of base64-encoded JSON. The fact that I know this, and see it in standards, makes me feel bad.

I can't prove that this is wrong, but encoding one text format in another text format and wrapping it in a third text format seems wasteful and hacky. This applies not only to this spec, but to web development in general.
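
The prefix is easy to see for yourself; a quick sketch in Python:

    import base64, json

    header = json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode()
    token_part = base64.urlsafe_b64encode(header).rstrip(b"=")  # JOSE strips the padding

    print(token_part)  # starts with the familiar b'eyJhbG...' because the JSON starts with {"alg
    # re-pad before decoding, since the padding was stripped above
    print(base64.urlsafe_b64decode(token_part + b"=" * (-len(token_part) % 4)))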


base64 is not just a text format, but one that's safely transmissible through e.g. 7-bit-only mediums, embedded into a URL (assuming URL-safe b64!), stored in a cookie and passed through janky middlewares, etc.

Further, base64 doesn't actually encode strings, it encodes bytes. If you're doing cryptography on something, you want to have a canonical byte representation. Canonicalizing JSON itself is error-prone, whereas decoding b64 gives you the same bytes every time.
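
A small sketch of the canonicalization point:

    import base64, json

    doc = {"b": 1, "a": 2}

    # two perfectly valid JSON serializations of the same object, different bytes
    print(json.dumps(doc).encode())                                          # b'{"b": 1, "a": 2}'
    print(json.dumps(doc, sort_keys=True, separators=(",", ":")).encode())   # b'{"a":2,"b":1}'

    # a base64 blob, by contrast, decodes to exactly one byte string,
    # which is what you want to feed into a signature check
    blob = base64.urlsafe_b64encode(b'{"b": 1, "a": 2}')
    print(base64.urlsafe_b64decode(blob))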


Sanitized, well-formed JSON generally doesn't balloon horribly when URL encoded; it's typically less overhead than base64.

Also, well-formed JSON (not arbitrary JSON) works fine in HTTP headers. I think those two situations cover about 90% of use cases.

For example, here is a JSON payload URL encoded. It's not too bad, and much better than base64:

https://cyphr.me/coze#?input={%22pay%22:{%22msg%22:%22Hello%...

The initial payload is 238 bytes; URL encoded it is 288 bytes, and as base64 it is 318 bytes. (Here's another tool just for that: https://convert.zamicol.com/#?inAlph=text&in=%257B%2522pay%2...)
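
A rough way to reproduce the comparison for any payload (sketch only; the payload here is a made-up stand-in, not the actual 238-byte Coze example, and the set of unescaped characters only roughly mirrors the link above):

    import base64
    from urllib.parse import quote

    def sizes(payload: bytes):
        url_encoded = quote(payload, safe='{}:,[]')            # leave JSON punctuation bare
        b64 = base64.urlsafe_b64encode(payload).rstrip(b"=")   # unpadded, as in JOSE
        return len(payload), len(url_encoded), len(b64)

    # base64 costs a fixed ~33% (4 output bytes per 3 input bytes); percent-encoding
    # only costs 2 extra bytes per character that needs escaping, so mostly
    # alphanumeric JSON tends to come out smaller when URL encoded.
    stand_in = ('{"pay":{"msg":"Hello world","alg":"ES256"},"sig":"' + "x" * 86 + '"}').encode()
    print(sizes(stand_in))   # (138, 168, 184)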


I'd rather try to build systems that always work, rather than only working for well-formed inputs transmitted over well-behaved mediums.



The JWS/JWE compact and JSON encodings are text-based formats, safe for various internet protocol uses (such as embedding in an HTTP header, URL query parameter, or cookie).

The header is JSON and could have potentially used another encoding for space. The payload and signature are both binary, so they needed a way to be represented in the 66 or so safe characters across all of those uses. In that case, non-padded URL-safe Base64 encoding is the best option.
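
For concreteness, here's roughly how the compact serialization is assembled for an HMAC (HS256) signature; the signature is raw bytes, which is why the base64url step is unavoidable for it (illustration only, not production code):

    import base64, hashlib, hmac, json

    def b64url(data: bytes) -> bytes:
        return base64.urlsafe_b64encode(data).rstrip(b"=")  # JOSE uses the non-padded variant

    key = b"secret"
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    payload = b64url(json.dumps({"sub": "1234567890"}, separators=(",", ":")).encode())

    signing_input = header + b"." + payload
    signature = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())

    # header.payload.signature -- safe in URLs, cookies, and HTTP headers
    print((signing_input + b"." + signature).decode())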

Unfortunately, nested JWTs (signed and encrypted) as well as embedded binary data (such as public keys and thumbprints) in a JSON format also need to be base64 encoded. So there's a bit of a penalty in size for including these in the message, and that puts a bit of a design motivation in applications using these to limit such binary data within the messages themselves.

There is COSE, which uses CBOR to be entirely binary, but CBOR is rather robust and library support isn't close to what we have for JSON support.

For JSON Web Proofs, the goal is to define the core primitives in terms of binary data, such that a CBOR encoding does not require reinvention.


Yes. Regarding JOSE (JWS/JWE/JWA/JWK/JWT), I consider that point one of my core design disagreements with JOSE, as it results in re-encode ballooning. That's separate from the fact that JSON was intended to be human readable, and base64 encoding of otherwise human-readable payloads robs JSON of this design goal. (If you're going so far as to base64 encode a human-readable payload, why not take advantage of efficient binary messaging standards instead of using JSON? The whole point of the "bloat" of JSON over binary forms is human readability.)

Not only is JOSE base64 encoded, but many situations have base64 payloads embedded in the JSON, meaning payloads are double encoded, and as far as I'm aware, JOSE headers are always base64 encoded, regardless of the outer (re-)encoding. Each round of base64 encoding adds overhead.

For example, starting with a 32-byte payload, the first round of base64 encodes it as 43 bytes; that is then inserted into JSON and base64 re-encoded as 58 bytes. The starting payload of 32 bytes balloons to nearly twice the size, 58 bytes.
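
The arithmetic, spelled out (unpadded base64url, as JOSE uses):

    import base64

    payload = bytes(32)                                       # some 32-byte binary value
    first = base64.urlsafe_b64encode(payload).rstrip(b"=")    # 43 bytes once it's base64url text
    second = base64.urlsafe_b64encode(first).rstrip(b"=")     # 58 bytes after the outer encoding

    print(len(payload), len(first), len(second))              # 32 43 58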

For a direct example from one of the RFCs, the header `{"alg":"RS256"}` becomes `"eyJhbGciOiJSUzI1NiJ9"`. That's 15 bytes of unencoded, human-friendly JSON and 20 bytes of encoded, unfriendly base64.

Further, there's a later RFC 7797, the 'JSON Web Signature (JWS) Unencoded Payload Option', that was intended to address these complaints, but it too failed to address encode ballooning in the headers. From the RFC:

{ "protected": "eyJhbGciOiJIUzI1NiIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19", "payload": "$.02", "signature": "A5dxf2s96_n5FLueVuW1Z_vh161FwXZC4YLPff6dmDY" }

That encoded header is 58 bytes, but unencoded, {"alg":"HS256","b64":false,"crit":["b64"]}, it is 42 bytes. There's no compelling reason for this design.

The better solution? Just stay as JSON.

I have more to say, but I'll leave it there. You can also check out my presentation on the matter here: https://docs.google.com/presentation/d/1bVojfkDs7K9hRwjr8zMW...

A relevant slide right now is 115 (or Ctrl-F "ballooning"), and I have other text documents up on GitHub.


eyJhbG

