Hacker News | oleganza's comments

Maybe it means "LOCs changed"?


Mutate things so fast that cancer looks stable by comparison.


Copilot, add a space to every line of code in this repository and commit, please.

One of the many reasons why it's such a bad practice (overly verbose solutions is another one, of course).


I asked ChatGPT what traits a vibe-oriented programming language should have, and oh boy did it deliver.

(https://chatgpt.com/share/693891af-d608-8002-8b9b-91e984bb13...)

* boring and straightforward syntax and file structure: no syntactic sugar, no aliases, none of the formatting freedom that humans cherish but that confuses machines, no context-specific syntax.

* explicitness: no hidden global state, no shortcuts, no UB

* basic static types and constraints

* tests optimized for machine evaluation

etc.


ECDSA is a horrible workaround for the patent on Schnorr signatures. Here's my talk from 2019 about the issue.

https://www.youtube.com/live/2IpZWSWUIVE?si=-LRRbU2mJgL9LiNP...


Great talk. Wish the camera focused on the slides more.

The ed25519 issues are absolutely insane. Is there anywhere I can read more about that?


Excellent. Really enjoyed that.


Exactly my thought. I never made it to Vista. In 2007 I swapped WinXP (always used with the classic grey theme) for OS X Tiger on a MacBook and have never gone back to Windows since.

I wonder where a decent alternative will be lurking in the next few years? Apple is losing some grip, but all others are still worse overall.


You don't have to play this game - you can always write within unsafe { ... } like in plain old C or C++. But people do choose to play this game because it helps them to write code that is also correct, where "correct" has an old-school meaning of "actually doing what it is supposed to do and not doing what it's not supposed to".


That just makes it seem like there's no point in using this language in the first place.


Don't let perfect be the enemy of good.

Software is built on abstractions. If all your app code is written without unsafe and you have one low-level unsafe block to allow for something, you get the value of Rust for all your app logic, and you know the actual bug is in the unsafe code.


This is like saying there’s no point having unprivileged users if you’re going to install sudo anyway.

The point is to escalate capability only when you need it, and you think carefully about it when you do. This prevents accidental mistakes having catastrophic outcomes everywhere else.


I think sudo is a great example. It's not much more secure than just logging in as root. It doesn't really protect against malicious attackers in practice. And in practice it's more of an annoyance than a protection against accidental mistakes.


Unsafe isn’t a security feature per se. I think this is where a lot of the misunderstanding comes from.

It’s a speed bump that makes you pause to think, and tells reviewers to look extra closely. It also gives you a clear boundary to reason about: it must be impossible for safe callers to trigger UB in your unsafe code.


That's my point; I think after a while you instinctively repeat a command with sudo tacked on (see XKCD), and I wonder if I'm any safer from myself like that?

I'm doubtful that those boundaries you mention really work so well. I imagine that in practice you can easily trigger faulty behaviour in unsafe code from within safe code. Practical type systems are barely powerful enough to let you inject a proof of valid-state into the unsafe call. Making a contract at the safe/unsafe boundary statically enforceable (I'm not doubting people do manage it in practice, but...) probably requires a mountain of inessential complexity and/or runtime checks and less-than-optimal algorithms & data structures.


> That's my point; I think after a while you instinctly repeat a command with sudo tacked on (see XKCD), and I wonder if I'm any safer from myself like that?

We agree that this is a dangerous / security-defeating habit to develop.

If someone realizes they're developing a pattern of such commands, it might be worth considering whether there's an alternative: some configuration or other suid binary which, being more specialized or purpose-built, might accomplish the same task with lower risk than a generalized sudo command.

This is often a difficult task.

Some orgs introduce additional hurdles to sudo/admin access (especially to e.g. production machines) in part to break such habits and encourage developing such alternatives.

> unsafe

There are usually safe alternatives.

If you use linters which require you to write safety documentation every time you break out an `unsafe { ... }` block, and require documentation of preconditions every time you write a new `unsafe fn`, and you have coworkers who will insist on a proper soliloquy of justification every time you touch either?

The difficult task won't be writing the safe alternative, it will be writing the unsafe one. And perhaps that difficulty will sometimes be justified, but it's not nearly so habit forming.


What you postulate simply doesn’t match the actual experience of programming Rust


You are of course welcome to imagine whatever you want, but why not just try it for yourself?


Because only lines marked with unsafe are suspicious, instead of every line of code.

Also, the community culture matters: even though static analysis has existed for C since 1979, it is still something we need to force-feed many developers in the C and C++ world.


A magic prefix (similar to a byte-order mark, BOM) also kills the idea. The reason for the success of any standard is the ability to establish consensus while navigating existing constraints. UTF-8 won over codepages and over UTF-16/32 by being purely ASCII-compatible. A magic prefix kills that compatibility.
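To make the ASCII-compatibility point concrete, here's a small Python sketch using only the standard library: pure ASCII text encodes to UTF-8 byte-for-byte identically, while a BOM-style prefix (or UTF-16) breaks that property.

```python
# UTF-8 encodes pure ASCII text identically to raw ASCII bytes.
ascii_text = "hello"

utf8 = ascii_text.encode("utf-8")
assert utf8 == b"hello"  # byte-for-byte identical to ASCII

# "utf-8-sig" prepends the 3-byte BOM EF BB BF: no longer ASCII-compatible.
with_bom = ascii_text.encode("utf-8-sig")
assert with_bom == b"\xef\xbb\xbfhello"

# UTF-16 is incompatible from the very first byte.
utf16 = ascii_text.encode("utf-16-le")
assert utf16 == b"h\x00e\x00l\x00l\x00o\x00"
```

Any magic prefix puts a document in the same position as the BOM case: every ASCII-only consumer breaks on the first byte.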


I love how people bring up deflationary spiral as a "peril" while the prerequisite for it is the universal and smashing success of Bitcoin.

The only "problem" Bitcoin poses for economies is for governments to fine-tune their local economies via currency production and related controls. In that sense, we should watch how events unfold in Turkey.

* among major "regular" economies, Turkey has the highest % of people holding crypto (≈20%). Second only to special zones UAE and Singapore (31%, 24%).

* The Turkish lira has inflated steadily over the last 30-40 years, well over 10% annually and recently over 50%.

* Turkey does not mandate pricing goods in the local currency: you can pay in dollars or euros alongside the lira.

* When you enter Istanbul airport, Every. Single. Gate. is marked with an ad for BTCTurk, the major crypto exchange in the country, inside and outside.

* Istanbul city market is full of traders who use USDT on Tron.

The experiment of the social game "Bitcoin" boils down to this: will people self-organize a functioning economy with monetary freedom while the government loses its grip on it, or will the economy collapse without the government's regulation and protective management?


This is just a convenient way to access stable western currency. Having been to Russia and Argentina during their worst inflation years before crypto, they solved their issues by asking for US paper dollars. Crypto is just saving them currency exchange fees.

And there's no way Turkiye is behind the value of BTC. It's still driven by speculators.


> and smashing success of Bitcoin.

It's a success today, but we haven't yet reached the point when no new coins are issued and mining is funded solely by transaction fees. I suspect there are going to be some problems then.


Like what? As far as I can tell, it will solidify its store of value.


> Like what? As far as I can tell, it will solidify its store of value.

Which is the bug:

> No currency should be able to buy the same basket of goods over very long timespans through hoarding. If you want to retain the purchasing power of your money, it should participate in society via investment.

* https://twitter.com/dollarsanddata/status/159265180975079833...


That’s a “hot take” that people take as an axiom. What if it isn’t? What is the precise definition of “participating in society”? What level of earning and spending is considered morally good and who’s to decide that? (Meta questions arise when discussing conflicts of interest of the deciders.)


> Turkish lira is steadily inflated over the last 30-40 years, well over 10% and recently over 50%.

Because the authoritarian government took over the previously independent central bank and lowered interest rates. Higher inflation was predicted by mainstream economists, and they were right.

* https://www.aljazeera.com/news/2021/3/20/turkeys-erdogan-sac...

* https://en.wikipedia.org/wiki/Currency_interventions_under_E...


Thank God that would never happen in the US.


Get married, have a couple of kids, and a lot of life issues go away: you'll always have something that actually needs to get done ASAP instead of just staring at a todo list and wandering around.


then you have more life issues ;-)


Thank you Jimmy, great article.

My 23+ years of experience in computer science and programming are a zebra of black-and-white moments. Most of the time, things are obscure, complicated, dark and daunting. Until suddenly you stumble upon a person who can explain them in simple terms and focus on the important bits. You can then put this new knowledge into a well-organized hierarchy in your head and suddenly become wiser and more empowered.

"Writing documentation", "talking at conferences", "chatting at the water cooler", "writing a blog" and all the other discussions, from Twitter to mailing lists, are all about trying to get some ideas and understanding from one head into another, so more people can be enlightened and build further.

And oh my, how hard that is. We are lucky to sometimes reach enlightenment through great RTFMs.


When I learned crypto 5-10 years ago, it turned out that a lot of the "building blocks" are mostly hacks. Looking back from the 2020s, we see that some of the standards we have used for the last 20-30 years could in principle be thrown out the window (they can't be, for compatibility reasons) and replaced with much cleaner and more universal alternatives.

If we leave out modern exotic stuff (post-quantum crypto, zkSNARKs, homomorphic encryption), 99% of everyday cryptography is based on two building blocks:

1. Symmetric crypto for ciphers and hash functions.

2. Algebraic group with "hard discrete log problem" for key exchange, signatures, asymmetric encryption and simple zero-knowledge proofs.
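To make the second building block concrete, here's a toy Diffie-Hellman key exchange in Python over a deliberately tiny prime field. All the numbers below are illustrative only; nothing here is secure, and real systems use vetted groups such as Ristretto255.

```python
# Toy Diffie-Hellman: security rests on the hardness of recovering the
# exponent (the discrete log) from g^a mod p. These parameters are far
# too small to be secure; they only illustrate the mechanics.
p = 2**61 - 1        # a Mersenne prime, standing in for a real group modulus
g = 3                # generator (illustrative, not vetted)

alice_secret = 123456789
bob_secret = 987654321

alice_public = pow(g, alice_secret, p)   # g^a mod p
bob_public = pow(g, bob_secret, p)       # g^b mod p

# Each side combines its own secret with the other's public value.
shared_alice = pow(bob_public, alice_secret, p)   # (g^b)^a mod p
shared_bob = pow(alice_public, bob_secret, p)     # (g^a)^b mod p
assert shared_alice == shared_bob
```

Signatures, key derivation, and simple zero-knowledge proofs reuse exactly this structure: public values are group elements, secrets are exponents.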

Historically, these two categories are filled with a zoo of protocols. E.g. AES is a block cipher, while SHA-1 and SHA-2 are hash functions.

Today, you can roughly achieve everything of the above with two universal building blocks:

- Keccak for all of symmetric crypto: it is suited for encryption, hashing, duplex transcripts for ZK protocols, etc.

- The Ristretto255 group built on Curve25519: for Diffie-Hellman, signatures, key derivation, threshold schemes, encryption and more.

The problem is that none of the described features is implemented as a turnkey standard, and we are still stuck using older crypto. Heck, even Git is still using SHA-1.

Then, after you have your building blocks, there is hairier stuff such as application-specific protocols: TLS, Signal, PAKE/OPAQUE, proprietary hardware security schemes for full-disk encryption and access controls, etc.
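Python's hashlib already exposes part of the Keccak family (the FIPS 202 SHAKE XOFs), which is enough to sketch the "one primitive, many jobs" idea. The keyed-hash variant below is a simplification by key prefixing, not the real KMAC/cSHAKE construction with proper domain separation.

```python
import hashlib

# One Keccak-based primitive (SHAKE128, a FIPS 202 XOF) doing jobs that
# traditionally required separate algorithms.
msg = b"attack at dawn"

# 1. Plain hashing: squeeze a fixed-length 32-byte digest.
digest = hashlib.shake_128(msg).digest(32)
assert len(digest) == 32

# 2. Key derivation: squeeze as many bytes as needed from one input.
okm = hashlib.shake_128(b"master key material").digest(64)
key_enc, key_mac = okm[:32], okm[32:]
assert key_enc != key_mac

# 3. Keyed hashing by prefixing the key (illustrative only; real designs
#    use KMAC/cSHAKE for domain separation).
tag = hashlib.shake_128(b"secret key" + msg).digest(16)
assert len(tag) == 16
```

The same permutation also powers duplexed transcripts and stream encryption in the literature, but those modes are not in the standard library.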


>Keccak for all of symmetric crypto: it is suited both for encryption, hashing, duplex transcripts for ZK protocols etc.

Unfortunately, Keccak and sponge constructions in general are inherently sequential. Even with hardware acceleration it heavily restricts possible performance. For example, AES-CBC encryption is 4-8 times slower than AES-CTR on high-end CPUs with AES-NI available. VAES makes the difference even bigger. Algorithms like AES-GCM, ChaCha20, and BLAKE3 are designed specifically to allow parallelization.
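The sequential-vs-parallel distinction can be sketched in Python using SHAKE as a stand-in PRF. This only demonstrates the data dependencies; a real implementation would use AES, and the chaining below is CBC-shaped rather than actual AES-CBC.

```python
import hashlib

def prf(key: bytes, block: bytes) -> bytes:
    # Stand-in pseudorandom function (a real cipher would be AES).
    return hashlib.shake_128(key + block).digest(16)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k" * 16
plaintext = [b"a" * 16, b"b" * 16, b"c" * 16]

# CBC-style chaining: block i needs ciphertext i-1 first => inherently serial.
iv = b"\x00" * 16
cbc, prev = [], iv
for p in plaintext:
    c = prf(key, xor(p, prev))
    cbc.append(c)
    prev = c

# CTR-style: block i depends only on (key, i) => trivially parallelizable.
ctr = [xor(p, prf(key, i.to_bytes(16, "big"))) for i, p in enumerate(plaintext)]

# CTR blocks can be produced in any order and reassembled identically.
ctr_out_of_order = {i: xor(plaintext[i], prf(key, i.to_bytes(16, "big")))
                    for i in (2, 0, 1)}
assert [ctr_out_of_order[i] for i in range(3)] == ctr
```

A sponge's absorb/squeeze phases have the same shape as the CBC loop: each step consumes the previous state, so independent lanes (as in BLAKE3's tree or KangarooTwelve) are needed to recover parallelism.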


How much does the lack of parallelization matter in practice though? Sure, AES-CTR can be parallelized, but the authentication function you're probably pairing it with likely can't. And in a lot of cases I'm aware of where encryption parallelism is important for performance (e.g. line-rate VPN encryption), you can achieve parallelism for the operation as a whole without achieving stream based parallelism. In the VPN example, even if you can't encrypt all the blocks in a single packet in parallel, you can probably achieve just as much parallelism speedup by encrypting multiple packets in parallel.


> Unfortunately, Keccak and sponge constructions in general are inherently sequential.

Couldn't you simply use BLAKE3 instead? To my knowledge, BLAKE3 was designed exactly to solve this "parallelism problem" by combining the "cryptographic ideas" of BLAKE2 with the binary tree structure of Bao (the latter was designed to make the hash construction easy to parallelize).


Fwiw I don't think there's anything inherently sequential about the Keccak permutation itself. KangarooTwelve is a fully parallelizable hash built on Keccak. (Though they did use the sponge construction on the XOF side, so that part is serial.)


I meant the absorb and squeeze parts. The permutation itself (or, more specifically, its round function) can be efficiently implemented in hardware, but you can't mask latency by applying the permutation in parallel. Yes, KangarooTwelve is an improvement in this regard, but the grandparent was talking specifically about Keccak/SHA-3.


Sorry for the lack of clarity, but I was saying "Keccak" and not "SHA-3" for that specific reason: it's a permutation building block suitable for a whole range of constructions (cSHAKE, KangarooTwelve, etc.). SHA-3 specifically is overkill and unnecessary, imho.

cSHAKE128 is a much better replacement for HMAC and SHA-512 in (zk)proofs, while KangarooTwelve suits things like FDE and massive volumes of data.


I'm absolutely clueless about crypto, but isn't there also a trade-off between being mathematically superior and being well optimized in software and hardware implementations?


The tradeoff is not that simple (I wish it were :-).

Usually it goes like this: someone made something useful, optimized for a specific use case under certain time (or competence) constraints, amid a total lack of decent alternatives. Then people adopt and use it, and it becomes the standard. Then people want to do more things with it and try to build around or on top of it, and Frankenstein monsters are born and also become standard.

If you start from scratch you can make a crypto protocol that is both better designed (causes fewer UX pains and critical bugs) AND performs better on relevant hardware. Also, do not forget that performance is easily solved by hardware: Moore's law and custom hardware extensions are a thing.

Example: Keccak is so much better from the composition perspective that, if it were used ubiquitously, you'd definitely have ubiquitous hardware support. But if everyone continues to use a mishmash of AES and SHA constructions on the pretext that Keccak is not as fast, then we'll never move forward. People will continue building over-complicated protocols, bearing subpar performance and keeping up crypto's reputation as dark wizardry inaccessible to mere mortals.


> Also do not forget that performance is easily solved by hardware: Moore's law

"Just write slow algorithms, hardware will eventually get faster" doesn't really work when talking about performance implications now. If a hash algorithm used millions of times doesn't perform well on current user hardware, then the algorithm is simply not a good fit.

> and then custom hardware extensions are a thing.

That's the kind of trade-off I alluded to as well. As a developer of a tool (e.g. Git), I'd pick hash algorithms that have hardware extensions on the most common hardware today, rather than something that may eventually get them.

I guess developing such protocols right now for the future might still be advisable, but it seems odd to criticize software that uses well-optimized algorithms and fulfills its requirements.


SHA-1 in Git was just supposed to catch corruption; it was never intended to be used for security.


This is a justification that was made up after Git came under increasing criticism for its poor choice of hash function following the SHAttered attack. It was already known that SHA-1 was weak before Git was invented.

The problem is... it doesn't line up with the facts.

Git has been using SHA-1 hashes for signatures since very early on. It also makes claims in its documentation about "cryptographic security". It does not rigorously define what "cryptographic security" means, but plausibly it should mean using a secure hash function without known weaknesses.


Torvalds claimed:

"So that was one of the issues. But one of the issues really was, I knew I needed it to be distributed, but it needed to be really, really stable. And people kind of think that using the SHA-1 hashes was a huge mistake. But to me, SHA-1 hashes were never about the security. It was about finding corruption.

Because we’d actually had some of that during the BitKeeper things, where BitKeeper used CRCs and MD5s, right, but didn’t use it for everything. So one of the early designs for me was absolutely everything was protected by a really good hash."

https://github.blog/open-source/git/git-turns-20-a-qa-with-l...


That's a valid point. However, modern hardware and crypto algorithms are fast enough that it pays off to have "do it all" protocols with as few tradeoffs as possible.

Example: Git users need both corruption protection AND secure authentication. If authentication is not built in, it has to be bolted on around it, and building around is always more costly in the end.

Unfortunately, 20-30 years ago considerations such as "SHA-1 is shorter and faster" were taken seriously, plus all the crypto that existed back then sucked big time. Remember the Snowden scandal in 2013? That, plus Bitcoin and blockchains moving toward the mainstream, brought about a review of TLS and started the SHA-3 competition. Many more brains have turned to crypto since then, and a new era has begun.


If this were true, then wouldn't MD5 have been the better choice?

Also, SHA-1's preimage resistance (which still isn't broken) is necessary for the security of signed commits, regardless of the hash function used for the signature itself, since a commit object references its tree and predecessor commit by their SHA-1 hashes.
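For reference, a Git object ID is just SHA-1 over a short header plus the content, which is why commit signatures transitively depend on SHA-1's preimage resistance. A minimal Python sketch reproducing `git hash-object` for a blob:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # Git's object ID: SHA-1 over the header "blob <size>\0" + content.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo 'hello' | git hash-object --stdin`.
assert git_blob_hash(b"hello\n") == "ce013625030ba8dba906f756967f9e9ca394464a"
```

Trees and commits are hashed the same way (with `tree`/`commit` headers), each embedding the SHA-1 IDs of the objects they reference, so a signature over a commit covers the whole reachable history only as strongly as SHA-1 itself.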


Really nice summary! Thank you.

