Hacker News | valenterry's comments

Scala has dependent types (though inferior to Idris's) and has the whole JVM ecosystem.


> This is emphatically not fundamental to LLMs! Yes, the next token is selected randomly; but "randomly" could mean "chosen using an RNG with a fixed seed."

This. Thanks for saying that, because now I don't need to read the article, since if the author doesn't even get that, I'm not interested in the rest.


It should be noted that this is not an inherent advantage of passkeys over passwords. It is possible to achieve the same with passwords, e.g. by using a hash cascade.
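As a rough sketch of what I mean by a hash cascade (all names here are made up for illustration, and in practice the derivation would have to live in the user agent so the master secret never reaches the page):

```python
import hashlib
import hmac

def site_password(master: str, origin: str) -> str:
    # Derive a per-origin password from a single master secret.
    # Only the derived value is ever sent to the site.
    digest = hmac.new(master.encode(), origin.encode(), hashlib.sha256).hexdigest()
    return digest[:24]

master = "correct horse battery staple"

# A lookalike phishing domain receives a different derived value
# than the real site, so the captured value is useless elsewhere:
real = site_password(master, "https://example.com")
fake = site_password(master, "https://examp1e.com")
```

The derivation is deterministic, so nothing needs to be stored per site; the catch (as pointed out below) is that the browser has to enforce the origin binding, otherwise a phishing page can just forward whatever you type.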


Sure, but then you still need a protocol between user agent and website. If you just do this in JavaScript, you're not protected against phishing sites simply forwarding the password as entered.

Passkeys can in fact be backed by exactly this, i.e. a HMAC-only stateless implementation backed by a single password: https://github.com/lxgr/brainchain


> Sure, but then you still need a protocol between user agent and website.

Yes of course. Just like you do for passkeys.

> Passkeys can in fact be backed by exactly this, i.e. a HMAC-only stateless implementation backed by a single password: https://github.com/lxgr/brainchain

No, not quite. It says right there:

> "Login" with your passphrase, and you can create non-discoverable WebAuthN credentials (don't call them passkeys, but definitely be reminded of them) at ~all~ some websites supporting them (...)

That's the thing: with passwords, a website/app cannot prevent you from controlling the password yourself. With passkeys and attestation it can.


But attestation for passkeys is dead. Neither Apple's nor Google's implementation (with negligible exceptions) supports it anymore, so any site demanding attestation will immediately disqualify > 99% of all potential users.

Some still might, e.g. for corporate or high security contexts, but I don't think it'll become a mass-adopted thing if things don't somehow drastically change course.


It's still in the standard. They could remove it, but they don't, so from my perspective it's just like how Google wasn't evil. Until they decided otherwise.


> It's still in the standard.

Yes, because hardware authenticators (like Yubikeys) still commonly support it, and it makes sense there.

I guess they could add an explicit remark like "synchronized credentials must not support attestation", and given the amount of FUD this regularly seems to generate I'd appreciate that. But attestation semantics seem to be governed more by FIDO than the W3C, so putting that in the WebAuthN spec would be a bit awkward, I think.


Hm, I disagree. I prefer if the user has the freedom to choose how they want to do things, even at the cost of some users choosing the wrong way and running into problems. It's a question of balance, but when I look at recent tech/internet history, I tend to not want to give central authorities any more power than they already have.


Ideally, sure, but the reality is just that some entities are not only reputationally, but also legally required to bear the liability for account takeovers.

In other words, you have a principal-agent problem: users doing custom software-passkey acrobatics, and the banks being liable for any funds lost.

Preferably, use of attestation should be limited to these (and enterprise) scenarios, and I do share the concern of others starting to use them as weak proofs of humanity etc.


> Ideally, sure, but the reality is just that some entities are not only reputationally, but also legally required to bear the liability for account takeovers.

Seems like an absolutely rare edge case to me. Or maybe even just a misunderstanding. I doubt there is a law that says that. If anything, I could imagine a law saying that a company has to take "sufficient precautions".

But even if what you say were true - that's not something to solve with tech. That means the law should be changed.


> Seems like an absolutely rare edge case to me.

Bank and payment card transactions are arguably a pretty big part of everyday life to most people.

> I doubt there is a law that says that.

Reg E/Z in the US and PSD2 in the EU pretty firmly put the burden for these types of situations/losses on the bank/PSP. They don't specifically mandate the "how", but for better or worse, industry perception and common practice is for that to include root detection, blocking VoIP numbers from receiving SMS-OTPs etc.

> That means the law should be changed.

The law that makes banks liable for most cases of account compromise? I'm actually quite happy with that, even if it comes with some unfortunate externalities.


Is it fair to say all passkey implementations have this advantage, while only some password implementations can match it?


It is absolutely unfair to say it. Just like passwords stored in a password manager, passkeys can be copied out of the device for safekeeping. Because you can copy them out, a user can be induced to give them to someone.

I saw passkey boosters go very, very rapidly from "Passkeys are immune to phishing!" to "Passkeys are phishing resistant!" when lots of real-world people started using passkeys and demonstrated that you absolutely must have a way to back them up and move them around.


> passkeys can be copied out of the device for safekeeping

You can't copy them out on at least the iOS, Android, and (to my knowledge) Windows default implementations.

> lots of real-world people started using passkeys and demonstrated that you absolutely must have a way to back them up and move them around.

Millions of people use them without being able to move them around in the way you describe.


> You can't copy them out on at least the iOS, Android, and (to my knowledge) Windows default implementations.

Pardon? The official support docs disagree with you [0][1][2]. They absolutely leave the device.

Other passkey managers let them leave the device in a way that you control, but even the default ones copy them off the system they were created on.

[0] <https://support.google.com/accounts/answer/6197437?hl=en&co=...>

[1] <https://support.apple.com/guide/iphone/passwords-devices-iph...>

[2] Examine the "Can I use passkeys across multiple devices?" Q and its A here: <https://support.microsoft.com/en-us/windows/passkeys-frequen...>


Yes, they're synchronized, but I wouldn't call that "copying them out", as that to me implies somehow getting access to the raw private key or root secret bytes.

Both Apple and Google have pretty elaborate ceremonies for adding a new device to an existing account in a way that synchronizes over passkeys.


> ...as that to me implies somehow getting access to the raw private key or root secret bytes.

When passkeys were first introduced, they were 100% stuck to the device that they were created on. There was absolutely no real way to copy them off. This is when proponents were -correctly- making the claim that they were immune to phishing.

When lots of users (who -notably- were not supported by whole-ass IT departments who set up and run systems that handle provisioning and enrolling new devices) started using passkeys, the correctness of the thing that many non-boosters were screaming ("You have to have a way to back these up and move them between devices!") became abundantly clear. Passkeys became something that could be copied off of devices, and proponents -correctly- switched to the claim "Passkeys are phishing resistant".

Once things switched around so that passkeys were no longer stuck on a single device, third-party managers got the ability to manage and copy passkeys. [0]

Hopefully it's now clear that the shift from "they never leave the device" to "they do leave the device" (and the consequences of this change) is what I'm talking about.

[0] At least, they will for the next five, ten years until the big players decide that it's okay to use attestation to lock them out to "enhance security".


It sounds like part of the problem is that two rather separate standards of "phishing" are getting conflated:

1. "Hi, I'm your bank, log in just like you normally do." (Passkeys immune.)

2. "Hi, I'm your bank, do something strange I've never ever asked you to do before by uploading some special files or running this sketchy program." (Passkeys just resist.)

The problem with the expansive definition is it basically starts to encompass every kind of trick or social-engineering ever.


That qualifies as "immune to phishing" as far as I'm concerned. No reasonable person using a reasonable implementation will ever be successfully victimized in that manner.

We need to stop pretending that padded cells for the criminally incompetent are a desirable design target. If you are too stupid to realize that you are being taken for a ride when asked to go through a manual export process and fork over sensitive information (in this case your passkeys) to a third party then you have no business managing sensitive information to begin with. Such people should not have online accounts. We should not design technology to accommodate that level of incompetence.

If you can't stop driving your car into pedestrians in crosswalks you lose your license. If you can't stop handing over your bank account number to strangers who call you on the phone you lose all of your money. If you eat rotten food you get sick and possibly die. If you hop a fence and proceed to fall off of the cliff behind it you will most likely perish. To some extent the world inherently has sharp edges and we need to stop pretending that it doesn't because when we do that it makes the world a worse place.


Then you should write assembly only. Like `MOV`, `ADD`... can't really get simpler than that.

Problem is, that makes every small part of the program simple, but it increases the number of parts (and/or their interaction). And ultimately, if you need to understand the whole thing it's suddenly much harder.

Surely you can write the same behaviour in a "clever" (when did that become a negative attribute?) or "good" way in assembly. You are correct. But that's a different matter.


By that definition you will be stuck on the first language you love.

And someone else will be stuck not doing anything because they are unsatisfied with all languages. :-)


> Shameless plug: learn your tool. Don’t approach Postgresql/Mssql/whathaveyousql like you’re a backend engineer.

Erm, knowing and understanding how to use your database is a bread and butter skill of a backend engineer.


Google isn't even good at engineering great software.

They have some good people working on some good projects. If you look at the relation between the software quality of their average product and the number of developers they have... yeah, I don't know. Maybe hiring tons of new grads who are good at leetcode and then forcing them to use golang... is not what actually makes high-quality software.

I could believe that they are good at doing research though.


Most of the core products at Google are still written in pre-C++11.

I wish these services would be rewritten in Go!

That’s where a lot of the development time goes: trying to make incredibly small changes that cause cascading bugs and regressions in a massive 2000s C++ codebase that doesn’t even use smart pointers half the time.

Also, I think the outside world has a very skewed view on Go and how development happens at Google. It’s still a rather bottom-up, or at least distributed, company. It’s hard to make hundreds of teams actually do something. Most teams just ignored those top-down “write new code in Go” directives and continued using C++, Python, and Java.


I wouldn't say most. Google is known for constantly iterating on its code internally, to the point of not getting anything done other than code churn. While there is use of raw pointers, I'd argue it's still idiomatic in C++ to use raw pointers for well-scoped, non-owning references. Using shared pointers everywhere can be overkill. That doesn't mean the codebase is pre-C++11 in style.

Rewriting a codebase in another language that has no good interop is rarely a good idea. The need to replicate multiple versions of each internal library can become incredibly taxing. Migrations need to be low-risk at Google scale, and if you can't do them piecewise it's often not worth attempting either. Also worth noting that Java is just as prevalent, if not more so, in core products.


> Static types, algebraic data types, making illegal states unrepresentable: the functional programming tradition has developed extraordinary tools for reasoning about programs

Looks like the term "functional programming" has been watered down so much that now it is as useful as OOP: not at all.

Look, what matters is pure functional programming. It's about referential transparency, which means managing effects and reasoning about code in a way similar to what you can do in math. Static typing is very nice but orthogonal; ADTs and making illegal states unrepresentable are good things, but also orthogonal.


What would you say if someone has a project written in, let's say, PureScript and then they use a Java backend to generate/overwrite and also version control Java code. If they claim that this would be a Java project, you would probably disagree right? Seems to me that LLMs are the same thing, that is, if you also store the prompt and everything else to reproduce the same code generation process. Since LLMs can be made deterministic, I don't see why that wouldn't be possible.


PureScript is a programming language. English is not. A better analogy would be what would you say about someone who uses a No Code solution that behind the scenes writes Java. I would say that's a much better analogy. NoCode -> Java is similar to LLM -> Java.

I'm not debating whether LLMs are amazing tools or whether they change programming. Clearly both are true. I'm debating whether people are using accurate analogies.


> PureScript is a programming language. English is not.

Why can’t English be a programming language? You would absolutely be able to describe a program in English well enough that it would unambiguously be able to instruct a person on the exact program to write. If it can do that, why couldn’t it be used to tell a computer exactly what program to write?


> Why can’t English be a programming language? You would absolutely be able to describe a program in English well enough that it would unambiguously be able to instruct a person on the exact program to write

Various attempts have been made. We got Cobol, Basic, SQL, … A programming language needs to be formal, and English is not.


I don’t think you can do that. Or at least if you could, it would be an unintelligible version of English that would not seem much different from a programming language.


I agree with your conclusion but I don't think it'd necessarily be unintelligible. I think you can describe a program unambiguously using everyday natural language, it'd just be tediously inefficient to interpret.

To make it sensible you'd end up standardising the way you say things: words, order, etc and probably add punctuation and formatting conventions to make it easier to read.

By then you're basically just at a verbose programming language, and the last step to an actual programming language is just dropping a few filler words here and there to make it more concise while preserving the meaning.


I think so too.

However, I think there is a misunderstanding between being "deterministic" and being "unambiguous". Even C is an ambiguous programming language, but it is "deterministic" in that it behaves in the same ambiguous/undefined way under the same conditions.

The same can be achieved with LLMs too. They are "more" ambiguous of course and if someone doesn't want that, then they have to resort to exactly what you just described. But that was not the point that I was making.


I'm not sure there's any conflict with what you're saying, which I guess is that language can describe instructions which are deterministic while still being ambiguous in certain ways.

My point is just a narrower version of that: where language is completely unambiguous, it is also deterministic when interpreted in some deterministic way. In that sense plain, intelligible English can be a sort of (very verbose) programming language if you just ensure it is unambiguous, which is certainly possible.

It may be that this can still be the case if it's partly ambiguous but that doesn't conflict with the narrower case.

I think we're agreed on LLMs in that they introduce non-determinism in the interpretation of even completely unambiguous instructions. So it's all thrown out as the input is only relevant in some probabilistic sense.


I don't think it would be unintelligible.

It would be very verbose, yes, but not unintelligible.


Why not?

Here's a very simple algorithm: you tell the other person (in English) literally what key they have to press next. So you can easily have them write all the java code you want in a deterministic and reproducible way.

And yes, maybe that doesn't seem much different from a programming language which... is the point no? But it's still natural English.
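To make the toy concrete (everything here is invented for the example), a deterministic interpreter for a tiny, restricted subset of English might look like:

```python
import re

def interpret(instruction: str) -> str:
    # Deterministically map one restricted-English sentence to a keypress.
    # Anything outside the recognized pattern is rejected as ambiguous.
    m = re.fullmatch(r"press the (\w) key", instruction.lower())
    if not m:
        raise ValueError(f"ambiguous or unsupported: {instruction!r}")
    return m.group(1)

program = [
    "Press the h key",
    "Press the i key",
]
typed = "".join(interpret(step) for step in program)  # "hi"
```

The sentences are still natural English, but because the interpreter only accepts an unambiguous subset, the result is fully reproducible.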


No. Natural language is vague, ambiguous and indirect.

Watch these poor children struggle with writing instructions for making a sandwich:

https://youtu.be/FN2RM-CHkuI


English can be ambiguous. Programming languages like C or Java cannot


English CAN be ambiguous, but it doesn't have to be.

Think about it. Human beings are able to work out ambiguity when it arises between people, given enough time and dedication, and how do they do it? They use English (or another equivalent human language). With enough back and forth, clarifying questions, or enough specificity in the words you choose, you can resolve any ambiguity.

Or, think about it this way. In order for the ambiguity to be a problem, there would have to exist an ambiguity that could not be removed with more English words. Can you think of any example of ambiguous language, where you are unable to describe and eliminate the ambiguity only using English words?


Human beings are able to work out the ambiguity because a lot of meaning is carried in shared context, which in turn arises out of cultural grounding. That achieves disambiguation, but only in a limited sense. If humans could perfectly disambiguate, you wouldn't have people having disputes among otherwise loving spouses and friends, arising out of merely misunderstanding what the other person said.

Programming languages are written to eliminate that ambiguity because you don't want your bank server to make a payment because it misinterpreted ambiguous language in the same way that you might misinterpret your spouse's remarks.

Can that ambiguity be resolved with more English words? Maybe. But that would require humans to be perfect communicators, which is not that easy because again, if it were possible, humans would have learnt to first communicate perfectly with the people closest to them.


COBOL was designed under the same principles: a simple, unambiguous English like language that works for computers.



A deterministic prompt + seed used to generate an output is interesting as a way to record deterministically how code came about, but it's also not a thing people are actually doing. Right now, everyone is slinging around LLM outputs without any attempt at reproducibility; no seed, nothing. What you've described and what the article describes are very different.


Yes, you are right. I was mostly speaking in theoretical terms - currently people don't work like that. And you would also have to use the same trained LLM of course, so using a third party provider probably doesn't give that guarantee.

But it would be possible in theory.
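A toy sketch of what "possible in theory" means here (the "model" below is just a stand-in distribution, not a real LLM, and all names are invented): with a fixed seed, token sampling is fully reproducible.

```python
import random

def sample_next(probs, rng):
    # Weighted choice over the model's next-token distribution.
    r = rng.random()
    acc = 0.0
    for token, p in probs:
        acc += p
        if r < acc:
            return token
    return probs[-1][0]

def generate(model, prompt, seed, n):
    # Same model + same prompt + same seed => same output, every time.
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(n):
        out.append(sample_next(model(out), rng))
    return out

# Stand-in "model": a fixed distribution regardless of context.
toy_model = lambda ctx: [("a", 0.5), ("b", 0.3), ("c", 0.2)]

run1 = generate(toy_model, ["x"], seed=42, n=5)
run2 = generate(toy_model, ["x"], seed=42, n=5)
# run1 == run2: the "randomness" is entirely determined by the seed.
```

The same holds for a real LLM as long as the weights and the inference stack are bit-identical, which is exactly why a third-party API usually can't give that guarantee.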


Nope, that is precisely what pure functional programming is about: to turn actions like "draw something to the screen" into regular values that you can store into a variable, pass around, return from a function and so on.

It's not a utopia. It will eventually happen and it will replace how react.js currently works. effect.website will probably be the foundation.
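A minimal sketch of the effects-as-values idea (in Python rather than a real FP language, with all names invented): constructing an effect performs nothing; a tiny interpreter at the edge of the program runs it.

```python
from dataclasses import dataclass

# Effects are plain values: building one performs no I/O.
@dataclass(frozen=True)
class Print:
    text: str

@dataclass(frozen=True)
class Sequence:
    effects: tuple

def greet(name: str) -> Sequence:
    # Pure function: it *describes* output instead of performing it,
    # so it can be stored, passed around, tested, and composed.
    return Sequence((Print(f"Hello, {name}!"), Print("Welcome.")))

def run(effect, sink):
    # The impure "end of the world": interpret the description.
    if isinstance(effect, Print):
        sink.append(effect.text)
    elif isinstance(effect, Sequence):
        for e in effect.effects:
            run(e, sink)

log = []
run(greet("world"), log)  # only here does anything "happen"
```

Everything up to `run` is referentially transparent; swapping the interpreter (real stdout, a test buffer, a GUI) changes nothing about the pure core.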


I'm well aware of what "pure functional programming" is about, I spend most of my time in Clojure during a normal work day, and done my fair deal of Haskell too :)

And yes, even the most pure functional language eventually needs to do something non-pure, even if the entire flow up until that point is pure, that last step (IO) just cannot be pure, no matter how badly you want it to.

With that said, you'd have to pry my pure functions out of my cold dead hands, but I'm not living under the illusion that every function can be pure in a typical application, unless you have no outputs at all.


That last step is still pure, because it's still just returning a value. A data structure. You did Haskell, so you should know how it works.

