
I mostly disagree.

First, LLMs should be happy to use a made-up language described in a couple thousand tokens without issues; you just need a good, LLM-friendly description with some examples, plus a compiler the model can iterate against and get feedback from.
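(Roughly the loop I have in mind, as a sketch: askLLM below is a hypothetical placeholder for whatever model call you use, and "go build" stands in for whatever compiler the language has. The only real point is that the raw diagnostics get fed straight back in.)

    // Sketch of a compile-and-retry loop. askLLM is a hypothetical stand-in
    // for the model/agent call; the task string and file name are made up.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // askLLM is a placeholder; wire it up to your model of choice.
    func askLLM(prompt string) string {
        panic("unimplemented")
    }

    func main() {
        prompt := "Implement the client described in SPEC.md" // assumed task
        for attempt := 1; attempt <= 5; attempt++ {
            code := askLLM(prompt)
            if err := os.WriteFile("client.go", []byte(code), 0o644); err != nil {
                panic(err)
            }
            out, err := exec.Command("go", "build", "./...").CombinedOutput()
            if err == nil {
                fmt.Printf("compiles after %d attempt(s)\n", attempt)
                return
            }
            // This is what closes the loop: the compiler output becomes the next prompt.
            prompt = fmt.Sprintf("Fix these compiler errors:\n%s\nCurrent code:\n%s", out, code)
        }
        fmt.Println("gave up after 5 attempts")
    }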

Second, LLMs heavily reduce the ecosystem advantage. Before LLMs, the availability of libraries for common use cases (to save myself time) was one of the main deciding factors in choosing a language.

Now? The LLM will happily implement any utility or API client library I want, given the API I describe. It may even be more thoroughly tested than the average open-source library.



Have you tried having an LLM write significant amounts of, say, F#? It's a real language with lots of documentation, definitely in the pre-training corpus, but I've never had much luck with even mid-sized problems in languages like it -- problems where today's models absolutely wipe the floor in JavaScript or Python.


Even best-in-class LLMs like GPT-5 or Sonnet 4.5 do noticeably worse in languages like C#, which is pretty mainstream but not on the level of TypeScript and Python -- to the degree that I don't think they can reliably output production-level code without a crazy amount of oversight.

And this is for generic backend stuff, like a CRUD server with a REST API; the same thing with an Express/Node backend works with no trouble.


I’m doing Zig and it’s fine, though not in significant amounts yet. I just had to have it synthesize the latest release changelog (0.15) into a short summary.

To be clear, I mean specifically using Claude Code, with sample context preloaded and the ability to call the compiler and iterate on its output.

I’m sure one-shot results (like asking Claude via the web UI and verifying after a single iteration) will be much worse. But if it has the compiler available and writes tests, it shouldn’t be an issue. It might cause 2-3 more back-and-forths with the compiler, but that’s an extra couple of minutes, tops.

In general, even when working with Go (which is what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the codebase, so it follows the existing (somewhat peculiar) code style and patterns and understands what’s where.
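(As a rough illustration of what I mean by preloading context: something along these lines dumps a few packages into one file I can point the session at. The package paths are made up; substitute whatever matters in your repo.)

    // Concatenate the Go sources of a few packages into context.txt so the
    // session starts with the existing style/patterns in front of it.
    package main

    import (
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var b strings.Builder
        roots := []string{"internal/api", "internal/store"} // assumed packages
        for _, root := range roots {
            err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
                if err != nil || d.IsDir() || !strings.HasSuffix(path, ".go") {
                    return err
                }
                src, readErr := os.ReadFile(path)
                if readErr != nil {
                    return readErr
                }
                fmt.Fprintf(&b, "=== %s ===\n%s\n", path, src)
                return nil
            })
            if err != nil {
                panic(err)
            }
        }
        if err := os.WriteFile("context.txt", []byte(b.String()), 0o644); err != nil {
            panic(err)
        }
        fmt.Printf("wrote context.txt (%d bytes)\n", b.Len())
    }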


Humans can barely untangle F# code...



