
Thank you, bumholes

Aside from what some other users have said, logging is fundamentally an observable side-effect of your library. It’s now a behavior that can become load-bearing — and putting it in library code forces this exposed behavior on the consumer.

As a developer, this gets frustrating. I want to present a clean and coherent output to my callers, and poorly-authored libraries ruin that — especially if they offer no mechanism to disable it.

It’s also just _sloppy_ in many cases. Well-designed library code often shouldn’t even need to log in the first place, because it should clearly articulate each unit’s side-effects, and the composition of those should be easy to understand. Sadly, “design” has become a lost art in modern software development.


In the Java world, logging in libraries is effectively a no-op unless explicitly enabled by the user, so the side effects are negligible. And it actually does make sense, e.g. when a library offers a convenient abstraction over I/O, parsing, or other stuff with initially unknown failure modes, where some logs may help clarify the picture beyond handling an exception or error code. The way logging is done there is part of the art of good software design, which has never been lost (it may just not have reached other platforms). So I disagree with you and some other commenters: a strict prohibition is dogmatic, and good design is never dogmatic.

You definitely are not alone in feeling this way; it’s happening everywhere now, and it’s driving me nuts too.

I have the same complaint at work, where coworkers are using it for writing pull request descriptions, and it pumps out slop buzzwords like “streamlined the documentation”. Like, you didn’t streamline anything, you ran prettier on a markdown file!

On top of this type of description being useless marketing jargon, the writing style risks training future LLMs to devolve further into it. More frighteningly, how long until the excess of LLM-generated slop text like this starts training the future humans reading it? People tend to model how they speak on what they hear and read, and it’s everywhere now.


> how long until the excess amount of LLM-generated slop text like this starts training future humans reading it

Not long. Add to that deference to technology and the innate preference for new stuff that previous generations dislike: sloppy LLM style will look more authoritative than the well-thought-out human style of parents and older siblings.


ok


I don’t really think one needs to define intelligence to acknowledge that an inability to distinguish fact from fiction, or even just a basic awareness of when it’s uncertain, telling the truth, or lying, is a glaring flaw in any claim of intelligence. Real intelligence doesn’t have what is effectively a stroke upon hearing a username (token training errors); that’s peeling back the curtain on the underlying implementation and seeing its flaws.

If we measure intelligence purely by results, then my calculator is intelligent because it can do math better than me; but that’s what it’s programmed/wired to do. A text predictor is intelligent at predicting text, but that doesn’t mean it’s general intelligence. It lacks any real comprehension of the model or world around it. It just knows words, and


I hit send too early; I meant to say that it just knows words, and that’s effectively it.

It’s cool technology, but the burden of proof for real intelligence shouldn’t be “can it answer questions it has great swaths of information on”, because that is exactly what it was designed to do.

It should be focused on whether it can truly synthesize information and know its limitations - as any programmer using Claude, Copilot, Gemini, etc. will tell you, it fabricates false information/APIs/etc. on a regular basis and has no fundamental awareness that it even did so.

Or alternatively, ask these models leading questions that have no basis in reality and watch what they come up with. It’s become a fun meme in some circles to ask models for definitions of nonsensical, made-up phrases and see what crap they come up with (again, without even knowing that’s what they’re doing).


Why “perma goodbye”?

Go has a similar function declaration, and it supports anonymous functions/lambdas.

E.g. in Go, an anonymous func like this could be defined as

    foo := func(x int, _ int) int { … }

So I’d imagine in Zig it should be feasible to do something like

    var foo = fn(x: i32, i32) i32 { … }

unless I’m missing something?
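
For reference, here's a minimal runnable version of that Go form; the name foo, the doubling body, and the printed value are placeholders purely for illustration:

    package main

    import "fmt"

    func main() {
        // Anonymous function bound to a variable; the second
        // parameter is ignored via the blank identifier.
        foo := func(x int, _ int) int { return x * 2 }
        fmt.Println(foo(21, 0)) // prints 42
    }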


Anonymous functions aren't the same as lambda functions. People in the Go community keep asking for lambdas (a shorthand with no func/fn keyword and no explicit return) and never get them; one of the reasons given is that the arrow syntax would break things.

See

https://github.com/golang/go/issues/59122

https://github.com/golang/go/issues/21498

    res := is.Map(func(i int)int{return i+1}).Filter(func(i int) bool { return i % 2 == 0 }).
             Reduce(func(a, b int) int { return a + b })

vs

    res := is.Map((i) => i+1).Filter((i)=>i % 2 == 0).Reduce((a,b)=>a+b)
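
The same verbosity shows up with real standard-library APIs, not just the hypothetical is value above. A small runnable sketch using slices.SortFunc from Go 1.21+ (the slice contents are arbitrary):

    package main

    import (
        "fmt"
        "slices"
    )

    func main() {
        xs := []int{3, 1, 2}
        // Today's syntax: parameter types, return type, braces, and an
        // explicit return, all for a one-expression comparator.
        slices.SortFunc(xs, func(a, b int) int { return a - b })
        fmt.Println(xs) // [1 2 3]
    }

With an arrow-style lambda the comparator would collapse to something like (a, b) => a-b, which is exactly the kind of shorthand the linked proposals ask for.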


Tech interviews in general need to be overhauled, and if they were it’d be less likely that AI would be as helpful in the process to begin with (at least for LLMs in their current state).

Current LLMs can do some basic coding and stitch it together to form cool programs, but they struggle at good design work that scales. Design-focused interviews paired with a focus on soft skills are a better measure of how a dev will do in the workplace in general. Yet most interviews are just “if you can solve this esoteric problem we never use at work, you’re hired”. I’d take a bad solution with a good design over a good solution with a bad design any day, because the former is always easier to refactor and iterate on.

AI is not really good at that yet; it’s trained on a lot of public data that skews towards worse designs. It’s also not all that great at behaving like a human during code reviews; it agrees too much, is overly verbose, it hallucinates, etc.


This is a good read on how to commit concepts to long-term memory and build skills.

I think there is a typo in the article though; there is a point that says:

> work out the problem by computing 6 × 5 = 5 + 5 + 5 + 5 + 5 + 5 = 30 (or 6 + 6 + 6 + 6 + 6 = 21)

The second parenthetical statement should be 30, unless I’m missing something?
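
Spelling out just the quoted arithmetic:

    6 × 5 = 6 + 6 + 6 + 6 + 6 = 30

so the 21 doesn’t match.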


Whoops, yeah, that's a typo. I think I originally had 7x3 as the example. Thanks for catching it.


The article never talked about bot-generated products, only bot-generated comments and upvotes. How does manual review address this, exactly?


What a strange and subjective take… I am genuinely struggling to understand the author’s viewpoint here, and why this post needed to exist at all.

The author proposes that braces are somehow harder to match visually, and then says to just use a different delimiter, “end”. At which point, when you read nested code, you see lots of “end” statements, which are visually no different from “}” closing braces, so what problem was solved, exactly?

I’m not saying it’s bad, it just doesn’t solve any practical problem, and it doesn’t improve anything objectively. This is just like debating why a builtin type is called “int” instead of “Int”. Most language nerds I know tend to discuss more important details that could theoretically improve a language, and this is just stating a preference for Ruby’s “end” over C-style braces.

I feel like this needs to be reposted on April 1st


I chuckled when it said that curly brackets are bad because they don't look nice when rotated by 90 degrees.

> This is just like debating why a builtin type is called “int” instead of “Int”.

It's even a bit worse, because any text editor knows out of the box how to balance brackets (e.g. Emacs in text mode) but doesn't know about the "end" syntax.


This hasn’t been my experience at all in the slightest.

I’ve been programming since I was in elementary school, and current Copilot, OpenAI, and even Gemini models generate code at a very, very junior level. They might solve a practical problem, but they can’t write a decent abstraction to save their life unless you repeatedly prompt them to. They also massively struggle to retain coherence once there are more moving parts; if you have several things being mutated, they often just lose track and write code that crashes/panics/generates UB/etc.

When you are lucky and get something that vaguely works, the test cases it writes are of negative value. They are either trivial cases that skip the edge cases, outright incorrect and failing, or, worse yet, look correct and pass but are semantically wrong. LLMs have been hilariously bad at this: they generate passing cases for the code as written, not for the semantics the code was meant to have. Writing the tests by hand would catch this quickly, but a junior dev using these tools can easily miss it.

Then there is Rust; most models don’t do Rust well. In isolation they are kind of okay, but they frequently generate borrowing issues that fail to compile.


But I guess, and I realize this is dangerous to say, the tooling around the prompts and around the results is key to getting the best output. Just prompts without guards is not how you want to do it.

