Hacker News | 9rx's comments

What's the use-case for block-level defer?

In a tight loop you'd want your cleanup to happen after the fact. And in, say, an IO loop, you're going to want concurrency anyway, which necessarily introduces new function scope.


> In a tight loop you'd want your cleanup to happen after the fact.

Why? Doing 10 000 iterations where each iteration allocates and operates on a resource, then later going through and freeing those 10 000 resources, is not better than doing 10 000 iterations where each iteration allocates a resource, operates on it, and frees it. You just waste more resources.

> And in, say, an IO loop, you're going to want concurrency anyway

This is not necessarily true; not everything is so performance sensitive that you want to add the significant complexity of doing it async. Often, a simple loop where each iteration opens a file, reads stuff from it, and closes it, is more than good enough.

Say you have a folder with a bunch of data files you need to work on. Maybe the work you do per file is significant and easily parallelizable; you would probably want to iterate through the files one by one and process each file with all your cores. There are even situations where the output of working on one file becomes part of the input for work on the next file.

Anyway, I will concede that all of this is sort of an edge case which doesn't come up that often. But why should the obvious way be the wrong way? Block-scoped defer is the most obvious solution since variable lifetimes are naturally block-scoped; what's the argument for why it ought to be different?


Let's say you're opening files upon each loop iteration. If you're not careful you'll run out of open file descriptors before the loop finishes.

It doesn't just have to be files, FWIW. I once worked in a Go project which used SDL through CGO for drawing. "Widgets" were basically functions which would allocate an SDL surface, draw to it using Cairo, and return it to Go code. That SDL surface would be wrapped in a Go wrapper with a Destroy method which would call SDL_DestroySurface.

And to draw a surface to the screen, you need to create an SDL texture from it. If that's all you want to do, you can then destroy the SDL surface.

So you could imagine code like this:

    strings := []string{"Lorem", "ipsum", "dolor", "sit", "amet"}
    
    stringTextures := []SDLTexture{}
    for _, s := range strings {
        surface := RenderTextToSurface(s)
        defer surface.Destroy()
        stringTextures = append(stringTextures, surface.CreateTexture())
    }
Oops, you're now using way more memory than you need!
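For what it's worth, the usual workaround today is to wrap the loop body in an anonymous function so each defer runs at the end of its own iteration. A self-contained sketch, with a hypothetical counted resource standing in for the SDL surface:

```go
package main

import "fmt"

var live int // count of currently allocated hypothetical resources

type resource struct{}

func acquire() *resource     { live++; return &resource{} }
func (r *resource) destroy() { live-- }

// peakLive allocates one resource per iteration, wrapping the loop
// body in an anonymous function so the defer fires at the end of
// each iteration rather than at function exit. It reports the peak
// number of resources alive at once and the count left afterwards.
func peakLive(n int) (peak, after int) {
	for i := 0; i < n; i++ {
		func() {
			r := acquire()
			defer r.destroy() // per-iteration cleanup
			if live > peak {
				peak = live
			}
			_ = r // stand-in for CreateTexture etc.
		}()
	}
	return peak, live
}

func main() {
	fmt.Println(peakLive(5)) // 1 0: never more than one alive
}
```

With a function-scoped defer and no wrapper, `peak` would instead equal `n`, which is the memory blow-up described above.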

Why would you allocate and destroy memory on each iteration when you can reuse it to much greater effect? Bad API design aside, of course, but a language isn't there to paper over bad design decisions. A good language makes bad design decisions painful.

The surfaces are all of different size, so the code would have to be more complex, resizing some underlying buffer on demand. You'd have to split up the text rendering into an API to measure the text and an API to render the text, so that you could resize the buffer. So you'd introduce quite a lot of extra complexity.

And what would be the benefit? You save up to one malloc and free per string you want to render, but text rendering is so demanding it completely drowns out the cost of one allocation.


Why does the buffer need to be resized? Your malloc version allocates a fixed amount of memory on each iteration. You can allocate the same amount of memory ahead of time.

If you were dynamically changing the malloc allocation size on each iteration then you have a case for a growable buffer to do the same, but in that case you would already have all the complexity of which you speak as required to support a dynamically-sized malloc.


The example allocates an SDL_Surface large enough to fit the text string each iteration.

Granted, you could do a pre-pass to find the largest string and allocate enough memory for that once, then use that buffer throughout the loop.

But again, what do you gain from that complexity?


> The example allocates an SDL_Surface large enough to fit the text string each iteration.

Impossible without knowing how much to allocate, which you indicate would require adding a bunch of complexity. However, I am willing to chalk that up to being a typo. Given that we are now calculating how much to allocate on each iteration, where is the meaningful complexity? I see almost no difference between:

    while (next()) {
        size_t size = measure_text(t);
        void *p = malloc(size);
        draw_text(p, t);
        free(p);
    }
and

    void *p = NULL;
    while (next()) {
        size_t size = measure_text(t);
        p = galloc(p, size);
        draw_text(p, t);
    }
    free(p);
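The same reuse pattern translates directly to Go, where the growable buffer is just a slice whose backing array is kept across iterations. `grow` below is a hypothetical stand-in for the `galloc` call above:

```go
package main

import "fmt"

// grow returns a buffer of length size, reusing buf's backing array
// whenever it is already big enough, and reallocating only when the
// requested size exceeds the current capacity.
func grow(buf []byte, size int) []byte {
	if cap(buf) >= size {
		return buf[:size]
	}
	return make([]byte, size)
}

func main() {
	var buf []byte
	for _, size := range []int{16, 8, 32, 4} {
		buf = grow(buf, size)
		fmt.Println(len(buf), cap(buf)) // capacity only ever grows
	}
}
```

Only two allocations happen across the four iterations (for sizes 16 and 32); the other requests reuse the existing backing array.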

>> The example allocates an SDL_Surface large enough to fit the text string each iteration.

> Impossible without knowing how much to allocate

But we do know how much to allocate? The implementation of this example's RenderTextToSurface function would use SDL functions to measure the text, then allocate an SDL_Surface large enough, then draw to that surface.

> I see almost no difference between: (code example) and (code example)

What? Those two code examples aren't even in the same language as the code I showed.

The difference would be between the example I gave earlier:

    stringTextures := []SDLTexture{}
    for _, str := range strings {
        surface := RenderTextToSurface(str)
        defer surface.Destroy()
        stringTextures = append(stringTextures, surface.CreateTexture())
    }
and:

    surface := NewSDLSurface(0, 0)
    defer surface.Destroy()
    stringTextures := []SDLTexture{}
    for _, str := range strings {
        size := MeasureText(str)
        if size.X > surface.X || size.Y > surface.Y {
            surface.Destroy()
            surface = NewSDLSurface(size.X, size.Y)
        }

        surface.Clear()
        RenderTextToSurface(surface, str)
        stringTextures = append(stringTextures, surface.CreateTextureFromRegion(0, 0, size.X, size.Y))
    }
Remember, I'm talking about the API to a Go wrapper around SDL. How the C code would've looked if you wrote it in C is pretty much irrelevant.

I have to ask again though, since you ignored me the first time: what do you gain? Text rendering is really really slow compared to memory allocation.


> Remember, I'm talking about the API to a Go wrapper around SDL.

We were talking about using malloc/free vs. a resizable buffer. Happy to progress the discussion towards a Go API, however. That, obviously, is going to look something more like this:

    renderer := SDLRenderer()
    defer renderer.Destroy()
    for _, str := range strings {
        surface := renderer.RenderTextToSurface(str)
        textures = append(textures, renderer.CreateTextureFromSurface(surface))
    }
I have no idea why you think it would look like that monstrosity you came up with.

> No. We were talking about using malloc/free vs. a resizable buffer.

No. This is a conversation about Go. My example[1], that you responded to, was an example taken from a real-world project I've worked on which uses Go wrappers around SDL functions to render text. Nowhere did I mention malloc or free, you brought those up.

The code you gave this time is literally my first example (again, [1]), which allocates a new surface every time, except that you forgot to destroy the surface. Good job.

Can this conversation be over now?

[1] https://news.ycombinator.com/item?id=47088409


I invite you to read the code again. You missed a few things. Notably it uses a shared memory buffer, as discussed, and does free it upon defer being executed. It is essentially equivalent to the second C snippet above, while your original example is essentially equivalent to the first C snippet.

Wait, so your wrapper around SDL_Renderer now also inexplicably contains a scratch buffer? I guess that explains why you put RenderTextToSurface on your SDL_Renderer wrapper, but ... that's some really weird API design. Why does the SDL_Renderer wrapper know how to use SDL_TTF or PangoCairo to draw text to a surface? Why does SDL_Renderer then own the resulting surface?

To anyone used to SDL, your proposed API is extremely surprising.

It would've made your point clearer if you'd explained this coupling between SDL_Renderer and text rendering in your original post.

But yes, I concede that if there was any reason to do so, putting a scratch surface into your SDL_Renderer that you can auto-resize and render text to would be a solution that makes for slightly nicer API design. Your SDL_Renderer now needs to be passed around as a parameter to stuff which only ought to need to concern itself with CPU rendering, and you now need to deal with mutexes if you have multiple goroutines rendering text, but those would've been alright trade-offs -- again, if there was a reason to do so. But there's not; the allocation is fast and the text rendering is slow.


You're right to call out that the SDLRenderer name was a poor choice. SDL is an implementation detail that should be completely hidden from the user of the API. That it may or may not use SDL under the hood is irrelevant to them. If the user wanted to use SDL, they would do so directly. The whole point of this kind of abstraction, of course, is to decouple from the dependence on something like SDL. Point taken.

Aside from my failure in dealing with the hardest problem in computer science, how would you improve the intent of the API? It is clearly improved over the original version, but we would do well to iterate towards something even better.


I think the most obvious improvement would be: just make it a free function which returns a surface; text rendering is slow and allocation is fast.

That is a good point. If text rendering is slow, why are you not doing it in parallel? This is what 9rx called out earlier.

Some hypothetical example numbers: if software-rendering text takes 0.1 milliseconds, and I have a handful of text strings to render, I may not care that rendering the strings takes a millisecond or two.

But that 0.1 millisecond to render a string is an eternity compared to the time it takes to allocate some memory, which might be on the order of single digit microseconds. Saving a microsecond from a process which takes 0.1 milliseconds isn't noticeable.


You might not care today, but the next guy tasked to render many millions of strings tomorrow does care. If he has to build yet another API that ultimately does the same thing and is almost exactly the same, something has gone wrong. A good API is accommodating to users of all kinds.

I think I've been successfully nerd sniped.

It might be preferable to create a font atlas and just allocate printable ASCII characters as a spritesheet (a single SDL_Texture* reference and an array of rects.) Rather than allocating a texture for each string, you just iterate the string and blit the characters, no new allocations necessary.

If you need something more complex, with kerning and the like, the current version of SDL_TTF can create font atlases for various backends.


Completely depends on context. If you're rendering dynamically changing text, you should do as you say. If you have some completely static text, there's really nothing wrong with doing the text rendering once using PangoCairo and then re-using that texture. Doing it with PangoCairo also lets you do other fancy things like drop shadows easier.

Files are IO, which means a lot of waiting. For what reason wouldn't you want to open them concurrently?

Opening a file is fairly fast (at least if you're on Linux; Windows not so much). Synchronous code is simpler than concurrent code. If processing files sequentially is fast enough, for what reason would you want to open them concurrently?

For concurrent processing you'd probably do something like splitting the file names into several batches and process those batches sequentially in each goroutine, so it's very much possible that you'd have an exact same loop for the concurrent scenario.

P.S. If you have enough files you don't want to try to open them all at once — Go will start creating more and more threads to handle the "blocked" syscalls (open(2) in this case), and you can run out of threads too (Go's default limit is 10,000)


You'd probably have to be doing something pretty unusual to not use a worker queue. Your "P.S." point being a perfect case in point as to why.

If you have a legitimate reason for doing something unusual, it is fine to have to use the tools unusually. It serves as a useful reminder that you are purposefully doing something unusual rather than simply making a bad design choice. A good language makes bad design decisions painful.


You have now transformed the easy problem of "iterate through some files" into the much more complex problem of either finding a work queue library or writing your own work queue library; and you're baking in the assumption that the only reasonable way to use that work queue is to make each work item exactly one file.

What you propose is not a bad solution, but don't come here and pretend it's the only reasonable solution for almost all situations. It's not. Sometimes, you want each work item to be a list of files, if processing one file is fast enough for synchronisation overhead to be significant. Often, you don't have to care so much about the wall clock time your loop takes and it's fast enough to just do sequentially. Sometimes, you're implementing a non-important background task where you intentionally want to only bother one core. None of these are super unusual situations.

It is telling that you keep insisting that any solution that's not a one-file-per-work-item work queue is super strange and should be punished by the language's design, when you haven't even responded to my core argument that: sometimes sequential is fast enough.


> It is telling that you keep insisting

Keep insisting? What do you mean by that?

> when you haven't even responded to my core argument that: sometimes sequential is fast enough.

That stands to reason. I wasn't responding to you. The above comment was in reply to nasretdinov.


Your comment was in reply to nasretdinov, but its fundamental logic ignores what I've been telling you this whole time. You're pretending that the only solution to iterating through files is a work queue and that any solution that does a synchronous open/close for each iteration is fundamentally bad. I have told you why it isn't: you don't always need the performance.

Using a "work queue", i.e. a channel, would still have a for loop like

  for filename := range workQueue {
      fp, err := os.Open(filename)
      if err != nil { ... }
      defer fp.Close()
      // do work
  }

Which would have the same exact problem :)

I don't see the problem.

    for _, filename := range files {
        queue <- func() {
            f, _ := os.Open(filename)
            defer f.Close()
        }
    }
or more realistically,

    var group errgroup.Group
    group.SetLimit(10)
    for _, filename := range files {
        group.Go(func() error {
            f, err := os.Open(filename)
            if err != nil {
                return fmt.Errorf("failed to open file %s: %w", filename, err)
            }
            defer f.Close()  
            // ...
            return nil          
        })
    }
    if err := group.Wait(); err != nil {
        return fmt.Errorf("failed to process files: %w", err)
    }
Perhaps you can elaborate?

I did read your code, but it is not clear where the worker queue is. It looks like it ranges over (presumably) a channel of filenames, which is not meaningfully different than ranging over a slice of filenames. That is the original, non-concurrent solution, more or less.


I think they imagine a solution like this:

    // Spawn workers
    for range 10 {
        go func() {
            for path := range workQueue {
                fp, err := os.Open(path)
                if err != nil { ... }
                defer fp.Close()
                // do work
            }
        }()
    }

    // Iterate files and give work to workers
    for _, path := range paths {
        workQueue <- path
    }

Maybe, but why would one introduce coupling between the worker queue and the work being done? That is a poor design.

Now we know why it was painful. What is interesting here is that the pain wasn't noticed as a signal that the design was off. I wonder why?

We should dive into that topic. I suspect at the heart of it lies why there is so much general dislike for Go as a language, with it being far less forgiving to poor choices than a lot of other popular languages.


I think your issue is that you're an architecture astronaut. This is not a compliment. It's okay for things to just do the thing they're meant to do and not be super duper generic and extensible.

It is perfectly okay inside of a package. Once you introduce exports, as seen in another thread, then there is good reason to think more carefully about how users are going to use it. Pulling the rug out from underneath them later, when you discover your original API was ill-conceived, is not good citizenry.

But one does still have to be mindful if they want to write software productively. Using a "super duper generic and extensible" solution means that things like error propagation are already solved for you. Your code, on the other hand, is going to quickly become a mess once you start adding all that extra machinery. It didn't go unnoticed that you conveniently left that out.

Maybe that no longer matters with LLMs, when you don't even have to look at the code and producing it is effectively free, but LLMs these days also understand how defer works, so this whole thing becomes moot.


Current generation "AI" has already largely solved cheaper, faster, and more reliable. But it hasn't figured out how to curb demand. So far, the more software we build, the more people want even more software. Much like is told in the lump of labor fallacy, it appears that there is no end to finding productive uses for software. And certainly that has been the "common wisdom" for at least the last couple of decades; that whole "software is eating the world" thing.

What changed in the last month that has you thinking that a demand wall is a real possibility?


I agree the pie can grow, but I don’t know that the profession survives in its current form. Whether the next form is personally profitable for those of us who’ve sunk a decade+ into the SWE skillset remains to be seen.

I selfishly hope it is, but imo it's simply too early to tell.


China also likes to claim it is a democracy because it holds elections.

It is fair to say that the USA is still a democracy, but not because of elections. Elections have little to do with democracy. In fact, if the majority of the population hold the view that elections equate to democracy, you don't have a democracy.


I wouldn't say that elections have little to do with democracy; they are necessary. Though I agree that merely having an election isn't sufficient. A lot of modern dictatorships have "elections". And that's not even to get into how representation works.

> I wouldn't say that elections have little to do with democracy, they are necessary.

Elections are a useful tool, but not strictly necessary. Obviously in the small scale the people in a democracy can simply communicate directly. As things scale up you do need to, for all practical purposes, introduce a messenger[1] to carry what the people at the local level have decided upon, to compile with all the other local levels. But that does not require elections either, only trust that the message will be delivered accurately and in good faith. Elections are a really good way to select who you trust, which is why it is the norm in a representative democracy, but if in some hypothetical world someone naturally became trusted by the people and became the messenger out of simple happenstance, that would be just as democratic. The only significant feature of a democracy is that the people hold control[2].

[1] Now that you no longer need to travel thousands of miles to talk to another person it is questionable how necessary that remains. However, we've never successfully developed a trust model without face-to-face interaction. As such, we willingly retain a trusted messenger to offer the face-to-face presence.

[2] Which is why the USA is oft said to not be a democracy. Few people in the USA actually get involved in democracy, which then makes it look like a small group hold control over everyone else. However, there is nothing to suggest that anyone is prevented from getting involved if they want to. Choosing to not participate is quite different from not being able to participate. And thus it is rightfully still considered a democracy.


>China also likes to claim it is a democracy because it holds elections.

Plenty of places called China have or have had elections. Taiwan, Hong Kong, etc.

Oh, you mean the mainland? You can vote for The Party, or vote for The Party. I see nothing undemocratic about that!


> Rust has straightforward support for every part of OOP other than implementation inheritance

Except the only thing that makes OOP OOP: Message passing.

Granted, Swift only just barely supports it, and only for the sake of interop with Objective-C. Still, Swift has better OO support because of it. Rust doesn't even try.

Not that OOP is much of a goal. There is likely good reason why Smalltalk, Objective-C, and Ruby are really the only OOP languages in existence (some esoteric language nobody has ever heard of notwithstanding).


I’m pretty sure when the Ladybird team said “Swift has strictly better OOP support”, they were not referring to ObjC style message passing, so it’s not even relevant.

I'm pretty sure your guessing is silly. I assume you are trying to be here in good faith, so make your case. Since it is not support for message passing, what else makes Swift have "strictly better OOP support"?

That's the thing man, there isn't anything. It was an odd thing in the tweet to say that it has better OOP.

(source: iOS dev from jailbreak days, so like 8 years before Swift, till 2019. He did not mean dynamic dispatch, and Swift has dynamic dispatch by way of "you can annotate a Swift method with @objc and we'll emit it as an ObjC method instead of Swift"; not Smalltalk-ish, like, at all. If you're the poster who originally said "because of dynamic dispatch", I understand why you're frustrated, but I have 0 idea why you think dynamic dispatch in Swift would matter, much less why you'd say it makes Swift much better at OOP than Rust. It's impolite to say "utterly baffling engineering decision" in public, so there's subtext in the conversation. Namely, that both claims made 0 sense if you had practical familiarity with either)


> That's the thing man, there isn't anything.

But that's the thing, there is: It supports message passing.

Like we already discussed long ago, it supports it poorly, and only for the sake of compatibility with Objective-C, but still that makes its OOP support better than Rust's. Rust has no OOP support at all. It is not an OOP language and never would want to be. OOP goes completely against its core principles (statically-typed, performance-minded).

Realistically, nobody would consider Swift an OOP language either. However, on the spectrum, it is unquestionably closer to being an OOP language. It at least gets an honourable mention in the short list of OOP languages. It is undeniable that Swift has "better" OOP support; not to be confused with good OOP support.

> He did not mean dynamic dispatch

Of course not. Dynamic dispatch is for function calling. OOP doesn't use dynamic dispatch. That's literally why we call it object-oriented rather than function-oriented (or functional, as the kids say). This is also why Objective-C, quite brilliantly, uses [foo bar] syntax (hint: it kind of looks like an ASCII-art envelope for a reason): To make it clear that conceptually you are not calling a function.

> I understand why you're frustrated

I don't. Fill us in.


Ok, I've now read through the rest of this thread and I think I understand where you're coming from, but I also think you're making my point for me.

You're using a definition of OOP where only Smalltalk-style message passing counts. By that definition, you're right: Swift is closer, because `@objc` exists. But by that definition, neither Swift nor Rust is an OOP language in any meaningful sense, and the delta between them is mass-of-the-electron tiny. Swift's message passing support is an annotation that is a compatibility shim for a 40-year-old runtime, not a design philosophy.

So when you say Swift has "better OO support"... sure, in the same way that my car has better submarine support than yours because I left the windows cracked and water could theoretically get in. Technically true! Not useful!

The Ladybird team are C++ developers evaluating languages for a browser engine. When C++ developers say "OOP" they mean classes, inheritance hierarchies, virtual methods. The DOM is a giant inheritance tree. That's the context. You can tell them they're using the word wrong, but that doesn't change what they meant, and what they meant is the only thing that matters for understanding whether the claim made sense.

And under their definition (which is also the definition used by essentially every working programmer and every university curriculum for the last 30 years), Swift and Rust are actually pretty close, which was my original point. Swift has `class` with inheritance, Rust has traits and composition. Neither is Smalltalk. The claim that Swift is "strictly better" was weird no matter which definition you pick.

(Also, and I say this respectfully: characterizing C++/Java/Rust as "functional programming" because they encapsulate data with functions is... a take. I get the logic chain you're following but that is not a definition that will help anyone communicate with anyone else, which is presumably the point of definitions.)


> When C++ developers say "OOP" they mean classes, inheritance hierarchies, virtual methods.

Okay, but for that to be true in this case then you must explain how Swift has "better OOP support". If there is no rational explanation for how Swift has "better OOP support" by the metrics you are imagining, as you, I, and everyone else have alluded to earlier, then clearly that isn't what they meant.

> I get the logic chain you're following but that is not a definition that will help anyone communicate with anyone else

Won't help communicate with anyone else meaning that you are the only one capable of understanding the logic chain? I'm sure you are a talented guy, but the world isn't exactly lacking in talented people. I'm sorry to say, but most everyone on HN will have absolutely no trouble understanding this.


> Okay, but for that to be true in this case then you must explain how Swift has "better OOP support" by that token.

I did. Swift has `class` types with implementation inheritance. Rust does not. If you're porting a C++ codebase with deep class hierarchies (like, say, a browser DOM), Swift lets you transliterate those hierarchies directly. Rust makes you rethink them into composition and traits. That's a real difference that matters to a team mid-migration, and it's an extremely rational explanation for why a C++ dev would say Swift has "strictly better OOP support."

You don't have to agree it's a large difference (I don't, which was my original point), but "there is no rational explanation" just isn't true. There's a very obvious one, it's just boring.

> If there is no rational explanation, as you, me, and everyone else has alluded to earlier, then clearly that isn't what they meant.

My position was never "there is no rational explanation." My position was "the difference is small enough that 'strictly better' was a weird thing to say." Those are different claims! You're kind of merging me into your argument when we don't actually agree.

> Won't help communicate with anyone else meaning that you are the only one capable of understanding the logic chain?

No, I meant that if you say "C++ is a functional programming language" to any working programmer, they will not understand you, because that is not what those words mean to them. It's not about intelligence, it's about shared vocabulary. You've built an internally-consistent taxonomy where functional = data + functions, OOP = data + message passing, and imperative = data and functions separate. I can follow it fine. But you've redefined three terms that already have widespread, different meanings, and then you're treating disagreement as confusion. That's the communication problem.


> You've built a internally-consistent taxonomy

I'm certainly not clever enough to have built it. Not to mention that the person who coined OOP is quite famous for having done so. I am not him, I can assure you. I have merely ingested it from what is out there in widespread circulation.

I can appreciate that you live in a different bubble and what is widespread there is not the same. It's a pretty big world out there. However, it doesn't really matter: if "C++ is a functional programming language" doesn't jibe with your understanding, you'll simply ask "What ever do you mean?", at which point "functional programming language" will be defined and a shared understanding will be reached.

This isn't the problem you are imagining.

> I did.

Right. Seems we encountered a communication barrier again. "That's the thing man, there isn't anything." in my world would read "That's the thing man, there are things and here they are: ..." However, this highlights again that it doesn't actually harm communication as further clarification follows and eventually everyone will reach a shared understanding. Communication isn't some kind of TV game show where you have to get the right answer on your first try. This is not a problem in any way, shape, or form.


> I'm certainly not clever enough to have built it.

Ha, don't sell yourself short, you're doing a great job defending it.

> However, it doesn't really matter: if "C++ is a functional programming language" doesn't jibe with your understanding, you'll simply ask "What ever do you mean?"

Okay, genuinely, let's try this exercise. You say to me "C++ is a functional programming language." I ask "What ever do you mean?" You say "data is grouped with functions." I say "...that's also true of Python, JavaScript, Kotlin, Scala, Dart, TypeScript, and basically every language designed after 1990. What term do you use for Haskell?" And now we're in another 20-message thread defining terms from scratch instead of talking about the actual thing.

Like, you've got a taxonomy where imperative/functional/OOP is a clean trichotomy based on how data relates to code. That's elegant! But it also means "functional programming" contains both Haskell and Java, which in practice need to be distinguished from each other far more often than they need to be grouped together. The Kay-pure definitions give you clean categories at the cost of useful ones.

*Obj-C doesn't even pass muster of the Kay-pure definition, which renders the whole conversation moot.*

> "That's the thing man, there isn't anything." in my world would read "That's the thing man, there are things and here they are: ..."

Okay, fair hit. :) What I meant was: there's nothing that would make a C++ team say "strictly better." Swift has classes with inheritance, sure. But "strictly better" implies Rust can't even get close, and it can; you just model things differently. The Ladybird team discovered this themselves, which is... kind of the whole story here? They said "strictly better OOP support," tried it, and have now removed Swift. The claim didn't survive contact with their own codebase. That was the entire point of my original comment sitting at -3 (now at +2).

> Communication isn't some kind of TV game show where you have to get the right answer on your first try.

No, but Hacker News comments at -3 do get grayed out and collapsed, so in practice it kind of is, unfortunately.


> What term do you use for Haskell?

In the context of the dimension we have been talking about, it is also functional. There is no difference between Haskell, Python, Java, etc. in that particular dimension. All of those languages you list are quite different in other dimensions, of course. Are you under the impression that programming languages are one dimensional? Unfortunately, that is not the case.

> And now we're in another 20-message thread defining terms from scratch instead of talking about the actual thing.

Especially when we find out that what we really wanted to talk about was type systems. Thinking of programming languages as being one dimensional is a fool's errand.

> But it also means "functional programming" contains both Haskell and Java, which in practice need to be distinguished from each other far more often than they need to be grouped together.

Right, there may be a need to separate them, but sensibly you would separate them on the dimension that is relevant to the separation intent, not some other arbitrary quality. For example, perhaps your interest is in separating mutability and immutability. Therefore, something like "Haskell is an immutable-by-default programming language" would be an appropriate statement in that desired context. "Haskell is a statically-typed programming language", not so much.

> No, but Hacker News comments at -3 do get grayed out and collapsed, so in practice it kind of is, unfortunately.

I'll still read your comments if they turn grey. I don't care about what color they are. This isn't a problem.


You just need to define a trait, then you can use dynamic dispatch.

You can, but then you don't get any of what OOP actually offers. Message passing isn't the same thing as dynamic dispatch. OOP is a very different paradigm.

I think you are both unknowingly talking past each other: my understanding is that Smalltalk-style "object-oriented programming" ("everything is a message!") is quite distinct from C++/C#/Java/Rust "object-oriented programming" ("my structs have methods!")

Right, the former is what OOP is. The latter, encapsulating data in "objects", is functional programming.

They are both OOP, just as a "football" can be either spherical or oblong.

Functional programming is not "encapsulating data in 'objects'". Such a model would naturally feature methods like "void Die.roll()", "void list.Add(element)" which are definitely not functional programming (which emphasizes immutability, pure functions, composition, etc.)


> They are both OOP, just as a "football" can be either spherical or oblong.

They are both OOP like a football can be something that you use in a sport and something that flies you to the moon. Except it is not clear that the thing that flies you to the moon goes by "football".

> Such a model would naturally feature methods like "void Die.roll()", "void list.Add(element)" which are definitely not functional programming

Exactly, functional programming. `Die` and `list` encapsulate data and use higher-order functions to operate on it, which is what defines functional programming.

> which emphasizes [...] pure functions

Functions without encapsulation are imperative programming. If you introduce encapsulation then, yes, you have functional programming, just like your examples above.

Immutability is a disjoint property that can optionally apply to all of these programming paradigms. OOP, functional, and imperative programs can all be immutable, or not.

Composition and encapsulation go hand in hand, so that one is also functional programming, yes. And certainly composition is core to languages like C++/Java/Rust/etc. Naturally, them being functional languages.

To reiterate:

- Imperative programming: Data and functions are separate.

- Functional programming: Data is grouped with functions.

- Object-oriented programming: Data is grouped with message responders.


You're welcome to insist that your definitions are canonical ("the objective in football is to kick the ball into the net"), but they are at odds with the rest of the thread

Just as you are welcome to come up with other definitions. Although last time you tried they ended up being quite inconsistent, so one does need to be careful if you want them to be useful.

These definitions are not at odds with the discussion at hand at all. It was clearly stated that Swift has better OO support. Which is obviously true because it tries to be compatible with Objective-C, and therefore needs to have some level of OO support. That is something that Rust has no interest in doing, and rightfully so.

Your redefinition violates the claim, and therefore we can logically determine that it is not what was being used in the rest of the thread. That is, aside from the confused Rust guy that set this tangent in motion, but the remainder of this thread was merely clarifying to him what was originally intended. "He has a different definition for OOP" doesn't really help his understanding. That is what this thread branch has always been about, so it is not clear where you are trying to go with that.


Two years. Basically the period when people were stuck at home during COVID restrictions and were willing to spend extra money to make that experience more comfortable. Prices fell precipitously after restrictions were lifted and people had desires outside of the home again.

Same goes for humans. There are some wild exceptions, but most Go projects look like they were written by the same person.

sceptic - someone inclined to question or doubt what they sense optically.

skeptic - someone inclined to question or doubt what they sense magnetically.


Solid State Drive, usually, but when it comes to language anything goes.

A drive is a motor or other similar device, one that is driven or worked.

But there are no moving parts in an SSD.


Hence solid state.

> it's expensive because a _LOT_ of people want to live there.

I can't figure out how to make the math make sense even if I were to build a house in the middle of nowhere. Time and materials is the real killer.

Some day, when AI eliminates software development as a career, maybe you will be able to hire those people to build you houses for next to nothing, but right now I don't think it matters where or how many you build. The only way the average Joe is going to be able to afford one — at least until population decline fixes the problem naturally — is for someone else to take a huge loss on construction. And, well, who is going to line up to do that?


You can't afford a 175k house on a software engineer salary?

https://www.zillow.com/homedetails/3024-N-Vermont-Ave-Oklaho...


"Built in 1954" doesn't sound like new construction. Of course you can buy used houses at a fraction of the cost. That's nothing new. Maybe you missed it, but the discussion here is about building new to make homes more affordable.

It's not like the newly built homes are typically the most affordable. It causes a ripple effect as those that can afford it upgrade their housing.

https://research.upjohn.org/cgi/viewcontent.cgi?article=1314...


It is not like I'm homeless. I would be the one upgrading. Except I don't see how the numbers make sense.

You're right: The cost of new construction anchors the used market. Used housing is so expensive because new housing is even more expensive. If new houses were cheaper I, like many others, would already have built one and my current home would be up for grabs at a lower price than I'd expect in the current reality. However, that's repeating what was already said.


> building new to make homes more affordable

No need to build new, a plethora of affordable homes are available.


If one were freely able to move about the entire world, you might have a point. Especially given current events, I am not sure the country in which that house is located would take kindly to many of us moving there. In a more practical reality you're not going to find anything for anywhere close to that price even in the middle of nowhere, never mind somewhere where everyone wants to live. That is where earlier comments suggest building more housing would help.

Except it is not clear who can afford new construction either. It is even more expensive.


> That is where earlier comments suggest building more housing would help.

I explained earlier why I don't think it would. The places with a housing "shortage" are the places where everyone wants to live. Those places would have to build an impossible number of houses to affect demand.

You have people saying they can't afford housing and then, when you show them they can, they say, "not there..."


> Those places would have to build an impossible number of houses to affect demand.

If houses were able to be built freely then everyone would be able to build a house... Except, if you can't afford a used house, you most definitely cannot afford a new one. As before, time and materials are the real killer. The used housing market is merely a reflection of the cost to build new. Same reason used cars have risen so high in price in recent years: Because new cars have even higher prices.

> You have people saying they can't afford housing and then, when you show them they can, they say, "not there..."

The trouble is that you confuse affordability with sticker price. I technically could live in that house for six months before I have to return to my home country, but I could not legally work during that time. It is far more affordable to pay significantly higher prices in my country for a house and work all year long. The price of that house is low, but the cost is very high.

The places everyone wants to live are the places everyone wants to live because they are the most affordable places to live. If it were cheaper to move somewhere else, the people would have moved there already. Humans love to chase a good deal and carve out an advantage for themselves. However, a low price doesn't mean cheaper.


> The used housing market is merely a reflection of the cost to build new.

The majority of the cost of a home in places with shortages is the land, not the home.


Land is more or less worth the same whether it has a used house on it or if you build a new house on it. The trouble remains that the high cost of new construction anchors the cost of used houses.

Construction costs should really have been driven down by the march of technology, but that hasn't been the case; they're mostly stagnant, IIRC. But construction costs don't really explain the housing crisis well.

Same way we determine all the other inputs that go into the various unemployment rates? Ask.

"Marginally attached" and "discouraged workers" are already tracked and reported in U4, U5, and U6, so this is a strange hypothetical.

