I have memories of being endlessly frustrated trying to use an iMac because "close" would just hide the window.
We've gone full circle, and now everything in Windows likes to treat close as "minimize to the system tray", but back in the Win9x era, the expectation was that close meant "terminate the application".
With the exception of single-window utility programs, Mac windows have always truly closed, with the resources taken by the represented document being freed and all that. The windows weren't hidden. It's just that closing the window ≠ quitting the application… the program can remain in memory even if it has no documents loaded.
This serves a couple of purposes. First, documents open more quickly (particularly when the program is loaded from a slow spinning HDD, floppy, etc.), since the program doesn't need to be reloaded. Second, new-document creation flows and non-document functions can be accessed without having a document open, and without requiring the developer to create a bespoke "home screen" UI for that purpose, since the full menu bar is accessible as long as the app is in the foreground.
It's just a different set of expectations. The original versions of the Mac OS should almost be thought of as a multiple-document interface. Consider the web browser you're reading this in. You wouldn't expect closing a single tab or window to quit the whole application, would you? That's really what was going on in early Mac system software. Go to Infinite Mac and open MacPaint on a System 1.0 machine. It becomes very obvious: when you open the app, all of the Finder windows and desktop icons disappear.
This is only confusing in comparison to Windows though. If you used graphical DOS applications, it was the exact same experience. You open the app, and can interact with your documents, but closing a document doesn't necessarily close the app.
Even Photoshop on Windows of the day worked the same way. When you opened Photoshop, a parent window would open that was the app. Closing documents left the app open, unless you also closed the parent window.
The comparison to modern browsers is odd and IMO plays into GP's point. You can't get a modern browser to be a single process, so it is like your examples, and bad for it.
I think it taxes your brain in two different ways. The first: the mental model of the code gets updated the same way a PR from a co-worker updates it, but within a minute instead of every now and then. So you need to recalibrate your understanding and think through edge cases to determine whether the approach is what you want, whether it will support future changes, etc. And this happens after every prompt. The older/more experienced you are, the harder it is to NOT DO THIS thinking even when you intend to "vibe" something, since it is baked into your programming flow.
The other tax is the intermittent downtime when you are waiting for the LLM to finish. In the olden days you might have productive downtime waiting for code to compile or a test suite to run. While this was happening you might review your assumptions or check your changes or realize you forgot an edge case and start working on a patch immediately.
When an LLM is running, you can't do this. Your changes are being made on your behalf. You don't know how long the LLM will take, or how you might need to rephrase your prompt if it does the wrong thing, until you see and review the output. At best, you can context switch to some other problem, but then 30 seconds later you come back into "review mode" and have to think architecturally about the changes made, then switch to "prompt mode" to determine how to proceed.
When you are doing basic stuff, all of this is OK, but when you are trying to structure a large project or deal with multiple competing concerns, you quickly overwhelm your ability to think clearly, because you are thinking deeply about things while getting interrupted by completed LLM tasks or context switching.
My least favorite part is where it runs into some stupid problem and then tries to go around it.
Like when I'm asking it to run a bunch of tests against the UI using a browser tool, and something doesn't work. Then it goes and just writes code to update the database instead of using the UI element.
The other thing that makes me insane is when I tell it what to do, and it says, "But wait, let me do something else instead."
Really, this. You still need to check its work, but it is also pretty good at checking its work if told to look at specific things.
Make it stop. Tell it to review whether the code is cohesive. Tell it to review it for security issues. Tell it to review it for common problems you've seen in just your codebase.
Tell it to write a todo list for everything it finds, and tell it to fix it.
And only review the code once it's worked through a checklist of its own reviews.
We wouldn't waste time reviewing a first draft from another developer if they hadn't bothered to look it over and test it properly, so why would we do that for an AI agent that is far cheaper?
I wouldn't mind seeing a collection of objectives and the emitted output. My experience with LLM output is that it is very often over-engineered for no good reason, which is taxing on me to review.
I want to see this code written to some objective, to compare with what I would have written to the same objective. What I've seen so far are specs so detailed that very little is left to the discretion of the LLM.
What I want to see are cases where the LLM is asked for something and produces it, because I am curious to compare that to my proposed solution.
(This sounds like a great idea for a site that shows users the user-submitted task, and only after they submit their attempt does it show them the LLM's attempt. Someone please vibe code this up, TIA)
It increasingly is. E.g. if you use Claude Code, you'll notice it "likes" to produce todo lists that are rendered specially via the built-in TodoWrite tool.
But it's also a balance of avoiding being over-prescriptive in tools that need to support very different workflows, and it's easy to add more specific checks via plugins.
We're bound to see more packaged up workflows over time, but the tooling here is still in very early stages.
It absolutely can; I'm building things to do this for me. Claude Code has hooks that are supposed to trigger on certain states, but so far they don't trigger reliably enough to be useful. What we need are the primitives to build code-based development cycles where each step is executed by a model but the flow is dictated by code. Everything today relies too heavily on prompt engineering, and with long context windows instruction-following goes lax. I ask my model "What did you do wrong?" and it comes back clearly with "I didn't follow instructions" and then gives clear and detailed correct reasons about how it didn't follow instructions... but that's not supremely helpful, because it still doesn't follow instructions afterwards.
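Something like the sketch below is what I mean by "flow dictated by code": the loop, the gating, and the stopping conditions live in ordinary code, and the model only executes individual steps. The three step prompts and the `claude -p` (non-interactive print mode) invocation are placeholder assumptions; swap in whatever model call you actually use.

```go
// flow.go — a minimal sketch, assuming a hypothetical three-step workflow and
// a non-interactive model CLI (here `claude -p`, adjust to whatever you run).
// The point: the loop and its exit conditions are ordinary code; the model
// only executes individual steps.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runStep sends one prompt to the model and returns its raw output.
func runStep(prompt string) (string, error) {
	out, err := exec.Command("claude", "-p", prompt).Output()
	return string(out), err
}

func main() {
	// Hypothetical steps; in practice these would come from your own spec.
	steps := []string{
		"Implement the change described in TODO.md.",
		"Review the diff for violations of the project style guide; list them.",
		"Fix every issue you listed in the previous review.",
	}
	for i, prompt := range steps {
		out, err := runStep(prompt)
		if err != nil {
			fmt.Printf("step %d failed: %v\n", i+1, err)
			return
		}
		// A real harness would gate on tests/linters here rather than
		// trusting the model's own account of what it did.
		if strings.TrimSpace(out) == "" {
			fmt.Printf("step %d produced no output, stopping\n", i+1)
			return
		}
		fmt.Printf("step %d done (%d bytes of output)\n", i+1, len(out))
	}
}
```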
Tell it to grade its work in various categories and tell it that you'll only accept B+ or greater work. Focusing on how well it's doing is an important distinction.
Think of it as a "you should, and are allowed to, spend more time on this" command, because that is pretty much what it is. The model only gets so much "thinking time" to produce the initial output. By asking it to iterate you're giving it more time to think and iterate.
Oh, I'm not at all joking. It's better at evaluating quality than producing it blindly. Tell it to grade its work and it can tell you most of the stuff it did wrong. Tell it to grade its work again. Keep going through the cycle and you'll get significantly better code.
The thinking should probably include this kind of introspection (give me a million dollars for training and I'll write a paper) but if it doesn't you can just prompt it to.
It's worse than unrealistic. It's ludicrous. Any company running more than an hour of actions workflows per week on GitHub can afford a few dollars a month for infrastructure. The per-minute charge is less than the cost of a millisecond of engineering labor time.
Dude why are you so determined to defend this pricing change? You're all over this thread arguing with people that it's not a big deal. If it's a big deal to them, why do you give a shit? It's not like it's your problem if people take their business elsewhere for a poor reason.
I've used Finnix forever as my go-to live recovery distro, and have never heard of Knoppix.
From a cursory search, it appears that Finnix is focused on the command line while Knoppix provides a desktop environment. Don't most distros offer live boot environments these days? I know I've done this with Fedora, Debian, SUSE, and Alpine at least.
Yeah, in my book (I used Knoppix, and never heard of or don't remember Finnix), Knoppix had two uses: restoring systems, but primarily trying out whether Linux worked on an unknown PC. And also running Linux on Windows PCs at school.
> On 23 October 2005, Finnix 86.0 was released. Earlier unreleased versions (84, and 85.0 through 85.3) were "Knoppix remasters", with support for Linux LVM and dm-crypt being the main reason for creation. However, 86.0 was a departure from Knoppix, and was derived directly from the Debian "testing" tree.[7]
My reading of this is that early versions of Finnix were based on Knoppix. However, according to the Wikipedia sidebars, the initial release of Knoppix was 30 September 2000, while the initial release of Finnix was March 22, 2000. Something something beta/pre-release versions?
> Finnix 0.01 was based on Red Hat Linux 6.0, and was created to help with administration and recovery of other Linux workstations around Finnie's office.[citation needed] The first public release of Finnix was 0.03, and was released in early 2000, based on an updated Red Hat Linux 6.1.
If I had to venture a guess, I'd say compression. Knopper picked up and became maintainer of a kernel module called "cloop" that supports zlib-compressed loopback filesystems.
If Knoppix had this and Finnix didn't, or if Knopper was able to supply enhancements and bugfixes in order to support Knoppix releases, then he was likely able to fit a much more complete system onto a given CD or DVD.
But idk what kind of compression early Finnix used, if any. (Nowadays, everything uses SquashFS, right?)
Right, although I would argue the most interesting part of the type here is the container, not the containee.
With good naming it should be pretty obvious it's a Foo, and then either you know the type by heart, or you will need to look up the definition anyway.
With standard containers, you can assume that everyone knows the type, at least at a high level. So knowing whether it's a list, a vector, a stack, a map or a multimap, ... is pretty useful and avoids a lookup.
an interesting demarcation of subjective mental encapsulation ... associating the anonymous type of a buffer with the buffer's name ... as opposed to explicitly specifying the type of an anonymously named buffer
This is time efficient* but rather wasteful of space.
The best way to save space is to use a Bloom Filter.
If we capture all the even numbers, that would sadly only give us "Definitely not Even" or "Maybe Even".
But for just the cost of doubling our space, we can use two Bloom filters!
So we can construct one bloom filter capturing even numbers, and another bloom filter capturing odd numbers.
Now we have "Definitely not Even" and "Maybe Even" but also "Definitely not Odd" and "Maybe Odd".
In this manner, we can use the "evens" filter to find the odd numbers and the "odds" filter to find the even numbers.
Having done this, we'll be left with just a handful of unlucky numbers that are recorded as both "Maybe even" and "Maybe odd". These will surely be few enough in number that we can special-case them in our if/else block.
The filters as a first-pass will save gigabytes of memory!
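For anyone who wants to see this masterpiece run, here's a toy Go sketch of the two-filter scheme, shrunk to 16-bit numbers so it finishes instantly. The hand-rolled filter, its size, and the salted-FNV "hash functions" are made up for illustration and are nowhere near a tuned 1% error rate.

```go
// iseven_bloom.go — a toy sketch of the two-filter scheme described above.
package main

import (
	"fmt"
	"hash/fnv"
)

type bloom struct {
	bits []bool
	k    int // number of hash functions
}

func newBloom(m, k int) *bloom { return &bloom{bits: make([]bool, m), k: k} }

// positions derives k bit positions for x by salting an FNV hash with i.
func (b *bloom) positions(x uint32) []int {
	pos := make([]int, b.k)
	for i := 0; i < b.k; i++ {
		h := fnv.New32a()
		fmt.Fprintf(h, "%d-%d", x, i)
		pos[i] = int(h.Sum32() % uint32(len(b.bits)))
	}
	return pos
}

func (b *bloom) add(x uint32) {
	for _, p := range b.positions(x) {
		b.bits[p] = true
	}
}

func (b *bloom) maybeContains(x uint32) bool {
	for _, p := range b.positions(x) {
		if !b.bits[p] {
			return false // definitely not in the set
		}
	}
	return true // maybe in the set
}

func main() {
	const limit = 1 << 16
	evens := newBloom(1<<20, 3)
	odds := newBloom(1<<20, 3)
	for i := uint32(0); i < limit; i++ {
		if i%2 == 0 { // populated using the very check we're pretending not to have
			evens.add(i)
		} else {
			odds.add(i)
		}
	}

	x := uint32(41337)
	switch {
	case !evens.maybeContains(x):
		fmt.Println(x, "is definitely not even")
	case !odds.maybeContains(x):
		fmt.Println(x, "is definitely not odd")
	default:
		fmt.Println(x, "is an unlucky maybe-both; fall back to the if/else block")
	}
}
```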
> But for just the cost of doubling our space, we can use two Bloom filters!
We can optimize the hash function to make it more space efficient.
Instead of using remainders to locate filter positions, we can use a Mersenne prime number mask (like, say, 31), but in this case I have a feeling the best hash function to use would be to mask with (2^1)-1.
You are absolutely right! Let me write a Go program to implement this idea. The bloom filters will take approximately 5 GB (for a 1% error rate) and take a few minutes to populate on a modern MacBook Pro.
It's a constant number of lookups, and all good Computer Scientists know that it is therefore an O(1) algorithm.
It is hard to imagine better efficiency than O(1)!
Indeed we could improve it further by performing all evaluations even when we find the answer earlier, ensuring it is a true Constant Time algorithm, safe for use in cryptography.
> This is time efficient* but rather wasteful of space.
You're saying that the blog's solution is time efficient. Which it is not. Your solution may be O(1) but it is also not efficient. As I'm sure you are aware.
I can tell you a practical solution which is also O(1) and takes up maybe 2 or 3 instructions of program code and no extra memory at all.
`x & 1` or `x % 2 != 0`
This blog post was taking a joke and running with it. And your comment is in that spirit as well; I just wanted to point out that it's by no means time efficient when we have two's or one's complement numbers, which make this check trivial.
I may have missed the * meaning. I got that the bloom filter was an extension of the joke as I mentioned below. I was just clarifying in case someone else missed the joke.
You're absolutely right. The obvious solution would have been to create a boolean table containing all the pre-computed answers, and then simply use the integer you are testing as the index of the correct answer in memory. Now your isEven code is just a simple array lookup! Such an obvious improvement, I can't believe the OP didn't see it.
And with a little extra work you can shrink the whole table's size in memory by a factor of eight, but I'll leave that as an exercise for the interested reader.
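If anyone wants to peek at the answer key for that exercise, here's a rough Go sketch: pack the precomputed booleans into a bitmap, one bit per answer. It's scaled down to 16-bit inputs (an assumption for illustration) so it doesn't allocate the ~512 MB a full 32-bit table would need.

```go
// iseven_table.go — the "factor of eight" exercise: a bit-packed lookup table.
package main

import "fmt"

func main() {
	const limit = 1 << 16
	table := make([]byte, limit/8) // one bit per answer instead of one bool
	for i := 0; i < limit; i++ {
		if i%2 == 0 { // precomputed with the very check we're trying to avoid
			table[i/8] |= 1 << (i % 8)
		}
	}

	// isEven is now just a simple array lookup, as promised.
	isEven := func(x int) bool {
		return table[x/8]&(1<<(x%8)) != 0
	}

	fmt.Println(41336, "even?", isEven(41336))
	fmt.Println(41337, "even?", isEven(41337))
}
```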
If the "exercise" is to strictly rely on if-else statements, then the obvious speedup is to perform a binary search instead of a linear one. The result would still be horrifically space inefficient, but the speed would be roughly the time it takes to load 32x 4KB pages randomly from disk (the article memory-mapped the file). On a modern SSD a random read is 20 microseconds, so that's less than a millisecond for an even/odd check!
"That's good enough, ship it to production. We'll optimise it later."
Perhaps, but I fear you’re veering way too much into “clever” territory. Remember, this code has to be understandable to the junior members of the team! If you’re not careful you’ll end up with arcane operators, strange magic numbers, and a general unreadable mess.
> I guess the word contemporary has been misused to the point of just meaning current or modern and I shouldn't nitpick it!
According to at least a few references, it clearly carries both meanings. I couldn't find a single dictionary that excludes one or seems to favor one over the other.
Ah, thanks -- I was just trying to capture the weirdness that happens when a work is set in the past, and then that work itself becomes old. For instance, if you watch Braveheart right now you're getting two views of the past: you're getting a (not-very-realistic) view of medieval England, and then in addition you're getting a view into how people in the 90s felt about history and social issues.
In the long run, this makes for very interesting rhetorical analysis of the work.
Your example of Braveheart, for instance, involves two views of the past through the lens of the _present_. So even in that context, both of those views are tinted by the experience and environment of the observer.
"contemporary fiction" is an industry/academic term for a genre of literature, but not widely used in the TV world. I think they meant "contemporary fiction" in the sense of the production of the fiction is contemporary. As in the TV show is contemporary in its creation, but the setting is historical. I don't think that redefines contemporary outside of... contemporary usage and definition.
It makes the most sense in context, and the discussion is about a TV show and not literature.
Different nitpick: Mad Men first aired in 2007. Is an 18 year old show that stopped production more than a decade ago contemporary?