Yep, I remember downloading a beta version of what would eventually be released as Windows Server 2003. The beta was called Windows .NET Server 2003.
I had some books that referred to it as .NET Server printed before the name change. In the long history of terrible Microsoft names, this was a rare case where they were able to right the ship.
It had first meant a coffee-table form factor PC with a touch screen and special software that could sense special objects placed on top of it.
Then that was renamed to "PixelSense" [1] and "Surface" instead got put on a line of touchscreen tablet form factor PCs launched together with Windows 8. OK, reusing a strong name for a product line expected to sell more, and which still fit the theme, made sense.
... but then the brand was also put on laptops, convertibles, desktop PCs and an Android phone ... eh, OK, but at least those also had touch screens.
... but then the brand was also put on generic peripherals: keyboard, mouse, headphones, earbuds, etc. which diluted the brand to mean practically nothing.
For example, a search for "surface keyboard" could return either a "type cover" for some kind of tablet PC or a keyboard intended for desktop computers.
Microsoft later did the same with the "Microsoft Sculpt" brand. It was first a compact curved "sculpted" ergonomic keyboard with chiclet keys and an ergonomic mouse that were most often sold as a set. That got quite popular and so the brand achieved recognition.
But later, Microsoft decided to reuse that brand for completely generic peripherals with no special ergonomic designs whatsoever.
BTW, not long after, Microsoft also released products with the similarly ungoogleable names "Microsoft Bluetooth Keyboard" and "Microsoft Ergonomic Keyboard".
The first one I remember is RealPlayer. I think the official story was that they were having more and more trouble convincing people to install upgrades (at a time when 56k and slower modems were still common, downloading an app could take minutes, and upgrade nags seemed ever present), so they decided to name the new major version RealOne Player "because it's the One, the only One you need, the One that does everything for you".
Of course, this meant that the next time they tried to get anybody to install a patch, some of us felt annoyed, because RealOne Player wasn't "the One" after all. Why should we get back on the treadmill of waiting for downloads that rarely seemed necessary?
Ahem. I think this event sensitised me against all attempts at using "one" like that. I mentally flip a table every time.
iOS seems to mute the web audio apis when the phone is in silent mode (the switch on the side of the phone). If you toggle it on, then this site (and many others) play sound.
I have no idea why it works this way and it’s frequently annoying.
Why wouldn't it work that way? Whether it's a hardware toggle like on iPhone or a software one like in Android, I want silent to mean silent. Not "silent but if a web page decides to play sound it can".
There is some amount of the "focus follows brain" problem here. What we want is for things to do what we mean, all the time, and in this case it's very possible that the visitor wanted to hear the music. It is not practical (without yet-to-be-invented technology) for that to work, so we have a substitute: there's a switch and you should remember to press it.
"Focus follows brain" is how everybody wants windowed UIs to work. When I type on the keyboard the letters go where my brain thought they should go - duh, but of course that's unimplementable, so the Windows UI provides "Click to focus" - if I click on a Window the typing goes there until I click another window, meanwhile some Unix systems do "Focus follows Mouse" - if I move the mouse over a Window then my typing goes there even without clicking. Neither is what we actually wanted, both are trying to approximate.
Many, many times I have music playing in the background from another app while browsing. So no, there’s no way to do focus-follows-brain. There’s just no way for this device to know what I want unless I tell it.
The phone will still make sound if I launch a music app, why is a web page different?
And I hate web pages making sound! But the UX is confusing, and it’s changed over the years, seemingly without reason.
iPhones now have a software toggle as well, which may have coincided with the shift from “mute ringer” to “mute (almost) everything” that came with the multifunction button.
Web browsers on desktop operating systems initially allowed any website to play audio without any interaction required. Some websites would blast annoying audio ads as soon as you opened a page on their site. So effort was put into making it so that web browsers on desktops would only play sound after user interaction via mouse click. Later, some websites were exempted from that by some desktop web browsers, for example YouTube I think.
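The mechanics of that gate, roughly: `play()` returns a promise that rejects until the page has seen a user gesture. A browser-only sketch (the file name and button id here are made up for illustration):

```javascript
// Browser-only sketch: autoplay policies make audio.play() return a
// promise that rejects until the user has interacted with the page.
const audio = new Audio("music.mp3"); // hypothetical file

// Hypothetical button; playback started from its click handler counts
// as a user gesture, so the promise usually resolves here.
document.getElementById("play-button").addEventListener("click", () => {
  audio.play().catch((err) => {
    // Still blocked (stricter policy, or platform quirks like iOS silent mode)
    console.warn("Playback refused:", err);
  });
});
```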
Even without ads, background noise that starts automatically as soon as you visit a page can be distracting and disruptive.
I’m perfectly happy that Safari on iOS does not play background audio when I have my phone in silent mode. Even when I have tapped on buttons on the page.
Silent mode is not only for notifications anyway. The built-in keyboard is also silent in silent mode, whereas when silent mode is off it makes an annoying click sound for every key you press. Likewise, the built-in camera app on iOS makes a shutter sound when you take photos with silent mode off; with silent mode on, the camera app is silent. Same with taking screenshots. I take a lot of screenshots, and prefer that people around me don’t think I’m taking photos when I am taking a screenshot on the phone.
Meanwhile, if I open a music player app on my phone and hit play, I have made a very deliberate choice about playing sound.
All of the games on my phone I can think of are also silent in silent mode. Not sure if all games have to be silent in silent mode or not on iOS (i.e. if “can play sound in silent mode” is a special permission in iOS and if Apple disallows apps categorized as games in App Store from having that permission or not). But I like that the games I play on my phone are silent in silent mode.
There is some inconsistency indeed about what is silent or not, but I am happy with the way that it is as someone who prefers surprising silence over surprising noises from my phone when it’s in silent mode.
media sound is generally unaffected by the silent mode toggle, which apple suggests is only for notifications. but the toggle inconsistently affects media, muting some things but not others. it's incredibly frustrating. android has much better audio controls for notifications, media, alarms, and vibrate.
> zig's caching system is designed explicitly so that garbage collection could happen in one process simultaneously while the cache is being used by another process.
> I just ran WizTree to find out why my disk was full, and the zig cache for one project alone was like 140 GB.
> not only the .zig-cache directory in my projects, but the global zig cache directory which is caching various dependencies: I'm finding each week I have to clear both caches to prevent run-away disk space
Like what's going on? This doesn't seem normal at all. I also read somewhere that zig stores every version of your binary as well? Can you shed some light on why it works like this in zigland?
AFAIK garbage collection is basically not implemented yet. I myself do `ZIG_LOCAL_CACHE_DIR=~/.cache/zig` so I only have to nuke a single directory whenever I feel like it.
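One possible setup along those lines (the env vars are the knobs Zig consults for its two caches; the paths are just my choice):

```shell
# Pin both of zig's caches to known locations. ZIG_LOCAL_CACHE_DIR and
# ZIG_GLOBAL_CACHE_DIR are the environment variables zig reads.
export ZIG_LOCAL_CACHE_DIR="$HOME/.cache/zig-local"
export ZIG_GLOBAL_CACHE_DIR="$HOME/.cache/zig-global"

# Reclaiming disk space is then one command:
# rm -rf "$HOME/.cache/zig-local" "$HOME/.cache/zig-global"
```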
20 seconds each time. Last time I tried to enable incremental build, it wasn't working for us. It was a while ago, but I think it had to do with something in our v8 bridge.
After implementing a number of dithering approaches, including blue noise and the three line approach used in modern games, I’ve found that quasi random sequences give the best results. Have you tried them out?
What is the advantage over blue noise? I've had very good results with a 64x64 blue noise texture and it's pretty fast on a modern GPU. Are quasirandom sequences faster or better quality?
(There's no TAA in my use case, so there's no advantage for interleaved gradient noise there.)
EDIT: Actually, I remember trying R2 sequences for dither. I didn't think it looked much better than interleaved gradient noise, but my bigger problem was figuring out how to add a temporal component. I tried generalizing it to 3 dimensions, but the result wasn't great. I also tried shifting it around, but I thought animated interleaved gradient noise still looked better. This was my shadertoy: https://www.shadertoy.com/view/33cXzM
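For reference, the two threshold functions being compared can be sketched like this (R2 uses Martin Roberts' quasirandom constants from the plastic number, the real root of x^3 = x + 1; IGN uses Jorge Jimenez's published magic numbers):

```python
# Two per-pixel dither threshold functions: R2 quasirandom vs
# interleaved gradient noise (IGN). Both return a value in [0, 1).

PLASTIC = 1.32471795724474602596  # real root of x^3 = x + 1
A1 = 1.0 / PLASTIC
A2 = 1.0 / (PLASTIC * PLASTIC)

def r2_threshold(x: int, y: int) -> float:
    """R2 quasirandom threshold for pixel (x, y)."""
    return (x * A1 + y * A2) % 1.0

def ign_threshold(x: int, y: int) -> float:
    """Interleaved gradient noise threshold for pixel (x, y)."""
    return (52.9829189 * ((0.06711056 * x + 0.00583715 * y) % 1.0)) % 1.0

def dither(value: float, x: int, y: int, threshold=r2_threshold) -> int:
    """Quantize a grayscale value in [0, 1] to 0 or 1 via the threshold."""
    return 1 if value > threshold(x, y) else 0
```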
Ooh, I haven't actually! I'll need to implement and test this for sure. Looking at the results though it does remind me of a dither (https://pippin.gimp.org/a_dither/), which I guess makes sense since they are created in a broadly similar way.
Looks pretty good! It looks a bit like a dither, but with fewer artifacts. Definitely a "sharper" look than blue noise, though in places like the transitions between the text boxes you can see a few more artifacts (it almost looks like the boxes have staggered edges).
If you're the one building the image, rebuild with newer versions of constituent software and re-create. If you're pulling the image from a public repository (or use a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
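Concretely, the container-side loop is something like this (the image and tag names are placeholders, not from the thread):

```shell
docker pull nginx:1.27                 # bump the version you're pulling, or
docker build --pull -t myapp:latest .  # rebuild your own image on a fresh base
docker compose up -d --force-recreate  # re-create the container from the new image
```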
How does one do it on nix? Bump version in a config and install? Seems similar
Now do that for 30 services and system config such as firewall, routing if you do that, DNS, and so on and so forth. Nix is a one stop shop to have everything done right, declaratively, and with an easy lock file, unlike Docker.
Doing all that with containers is a spaghetti soup of custom scripts.
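For comparison, the flake-based Nix loop being described is roughly this (the hostname is a placeholder):

```shell
nix flake update                             # bump every pinned input in flake.lock
sudo nixos-rebuild switch --flake .#myhost   # rebuild and activate the whole system
```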
Perhaps. There are many people, even in the IT industry, that don't deal with containers at all; think about the Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. I have experienced containers as nothing other than pervasive. I guess my surprise just stems from the fact that even I, a non-CS person, know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports biz in my metro area. I've never used docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail and brick/mortar retail. Virtualization here is hyper-v.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be at the lack of dockerism in my world, I'm equally confounded at the lack of compartmentalization in their browsing - using just one browser and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote," without caring what language it's in, or looking for rpms, etc etc.
If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course, there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services.
I'm a bit surprised this has to be explained in 2025, what field do you work in?
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and mess with all potential issues if software builds ...
Yes, in the happy path it is just a "docker build" that updates stuff from a Linux distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, as all people write their Dockerfiles differently, handle build steps differently, use different base Linux distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.
So you go from having to worry about one image + N services to up-to-N images + N services.
Just that state _can_ be outside the container, and in most cases should be. It doesn't have to be. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you down your container, that stuff is basically gone, which is why state usually does live outside, like you are saying.
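A compose sketch of that split (service and volume names are hypothetical): anything under the named volume survives `docker compose down`, while anything written elsewhere in the container's filesystem is lost when the container is removed.

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data   # state kept outside the container
    # anything written outside that path lands in the container's writable
    # layer and disappears when the container is removed

volumes:
  dbdata:
```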
Your understanding of not-containers is incorrect.
In non-containerized applications, the data and state also live outside the application, stored in files, a database, a cache, S3, etc.
In fact, this is the only way containers can decouple programs from state — if it’s already done so by the application. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
I'm curious about it and your thinking on how to track things over time and see what has surprised us since we got started. It is useful to note down every time you (or your team) set an expectation with someone (or another team) and then make sure you don't forget about it. It's also useful to be deliberate when setting expectations.
Having a public journal could well work for noting down when expectations are set and whenever there is a meeting of minds. I've found when tracking things like this that the amount of data can quickly grow to the point where you can no longer quickly and easily reason about it. The success seems to live and die on the data visualization or UI/UX.
Ok, I'll bite. From the article I can't really figure out what collaborating by contract (CBC) is, how it works in practice or how to introduce it to an organization.
A search on Google for "collaborate by contract" gives three results, all from the same person, all in the last few weeks. Including this new article, that's 1776 words in total on CBC. It doesn't seem to be real or something that has been tried in an organization. It appears to be Al Newkirk's idea for a system that could work, but has not been tried.
Specifically, I'd like to see an example of a contract and who agrees to it; what the journal of contracts looks like; what happens when, after an agreement, everyone learns something they didn't know when the agreement was made; and what the leaders commit to and what happens when they fail to deliver it.
Many teams have working agreements; and companies have employee handbooks.
I don’t know if you’ve ever read these in detail, but they’re generally one-directional. Other than dating/relationships (manager and direct reports, etc.) and some generally applicable guidelines, they’re favorable to management.
One thing I make very clear to my direct reports is that I expect them to hold me to account when I fail to do something or hinder the team; even going above me if needed.
But this is ad-hoc. It’s not consistent across the board, and I see managers who are active hindrances to their team or their mission.
This is also the norm in many companies, and it’s a problem.
It sounds like you've got something specific in mind when you say "modeling". The term is used in a lot of different situations to mean different things: it could mean making a 3D model in Blender, or posing for someone to paint you or take a photo; with databases it means modeling the data; with statistics it means finding a way to simply represent and reason about the data (creating a model of it).
The things you've listed out make me guess you want to write 2d or 3d image rendering software. Is that right?
If that's the case, there's no substitute for trying to recreate certain algorithms or curves using a language or tool that you're comfortable with. It'll help you build an intuition about how the mathematical object behaves and what problems it solves (and doesn't). All of these approaches were created to solve problems, understanding the theory of it doesn't quite get you there. If you don't have a good place to try out functions, I recommend https://thebookofshaders.com/05/ , https://www.desmos.com/calculator , or https://www.geogebra.org/calculator .
A good place to start is linear interpolation (lerp). It seems dead simple, but it's used extensively to blend two things together (say positions or colors) and the other things you listed are mostly fancier things built on top of linear interpolation.
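A minimal sketch of lerp and how it blends two colors:

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: returns a at t=0, b at t=1, blends in between."""
    return a + (b - a) * t

# Blending two RGB colors halfway:
red = (255, 0, 0)
blue = (0, 0, 255)
purple = tuple(lerp(c0, c1, 0.5) for c0, c1 in zip(red, blue))
# purple == (127.5, 0.0, 127.5)
```

The same function blends positions, opacities, animation keyframes - anything you can add and scale.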
A final note: a lot of graphics math involves algebra. Algebra can be fun, but it also can be frustrating and tedious, particularly when you're working through something large and make a silly mistake and the result doesn't work. I suggest using sympy to rearrange equations or do substitutions and so on. It can seem like overkill but as soon as you save a few hours debugging it's worth it. It also does differentiation and integration for you along with simplifying equations.
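For instance, inverting the lerp formula with sympy instead of doing the algebra by hand (sympy is a third-party package):

```python
import sympy as sp

a, b, t, y = sp.symbols("a b t y")

# Solve y = a + (b - a)*t for t, i.e. "which blend factor produces value y?"
t_solution = sp.solve(sp.Eq(y, a + (b - a) * t), t)[0]

# sympy returns an expression equivalent to (y - a)/(b - a)
```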