No, this is incoherent garbage written by a man who's too fragile to accept and reflect on real peer review (you know, something that is actually required for real science and philosophy, or just for contributing to the Linux kernel), and who is using a self-described sycophantic LLM as a pathetic substitute, because no real human wants to put up with your abusive behavior.
You didn't "win" me over with your garbage arguments when we were friends, and your completely unempathetic ad hominems cost you a friend.
Churning out LLM slop, and labeling it "science" isn't convincing anyone that you're anything other than a lolcow in the making.
Yeah, these all sound like complete non-issues if you're actually... keeping your codebase clean and talking through design with Claude instead of just having it go wild.
I'm using it for converting all of the userspace bcachefs code to Rust right now, and it's going incredibly smoothly. The trick is just to think of it like a junior engineer - a smart, fast junior engineer, but lacking in experience and big picture thinking.
But if you were vibe coding and YOLOing before Claude, all those bad habits are catching up with you suuuuuuuuuuuper hard right now :)
Have you looked at the history of data loss bugs in other filesystems?
If you look at actual data - frequency of user impacting data loss bugs - bcachefs has been doing quite a bit better than other filesystems have /after/ they've dropped the experimental level.
We just live in the age of hype and overhype and excitement that turns into drama. Everyone just needs to chill out :)
And I don't hide stuff like this: compare the impact of the bug itself to what you'd see in other filesystems. We knew basically from the first report what caused it and were able to communicate to users what happened; it wasn't random, and it wasn't silent data loss - the error messages were good enough that we could understand what was going on.
Talk to people who are actually using it. I know of quite a few people who are now migrating from ZFS because they want something more reliable.
Fair point, and I appreciate the transparency around data loss bugs.
What about long-term sustainability? Looking at the git history, ~97% of bcachefs commits are yours. What happens if you step back, burn out, or can't continue for any reason? Is there a fallback plan - a community or team that could realistically take over?
For anyone evaluating this for production use in a company, that's the question that matters most. A filesystem isn't a library you can swap out — you're locked in for years. The technical quality can be excellent and it still won't pass a risk assessment if it depends on a single person.
I'm not the only person who knows the codebase well enough to do actual work (there's at least one other person I'd be comfortable with giving commit access to), and it's clean and documented pretty well for a filesystem.
And the sustainability equation just changed dramatically, thanks to Claude.
I've been using it the past week for a lot of stuff, and I should really write something longer up, but suffice it to say that I'm impressed. It can't do much independently yet, but it's been able to handle a /lot/ of the grunt work - the other night I had it go through open GitHub issues, fix what it could and take notes on the rest, and I came back to 8 patches for actual bugs, all of them correct, with excellent commit messages. Holy shit, we're living in the future :)
It can't design for shit, it doesn't understand performance implications when writing code (I've noticed this repeatedly); most of how I'm using it is "pair programming". But I'm finally feeling like I'll be able to take a vacation in the near future and still keep up with everything.
Two other people are using it (with heavy review, actively telling it to go back and research topics more) for a big update to the Principles of Operations. Nice.
Basically, the sustainability aspect comes down to writing clean, maintainable, well documented code - and I think I've accomplished that. One holy shit moment the other day was watching Claude navigate /and understand/ btree and journalling code - and making the connection between the way I use assertions and linear typing/dependent typing. All those years spent on that code developing new ways of thinking about and using assertions to make that stuff practical for one person to do... it's paid off.
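To illustrate what "assertions as linear typing" can look like in practice, here's a minimal sketch - the names and the iterator itself are hypothetical, not actual bcachefs code. The idea is that assert()s encode the legal state transitions of an object (initialized exactly once, used only while live, released exactly once), so a reader - or an LLM - can recover the protocol from the checks alone:

```c
/* Hypothetical sketch (illustrative names, not the bcachefs API):
 * assertions encoding a linear-typing-style protocol for an iterator.
 * The legal lifecycle is UNINIT -> LIVE -> DONE, and each function
 * asserts the precondition of its transition, making the protocol
 * machine-checkable at runtime and self-documenting in the source. */
#include <assert.h>
#include <stddef.h>

enum iter_state { ITER_UNINIT, ITER_LIVE, ITER_DONE };

struct iter {
	enum iter_state	state;
	size_t		pos, len;
};

static void iter_init(struct iter *it, size_t len)
{
	assert(it->state == ITER_UNINIT);	/* may only be initialized once */
	it->pos		= 0;
	it->len		= len;
	it->state	= ITER_LIVE;
}

static int iter_next(struct iter *it)
{
	assert(it->state == ITER_LIVE);		/* no use before init or after exit */
	if (it->pos == it->len)
		return 0;
	it->pos++;
	return 1;
}

static void iter_exit(struct iter *it)
{
	assert(it->state == ITER_LIVE);		/* exactly one release */
	it->state = ITER_DONE;
}
```

The payoff is exactly what's described above: any misuse trips an assertion near the actual mistake rather than corrupting state silently, and the invariants are written down where both humans and tools can see them.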
Beyond that, the real challenges are pushing the boundaries of how introspectable, understandable and transparent complex systems code can be for the end user - and bcachefs is pushing boundaries there. Making a habit of writing pretty printers for absolutely everything means that now our tracing is the best you'll see in software like this, and well integrated with 'bcachefs fs top'. The timestats stuff that I started well over a decade ago - we've now got a new 'bcachefs fs timestats' interface for that, which is already making debugging performance issues dramatically easier than it has been in the past.
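A rough sketch of the "pretty printer for absolutely everything" habit - again with illustrative names, not the actual bcachefs API: each core type gets one canonical *_to_text() function targeting a small string buffer, and tracepoints, error messages, and CLI tools all reuse it instead of hand-rolling their own formatting:

```c
/* Hypothetical sketch (not the actual bcachefs printbuf API): one
 * canonical human-readable form per type, written once, reused by
 * every tracing/error/CLI path that needs to print that type. */
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

struct outbuf {
	char	buf[256];
	size_t	pos;
};

static void out_printf(struct outbuf *out, const char *fmt, ...)
{
	va_list args;
	int n;

	va_start(args, fmt);
	n = vsnprintf(out->buf + out->pos, sizeof(out->buf) - out->pos, fmt, args);
	va_end(args);

	/* vsnprintf returns the would-be length; clamp on truncation */
	if (n > 0)
		out->pos = out->pos + (size_t) n < sizeof(out->buf)
			? out->pos + (size_t) n
			: sizeof(out->buf) - 1;
}

struct extent {
	unsigned long long	offset;
	unsigned		sectors;
	unsigned		replicas;
};

/* The one place this type's textual form is defined: */
static void extent_to_text(struct outbuf *out, const struct extent *e)
{
	out_printf(out, "extent %llu len %u replicas %u",
		   e->offset, e->sectors, e->replicas);
}
```

The design win is consistency: when every type already knows how to print itself, adding a new tracepoint or debug interface is a few lines of glue, which is what makes "best-in-class tracing" cheap enough to actually maintain.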
It's not just faces. When recognizing objects in the environment, we normally filter out a great number of details going through the visual cortex - by the time information from our eyes hits the level of conscious awareness, it's more of a scene graph.
Table; chair behind and a little to the left of the table; plant on table
Most people won't really have conscious access to all the details that we use in recognizing objects - but that is a skill that can be consciously developed, as artists and painters do. A non-artist would be able to identify most of the details, but not all (I would be really bad compared to an actual artist with colors and spatial relationships), and I wouldn't be able to enumerate the important details in a way that makes any kind of sense for forming a recognizable scene.
So it follows that our ability to recognize faces is not purely - or even primarily - an attribute of what we would normally call "memory", certainly not in the sense of conscious memory where we can recall details on demand. Like you alluded to re: mammals and spaces, we're really good at identifying, categorizing, and recognizing new forms of structure.
You'd have to be stupid and desperate to steal from a garage.
The people who work there aren't office workers; you've got blue collar workers who spend all day working together, hanging out, and using heavy equipment right in the back. And they're going to be well acquainted with the local tow truck drivers and the local police - so unless you're somewhere like Detroit, you'd better be on your way across state lines the moment you're out of there. And you're not conning a typical corporate drone who sees 100 faces a day; they'll be able to give a good description.
And then what? You're either stuck filing off VINs and faking a bunch of paperwork, or you have to sell it to a chop shop. The only way it'd plausibly have a decent enough payoff is if you're scouting for unique vehicles with some value (say, a mint condition 3000GT), but that's an even worse proposition for social engineering - people working in a garage are car guys, when someone brings in a cool vehicle everyone's talking about it and the guy who brought it in. Good luck with that :)
Dealership? Even worse proposition, they're actual targets so they know how to track down missing vehicles.
If you really want to steal a car via social engineering, hit a car rental place, give them fake documentation, then drive to a different state to unload it - you still have to fake all the paperwork, and strip anything that identifies it as a rental, and you won't be able to sell to anyone reputable so it'll be a slow process, and you'll need to disguise your appearance differently both times so descriptions don't match later. IOW - if you're doing it right so it has a chance in hell of working, that office job starts to sound a whole lot less tedious.
Stolen cars are often sold for trivial amounts of money - like $50 - and then used to commit crimes that can't be traced back through their plates. It hasn't really been possible to steal and resell a car in the United States for many years, barring a few carefully watched loopholes (Vermont's out-of-state registration loophole is one example that was recently closed).
When Kia and Hyundai were recently selling models without real keys or ignition interlocks, that was the main thing folks did when they stole them.
In Canada there's been a big problem with stolen cars lately. Mostly trucks, and other high value vehicles though. Selling them locally isn't feasible, but there's a criminal organization that's gotten very good at getting them on container ships and out to countries that don't care if the vehicles are stolen. So even with tracking, there's nothing people can do. Stopping it at the port is the obvious fix, but somehow that's not what is being done. Probably bribery to look the other way.
Same thing in Australia - some gang was busted recently for stealing mid-range four wheel drives, packing them in shipping containers with partially dismantled cars (I guess so that a cursory inspection would just show "car parts" rather than a single nice looking car) and then shipping them around the world (I guess an overseas buyer isn't checking if a car with this VIN has been stolen on the other side of the world).
Yeah, the only way to do it would be a cash transaction where you'd have to forge a legitimate looking title/registration and pass it off to a naive buyer. So it's still technically possible, but not in any kind of remotely scalable way.
Well, there's a flip side of that, which is that all our critical infrastructure is now open source.
And if you're comparing where we're at now, culturally, with where we were at in the early days of the internet - John Postel, the RFC process, the guys building up the early protocols, running DNS and all that - there's been a different kind of shift.
The way I look at it is, a lot of us hackers (the category I'd put myself in), academics, and hardcore engineers who worked in industry but didn't give a damn about anything except doing solid work other people could rely on - we built up the modern tech stack, and then industry jumped to it as a cost cutting measure and it's been downhill from there.
And this puts us all in a real bind when the critical infrastructure we all rely on is dominated by a few corporate giants who still have the mindset that they want to own everything; they only pay lip service to the community, and even getting bug fixes in is a problem if it's something they don't care about.
This mindset invading the Linux kernel is a huge part of the reason for the bcachefs split, btw. We recently had a prominent filesystem maintainer talking openly about how they'll only fix bugs if they feel like it, or as part of a quid pro quo with another established player - and that's just not OK. Open source is used by the entire world, not just Google/IBM/Facebook/Amazon.
"How we manage critical infrastructure as a commons - responsibly" needs to be part of the conversation.