Hacker News | past | comments | ask | show | jobs | submit | reidrac's comments

Very useful because the information is almost distribution agnostic, as Arch sticks to upstream as much as possible; or at least that's my impression as a Debian user reading their wiki.

Also: isn't the Arch wiki the new Gentoo wiki? Because that was the wiki in the early 2000s and, again, I've never used Gentoo!


Gentoo's wiki is still great (& Arch's has been great for a long time), but yes, Arch's is probably improving at a faster rate. Arch is also a little more comprehensive when it comes to mainstream tech that's divergent like init & network management - Gentoo's still good here but openrc & netifrc show their influence throughout.

I think the Gentoo wiki that gp is referring to is the old gentoo-wiki.info that went down after a hardware failure or something like that and never came back.

The wiki.gentoo.org we have now restores some of that but probably not everything - and there was a void left between them that let the Arch wiki gain mindshare.


I get the sense the Arch wiki pages have more detail than the man pages themselves.

The wiki captures the knowledge that developers of said apps assume to be common, but don’t actually make sense unless you are bootstrapped into the paradigm.


Most man pages are written for someone who knows pretty precisely what they want to do, but doesn't recall which knobs to turn in the program to get that done. The Arch wiki instead focuses on someone who has a vague idea of what tools to use but doesn't know how those tools operate.

I've found that with an intermediate understanding, the Arch wiki is so much better that oftentimes I won't even check the man pages. But on the occasions where I know the thing pretty well, the wiki can be quite spotty, especially when it's a tool that's weird or niche among Arch users. So, depending on how you define "more detail", that might be an illusion.


Man pages were always intended to be concise reference material, not tutorials or full docs. More akin to commented header files or a program's --help output, before the latter became common.

(GNU info tried to be a more comprehensive CLI documentation system but never fully caught on.)


man pages got replaced by --help in many, many cases.

GNU info was an interesting experiment but it got replaced by online wikis.


Anecdotally, the Arch wiki expands on the vague man pages, often with examples for cases people actually use. The wiki is also much easier to modify, with the instant gratification of seeing changes published; contributing to a project's upstream man pages means waiting for the change to trickle down.

Arch wiki is far better than most man pages. I've referred to Arch for my own non-Arch systems and when building Yocto systems. Most Arch info applies.

In the ancient days I used TLDP to learn about Linux stuff. Arch wiki is now the best doc. The actual shipped documentation on most Linux stuff is usually terrible.

GNU coreutils have man pages that are correct and list all the flags at least, but suffer from GNU jargonisms and usually a lack of any concise overview or example sections. Most man pages are a very short description of what the program does, and an alphabetic list of flags. For something as versatile and important as dd the description reads only "Copy a file, converting and formatting according to the operands" and there's not even one example of a full dd command given. Yes, you can figure it out from the man page, but it's like an 80s reference, not good documentation.
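For illustration, here is the sort of worked example the dd man page omits — a minimal sketch assuming GNU coreutils dd (the filenames and sizes are arbitrary):

```shell
# Create a 1 MiB file of zeros, then copy it with an explicit block size --
# two of the most common dd invocations, neither shown in the man page.
dd if=/dev/zero of=image.bin bs=1M count=1   # write 1 block of 1 MiB
dd if=image.bin of=copy.bin bs=64K           # copy it in 64 KiB blocks
```

Two lines like these at the end of the man page would save many users a trip to the wiki.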

man pages for util-linux are my go-to example for bad documentation. Dense, require a lot of implicit knowledge of concepts, make references to 90s or 80s technology that are now neither relevant nor understandable to most users.

Plenty of other projects have typical documentation written by engineers for other engineers who already know this. man pipewire leaves you completely in the dark as to what the thing even does.

Credit to systemd, that documentation is actually comprehensive and useful.


> Also: isn't the Arch wiki the new Gentoo wiki? Because that was the wiki in the early 2000s and, again, I've never used Gentoo!

It is, didn't Gentoo suffer some sort of data loss which made it lose its popularity?


Gentoo's source based approach was always destined to be less popular than a precompiled distro. Compile times & customization options select for a certain clientele.

I think the reference was to Gentoo's wiki, which was indeed hacked and lost data iirc.

But yes, comparing the distros themselves, Gentoo will not outcompete streamlined, prepackaged distros in broader adoption metrics.

The wikis themselves are largely distro agnostic and exceptionally useful for everyone on Linux though.


All my machines still run Gentoo (I have used it for over 25 years). I just love the package manager. It has become much more low friction with the binary packages and gentoo-kernel(-bin). I regularly visit both the Gentoo and Arch documentation. They even cross reference each other and both are a great resource.

> Also: isn't the Arch wiki the new Gentoo wiki? Because that was the wiki in the early 2000s and, again, I've never used Gentoo!

Exactly my thought! 20 years ago, I used Gentoo, and their wiki was the best. At some point the Arch wiki appeared and became better and better. Eventually, tired of compiling for hours, I switched one machine at a time to Arch, and today the Arch wiki is the number one.


Interestingly enough, the ArchWiki itself seems to be slowly getting augmented by the NixOS wiki. Due to the way NixOS works, new packages constantly hit weird edge cases, which requires deep diving into the package to write a workaround; that info then ends up either in the wiki or in the .nix package comments.

Arch and its wiki were already pretty good when it happened, but the real turning point was when the Gentoo wiki got hacked. After that, it never really recovered, and the Arch wiki must have absorbed a lot of that expertise, because that's when it really took off.

as I recall anyway. can't believe it's been so long.


The Gentoo wiki was (is in many ways) phenomenal, and I recommend anyone interested in the inner workings of Linux at least walk through a full install from scratch - you learn a lot even just copying the instructions into the terminal.

In my experience, yes, it is. I used Gentoo (using its wiki to install and configure it), then after a few distro hops I ended up at Arch Linux, and the wiki was a blessing; ever since I found it (>10 years ago), I haven't needed anything else. The stuff they have on there applies both specifically AND generally, whereas Gentoo's wiki is usually specific, IIRC.

Yes, the Gentoo wiki used to be the top, but it was an unofficial wiki and so wasn't backed up properly. Then it suffered a data loss and never recovered. I believe there is still an archive of some of its pages on the Wayback Machine.

Glad it's not just me. I had been using Arch for years, and whenever I landed on their docs pages, the first thing I would think of EVERY time without fail was the Gentoo wiki!

> Also: isn't the Arch wiki the new Gentoo wiki? Because that was the wiki in the early 2000s and, again, I've never used Gentoo!

man came here to say the same.

used gentoo for all of 5 minutes in 2005 but the wiki was amazing and I referenced it repeatedly for other things.

generally heard the same about the arch wiki, too


> A manager shouldn’t get bogged down in the specifics—they should focus on the higher-level, abstract work. That’s what management really is.

I don't know about this; or at least, in my experience, that's not what happens with good managers.


Indeed. When I was just starting out, every blog and tweet screamed that micro-management sucks. It does, if the manager does it all the time. But sometimes it is extremely important and prevents disasters.

I guess the best managers just develop a hunch and know when to do this and when to ask engineers for the smallest details, to potentially develop different solutions. You have to be technical enough to do this.


I continued reading, but you're right. Why did the author feel that it was necessary to include that?


Because typing in text and syntax is now becoming irrelevant and mostly taken care of by language models. Computational thinking and semantics, on the other hand, will remain essential to the craft, and always have been.


> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.

It's not a gift if it was stolen.

Anyway, in my opinion the code that was generated by the LLM is yours as long as you're responsible for it. When I look at a PR I'm reading the output of a person, independently of the tools that person used.

There's conflict, perhaps, when the submitter doesn't take full ownership of the code. So I agree with Antirez on that part.


> It's not a gift if it was stolen.

Yeah, I had a visceral reaction to that statement.


Yet nobody is changing their licenses to exclude AI use. So I assume they are OK with it.


Licenses mean nothing if AI training on your data is fair use, which courts have yet to determine.

You can have a license that says "NO AI TRAINING EVER" in no uncertain terms and it would mean absolutely nothing because fair use isn't dictated by licenses.




Which God are you talking about, and how do you believe they have communicated this?


What's the point of changing the license? It will be scraped anyway.

'The only winning move is not to play' - stop contributing to OSS.


I would, and people do, but no one respects the license because it is largely unenforceable.


It is knowledge; it can't be stolen. It's "stolen" only in the sense that someone was gatekeeping knowledge, which is, to say the least, a dubious practice. Is math stolen? If you built your knowledge on top of "stolen" math, you own nothing and could claim to have been stolen from yourself.


I disagree.

Code is the expression of knowledge and can be protected by copyright.

A lot of the popular licenses on GitHub (like MIT) permit you to use a piece of code on the condition that you credit the original author. If an LLM outputs code from such a project (or remixes code from several such projects), then it needs to credit the original authors or be in violation.

If Disney's intellectual property can be stolen and needs to be protected for 95+ years by copyright then surely the bedroom programmers' labor deserves the same protections.


We're not talking about the expression of knowledge. What is used in AI models is the knowledge behind that expression. The code is not copied as is; instead, knowledge is extracted from it and used to produce similar code. Copyright does not apply, IMHO.


Then why was Claude code generating exact copies of code files all with the copyrights intact last week when I used it?


So you can train AI on Disney movies to generate and sell your own Disney movies because "knowledge is extracted" from them? I bet that won't fly in the courts. Here is "Slim Cinderella", trained on and extracted from all the Disney Cinderella movies!




Sure, you can do it illegally - by breaking the law and recognizing that you need to be a fugitive. You can give up civilization and live in the wilderness. People can do whatever they want on their 10 year old Dell as long as they don't sell/distribute products made from other people's true efforts.


Are you against copyright, patents, and IP in all forms then?


Independent of ones philosophical stance on the broader topic: I find it highly concerning that AI companies, at least right now, seem to be largely exempt from all those rules which apply to everyone else, often enforced rigorously.


I draw from this that no one should be subject to those rules, and we should try to use the AI companies as a wedge to widen that crack. Instead, most people who claim that their objection is really only about consistency, not love for IP, spend their time trying to tighten the definitions of fair use, widen the definitions of derivative works, and in general make IP even stronger, which will affect far more than just the AI companies they're going after. This doesn't look to me like the behavior of people who truly only want consistency but don't like IP.

And before you say that they're doing it because it's always better to resist massive, evil corporations than to side with them, even if it might seem expedient: the people who are most strongly fighting against AI companies in favor of IP, in the name of "consistency", are themselves siding with Disney, one of the most evil companies operating right now from the perspective of the health of the arts and our culture. So they're already fine with siding with corporations; they just happened to pick the pro-IP side.


oh hey, let's have a thought experiment in this world with no IP rules

suppose I write a webnovel that I publish for free on the net, and I solicit donations. Kinda like what's happening today anyway.

Now suppose I'm not good at marketing, but this other guy is. He takes my webnovel, changes some names, and publishes it online under his name. He is good at social media and marketing, and so makes a killing from donations. I don't see a dime. People accuse me of plagiarism. I have no legal recourse.

Is this fair?


There are also unfair situations that can happen, equally often, if IP does exist; and likewise, in those situations, those with more money, influence, or charisma will win out.

Also, the idea that that situation is unfair relies entirely on the idea that we own our ideas and have a right to secure (future, hypothetical) profit from them. So you're essentially begging the question.

You're also relying on a premise that, when drawn out, seems fundamentally absurd to me: that you should own not just the money you earn, but the rights to any money you might earn in the future, had someone not done something that caused unrelated others to never have paid you. If you extend that logic, any kind of competition is wrong!


let's have another thought experiment:

there are two programmers. the first is very talented technically, but weak at negotiation, so he earns median pay. the second is average technically, but very good at negotiation, and he earns much more.

is it fair?

life is not fair.


Surely one can easily see that the second programmer didn't take the first programmer's talent (or his knowledge) and claim it as their own...


Engineers man… of all the problems we see today, giving real power to engineers is probably a root cause of many.


In China, engineers hold the most power, yet the country prospers. I don't think the problem is giving engineers power; rather, it's a cultural thing. In China there is a general feeling of contributing to society; in the US everyone is trying to screw each other over, for political or monetary reasons.


I am.


Absolutely. As any logical person should be.


This is obviously false on its face. Let's say I have a patent, song, or book that I receive large royalty payments for. It would obviously not be logical for me to be in favor of abolishing something that's beneficial to me.

Declaring that your side has a monopoly on logic is rarely helpful.


Either by democracy (more consumers than producers) or ethically (thoughts and intellect are not property), it's logical. I guess it is not logical for someone who makes money from it today.


If you are so adamant about this, why don't you release all your own code in the public domain? Aren't you gatekeeping knowledge too?


I agree with GP, and so, yes, I release everything I do — code and the hundreds of thousands of painstakingly researched, drafted, deeply thought through words of writing that I do — using a public domain equivalent license (to ensure it's as free as possible), the zero clause BSD.


That's commendable, but unfortunately I asked GP.


Is there a link?


Sure!

Personal blog: https://neonvagabond.xyz/ (591,305 total words, written over 6 years; feel free to do whatever you want with it)

My personal github page: https://github.com/alexispurslane/ (I only recently switched to Zero-Clause BSD for my code, and haven't gotten around to re-licensing all my old stuff, but I give you permission to send a PR with a different license to any of them if you wanna use any of it)


I assume it's this one: https://www.gnu.org/software/ddd/

I used it back in Uni, in 98, and it really helped me to understand debuggers. After it, even using gdb made sense.


Mine: https://www.usebox.net/jjm/

Established in 2002.

Went full circle: static, PHP+mysql, python+tornado+redis (I had a nosql phase), python+Django+sqlite, and now static again (but this time with a generator, so it is all md).


> unburdened by how things were before.

What burden are you talking about? Using LLMs isn't that hard, we have done harder things before.

Sure, there will be people who refuse to "let go" and want to keep doing things the way they like them, but hey! I've been productive with vim (now neovim) for 25 years, and I work with engineers that haven't mastered their IDEs at the same level. Not even close!

Sure, they have never been "burdened" by knowing other editors before those IDEs existed, but claiming that I would have a harder time using any of those because I've mastered other tools before is ridiculous.


Not sure how to address this without just restating TFA. Not all change builds on existing knowledge, and sometimes it is so rapid that keeping up is difficult.


It has been years now that I only care about my subscriptions. I also installed an extension to remove anything else (especially shorts!), and that works great for me.

The downside is perhaps that I rarely discover new content, but YT can't be trusted to give me that organically.

Every time I access YT without being logged into my account and without this extension, I'm surprised by the amount of garbage that YT feeds me based on my IP and/or the location they infer from it. I worry what effect that is having on the population that consumes it without safeguards.

Sure, there's always been garbage TV, but this is the next level, and on demand.


I did the same. Unhooked for desktop and UnTrap for iOS. No suggestions, no shorts, and no comments. Just the videos from creators I subscribed to.


Thanks for the suggestion. I just bought the UnTrap creator's social media bundle and tested it with YouTube, X, Facebook, and LinkedIn. It's a Safari extension. My biggest surprise was how pleasant it was to spend time in Facebook.


I find it very difficult to decide on my all-time favourites in anything, but the Hyperion books are pretty close.

Not sure about the Endymion ones.


I couldn't get past the awful sex scenes of "Altered Carbon". OK, after having watched the TV series I should have known, but reading it is completely different. Also, the main character is so dislikeable.

It has been a while since my last Heinlein, you reminded me I should read more.


I finally read The Moon is a Harsh Mistress and it was phenomenal, I'm sure an excellent re-read


Perhaps his best work IMHO.

