sroerick's comments | Hacker News

I'm a little embarrassed by my current workflow, which is:

A. Emacs and org mode on my laptop

B. Neovim to do development via SSH on my dedicated Hetzner box, because my laptop is too potato for dev

C. A bash script to push any random notes I have up to the server

I have used sshfs, syncthing, and unison in the past, but never quite got the workflow for any of them to click.

After about 13 years of trying I still am not as functional as most Dropbox users. I just can't stand Dropbox.


You're looking for tramp-mode. I used tramp-mode for years when working in a lab in grad school, where I'd write code in emacs, have it save via SSH, then build and run the code on the remote. It allows you to use emacs just to author text and to use the remote for everything else.


Ok, so I'm playing with OCAML a lot right now, and it seems like in this workflow I would lose access to all the IDE tooling that is provided. That's not the end of the world, but still a big workflow hit, which is solved by just SSHing into the remote and running Neovim. I'm definitely curious about your workflow, though.


Curious how would you lose it? Do you mean the tooling you're using won't work across Tramp? You should ask in an emacs community for more detailed feedback on this if you're interested.


I'm pretty sure this is the case, as I think the OCAML LSP requires dune to be running in watch mode to provide full information.


Don't be embarrassed by a setup that works.

In the spirit of hopefully constructive feedback:

A/B: Any reason not to do emacs or neovim everywhere? You can copy your dotfiles to the server if needed?

C: I wouldn't/don't use Dropbox either. If bash+scp works then great, but have you considered keeping your files in git? Still easy to sync over ssh from one machine to another, but natively handles things like sync conflicts.


I just haven't found Emacs to be particularly productive over SSH. IMO it works best on a local machine, there's just too much in the GUI which isn't as workable over terminal. Font rendering, images, clickable text links all take a hit. None are really deal breakers, but Emacs TUI just kind of feels like an afterthought. X11 over SSH doesn't feel responsive to me.

It's almost more of an aesthetic choice really; it's just that Emacs feels comfier to me on a local machine. You otherwise lose too much of that feeling of customizing everything to your own taste, which is to me the nicest part of Emacs. It's kind of what I imagine a well tuned Forth to feel like.

Neovim is great over SSH, and I kind of prefer it as an editor - but Org support is too compelling. I've tried Neovim Org configs but they just can't compete with the legacy of Emacs Org. Org roam is unbeatable even with the preponderance of wiki style knowledge base apps. Org publish is just too good, as well. I've played with Neorg, and I really like it as a project, but it does feel like it is about 20 years behind.

I use git a lot but it runs into the large binary problem. I know git-annex is supposed to be good, but I haven't used it much. Syncthing is good, but it's a lot of UI. I like unison, but it isn't super well suited to the 'background sync' workflow.

My laptop is also a modified chromebook with a 50 GB HDD. I could get a real computer and solve a lot of my sync issues tomorrow, but then what would I have to complain about?

I see people with Surface Pros running VB studio, drinking Folger's with no discernible side effects, and they are probably happier and more productive than I am.

Point being, I might try Emacs on Android.


> I just haven't found Emacs to be particularly productive over SSH. IMO it works best on a local machine, there's just too much in the GUI which isn't as workable over terminal. Font rendering, images, clickable text links all take a hit. None are really deal breakers, but Emacs TUI just kind of feels like an afterthought. X11 over SSH doesn't feel responsive to me.

But that's what tramp is for, it works nicely and is surprisingly well integrated into the rest of Emacs. The only obvious downside is initial performance, but that can be worked around by tweaking SSH settings to keep connections open.
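For reference, the usual knobs for that are OpenSSH's connection-sharing options. A minimal sketch for ~/.ssh/config (the host alias and HostName are placeholders for your own box):

    # Reuse one SSH connection for everything going to this host, which makes
    # TRAMP's many short-lived operations much snappier.
    Host devbox
        HostName devbox.example.com
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m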

Another hack I use is to initiate a connection from remote to my local Emacs instance. The use case is ssh'ing into a remote shell, typing "remote-emacs <file-xyz>" and having that open the file on my local machine.

I did that by creating a script that gets my local IP from $SSH_CONNECTION, uses that to ssh into my local machine and executes "emacsclient -n /ssh:$HOSTNAME:$FILEPATH" which then in turn opens the remote file using tramp. Pretty useful.
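In rough outline the script looks something like this (simplified; it assumes sshd and an Emacs server are running on the laptop, that keys are set up in both directions, and that the remote's hostname resolves from the laptop):

    #!/bin/sh
    # remote-emacs: run on the remote box to open FILE in the *local* Emacs via TRAMP.
    # $SSH_CONNECTION is "client_ip client_port server_ip server_port", so the
    # first field is the machine we ssh'd in from.
    local_ip=$(echo "$SSH_CONNECTION" | awk '{print $1}')
    file=$(readlink -f "$1")   # absolute path of the file on this (remote) machine
    ssh "$local_ip" "emacsclient -n /ssh:$(hostname):$file"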


How does it handle things like project hierarchy? Does folder browsing work? Can I use an org-roam database? I've used TRAMP to open single files over SSH, but it seems less functional than mounting with FUSE. But I haven't looked into it extensively.

I am definitely going to build out that bash script for the second use case; that sounds excellent. Thanks, I had no idea you could do that.


Yes, it works basically everywhere you'd interact with a local file or directory.

For example, you open a remote dired buffer with C-x C-f /ssh:host:/dir/. Afterwards, opening a file or navigating to a directory will open it remotely as well. You can also use project functions or magit seamlessly. I have plenty of bookmarks remotely etc.

Fundamentally, you just prepend "/ssh:[user@]host:" to any path or file operation and things will magically Just Work (tm).


Awesome


> but Emacs TUI just kind of feels like an afterthought

This reads as a testament to how far the Emacs GUI has progressed!


Yes, it's actually so good


I've used git-annex and I'll tell you, it's overcomplicated. Git LFS is probably better.


Your setup is pretty awesome. But if you miss dropbox so much, why not set up owncloud on the hetzner machine?


Does your bash script use rsync or does it duplicate some of rsync's functionality? (rsync can also compress data in transit to speed things up.)


It literally just takes a string, formats it as an org entry, and then appends it to an 'incoming.org' file on my remote.

Then I can access the incoming.org file and org-refile entries at will.

Usage is just 'note "note text"'. This is generally how I process notes in org - I collect things in an inbox, and then I elaborate on them and refile them into a fully fledged note or the appropriate context.

It's dead simple, but comfy for my workflows, and it solves the problem of "collecting notes from mobile" without trying to struggle through a mobile Emacs or Org mode session.
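The whole thing is roughly this (hostname, org file path, and the exact entry format here are placeholders, so adjust to taste):

    #!/bin/sh
    # note: append a quick capture as an org heading to incoming.org on the remote.
    # Usage: note "note text"
    printf '* %s [%s]\n' "$*" "$(date '+%Y-%m-%d %a %H:%M')" \
        | ssh myserver 'cat >> ~/org/incoming.org'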


Hi OP, just chiming in here because you mentioned us at Hetzner and I saw your post. I also wasn't sure if the comment from nurettin below was meant to be "NextCloud" instead of "owncloud"...? NextCloud and Dropbox have some very similar use cases. We have a line of NextCloud-based products (Storage Shares). Maybe it would be worth trying out. --Katie


Org is pretty good.


org-mode could have had a chance if they had provided tooling outside the emacs ecosystem. But now LLMs have chosen markdown, so it's destined to forever remain an obscurity.


I'm not really qualified to talk about either topic at length, but my impression is that the Microservice crowd is kind of a different group than the anti-OOP crowd.

As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.

I get not wanting crazy multi tier class inheritance, that seems like a disaster.

In my case, I wanted to do CRUD endpoints which were programmatically generated based on database schema. Turns out - it's super hard without an ORM or at least some kind of object layer. I got halfway through it before I realized what I was making was actually an ORM.

Please feel free to let me know why this is all an awful idea, or why I'm doing it wrong, I genuinely am just winging it.


You're not wrong.

It's fashionable to dunk on OOP (because most examples - like employee being a subtype of person - are stupid) and ORM (because yes you need to hand write queries of any real complexity).

But there's a reason large projects rely on them. When used properly they are powerful, useful, time-saving and complexity-reducing abstractions.

Code hipsters always push new techniques and disparage the old ones, then eventually realise that there were good reasons for the status quo.

Case in point: the arrival of NoSQL and the wild uptake of MongoDB and the like last decade. Today people have re-learned the value of the R part of RDBMS.


Large projects benefited from OOP because large projects need abstraction and modularization. But OOP is not unique in providing those benefits, and it includes some constructs (e.g. inheritance, strictly-dynamic polymorphism) that have proven harmful over time.


Inheritance == harmful is quite an extreme position.


It may be extreme, but it's very common. It's probably the single most common argument used against OOP. If you drop out inheritance, most of the complaints about OO fall away.


Can you share where you've seen inheritance get dropped and it resulted in fewer complaints about OOP?


Anecdotally, yes. In work efforts where inheritance was kept to a minimum (shallow, more sensible, class structures) there were far fewer issues with both complaints and problems caused by it.

Outside that, look to Go. Some people will waste a few pages and hours of their life arguing about whether it is or isn't OO, but it provides everything other OO languages provide except for inheritance (struct embedding kinda-sorta looks like inheritance, but it's composition and some syntax sugar to avoid `deeply.nested.references()`). It provides for polymorphism, encapsulation, and information hiding. The complaints about Go are never (or rarely) about its OO system.


OOP without inheritance is exactly what VB6 had… now it's cool again, I guess.


Always has been.


That position's not uncommon, but generally people who hold it prefer the Rust-style trait/interface system. To me it makes more sense: I don't care what this object is, so long as it guarantees to provide me with certain functionality that I need.


Almost all languages have some sort of object representation, right? Classes with their own behavior, DTOs, records, structs, etc. What language are you working in? If you're coupled to a specific database provider anyway, there's usually a system table you can query to get your list of tables, column names, etc., so you could almost just use one data source and only need to deal with its structure to provide all your endpoints (not really recommending this approach).


This is probably the correct solution for this use case, but obviously and objectively much harder than object.get(id=1).

I was mainly doing this in Go, posted more in a side post.


> As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.

That's because you are wrong. There's nothing in relational database mapping that makes objects a better target than even the normal data structures you see in functional languages.

> In my case, I wanted to do CRUD endpoints which were programmatically generated based on database schema.

Which is a pure transformation.

The problem is that CRUD applications are an incredibly poorly explored area. The only mature projects out there are the OOP ORM ones. That's not because OOP is inherently better suited to the application; it's because there are simply not a lot of people willing to take the risk of working on that problem.

(And the reason people are not willing may be that developers don't choose their tools through rational evaluation, or maybe it's some irrational reason, IDK. Mine is certainly that I know if I built an amazing system, nobody would come.)


> Which is a pure transformation.

I don't know! Can you explain it, or how I would use it for this application?

> And the reason people are not willing can be because developers don't choose their tools through rational evaluation, or may be some irrational one IDK.

I think "do the tools exist in the world" is a pretty rational evaluation. I'd love to see the FP equivalent!


> I don't know!

A pure function is a function whose output depends only on its parameters and that doesn't have any internal state. Which in my opinion is a somewhat useless distinction, because you can make your state/instance/whatever the first parameter (what C and Python do) and, tada, every function that doesn't use globals (or statics in C) is a pure function.


a pure function doesn't have side effects. it doesn't read from nonlocal state or trigger distant changes nor perform any manner of io.

a pure function doesn't change its inputs, it simply uses them to craft its outputs.

bringing in an object or function via parameter to perform these side effects still leaves the function impure.


I agree that a return type and a parameter are quite different things in a specific implementation, but on an abstract level they are basically the same, and there is no reason to distinguish between output parameters and the return value other than language syntax.

I don't see how merging an input and an output parameter into a single in/out parameter makes any difference, other than making it inconvenient for the caller to keep the old state around, which is often what you want. The fact that the callee can then derive the new state from the old state by only specifying the changed values does not fundamentally change anything and is only convenient for the implementation.


Is that distinct from a pure transformation?


A pure transformation is an abstract operation; a pure function implements it.


"OOP actually makes a ton of sense for CRUD and database operations."

Not at all. OOP is great at simulations, video games, and emergent behaviour in general. If you do CRUD with OOP you will complain about overengineering.


I think that's fair, and I generally prefer a lighter stack for CRUD, but I still love Django and Rails. Maybe just having "objects" is not enough to qualify as OOP but for many use cases, the convenience offered by "Batteries Included" is worth the trade off in "overengineering".

If I have to build an app, I'm going for rails. If I'm building a back end, I'm reaching for Go. If I need to integrate with Python libraries, Django is great.

But ask me again when I get to the other side of some OCAML projects


Even in video games, I avoid inheritance, I always much prefer composition. Build a complex object from many small objects, then vary behavior with parameters rather than deriving a child class and overriding methods.


Right, that's still OOP.


Oh yeah, just saying inheritance is the part I don't like.


Inheritance and composition are fundamentally the same. This is easy to see when you implement it in a language that has no specific support for either (like C). Inheritance is basically composition of the data and the vtable, and then overriding the default vtable entries with the subtype's implementations.


OOP does not simplify CRUD or DB ops, because you want to batch.

You don’t want lazy loading. You don’t want to load 1 thing. You don’t want to update 1 thing.

You want to actually exploit RETURNING and not have the transaction fail on a single element in a batch.

If you care about performance you do not want ORM at all. You want to load the response buffer and not hydrate objects.

If you ignore ORM you will realize CRUD is easy. You could even batch the actual HTTP requests instead of processing them 1 by 1. Try to do that with a bunch of objects.

I would personally never use ORM or dependency injection (toposort+annotations). Both approaches in my opinion do not solve hard problems and in most cases you don’t even want to have the problems they solve.


I agree with you, but I am not sold on optimizing for performance above all else.

Business logic ran fine on ancient mainframes. It can run fine on Raspberry Pis.

CRUD is super easy. It's also not super resource intensive.

I know that's the path that led us all down into Java OOP / "the Start menu is a React Native component", but it is actually true.

ORM adds a convenience layer. It also adds some decent protection against SQL injection OOTB, and other dev comforts.

Is that trade off worth it? Probably not. But sometimes it's the best tool for the job


Optimize for your users above all else. Yes, even above developer experience. If that means optimize for performance, you optimize for performance.

The only thing that matters is what your users feel when using your product. Everything else is a waste of time. OOP, FP, language choice, it's all just fluff.


I think this is a wonderful philosophy, but many times the actually more important thing is "optimize for the client's budget"


Sometimes the client just can't afford your services. There is nothing wrong with that. :)


ORM gives:

1. Migration

2. Validation when inserting

3. Validation when loading

3.1. Serialization

4. Joins

5. Abstracts away db fully if you want, not use db specific features

6. Lazy loading as an encapsulation promoting mechanism

None of these things are especially hard and I’d argue query builders that compose and some other tools deal with these points in a simpler and more efficient manner. Migrations in most cases require careful consideration with multiple steps. Simple cases are simple without ORM.

I’m pretty confident most users of ORM are dealing with problems inflicted by ORM behavior, not db. The biggest infliction is natural push towards single-entity logic that is prevalent in OOP and ORM design.


Are you suggesting that you can't update more than 1 thing at a time in a transaction with any ORM?


No.

You can go further and ask me if I imply lazy loading is mandatory.

Imagine what happens when lazy loading turns off. You lose encapsulation. How will OOP work now if you have to reason about your whole call stack and know what exactly has to load up front?

Why can lazy loading be turned off? What if I write my code BAU and then realize I need to turn it off?


Sorry, there must be a language barrier. Good luck.


If you want to participate, paste this into an LLM and ask it questions. Might be a barrier that has nothing to do with language.


> As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.

I've heard this a lot in my career. I can agree that most object-oriented languages have had to do a lot of work to make CRUD and database operations easy to do, because they are common needs. ORM libraries are common because mapping between objects and relations (SQL) is a common need.

It doesn't necessarily mean that object-oriented programming is the best for CRUD because ORMs exist. You can find just as many complaints that ORMs obfuscate how database operations really work/think. The reason you need to map from the relational world to the object world is because they are different worlds. SQL is not an object-oriented language and doesn't follow object-oriented ideals. (At least, not out of the box as a standardized language; many practical database systems have object-oriented underpinnings and/or present object-oriented scripting language extensions to SQL.)

> it's super hard without an ORM or at least some kind of object layer

This seems like you might have got caught in something of a tautological loop: because you were working in a language with "object layers", it seemed easiest to work with one, and thus to work with an ORM.

It might also be confusing the concepts of "data structure" and "object". Which most object-oriented languages generally do, and have good reason to. A good OOP language wants every data structure to be an object.

The functional programming world still makes heavy use of data structures. It's hard to program in any language without data structures. FP CRUD can be as simple as four functions `create`, `read`, `update`, and `delete`, but still needs some mapping to data structures/data types. That may still sound object-oriented if you are used to thinking of all data structures as "objects". But beyond that, it should still sound relatively "easy" from an FP perspective: CRUD is just functions that take data structures and make database operations or make database operations and return data structures.

A difference between FP and OOP's view of data structures is where "behaviors" live. An object is a data structure with "attached" behaviors which often modify a data structure in place. FP generally relies on functions that take one data structure and return the next data structure. If you aren't using much in the way of class inheritance, if your "objects" out of your ORM have few methods of their own, you may be closer to FP than you think. (The boundary is slippery.)


> OOP actually makes a ton of sense for CRUD and database operations.

OOP is nothing but trouble when you try to do some advanced database operations. Select some columns, aggregate them. That is hard in OOP. Throw in window functions and OOP just decides you don't exist.


OOP is a fine paradigm for an abstraction boundary; the problem is actually abstraction boundaries in the wrong places. Make your object the database, and none of these problems exist. I know none of these OOP-first/only languages encourages that.


results = database.query(from table select column)

That is a bold take these days. You will take a lot of flak if any of the parts just happens to barely look like SQL. Either you are on the ORM side or on the wrong side.


Not at all what I mean.

    std::vector<User> users = database.recipient_list_for_message_board(message_thread, Database::ORDER_RECENT, /*skip=*/skip, /*inverse=*/true);


> This seems like you might have got caught in something of a tautological loop situation that because you were working in a language with "object layers" it seemed easiest to work in one, and thus work with an ORM.

I mean, I think this is likely the case. So, I tried this, for example in Go, which is not really a proper functional programming language as I understand it, but is definitely not object-oriented.

So for my use case, I wanted to be able to take a database schema and programmatically create a set of CRUD endpoints in a TUI. Based on my pretty limited knowledge of Go, I found this to be pretty challenging. At first, I built it with Soda / Pop, the ORM from Buffalo framework. It worked fairly well.

Then I got frustrated with using Soda outside Buffalo, and yanked the ORM out to try and remove a layer. Using vanilla Go, it seems like the accepted pattern is that you create separate functions for C, R, U, and D, as you referred to. However, it seems like this is pretty challenging to do programmatically, particularly without sophisticated metaprogramming, and even if you had a language with complex macros or something, that is objectively significantly harder than object.get() and object.save().

Finally, I put GORM back in, and it worked fine. And GORM is a nice library, even though I think having an ORM is not the "Go" way of doing things in the first place. But also, GORM is basically using function magic to feel like OOP. And maybe the problem with this idea is that it's not "proper Go" to make a thing like this; it would be better to just code it. There's an admin panel in the Pagoda Go stack which relies on the ent ORM to function as well. I can only guess at the developers' motivations, but I assume they are along the same lines as my experience.

I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.

In the real world, in business logic, objects do things. They aren't just data structures.

To summarize: CRUD seems pretty easy in any language, but programmatically doing CRUD seems super hard in FP. Classes make that a lot easier. Maybe we shouldn't do that ever, and that's fine, but I'm a Django guy, I love my admin panels. Just my experience.


> I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.

Methods at all make a language OOP. Class inheritance is almost a side quest in OOP. (There are OOP languages with no class inheritance.)

Go seems quite object-oriented to me. I would definitely assume it is easier to use an ORM in Go than to not use an ORM.

I don't use a lot of Go, so I can't speak to anything about what the "proper Go" way of doing things is.

I could try to describe some of the non-ORM, functional programming ways of working with databases as I've seen in languages like F#, Haskell, or Lisp, but I'm not sure how helpful that would be to show that CRUD is not "super hard" in FP especially because you won't be familiar with those languages.

The thing I'm mostly picking up from your post here is that you like OOP and are comfortable with it, and that's great. Use what you like and use what you are comfortable with. OOP is great in that a lot of people also like it and feel comfortable with it.


I get how to do CRUD in FP - I don't get how to generate endpoints automatically in FP. Is anybody doing that?


PostgREST


Dude awesome. I didn't realize this was Supabase backend. I haven't used Supabase but I really like their approach.


> but is definitely not object-oriented.

I think Go is pretty much an OOP-like programming language. While it maybe does not "look like" an OOP language, it seems to me to allow a wide range of constructs and concepts from OOP.

I am not a Go programmer just reading about it, so I could be wrong.


Maybe I am not exactly what you mentioned, but I do feel OOP set us back about a decade or two, and I do think the general concept of microservices is a good idea. But maybe to your point, these beliefs are completely orthogonal to one another, and why they are mentioned as being related baffles me. To be honest the whole post baffled me, and I am disappointed I cannot downvote the submission. Anyway, more to your topic: OOP in the early 2000s was put on this massive pedestal, and trying to point out its flaws would often get you chastised or shunned, and labeled as someone who just didn't get OOP. But the object hierarchies often became their own source of inflexibility, and shoehorning something new into them could be very difficult, often involving an hour or three of debate/meetings on how best to make the change.

Microservices are more about making very concrete borders between components, with an actual network in between them... and really a contract that has to be negotiated across teams. I feel the best thing this did was force a real conversation around the API boundary and contract. Monoliths turn into a big ball of mud once a change slips through that passes in an entire object when just a field is needed, and after a few of these now everything is fairly tightly coupled. Modern practices with PRs could prevent a lot of this, but there is still a lot of rubber stamping going on and they don't catch everything.

Objects themselves are fine ideas, and I think OOP is great when you focus on composition over inheritance, and bonus points if the objects map cleanly into a relational database schema; once you start getting inheritance hierarchies, they often do not. If I had to guess, your experience with OOP is mostly using ORMs where you define the data and it spits out a table for you and some accessor methods, and that works... until it doesn't. At a certain level of complexity the ORM falls apart, and what I have seen in nearly every place I have worked is that at some point some innocuous change gets included and now all of a sudden a query does not use an index properly, and it works fine in dev, but then you push it to prod and the DB lights on fire and it's really difficult to understand what happened. The style of programming you are talking about would be derided by some old heads as "C with objects" and not "really" OOP. But I do think you are onto something by taking the best parts and avoiding the bad.

"Micro" services aren't great when they are taken to their utmost tiny size, but the idea of a problem domain being well constrained into a deployable unit usually leads to better long term outcomes than a monolith, though its also very true that for under $10k you can get 32 cores of xeons and about 256 gigs of ram, and unless you are building something with intense compute requirements, that is going to get you a VERY long way in terms of concurrent users.


Pretty crazy, right? It almost seems like a honeypot


Isn't this fundamentally the same problem as ad blockers? Which is essentially a solved problem


Huh? Blocking senders as you surf the web based on what you want to see is a completely different problem from blocking requests to your server based on what the intent of the requester is. I can think of no way these problems are similar except in the very narrow technical sense of maintaining a blocklist and attaching it to a request cycle, which is really not the hard part of either of these problems.


Why couldn't there be a crowdsourced list of ips to block similar to adblocker? You could set flags of IPs to block based on your preferences


Because IPs are shared.


IPs are not shared without limit.

All IPs are allocated to CIDR blocks and Autonomous Systems, the latter identified by their Autonomous System Number (ASN). It's reasonably straightforward and tractable to track good/bad behaviour by either, and (thanks to the Law of Large Numbers and Power Laws), there's virtually always a very small number of absolutely horribly-misbehaved blocks from which a large fraction of abuse originates. Moreover, at sufficiently fine detail, it's possible to identify both friendly and hostile address spaces, permitting carve-outs for the former and scaled response against the latter.

The second part of this approach is that defences need not be all-or-nothing, universal, and/or unscaled. A netblock with a few bad actors might be subject to a slight performance penalty. A netblock with no non-hostile traffic could be blocked entirely (or tarpitted or otherwise subject to negative performance impacts). And of course, reputation data can be shared, as a broader view (one which, say, a large CDN or monitoring service might have) is going to provide both earlier warning and greater detail of where hostile activity originates. And individual instances of good behaviour could be excepted from broader blocks.

Ultimately, connectivity providers, whether of data centres or residential / organisational / mobile Internet services, should be encouraged to police their own outbound traffic and take actions themselves in the event of identified abusive behaviour. (That's been a long-standing dream of mine, it's ... stubbornly refused realisation.)


I actually completely agree. I am learning OpenBSD and the man pages are very good, but all too often I find myself reading them, beating my head against the wall, and then googling or using tldr or gippity.

For example, I was just digging into BSD_auth and authenticate, and I don't know much about how auth works generally. I found it pretty tough to grok from the man page. I love the idea of learning everything from directly within the system and man pages, but I might just not be smart enough for that.


This is very cool! I am mildly disappointed that it isn't called Wikipedia Brown. Despite this, great work


Fantastic! The REPL is the thing I missed the most in Emacs


I have not tried Zorin, but it is near the top of distros I would give to a "set it and forget it" non-techie.

Anybody have a top 5 list for distros in this realm? I'm an Artix / OpenBSD guy these days and I feel like I'm too far down the rabbit hole to know what's good for new folks.


Debian. I'm not trolling: it's easy to install, very stable, and KDE, while allowing everything to be customized, is awesome out of the box. It's way more oldschool than what I'm seeing of Zorin there, but for the oldschoolers the feeling of owning your computer again is priceless. I've been a Pop!_OS user too for quite a long time and it's very polished, but too Fisher-Price-ish for a desktop to me.


I don’t think distros with older kernels should be generally suggested to newbies who can have a smattering of hardware. Unsupported hardware can be such a showstopper for new users.


I have installed Zorin on my parents' 10-year-old computer. They use it for browsing the internet and some video calls. Zorin works like a charm, and they have not had any major issues.


Stories of people giving grandma a Linux computer always surprise me.

Zorin in particular was the distro that made me stop using Linux a few years ago, the day I turned on my computer and all of a sudden everything was completely messed up. It took me a long time to recover the DE and get everything back to working condition. Immediately after, I went back to Windows for the first time in years, which I don't love, but at least the OS is always there when I turn the PC on.

How do people give their grandma a Linux pc and never hear from them again? Obviously a catastrophic failure like mine is not normal; and if you need 100% stability for a mission-critical system, I don't doubt you could accomplish it much better with Linux than Windows, but that's not by default. Do you disable automatic updates on grandma's PC?


And here I am looking at the Windows 11 machine I keep around to play a few games, which has forced me to do a complete reinstall four times because Windows updates broke it overnight, even though I had auto-update turned off...


My grandmother was fine for about a decade. I did all the maintenance stuff though, including a couple rough upgrades, one where I tarballed her home directory, did a clean install, and restored the tarball. In the end, it worked fine for her, as she really didn't change much... the only apps she really used were the browser and a handful of old Windows games installed through Wine.


With complete sincerity: I would like to hear as much about your linux-using retro-gamer grandma as you feel comfortable sharing, she is an icon.


She had a couple of old card and casino games she bought in the late '90s... they installed in WINE without any real issues at all, a total surprise to me, but they were likely just using simple GDI calls or whatever, prior to DirectX really taking over. I had also installed a handful of similar games via the distro repositories.

She mostly used her browser for email (Yahoo) and to order grocery delivery once a week. She emailed and shared pictures with extended family quite a bit.

Nothing really extreme at all, and not really a heavy gamer by any means. Just casual play. Oh, she liked a few of the columns/gems type games as well.


I think it depends a lot on who you're recommending something to. A fellow developer who wants to be productive? Probably Arch or CachyOS. Someone curious about Linux but needs a lot of existing resources and hand-holding? Probably still Ubuntu, or maybe Mint. Someone who wants to really dig into things and see something new? TempleOS (RIP Terry). Someone who likes the same type of programming languages as myself? Mezzano.


Never heard of Mezzano until today, looks neat! Do you run in a VM? How much general purpose computing do you do in it?


ArchLinux is the best alternative for yesterday's Windows 10 user


I love Arch, but suggesting that even an above average Windows 10 user migrate to Arch without prior Linux experience is just irresponsible. They may be left with an unusable machine if they format their "C:" drive but don't manage to properly install Arch and a DE. Mint or even plain Debian seem far better for this, and updating such systems is usually more predictable.


How so? Arch is much more technical and does not even come with a graphical package manager by default.

