Hacker News | jrm4's comments

Anecdotally, it was wild that literally only one of my college-age students knew about the Moon mission.

I genuinely now believe that a real barrier to (the terrible idea of) reinstating the draft is that it would actually be difficult to reach and inform the public about it in a believable way.


It's not really a challenge of reaching those students; the outlets that could otherwise reach them simply aren't interested in spreading that particular news, because it goes against their ideological beliefs.

And if you ask yourself "wait, why wouldn't they want to inform people of a NASA moon mission?", you're really behind the ball on what's going on.

If it were information that they actually wanted to spread, it would be spread far and wide and reach those students.


I think I generally agree, except it ain't even as deep as "ideology," it's just clicks and algorithms...

Perhaps I would have heard of this moon mission if HN wasn’t busy inundating me with politics and AI product threads.

I feel like I vaguely remember hearing about it a while back, with little fanfare, and then not again until just yesterday.

In that time I’ve learned of all kinds of crazy developments in politics and AI companies here on HN.


> Perhaps I would have heard of this moon mission if HN wasn’t busy inundating me with politics and AI product threads.

Personal accountability can still be something we all strive to honor. Blaming a news aggregator website for your own ignorance is a hell of a thing.


Is this related to that thing where somehow the entire damn world forgot about the power of Boolean (and other precise) searching?

Do y'all mean the backend, the Ollama frontend, or both? I find it trivially easy to sub my local Ollama API into virtually all of the interesting frontend things. I'm quite curious about the "why not Ollama" here.

Ollama user with the opposite question -- why not? What am I missing out on? I'm using it as the backend for playing with other frontend stuff and it seems to work just fine.

And as someone running a 16GB card, I'm especially curious as to whether I'm missing out on better performance?


> Ollama user with the opposite question -- why not? What am I missing out on? I'm using it as the backend for playing with other frontend stuff and it seems to work just fine.

Used to be an Ollama user. Everything you cite as a benefit of Ollama is what drew me in the first place as well, before I moved on to using llama.cpp directly. Beyond behavior I consider extremely unethical, the issue is that they try to abstract away a bit too much, especially when LLM output quality is highly affected by a bunch of parameters. Can you tell at a glance what quant you're downloading, what size the model is, or whether it's optimized for your arch?

`ollama pull gemma4`

(Yes, I know you can add parameters etc., but the point stands, because this is sold as noob-friendly. If you're going to be adding CLI params to tweak things, why not just do the same with llama.cpp?)

That became a big issue when DeepSeek R1 came out, because everyone and their mother was making TikToks saying you could run the full-fat model, without explaining that it was a distill, a distinction Ollama had abstracted away. Running `ollama run deepseek-r1` means nothing when the quality ranges from useless to super good.

> And as someone running at 16gb card, I'm especially curious as to if I'm missing out on better performance?

I'd go so far as to say I can *GUARANTEE* you're missing out on performance if you are using Ollama, no matter the size of your GPU's VRAM. You can get a significant improvement just by running the underlying llama.cpp directly.

Secondly, it's chock full of dark patterns (like the ones above) and anti-open source behavior. For some examples:

1. It mangles GGUF files so other apps can't use them, and you can't access them either without a bunch of work on your end (I had to script a way to unmangle the long sha-hashed file names).

2. Ollama conveniently fails to contribute improvements back to the original codebase (technically they don't have to, thanks to MIT), and they didn't bother assisting llama.cpp in developing multimodal capabilities and features such as iSWA.

3. Any innovation they do ship is just piggybacking off of llama.cpp that they try to pass off as their own, without contributing back upstream. When new models come out they post "WIP" publicly while twiddling their thumbs waiting for llama.cpp to do the actual work.
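The unmangling script mentioned in point 1 can be sketched in shell. This is a rough sketch under assumptions, not Ollama's documented format: it assumes Ollama's default on-disk layout (`manifests/` and `blobs/` under `~/.ollama/models`, with model-layer digests of the form `sha256:...` stored as `sha256-...`), and the function name and output naming scheme are illustrative.

```shell
# Hypothetical sketch: symlink Ollama's sha-named blobs back to readable .gguf names.
# Assumes Ollama's default layout; layout details may change between versions.
unmangle_ollama() {
  ollama_dir="$1"   # e.g. "$HOME/.ollama/models"
  out_dir="$2"      # where the readable symlinks should land
  mkdir -p "$out_dir"
  find "$ollama_dir/manifests" -type f 2>/dev/null | while read -r manifest; do
    # The model layer's digest is the blob's on-disk name (sha256:x stored as sha256-x)
    digest=$(tr ',' '\n' < "$manifest" \
      | grep -A1 'image\.model' \
      | sed -n 's/.*"digest":"\(sha256:[^"]*\)".*/\1/p' \
      | head -n 1)
    [ -n "$digest" ] || continue
    blob="$ollama_dir/blobs/$(printf '%s' "$digest" | tr ':' '-')"
    # Build a readable name from the manifest path, e.g. library-llama3-latest.gguf
    name=$(printf '%s' "$manifest" | awk -F/ '{print $(NF-2)"-"$(NF-1)"-"$NF".gguf"}')
    [ -e "$blob" ] || continue
    ln -sf "$blob" "$out_dir/$name"
  done
}
```

Once linked, the blobs can be loaded directly by llama.cpp or any other GGUF consumer.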

It operates in this weird "middle layer": kind of user-friendly, but not as user-friendly as LM Studio.

After all this, I just couldn't continue using it. If the benefits it provides you are good, then by all means continue.

IMO just finding the optimal parameters for a model and aliasing them in your CLI would be a much better experience, ngl, especially now that llama.cpp has llama-server, a nice web UI, and hot reloading built in.
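That aliasing can be as simple as a shell function wrapping a tuned llama-server invocation. A minimal sketch: the model path, quant, and flag values here are assumptions for illustration, not recommendations, and note the quant is visible right in the filename.

```shell
# Hypothetical sketch: bake tuned llama-server flags into a shell function.
# Model path, quant, and flag values are assumptions; tune for your hardware.
gemma_server() {
  llama-server \
    -m "$HOME/models/gemma-2-9b-it-Q4_K_M.gguf" \
    --ctx-size 8192 \
    --n-gpu-layers 99 \
    --port 8080 \
    "$@"
}
```

Drop it in your shell rc file and `gemma_server` starts the server with your chosen parameters every time, while extra flags can still be passed through.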


> 1. It mangles GGUF files so other apps can't use them, and you can't access them either without a bunch of work on your end (I had to script a way to unmangle the long sha-hashed file names)

This is what pushed me away from Ollama. All I wanted was to scp a model from one machine to another so I didn't have to re-download it and waste bandwidth. But Ollama makes it annoying, so I switched to llama.cpp. I did also find slightly better performance on CPU vs Ollama, likely due to compiling with -march=native.
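For anyone building llama.cpp themselves, the native-CPU tuning is a build option. A sketch, assuming a local checkout of the llama.cpp repo; `GGML_NATIVE` is the CMake flag that lets the compiler target the host CPU, similar in effect to `-march=native`.

```shell
# Sketch: build llama.cpp optimized for the host CPU (assumes a local checkout).
cmake -B build -DGGML_NATIVE=ON
cmake --build build --config Release -j
```

And since the models are plain GGUF files on disk, moving one between machines is just an `scp` of the file itself.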

> (they don't have to technically thanks to MIT)

Minor nit: I'm not aware of any license that requires improvements to be upstreamed. Even GPL just requires that you publish derivative source code under the GPL.


Ollama has had bad defaults forever (stuck on a default ctx of 2048 for like two years) and they're typically late to support the latest models vs llama.cpp. Absolutely no reason to use it in 2026.

Yeah, it feels like the real lede here may not be "Andreessen is a soulless weirdo" but "it's very clear that a lot of our tech 'leaders' are soulless weirdos, which is why this feels like a good explanation of what he's saying."

Can we keep the term "weirdo" positive?

A lot of the people once called weirdos -- a term now partly reclaimed as fairly positive, as in "weird nerds" -- are our hackers, creative thinkers and artists, progressives slightly ahead of history, etc.

The massive problem with tech industry "leaders" is not weirdos/nerds. It's greedy sociopaths, narcissists, and nepo baby halfwits who merely stumbled into way too much power.

Some prominent ones are now openly and proudly presenting themselves as toxic for society/humanity, and even as ruthless fascists.

Call the bad people what they are, but let's be nice to the good weirdos.


Great question and my gut is that it makes it that much easier for large, perhaps corporate interests to gain surveillance and control. I'm aware it's possible now, but it really feels like there's some safety in the friction of the possibility that my home devices just switch up IP addresses once in a while.

Like, wouldn't e.g. IPv6 theoretically make "ISPs charging per device in your home" easier, if only a little bit? I know they COULD just use MAC addresses, but still.


You can't correlate the number of addresses with the number of devices because IPv6 temporary addresses exist. If you enable temporary addresses, your computer will periodically randomly generate a new address and switch to it.

https://www.rfc-editor.org/rfc/rfc8981.html
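On Linux, for instance, enabling this is a per-interface sysctl. A sketch: `eth0` is an example interface name, the lifetime value is illustrative, and defaults vary by distro.

```shell
# Linux sysctls for RFC 8981 temporary addresses (eth0 is an example interface).
# use_tempaddr: 0 = disabled, 1 = generate but prefer stable, 2 = prefer temporary
sysctl -w net.ipv6.conf.eth0.use_tempaddr=2
# Rotate more often by shortening the preferred lifetime, in seconds
# (sic: the kernel really does spell this sysctl "prefered")
sysctl -w net.ipv6.conf.eth0.temp_prefered_lft=86400
```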


I feel like this is a silly narrowing of the problem for normal, retail users. My priority isn't masking "the number of addresses" or devices. My desire is to not have a persistent identifier to correlate all my traffic. The whole idea of temporary addresses fails at this because the network prefix becomes the correlation ID.

I'm not an IPv4 apologist though. Clearly the NAT/DHCP assignments from the ISP are essentially the same risk, with just one shallow layer of pseudo-obscurity. I'd rather have IPv6 and remind myself that my traffic is tagged with my customer ID, one way or another.

Unfortunately, I see no real hope that this will ever be mitigated. Incentives are not aligned for any ISP to actually help mask customer traffic. It seems that onion routing (i.e. Tor) is the best anyone has come up with, and I suspect that in today's world, this has become a net liability for a mundane, privacy-conscious user.


> My desire is to not have a persistent identifier to correlate all my traffic.

Reboot your router. Asus (with the vendor firmware) allows you to do this on a schedule. You'll get a new IPv4 WAN IP (for your NAT stuff) and (with most ISPs) a new IPv6 prefix.

As it stands, if you think NAT hides an individual device, you may have a false sense of security (PDF):

* https://oasis.library.unlv.edu/cgi/viewcontent.cgi?article=1...


> The whole idea of temporary addresses fails at this because the network prefix becomes the correlation ID.

So the same as the public IPv4 on a traditional home NAT setup?


Most home users do not have a static public IPv4 address - they have a single address that changes over time.

But most ISPs aren’t giving out static IPv6 prefixes either. Instead they are collecting logs of what addresses they’ve handed out to which customer and holding on to them for years and years in case a court requests them. Tracking visitors doesn’t need to use IP addresses simply because it’s trivial to do so with cookies or browser fingerprinting. There’s exactly zero privacy either way.

> Instead they are collecting logs of what addresses they’ve handed out to which customer and holding on to them for years and years in case a court requests them.

They are only supposed to hang on to them for a limited time according to the law where I live (six months AFAIK). Courts are also unwilling to accept IPv4 addresses as proof of identity.

> Tracking visitors doesn’t need to use IP addresses simply because it’s trivial to do so with cookies or browser fingerprinting

Cookies can be deleted. Browser fingerprinting can be made unreliable.

It's not zero privacy either way. Privacy is not a binary: giving out more information reduces your privacy.


> Most home users do not have a static public IPv4 address - they have a single address that changes over time.

I'd be curious to know the statistics on this: I would hazard to guess that for most ISPs, if your router/modem does not reboot, your IPv4 address (and IPv6 prefix) will not change.


Anecdatally, no. I had to yell at my ISP because mine changed AND I PAY FOR A STATIC ADDRESS.

"If you enable" is doing ALL THE HEAVY LIFTING THERE.

Again, my point isn't about what is possible, but what is likely -- which is MUCH MORE IMPORTANT for the real world.

If we'd started out in an IPv6 world, the defaults would have been "easy to discover unique addresses" and it's reasonable to think that would have made "pay per device" or other negatives that much easier.


Temporary addresses are enabled by default in macOS, Windows, Android, and iOS. That's what, like 95% of the consumer non-server market? As for Linux, that's up to each distro to decide. It looks like they are _not_ the default on FreeBSD, which makes sense because that OS primarily targets servers (even though I use it on my laptop).

Temporary addresses are used by any Linux distro using NetworkManager (all desktop ones). For server distros, it can differ.

In GNOME it's just a toggle in the network settings.

> ALL THE HEAVY LIFTING THERE

> MUCH MORE IMPORTANT

I haven't done the exhaustive research, but props in advance for being the only person shouting in caps on HN. Definitely one way to proclaim one's non-AI-ness without forced spelling errors.


Didn't even think about that. Interesting.

And most OSes do enable it by default.

Right? Kinda weird; I wonder what tiny pie it is they think they're fighting over, and what makes any of these individual projects think they're powerful enough over the others (not saying they might not be).

Good. While I don't condone anything illegal, influential code like this is nearly always better made public.

This feels right. I think stronger differentiation between the big and the small is important. There's room for a "big everywhere thing," and I think ATProto is probably the wrong way to do it; as I've said elsewhere, ATProto's most important feature ("take it with you") is potentially its greatest weakness and danger.

AKA, if ATProto takes off as a big thing, you've made surveillance and data gathering by Big Brother THAT MUCH EASIER.


I agree; and anything you think is good can possibly be used against you. This is why I think ATProto is possibly dangerous: it makes Big Brother's job easier, as opposed to how ActivityPub does it.
