hparadiz's comments

VRChat won because it's a relatively open platform. That's it. The people in there spent money on Meta hardware when it was better but they would then use it only in VRChat.

If a big company embraced an open platform I suspect the space would be far more successful. Still a lot of untapped potential.

VRChat is successful because someone can show up in a Goku avatar and start roleplaying. A DJ can stream their Twitch stream right into an instance.

VRChat still has no real store system; people manually upload Unity projects to use a custom avatar. There's an entire universe of potential revenue if a clothing, avatar, and instance-space system were built into the client.


You can buy avatars now from the VRChat marketplace for VRChat credits (that are essentially Japanese Yen in value :D). It is progress, but with the unfortunate practice of the platform reportedly taking a sizeable cut.

In that regard, the long-standing practice of artists and users transacting directly for their creations (mainly avatars) via Booth or Gumroad can be seen as healthier and more robust in the long term.


10 years from now: "Can you believe they did anything with such a small context window?"

More likely: "Can you believe they were actually trying to use LLMs for this?"

OSes and software engineers did not end up using less RAM.

Measurable responses to environmental pressure lag behind; Moore's law has been slowing down (edit: and demand has been speeding up, a lot).

From a sustainability standpoint alone, I really hope the parent post's quote comes true, because I've personally seen LLMs used over and over to complete the same task when they could have been used once to generate a script, and I'd really like to still be able to afford to own my own hardware at home.


How many times have we implemented Hello World?

I'm using local models on a 6 year old AMD GPU that would have felt like a technology indistinguishable from magic 10 years ago. I ask it for crc32 in C and it gives me an answer. I ask it to play a game with me. It does. If I'm an isolated human this is like a magic talking box. But it's not magic. It doesn't use more energy than playing a video game either.


Which models?


Thanks! I've been playing with some of the qwen models via openrouter as well.. I'll have to give 9b a go at some point, I've been mostly playing with 27b and coder-next up till now.

10 years from now: "The next big thing: HENG - Human Engineers! These make mistakes, but when they do, they can just learn from it and move on and never make it again! It's like magic! Almost as smart as GPT-63.3-Fast-Xtra-Ultra-Google23-v2-Mem-Quantum"

I would love to live in a world where my coworkers learn from their mistakes

Is this Human 2.0? I only have the 1.0a beta in the office.

I get the joke but it really does highlight how flimsy the argument is for humans. IME humans frequently make simple errors everywhere, don't learn from them, and get things right the first time very rarely. Damn. Sounds like LLMs. And those are only getting better. Humans aren't.


> Did you know if you ask <X> a question and it doesn't know the answer, sometimes it just makes something up?!

I think maybe a lot of us live in a bubble where the above statement is less frequently true of our peers than average.


Imagine believing humans don’t make the same mistakes. You live in a different universe than me buddy.

Sometimes we repeat mistakes. But humans are capable of occasionally learning. I've seen it!

I've always wanted a better way to test programmers' debugging in an interview setting. Like, sometimes just working through problems gets at it, but usually it's just the "can you re-read your own code and spot a mistake" sort of debugging.

Which is not nothing, and I'm not sure how LLMs do on that style; I'd expect them to be able to fake it well enough on common mistakes in common idioms, which might get you pretty far, and fall flat on novel code.

The kind of debugging that makes me feel cool is when I see or am told about a novel failure in a large program, and my mental model of the system is good enough that this immediately "unlocks" a new understanding of a corner case I hadn't previously considered. "Ah, yes, if this is happening it means that precondition must be false, and we need to change a line of code in a particular file just so." And when it happens and I get it right, there's no better feeling.

Of course, half the time it turns out I'm wrong, and I resort to some combination of printf debugging (to improve my understanding of the code) and "making random changes", where I take swing-and-a-miss after swing-and-a-miss changing things I think could be the problem and testing to see if it works.

And that last thing? I kind of feel like it's all LLMs do when you tell them the code is broken and ask them to fix it. They'll rewrite it, tell you it's fixed and ... maybe it is? It never understands the problem it's fixing.


I mean, that is not what they are writing buddy.

I am kind of already at that point. For all the complaining about context windows being stuffed with MCPs, I am curious what they are up to and how many MCPs they have that this is a problem.

10 years from now: “what’s a context window?”

10 years from now: “come with me if you want to live”

Terminator 2 Clip: https://youtu.be/XTzTkRU6mRY?t=72&si=dmfLNDqpDZosSP4M


“640K ought to be enough for anybody”

I dunno why you're getting downvoted. This is funny.

Very!

"That was back when models were so slow and weighty they had to use cloud based versions. Now the same LLM power is available in my microwave"

Its biggest hurdle is having to explain, even to tech people on HN, that it's actually a good idea to have a UI where a user can approve a screen-sharing request. You'd think for folks that claim to care about security that'd be a prime concern. It really is so weird how difficult that is for people to grasp. The implementation is likewise not complicated. Seriously, how hard is it to draw a box selector and show an OK/Cancel box?

It's because people got used to using screen share in X11 when what they really want is remote login. You cannot do remote login if there has to be someone sitting at the PC to approve it. Since Wayland has no remote login model, people are left trying to kludge together something out of screen sharing. I can guarantee that the moment login over RDP becomes available, everyone complaining about the screen sharing will quiet down.

And yes, I know this is "not Wayland's concern". Kicking the ball down the road does not fix the problem of "if I switch to Wayland I cannot log in remotely". There needs to be a parent project which IS concerned with all the use cases people require for a full working desktop experience. Otherwise you get this fragmentation, which isn't good for anyone. Basic OS services being fragmented between implementations really sucks. Microsoft figured this out 30 years ago.

https://github.com/KDE/krdp

Works great.

Y'all are exhausting. Wayland is the one thing where nerds on the internet will not even bother grabbing a live CD of a Linux distro just to try it out, and then complain about things that have been implemented for years.


>The server starts at session login

Okay, so I STILL have to log in locally before I can log in remotely. Also the list of known issues is pretty concerning. This is not even close to a remote login solution. You are not accomplishing anything by pretending Wayland is anything more than a half-baked toy at present.


It's actually far superior. X11 only provides frames of the display, so you can't do H.264 encoding on RDP streams. With Wayland, the compositor provides a stream of the window, whether it's video or a game. The RDP protocol encodes it in H.264 and you get much lower latency and frame loss. X11 can forward a socket, but you get no clipboard, no audio, nothing except keystrokes and frames. It's not encrypted. It doesn't scale. You have to use SSH tunneling. And good luck having any influence on what the physical displays are showing: you can only do a virtual display or a physical one, but not both.

This RDP implementation has clipboard sharing and audio integration. It spawns in a user session, but it actually locks the real screens and creates a virtual display. You can make as many virtual displays as you want. In theory you can attach to a single window or a rectangular area of the screen. Also, it works perfectly fine with SDDM autologin, so you can spawn a display on your server and just auto-unlock it.

The project is actually awesome. It's a holistic and far superior RDP solution to anything in the Linux space before.


If everyone appears to be missing something that's so easy to understand and implement, perhaps they're not missing it. They could have a different security/threat model than you're using. They could be expressing frustrations with being forced to manually approve something every time. They could be hitting dumb bugs in the implementation. There could be different people clamoring for more security and less intrusive security.

Honestly most people are just being lazy about it. You don't even need to prompt the user if you wanna allow everything by default. You just need to implement the screenshots, screensharing, and hot keys APIs. All 3 are super simple.

Then we also hit the question of who we're talking to/about. If you want to tell devs that they should implement a handful of general, simple APIs, that's probably fair. (Please start with the GNOME devs.) But some of us are just users explaining why Wayland doesn't work for us; even if we wanted, we can't fix it.

That's how Valve's gamescope works. It's a compositor on its own. Run it inside KDE Plasma and you've got nested compositors.

Lots of weird misinformation in the comments here. Wayland doesn't choose anything. It leaves it to the compositor to decide where to position a window and whether or not that window receives key presses. A program can't draw wherever it wants or receive system-wide keystrokes on behalf of another program. When appropriately implemented, the screenshot system is built directly into the compositor: it's an API that lets a program request read access to a part of the screen, which the compositor provides upon approval. It's much more secure that way and it works perfectly fine these days. Unfortunately not every compositor implements this.

However if you really really really wanna side step this you can look at keyd - https://github.com/rvaiya/keyd

A project that has a daemon run in the background as a root service and that can provide an appropriate shim to pass key strokes to anything you want.
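For a flavor of what that shim looks like, a minimal keyd config sketch (the capslock-to-esc remap is just an illustrative example; keyd reads its config from /etc/keyd/):

```ini
# /etc/keyd/default.conf: a minimal illustrative remap
[ids]
# Apply to all attached keyboards
*

[main]
# Remap caps lock to escape, below the compositor level
capslock = esc
```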

And just to be clear the appropriate secure model is to have a program request to register a "global" hot key and then the compositor passes it to the appropriate program once registered. This is already a thing in KDE Plasma 6 and works just fine.


> Unfortunately not every compositor implements this.

That's kind of a big sticking point. When GNOME, KDE, and e.g. Sway all have different screenshot APIs, the (eco)system doesn't work.


The thing is: they mostly do implement xdg-desktop-portal's screenshot API, since that also handles permission management.

In effect, the modern desktop is Wayland (window communication), PipeWire (audio/video), and xdg-desktop-portal (compositor/environment requests), all of which a desktop application kinda has to work with.


X11 can't do different refresh rates on different screens. If you have a dual-screen setup where one screen is 165 Hz and the other is 60 Hz, you're SOL.

Works fine for me with 144/120 with the second as 60.

What's happening is you are running both screens at 144/120, and your 60 Hz screen is gonna have vsync and screen-tearing issues.

What vsync issues?

Couldn't you stop tearing with a couple lines of code? Don't swap out buffers mid-frame, very simple.

The simple solution doesn't make 144+60 look super smooth, but 120+60 should be almost perfect.


How does Wayland handle the hypothetical problem of playing a 24 fps movie split across a 48 Hz and a 60 Hz display?

(Hint: It’s not possible to solve correctly.)


X11 can't fix climate change.

You joke, but the Wayland protocol leaves this up to the compositor. Nothing in the protocol prevents your desktop environment from doing this.

I heard the desktop environmentalists are working on such a project.

I think what's actually funnier is that the satellite shooting the laser has to know where the terminal is with pinpoint accuracy too. So it's pretty easy to cut off targeting to a vast chunk of the planet.

The sats don't use lasers to communicate with terminals, just regular radio waves; they only use lasers for inter-satellite communication.

Starlink cells are ~15 miles wide BTW.

I'm already running an LLM locally. This is just me renting space in a data center. Since when did we restrict people's ability to do things? For the record my local models run off the solar bolted to my roof. Even including the data center I'm using 1/10th of the energy we were using on tube monitors back in the 90s. This is exhausting. My GPU would be demonstrably using more power by playing a videogame right now than when I run a local LLM.

Since when did we restrict people's ability to do things?

This question is not the obvious winner you think it is. To me, and I am sure many, it sort of undermines your argument.

Even in the most ‘free' cultures, society has _always_ restricted people’s individual ability to do things that it collectively deems harmful to the whole society.


This is literally why America was founded. Too many people stifle innovation. Move to Europe if you want to be stuck in the 20th century, frankly. That doesn't mean we can't take care of folks. But the Luddites need to get the fuck out of the way. You're all exhausting.

And people in the late 1700s were just allowed to do anything? (The answer to that is obviously ‘no’).

I’m not even in complete disagreement with your opinion on data centers (like, people are coming up with noise, water use, pollution and traffic arguments about why a data center should not replace a recently controversially closed paper mill near me, which is ridiculous), but your argument doesn’t work. You need to change it if you want to convince people.


America was founded because rich people didn't want to pay taxes.

[flagged]


Please, don't be so negative about the rest of the world. No one has any idea what would have happened if the US did not create their country the way they did. This is the same level of under-appreciation of humans that the ancient aliens people have when they say it's impossible for humans to have built the pyramids. Let's be constructive instead of just hating on everyone else, please.

I was born in Europe. I know this for a fact. The difference in "can do" culture between old world and new world is everything. There's a reason Europe still doesn't have a self landing rocket. They aren't even trying. It's crabs in a bucket mentality writ large. I wish it weren't so. Yet it is.

It's partially true but it's not as true as doomers would like. It's not America: innovation=yes, Europe: innovation=no. Most of the American innovation came from a small number of very rich people. It has a lot of very poor people as a consequence.

> Most of the American innovation came from a small number of very rich people

Replace "came from" with "was purchased by" or "was copied by an entity with the resources to push the inventor out of the market" and you're getting a lot closer.


How about "was driven by"

This encompasses rich people telling others what to do, and it also encompasses others doing work they think they can sell to rich people.

I think in Europe, people are just overall a bit more chill, and happy people don't feel the need to join the ultra-competitive scramble to the top, they're fine doing enough work but not an extreme amount.


I don't even agree with that. In many cases the rich people at best paid the salaries of other innovative people and then claimed the IP rights and the overwhelming share of the proceeds.

Elon didn't invent anything about rockets or electric cars. He hired (or perhaps just bought a company that had already hired) smart innovative people and got rich off them.

Pharmaceutical CEOs aren't innovating anything but they get rich off the innovations of others.

Most of the people who innovate or invent a new tool or product don't have the capital to mass produce and market it and end up selling their rights, which others benefit from.

Very few rich people are involved at all in innovations. Technology, which is less capital-intensive to scale than other fields, is an exception where several rich folks actually were involved: Steve Jobs' design sense, Larry and Sergey's PageRank algorithm, etc. But even then, most of the people actually innovating new things don't get rich, and watch others with more resources copy them, outmarket them, and take the money.


> when did we restrict people's abilities to do things?

That's literally what most laws are: saying what you can and can't do. This is, like, a foundational understanding of what government/regulation is.

> this is just me renting space...

Okay, so a "network effect" is when things have greater impact due to larger usage. So the data center usage that you're talking about does not represent the overall impact of the data center. Saying "I only pour ONE cup of bleach into the ocean, so I don't see why it's so bad to have the bleach factory pump all its waste in as well" is a WILD take.


>Since when did we restrict people's ability to do things?

When those things impact other people - such as by skyrocketing utility prices, overloading the electrical grid, and more.


I thought this was a free market? Or is that not how things work anymore?

An absolute free market would, by definition, permit the selling of the service "restrict someone's freedom for me".

Not sure that leaves it a free market. So if we're gonna be talking about holes in the cheese, it seems like you're reasoning in terms of a basically self-contradictory notion.

But truly, what do you reckon about the 1st point, in terms of the interpretation of market freedom which you use?


Never has been. A totally free market doesn't work and has failed every time it was tried. You want one today, go set up shop in Somalia.

I can't respect that opinion. It's full of holes.

Holes such as what?

There have always been rules and laws. The US has never been a totally free market. Most of the laws and rules we have were written in blood by people professing a "free market" right to poison our people, rivers, air, and more.


America was largely a free market until the 1920s. Since then more regulations have actually increased the cost of living. The healthcare problem in America has a lot to do with increased regulations. For one we have a fixed limit on how many doctors can graduate every year. That was put in place by the medical lobby in the US. Ever since then healthcare costs have increased exponentially. Tale as old as time. This happens with every single new rule put in place. Rent control does the same thing. Prices just go up. This includes NIMBY laws.

The US does not limit the number of doctors that can graduate. The limit is on the number of residencies funded by medicare. If the private sector wanted more doctors in order to pay doctors less, they could just offer paid residencies themselves. Somehow the free market hasn't solved that one. This ignores that doctors' salaries aren't a significant cause of the problems and insurance companies are the true root of high prices.

Rent control stabilizes prices while more supply can be built, because it is in the interests of society for people to be able to afford to live, and we can't will additional buildings into place overnight. High eviction rates destroy communities and have many negative side effects.

In the absence of regulation, corporations lie, cheat, and steal, and have a massive power imbalance against ordinary people. No one has enough time and energy to research every option for everything in their daily life, and they rely on laws to establish safety measures they can rely on.


Oh, you're one of those. You actually believe rent control works in the face of overwhelming evidence that all it does is increase the cost of housing. Fascinating. Pointless talking to you.

Rent control doesn't have to be "you as a landlord can't charge more than $X in rent." It can also be "rent increases on existing tenants in good standing are limited to X%."

What are the holes? There are places today with no government - perfect free markets. If you think perfect free markets are awesome, you can move there and do business there. It's a bit like telling someone who loves communism to go to China.

> Since when did we restrict people's ability to do things?

At least 4000 years ago, but that's just the earliest we have evidence for

https://en.wikipedia.org/wiki/Code_of_Ur-Nammu


I don't think you understand the qualifier. I meant in the tradition of liberal free markets that have unlocked human potential on a global scale. I'm saying no, it's actually good that you don't have to ask the local government when you want to do something. If American-style free markets hadn't gained traction, we'd still be doing subsistence farming.

The thing is, since we recognized that such a tradition led to the unfettered destruction of the natural environment which we depend upon to survive, we have decided that local governments should be responsible for preserving said environment by regulating the destructive actions performed by the liberal free market. Not doing so will even destroy our ability to perform subsistence farming in the long run.

So far all I hear is complaining about electricity prices. No one actually cares about the "environment". They are just mad that the price per kWh is up 3 cents.

Then you are not replying to me in good faith. I didn't say a thing about electricity rates.

I've been building a new task manager in C for Linux.

If you're not using AI you are cooked. You just don't realize it yet.

https://i.imgur.com/YXLZvy3.png


> If you're not using AI you are cooked. You just don't realize it yet.

Truth. But not just “using”.

Because here’s where this ship has already landed: humans will not write code, humans will not review code.

I see mostly rage against this idea, but it is already here. Resistance is futile. There will be no “hand crafted software” shops. You have at most 3-4 years left if you think this is your job.


I don't really agree.

People should still understand the code, because sometimes the AI solution really is wrong and I have to shove my hand in its guts and force it to use my solution or even explain the reasoning.

People should be studying architecture, 'cause now I can orchestrate stuff that used to take teams and that I would have thrown away as a non-viable idea. Now I can just do it. But no, you will still be reviewing code.


Most people, as of March 2026, still agree with you.

People still understand metallurgy and casting even though machines make all the paperclips.

Are you using AI to write this? Please stop.

It has subpar grammar (uncapitalized word "humans" and "hand crafted" is unhyphenated). I think you're hallucinating.

Clearly so. To me it's the LLM writing style at least.

Said like a bot. Please stop.

The Chromium project builds many things. The Android version is just one of those things.
