VRChat won because it's a relatively open platform. That's it. People in there spent money on Meta hardware when it was better, but then used it only in VRChat.
If a big company embraced an open platform I suspect the space would be far more successful. There's still a lot of untapped potential.
VRChat is successful because someone can show up in a Goku avatar and start roleplaying. A DJ can stream their Twitch stream right into an instance.
VRChat still has no real store system; people upload Unity projects manually to use a custom avatar. There's an entire universe of potential revenue if a clothing, avatar, and instance-space system were built into the client.
You can buy avatars now from the VRChat marketplace for VRChat credits (which are essentially Japanese yen in value :D). It's progress, but with the unfortunate practice of the platform reportedly taking a sizeable cut.
In that regard, the long-standing practice of artists and users transacting directly via Booth or Gumroad (mainly for avatars) can be seen as healthier and more robust long term.
Measurable responses to the environment lag; Moore's law has been slowing down (edit: and demand has been speeding up, a lot).
From a pure sustainability standpoint, I really hope the parent post's quote is true, because I've personally seen LLMs used over and over to complete the same task, when they could have been used once to generate a script. I'd really like to still be able to afford my own hardware at home.
I'm using local models on a 6 year old AMD GPU that would have felt like a technology indistinguishable from magic 10 years ago. I ask it for crc32 in C and it gives me an answer. I ask it to play a game with me. It does. If I'm an isolated human this is like a magic talking box. But it's not magic. It doesn't use more energy than playing a video game either.
Thanks! I've been playing with some of the qwen models via OpenRouter as well. I'll have to give 9b a go at some point; I've mostly been playing with 27b and coder-next up till now.
10 years from now: "The next big thing: HENG - Human Engineers! These make mistakes, but when they do, they can just learn from it and move on and never make it again! It's like magic! Almost as smart as GPT-63.3-Fast-Xtra-Ultra-Google23-v2-Mem-Quantum"
I would love to live in a world where my coworkers learn from their mistakes
is this Human 2.0? I only have 1.0a beta in the office.
I get the joke, but it really does highlight how flimsy the argument is for humans. IME humans frequently make simple errors they don't learn from, and they get things right the first time very rarely. Damn. Sounds like LLMs. And those are only getting better. Humans aren't.
I've always wanted a better way to test programmers' debugging in an interview setting. Like, sometimes just working problems gets at it, but usually just the "can you re-read your own code and spot a mistake" sort of debugging.
Which is not nothing, and I'm not sure how LLMs do on that style; I'd expect them to be able to fake it well enough on common mistakes in common idioms, which might get you pretty far, and fall flat on novel code.
The kind of debugging that makes me feel cool is when I see or am told about a novel failure in a large program, and my mental model of the system is good enough that this immediately "unlocks" a new understanding of a corner case I hadn't previously considered. "Ah, yes, if this is happening it means that precondition must be false, and we need to change a line of code in a particular file just so." And when it happens and I get it right, there's no better feeling.
Of course, half the time it turns out I'm wrong, and I resort to some combination of printf debugging (to improve my understanding of the code) and "making random changes", where I take swing-and-a-miss after swing-and-a-miss changing things I think could be the problem and testing to see if it works.
And that last thing? I kind of feel like it's all LLMs do when you tell them the code is broken and ask them to fix it. They'll rewrite it, tell you it's fixed and ... maybe it is? It never really understands the problem it's fixing.
I am kind of already at that point. For all the complaining about context windows being stuffed with MCPs, I am curious what they are up to and how many MCPs they have that this is a problem.
Its biggest hurdle is having to explain, even to tech people on HN, that it's actually a good idea to have a UI where a user can approve a screen sharing request. You'd think for folks that claim to care about security that'd be a prime concern. It really is so weird how difficult that is for people to grasp. The implementation is likewise not complicated. Seriously, how hard is it to draw a box selector and show an OK/Cancel box?
It's because people got used to using screen share in X11 when they really want remote login. You cannot do remote login if there has to be someone sitting at the PC to approve it. Since Wayland has no remote login model, people are left trying to kludge together something out of screen sharing. I can guarantee the moment login over RDP becomes available everyone complaining about the screen sharing will quiet down. And yes, I know this is "not Wayland's concern". Kicking the ball does not fix the problem of "if I switch to Wayland I cannot login remotely". There needs to be a parent project which IS concerned with all the use-cases people require to function for a full working desktop experience. Otherwise you get left with this fragmentation, which isn't good for anyone. Basic OS services being fragmented between implementations really sucks. Microsoft figured this out 30 years ago.
Y'all are exhausting. Wayland is the one thing where nerds on the internet won't even bother grabbing a live CD of a Linux distro just to try it out, and then complain about things that have been implemented for years.
Okay, so I STILL have to log in locally before I can log in remotely. Also the list of known issues is pretty concerning. This is not even close to a remote login solution. You are not accomplishing anything by pretending Wayland is anything more than a half-baked toy at present.
It's actually far superior. X11 only provides frames of the display, so you can't do H.264 encoding on RDP streams. With Wayland the compositor provides a stream of the window, whether it's video or a game. The RDP protocol encodes it in H.264 and you get much lower latency and frame loss. X11 can forward a socket, but you get no clipboard, no audio, nothing except keystrokes and frames. It's not encrypted. It doesn't scale. You have to use SSH tunneling. And good luck having any influence on what the physical displays are showing: you can only do a virtual display or a physical one, not both.
This RDP implementation has clipboard sharing and audio integration. It spawns in a user session but it actually locks the real screens and creates a virtual display. You can make as many virtual displays as you want. In theory you can attach to a single window or rectangular area of the screen. Also it works perfectly fine with SDDM Autologin so you can spawn a display on your server and just auto unlock it.
The project is actually awesome. It's a holistic and far superior RDP solution to anything in the Linux space before.
If everyone appears to be missing something that's so easy to understand and implement, perhaps they're not missing it. They could have a different security/threat model than you're using. They could be expressing frustrations with being forced to manually approve something every time. They could be hitting dumb bugs in the implementation. There could be different people clamoring for more security and less intrusive security.
Honestly most people are just being lazy about it. You don't even need to prompt the user if you wanna allow everything by default. You just need to implement the screenshots, screensharing, and hot keys APIs. All 3 are super simple.
Then we also hit the question of who we're talking to/about. If you want to tell devs that they should implement a handful of general, simple APIs, that's probably fair. (Please start with the GNOME devs.) But some of us are just users explaining why Wayland doesn't work for us; even if we wanted, we can't fix it.
Lots of weird misinformation in the comments here. Wayland doesn't choose anything. It leaves the compositor to decide where to position a window and whether that window receives key presses. A program can't draw wherever it wants, or receive system-wide keystrokes, or act on behalf of another program. When appropriately implemented, the screenshot system is built directly into the compositor: it's an API that lets a program request read access to a part of the screen, and the compositor provides it upon approval. It's much more secure that way and it works perfectly fine these days. Unfortunately not every compositor implements this.
A project that has a daemon run in the background as a root service and that can provide an appropriate shim to pass key strokes to anything you want.
And just to be clear, the appropriate security model is to have a program request to register a "global" hot key, and then the compositor passes it to the appropriate program once registered. This is already a thing in KDE Plasma 6 and works just fine.
The thing is: they mostly do implement xdg-desktop-portal's screenshot API since that does also handle permission management.
In effect the modern desktop is wayland (window communication), pipewire (audio/video), and xdg-desktop-portal (compositor/environment requests) which all kinda have to be worked with for a desktop application.
I think what's actually funnier is that the satellite shooting the laser has to know where the terminal is with pinpoint accuracy too. So it's pretty easy to cut off targeting to a vast chunk of the planet.
I'm already running an LLM locally. This is just me renting space in a data center. Since when did we restrict people's ability to do things? For the record my local models run off the solar bolted to my roof. Even including the data center I'm using 1/10th of the energy we were using on tube monitors back in the 90s. This is exhausting. My GPU would be demonstrably using more power by playing a videogame right now than when I run a local LLM.
> Since when did we restrict people's ability to do things?
This question is not the obvious winner you think it is. To me, and I am sure many, it sort of undermines your argument.
Even in the most ‘free' cultures, society has _always_ restricted people’s individual ability to do things that it collectively deems harmful to the whole society.
This is literally why America was founded. Too many people stifle innovation. Move to Europe if you want to be stuck in the 20th century, frankly. That doesn't mean we can't take care of folks. But the Luddites need to get the fuck out of the way. You're all exhausting.
And people in the late 1700s were just allowed to do anything? (The answer to that is obviously ‘no’).
I’m not even in complete disagreement with your opinion on data centers (like, people are coming up with noise, water use, pollution and traffic arguments about why a data center should not replace a recently controversially closed paper mill near me, which is ridiculous), but your argument doesn’t work. You need to change it if you want to convince people.
Please, don't be so negative about the rest of the world. No one has any idea what would have happened if the US did not create their country the way they did. This is the same level of under-appreciation of humans that the ancient aliens people have when they say it's impossible for humans to have built the pyramids. Let's be constructive instead of just hating on everyone else, please.
I was born in Europe. I know this for a fact. The difference in "can do" culture between old world and new world is everything. There's a reason Europe still doesn't have a self landing rocket. They aren't even trying. It's crabs in a bucket mentality writ large. I wish it weren't so. Yet it is.
It's partially true but it's not as true as doomers would like. It's not America: innovation=yes, Europe: innovation=no. Most of the American innovation came from a small number of very rich people. It has a lot of very poor people as a consequence.
> Most of the American innovation came from a small number of very rich people
Replace "came from" with "was purchased by" or "was copied by an entity with the resources to push the inventor out of the market" and you're getting a lot closer.
This encompasses rich people telling others what to do, and it also encompasses others doing work they think they can sell to rich people.
I think in Europe, people are just overall a bit more chill, and happy people don't feel the need to join the ultra-competitive scramble to the top, they're fine doing enough work but not an extreme amount.
I don't even agree with that. In many cases the rich people at best paid the salaries of other innovative people and then claimed the IP rights and the overwhelming share of the proceeds.
Elon didn't invent anything about rockets or electric cars. He hired (or perhaps just bought a company that had already hired) smart innovative people and got rich off them.
Pharmaceutical CEOs aren't innovating anything but they get rich off the innovations of others.
Most of the people who innovate or invent a new tool or product don't have the capital to mass produce and market it and end up selling their rights, which others benefit from.
Very few rich people are involved at all in innovations. Technology, which is less capital-intensive to scale than other fields, is an exception where several rich folks actually were involved - Steve Jobs' design sense, Larry and Sergei's PageRank algorithm, etc. but even then most of the people actually innovating new things don't get rich and watch others with more resources copy them, outmarket them, and take the money.
>> when did we restrict people's abilities to do things?

That's literally what most laws are: saying what you can and can't do. This is, like, a foundational understanding of what government/regulation is.
>>this is just me renting space...
Okay, so a "network effect" is when things have greater impact due to larger usage. So the data center usage that you're talking about does not represent the overall impact of the data center. Saying "I only pour ONE cup of bleach into the ocean, so I don't see why it's so bad to have the bleach factory pump all its waste in as well" is a WILD take.
An absolute free market would, by definition, permit the selling of the service "restrict someone's freedom for me".
Not sure that still counts as a free market. So if we're going to be talking about holes in the cheese, it seems like you're reasoning from a basically self-contradictory notion.
But truly, what do you reckon about the 1st point, in terms of the interpretation of market freedom which you use?
There have always been rules and laws. The US has never been a totally free market. Most of the laws and rules we have were written in blood by people professing a "free market" right to poison our people, rivers, air, and more.
America was largely a free market until the 1920s. Since then more regulations have actually increased the cost of living. The healthcare problem in America has a lot to do with increased regulations. For one we have a fixed limit on how many doctors can graduate every year. That was put in place by the medical lobby in the US. Ever since then healthcare costs have increased exponentially. Tale as old as time. This happens with every single new rule put in place. Rent control does the same thing. Prices just go up. This includes NIMBY laws.
The US does not limit the number of doctors that can graduate. The limit is on the number of residencies funded by Medicare. If the private sector wanted more doctors in order to pay doctors less, they could just offer paid residencies themselves. Somehow the free market hasn't solved that one. This ignores that doctors' salaries aren't a significant cause of the problems; insurance companies are the true root of high prices.
Rent control stabilizes prices while more supply can be built, because it is in the interests of society for people to be able to afford to live, and we can't will additional buildings into place overnight. High eviction rates destroy communities and have many negative side effects.
In the absence of regulation, corporations lie, cheat, and steal, and have a massive power imbalance against ordinary people. No one has enough time and energy to research every option for everything in their daily life, and they rely on laws to establish safety measures they can rely on.
Oh, you're one of those. You actually believe rent control works in the face of overwhelming evidence that all it does is increase the cost of housing. Fascinating. Pointless talking to you.
Rent control doesn't have to be "you as a landlord can't charge more than $X in rent." It can also be "rent increases on existing tenants in good standing are limited to X%."
What are the holes? There are places today with no government - perfect free markets. If you think perfect free markets are awesome, you can move there and do business there. It's a bit like telling someone who loves communism to go to China.
I don't think you understand the qualifier. I meant in the tradition of liberal free markets that have unlocked human potential on the global scale. I'm saying no it's actually good that you don't have to ask the local government when you want to do something. If American style free markets didn't gain traction we'd still be doing subsistence farming.
The thing is, since we recognized that such a tradition led to the unfettered destruction of the natural environment which we depend upon to survive, we have decided that local governments should be responsible for preserving said environment by regulating the destructive actions performed by the liberal free market. Not doing so will even destroy our ability to perform subsistence farming in the long run.
So far all I hear is complaining about electricity prices. No one actually cares about the "environment". They are just mad that the price per kWh is up 3 cents.
> If you're not using AI you are cooked. You just don't realize it yet.
Truth. But not just “using”.
Because here’s where this ship has already landed: humans will not write code, humans will not review code.
I see mostly rage against this idea, but it is already here. Resistance is futile. There will be no “hand crafted software” shops. You have at most 3-4 years left if you think this is your job.
People should still understand the code, because sometimes the AI solution really is wrong and I have to shove my hand in its guts and force it to use my solution, or even explain the reasoning.
People should be studying architecture, because now I can orchestrate stuff that used to take teams, stuff I would have thrown away as a non-viable idea. Now I can just do it. But no, you will still be reviewing code.