
> I'm not sure they understand that the copycats needed to be built with non-websockets as a primary transport because of the limitations of those other frameworks.

I do understand, but I don't think it's practical to use WebSockets to transport a page diff when moving from page A to page B, or to update a small area of a page that isn't broadcast to anyone else.

The case for using WebSockets from Elixir has always been "think of what you don't need to send over the wire, like HTTP headers!"... until you want to know things like the user agent, the IP address and a few other common pieces of information that are normally carried in HTTP headers. Now you need to explicitly add these to your socket. It's easy (1 line of config), but those bytes persist on the socket for every open connection.
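
For reference, that one line of config lives in the endpoint and looks roughly like this (a sketch; the path and session options are whatever your app already uses):

```elixir
# Fragment from lib/my_app_web/endpoint.ex -- MyAppWeb is a placeholder name.
# Every key listed under :connect_info is captured when the WebSocket is
# established and kept alongside the socket for the life of the connection,
# which is exactly the per-connection memory cost described above.
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [:peer_data, :user_agent, :x_headers, session: @session_options]]
```

Inside the LiveView you then read those values back during mount with `get_connect_info`, and they sit in the socket's state for as long as the connection stays open.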

That means if you have your million concurrent users, you have to make sure you have 1 GB of memory on your server dedicated to storing nothing but this information. I'm being really optimistic here and only accounting for 1 KB of data per connected user; realistically a lot more memory will be used with a million users.
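
If you'd rather measure than estimate, you can ask the BEAM what the connected LiveView processes are actually holding. A rough sketch to run in IEx on a live node (the `:"$initial_call"` filter is my assumption about how LiveView channel processes are started, and `Process.info` ignores shared binaries and OS socket buffers, so treat the result as a lower bound):

```elixir
# Each connected LiveView is its own BEAM process; Process.info(pid, :memory)
# reports that process's heap/stack in bytes.
live_views =
  for pid <- Process.list(),
      {:dictionary, dict} <- [Process.info(pid, :dictionary)],
      # Assumption: LiveView channel processes are GenServers whose initial
      # call is Phoenix.LiveView.Channel.init/1.
      List.keyfind(dict, :"$initial_call", 0) ==
        {:"$initial_call", {Phoenix.LiveView.Channel, :init, 1}},
      do: pid

total_bytes =
  live_views
  |> Enum.map(fn pid ->
    case Process.info(pid, :memory) do
      {:memory, bytes} -> bytes
      nil -> 0  # process exited between the list and the info call
    end
  end)
  |> Enum.sum()

IO.puts("#{length(live_views)} LiveView processes holding ~#{div(total_bytes, 1024)} KB total")
```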

In the grand scheme of things 1 GB isn't a lot of memory, especially if you're talking about a million users, but this is a problem HTTP doesn't have. That information isn't persisted on a socket and held in memory on your server. It's part of the request headers and that's it; your server is done knowing or caring about it once the response is sent. If you had a million concurrent visitors on your site, your server wouldn't store anything per user because there's no long-lived socket to store it on.

Likewise, as soon as you start using `assigns` with LiveView, anything you store on the socket actively takes up memory on your server. Yes, I know about `temporary_assigns`, but if you start using that everywhere you lose the "surgical diffs" and you're back to transferring a bunch of data over the wire, because the state isn't kept on the socket for each connected user.
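
Here's roughly where that trade-off shows up in a `mount` callback (a sketch; the module, context function and assign names are made up):

```elixir
defmodule MyAppWeb.MessagesLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    messages = MyApp.Chat.list_messages()  # hypothetical context function

    socket =
      socket
      # A regular assign lives in this process's memory for as long as the
      # user stays connected.
      |> assign(:current_user_name, "jane")
      |> assign(:messages, messages)

    # Marking :messages as temporary resets it to [] after every render, so the
    # server stops holding the list. The flip side: the server can no longer
    # diff against it, so list updates mean sending more markup over the wire.
    {:ok, socket, temporary_assigns: [messages: []]}
  end

  def render(assigns) do
    ~H"""
    <!-- phx-update="append" is the pre-streams pattern that pairs with
         temporary_assigns; newer LiveView pushes you toward streams for
         the same memory reason. -->
    <ul id="messages" phx-update="append">
      <%= for msg <- @messages do %>
        <li id={"msg-#{msg.id}"}><%= msg.body %></li>
      <% end %>
    </ul>
    """
  end
end
```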

The system contradicts itself: you end up needing a lot of memory on your server to hold this state, or you trade that off by storing only the essentials and you're back to sending a lot over the wire. It also adds mental complexity, because you, the developer building the app, need to think about these things all the time.

It's much more than "think" too. You end up in a world where you can run out of memory and your server will crash unless you carefully pre-estimate and provision resources for a specific traffic load to know how much memory it will take. You can run out of memory with far fewer than a million connections too, especially if you forget to make something a temporary assign, which the compiler won't help you with. With HTTP you can throw a cache in front of things and now you're bound by how fast your web server can write out responses. HTTP also has decades of support around efficiently scaling, serving, compressing, CDNs, etc.
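
For contrast, the HTTP side of that story is usually just a response header plus whatever cache or CDN you put in front of the app. A sketch of a plain controller action (module and context function names are made up):

```elixir
defmodule MyAppWeb.PostController do
  use MyAppWeb, :controller  # standard Phoenix controller boilerplate

  def show(conn, %{"id" => id}) do
    post = MyApp.Blog.get_post!(id)  # hypothetical context function

    conn
    # Tell browsers and any CDN in front of the app that this response can be
    # reused for 5 minutes. No per-visitor state stays on the server.
    |> put_resp_header("cache-control", "public, max-age=300")
    |> render(:show, post: post)
  end
end
```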

Then on top of that, if a user loads your site and there's a 3-second blip in the network after it's loaded, they're going to see loading bars because the WebSocket connection was dropped and the socket needs to reconnect. With HTTP this isn't a problem, because once you load the page the response is done and that's it. If there's a network blip while reading a blog post that's already loaded, it doesn't matter because the content has already been served.

On paper, being instantly aware of a dropped connection sounds amazing, but in practice it creates a poor user experience for anything that's read-only, such as reading a post on HN, a GitHub issue, a blog post or just about anything that doesn't involve you actively submitting a form. The world is full of spotty internet connections and HTTP is a master of hiding these blips in a lot of cases.

You can build pretty interactive sites without WebSockets too. If you bring up the network inspector on Google Docs, there's no WebSocket connection open. I don't know what Google is doing here, but I notice a lot of big sites don't use WebSockets.

For example, AWS's web console has a bunch of spots where you can get updates, but there's no WebSocket connection open. The same goes for Gmail, GitHub (real-time issue comments) and others.

That's not to say WebSockets are all-around bad; it's just interesting that even something as collaborative and real-time as Google Docs can be done without them while still having an excellent user experience.
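
I have no idea what those sites do internally, but plain short-interval polling against a cheap, stateless JSON endpoint gets you a long way. On the Phoenix side that's nothing more than a controller action the client hits every few seconds; a sketch with made-up module and context names:

```elixir
defmodule MyAppWeb.CommentPollController do
  use MyAppWeb, :controller

  # Hypothetical route: GET /issues/:id/comments?since=<ISO8601 timestamp>
  # The client polls this and only receives comments newer than what it
  # already has. Nothing is held per-user on the server between requests.
  def index(conn, %{"id" => issue_id, "since" => since}) do
    {:ok, since, _offset} = DateTime.from_iso8601(since)
    comments = MyApp.Issues.list_comments_since(issue_id, since)  # hypothetical

    json(conn, %{
      comments: Enum.map(comments, &%{id: &1.id, body: &1.body, inserted_at: &1.inserted_at})
    })
  end
end
```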

NOTE: I would never base a tech decision solely on what these companies use, but it's at least interesting that they've all chosen not to use WebSockets for whatever reasons they had.

> The architecture of Plug/ Phoenix also means that, should the community come to a conclusion that WebSockets are no longer right, the transport mechanism is extensible

What would happen if the transport layer went back to HTTP? Right now with LiveView you have to rewrite your entire front-end to not depend on plugs / controllers, and you have to re-create the idea of a plug-like system on top of WebSockets (such as using on_mount, etc.). All of this is new code and new patterns to solve the same things that were solved for decades with HTTP -- or let's say ~7 years with Phoenix pre-LiveView.
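
To make that concrete: an auth check that used to be a single plug now also needs a second life as an `on_mount` hook, because plugs never run for the stateful LiveView connection. A sketch (the module name, session key and paths are made up):

```elixir
defmodule MyAppWeb.UserAuth do
  import Plug.Conn
  import Phoenix.Controller, only: [redirect: 2]

  # HTTP version: an ordinary function plug in a router pipeline.
  def require_user(conn, _opts) do
    case get_session(conn, :user_id) do
      nil -> conn |> redirect(to: "/login") |> halt()
      user_id -> assign(conn, :current_user_id, user_id)
    end
  end

  # WebSocket version: the same rule re-expressed as an on_mount hook for the
  # stateful LiveView connection.
  def on_mount(:require_user, _params, session, socket) do
    case session["user_id"] do
      nil -> {:halt, Phoenix.LiveView.redirect(socket, to: "/login")}
      user_id -> {:cont, Phoenix.Component.assign(socket, :current_user_id, user_id)}
    end
  end
end
```

You then wire the first one into a router pipeline with `plug` and the second around your live routes with `live_session ..., on_mount: {MyAppWeb.UserAuth, :require_user}`, so the same rule ends up declared in two places.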

Those are only a few of the problems you face when using WebSockets for everything. I still think WebSockets are good, but personally I wouldn't use a framework that pushes them for everything; the world isn't connected over a local network with 1ms of latency and 0% packet loss. Plus, WebSocket-powered sites tend to feel pretty sluggish to me. It's as if browsers are less optimized for painting changes emitted by a socket. I don't have a scientific measurement to show you here, but I can feel it on sites that use WebSockets to handle things like page navigation. There was a site I saw about 8 months ago that hosted both a Hotwire Turbo demo and a Stimulus Reflex (WebSockets) demo on the same server, each paginating a table, and clicking around the WebSockets version felt slower even with the same latency to the server.

Speaking of "feel", a few weeks ago I made a video about how the topbar loading indicator from Phoenix makes page transitions on a fast connection feel slower: https://nickjanetakis.com/blog/customizing-topbar-to-add-a-d.... This is unrelated to WebSockets, but I'm just saying it's pretty easy to spot user experience inefficiencies in ways that aren't mathematically proven.


