Hacker News

> Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery and not user interaction time.

I don’t know what you mean by “software houses”, but every consumer-facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric.

This has been common wisdom for decades. I don’t know how many times I’ve heard the repeated quote about how Amazon loses $X million for every Y milliseconds of page loading time, as an example.



There was a thread here earlier this month,

> Helldivers 2 devs slash install size from 154GB to 23GB

https://news.ycombinator.com/item?id=46134178

Part of the top comment says,

> It seems bizarre to me that they'd have accepted such a high cost (150GB+ installation size!) without entirely verifying that it was necessary!

and the reply to it has,

> They’re not the ones bearing the cost. Customers are.


There was also the GTA wasting minutes to load/parse JSON files at startup. https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...

And Skylines rendering teeth on models miles away https://www.reddit.com/r/CitiesSkylines/comments/17gfq13/the...

Sometimes performance really is just ignored.
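For reference, the GTA bug was a quadratic-time parse: every token read rescanned the input from the start (sscanf calling strlen on a huge buffer, in a loop). A hypothetical Python analogue of that pattern, not the actual game code:

```python
import json

def parse_items_quadratic(text):
    # Anti-pattern: re-slicing the buffer for every token copies the whole
    # remaining input each step, so n tokens cost O(n^2) work overall --
    # the same effect as calling sscanf (which calls strlen) on a huge
    # buffer in a loop.
    items = []
    while text:
        line, _, text = text.partition("\n")  # copies the whole tail each time
        if line:
            items.append(json.loads(line))
    return items

def parse_items_linear(text):
    # Fix: split once (or walk an index) so each byte is visited once.
    return [json.loads(line) for line in text.split("\n") if line]

data = "\n".join('{"id": %d}' % i for i in range(1000))
assert parse_items_quadratic(data) == parse_items_linear(data)
```

Both produce the same result; only the cost differs, and the difference is invisible on small test inputs, which is exactly why it shipped.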


I met a seasoned game dev who complained that he was only ever hired at the end of projects, to speed up code written by the bunch of mid/junior-level game devs the company had used to actually make the game. He said he’d only get so much time, so he’d have to go for the low-hanging fruit and might miss things.

We've only got a couple of game dev shops in my city, so not sure how common that is.


Sweatshops love junior devs, as they never complain, never make suggestions and always take the blame for bugs.

A senior joining when time is tight makes sense, they don’t want anyone to rock the boat, just to plug the holes.


I'm pretty sure that the Cities: Skylines claim about rendering teeth a mile away turned out to be false. But it was repeated so much, and the game's release state was (and still is) so bad, that people assumed it to be true.


Wasn't there a website with a formula for how much time things like the GTA bug cost humanity as a whole? Something like 5 minutes × users × sessions per day, accumulated?

It cost several human lifetimes, if I remember correctly. Still not as bad as Windows Update, which, if you multiply the time by wages, burns the GDP of a small nation every year.
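The back-of-the-envelope math is easy to redo. All the inputs below are made-up placeholders, not real GTA Online telemetry:

```python
# All inputs are invented placeholders, not real GTA Online numbers.
players = 1_000_000          # daily active players
sessions_per_day = 2         # game launches per player per day
wasted_min_per_launch = 5    # the JSON-parsing stall
days = 365

wasted_minutes = players * sessions_per_day * wasted_min_per_launch * days
wasted_person_years = wasted_minutes / (60 * 24 * 365)
lifetimes = wasted_person_years / 80  # assuming an 80-year lifetime

print(f"~{wasted_person_years:,.0f} person-years per year, ~{lifetimes:,.0f} lifetimes")
```

Even with these modest placeholder numbers it comes out to thousands of person-years annually, which is why a five-minute stall in a popular product is a big deal.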


That's not how it works. The demand for engineering hours is an order of magnitude higher than the supply for any given game; you have to pick your battles because there's always much, much more to do. It's not bizarre that nobody verified texture storage was being done optimally at launch, without sacrificing load times at the altar of visual fidelity, particularly given the state the rest of the game was in. Who has time for that when crashes abound and the network stack has to be rewritten at a moment's notice?

Gamedev is very different from other domains, being in the 90th percentile for complexity and codebase size, and the 99th percentile for structural instability. It's a foregone conclusion that you will rewrite huge chunks of your massive codebase many, many times within a single year to accommodate changing design choices, or, if you're lucky, to improve an abstraction. Not every team gets so lucky on every project. Launch deadlines are hit with a huge backlog of additional work still to do, sitting atop a mountain of cut features.


> It's not bizarre that nobody verified texture storage was being done in an optimal way at launch

The inverse, however, is bizarre: that they spent potentially quite a bit of engineering effort implementing the (extremely non-optimal) system that duplicates all the assets half a dozen times to save precious seconds on spinning rust, all without validating it was worth implementing in the first place.


Was Helldivers II built from the ground up? Or grown from the v1 codebase?

The first game was on PS3 and PS4, where they had to deal with spinning disks, so that system would absolutely have been necessary.

Also, if the game ever targeted the PS4 during development, even though it wasn’t released there, that system would again have been NEEDED.


It's a completely different game, engine, etc.


Yes.

They talk about it being an optimization. They also talk about the bottleneck being level generation, which happens at the same time as loading from disk.


Gamedev engineering hours are also in endless oversupply thanks to myDreamCream brain.


[flagged]


> > you will rewrite huge chunks of your massive codebase

> You're not rewriting Unreal

Do you consider the Unreal engine code to be part of "your codebase"?


It's a dependency .. do you not consider dependencies as part of your codebase?


I think they meant the gameplay side of things instead of the engine


Unity rewrote and discontinued lots of major systems several times in a row in the last 10 years.

I’d be careful before telling people to “get a grip”.


Several times in 10 years, sure. Many times every year, certainly not. I'm getting downvoted, but I stand by my statement.


Strawman. If GP had said “full rewrite” sure. They said “huge chunks of your massive codebase” which is definitely true for unity.


I don't think it's quite that simple. The reason they had such a large install size in the first place was concern about load times for players using HDDs instead of SSDs; duplicating the data was intended to avoid making some players load into levels much more slowly than others (which, in an online multiplayer game, has repercussions for other players too).

The link you give mentions that this was based on flawed data (though it's light on details), but that means the actual cause was a combination of a technical mistake and genuine care for user experience: just the experience of a smaller but not insignificant minority rather than the majority. There's certainly room to argue whether this was the correct judgement call, or that they should have been better at recognizing their data was flawed, but it doesn't really fit the trend of devs not giving a shit about user experience.

If making perfect judgement calls and never having flawed data is the bar for proving you care about users, we might as well give up on the idea that any company will ever reach it.


How about GitHub Actions' safe sleep, where it took over a year to accept a trivial PR fixing a bug that caused actions to hang forever, because someone forgot that you need <= instead of == in a counter check...

Though in this case GitHub wasn't bearing the cost; it was making a profit...

https://github.com/actions/runner/pull/3157

https://github.com/actions/runner/issues/3792
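This class of bug is easy to distill. The sketch below is an illustration of the general shape (an equality exit test where a bounds check is needed), not the runner's actual C# code; all names are invented:

```python
def wait_with_timeout(done, timeout_ticks, step):
    # Buggy shape: an equality exit test. If `ticks` ever steps past
    # `timeout_ticks` without landing on it exactly, the loop never ends.
    ticks = 0
    while ticks != timeout_ticks:   # BUG: should be a bounds check
        if done():
            return True
        ticks += step               # step > 1 can jump over the limit
    return False

def wait_with_timeout_fixed(done, timeout_ticks, step):
    # Fix: a `<` (or `<=`/`>=`, depending on loop direction) comparison
    # terminates even when the counter overshoots the limit.
    ticks = 0
    while ticks < timeout_ticks:
        if done():
            return True
        ticks += step
    return False

# With step=2 and timeout=5, the buggy version spins forever
# (ticks goes 0, 2, 4, 6, ... and never equals 5); the fixed one stops.
assert wait_with_timeout_fixed(lambda: False, 5, 2) is False
```

The buggy version passes any test where the step divides the timeout evenly, which is presumably why it survived so long.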


> They’re not the ones bearing the cost. Customers are.

I think this is uncharitably erasing the context here.

AFAICT, the reason that Helldivers 2 was larger on disk is because they were following the standard industry practice of deliberately duplicating data in such a way as to improve locality and thereby reduce load times. In other words, this seems to have been a deliberate attempt to improve player experience, not something done out of sheer developer laziness. The fact that this attempt at optimization is obsolete these days just didn't filter down to whatever particular decision-maker was at the reins on the day this decision was made.
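A toy model of that tradeoff makes it concrete. All numbers here are invented for illustration; nothing comes from Helldivers 2's actual asset data:

```python
# Invented numbers -- a toy model, not Helldivers 2's real asset layout.
levels = 30
shared_assets_gb = 4.0        # assets reused by every level
unique_per_level_gb = 0.5     # assets only one level needs

# Deduplicated: store shared assets once; loading a level means seeking
# all over the disk to gather them -- slow on an HDD, cheap on an SSD.
deduped_gb = shared_assets_gb + levels * unique_per_level_gb

# Duplicate-for-locality: pack each level's full asset set contiguously,
# so a level load is one long sequential read -- at ~7x the install size.
duplicated_gb = levels * (shared_assets_gb + unique_per_level_gb)

print(deduped_gb, duplicated_gb)  # 19.0 135.0
```

On spinning disks the sequential layout can be a genuine win; on SSDs the seek penalty mostly vanishes and only the install-size cost remains, which is roughly what happened here.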


I worked in e-commerce SaaS in 2011~ and this was true then but I find it less true these days.

Are you sure that you’re not the driving force behind those metrics; or that you’re not self-selecting for like-minded individuals?

I find it really difficult to convince myself that even large players (Discord) are measuring startup time. Every time I start the thing I’m greeted by a 25s wait and a `RAND()%9` number of updates that each take about 5-10s.


Discord’s user base is 99% people who leave it running 100% of the time, it’s not a typical situation


I think that they make the startup so horrible that people are more likely to leave it running.


As a discord user, it's the kind of platform that I would want to have running to receive notifications, sort of like the SMS of gaming.

A large part of my friend group uses Discord as the primary method of communication, even in an in-person context (I was at a festival a few months ago with a friend, and we would send texts over Discord if we got split up), so maybe it's not a common use case.


It leads to me dreading having to start it (or accidentally starting it - remember IE?) and opting for the browser instead.


I strongly doubt that!


Hijacking my own comment to mention that the normal thing on forums, when a reasonable person reads an unreasonable comment, is to move on, which lets the comment stand unopposed and gives it credence it doesn't deserve. I believe that if more of us actually voiced our disagreement, as I have here, it could change things sometimes.


I have the same experience on Windows. On the other hand, starting Discord on my CachyOS install is virtually instant. So maybe there is also a difference between the platform the developers use and the one their users use.


I have plenty of responses to an angry comment I made several months ago that support your point.

I made a dig at Word taking around 10 seconds to start, and some people came back saying it only takes 2, as if that still isn't 2s too long.

Then again, look at how Microsoft is handling slow File Explorer speeds...

https://news.ycombinator.com/item?id=44944352


I never said that 2s wasn’t too long. I just said your environment was broken if it took 10.


There is a high chance the extra nuts and bolts added to Windows, which slow it down, are IT-required software, settings, and security enhancements.

It took me almost a year to get a separate laptop for office work and development. Their Enhanced Security prevented me from testing administrative code features and broke Visual Studio's bug submission system, which Microsoft requires you to use for reporting software bugs.

By the way, I can break Windows simply by running their own PowerShell utilities to configure NICs. Windows is not the stable product people think it is.


This was on macOS


Wild. How'd you even find me


I read the comments on this post


Yep, indeed. Which is the main reason I don’t run Discord.


I strongly doubt that. The main reason you don’t run it is likely because you don’t have strong motivation to do so, or you’d push through the odd start up time.


Just going to throw out an anecdote that I don’t use it for the same reason.

It’s closed unless I get a DM on my phone and then I suffer the 2-3 minute startup/failed update process and quit it again. Not a fan of leaving their broken, resource hogging app running at all times.


It would fail to auto update as a system installed package, because that requires a system level package install.

It would not fail to update if installed as a user installed flatpak.

Many apps are this way now.


Why not just respond to the dm on your phone?


For me, I really dislike the fact Discord is completely closed off to the wider internet, and Discord, the company, has absolute control: from a privacy and freedom of speech point of view. This goes against the core ideas of a free and open internet.

I'll admit that the Discord service is really good from a UX point of view.


On the contrary: every consumer-facing product I've worked on had no performance metrics tracked at all. And for enterprise software it was even worse, since the end user is not the one who decides to buy and use the software.

>>what you mean by software houses

How about Microsoft? Start menu is a slow electron app.


The Start menu is not an Electron app. Don't believe everything you read on the internet.


That makes the usability and performance of the windows start menu even more embarrassing.

The decline of Windows as a user-facing product is amazing, especially as they are really good at developing the things they care about. The “back of house” guts of Windows have improved a lot, for example. At this point they should just have a cartoon Bill Gates pop up, like Clippy, and flip you the bird.


Much worse is that the search function built into the start menu has been broken in different ways in every major release of Windows since XP, including Server builds.

It has both indexing failures and multi-day performance issues for mere kilobytes of text!


The Start menu is React Native, but Outlook is now an Electron app.


React Native, not Electron. Though it is slower than it was


That's even more damning that they can't dogfood their own GUI toolkits for the primary UI of their own OS.


People believing it says something about the start menu


hey, haven't seen that one in the wild for a little bit :-D https://www.smbc-comics.com/comic/aaaah


The comic artist seems pretty ignorant to think that it’s not meaningful.

What falsehoods people believe and spread about a particular topic is an excellent way to tell what the public opinion is on something.

Consider spreading a falsehood about Boeing QA getting bonuses based on number of passed planes vs the same falsehood about Airbus. If the Boeing one spreads like wildfire, it tells you that Boeing has a terrible track record of safety and that it’s completely believable.

Back to the start menu. It should be a complete embarrassment to MSFT SWEs that people even think the start menu performance is so bad that it could be implemented in electron.

In summary: what lies spread easily is an amazing signal on public perception. The SMBC comic is dumb.


It's less meaningful than you think. Widespread prejudice does give you signal on public sentiment, but it doesn't give you much signal on whether the prejudice happens to coincide with reality or not, compared to other methods. People should be open to having their prejudices corrected by more relevant information.


We’re talking about new prejudices, not old.


AAAAAAAAAAAAAA


> How about Microsoft? Start menu is a slow electron app.

If your users are trapped due to a lack of competition then this can definitely happen.


If only community actually gathered around the true Linux distribution instead of endless forks.


Exactly. Let's start by listing all the true Linux distributions and we can go from there!


>> I don’t know what you mean by software houses, but every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric

Maybe Google? Gmail app is 700+ MB


> I don’t know how many times I’ve heard the repeated quote about how Amazon loses $X million for every Y milliseconds of page loading time, as an example.

This is true for sites that are trying to make sales. You can quantify how much a delay affects closing a sale.

For other apps, it’s less clear. During its high-growth years, MS Office had an abysmally long startup time.

Maybe this was due to MS having a locked-in base of enterprise users. But given that OpenOffice and LibreOffice effectively duplicated long startup times, I don’t think it’s just that.

You also see the Adobe suite (and also tools like GIMP) with some excruciatingly long startup times.

I think it’s very likely that startup times of office apps have very little impact on whether users will buy the software.


They even made it render the screen while still being unusable, to make it look like it was running.


Every SSRed app these days…


> every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric

Must be nice. In my career, all working on webapps, I've seen a few leaders popping in to ask us to fix a particularly egregious performance issue if the right customers complain, but aside from those finely-targeted and limited-attention-span drives to "improve performance" it seems the answer for the past decade or so is just to assume everyone is on at least a gigabit connection, stick fingers in ears, and just keep adding more node modules. If the developers' disks get full because node_modules got too big, buy a bigger SSD and keep going. (ok that last part is slight hyperbole but I also don't think frontend devs would be deterred from their ravenous appetite for libraries by a full disk).


Clearly Amazon doesn't care about that sentiment across the board. Plenty of their products are absurdly slow because of their poor engineering.


Can confirm, at least for Firefox. When I worked on it, I spent literal years shaving seconds off startup or shutdown, and milliseconds off tab switching.

Everybody likes to hate telemetry, and yes, it can be abused, but that's how Mozilla (and its competitors) manage to make users' lives more comfortable.


> every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric

Are they evaluating the shape of that line with the same goal as the stonk score? Time spent by users is an "engagement" metric, right?


The issue here is not tracking, but developing. Like, how do you explain the fact that whole classes of software have gotten worse on those "key metrics"? (and that includes the web pages that sell things)


Then why do so many software houses favor cloud software over on-premise?

It often has a noticeable delay on user data input compared to local software.


The MBAs hate capital expenditures and love operating expenditures. They'll make strategic blunders like over-dependence on external services just to satisfy their warped mindset.


>I don’t know what you mean by software houses, but every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric.

Then respectfully, uh, why is basically all proprietary software slow as ass?


The exception that proves the rule.



