Since there are a lot of assumptions about personality here, I’ll toss in my perspective.
Worked at Atlassian for 5 years, had plenty of interactions with Mike. I wouldn’t categorize him as a jerk. I have plenty of disagreements about decisions he’s made, and I think he heavily over-hired (and is paying for it now), but a jerk he is not.
The reality is Atlassian has mechanisms, for better or for worse, that reward social discontent: Hello (their internal Confluence instance, which has Reddit-like upvoting on blogs) and their karma bot on Slack. Both tend to result in people gamifying them to boost their social status, which, as you’ve seen with Reddit, often means a subset of people realize that negative comments get more attention than positive ones. This got out of hand and they’ve been trying to dial it back, leading to cuts like these. It’s been a problem at Atlassian for a while.
The employee didn't call him a jerk. That was a straw-man from Atlassian. Now we're arguing over whether he's a jerk or not.
As opposed to what actually happened: Mike (the CEO) fired 19,000 people. Then Mike held a video AMA regarding the firings. Mike took the meeting from the headquarters of the NBA team he owns.
The employee, Unterwurzacher, parodied the CEO on Slack, writing, “What’s up Outragers, just dialing in from my NBA team’s headquarters to yell at the people whose careers I’ve just pummeled.”
> The employee didn't call him a jerk. That was a straw-man from Atlassian.
We don't really have enough information to adjudicate either way; the article doesn't include a transcript of what she actually said, or of what was being said in the courtroom with context (tribunal room? boardroom? wherever the lawyer was talking).
It seems a bit pointless to hypothesise what might have happened then decide whether the imaginary actions were reasonable in the hypothetical scenario. If we're going to debate correctness there needs to be actual source material instead of this third-hand summary behind a paywall.
Reading this comment really shifted my perspective on this whole thing. I’m less upset about the firing and more upset that anyone ever has the ability to control the livelihoods of 19,000 people.
19k is a fairly small business. I mean it isn't "small business" but it is small relative to many others.
Large companies aren't anything new. Ford had 100k employees in the 1920s. Then you have places like the New York City government, which has 309k people.
I would prefer to have many smaller companies than a couple of big ones. But 19k isn't really that many people.
does this particularly qualify him as a jerk? or is it just that the employee takes all the risk in employment, and capitalism does wrong by rewarding owners and management over workers?
that he's showing off how rich he is as a result of throwing these people on the street is just part of the system we've built
He was a passionate climate activist, possibly still is.
He has since purchased a private jet under controversy.
His company now sponsors an F1 team.
He now seems to be a typical billionaire. You don’t get to be a billionaire without being ruthless.
He probably is now a rich jerk. When I worked at Atlassian and onboarded, one of the managers said that if you are in a lift with Mike or Scott and they ask what you do here, you’d better tell them what value you are bringing…
Mike was also very public about being proud that Atlassian was not a high payer; he wouldn’t compete with Google etc. on pay at the time, yet people still wanted to work at Atlassian. He also didn’t hide the fact that they absolutely utilised visa holders’ lack of local market knowledge, when nearly half the office was on temporary visas at the time.
Maybe he was a great guy. But people change. It seems as though having your brains marinated in money is highly neurotoxic, no matter how you started off.
(Anyway: the main offence is using the term "jerk" instead of "wanker").
No amount of valuation can fix global supply issues for inference GPUs, unfortunately.
I suspect they're highly oversubscribed, thus the reason why we're seeing them do other things to cut down on inference cost (ie changing their default thinking length).
Wouldn't that be good? I remember back in the day you could only get Gmail thru an invite; it was an awesome strategy. "Currently closed for applications" creates FOMO. They'd just need the GPUs to actually be in relatively short supply. They could do it in bursts though, right? "Now accepting applications for a short time."
I'm not an internet marketer, but that sounds like a win-win to me. People feel special, they get extra hype, and the service isn't broken.
Are you sure it was fake scarcity for Gmail? IIRC they did it because they were worried about systems falling over if it grew too fast, and discovered the marketing benefits as a side effect.
maybe, but the concern imo is that the response to GPU shortages is increased error rates. they could implement queuing or delayed response times. it's been long enough that they've had plenty of time to implement things like this, at least in their web UI where they have full control. instead it still just errors with no further information.
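to sketch what i mean, a bounded queue in front of inference isn't much code. a minimal sketch in python, where the capacity numbers and run_inference are made up for illustration (obviously not their actual stack):

    import asyncio

    MAX_CONCURRENT = 8     # hypothetical capacity cap
    QUEUE_TIMEOUT_S = 120  # how long a request may wait in line

    slots = asyncio.Semaphore(MAX_CONCURRENT)

    async def run_inference(prompt: str) -> str:
        await asyncio.sleep(1)  # stand-in for the real model call
        return "response to: " + prompt

    async def handle_request(prompt: str) -> str:
        try:
            # wait for a free slot instead of failing fast
            await asyncio.wait_for(slots.acquire(), QUEUE_TIMEOUT_S)
        except asyncio.TimeoutError:
            # only error after a real wait, and say why
            return "at capacity, queued for 120s, please try again later"
        try:
            return await run_inference(prompt)
        finally:
            slots.release()

even a dumb cap like this turns an instant error into a wait plus a meaningful message.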
i notice that as well. most of the time when i see those it has a retry counter also and i can see it trying and failing multiple requests haha. almost never succeeds in producing a response when i see those though, eventually just errors out completely.
> thus the reason why we're seeing them do other things to cut down on inference cost (ie changing their default thinking length).
The dynamic thinking and response length is, funnily enough, the best upgrade I've experienced with the service in more than a year. I really appreciate that when I say or ask something simple, the answer now just comes back as a single sentence without my having to manually toggle "concise" mode on and off again.
That implies that either the auth is too heavy (possible, ish) or their systems don't degrade gracefully enough and many different types of failures propagate up and out all the way to their outermost layer, ie. auth (more plausible).
Disclosure: I have scars from a distributed system where errors propagated outwards and took down auth...
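The usual medicine is a circuit breaker between layers, so a dead dependency gets short-circuited to a fallback instead of dragging auth down with it. A minimal sketch, with a made-up session-cache scenario at the bottom (not anyone's actual stack):

    import time

    class CircuitBreaker:
        """After repeated failures, stop calling the dependency for a
        cooldown period and serve a fallback instead of raising."""

        def __init__(self, max_failures: int = 5, reset_after_s: float = 30.0):
            self.max_failures = max_failures
            self.reset_after_s = reset_after_s
            self.failures = 0
            self.opened_at = None

        def call(self, fn, fallback):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after_s:
                    return fallback()  # breaker open: short-circuit
                self.opened_at = None  # cooldown over: probe again
                self.failures = 0
            try:
                result = fn()
                self.failures = 0
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()
                return fallback()

    # hypothetical usage: auth consults a session cache but degrades to a
    # slower local validation path when the cache is down, instead of 500ing:
    # breaker = CircuitBreaker()
    # breaker.call(lambda: session_cache.get(token), lambda: validate_locally(token))

The point is that the failure stops at the breaker instead of bubbling up to the outermost layer.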
Worked at Figma for 5 years. The author uses Figma as an example, but I think misses the point. They're so close though. Note these quotes:
> Both are very well-designed from first principles, but do not conform to what other interfaces the user might be familiar with
> The lack of homogeneous interfaces means that I spend most of my digital time not in a state of productive flow
There are generally two types of apps: general apps and professional tools. While I strongly agree with the author that general apps should align with trends, from a pure time-spent PoV Figma is a professional tool. The design editor in particular is designed for users who are in it every day for multiple hours a day. In this scenario, small delays in common actions stack up significantly.
I'll use the Variables project in Figma as an example (mainly because that was my baby while I was there). Variables were used on the order of billions of times. A 1s increase in the time it took to pick a variable was a net loss of around 100 human years in aggregate. We could have used more standardized patterns for picking them (i.e. Illustrator's palette approach), or unified patterns for picking them (making styles and variables the same thing), but in the end we picked slightly different behavior because at the end of the day it was faster.
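For anyone checking the math (taking 3 billion picks as an illustrative number, since the real figure was just "billions"):

    3e9 picks x 1 s extra = 3e9 s
    1 year ≈ 3.15e7 s
    3e9 s / 3.15e7 s per year ≈ 95 human years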
In the end it's about minimizing friction of an experience. Sometimes minimizing friction for one audience impacts another - in the case of Figma minimizing it for pro users increased the friction for casual users, but that's the nature of pro tools. Blender shouldn't try and adopt idiomatic patterns - it doesn't make sense for it, as it would negatively impact their core audience despite lowering friction for casual users. You have to look at net friction as a whole.
Good point. I think in the case of Figma the idiomatic design was set by Sketch and other UI design apps, which in itself was a step away from the idiomatic design established by Photoshop.
Barring an Internet giant suing them in court, it really feels like this is unlikely to change as most just don’t understand the why or the effect.
Someone needs to write a heist movie set in Spain where a key part of the plan is they steal something while La Liga is blocking some key security route.
I've been using linux as a daily driver since the start of the year.
There's still a long way to go before things "just work". It's about equivalent to Windows right now in terms of frustrations; it's just that the frustrations are more along the lines of "this is a bit wonky" instead of "this is malicious / this was their intended behavior". It's gotten a LOT better, don't get me wrong, but it's still far off from what a typical user would need.
I'd love to see either Valve or Nvidia really put in effort into creating their own hardware/software integration on a level that Apple does. I think it'd go a long way to legitimizing it.
Thank you for saying something I've been saying for a while: Linux definitely has jank, but I'm not convinced it's more janky than Windows.
I think people are so used to Windows' awfulness that they kind of forget about how much bullshit is associated with it. Linux has bullshit too, though it's getting better, but when people talk about Linux jank they're always smuggling in an implication of Windows having less jank, which I don't concede at all.
After I replaced my last Windows install a few years ago... checking Windows 11 on a friend's PC a few weeks ago was a nightmare. I considered myself a power user back in the day, and I really struggled. So now I have perspective from the other end, and it fits the picture: Windows is also jank, it's just familiar jank for most people.
There is another point too. The trend with Linux is upward, improving slowly over decades. For Windows it seems to be the reverse, and faster.
Ah, the age-old classic. Go into the registry, change these 3 keys that seemingly have zero relation to the problem at hand, restart your machine TWICE, and then it's fixed.
Out of the box, most popular distros require less tweaking and hammering into shape than a Windows 11 install, and that is a very important "feature".
I don’t think it’s a question that Linux has more jank. I recently installed a fedora spin on a laptop that came with regular Fedora installed originally and the WiFi didn’t work. That’s some janky stuff right there.
I've had wifi drivers not work with fresh installs of Windows as well, so that's hardly a unique Linux thing. I've also had to reboot Windows into special modes because apparently a driver from a Broadcom WiFi card was "unsigned", so I had to disable the check for that.
I've also had registry corruptions, and I've had unprompted updates brick my hard drive, because Windows Update is a terrible piece of software. As far as I can tell, the Windows "repair tools" have never worked for any human in history, and neither has System Restore.
I've had updates in Linux break things, but never so thoroughly as the time my mom got an automatic update after which she literally could not boot at all (I think the automatic update to Windows 11, which she did not want or ask for, screwed up the boot keys).
As much as I am a NixOS user myself, I think regular users should be directed to atomic, immutable distros (as is the case with most of the distros growing in popularity) because of the robust update system, along with the ease of rollback should something go wrong.
Regular distros (it really comes down to the package manager of choice) are much more brittle, perhaps even worse than Windows Update.
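(For example, on rpm-ostree based systems like Fedora Silverblue, a bad update is one `rpm-ostree rollback` away: the previous OS image is kept intact and bootable, which is a very different failure mode from a half-applied package upgrade.)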
Installing the equivalent of OS "slop" isn't Linux's fault... For better or worse the choice that is afforded by OSS licenses means that many of those choices will be bad.
I've been using Linux on the desktop off and on for 20 years. I used OSX for a while from 2008-2015, when they clearly had the best hardware and the OS was pretty nice. I've been using KDE since then, and I recently installed Bazzite (Fedora+KDE-based) on my sans-Windows gaming PC. I also started a new job this year where I have to use the company-provided MBP for compliance reasons, after not having used MacOS since 2015. So all this is pretty fresh in my mind, and I'll say that 2025+ KDE is by far the best out-of-box experience for power users. It mostly just works, and anything you want to tweak is easy to find in the settings. Setting up modern MacOS with things like more keyboard shortcuts for window management, focus-follows-mouse, or even remembering where windows were after waking from sleep requires you to buy an app or pay a subscription.
Linux may break more often, but you can almost always fix it with a quick google search. If it doesn't do what you want, there's certainly a setting or config or free app you can install that does.
MacOS may break less often, but when it does, you're mostly out of luck. It may do what you want more often, but if it doesn't, you have to buy an app, if it's even possible at all.
> Linux may break more often, but you can almost always fix it with a quick google search.
And that’s where the problem is: a quick google search. Laughably trivial for technical users.
Non-trivial for the majority of the population.
I love Linux and it is completely viable as a desktop operating system, but it’s far from ready for mainstream without better support.
For a rough analogy, I’d compare it to an old car before electronics. An old car is easy to work on and reliable if you do the maintenance. But an old car wouldn’t be reliable for somebody who doesn’t do any work on a car and outsources the maintenance.
Linux excels when things go right. The failure modes are substantially worse and far more likely to occur. It doesn’t matter if they’re rare. They’re not rare enough. And there isn’t support when things go wrong.
For example: It’s difficult to make the macOS UI fail to start through configuration. You never need to directly touch configuration. (And you can’t modify or delete macOS system files.)
With Linux, some normal problems just have to be solved in the terminal. This allows you to put the system into a configuration where the GUI does not start.
I have also been using Bazzite since March on my home desktop, and you are spot on. I think the main reason desktop Linux is difficult for the average person these days is laptops with weird hardware configurations.
I use MacOS at work and although it is miles better than windows, if I had a choice, I would also use Linux for work.
Me too. I was a Windows developer and Electronics Engineer for 30 years, so I went pretty conservative with Kubuntu LTS, and it's been a pretty slick experience. Gemini has been great tech support for all the CLI stuff and for getting all of my weirder hardware projects interfaced (100% success rate to date). I'm just considering whether to delete my Windows partition to put my MP3s on, as realistically I'm not going to get any more Windows programming gigs.
Yeah, for example a bunch of my system updates began showing scary error notes because somehow there is a header inconsistency between the amdgpu driver and the kernel.
I'm not regretting my choice, but it's also something where the average user can't just call Linux Support and get a "run X and it'll fix it" solution.
Do typical users care that much about a bit of jank, though? All the “typical users” I know are on spyware infested Windows laptops and just interpret the horrible shabbiness of the whole experience as being normal.
To add: it is jarring for me when I occasionally get to use someone's browser that does not have an ad blocker. It is indeed surprising what users have accepted as the norm.
Additionally, if you provide any service that offers image diffusion, you WILL get CSAM* being generated. Make sure you set up multiple layers to catch this. I built out Figma's safety pipeline and procedures for generated content; you'd be amazed what people try to make.
* Not going to debate whether or not AI imagery is CSAM here, but the point being you'll get users trying to generate ai images with subjects < 18yrs old.
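Roughly the shape to aim for: independent layers, each able to block on its own, with flagged content routed to human review rather than silently dropped. A toy sketch in Python, where every function body is a stub I'm making up for illustration (not Figma's actual pipeline):

    def prompt_filter(prompt: str) -> bool:
        # layer 1: block on the request text itself
        # (stub; a real system uses a trained text classifier)
        blocked = {"example_blocked_term"}
        return not any(term in prompt.lower() for term in blocked)

    def image_classifier(image: bytes) -> bool:
        # layer 2: classify the generated output image (stub)
        return True

    def known_hash_match(image: bytes) -> bool:
        # layer 3: perceptual-hash lookup against known-bad sets (stub)
        return False

    def allow_generation(prompt: str, image: bytes) -> bool:
        # block if ANY independent layer flags it, and route the
        # blocked generation to a review queue; never silently drop it
        if not prompt_filter(prompt):
            return False
        if not image_classifier(image):
            return False
        if known_hash_match(image):
            return False
        return True

The layering matters because each check has different blind spots; the prompt can look innocent while the output is not, and vice versa.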
I read the entire thing fwiw (pseudo-retired life helps with time here).
It looks like it was a collaborative effort across multiple teams, where each team (research, security, psychology, etc.) submitted ~10 pages or so. It doesn't feel like slop.
I'll copy the highlights here, but the tweets have imagery as well:
> The obvious hype - It crushes benchmarks across the board, and it does so with fewer tokens per task.
> Despite this, they don’t think it can self-improve on its own. There are still areas your average engineer does better with, and despite it accelerating tasks by 4x, that only translates to <2x increase in overall progress.
> They’re probably right to hold this back - its ability to exploit things is unprecedented. Any site running on an old stack right now or any traditional industry with outdated software should be terrified if this becomes accessible.
> Counterintuitively, while it’s the most dangerous model, it’s also the safest. They’ve also seen significant additional improvements in safety between their early versions of Mythos and the preview version.
> Anthropic does a really good job of documenting some of the rare dangerous behaviors the early models had.
> Interestingly, Mythos itself leaked a recent internal “code related artifact” on github.
> Mythos is also RUTHLESS in Vending Bench. Agent-as-a-CEO might be viable?
> The last thing: Mythos has emergent humor. One of the first models I’ve seen that’s witty. The examples are puns it came up with and witty slack responses it had when operating as a bot.
This is obviously in response to Mythos, but I'll actually defend their statement at that time - they were right to take a pause.
Think about how much things have changed in our industry since GPT-2 dropped. It WAS that dangerous, not in itself, but because it was the first model that really signaled a change in the field of play. GPT-2 was where the capabilities of these systems were really proven; up until that point it was all a neat research project.
Mythos is similar. It's showing things we haven't seen before. I read the full 250-page whitepaper today (joys of being pseudo-retired, I had the hours to do it), and I was blown away. Its capabilities for hacking are unparalleled, but more importantly, they've shown significant improvements in safety for this model just in the last month, and taking more time to make sure it doesn't negatively affect society is a net positive.
A great first step. I'd love to see a sin tax associated with this as well - ie, for adverts that do run, they should have to pay a % of the ad fee to the government.
I don't think people understand just how ingrained gambling is in Australian culture. One of the primary 3rd spaces for people in Australia is the RSL: technically clubs for veterans to get co-op-like services, but they have evolved into a 3rd space for everyone, offering food, alcohol, entertainment, and of course, sports gambling and "pokies" (poker/slot machines).
As a West Australian this is so interesting to me, because gambling culture is extremely niche here - but WA law is that pokies are only allowed at the casino, nowhere else. And thank fuck for that.
The "RSL sub-branch" is a not-for-profit welfare organisation, that looks after veterans. For the most part they are small and if they are lucky they get the use of a meeting room in the RSL club.
The "RSL Club" is a multimillion dollar commercial enterprise that looks after its own interests, conducts political lobbying, makes millions of dollars off gambling addicts and hands out token grants in the community to give the impression that they are there to benefit the community. Typically nothing to do with the RSL sub-branch.