A big gorilla comes in and underprices the entire market. They can do that because they already have tons of money. They do this long enough to break the market and drive the competition out of business. Once the competitors are gone, they jack up prices to unprecedented levels, because there are no alternatives left, and bleed the market for everything it's worth.
This presupposes some athletic new competitor can’t enter the market and take the margin off the fat incumbent.
It’s why we have capital markets: If capturing a profitable opportunity requires spending some money, someone who wants to profit will send that money your way.
But that should only be because they genuinely have lower margins or more efficient operations. It should not be funded by external money (other departments or investors) just to undercut the competition, force them out, and then raise prices above the previous level.
So a simple law could be that prices can only be raised back to where they were before the competition was squashed.
You do it the same way you fix every other disaster of a code-base. You add a ton of tests and start breaking it up into modules. You then rewrite each module/component/service/etc. one at a time using good practices. That's how every project gets out of the muck.
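Roughly like this, as a minimal sketch (all names and values below are hypothetical stand-ins, not anyone's actual code): pin the current behavior down with characterization tests, then rewrite one piece behind the same interface and check the rewrite against the old behavior before moving on to the next piece.

```python
# Minimal sketch of the "pin down behavior, then rewrite one piece" loop.
# Every name and value here is a hypothetical stand-in for real legacy code.

import unittest


def legacy_compute_invoice(customer_id: int, month: str) -> float:
    """Stand-in for a tangled legacy function you don't fully understand yet."""
    # In a real codebase this would be the untouched legacy implementation.
    return 137.50 if customer_id == 42 else 0.0


def compute_invoice_v2(customer_id: int, month: str) -> float:
    """Rewritten module: same interface, cleaner internals."""
    return 137.50 if customer_id == 42 else 0.0


class CharacterizationTests(unittest.TestCase):
    """Lock in whatever the legacy code currently does, correct or not."""

    cases = [(42, "2024-01"), (7, "2024-01")]

    def test_rewrite_matches_legacy_behavior(self):
        # The rewrite only ships once it reproduces the recorded behavior.
        for customer_id, month in self.cases:
            self.assertEqual(
                compute_invoice_v2(customer_id, month),
                legacy_compute_invoice(customer_id, month),
            )


if __name__ == "__main__":
    unittest.main()
```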
That's a big, slow, and expensive process though.
Will Anthropic actually do that or will they keep throwing AI at it and hope the AI figures this approach out? We shall see...
With the competition nipping at their heels, I don't think they have time to do that. They're stuck with what they have, at least until innovation settles down a little.
I've been hitting the limit a lot lately as well. The worst part is that when I try to compact things and check my limits using the / commands, I can't make heads or tails of how much I actually have left. It's not clear at all.
I've been using CC until I run out of credits and then switch to Cursor (my employer pays for both). I prefer Claude but I never hit any limits in Cursor.
Thanks. I don't know why, but I just couldn't find that command. I spent so much time trying to understand what /context and the other commands were showing me that I got lost in the noise.
How does Ghostty break scroll? I've never noticed this and I just tested, seems to work fine. My problem is the lack of a scrollbar but I know they are working on that.
It's because they run 24/7 in a challenging environment. They will start dying at some point, and if you aren't replacing them you will have a big problem when they all die en masse.
These things are like cars, they don't last forever and break down with usage. Yes, they can last 7 years in your home computer when you run it 1% of the time. They won't last that long in a data center where they are running 90% of the time.
A makeshift cryptomining rig is absolutely a "challenging environment", and the vast majority of GPUs that went through that are just fine. The idea that the hardware might just die after 3 years of usage is bonkers.
Crypto miners undervolt GPUs for efficiency, and in general crypto mining is extremely lightweight on GPUs compared to AI training or inference at scale.
With good enough cooling they can run indefinitely! The vast majority of failures happen either at the beginning, due to defects, or at the end, due to cooling. The idea that hardware with no moving parts (except the HVAC) is somehow unreliable comes out of thin air.
How could it not be the issue? We're already drowning in corporate and malicious garbage. My email has become nigh on unusable because of all the bad actors and short-sighted thinking. What used to be a powerful tool for productivity and for keeping in touch with friends and family is now a drain on my day.
That was bad enough, but now AI is enabling this rot on an unprecedented level (and the amount of junk making it through Google's spam filters is testament to this).
AI used in this way without any actual human accountability risks breaking many social structures (such as email) on a fundamental level. That is very much the point.
Sites like lmgtfy existed long before AI, because people will always take shortcuts.