Indeed. I have a sinking feeling they realized (or were otherwise convinced) that those models are too disruptive to existing businesses and whole market segments, particularly (but not only) when it comes to writing code. Coding is at least where it's most obvious to me how many different classes of companies could grow by capturing the value[0] that GPT-4 has been providing, pay-as-you-go, for a dozen cents per use. But the same must be true in many other industries.

Come to think of it, it must be the case, because the alternative would be pretty much every player on the market taking the hit and carrying on, or pretending not to see the untapped value that flows out of OpenAI for anyone to tap, for a modest fee.

As a prime example, I'd point out Microsoft and their various copilots - the code one, the Office 365 one, the Windows system-wide one, in varying stages of development. API access to GPT-4, as good as it originally was[1], directly devalues all of those.

It stands to reason that slowly making the model dumber, while also making it faster and cheaper to use, is the best way for OpenAI to safeguard the big players' markets - the "faster" and "cheaper" give perfect cover, while the overall effect is salting the entire space of possibilities: keeping the model good enough to entertain the crowd, but just not good enough to build solutions on top of, unless you're working for one of the players with special deals.

TL;DR: too many entities with money were unhappy about all the value OpenAI was giving to the world for peanuts, so the model is being gradually nerfed in a way that allows that value to be captured, controlled, and doled out for a hefty price.

(And if that turns out to be true, I'm going to be really pissed. I guess it's in the style of humanity to slow down the pace of development not because of ideology, not because of potential risks, but because it's growing too fast to fully monetize.)

--

[0] - I mean that in the nastiest, most parasitic sense possible.

[1] - I'm talking about the public release. That GPT-4 version seems to have already been weakened compared to pre-"safety tuning" GPT-4 (see the TikZ Unicorn benchmark story), but we can't really talk about what we never got to play with.



I've smelt the sweet scent of anticompetitive back-room dealing around OpenAI ever since they and Microsoft started forcing people to apply for API access, including telling them what use case they intended to use it for.

It just seemed obvious that if anyone suggested a use case that was actually really high value, MS would just take the idea, run with it for a month or two to see if it had legs, and keep it for themselves if it actually worked.

All while you're waiting in the queue to have your idea validated as "safe".


Meanwhile Sam Altman was on a worldwide press tour repeatedly saying that their mission is to “democratise” AI. They’re actually doing the exact opposite: gatekeeping, building moats, and seeking legislation to entrench a monopoly position.



