throwaw12's comments

does it matter though?

For consumer users, yes, it is getting expensive. But corporations are willing to pay the price to stay competitive.


I have burned through more than my salary in AI API calls and nobody seems to care!

> Where are the salary bumps to reflect this?

Let me double the salary of all my employees, because productivity is 4x'ed now - said no capitalist ever.


> for the majority of power users you could stop now and people would be generally okay with it

Why stop though? Google didn't say AltaVista and Yahoo were good enough for the majority of power users, so let's not build something better.

When you have something good in hand and you see other possibilities, would you say let's stop, this is enough?


I disagree with your sentiment and genuinely think something big is coming. It doesn't need to be perfect now, but it could be good enough to disrupt the SaaS market.

> say all of this fluff when everyone knows it’s not exactly true yet

How do you know it's not exactly true? I am already seeing enterprise employees rely heavily on LLMs instead of using other SaaS vendors.

* Want to draft an email and fix your grammar -> LLMs -> Grammarly is dying

* Want to design something -> Lovable -> No need to wait for a designer, no need to get Figma access and have a designer design and present; for anything else, use Lovable or alternatives

* Want to code -> obviously LLMs -> I sometimes feel like JetBrains is probably in code red at the moment, because I barely open it anymore (saying this as a former heavy user)

To keep this message shorter, I will share my vision in the reply


Let's imagine AI isn't there yet and will never reach 100% accuracy, but you still need accountability; you can't run everything on autopilot and hope to make $10B ARR.

How do you overcome this limitation?

By making a human accountable. Imagine you come to work in the morning and your only task is to "Approve / Request improvement / Reject". You just press 3 buttons all day long:

* Customer is requesting pricing for X. Based on the requirements, I found CustomerA had similar requirements and we offered them $100/piece last month. What should I do? Approve / Reject / "Ask for $110"

* Customer (or their agent) is not happy with your $110 proposal. Using historical data and based on X, Y, Z, the minimum we can offer is $104 to grow our ARR 15% year-over-year. What should I do? Approve / Reject / Your input

....
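The "three buttons" loop above could be sketched roughly like this. A minimal sketch; all names here (`Proposal`, `review`, the decision strings) are made up for illustration and don't refer to any real agent platform:

```python
# Hypothetical sketch of the human-in-the-loop "Approve / Reject /
# Your input" flow. Names are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Proposal:
    summary: str          # the agent's reasoning shown to the human
    suggested_action: str # what the agent wants to do


def review(proposal: Proposal, decision: str, note: str = "") -> str:
    """The human's only job: press one of three buttons."""
    if decision == "approve":
        return f"EXECUTE: {proposal.suggested_action}"
    if decision == "reject":
        return "DISCARD: proposal rejected by human"
    # anything else is treated as a counter-instruction for the agent
    return f"REVISE: {note}"


p = Proposal("CustomerA paid $100/piece last month for similar specs",
             "quote $100/piece")
print(review(p, "input", "ask for $110"))  # -> REVISE: ask for $110
```

The key design point is that the agent never executes anything directly; every action is gated behind a human decision that can also carry free-form input.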


So what, you show up to work one day, hit your three-button rotation, and one day you end up in prison because your agent asked you to approve fraud/abuse, since the legal-ramifications section was outside of its context window? This is asinine.

nope, you won't.

your agentic platform vendor will be responsible for not showing you the important things


> your agentic platform vendor will be responsible for not showing important things

That'll be covered by their ToS/contracts so they won't be liable


Ok, you’re the platform vendor and just enabled fraud. Now what?

That's exactly how I play RPGs

I am on the opposite side of what you are thinking.

- Blocking access to others (cursor, openai, opencode)

- Asking to regulate hardware chips more, so that they don't face good competition from Chinese labs

- Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.

At this scale, I don't think there are good companies. My hope is on open models, and the only labs doing well on that front are Chinese labs.


The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, when you are in a time of low morality being rewarded. Competing in a rigged market by trying to be 100% morally and ethically right ends up in not competing at all. So companies have to pick and choose the hills they fight on. If you take a look at how people are voting with their dollars by paying for these tools...being a "good" company doesn't seem to factor much into it on aggregate.

Exactly. You can't compete morally when cheating, doing illegal things, and supporting bad guys are the norm. Hence, I hope open models will win in the long term.

Similar to Oracle vs Postgres, or some obscure closed-source cache vs Redis. One day I hope we will have very good SOTA open models where closed models compete to catch up (not saying Oracle is playing catch-up with Pg).


No good companies for you, yet you bet on Chinese labs! Even if you have no moral problems at all with Chinese authoritarianism, Chinese companies are as morally trustworthy as American ones. That is clear.

As it's often said: there is no such thing as a free product; you are the product. AI training is expensive even for Chinese companies.


I expect that to some degree the Chinese models don't need immediate profits, because having them as a show of capability for the state is already a goal met. They're probably getting some support from the state, at least.

> Even if you have no moral problems at all with Chinese authoritarianism

It's funny how you framed your sentence. Let's unpack it.

1. I didn't say Chinese companies are good; I said my hope is on open models, and only Chinese labs are doing well on that front

2. A Chinese company doesn't automatically mean it's about the regime. Maybe that's true in the US with the current admin; see how Meta, Google, and Microsoft immediately aligned with the current admin

3. Even when a company is associated with the Chinese regime, I don't remember the Chinese authoritarian regime kidnapping the head of another state, invading a bunch of countries in the Middle East, or supporting states committing genocide and ethnic cleansing (Israel in Gaza, the UAE in Sudan, and many more small militant groups across Africa and the ME) and authoritarian regimes like Saudi Arabia.

If you ask me to rate them by evil level, I would give the US 80/100 and China 25/100 - no invasions, no kidnapping of heads of state, no obvious terror acts - but an unfortunate situation with the Uyghurs.


> Blocking access

> Asking to regulate hardware chips more

> partnerships with [the military-industrial complex]

> only labs doing good in that front are Chinese labs

That last one is a doozy.


I agree, they seem to be following the Apple playbook. Make a closed off platform and present yourself as morally superior.

We are getting there. As a next step, please release something that outperforms Opus 4.5 and GPT 5.2 on coding tasks.

By the time that happens, Opus 5 and GPT-5.5 will be out. At that point will a GPT-5.2 tier open-weights model feel "good enough"? Based on my experience with frontier models, once you get a taste of the latest and greatest it's very hard to go back to a less capable model, even if that less capable model would have been SOTA 9 months ago.

I think it depends on what you use it for. Coding, where time is money? You probably want the Good Shit, but you also want decent open-weights models to keep prices sane rather than sama's $20k/month nonsense. Something like basic sentiment analysis? You can get good results out of a 30B MoE that runs at a good pace on a midrange laptop. Researching things online with many sources and decent results I'd expect to be doable locally by the end of 2026 if you have 128GB of RAM, although it'll take a while to resolve.

What does it mean for U.S. AI firms if the new equilibrium is devs running open models on local hardware?

OpenAI isn’t cornering the market on DRAM for kicks…

When Alibaba succeeds at producing a GPT-5.2-equivalent model, they won't release the weights. They'll only offer API access, as with the previous models in the Qwen Max series.

Don't forget that they want to make money in the end. They release small models for free because the publicity is worth more than they could charge for them, but they won't just give away models that are good enough that people would pay significant amounts of money to use them.


It feels like the gap between open-weight and closed-weight models is closing, though.

More like open local models are becoming "good enough".

I got stuff done with Sonnet 3.7 just fine; it needed a bunch of babysitting, but it was still a net positive for productivity. Now local models are at that level, closing in on the current SOTA.

When "anyone" can run an Opus 4.5 level model at home, we're going to be getting diminishing returns from closed online-only models.


See, the market is investing like _that will never happen_.

I'm just riding the VC powered wave of way-too-cheap online AI services and building tools and scaffolding to prepare for the eventual switch to local models =)

> Based on my experience with frontier models, once you get a taste of the latest and greatest it's very hard to go back to a less capable model, even if that less capable model would have been SOTA 9 months ago.

That's the tyranny of comfort. Same for a high-end car, living in a big place, etc.

There's a good workaround though: just don't try the luxury in the first place, so you can stay happy with the 9-month delay.


If an open-weights model is released that's as capable at coding as Opus 4.5, then there's very little reason not to offload the actual writing of code to open-weight subagents running locally and stick strictly to planning with Opus 5. That could get you masses more usage out of your plan (or cut down on API costs).

I'm going in the opposite direction: with each new model, I try to optimize my existing workflows by breaking tasks down so that I can delegate them to the less powerful models, and I only rely on the newer ones if the results are not acceptable.

I used to say that Sonnet 4.5 was all I would ever need, but now I exclusively use Opus...

I'd be happy with something close to or the same as Opus 4.5 that I can run locally, at a reasonable speed (same as the Claude CLI), and on a reasonable budget ($10-30k).

Try Kimi K2.5 and DeepSeek V3.2-Speciale

Just code it yourself, you might surprise yourself :)

> At this time, we are still openly committed to the 2nd amendment in defense of the 1st, 3rd, 4th, 6th and so on.

Has it ever worked? ICE is killing Americans, and you can't point your gun at them; it's not lawful.

If Trump tells ICE to seize all weapons in the US, or otherwise shoot people, you can't point your gun at them; it's not lawful.


> tells ICE to seize all weapons in the US

Outside the constitution, outside their jurisdiction, and not lawful.

Popular sovereignty always works. One way or another.

The most wrong opinion in this debate is the claim that we will be punished for open-carry demonstrations. Only an abuse victim excuses the attacker, and allowing the attacker to do what they want is just a catch-22.


ICE killing Americans is also not lawful. Without law, you have anarchy. In anarchy, everything is allowed. Good luck.

Thanks for the perspective. Do you use agents in your day-to-day work? Have your expectations of your developers increased, since they are now at least 20% more productive?

I think a Pump should happen in any new industry.

Pump == experimentation/innovation; different people look at it differently, so you get a variety of interesting ideas.

Dump == the natural consequence of over-supply; in this case, whatever isn't useful, we drop.

But to invent/discover new things, new paradigms, we need that Pump.

1. Look at the age of computers: we had so many different architectures and computer brands with their own hardware, now mostly converged to a couple of architectures

2. Operating systems: at some point everyone was writing operating systems, now converged to primarily 3

3. Programming languages: now converged to a small number of languages, but there were a bunch of languages; same with databases

4. Frontend frameworks, converged around React and Vue

5. Search engines

6. Social networks

We need that Pump


“Pump & Dump” has a very specific meaning here, something that is essentially a scam to cheat people out of their money, and not an actual honest attempt to create something new…

Pump and dump is not the same as competition resulting in winners and losers, it’s a grift by the losers to profit at the expense of users through deception.

And this is why the OOP article makes zero sense. How is Cursor a grift to profit at the expense of users? Users use Cursor because they want to write code faster. Whether writing code faster is an inherently good thing is up to the users. Was Visual Studio (the premium version once sold at ~$5,000, btw) a pump & dump?

For different people, it's at different stages.

What's cool about this is that every time a new engineer joins this wave, more interesting ideas come along and shape the "vibers" industry.

In my day-to-day job, I am now worried it will be very difficult to get a new job, because I vibe so much that I've almost forgotten how to write code from scratch.

Examples:

* Hey Claude, increase mem usage from 500MB to 1500MB in production - fire and forget

* Plan mode: What kind of custom metrics can we add to the Xyz query processor? Edit mode: add only 3, 4 and 9; later we will discuss 8

* Any other small changes I have...

I have primarily become a manager of a bunch of AI agents running in parallel. If you interview me and ask me to write some concurrent code, there is a high probability I will fail without my AI babies.

