lima's comments | Hacker News

We tried this, but the quota for Opus models defaults to 0 on VertexAI and quota increase requests are auto-rejected.

Any tips?


What? There's no quota at all. You pay per token up to infinity.

There are in fact quotas and rate limits in VertexAI, albeit generous and automatically increased based on spend

No, unless you count tricks which are explicitly against ToS

They use a buffer battery, it's quite feasible with that.


Feels like such a waste for marginal gains?

With range as good as a modern EV's, the charge time already isn't particularly bad. I'd much prefer more chargers (so that you can combine charging with something else you were going to do anyway) over faster ones.


I tend to agree, but I think the strategy here is to convert people who stubbornly cling to gas vehicles because EVs somehow defy their expectations. I have been approached many times at highway rest stops by people who are curious and slightly skeptical about the EV value proposition. They see me hanging around the vehicle for a half hour and think “ugh, no thanks”, as if that’s all I do when I travel. What they’re not seeing is that I rarely use public chargers at all, because 99% of my charging is done either at home or at the charger in the parking lot at work. It’s really just road trips. Not to mention, if you’re an ICE owner hanging around at a rest stop long enough to notice that I’m hanging around, are you really that much faster on a road trip?!

Back on topic, I am ok with losing a little efficiency in the fast charging process if it means that more people switch away from a horribly inefficient and polluting technology.


You can still use OpenCode with the Anthropic API.


Yep. That's what I do. Just API keys and you can switch from Opus to GPT especially this week when Opus has been kind of wonky.


I pay $100/mo to Anthropic. Yesterday I coded one small feature via an API key by accident and it cost $6. At this rate, it will cost me $1000/mo to develop with Opus. I might as well code by hand, or switch to the $20 Codex plan, which will probably be more than enough.
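Back-of-the-envelope, that extrapolation is plausible. A sketch, assuming roughly Opus-class API pricing of $15/M input and $75/M output tokens (check Anthropic's current price sheet; the per-session token counts are my guesses, picked to match a ~$6 feature):

```python
# Rough sketch: extrapolate monthly API cost from per-feature usage.
# Prices and token counts are assumptions, not quoted figures.

INPUT_PER_MTOK = 15.00   # USD per million input tokens (assumed)
OUTPUT_PER_MTOK = 75.00  # USD per million output tokens (assumed)

def feature_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one coding session's token usage."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_PER_MTOK

# A ~$6 feature could be e.g. ~200k input + ~40k output tokens:
session = feature_cost(200_000, 40_000)  # 3.00 + 3.00 = 6.00

# Roughly one such session per hour of an 8-hour day, 21 working days:
monthly = session * 8 * 21  # ~= 1008, i.e. the ~$1000/mo figure above
```

Agentic harnesses also re-send a lot of context per turn, so input tokens tend to dominate real sessions; prompt caching changes the math considerably.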

I'd rather switch to OpenAI than give up my favorite harness.


Yeah I had a similar experience one time. Which is why I laugh when people suggest Anthropic is profitable. Sure, maybe if everyone does API pricing. Which they won’t because it’s so damn expensive. Another way to think about it is API pricing is a glimpse into the future when everyone is dependent on these services and the subscription model price increases start.


I don't get why people talk about ChatGPT as some great saviour though, they're in the same boat but just have more money to burn.


This is the intention. They do not want folks that can’t pay to use their service.


SOTA models cost SOTA prices. Nothing new there


Out of curiosity, what's your next monthly subscription in terms of price?


Electricity, $95/mo.


Now you got me thinking my electric company should start offering subscription tiers in these uncertain energy times...


Ours never will, they're a cartel, sadly. If you mean fixed subscription, next one is Netflix, I think, or my server provider at $40 or so.


My monthly "connection fee" is more than that (no solar, just EV). Your cartel needs to step it up!

For me it's $0.80/kWh during peak, $0.47 off peak, and a super off-peak of $0.15. I accidentally left a little mini 500W heater on all day while I was out, costing >5% of your whole month's bill!
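For what it's worth, that heater math checks out. A sketch using the rates above (the split of the 24 hours across the tiers is my assumption):

```python
# Sanity-check: a 500 W heater left on for a day vs. a ~$95/mo bill.
HEATER_KW = 0.5

# Assumed split of the 24 h across the quoted tiers (my guess):
hours_and_rates = [
    (6, 0.80),   # peak at $0.80/kWh
    (10, 0.47),  # off-peak at $0.47/kWh
    (8, 0.15),   # super off-peak at $0.15/kWh
]

cost = sum(HEATER_KW * hours * rate for hours, rate in hours_and_rates)
share = cost / 95  # fraction of a $95 monthly electricity bill

# cost ~= $5.35, i.e. just over 5% of the $95/mo bill quoted upthread
```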


Wow, what the hell.


[flagged]


Yes, you are doing it too with Anthropic and xAI. I don't get your point. xAI and OpenAI are a little worse? Maybe, but it's still very much fascism.


Quite a lot worse. Both OpenAI and xAI were among the largest donors of Trump's campaign

Musk was the largest individual political donor of the 2024 election [1] and Greg Brockman was the largest donor to Trump's "MAGA Inc" super PAC [2]

[1] https://www.washingtonpost.com/technology/2024/12/06/elon-mu...

[2] https://www.theverge.com/ai-artificial-intelligence/867947/o...


You’re right, Anthropic is quite a bit worse https://www.washingtonpost.com/technology/2026/03/04/anthrop...


Wait - are you missing all the context on this? Anthropic pushed back against this hard, there was a whole back and forth. I'm on mobile and can't look it up for you atm but if you google about this scenario, Anthropic definitely come out of this looking a lot better than OpenAI and xAI


Did you read the article? Or are you just replying?

Anthropic has literally been working with the DoD and Palantir for 2 years now. They were key to the Iran invasion.

If thats “looking better”, keep it.


Key word being “worked”, past tense

Now they’re blacklisted from government work (appeal pending) and OpenAI practically jumped to replace them immediately


You’re right it completely doesn’t matter they’ve been instrumental to ICE and the war in Iran. It’s fine now, their previous actions are excused.

Edit: the word you’re also looking for is “working” not “worked”. I have many friends on government contracts still using Claude.


If you evaluate fascism in terms of donation, yes.

But it is more about the political opinions, IMHO, and Anthropic doesn't sound more attractive than the competitors. Anthropic is very much to the right of the transhumanism spectrum (even if xAI and OpenAI are even farther).


IMO, OpenAI have either implicitly committed to becoming the IT service for Trump's secret police, or they've willingly signed up for the harsh retaliation Anthropic's getting, knowing that the Trump administration will inevitably try to push OpenAI around in the same way, if they meaningfully refuse to assist in domestic mass surveillance efforts.


Anthropic was fine doing the same, they just didn't want it done to Americans.


OpenAI agreeing to operate as Trump's secret police materially impacts your security as a European, though, because it cements Trump's power.


Again, though, both of them agreed to that. Anthropic just didn't want to spy on Americans.


You can argue a moral equivalence, I guess, but on a practical level, OpenAI's decision is more dangerous for everyone, because it will help to secure Trump as a dictator.


Or have Claude write the code and Gemini review it. (Was using GPT for review until the recent Pentagon thing.)


You can also review the code you ship yourself.


I certainly do -- but having Gemini review it first saves a lot of time.


'just API key' lol. just hundreds of dollars at a minimum


Yes. And many companies pay that.


[flagged]


I'm testing GLM-5 with Claude Code and OpenCode, just to stop consuming American models... So good so far!


Qwen works fine and requires paying no-one except a hardware vendor.


Fun fact: In Germany, the civil courts will usually take the case anyway if it has merit, but the winner ends up paying for the whole lawsuit if they failed to make an effort to resolve the dispute before suing.


And Hungary is pretty much a rogue EU state - their government did go full authoritarian and is aligned with Russia.


It's true, neither AWS nor GCP support spending limits. Only alerting.


It is worth noting that both products have had "student" tiers or similar, with fixed credit limits and a hard cliff.

In other words, they have already implemented hard limits. So not offering hard limits is a business decision, NOT a technical one; they're essentially hiding functionality they already have.

Make of that what you will. Anyone justifying it should be met with skepticism.


I have never heard of nor seen AWS student accounts.

There is a free tier, but that varies per service and won't limit anything anyway. It works as if it just gives you some credit to offset the costs.


AWS Educate "Starter" Accounts were exactly that[0]. They didn't ask for, or need, a credit card, and there was functionally no way to exceed the limit.

[0] https://www.geeksforgeeks.org/cloud-computing/aws-educate-st...

They also offered (may still offer) the same thing with AWS Academy.


Soft limits would be ideal (x/day with a maximum peak of y/minute), but hey, that's literally negative value to them (work to code, CPU time to implement, less income from "mistakes")
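A soft limit like that isn't hard to sketch, either: a daily spend cap plus a short-window peak cap, advisory rather than destructive (throttle new spend, delete nothing). The class and threshold names below are made up for illustration:

```python
import time
from collections import deque

class SoftSpendLimit:
    """Advisory limiter: cap spend per day and per minute.
    Throttles charges that would exceed a cap; never deletes anything."""

    DAY = 86_400    # seconds
    MINUTE = 60     # seconds

    def __init__(self, per_day: float, per_minute: float):
        self.per_day = per_day
        self.per_minute = per_minute
        self.events = deque()  # (timestamp, amount) pairs, oldest first

    def _total_since(self, now, window):
        return sum(a for t, a in self.events if now - t <= window)

    def allow(self, amount, now=None):
        """Record the charge if it fits under both caps; else throttle."""
        now = time.time() if now is None else now
        # Drop events older than a day to keep the deque small.
        while self.events and now - self.events[0][0] > self.DAY:
            self.events.popleft()
        if self._total_since(now, self.DAY) + amount > self.per_day:
            return False   # daily cap would be exceeded
        if self._total_since(now, self.MINUTE) + amount > self.per_minute:
            return False   # per-minute peak would be exceeded
        self.events.append((now, amount))
        return True
```

With e.g. `SoftSpendLimit(per_day=50, per_minute=5)`, a runaway loop gets throttled at $5/min and the account never books more than $50 of new spend in a day, while everything already provisioned keeps running.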


That's because you pay for stuff like storage. If you had a spending limit, they'd have to delete your data to stop your spend.


Or do what every other industry does, and trigger a conversation. Or even don't let you store more, or restrict access. Why the need to delete?

'By the way old chap, you have gone over your storage limit. Do you want to buy more or delete some stuff?'


>By the way old chap, you have gone over your storage limit. Do you want to buy more or delete some stuff?

Why does my AWS counselor sound British. Am I in eu-west-2?


Why shouldn't it? It's just a machine. Wouldn't the world be better if these messages varied a bit!


That's what the alarms you set up are for.


If I reduce my gdrive subscription they don’t simply delete what I have over the new (lower) limit. There is a grace period and it’s standard practice. Why should it be any different in this case?


I've heard that Google keeps Google Drive data around for up to two years if your subscription expired and your account is over quota. They could certainly do the same with other cloud storage.


If only we had the technology to exempt storage from spending limits.


As if that would solve anything? Depending on use, storage could be the largest line item (storage across databases, VMs, object storage).


If only there was a way to pause all the other stuff and only let storage keep costing you...


There is, and it would cause an outage while still not achieving the supposed goal of not going over budget. You don't want to be killing your customer's production over potential misconfigurations/forgotten budgets. Especially when you'd continue to bill them for the storage and other static things like IPs.

It's so much easier for them to have support waive accidental overages.


Everything gets more expensive?


How does this approach compare to the various Ghidra MCP servers?


There’s not much difference, really. I stupidly didn’t bother looking at prior art when I started reverse engineering and the ghidra-cli was born (along with several others like ilspy-cli and debugger-cli)

That said, it should be easier for a human to follow along with the agent, and Claude Code seems to have an easier time with tool discovery than with all the tool definitions stuffed into the context.


That is pretty funny. But you probably learned something in implementing it! This is such a new field, I think small projects like this are really worthwhile :)


I also did this approach (scripts + home-brew cli)...because I didn't know Ghidra MCP servers existed when I got started.

So I don't have a clear idea of what the comparison would be but it worked pretty well for me!


> But the implementation of Gerrit seems rather unloved

There are lots of people who are very fond of Gerrit, and if anything, upstream development has picked up recently. Google has an increasing amount of production code that lives outside of their monorepo, and all of those teams use Gerrit. They have a few internal plugins, but most improvements are released as part of the upstream project.

My company has been using it for years, it's a big and sustained productivity win once everyone is past the learning curve.

Gerritforge[0] offers commercial support and runs a very stable public instance, GerritHub. I'm not affiliated with them and not a customer, but I talk to them a lot on the Gerrit Discord server and they're awesome.

[0]: https://www.gerritforge.com/

