As someone who cares about latency, why would I add an extra HTTP request on top of every request my users are making? I can use an in-memory store to achieve the same thing in under 1ms.
10ms is an eternity, and 100ms added on to every request is insane.
I think this is for a different kind of rate limiting. E.g. you have a website with a couple of servers, and you want to limit the number of ChatGPT requests those servers are making per day. 10ms is meaningless next to ChatGPT's 12s p50.
You could spend an hour making your own persistence/coordination solution - or spend a minute and 5 bucks to call this service.
I can do this with two lines in my nginx config. I have no idea why I would ever consider using another service for this if all it does is rate limiting.
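For reference, the two-line nginx setup alluded to here would look roughly like this (zone name, size, and rates are illustrative):

```nginx
# In the http block: track clients by IP, allow 10 requests/second per IP.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# In the server or location block: apply the limit, permitting short bursts.
limit_req zone=perip burst=20 nodelay;
```

Note this only limits traffic hitting that one nginx instance, which is the trade-off the rest of the thread is about.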
I read the mostly negative comments, but rate limiting is actually a vast subject. It can be hacked together in less than half an hour, or it can take days. It depends on the degree of sophistication an app needs.
Sometimes we want to rate limit ourselves so we don't get temporarily banned from another service that enforces a limit on its API. This SaaS can be used for that too. I built a system like that for a customer of mine, and it took a lot more than half an hour; the free tier or the $5 per month would have cost my customer less. Furthermore, it would have worked globally across all our machines on different networks. I remember telling my customer that we would need a rate-limiting server to keep track of the global limits, but the answer was to keep working on features and keep using our imperfect solution until we ran into trouble.
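The self-imposed limit described here is often implemented as a client-side token bucket. A minimal single-process sketch (class name and parameters are illustrative - and, as the commenter notes, a shared store is still needed to enforce the limit across multiple machines):

```python
import time

class TokenBucket:
    """Client-side token bucket: allows bursts of up to `capacity` calls,
    refilling at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Before each upstream API call, check `bucket.allow()` and sleep briefly when it returns False.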
Adam here, the author of rlimit. Great to see some discussion and pointers on what I need to work on when it comes to pitching the project.
rlimit is a distributed counter: it keeps counters in sync across several regions, giving you consistent rate limiting everywhere.
If you run on a single machine, there is little to no benefit in using rlimit. If you use multiple machines (or a serverless runtime), you will likely need something to sync counters - this can be Redis, or you can just sign up for rlimit and have the counter replicated globally out of the box.
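The counter-sync approach can be sketched as a fixed-window counter against a shared store. In production the store would be Redis (INCR plus EXPIRE) or a service like rlimit; a plain dict stands in here so the logic is self-contained:

```python
def fixed_window_allow(store, key: str, limit: int,
                       window_s: int, now: float) -> bool:
    """Fixed-window rate limit: `store` maps a (key, window) bucket to a
    count shared by every machine. Returns True if this call is allowed."""
    window = int(now // window_s)          # which window we are in
    bucket = f"{key}:{window}"
    count = store.get(bucket, 0) + 1       # Redis equivalent: INCR bucket
    store[bucket] = count                  # ...followed by EXPIRE bucket window_s
    return count <= limit
```

Because every machine increments the same shared counter, the limit holds globally rather than per instance.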
This is not for you in particular, but as a community, we need to stop adding pricing pages to interview projects one can do in an afternoon. We are better than this. Yesterday, it was a compressor that was a wrapper around ffmpeg. Now, a literal wrapper around a library.
Everyone wants to get rich, and I understand that, but seeing these projects with a pricing page is pretty annoying. Of course I'll just move on and not buy it, but as developers we need to see this era as one for building better stuff, not just playing these finite games.
I don’t understand. As a founder, I worry not having a pricing page will signal to people that my product is way too expensive (“contact us”/fuck off you can’t afford it pricing).
What other option is there? Do you mean these products should be self hosted or open source? I am honestly asking.
That is the reason I included the pricing already: I did not want to give off "free" vibes. I launched a couple of other products in the past without pricing, and it just disappointed anyone who started using the product during the beta.
I am not charging for rlimit; billing is not even implemented yet. It is free to use while in beta, but there will be limits soon-ish, and I want to be upfront about future costs.
My answer was not related to price (it's a free and open market!) but mostly to the communication behind it:
Maybe you should focus on the value your product produces (rate limiting by itself is not a value!), the market you want to reach, etc., rather than just on rate limiting, because it's an issue addressed at almost every layer of the OSI model. If you sell your product as a developer, to developers, the developers will ask why, because they know it's quite easy to set up a simple rate limit. Show us why a simple rate limit is not enough (I'm honestly with you here), and money will rain.
Why is your product better than a two-line configuration? If that question isn't answered on your home page, I'll say, "Wtf, why does it have a pricing page?"
Maybe adopting a freemium subscription model is better if you want to distribute rate limiting, but mostly I don't think it's about that. It's more about company-level distribution: "I have a small business, I need to put something on top of it, and this something needs to have permissions, access controls, etc., across my domain. If so, take my money."
I didn't want to sound harsh, but it's a finite game, and I'm against those! (I recommend reading Finite and Infinite Games.)
I can see why this makes sense in serverless environments, in some cases — particularly where you already have an API proxy and coordinated state is overly expensive to think about or accomplish.
Good job posting and getting it done. That takes guts. Don’t listen to the comments; if you find a market, that’s all you need. I can definitely see Node people using this kind of thing happily.
I’m curious, why Google Cloud? What’s it written in?
To the people saying this model is silly or inside out: perhaps. But a determined founder finds and solves a problem.
Where they start often isn't where they end up. Maybe he'll add a proxy option that enforces the rate limit on the request itself, and then it's useful in both scenarios.
If you are going to be mean to this person I better see you shitting on LaunchDarkly et al, too.
I love building and solving problems. rlimit is not just a wrapper over some library, and I believe it fits perfectly into many application stacks.
Most of the comments come from misunderstanding the value - which is completely on me, and I will iterate. I am learning along the way, and I will get better at this, stay tuned.
I’m so confused. So I ping rlimit, and then if I’m limited I don’t proceed with the request I’d normally make next to my real API? Do “while” or “for” loops not exist? sleep(5)?
This has no way of enforcing the rate limit, right? Wouldn’t it be far better to actually enforce the rate limit at the request instead of having to make an extra request for each actual request? Is this even serious?
Holy shit, I thought this was a proxy that enforces the limit for you, at least that would have been reasonable. This is an API you call to tell you whether you should rate-limit that request!
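For readers puzzling over the flow being described, the check-then-call pattern reduces to something like this (names are hypothetical; rlimit's actual API may differ):

```python
def guarded_call(check_limit, do_request):
    """Ask the rate-limit service first, then either perform the real
    request or signal the caller to back off and retry later."""
    if check_limit():        # extra HTTP round-trip to the limiter
        return do_request()  # the actual upstream call
    return None              # rate limited: caller backs off
```

That extra round-trip on every request is exactly the overhead the latency objections in this thread are about.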