On top of this, you also cannot verify whether the position was filled through another job board. The company/HR might say "we filled it through a different portal," and since expired listings don't show up in most portals, there is no way to verify whether they actually hired someone from there.
I wonder where this sits compared with Cloudflare's product offerings. CF Containers seem like the closest match, but the pricing gap is huge. The bunny.net spend (if run 24/7) comes to almost a dollar an hour, which is easily ~$700 a month.
You can have a CF implementation for as little as $70-80 a month (running 24/7).
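Back-of-the-envelope, using the figures cited in this thread (the ~$1/hour and $70-80/month numbers come from the comments above, not from any official pricing page):

```javascript
// Rough 24/7 cost comparison; rates are the ones cited above, not official pricing.
const hoursPerMonth = 24 * 30;            // ~720 hours in a month of continuous running
const bunnyMonthly = 1.0 * hoursPerMonth; // ~$1/hour -> ~$720/month
const cfMonthly = 75;                     // rough midpoint of the $70-80 figure
console.log(bunnyMonthly, cfMonthly, (bunnyMonthly / cfMonthly).toFixed(1));
```

So at those quoted rates the gap is roughly an order of magnitude for an always-on workload.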
I thought the same, but when I threw a hello world on there and left it for a bit, the actual spend ended up way lower than expected.
They're not doing a great job communicating this, imo. It felt closer to cloud-function-style billing to me, where it's either active or sleeping, and the pricing alludes to that ("pay only for CPU time used")... yet the dollar number they give is for full-time use of 8 cores (I think)?
Props for the honesty, but they're not doing themselves any favours there imo.
Yes, you're right. The CPU you get is pretty beefy, and they do a really poor job of marketing it.
I am not sure the average budget constrained developer is the right audience for this product. Might be a steal for enterprises though if they can run intense tasks at the edge for just $600-700 bucks.
>I am not sure the average budget constrained developer is the right audience for this product.
Actually think that's their #1 audience.
They absolutely take an L on raw pricing, but their "don't send me a surprise $100k bill" story is very strong. Their billing is prepaid, and when I was looking at their WAF, even the free tier looked solid on rate-limiting features.
> intense tasks at the edge for just $600-700 bucks.
I'd venture that it would be the wrong tool for that anyway. I'd probably do something like: static pages on their CDN, dynamic requests to a magic container/function, and that passes them on to something VM-ish that isn't publicly exposed. To me, hundreds of dollars of spend on exposed edge compute just doesn't make sense.
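The split I have in mind amounts to a one-line routing rule at the edge (path prefix and tier names are made up for illustration):

```javascript
// Illustrative routing rule for the setup above: only /api/* ever
// reaches paid compute; everything else stays on the CDN layer.
function route(pathname) {
  return pathname.startsWith("/api/") ? "private-backend" : "cdn";
}
```

The edge function would then proxy "private-backend" requests to the internal, non-public VM and let the CDN answer everything else from cache.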
I think a lot of people won't like to hear this, but we use AI for almost everything internally. The noob way to go about it is to give it a couple of tasks and complete root access to your life. That's always going to end in disappointment. Instead, I realised, AI always needs an architect. Opinionated. Strategic. Authoritative.
It is quite good at following most orders, which is why you must ALWAYS be in the loop. AI can augment, but not replace. Maybe some day it will. But not now, even with the latest SOTA models.
I let AI write my emails for me, but I never give it the ability to hit send. I give AI access to my data to make informed decisions, but never let it make the final decision.
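The pattern is a simple gate; here's a minimal sketch (draftWithAI stands in for a real model call, and the callback represents the human decision):

```javascript
// Sketch of the "draft, but never send" gate described above.
// draftWithAI is a hypothetical stand-in for a model call.
function draftWithAI(context) {
  return `Hi ${context.to},\n\n${context.body}\n\nBest,\nMe`;
}

function prepareEmail(context, humanApproves) {
  const draft = draftWithAI(context);
  // The model never holds the "send" capability; the side effect
  // only fires on an explicit human yes.
  return humanApproves(draft) ? { status: "sent", draft }
                              : { status: "held", draft };
}
```

The point of the design is that "send" lives outside anything the model can reach; approval is a separate code path, not a prompt instruction.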
You may think I'm being paranoid, but I'm a very cautious person. I don't jump into new technology fresh out of the oven, and this has served me well for the last 15 years. (I learned my lesson courtesy of MongoDB.)
With AI, I am taking the same approach: experiment, understand the limits, and only then implement. It's working really well so far, and I've managed to automate tons of tedious tasks, from emails to sales to even meetings.
I don't use Clawdbot, or any library. I wrote my own wrappers for everything in Elixir. I use Instructor and the Ash framework with Phoenix, plus a bunch of generators, to automate tedious tasks. I control the endpoints the models are loaded from (OpenRouter) and use a multi-model flow so no one company has enough data about me. Only bits and pieces of random user IDs.
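Roughly the shape of the multi-model flow, sketched in JavaScript here rather than the Elixir original (the model names and the round-robin policy are made up; OpenRouter's actual API is OpenAI-compatible):

```javascript
// Hypothetical round-robin over providers, so no single vendor
// sees a user's whole history. Model names are examples only.
const models = [
  "anthropic/claude-3.5-sonnet",
  "openai/gpt-4o",
  "google/gemini-flash-1.5",
];

let cursor = 0;
function pickModel() {
  // Rotate to a different provider on every request.
  const model = models[cursor % models.length];
  cursor += 1;
  return model;
}

// A request wrapper against OpenRouter's OpenAI-compatible endpoint
// might look like this (never invoked in this sketch):
async function complete(prompt, apiKey) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: pickModel(),
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

A real rotation policy could also shard by task type or by conversation, so each provider only ever sees one slice.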
Look, whether you like it or not, AI is here, it is decent at some tasks, and the world is using it to automate stuff. You saw how Clawdbot exploded, right? Users getting hacked left and right didn't stop the adoption; yesterday there was yet another hack incident. AI solves a burning pain to the point where people don't care even if they get hacked.
Will I crash and burn? Maybe, you're right. But that's why I'm taking things at a very slow pace. Only automating internal tasks. Only things I trust AI to do. Very, very limited scope. What's really my alternative here?
Just sit back and watch the world move on? My alternative is not changing with the times and staying stagnant. That's not really a solution. Even if I do that, I want data points showing AI is really a dead end, rather than just assumptions. My alternative reality isn't a bed of roses: a lot of people at the top do believe they can replace me and my work (CTO) with AI, thanks to the hype. I'm just trying to evolve so I don't become a meme down the line. Can they actually replace me or my job with AI? Absolutely not, from what I'm seeing. But the hype of cutting costs is always attractive to people at the top. Just trying to stay alive, man, lol.
This is how the NPM ecosystem works. Run first, care about consequences later... because, you know, time to market matters more. Who cares about security?
This is not new to the NPM ecosystem. At this point, every year there are a couple of funny incidents like these. The most memorable one is from a decade ago, when someone removed a package and broke half the internet.
From Wikipedia:
  module.exports = leftpad;

  function leftpad (str, len, ch) {
    str = String(str);
    var i = -1;
    ch || (ch = ' ');
    len = len - str.length;
    while (++i < len) {
      str = ch + str;
    }
    return str;
  }
Every day I wake up glad that I chose Elixir. Thanks, NPM.
This is, imo, much worse than NPM. Full disclosure: NPM is part of our stack and I do not vet every package; I'd be out of a job if I took the time...
That said, packages can be audited, and people can validate that version X does what it says on the tin.
AI, however, is a black box. It doesn't matter what version or what instructions you give it: whether it does what you want, or even what it purports to do, is completely up to chance, and that to me is a lot more risk to swallow. Leftpad was bad, sure, but it was also trivial to fix. LLMs are a different class of pain altogether, and I'm not sure what lasting and effective protection looks like.
Wow, I was searching for this exact link for more than a decade (!). I remember seeing it on HN when I was new here and couldn't find the article ever again. Thanks for sharing!
Elixir has always been fashionable for building high-performance systems. In fact, it is better suited for AI applications than any other language or framework because of the BEAM architecture and the flexibility of the language itself. I wish more people gave it a chance. You get insane performance at your fingertips, a lot of scalability out of the box, and code that is, by default, less error-prone compared to other dynamic languages.
Elixir has a LangChain implementation of the same name. And in my opinion, as a user of both the Python version and the Elixir version, the Elixir one is vastly superior, and more reliable too.
This agentic framework can co-exist with LangChain if that's what you're wondering.
Love this! The timing couldn't be more perfect. I had to write my own agent framework with a mix of GenServers and Oban, and it's honestly a pain to deal with. This looks like it will remove a lot of that development pain. Thank you so much!
> The weirdest part is how regular people pick sides and defend their billionaire
Someone told me in another comment that it's possibly bot activity. I suspect so too, because in a tech forum like HN, a top-voted comment can shift the entire focus/narrative of any given issue. I know there are a lot of mods here to prevent this sort of thing, but given how good LLMs have gotten, I wonder if we are at a point where humans can even discern a mix of human and AI involvement in online activity (such as commenting).
It's not only single comments: if you surround people with a sea of opinion, they will definitely start swimming in your direction. Though that's probably more of a factor on Reddit.