KenCochrane's comments | Hacker News


Here is a link to a youtube video that explains it some more. https://www.youtube.com/watch?v=0GD63adFy7g


For those who don’t pay for a business insider account: http://archive.today/XGUl3


I think they are similar but a little different. From Dagger's website: "Dagger is a programmable CI/CD engine that runs your pipelines in containers." Kargo is a tool that works with your existing CI and CD (Argo CD) to help promote your code from one stage (Dev, UAT, QA, Prod, etc.) to the next.

Kargo has native support for Argo CD today, but one of its goals is to be agnostic, so it could probably work with Dagger or other tools in the future.


I'm looking forward to seeing support for other GitOps tooling. Argo CD makes sense initially since this is by the same creators, but it would be nice to see agnostic tooling that can support other GitOps tools like Flux. I hope this is something that we will actually get to see and isn't just an empty promise.


Cool to see you around :)


Hey Nick, it has been too long. I hope you are doing well. :)


For those who are curious, here is the GitHub repo: https://github.com/akuity/kargo



Maybe I missed something, but how do you handle the fact that your nginx server is a single point of failure? If that goes down, traffic can’t get to your web servers.

Do you have more than one, and DNS load balance, or do you just live with the risk?

One of the main reasons why I use an ALB/ELB is so that I don’t have that SPOF. If you found a way around that, please share, I would love to know, so I can save some money :)
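For context, the DNS-load-balance idea above usually means publishing A records for two or more front-end boxes and failing over to whichever one answers its health check. A minimal sketch of that failover logic (my own illustration, nothing from this thread; `first_healthy` and `probe` are made-up names, and in practice `probe` would be an HTTP GET against a health endpoint):

```python
def first_healthy(hosts, probe):
    """Return the first host whose health probe succeeds, else None.

    `probe` is any callable taking a host and returning True when it
    is up; network errors are treated the same as an unhealthy host.
    """
    for host in hosts:
        try:
            if probe(host):
                return host
        except OSError:
            continue  # unreachable host: move on to the next one
    return None
```

With DNS round-robin both nginx boxes sit behind one name, and a small watchdog (or retrying client) using logic like this ends up on the surviving box.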


His database is a single point of failure too.

I think it's highly unprofessional to use a setup like this in production. Looking at his product, it seems like one whose downtime would have a big impact on its clients.


I hate to admit it, but the nginx server in fact is a SPOF.

We currently handle it by setting up alerts all over the place so I can take quick action if something goes sideways, but other than this I have not really found a way around it.

We also keep up-to-date snapshots of all our instances so that I can get another server running ASAP during a calamity.


Too bad they didn't list the pricing; it would be nice to know how much it will cost once released.


Hey KenCochrane, I’m the Product Manager on this product at DigitalOcean. VonGuard is right, you only pay for the worker nodes (based on our Droplet pricing, there’s no premium) and we take care of the master. Our standard pricing lives here: https://www.digitalocean.com/pricing


Right now, GKE charges $18/month for a load balancer on top of node costs, which is costly for small scale/personal projects. Will DigitalOcean have anything similar?


Jamie from DigitalOcean here. Currently we'll deploy our DigitalOcean Load Balancer on your behalf, which is $20 a month, but we are also investigating other options. If you have any thoughts on how this should work, or what specifically you'd be looking for, I'd love to hear them.


Speaking personally, I'd rather opt out of the Load Balancer altogether and instead have a floating IP automatically set up across the workers. Ingresses are easy enough to set up so that would complete the picture.

I think having the Load Balancer option is important for simplicity, but I feel a lot of DO customers (such as myself) opt to use DO for optimizing cost as well. It's a balance.
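For anyone curious what the floating-IP approach could look like, here is a rough sketch (my own illustration, not a supported DigitalOcean feature; the image name is a placeholder): run an ingress controller as a hostNetwork DaemonSet so a floating IP pointed at any worker node reaches it directly, with no cloud Load Balancer in between.

```shell
# Illustrative only: bind an ingress controller to the workers' own
# network namespace, then point a DO floating IP at a worker node.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true        # bind :80/:443 directly on each worker
      containers:
      - name: controller
        image: INGRESS_CONTROLLER_IMAGE   # placeholder: substitute a real controller image
EOF
```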


Will it be possible to run it in a single-node scenario?


Yes, you’ll be able to spin up a single node cluster.


Will there be a minimum size for a worker node?


Currently it’s our $5 droplet (1GB RAM, 1 vCPU), but if you have a use case for smaller nodes I’d love to hear about it!


Awesome! The TechCrunch article [0] shows "16GB -> 192GB" in the screenshot so I just wanted to confirm.

[0] https://techcrunch.com/2018/05/02/digital-ocean-launches-its...


What about a competing product to AWS's lambda?


Jamie from DigitalOcean here. Kubernetes on DigitalOcean is the first step for us to enable more managed services like Lambda. You will be able to deploy projects like OpenFaaS or Fn very simply, but currently you would still need to determine your node pool for capacity.


DING DING DING

If you guys offered a lambda competitor with similarly competitive compute/bandwidth pricing I'd be all over it in a second.


You could run Kubeless on k8s on DO


Excellent, thanks for sharing!


It's free. Just pay for nodes.


That's pretty cool. I have been thinking of building something similar, but for Python. How hard is it to add new languages?

What is the tech stack?

Does it cost much to keep it running?


Thanks!

Adding new languages is as easy as the package manager makes it... which is normally still quite hard! The core logic for Dependabot is open-source here, including all the language-specific logic for Ruby, JS and PHP, and a starter (lots of work still required) for Python: https://github.com/gocardless/bump-core.
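To give a feel for the per-language work involved, here is a toy sketch of my own (not bump-core's actual code; `parse_requirements` and `outdated` are made-up names): for Python you would parse the pin file, fetch the latest versions (passed in as a dict here, in reality from the PyPI JSON API), and diff the two.

```python
def parse_requirements(text):
    """Parse 'name==version' lines into a dict, ignoring comments."""
    pins = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip()] = version.strip()
    return pins

def outdated(pins, latest_versions):
    """Return {name: (current, latest)} for packages with a newer release.

    Naive numeric-tuple comparison; real tooling would use proper
    version parsing (pre-releases, epochs, etc.).
    """
    result = {}
    for name, current in pins.items():
        latest = latest_versions.get(name)
        if latest and tuple(map(int, latest.split("."))) > tuple(map(int, current.split("."))):
            result[name] = (current, latest)
    return result
```

The hard part Dependabot handles beyond this is writing the bumped versions back in each ecosystem's native file format and resolving transitive constraints, which is where most of the per-language effort goes.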

For the app itself, we used Ruby (because we'd built the original core gem, which was https://github.com/gocardless/bump, in Ruby at a work hackathon years ago).

Costs under £50 a month to keep running at the moment, creating about 2,000 PRs a month. We could really do with getting it into the GitHub marketplace so we can start charging people and cover those costs!


It already exists for Python: https://pyup.io/


