cnunciato's comments | Hacker News

Buildkite also has hosted runners (which they call agents): https://buildkite.com/docs/pipelines/hosted-agents


It does, but that came second.

It was originally (and is still most often) used by those who want to self-host runners.


You’re not alone!


> It’s a linguistic uncanny valley, close enough to human to be recognizable, but different enough to be repulsive.

I love this. It's exactly how I feel when I read AI-generated content. A little bit sick, not really sure why.


>Lack of local development. It's a known thing that there is no way of running GitHub Actions locally.

This is one thing I really love about Buildkite[0] -- being able to run the agent locally. (Full disclosure: I also work for Buildkite.) The Buildkite agent runs as a normal process too (rather than as a Docker container), which makes the process of workflow development way simpler, IMO. I also keep a handful of agents running locally on my laptop for personal projects, which is nice. (Why run those processes on someone else's infra if I don't have to?)

>Reusability and YAML

This is another one that I believe is unique to Buildkite and that I also find super useful. You can write your workflows in YAML of course -- but you can also write them in your language of choice, and then serialize to YAML or JSON when you're ready to start the run (or even add onto the run as it's running if you need to). This lets you encapsulate and reuse (and test, etc.) workflow logic as you need. We have many large customers that do some amazing things with this capability.

[0]: https://buildkite.com/docs/agent/v3


Are you two talking about the same thing? I believe the grandparent is talking about running it locally on development machines, often for testing purposes.

Asking because GitHub Actions also supports self-hosted runners [1].

[1] https://docs.github.com/en/actions/hosting-your-own-runners/...


Same thing, yeah, IIUC (i.e., running the agent/worker locally for testing). It's conceptually similar to self-hosted runners, yes, but also different in a few practical ways that may matter to you, depending on how you plan to run in production.

For one, with GitHub Actions, hosted and self-hosted runners are fundamentally different applications: hosted runners are fully configured container images (with base OS, tools, etc. on board), whereas self-hosted runners are essentially thin, unconfigured shell scripts. This means that unless you're planning on using self-hosted runners in production (which some do, of course, but most don't), it wouldn't make sense to dev/test with them locally, given how different they are. With Buildkite, there's only one "way" -- `buildkite-agent`, the single binary I linked to above.

The connection models are also different. While both GHA self-hosted runners and the Buildkite agent connect to a remote service to claim and run jobs, GHA runners must first be registered with a GitHub org or repository before you can use them, and then workflows must also be configured to use them (e.g., with `runs-on` params). With Buildkite, any `buildkite-agent` with a proper token can connect to a queue to run a job.

There are others, but hopefully that gives you an idea.


Pulumi engineer here. We did, as an experiment (and labeled it as such), but we believed we'd have much more control over it than we actually did. After we began hearing these reports, we started taking the pages down. The vast majority have since been converted to return HTTP 410 with noindex directives, yet Google still hasn't responded. It's been frustrating for us as well.


The obvious difference, though, is that you knowingly use all of these tools; you get to choose whether and how you use them. AI, like the weapons you point out, can invade your life without you knowing it even exists.


It's actually the other way around with AI. If you want to do a web search, you have to disclose your keywords to a search engine. If you post on social networks, you're in plain sight. If you use a mobile phone, you broadcast your location.

But if you use your local models, it's more like running a linux box, it is private, customizable and unfiltered. LLMs will reverse the centralization trend we have seen with search and social. They can create a "safe space", "a room of one's own" where we can be creative and unrestricted.

LLMs promise the privacy the internet never gave us. Anyone remember how private we felt online 20 years ago? You can't download a Google or a Facebook, but you can download a Mistral and even run it on a normal laptop.

https://en.wikipedia.org/wiki/A_Room_of_One%27s_Own


> But if you use your local models, it's more like running a linux box, it is private, customizable and unfiltered. LLMs will reverse the centralization trend we have seen with search and social. They can create a "safe space", "a room of one's own" where we can be creative and unrestricted.

I really like this idea too. Running LLMs locally reminds me of running Linux locally: it wasn't full-on UNIX, but it was as close to a real operating system as regular people could get. Maybe the LLMs you can run at home aren't bleeding edge, but they're as close as you can get. Notice how the technology companies are already lobbying authorities hard to try to keep LLMs under their exclusive control, hosted in their "cloud" and accessible only by paying their subscription. The generation that created the OSS movement and its best success stories, GNU and Linux, is doing its best to make sure it never happens again.

edit: fixed some bad grammar


The amount of people that will run their own models is negligible at the scale of the society. You also can't choose to cut yourself off from the bad outcomes, no matter how much you try. Propaganda 3.0 might not affect you directly, but the political outcomes it produces will.


> The amount of people that will run their own models is negligible at the scale of the society

Let me transpose your sentence back to 1980:

"The amount of people that will run their own computers is negligible at the scale of the society"

(I was 8 in 1980. People literally said this.)

All someone needs to do is to productize a whole-house LLM that does voice to text, runs the LLM, does text back to voice (possibly mimicking whatever voice you want), and you'd have an Alexa replacement that is far smarter. I'd buy it in a heartbeat just so that I wouldn't have to do maintenance on it.


But people don't run their own computers anymore. A phone on its own is nothing without email, maps, messenger apps, a browser, cloud storage, etc. So yes, nobody runs their own computer anymore. It'll be the same for LLMs. Can they run on commodity hardware? Yes. Will people go to lengths (including buying better hardware) to run gpt-lite instead of the much better internet-aware cloud LLM gpt8?


It will become mainstream in a few years, when every laptop, cellphone, web browser, and operating system ships with local LLMs. For now it's a bit hard, but only a bit.


The same AI that invades my life to do harm can "invade" my life and be (extremely) helpful.

And the latter is more likely. Because the willpower of an AI is just the mimicked willpower of the people who made it, and there will always be more good people than bad people.


Pulumi docs engineer here. Really appreciate that feedback -- what sort of thing did you have to go digging for? We're definitely investing here so I'd love to hear more about what you were looking for and didn't find (or what you discovered along the way that we could've made clearer). Thanks!


It's been a few months, so I can't remember the details, but one thing I wanted to use was EventBridge Scheduler, along with a few other features like it. I couldn't find anything on the website that referenced them, but I did find the relevant classes defined in the code and was able to get everything working within a few minutes. (Kudos for the comments around the code!)


There are more than a few resource types lacking import examples/details in the docs.

Other than that, for most resources I use the docs are pretty good.


Yep, this, exactly.


Pierce the cover with a knife and cut along the edges, then remove the goods through the flap. Works for us. :)


and do it over the sink


Pulumi engineer here. There's no restriction; you're free to use the Individual Edition commercially if you like.


Really? Why? Trying something new before becoming a "master" in something similar seems like a totally reasonable concept to me. I work on Pulumi, and even I don't actively discourage people from trying its alternatives. Use what makes you happy, I say.

