ghostwriternr's comments | Hacker News

I’ve got notifications set up for sandboxes, so I was glad to see this referenced here! I’m one of the maintainers, and happy to talk in detail about any aspect of sandboxes (Cloudflare or otherwise).


You could also use this script to set up everything: curl -sL https://raw.githubusercontent.com/codestoryai/binaries/main/... | bash (you can see the source of the script too)



Confusingly, "Sidecar" is the name Apple uses for their feature of having an iPad serve as a second screen/touch interface for a Mac:

https://support.apple.com/en-us/102597



And AIDE is also the name of an Android IDE and of the Advanced Intrusion Detection Environment.


In my previous life at Facebook, I was on the infra team and worked on a cluster manager similar to Kubernetes; that's where I first heard the term sidecar. Something about the concept of a binary running alongside the pod, powering other related things, felt strong. For the most part, this is the inspiration for naming the AI brain: sidecar.


I believe I get the metaphor. Why is it confusing?


Overloading the term with a second technological meaning.


Unfortunately, there are a distinctly limited number of words that can communicate a particular concept - the pigeon-hole principle suggests duplication will be inevitable.

Ambiguity's only really a problem when the same term is used multiple ways in similar contexts. I think it's very unlikely that anyone will get confused between these two usages.


So what? Just because Apple calls something a "Retina display" doesn't mean that others cannot call their stuff displays.


read it as sAIDEcar


CodeStory (YC S23) | Founding Engineer | London, United Kingdom | Onsite | https://aide.dev

We're building Aide, an AI-native, privacy-first IDE. With the advent of AI, we believe there is an opportunity and necessity to re-imagine the IDE to be a place where developers and AI are both first-class citizens, and AI can pair-program as well as complete tasks independently.

We're still a team of 2, and are looking for a founding engineer with hands-on experience working with LLMs beyond basic prompting. An ML background is not required, as long as you understand LLMs deeply. We're not looking to train our own models, except for fine-tuning when required for special use-cases. So we want someone who can get into the internals of LLMs: hosting models, optimising model inference for our use case, setting up evals, and (this may not be a skill you already have) building the streaming backend needed to power agents over the next few years.

If this sounds interesting, apply at https://www.workatastartup.com/jobs/65502, and we'll get back to you shortly!


Hey HN!

I'm Naresh, co-founder at CodeStory (YC S23), where we're building an AI-powered mod of VSCode. Though we're still in our very early days, we have some thoughts about how the IDE can evolve into a space where developers can leverage agents as additional developers who do complete end-to-end development. But we're getting there by exposing all of these capabilities to developers interacting with the IDE as well as to the AI agent.

One of the problems we wanted to tackle first was code search within an IDE and ways to augment it into a much more powerful experience. I'd love to hear your thoughts on the approach, any feedback, and open discussion about what we've built and how you'd like to see it evolve!


Hey! Author here. That's actually a fair point — in hindsight, I see why the article comes off as 'because AI' without clearly articulating how the developer experience changes. The current set of changes we have are indeed incremental additions.

That being said, this is intentional: we don't want to fundamentally disrupt the core workflows of today right away (as another comment said — IDEs of today are indeed built on years of user experience research). But we do see the potential to simplify or rethink the developer experience once AI is deeply integrated into every workflow. It's perhaps the same way I'd never have described writing code as a problem, but after having used Copilot, I don't want to go back to not having it.


100%. I'd like to hear your thoughts on this, but it's possible to argue that IDEs are purpose-built and a lot more powerful for programming. This is exactly where we think building the right tooling would allow us to give an IDE-like experience to AI models, so they can perform the same kinds of tasks a developer can today with IDEs.


Not yet, but I'd expect we'll eventually get there. Code always remains the source of truth and thus will have its place in every IDE for finer control, but I imagine shifting our primary interaction with code from the editor panel to an agent controller can become a powerful model once these agents become very good. A neat way to think about this is code reviews.

When I give a task to another developer on the team, they go off to understand the task, work on it, write tests, run everything and put it up for review (which is then also auto-evaluated by CI first). In this scenario, as reviewers, we already don't have an absolute need to read every line of code, as long as high-level design/project principles are followed and all scenarios are covered by passing tests.

AI agents can become this other developer picking up and completing end-to-end tasks, but rather than taking hours/days, they take a few seconds/minutes at most — so review comments can actually be shared and incorporated more quickly, within the IDE itself.


The comment was poking fun at how the page literally does not render anything except a solid green color with js disabled


lol that's true tho! we built it on a custom template :V


Hey! I'm the author of this post and this is certainly true! JetBrains products are the most powerful IDEs in the list due to the amount of custom tooling they've built for each language/framework, and they provide a great development experience out of the box.

But VSCode's approach of having an extension-first architecture is pretty powerful too (in fact, something I learnt only recently is how much of VSCode's implementation is built on the same extension architecture that is exposed for developers to extend the editor).


Did you look much into Emacs and its huge variety of packages?

Edited to add: it's built with a vast amount of Lisp and so much of the application can be live modified at runtime via Lisp. If you want an extensible editor, you can learn a lot from Emacs.


Thanks for this! I've personally never used Emacs but I'm definitely taking a look now. A wild idea we have is to make the interface between AI and dev tools extensible, because we can't possibly build fine-tuned integrations for the full world of developer tools. But if developers are interested and see the benefit of getting AI to handle these tools for them, they can implement the interface and let AI take over.

(I'm sorry for saying AI so many times haha. I've been staying away from saying LLMs because they're just the state of the art today, and that could change as soon as next year)


Didn’t quite understand the last bit about what text-davinci-003 is doing! Can you ELI5 that part? Very cool otherwise, being able to ask all sorts of questions and get back contextual information from the conversation they had.


So text-davinci-003 is basically prompt-engineered to generate our answer. I tried being very specific and used a template which, after a bit of debugging, I figured out works best; it was mostly intuition-based, written the way you would talk to a human: https://github.com/theskcd/spchengine/blob/b2bf55268e9245f70...
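If it helps to picture it, the call looks roughly like this — a minimal sketch assuming the legacy openai Python SDK (pre-1.0); the template text and function name here are just illustrative, the real template lives in the repo linked above:

    import openai

    openai.api_key = "sk-..."  # your OpenAI API key

    # Illustrative template: answer a question grounded in a conversation transcript.
    PROMPT_TEMPLATE = """Answer the question using only the conversation below.

    Conversation:
    {transcript}

    Question: {question}
    Answer:"""

    def answer_question(transcript: str, question: str) -> str:
        # Fill the template and ask text-davinci-003 to complete the answer.
        prompt = PROMPT_TEMPLATE.format(transcript=transcript, question=question)
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=256,
            temperature=0.2,  # low temperature keeps answers close to the transcript
        )
        return response["choices"][0]["text"].strip()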

