
Yes, we are using an LLM for some parts of the code generation, specifically GPT-4. In the medium term, we plan to go lower in the stack and have our own AI model. We broke the process down into modular steps so that we use LLMs only where they're most needed, and rely on rule-based methods elsewhere (e.g. for fixing compilation errors). This maximizes the accuracy of the transformations.
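A minimal sketch of what such a modular pipeline could look like, assuming a two-step flow: an LLM-backed step for the part needing semantic understanding, followed by a deterministic rule-based repair pass. All function names and the error class handled are hypothetical, not taken from the product described above.

```python
# Hypothetical sketch of a modular code-transformation pipeline:
# an LLM step only where judgment is needed, rule-based passes elsewhere.

def llm_translate(source: str) -> str:
    """Stand-in for the LLM-backed step (e.g. an API call to a model).

    Simulated here as a simple rewrite so the sketch is runnable.
    """
    return source.replace("OLD_API(", "NEW_API(")

def fix_missing_semicolons(code: str) -> str:
    """Rule-based step: deterministic repair of one known error class,
    no LLM involved (cheaper and exactly reproducible)."""
    fixed = []
    for line in code.splitlines():
        stripped = line.rstrip()
        if stripped and not stripped.endswith((";", "{", "}")):
            stripped += ";"
        fixed.append(stripped)
    return "\n".join(fixed)

def pipeline(source: str) -> str:
    # Step 1: LLM handles the semantic translation.
    translated = llm_translate(source)
    # Step 2: rule-based pass fixes mechanical compilation errors.
    return fix_missing_semicolons(translated)

print(pipeline("int x = OLD_API(1)"))  # -> int x = NEW_API(1);
```

Keeping the mechanical fixes out of the LLM call means those steps are deterministic and individually testable, which is where the accuracy gain comes from.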


Modular use of an LLM over a problem-specific workflow skeleton is the winning ticket. Nicely conceptualized!


Do you have some sort of automatic test suite for what's generated by the LLM prior to release? Just to ensure what it returns won't break downstream?


Yes, internally we have separate models that produce tests the final output has to pass before it is presented to the user. In addition, you can define your own tests on the platform, and we will ensure the transformations produced pass those tests before deployment. We also have helpful versioning and backtesting features.
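A rough sketch of the test-gating idea described above, assuming user-defined tests are predicates over a candidate transformation; the function names and test shapes are illustrative, not the platform's actual API.

```python
# Hypothetical sketch: deploy a generated transformation only if it
# passes every user-defined test.

from typing import Callable

Transform = Callable[[str], str]

def gate_deployment(candidate: Transform,
                    tests: list[Callable[[Transform], bool]]) -> bool:
    """Run all tests against the candidate; True means safe to deploy."""
    return all(test(candidate) for test in tests)

# A candidate transformation produced by the generation pipeline.
candidate: Transform = lambda s: s.upper()

# User-defined tests the transformation must satisfy before deployment.
tests = [
    lambda t: t("abc") == "ABC",  # expected behaviour on a sample input
    lambda t: t("") == "",        # edge case: empty input is preserved
]

print(gate_deployment(candidate, tests))  # -> True
```

Combined with versioning, a failing gate can simply leave the previously deployed transformation in place rather than breaking anything downstream.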



