Hacker News | piterrro's comments

Who remembers Graphite and Carbon? This was 2010 era…


Is it beneficial for log compression if you log JSON but don't know the schema upfront? I'm working on a log compression tool [0] and I'm wondering whether OpenZL fits there.

[0] https://logdy.dev/logdy-pro



I've been developing AI apps for the past year and kept running into the same issue: non-technical people would ask me to adjust the prompts, wanting a more professional tone or better alignment with their use case. Each request meant diving into the code, changing hardcoded prompts, then testing and redeploying. I also wanted to experiment with different AI providers, such as OpenAI, Claude, and Ollama, but switching between them required yet more code changes and deployments.

The existing solutions I explored were too complex and geared towards enterprise use, which didn't match my lightweight requirements. So I built Hypersigil, a user-friendly UI for prompt management: centralized prompt control, input from non-technical users, prompt updates without redeploying the app, and prompt testing across multiple providers simultaneously.

GH: https://github.com/hypersigilhq/hypersigil

Docs: https://hypersigilhq.github.io/hypersigil/introduction/


It's worth taking a look at the prompts in the repo if you're keen to understand how apps like these work. It's interesting to see that I feed a basically similar process/rules to the LLM when building locally. I also have a similar process for the backend and a nice flow for connecting FE and BE with API contracts, which works perfectly.


Nice tool! I'm working on something similar, but focused on repeatability and testing across multiple models/test data points.


Do you have a link? I'd like to see it.

Any specific feedback so far?


After building several full-stack applications, I discovered that Large Language Models (LLMs) face significant challenges when implementing features that span both backend and frontend components, particularly around API interfaces.

The core issues I observed:

- API Contract Drift: LLMs struggle to maintain consistency when defining an API endpoint and then implementing its usage in the frontend

- Context Loss: Without a clear, shared contract, LLMs lack the contextual assistance needed to ensure proper integration between client and server

- Integration Errors: The disconnect between backend definitions and frontend consumption leads to runtime errors that could be prevented

The Solution: Leverage TypeScript's powerful type system to provide real-time feedback and compile-time validation for both LLMs and developers. By creating a shared contract that enforces consistency across the entire stack, we eliminate the guesswork and reduce integration issues. It's a small NPM module whose only dependency is Zod:

https://github.com/PeterOsinski/ts-typed-api

I've already used it in a couple of projects, and so far so good. LLMs don't get lost even when implementing changes to APIs with dozens of endpoints. I can share the prompt I'm using that instructs the LLM how to leverage the definitions and find the implementations.

Let me know what you think, feedback welcome!


who are you?


He's a "Growth Engineer" from ElevenLabs. I'm not sure what that entails, but then I'm not familiar with that area of tech, so maybe someone else can explain it.


How does it differ from the Cline VS Code extension? It already uses diff apply, which makes edits to bigger files much faster.


Cline orchestrates all the models under the hood; you could use our apply model with Cline. Not sure what model they're using for that feature right now.


Unless I don't understand it fully (which could be the case).

This idea can only fly if downstream readers are able to read the data. JSON is great because anything can read, process, transform, and serialize it without having to know the intrinsics of the protocol.

What's the point of using a binary, columnar format for data in transit?


Better compression: https://opentelemetry.io/blog/2023/otel-arrow/

You don't do high performance without knowing the data schema.


Is Arrow better than Parquet or Protobuf?


Arrow is an in-memory columnar format, kinda orthogonal to Parquet (which is an at-rest format). Protobuf is a better comparison, but it's more message-oriented and not well suited for analytics.


Not having to write to disk is great, and zero-copy in-memory access is instant...


The blog post's comparison is against OTLP, which is protobuf-based.

