
Hey - we already had PostgreSQL, so there was no new infrastructure to manage and it was an easy way to see whether a vector database change added any value. It also has good enough performance (it handles 10M vectors adequately with HNSW indexes), it's open source, and it leverages our existing infrastructure until a future migration. We've wrapped it in a vector service, so it's easy to swap out later if needed.
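Roughly, the setup looks like this (a minimal sketch, not our exact schema; the table and column names are made up, and it assumes the pgvector extension at >= 0.5.0, which added HNSW support):

    # Minimal sketch of pgvector + HNSW inside an existing Postgres instance.
    # Table/column names are hypothetical; assumes pgvector >= 0.5.0.
    import psycopg2

    conn = psycopg2.connect("dbname=core")
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fact_embeddings (
            id        bigserial PRIMARY KEY,
            fact_id   bigint NOT NULL,
            embedding vector(1536)  -- dimension depends on the embedding model
        );
    """)
    # HNSW index for approximate nearest-neighbor search on cosine distance.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS fact_embeddings_hnsw
        ON fact_embeddings USING hnsw (embedding vector_cosine_ops);
    """)
    conn.commit()

    def nearest_facts(query_embedding, k=10):
        """Top-k facts closest to the query embedding (<=> is cosine distance)."""
        literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
        cur.execute(
            "SELECT fact_id FROM fact_embeddings "
            "ORDER BY embedding <=> %s::vector LIMIT %s;",
            (literal, k),
        )
        return [row[0] for row in cur.fetchall()]

The point is that the HNSW index plus a thin service wrapper was all we needed to start; moving to a dedicated vector database later only means reimplementing that wrapper.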


Author here. We've been building CORE (open source) for the past year. Happy to answer questions about the architecture, reification approach, or what broke at scale.


I agree. Asking an LLM to write for you is lazy and also produces sub-par results (can't speak to the brain-rot part).

I also like preparing a draft and using an LLM for critique; it helps me find blind spots or better ways to articulate things.


You're right that dumping all memory into the context window doesn't scale. But with CORE, we don't do that.

We use a reified knowledge graph for memory, where:
- Each fact is a first-class node (with timestamp, source, certainty, etc.)
- Nodes are typed (Person, Tool, Issue, etc.) and richly linked
- Activity (e.g. a Slack message) is decomposed and connected to relevant context

This structure allows precise subgraph retrieval based on semantic, temporal, or relational filters—so only what’s relevant is pulled into the context window. It’s not just RAG over documents. It’s graph traversal over structured memory. The model doesn’t carry memory—it queries what it needs.

So yes, the memory problem is real—but reified graphs actually make it tractable.
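If it helps, here is a rough sketch of what a reified fact node and a filtered retrieval could look like (illustrative shapes only, not CORE's actual schema or API):

    # Illustrative sketch of a reified fact; not CORE's actual schema.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Entity:
        name: str
        node_type: str          # e.g. "Person", "Tool", "Issue"

    @dataclass
    class Fact:
        """A fact reified as its own node, with metadata and typed endpoints."""
        subject: Entity
        predicate: str
        obj: Entity
        created_at: datetime
        source: str             # e.g. "slack:#infra"
        certainty: float        # 0.0 .. 1.0

    def relevant_subgraph(facts: List[Fact], about: str,
                          since: Optional[datetime] = None) -> List[Fact]:
        """Pull only facts matching semantic + temporal filters into context."""
        return [
            f for f in facts
            if about in (f.subject.name, f.obj.name)
            and (since is None or f.created_at >= since)
        ]

Because every fact carries its own metadata, retrieval can filter on who/what it is about, when it was recorded, and where it came from, instead of re-embedding and ranking whole documents.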


Claude is incredibly powerful, but its limitation is no persistent memory, so you have to repeat yourself again and again.

I integrated Claude with the CORE memory MCP, making it an assistant that remembers everything and has better memory than Cursor or ChatGPT.

Before CORE: "Hey Claude, I need to know the pros and cons of hosting my project on Cloudflare vs AWS; here is the detailed spec for my project..."

And I have to REPEAT MYSELF again and again about my preferences, my tech stack, and my project details.

After CORE: "Hey Claude, tell me the pros and cons of hosting my project on Cloudflare vs AWS."

Claude instantly knows everything from my memory context.

What this means:
- Persistent context: you never repeat yourself again
- Continuous learning: Claude gets smarter with every interaction it ingests and recalls from memory
- Personalized responses: tailored to your specific workflow and preferences

Check out the full implementation guide here: https://docs.heysol.ai/providers/claude


Figma has come a long way, from a blocked Adobe acquisition to now filing for an IPO.


Hey - well put!

I guess the "semantic web" folks were right about the destination, just a few years early :P


Hey - agreed that for basic fact recall, a simple text file + MCP works fine.

We designed CORE for complex, evolving memory where text files break down.

Example: Health conversations across ChatGPT, Claude, etc. where your parameters change over time.

A text file can't give you: "What medications have I tried, why did I stop each one, and when?" or "Show me how my symptoms evolved over 6 months."

For timeline and relational memory, CORE wins. For static facts, text files are enough, I guess.
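For illustration, this is the kind of timeline question over time-stamped facts that a flat text file can't answer well (hypothetical data shapes, not CORE's API; "med_a" and "med_b" are placeholders):

    # Hypothetical sketch: "what have I tried, why did I stop, and when?"
    # over time-stamped facts. Data shapes are illustrative, not CORE's API.
    from collections import defaultdict

    facts = [
        {"entity": "med_a", "event": "started", "reason": None, "at": "2024-01-10"},
        {"entity": "med_a", "event": "stopped", "reason": "side effects", "at": "2024-03-02"},
        {"entity": "med_b", "event": "started", "reason": None, "at": "2024-03-15"},
    ]

    def timeline(facts):
        """Group events per entity, ordered by time."""
        by_entity = defaultdict(list)
        for f in sorted(facts, key=lambda f: f["at"]):
            by_entity[f["entity"]].append((f["at"], f["event"], f["reason"]))
        return dict(by_entity)

    # timeline(facts)["med_a"] ->
    #   [("2024-01-10", "started", None), ("2024-03-02", "stopped", "side effects")]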


Hey - I agree that the demonstrated use case can be solved with a simple plan.md file in the codebase itself.

With this use case we mainly wanted to showcase the shareable aspect of CORE. The problem statement we wanted to address was "take your memory to every AI" so you don't have to repeat yourself again and again.

The relational-graph aspect of CORE's architecture is overkill for simple fact recall. But if you want an intelligent memory layer about you that can answer what, when, and why, and that is accessible in all the major AI tools you use, then CORE makes more sense.


Hey - plan.md will mostly be a static file that you have to maintain manually. It won't be relational and won't be able to form connections between pieces of information, and you can't recall or query it intelligently ("When did my preference change?").

CORE lets you:
- Automatically extract and store facts from conversations
- Build intelligent connections between related information
- Answer complex queries ("What did I say about something, and when?")
- Detect contradictions and explain changes with full context

For simple fact recall, plan.md should work, but for complex systems a relational memory helps more.

