
I am waiting for an LLM enthusiast to create something like MicroQuickJS from scratch.




Fabrice Bellard, who developed MicroQuickJS, is a user of LLMs.

I'm scratching my head trying to understand how your comment relates to the parent...

The implication is that the task the original commenter wondered about had, in effect, already been done.

[flagged]


> I think you hallucinated this up. (Quote from the original comment, before the malicious edit)

No point in responding to a troll, but for the other people who may be reading this comment chain, he's used LLMs for various tasks. Not to mention that he founded TextSynth, an entire service that revolves around them.

https://textsynth.com/

https://bellard.org/ts_sms/


[flagged]


> TextSynth provides access to large language, text-to-image, text-to-speech or speech-to-text models such as Mistral, Llama, Stable Diffusion, Whisper thru a REST API and a playground. They can be used for example for text completion, question answering, classification, chat, translation, image generation, speech generation, speech to text transcription, ...

???


You're confused. The compression algorithm was something different. TextSynth is an LLM inference server, similar to (but older than) llama.cpp.
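
For anyone wondering what an "LLM inference server" looks like from the client side, here is a rough sketch of a completion request over REST in Python. The endpoint path, engine name, request fields, and response field below are illustrative assumptions loosely modeled on TextSynth's public documentation, not guaranteed to match the live service.

  # Sketch of a text-completion call to an LLM inference server over REST.
  # Assumptions: endpoint path, engine name, and the "text" response field
  # are illustrative placeholders, not verified against the real API.
  import json
  import urllib.request

  API_KEY = "YOUR_API_KEY"   # hypothetical placeholder
  ENGINE = "mistral_7B"      # hypothetical engine name
  URL = f"https://api.textsynth.com/v1/engines/{ENGINE}/completions"

  req = urllib.request.Request(
      URL,
      data=json.dumps({"prompt": "Hello,", "max_tokens": 32}).encode(),
      headers={
          "Authorization": f"Bearer {API_KEY}",
          "Content-Type": "application/json",
      },
  )
  with urllib.request.urlopen(req) as resp:
      # Assumes the server returns JSON with a "text" field holding the completion.
      print(json.loads(resp.read())["text"])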

Creating llama.cpp-like software isn't the same as using LLMs to develop software, either.


