azath92's comments | Hacker News

Almost an esolang, but Orca is an amazing example of spatial programming for music production (GH: https://github.com/hundredrabbits/Orca, and a video to see it in action: https://www.youtube.com/watch?v=gSFrBFBd7vY).

Is this the one from the hippie (non-pejorative) group living on a boat?

If it's the same, it's one of the things I'd spend my time learning if I won the lottery, along with this tool from Imogen: https://mimugloves.com/

I don’t think I’d ever produce something worth listening to, but if I won the lottery, why would I care beyond my own enjoyment?


‘Your own enjoyment’ is a rich reward. My unsolicited advice: try making a mess with it every day for a week / month / year and see if you don't start to appreciate something in what you make. Orca is a brilliant piece of work.

My own enjoyment was predicated on the money side. If I were independently wealthy, I'd be splitting my time between this and gem faceting as hobbies.


That screenshot is super interesting; I've never seen anything like it.

It's giving me some ideas for a TUI video editor using that grid interface. What a cool project.


For small models this is for sure the way forward. There are some great small datasets out there (check out the TinyStories dataset, which limits vocabulary to what a child of a certain age would know but keeps the core reasoning inherent in even simple language: https://huggingface.co/datasets/roneneldan/TinyStories, https://arxiv.org/abs/2305.07759).

I have fewer concrete examples, but my understanding is that dataset curation is where many improvements are gained at any model size. Unless you are building a frontier model, you can use a better model to help curate or generate that dataset; TinyStories was generated with GPT-4, for example.
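
If you want to poke at it yourself, the dataset is trivial to pull down. A minimal sketch, assuming the Hugging Face datasets library is installed and that the default config exposes each story under a "text" column:

    from datasets import load_dataset

    # TinyStories: synthetic short stories written with a small child's vocabulary,
    # generated by prompting larger models (see the paper linked above).
    ds = load_dataset("roneneldan/TinyStories")

    # Peek at one story from the training split.
    print(ds["train"][0]["text"])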


OP here: one thing that surprised me in this experiment was that the model trained on the more curated FineWeb-Edu dataset was worse than the one trained on FineWeb. That is very counterintuitive to me.


Totally agree. One of the most interesting podcasts I have listened to in a while was a couple of years ago, on the TinyStories paper and dataset (the author used that dataset), which focuses on stories that only contain simple words and concepts (like bedtime stories for a 3-year-old) but which can be used to train smaller models to produce coherent English, with grammar, diversity, and reasoning.

The podcast itself with one of the authors was fantastic for explaining and discussing the capabilities of LLMs more broadly, using this small controlled research example.

As an aside: I don't know what the dataset is in the biological analogy, maybe the agar plate: a super simple and controlled environment in which to study simple organisms.

For ref:
- Podcast ep: https://www.cognitiverevolution.ai/the-tiny-model-revolution...
- TinyStories paper: https://arxiv.org/abs/2305.07759


I like the agar plate analogy. Of course, the yeast is the star of the show, but so much work goes into prepping the plate.

As someone in biotech, 90% of the complaints I hear over lunch are not about bad results but about mistakes made during the experiment, e.g. someone didn't cover their mouth while pipetting and now the plates are unusable.


Ha! I remember where I was when I listened to that episode (Lakeshore Drive almost into Chicago for some event or other) — thanks for triggering that memory — super interesting stuff


I'm not sure how this translates to React Native; AFAICT the build chains for apps are less optimised. But using Vercel for deployment and Neon for the DB if needed, I've really been digging the ability for any branch/commit/PR to be deployed to a live site I can preview.

Coming from the Python ecosystem, I've found the commit -> deployed-code toolchain very easy, which really reduces friction for this kind of vibe coding, where you use it to explore functional features, many of which you will discard.

It moves the decision surface on what the right thing to build is to _after_ you have built it, which is quite interesting.

I will caveat this by saying the flow only works seamlessly if the feature is simple enough for the LLM to one-shot, but for the right thing it's an interesting flow.


I often find that the hard part of writing big, persistent code is not the writing but the building of a mental model (what the author calls "theory building"). This challenge multiplies when you are working with old code, or code others are working on.

Much of my mental space is spent building and updating that mental model. Changing it might look like building a better understanding of something I had glossed over in the past, or of something that had been changed by someone else. It is also, I think, the fundamental first step before actually changing any lines of code: you have to at least have an idea of how you want the mental model to change, and then make the code match that intended change. The same goes for debugging, which is finding a mismatch between your mental model and the reality as represented by the code.

And at the end of the day, AI coding tools can help with the actual writing, but the naive vibecoding approach, as noted, treats them as a way to avoid having a mental model at all. This is a fallacy. They work best when you do have a mental model that is good enough to pass good context to them, and they are most useful when you carefully align their work with your model over time, or use them to explore/understand the code and build your mental model better/faster/with greater clarity.


Electric Sheep always intrigued me so much! But it was a bit before my time, and also felt so impenetrable. I appreciate you drawing the link between it and something like this, which is so finite and understandable.

And thanks to OP for making something so finite and understandable, of course.


Porting the ElectricSheep AfterEffects Plugin from Windows to OSX was my true first open source contribution (1999?). And that only happened because my friend was friends with its author and it came up when he was showing off some fractal movies. Then I said, “Oh, I can help you with that.”

That OSS plugin itself was riding on the OSS ElectricSheep. All collaboration and distribution was via tarballs, although it is on GitHub now. It was a trip seeing some code I wrangled make its way into commercial media, just organically.

The ElectricSheep project weaves so many cool tech threads together. The only thing it was missing compared to modernity is decentralized genome propagation.

Scott Draves, its author, has some great artist content too. He also spearheaded Polyglot notebooks, early on in that kind of interface.


I think the depth (in time and community involvement) is one of the things that has drawn me to this project. It has the excellent vibe of a dedicated and yet accessible internet community, IMO because of the beautiful and widely available visual output.

Thanks for sharing some of this rich history!


Electric Sheep still works great to this day. The lifetime membership is worth it if you want nice enough sheep with zero effort, but there are countless HD packs on archive.org and elsewhere.


Cool to hear that the actual Electric Sheep project is still something you can interact with!

For those super new to it (like me), check out https://electricsheep.org/, the original video we came across it with, https://www.youtube.com/watch?v=O5RdMvgk8b0 (this was just the first I found when looking; there are many on YouTube), and the algorithm behind it, https://flam3.com/. This is all AFAICT as someone who's only just skimmed the surface, but I find it amazing.
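
If you want a feel for the core of flam3 yourself, here is a toy chaos-game sketch in Python. It only illustrates the basic idea (iterating randomly chosen affine maps and collecting the points); the real fractal flame algorithm adds non-linear "variations", log-density rendering, and colouring, and the transform values below are made up just to produce a Sierpinski-style attractor.

    import random

    # Made-up affine maps (a, b, c, d, e, f): x' = a*x + b*y + c, y' = d*x + e*y + f.
    transforms = [
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.0),
        (0.5, 0.0, 0.5, 0.0, 0.5, 0.0),
        (0.5, 0.0, 0.25, 0.0, 0.5, 0.5),
    ]

    x, y = 0.0, 0.0
    points = []
    for i in range(100_000):
        a, b, c, d, e, f = random.choice(transforms)
        x, y = a * x + b * y + c, d * x + e * y + f
        if i > 20:  # let the orbit settle before recording points
            points.append((x, y))

    # flam3 then renders a histogram of points like these with log scaling and
    # per-transform colour blending, which is where the "sheep" look comes from.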


See my separate comment for more on hackernews.coffee in particular. We (same team, different experiment) are thinking a lot about personal content, and how you can have maximum visibility and control.

Keeping these projects separate allows us to test ideas that orbit around a theme (not 100% sure what the theme is yet, but it features personal, anti-slop content, while still using LLMs).


Continuing to work on https://www.hackernews.coffee/ to rerank the HN frontpage based on my interests, not just what's trending.

It does this by building a profile out of a small number of selected past articles, and we make the profile, and how recs are produced from it, transparent and editable. Especially after feedback on HN (https://news.ycombinator.com/item?id=44454305), I'm trying to get to grips with why people seem to care as much about seeing how their recommendations work as they do about the actual quality of the recommendations themselves.

I'm increasingly convinced it's due to how many opaque LLM-powered products and black-box recommendation algorithms there are. People want personal content systems (they are useful for sure!), but there's a lack of ones where they stay in control of what 'personal' actually looks like.
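
To make "transparent profile" concrete, here is roughly the shape of the ranking step. This is not our actual implementation; the profile text, prompt, and model name are placeholders, and the OpenAI Python client is used only as an example of calling an LLM.

    from openai import OpenAI

    client = OpenAI()

    # A plain-text, user-editable profile built from a handful of chosen articles.
    profile = (
        "Interested in small language models, generative art, and developer tooling; "
        "not interested in funding announcements."
    )

    def rank_titles(profile: str, titles: list[str]) -> list[str]:
        # Ask the model to reorder the frontpage titles against the profile.
        prompt = (
            f"Reader profile:\n{profile}\n\n"
            "Reorder these Hacker News titles from most to least relevant to the reader, "
            "one per line:\n" + "\n".join(titles)
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content.splitlines()

Because the profile is just text the model reads, editing how your recommendations work is literally editing that string.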


We've just explored an HN site-reskin as a quick way to validate this, and I now use it for my browsing every day. It's a pretty transparent "profile" that gets applied by an LLM to rank your HN frontpage, but it would be trivial to shift that to a filter.

An extension could be a powerful way to apply it without having to leave HN, but I wonder if that (and our website prototype) is a short-term solution. I can imagine having an extension per news/content site, or an "alt site" for each that takes into account your preferences, but it feels clunky.

OTOH, having a generic in-browser LLM that does this for all sites feels quite far off, so maybe narrow solutions for the sites you really care about are the way to go?


People have said it elsewhere, but I think you might have to fight fire with fire if you want semantic filtering.

