Hacker News | sroerick's comments

I had always heard about how RCT was built in Assembly, and thought it was very impressive.

The more I actually started digging into assembly, the more this task seems monumental and impossible.

I didn't know there was a fork and I'm excited to look into it


Programming in assembly isn't really "hard"; it mostly takes lots of discipline. Consistency and patterns are key. The language provides very little implicit documentation, so always document which arguments are passed how and where, and which registers are caller- and callee-saved. Of course, it is also very tedious.

Now, writing heavily optimized assembly is very hard, because you need to break your consistency and conventions to squeeze out all the possible performance. The larger the "kernel" you optimize, the more pattern-breaking code you need to keep in your head at a time.


Macros. Lots of macros.

Yup. I've done a bit of assembly and it's really only a little harder than writing C. You simply have to get familiar with your assembler and the macros it offers. Heck, I might even say it's simpler than BASIC.

And presumably generous use of code comments

Back then a lot of people started with assembly because that was the only way to make games fast enough. Over the years they accumulated tons of experience, routines, and tools.

Not saying that it was not a huge feat, but it’s definitely a lot harder to start from scratch nowadays, even for the same platform.


It is weird. I've had maintainable solutions on non-trivial code, but it does kind of require babysitting. Planning documents and detailed specs help. You get a feel for where the agent will want to take a shortcut and can devise ways to navigate around that.

I also find Go works really well, and generally stays, if not exceptional, then at least maintainable.

I've also enjoyed using OCaml, but I will say that the single worst function I've ever seen in a codebase was in vibe-coded OCaml.

You might just try asking: "Hey, I'm having trouble keeping codebases maintainable - how can I structure this project in a way where the code will be stable long term?"

Sometimes getting the "software architect" role into the agent context is all it takes.


In my experience, I've worked with a number of people in non-tech industries who tried to pivot their company into software, and a huge obstacle was that ultimately the code became unmaintainable.

Maybe they didn't have the expertise to pick a software stack that would serve them in the long run, or they just didn't have the budget to hire a SWE or team full time, or their contractor team just wasn't super invested in the project.

So tech people look at "vibeslop" as unmaintainable technical debt, but they ignore that in a lot of situations it's their own salary that makes the tech debt unmaintainable. Maybe that's uncharitable, but I do think many technical people are very far removed from the "solve a problem and then dogfood it" cycle.


XMPP is working pretty well for me

I was just wondering what HN thought of this. I've been using it for a bit and I like the ergonomics. I'm posting it here in the hopes that some people will come in and tell me all the dumb things the code is doing.

Here's one success I had -

https://github.com/sroerick/pakkun

It's git for ETL. I haven't looked at the code, but I've been using it pretty effectively for the last week or two. I wouldn't feel comfortable recommending it to anybody else, but it was basically one-shotted. I've been dogfooding it on a number of projects, had the LLM iterate on it a bit, and I'm generally very happy with the ergonomics.


That's a nice example, can you explain your 'one shot' setup in some more detail?

I don't have the prompt, but I used codex. I probably wrote a medium-sized paragraph explaining the architecture. It scaffolded out the app, and I think I prompted it twice more with some very small bugfixes. That got me to an MVP, which I used to build LaTeX pipelines. Since then, I've added a few features as I've dogfooded it.

It's a bit challenging / frustrating to get LLMs to build out a framework/library and the app that uses the framework at the same time. If it hits a bug in the framework, sometimes it will rewrite the app to match the bug rather than fixing the bug. It's kind of a context balancing act, and you have to have a pretty good idea of how you're looking to improve things as you dogfood. It can be done, but it takes some juggling.

I think LLMs are good at golang, and also good at that "lightweight utility function" class of software. If you keep things skeletal, I think you can avoid a lot of the slop feeling when you get stuck in a "MOVE THE BUTTON LEFT" loop.

I also think that dogfooding is another big key. I coded up a calculator app for a dentist office which 2-3 people use about 25 times a day. Not a lot of moving parts, it's literally just a calculator. It could basically be an excel spreadsheet, except it's a lot better UX to have an app. It wouldn't have been software I'd have written myself, really, but in about 3 total hours of vibecoding, I've had two revisions.

If you can get something to a minimal functional state without a lot of effort, and you can keep your dev/release loop extremely tight, and you use it every day, then over time you can iterate into something that's useful and good.

Overall, I'm definitely faster with LLMs. I don't know if I'm that much faster. I was probably most fluent building web apps in Django, and I was pretty dang fast with that. LLMs are more about things like "How do you build tests to prevent function drift" and "How can I scaffold a feedback loop so that the LLM can debug itself".


I like your pragmatic attitude to all this.

I think your prompts are 'the source' in a traditional sense, and the result of those prompts is almost like 'object code'. It would be great to have a higher-level view of source code like the one you are sketching: distribute the prompt and the AI (toolchain...) used to create the code, with the code itself as just one of many representations. This would also solve some of the copyright issues, as well as possibly some of the longer-term maintainability challenges. If you need to make changes to the running system later, the tool that got you there may no longer be suitable, unless there is a way to ingest all of the code it produced previously and then suggest surgical strikes instead of wholesale updates.

Thank you for taking the time to write this all out, it is most enlightening. It's a fine line between 'nay sayer' and 'fanboi' and I think you've found the right balance.


Thanks for reading it! I didn't use an LLM, lol.

On documentation, I agree with you, and have gone down the same road. I actually built a little chat app that acts as a wrapper around the codex app and does exactly this. Unfortunately, the UI sucks pretty bad, and I never find myself using it.

I actually asked codex if it could find the chat where I created this in my logs. It turns out I used the web interface and asked it to make a spec. Here's the link to the chat. Sorry, the way I described it wasn't really what happened at all! lol. https://chatgpt.com/share/69b77eae-8314-8005-99f0-db0f7d11b7...

As it happens, I actually speak-to-texted my whole prompt. And then gippity glazed me, saying "This is a very good idea". And then it wrote a very, very detailed spec. As an aside, I kind of have a conspiracy theory that they deploy "okay" and "very, very good" models, and give you the good model based on whether they think it will help sway public opinion. So it wrote a pretty slick piece of software and now here I am promoting the LLM. Oof da!

I didn't really mention it, but spec-first programming is a great thing to do with LLMs. You can go way too far with it, though. If you let the LLM run wild with the spec, it will totally lose track of your project goals. The spec it created here ended up being, I think, a very good one.

I think "code readability" is really not a solved problem, either pre or post LLM. I'm a big fan of "Code as Data" static analysis tools. I actually think that the ideal situation is less of "here is the prompt history" and something closer to Don Knuth's Literate Programming. I don't actually want to read somebody fighting context drift for an hour. I want polished text which explains in detail both what the code does and why it is structured that way. I don't know how to make the LLMs do literate programming, but now that I think about it, I've never actually tried! Hmmm....


I'm developing on a $270 refurbished Dell, which has an i7 and 16 gigs of RAM. The Apple processor might be competitive, but the rest of the machine is not. $600 is not unreasonable, but there is certainly an Apple tax.

I feel like it was less than a year ago that TypeScript was basically the only game in town, and if you liked anything else you were a loon.

I have been an anti-TypeScript guy for a long time, but I wouldn't deny for a moment that it's probably by far the most mature ecosystem.


You can get an AI to listen to that bass solo for you


But can you get an AI to zone out on a fluffy couch at the center point of a dank hi-fi setup with the volume cranked to 11, while chillin' on 50mg of THC?

And will you enjoy paying someone else to let the AI do that?


This is a great point.

One issue I've had with IPFS is that there's nothing baked into the protocol to maintain peer health, which really limits the ability to keep the swarm connected and healthy.


I used to add webseeds, but clients seem to love just downloading from there rather than from my conventional seeds.

Some new ideas are needed in this space.

