Hell yeah, this is some badass hackery, and the type of stuff I love seeing on HN. In the last decade or so as more and more stuff becomes locked down and hacker unfriendly, I've found myself longing for simple things I can hack on. If I ever get to a point where I don't have to work for a living, one of the things I'd like to do is build everything from little gadgets up to major appliances that are simple, reliable, and hackable for people who want to. It pains me that my appliances have full computers driving them but I can't get access to them. Kudos for this awesome work and phenomenal write-up!
Really neat! Also, as a Linux user, I deeply appreciate the Linux support :-)
A few questions and comments:
Kvile: Awesome, really happy to see a reasonable take on this (open source, offline-first, no telemetry, no account, etc). Do you think at some point you'll try to monetize it in some way?
Kvile: Thanks for letting me know! Hadn't noticed, since I mostly just use it myself. Will get it fixed!
Stao: Hm, yeah, this is a mistake by my LLM when it generated the website for me (I couldn't be bothered). It probably got confused since I released it for Linux. It's not open source. Yes! Exactly, that's why I made it; I ALWAYS forgot. I still do, but far less frequently than before. Using Stao has helped me a lot.
Looks like a neat tool, and one I really need! I actually started building my own because I couldn't find anything satisfying. My build is currently in the very early stages and I'd love to abandon it :-) I'm definitely going to try difi out.
Also, kudos for putting up a screenshot. I've looked through a lot of projects claiming to do something similar, but there are so many different interpretations that a tool can easily turn out not to be a good fit for me, and when there aren't any screenshots the barrier to seeing it in action is often high enough that I only try one or two before I give up and stop wasting time. Having a screenshot meant I could check it out quickly.
The screenshot is a little rough, so a few tips for next time:
1. Shrink your terminal window down a bit, as a huge view is harder to follow
2. Keep the screenshots at full resolution so they are easier to read. The reduced resolution, combined with the original screen being huge, makes the text pretty difficult to read, even zoomed in to 200%
3. Use something like screenkey (or throw some subtitle text up or something) so the viewer knows what keys you are pressing and/or what you're trying to do. It's pretty hard to follow along without those cues.
Just my experience of course, but it had a lot of hype. It got into a lot of people's workflows and really had a strong first-mover advantage. The fact that they supported Neovim as a first-class editor surely helped a ton. But then they released their next set of features without Neovim support, supporting only (IIRC) VS Code. That took a lot of wind out of their sails. Combined with them for some reason being on older models (or with thinking turned down or whatever), the results got less and less useful. If Copilot had made their agent stuff work with Neovim and with a CLI, I think they'd be the clear leader.
Yeah, you may have nailed it. Gemini is a good model, but in the Gemini CLI, with a prompt like "I'd like to add <feature x> support. What are my options? Don't write any code yet," it will skip right past telling me my options and go ahead and implement whatever it feels like. Only afterward will it print out a list of possible approaches and tell you why it chose the one it did.
Codex is the best at following instructions IME. Claude is pretty good too, but is a little more "creative" than Codex, trying to re-interpret my prompt to get at what I "probably" meant rather than what I actually said.
Can you (or anyone) explain how this might be? The "agent" is just a passthrough for the model, no? How is one CLI/TUI tool better than any other, given the same model that it's passing your user input to?
I am familiar with Copilot CLI (using models from different providers), OpenCode doing the same, and Claude with just the Anthropic models. But if I ask all three the same thing using the same Anthropic model, I SHOULD be getting roughly the same output, modulo LLM nondeterminism, right?
I've had the exact opposite experience. After including in my prompt "don't write any code yet" (or similar brief phrase), Gemini responds without writing code.
As an aside, Claude and Codex (and probably Gemini) are pretty good at doing that. I've now done it with several repos and they are quite good at finding stuff. In one case Codex found an obscure way to reach around the authentication in one of our services. This is a great use case for LLMs, IMHO.
They are (of course) not foolproof and very well may miss something, so people need to evaluate their own risk/reward tradeoff with these extensions, even after reviewing them with AI, but I think they are pretty useful.
This is the thing I hate the most about "automatic updates" in general. I've disabled them and gone back to updating manually because the constant unexpected and unwanted UI changes finally broke a part of my soul. Unfortunately that is something that can't be done on the web, where major UI changes can be rolled out right in the middle of a session on you.
Agreed, although things I immediately think of are:
1. Is "anything but gcc" actually supported by the project? Do they have a goal of supporting other compilers or possibly an explicit decision not to support other compilers?
2. If they do support other compilers, how did the "d" suffix get in in the first place? That's something I would expect the devs or CI to catch pretty quickly.
3. Does gcc behave any differently with the "d" suffix not there? (I would think a core dev would know that off the top of their head, so it's possible they looked at it and decided it wasn't worth it. One would hope they'd comment on the PR though if they did that). If it does, this could introduce a really hard-to-track-down bug.
I'm not defending Oracle here (in fact I hate Oracle and think they are a scourge on humanity) but trying to approach this with an objective look.
That again assumes a project is looking to onboard contributors.
I absolutely get that it was an unfortunate interaction from the email writer's perspective, and I sympathize.
But there are a lot of concerns, bureaucracy, etc. in large projects like this. It may just never have gotten to the person responsible, because it's a cross-cutting concern (so there's no clear way to assign it to someone) with a low priority.
They keep stringing him along in the process to onboard him as a contributor. The issue is the split personality in wanting but not acting on onboarding, with no meaningful communication. Your last paragraph about bureaucracy is exactly the complaint of the post. I don't see it as a defense. We can all throw our hands up and say "shit happens", and we can all agree it invariably does happen sometimes, but it's not a defense, per se.
Is the project clearly documented as being written in GNU C++ rather than standard C++? If not, anything that's accidentally invalid C++ is fair game for bug fixes, is it not?
All of the https://github.com/AOSC-Tracking/jdk/ links 404 for me, so it's difficult to get a sense of what was being done. Going off of the "loongson fork" links though they look rather trivial. Not saying they should be ignored, but I do think trivial PRs to large critical open source projects like JDK can often end up taking more time away from contributing engineers doing reviews and testing than they are worth.
I know first-hand the frustration of having PRs ignored and it can be quite demoralizing, so I do feel for the author. It sounds like the author is getting to a place of peace with it, and my advice from having been down that path before is to do exactly that, and find something else interesting to hack on.
But that's not what's happening here, right? They're blocked on having their Oracle Contributor Agreement approved; they're not even at the stage where their PRs are eligible for being ignored.