The site uses the word Helix so frequently that I began to feel stupid for having no idea what it means. It appears in so many different contexts that it's not clear what it refers to.
It's explained in the first bullet point, even above the fold:
> Helix: Figure 03 features a completely redesigned sensory suite and hand system which is purpose-built to enable Helix - Figure's proprietary vision-language-action AI.
Coming from C#, whose generics are first-class, I struggled to get any real value from Go's generics. You can't execute on ideas that fit nicely in your head; instead you end up fighting tooth and nail to wrangle what feels like an afterthought into something concrete.
Generics work well as a replacement for liberally using interface{} everywhere, making programs more readable, but at the class and interface level I tend to avoid them, as I find I don't really understand what's going on. I just needed it to work so I could move on.
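For what it's worth, that interface{} replacement is the one place Go's generics clicked for me. A minimal sketch of the difference (toy names, not from any real codebase):

```go
package main

import "fmt"

// Before generics: accepts anything, loses type information,
// and callers need a type assertion to get values back out.
func firstAny(items []interface{}) interface{} {
	if len(items) == 0 {
		return nil
	}
	return items[0]
}

// With generics: same logic, but the element type is preserved
// and the zero value handles the empty case without nil checks.
func first[T any](items []T) (T, bool) {
	var zero T
	if len(items) == 0 {
		return zero, false
	}
	return items[0], true
}

func main() {
	// interface{} version: needs an assertion at the call site,
	// which panics if you guess the type wrong.
	n := firstAny([]interface{}{1, 2, 3}).(int)
	fmt.Println(n) // 1

	// Generic version: no assertion, the compiler knows v is an int.
	if v, ok := first([]int{1, 2, 3}); ok {
		fmt.Println(v) // 1
	}
}
```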
My successful AI-written projects are those where I care solely about the output and have little to no knowledge of the subject matter.
When I try to walk an agent through creating anything about which I have a deeply held opinion of what good looks like, I end up frustrated and abandoning the project.
I've enjoyed using Roo Code's architect mode to document an agreed-upon approach, then been delighted and frustrated in equal measure by code mode's implementation of it.
One revelation is to always start new tasks and avoid continuing large conversations. I would typically tackle a problem myself in smaller steps with verifiable outputs, whereas I tend to pose the entire problem space to the agent, which it invariably fails at.
I've settled on spending time finding what works for me. Earlier today I took 30 minutes to add functionality to an app that would've taken me days to write. And what's more, I only put 30 minutes into the diary for it, because I knew what I wanted and didn't care how it got there.
This leads me to conclude that using AI to write code that a(nother) human will one day have to interact with is a no-go, for all the reasons listed.
> "This leads me to conclude that using AI to write code that a(nother) human is one day to interact with is a no-go, for all the reasons listed."
So, if one's goal is to develop code that is easily maintainable by others, do you think that AI writing code gets in the way of that goal?
I saw that some of the fonts had a ligature for `===`, rendering it as one long congruence sign instead of three equals signs, and I avoided those like the plague.
Couldn't agree more. Recall means "sufficiently dangerous to need to recall the vehicle to the manufacturer" - yes, in the modern world it can be fixed OTA, but it's still dangerous enough to require a mass fix to a fast-moving death machine.
My current hack is reminding myself that it's better to have a poorly architected system that's finished and functional than a half-finished "well architected" one.
Once I notice things creaking, I start doodling a new architecture.
Or, if I notice I'm spending too much time dealing with the fallout of my current suboptimal design, I might refactor it, but that usually isn't (or shouldn't be) the most important thing.
I moved from LastPass to 1Password last year. The difference in experience is significant: login-screen presentation and identification are orders of magnitude better in 1Password than in LastPass.
Also, LastPass's Android app seems like an afterthought.
LLMs cannot lie insofar as they cannot tell the truth. They're remarkably good at predicting what token comes next given a bunch of tokens, but nothing else.
Yes, but it's also generative: at each time step it bases those predictions on its own recent output, so the quality of its predictions is chaotic and unpredictable, but nothing else.
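To make that feedback loop concrete, here's a toy sketch. This is not how a real LLM predicts anything (that's a neural net over a long context); it only shows the autoregressive shape: a bigram table where each "prediction" is conditioned on a token the sampler itself just emitted, so one odd choice early on skews everything after it.

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// buildBigrams is a toy stand-in for "predict the next token given
// previous tokens": it maps each word to the words observed after it.
func buildBigrams(corpus string) map[string][]string {
	words := strings.Fields(corpus)
	next := make(map[string][]string)
	for i := 0; i < len(words)-1; i++ {
		next[words[i]] = append(next[words[i]], words[i+1])
	}
	return next
}

func main() {
	next := buildBigrams("the model predicts the next token and the next token feeds the model")

	token := "the"
	out := []string{token}
	for i := 0; i < 8; i++ {
		candidates := next[token]
		if len(candidates) == 0 {
			break
		}
		// The key point: each step is conditioned on the sampler's
		// own previous output, so early choices compound downstream.
		token = candidates[rand.Intn(len(candidates))]
		out = append(out, token)
	}
	fmt.Println(strings.Join(out, " "))
}
```

Run it a few times and the same table produces wildly different continuations, which is the "chaotically, unpredictably performant" part in miniature.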