I believe that's one of the primary issues LLMs aim to address. Many historical texts aren't directly Googleable because they haven't been converted to HTML, a format that Google can parse.
It would be nice if we could get an LLM to simply say, "We (I) don't know."
I'll be the first to admit I don't know nearly enough about LLMs to make an educated comment, but perhaps someone here knows more than I do. Is that what a hallucination is: when the AI model just sort of strings along an answer to the best of its ability? I'm mostly referring to ChatGPT and Gemini here, as I've seen that type of behavior with those tools in the past. Those are really the only tools I'm familiar with.
LLMs are extrapolation machines. They have some amount of hardcoded knowledge, and they weave a narrative around this knowledge base while extrapolating claims that are likely given the memorized training data. This extrapolation can take the form of logical entailment, high-probability guesses, or just wild guessing. The training regime doesn't distinguish between these kinds of prediction, so the model never learns to weigh logical entailment heavily and suppress wild guessing. It turns out that much of the text we produce is highly amenable to extrapolation, so LLMs learn to be highly effective at bullshitting.
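To make that concrete, here's a minimal sketch (with made-up logit numbers, not from any real model) of why decoding can't say "I don't know": the output head always produces a probability distribution over tokens, and decoding always emits the argmax, whether the distribution is sharply peaked (a confident recall) or nearly uniform (a wild guess).

```python
import math

def softmax(logits):
    # Standard numerically-stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits. Confident case: one token dominates,
# akin to a memorized fact.
confident = softmax([8.0, 1.0, 0.5, 0.2])
# Uncertain case: near-uniform logits, akin to a wild guess.
uncertain = softmax([1.1, 1.0, 0.9, 1.0])

# Greedy decoding picks the argmax either way; nothing in the
# objective rewards abstaining when the distribution is flat.
def pick(probs):
    return probs.index(max(probs))

print(pick(confident), round(max(confident), 3))  # peaked: prob near 1.0
print(pick(uncertain), round(max(uncertain), 3))  # flat: prob near 0.28, emitted anyway
```

The point of the sketch: both calls to `pick` return a token; the low-confidence case looks identical to the reader unless the system is explicitly trained or wrapped to surface its uncertainty.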
A few games developed entirely with AI. I'm using GitHub CoPilot to drive the development, and I'm having the AI come up with the graphics programmatically as well. It's a pretty fun project.
Technically stagnant is a good thing; I'd prefer the term technically mature. It's accomplished what it set out to do, which is to be a decentralized, anonymous form of digital currency.
The only thing that MIGHT kill it is if governments stopped printing money.
And by "dog feces," I assume you mean fiat currency, correct?
Cryptocurrency solves the money-printing problem we've had around the world since we left the gold standard. If governments stopped making their currencies worthless, then bitcoin would go to zero.
I was going to try having an AI agent analyze a well-established open source project. I was thinking of trying something like Bitcoin Core or a popular open-source JavaScript library, something that has had a lot of human eyes on it. To me, that seems like a good use case, as some of those projects can get pretty complex in what they're aiming to accomplish. Bitcoin, for instance, with its sheer complexity, would be a good candidate for having an AI agent explain the code to you as you're reviewing it. A lot of those projects are fairly well-written as they are; it's the higher-level concepts that are the more difficult thing to grasp.
Not attempting to claim anything against your company, but I've worked for enterprises where code bases were a complete mess and even the product itself didn't have a clear goal. That's likely not the ideal candidate for AI systems to augment.
Frankly, the code isn't messy whatsoever. There's just lots of it, and it's necessarily complex due to the domain. It's honestly the best codebase I've ever worked with; I shudder to think what nonsense Claude would spew trying to contextualize the spaghetti at my last job.
The WebGL game was built with my 2D game engine "Impact", which I previously ported to C[1]. The game has a 3D view, but the logic still mostly works in two dimensions on flat ground. The N64 version "just" needed a different rendering and sound backend.
I'm a software engineer with five years of solid in-office experience. I've worked for two companies in the office, and recently I've worked a remote contract with Comcast doing front-end development.
I'm open to remote work and also full-time in-office positions. I've found it difficult to find work since my last contract ended in late 2022, and I would love to find a long-term, full-time development position that I could put many years into in the near future.
I applied, and I thought the idea of having a secret was pretty neat. (I've never seen that before.) I'm hoping to hear back at some point in the near future.