stoneyhrm1's comments | Hacker News

"Pass the salt? You mean pass the sodium chloride?"


I'm open to correction because I'm no expert in the field, but isn't RAG just enriching context? It doesn't have to be semantic search; it could be an API call or grabbing info from a database.
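
For what it's worth, here's a minimal sketch of that framing: retrieval is just a plain database lookup, and the result is pasted into the prompt as context. The table schema and the call_llm stub are hypothetical placeholders, not any particular framework's API.

    import sqlite3

    def call_llm(prompt: str) -> str:
        # Stand-in for any text-in/text-out endpoint (Ollama, an OpenAI-compatible API, etc.).
        raise NotImplementedError("plug in your model client here")

    def retrieve_context(order_id: str) -> str:
        # Retrieval here is an ordinary SQL query, not an embedding/semantic search.
        conn = sqlite3.connect("orders.db")
        row = conn.execute("SELECT status, eta FROM orders WHERE id = ?", (order_id,)).fetchone()
        conn.close()
        return f"Order {order_id}: status={row[0]}, eta={row[1]}" if row else "No matching order."

    def answer(question: str, order_id: str) -> str:
        context = retrieve_context(order_id)
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context above."
        return call_llm(prompt)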


Like it or not, vibe coding is here to stay. I don't agree with the concept either, but I've told people in my org that I've 'vibe coded' this or 'vibe coded' that. To us it just means we used AI to write most of the code.

I would never put it into production without any kind of review, though; it's more for "I vibe coded this cool app, take a look, maybe this can be something bigger..."


But ... why? Saying you "vibe coded" something when you actually didn't makes you sound like you're doing something, well, dumber than what you're actually doing, while also setting unrealistic expectations for people who don't realize you aren't actually vibe coding.


The LLM endpoint served via Ollama or Hugging Face is not the one executing MCP tool calls; that's done by the client interacting with the LLM. All the LLM does is take a prompt as input and produce text as output, that's it. Anything else is just a wrapper.
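
A rough sketch of that separation, under simplified assumptions: the JSON "tool call" format and the ask_model stub below are illustrative, not the actual MCP wire protocol (which is JSON-RPC between the client and a tool server).

    import json

    # Tools live with the client. The model never executes them; it only emits text.
    TOOLS = {
        "get_weather": lambda city: f"Sunny in {city}",
    }

    def ask_model(prompt: str) -> str:
        # Stand-in for an Ollama/Hugging Face endpoint: prompt in, text out, nothing else.
        return '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

    def run_turn(user_prompt: str) -> str:
        reply = ask_model(user_prompt)       # the model only returned text...
        try:
            call = json.loads(reply)         # ...which may *describe* a tool call
        except json.JSONDecodeError:
            return reply                     # plain answer, no tool requested
        # The client, not the model, actually executes the tool:
        return TOOLS[call["tool"]](**call["arguments"])

    print(run_turn("What's the weather in Berlin?"))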


I understand the concern here but isn't this the same as making any other type of server public? This is just regarding servers hosting LLMs, which I wouldn't even consider a huge security concern vs hosting a should-be-internal tool publicly.

Servers that shouldn't be made public are made public, a cyber tale as old as time.


> servers hosting LLMs, which I wouldn't even consider a huge security concern

The new problem is if the LLMs are connected to tooling.

There have been plenty of examples showing that with subtle changes to the prompt you can jailbreak the LLM into executing tooling in wildly different ways from what was intended.

They're trying to paper over this by having the LLM call regular code just so they can be sure all steps of the workflow are actually executed reliably every time.

Even the same prompt can give different results depending on the temperature used. How security teams are able to sign these things off is beyond me.
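
One quick way to see the variance: send the same prompt at the default temperature and again with temperature pinned to 0 and a fixed seed. This is a rough sketch assuming a local Ollama instance and its documented /api/generate options; the model name is a placeholder, and even pinned settings reduce rather than eliminate the nondeterminism.

    import requests

    def generate(prompt: str, temperature: float, seed: int | None = None) -> str:
        options = {"temperature": temperature}
        if seed is not None:
            options["seed"] = seed
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False, "options": options},
            timeout=120,
        )
        return resp.json()["response"]

    prompt = "List the steps to deactivate a user account."
    print(generate(prompt, temperature=0.8))          # can differ run to run
    print(generate(prompt, temperature=0.0, seed=7))  # far more repeatable, still no guarantee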


The tools are client-side operations in Ollama, so I don't see a way an attacker could use them to their benefit, except to leverage the actual computing power the server provides.


What exactly is being moved? It's trained on human data; you can't make code more perfect than what has already been written out there by a human.


Some think it's possible; I don't. We agree, actually.


> revolutionary breakthroughs in essentially all field

This doesn't really make sense outside computers. Since AI would be training itself, it needs to have the right answers, but as of now it doesn't really interact with the physical world. The most it could do is write code, and check things that have no room for interpretation, like speed, latency, percentage of errors, exceptions, etc.

But what other fields would it do this in? How can it make strides in biology? It can't dissect animals, and it can't figure out more about plants than what humans feed into the training data. Regarding math, math is human-defined. Humans said "addition does this", "this symbol means that", etc.

I just don't understand how AI could ever surpass anything humans have known before, when we live by rules defined by us.


[in Morpheus voice]

"But when AI got finally access to a bank account and LinkedIn, the machines found the only source of hands it would ever need."

That's my bet, at least. Especially with remote work and the like, if the machines were really superhuman, they could convince people to partner with them to do anything else.


You mean like convincing them to invest implausibly huge sums of money in building ever bigger data-centres?


It is interesting that, even before real AGI/ASI gets here, "the system wants what it wants": capitalism plus computing/the internet creates the conditions for an infinite amplification loop.

I am amazed, hopeful, and terrified TBH.


Feedback gain loops have a tendency to continue right up to the point they blow a circuit breaker or otherwise drive their operating substrate beyond linear conditions.


This made me laugh and feel scared simultaneously.


I assume someone has already written it up as a sci-fi short story, but if not I'm tempted to have a go...


It starts to veer into sci-fi and I don't personally believe this is practically possible on any relevant timescale, but:

The idea is that a sufficiently advanced AI could simulate... everything. You don't need to interact with the physical world if you have a perfect model of it.

> But what other fields would it do this in? How can it make strides in biology? It can't dissect animals ...

It doesn't need to dissect an animal if it has a perfect model of it that it can simulate. All potential genetic variations, all interactions between biological/chemical processes inside it, etc.


Didn't we prove that it is mathematically impossible to have a perfect simulation of everything, though (e.g., chaos theory)? These AIs would actually have to conduct experiments in the real world to find out what is true. If anything, this sounds like the modern (or futuristic) version of the empiricism-versus-rationalism debate.

> It doesn't need to dissect an animal if it has a perfect model of it that it can simulate. All potential genetic variations, all interactions between biological/chemical processes inside it, etc.

Emphasis on perfection; easier said than done. Somehow this model was able to simulate millions of years of evolution so it could predict the vestigial organs of unidentified species? We can't even model how a pendulum with three arms swings, yet somehow this AI figured out how to simulate millions of years of evolution among unidentified species in the Amazon and can tell you all of their organs, with 100% certainty, before anyone can check?

I feel like these AI doomers/optimists are going to be in for a shock when they find out that (unfortunately) John Locke was right about empiricism, and that there is a reason we use experiments and evidence to figure out new information. Simulations are ultimately not enough for every single field.


It's plausible in a sci-fi sort of way, but where does the model come from? After a hundred years of focused study we're kinda beginning to understand what's going on inside a fruit fly; how are we going to provide the machine with "a perfect model of all interactions between biological/chemical processes"?

If you had that perfect model, you'd have basically solved an entire field of science. There wouldn't be a lot more to learn by plugging it into a computer afterwards.


> You don't need to interact with the physical world if you have a perfect model of it.

How does it create a perfect model of the world without extensive interaction with the actual world?


How will it be able to devise this perfect model if it can't dissect the animal, analyze the genes, or perform experiments?


Well, first, it would be so far beyond anything we can comprehend as intelligence that even asking that question is considered silly. An ant isn't asking us how we measure the acidity of the atmosphere. It would simply do it via some mechanism we can't implement or understand ourselves.

But, again with the caveats above: if we assume an AI that is infinitely more intelligent than us and capable of recursive self-improvement to the point where its compute is made more powerful by factorial orders of magnitude, it could simply brute-force (with a bit of derivation) everything it would need from the data currently available.

It could iteratively create trillions (or more) of simulations until it finds a model that matches all known observations.


> Well, first, it would be so far beyond anything we can comprehend as intelligence that even asking that question is considered silly.

This does not answer the question. The question is "how does it become this intelligent without being able to interact with the physical world in many varied and complex ways?". The answer cannot be "first, it is superintelligent". How does it reach superintelligence? How does recursive self-improvement yield superintelligence without the ability to richly interact with reality?

> it could simply brute force (with a bit of derivation) everything it would need from the data currently available. It could iteratively create trillions (or more) of simulations until it finds a model that matches all known observations.

This assumes that the digital encoding of all recorded observations is enough information for a system to create a perfect simulation of reality. I am quite certain that claim is not made on solid ground; it is highly speculative. I think it is extremely unlikely, given the very small number of things we've recorded relative to the space of possibilities, and the very many things we don't know because we don't have enough data.


> The idea is that a sufficiently advanced AI could simulate... everything

This is a demonstrably false assumption. Foundational results in chaos theory show that many processes require exponentially more compute to simulate for a linearly longer time period. For such processes, even if every atom in the observable universe was turned into a computer, they could only be simulated for a few seconds or minutes more, due to the nature of exponential growth. This is an incontrovertible mathematical law of the universe, the same way that it's fundamentally impossible to sort an arbitrary array in O(1) time.
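
To make the point concrete, here's a toy illustration (my own, not from the thread) using the logistic map with r = 4, a textbook chaotic system: an initial error of one part in a trillion grows by roughly a constant factor each step, so every extra stretch of simulated time costs another multiplicative dose of precision and compute.

    def logistic(x: float) -> float:
        # Logistic map with r = 4, a standard chaotic example.
        return 4.0 * x * (1.0 - x)

    a, b = 0.400000000000, 0.400000000001   # states differing by 1e-12
    for step in range(1, 51):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: |difference| = {abs(a - b):.3e}")
    # By around step 40 the two trajectories are as far apart as the state space
    # allows, even though the initial error was near the limit of double precision.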


The counter-argument to this from the AI crowd would be that it's fundamentally impossible for _us_, with our goopy brains, to understand how to do it. Something that is factorial-orders-of-magnitude smarter and faster than us could figure it out.

Yes, it's a very hand-wavey argument.


You're right, but how much heavy lifting is hiding within this phrase?

> if it has a perfect model


It feels very much like "assume a spherical cow..."


A perfect model of the world is the world. Are you saying AI will become the universe?


You can be super-human intelligent, and still not have a perfect model of the world.


We aren't that far away from AI that can interact with the physical world and run its own experiments. Robots in humanoid and other forms are getting good and will be able to do everything humans can do in a few years.


I'd think it would be more autistic to continue to use and have interest in something that's been superseded by something far easier and more efficient.

Who would you think is weirder, the person still obsessed with horse & buggies, or the person obsessed with cars?


I understand the author's sentiment but industries don't exist solely because somebody wants them to. I mean, sure, hobbies can exist, but you won't be paid well (or even at all) to work with them.

Software engineering pays because companies want people to develop software. It pays so well because it's hard, but the coding portion is becoming easier. Vibe coding and AI are here to stay; the author can choose to ignore it and go preach to a dying field (specifically, writing code, not CS), or embrace it. We should be happy we no longer need to type out if statements and for loops 20 times and can instead focus on high-level architecture.


It's not LLMs vs. typing for loops by hand. It's LLMs vs. snippets and traditional cheap, pattern-based code generation, find-and-replace, and traditional refactoring tools.

Those are still faster, cheaper, and more predictable than an LLM in many cases.


Too few developers understand this.


I mean, you can boil anything down to its building blocks and make it seem like it didn't 'decide' anything. When you as a human decide something, your brain and its neurons just made some connections, with an output signal sent to other parts, resulting in your body 'doing' something.

I don't think LLMs are sentient or any bullshit like that, but I do think people are too quick to write them off before really thinking about how a neural network 'knows' things in a way similar to how a human 'knows' things: it is trained and reacts to inputs and outputs. The body is just far more complex.


I wasn't talking about knowing (they clearly encode knowledge); I was talking about thinking/reasoning, which is something LLMs do not in fact do, IMO.

These are very different and knowledge is not intelligence.


To me all of those are so vaguely defined that arguing whether an LLM is "really really" doing something is kind of a waste of time.

It's like we're clinging to things that make us feel like human cognition is special, so we're saying LLMs aren't 'really' doing it, and then not defining what it actually is.

