Hacker News | karmasimida's comments

There's no denying at this point that AI can produce something novel, and it will be doing more of this going forward.

Not sure AI can have clever or new ideas; it still seems that it combines existing knowledge and executes algorithms.

I am not necessarily saying humans do something different either, but I have yet to see a novel solution from an AI that is not simply an extrapolation of current knowledge.


Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things.

My biggest hesitation with AI research at the moment is that they may not be as good at this last step as humans. They may make novel observations, but will they internalize these results as deeply as a human researcher would? But this is just a theoretical argument; in practice, I see no signs of progress slowing down.


This is my take as well. A human who learns, say, a Towers of Hanoi algorithm will be able to apply it and use it next time without having to figure it out all over again. An LLM would probably get there eventually, but would have to do it all over again from scratch the next time. This makes it difficult to combine lessons in new ways. Any new advancement relying on that foundational skill relies on, essentially, climbing the whole mountain from the ground.
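
To make that concrete, here is a minimal Python sketch of the classic recursive Towers of Hanoi solution, the kind of foundational skill a human internalizes once and then reuses:

```python
# Classic recursive Towers of Hanoi: once learned, a human can
# reapply this without re-deriving it from scratch.
def hanoi(n: int, source: str, target: str, spare: str) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)    # move n-1 discs out of the way
    print(f"move disc {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)    # move them onto the target

hanoi(3, "A", "C", "B")  # solves 3 discs in 2**3 - 1 = 7 moves
```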

I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it.


We call that Standing On The Shoulders Of Giants and revere Isaac Newton as clever, even though he himself stated that he was standing on the shoulders of giants.

Clever/novel ideas are very often subtle deviations from known, existing work.

Sometimes just having the time/compute to explore the available space with known knowledge is enough to produce something unique.


There is no such thing. All new ideas are derived from previous experiences and concepts.

The difference people are neglecting to point out is the experiences we have versus the experiences the AI has.

We have at least 5 senses, our thoughts, feelings, hormonal fluctuations, sleep and continuous analog exposure to all of these things 24/7. It's vastly different from how inputs are fed into an LLM.

On top of that we have millions of years of evolution toward processing this vast array of analog inputs.


So, just connect LLMs to lava lamps?

Jokes aside, imagine you give LLMs access to real-time, world-wide satellite imagery and just tell them to discover new patterns/phenomena and correlations in the world.


"extrapolation" literally implies outside the extents of current knowledge.

Yes, but not necessarily new knowledge.

It means extending/expanding something, but the information is based on the current data.

In computer games, extrapolation is finding the future position of an object based on its current position, velocity, and the desired time. We do get some "new" position, but the system's entropy/information is the same.

Or if we have a line, we can extend it infinitely and get new points, but this information was already there in the y = m * x + b line formula.
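
A minimal Python sketch of both examples (the motion numbers are made up for illustration):

```python
# Extrapolation produces "new" values, but no information beyond what the
# model (position + velocity, or m and b) already contains.

def extrapolate_position(pos: float, vel: float, dt: float) -> float:
    """Predict a future position from the current position and velocity."""
    return pos + vel * dt

def line_point(m: float, b: float, x: float) -> float:
    """Any point on y = m * x + b is fully determined by m and b."""
    return m * x + b

print(extrapolate_position(pos=10.0, vel=2.5, dt=4.0))  # 20.0
print(line_point(m=2.0, b=1.0, x=100.0))                # 201.0
```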


How would you know if it wasn't an extrapolation of current knowledge? Can you point me to something humans have done which isn't an extrapolation?

That was my point: "I am not necessarily saying humans do something different".

[flagged]


Your analogy falls apart if we consider that the number wasn't on the clock face.

I am deeply baffled by AI denial at this point.

Complete denial that AI/LLMs can produce novel, good things is an indefensible stance at this point. But the large volume of AI slop is still an unsolved problem, and the claim that "AI will still mostly deliver slop" seems almost certainly correct in the near term.

We've had a few decades to address email spam, and still haven't managed to disincentivize it enough to stop it being the main challenge for email as a communication medium. I don't think there's much hope that we'll be able to disincentivize the widespread, large-scale creation of AI slop even after more expensive models with higher-quality output are available.


It's quite simple: it has yet to show it can actually be useful, and all the claims that it can have (so far) turned out to be self-delusion if not deliberate lies. When the industry is run by grifters, you shouldn't really be surprised when people stop believing them.

You are posting in a thread about it finding a novel solution to an unsolved mathematics problem.

I mean, I can run a pseudorandom number generator and produce something novel too.

Is this novel? It's new. But we already know AI can generate new things; any statistical reassembly of any content will generate new things.

This isn't to downplay it, but it's unclear what "novel" means here or what you think the implications are.
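
For what it's worth, a toy Python sketch of the PRNG point, assuming "novel" just means "not seen before": random reassembly of existing words yields strictly new output with no new insight.

```python
import random

# Existing "knowledge": a fixed vocabulary.
words = ["clever", "novel", "idea", "knowledge", "extrapolation", "algorithm"]

# Statistically reassemble it into a "new" artifact.
sentence = " ".join(random.choices(words, k=8))
print(sentence)  # almost certainly a word sequence no one has written before
```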


The only reliable raw converter that can handle Fuji color is Capture One. But they have a collaboration with Fuji; I don't believe that conversion algorithm is open-sourced.

But it would be interesting if an AI coding agent could reverse-engineer the algorithm.


I always recommend RawTherapee for serious photography work. In addition to having been (at least originally) written by a complete colour theory geek and featuring a treasure trove of knowledge in the form of its companion RawPedia, it supports a whole host of raw formats, X-Trans RAFs among them (although Foveon X3Fs are regrettably still an open issue).

I appreciate RawTherapee too and used it for a long time, but I started to notice that it really can’t match DPP for rendering Canon raw images. The denoising is nowhere near as good, and it takes a lot of work to make the colors come out as well as in DPP, which has processing profiles like “Faithful” that just look great out of the box.

What is DPP? I find it courteous in a conversation when the full name is provided before the first occurrence of an acronym.

I had to look it up; for those who are as puzzled as I was, it's Canon Digital Photo Professional (RAW image processing, viewing, and editing software).

Pentax user here (hobby level); I am not aware of the other brands' ecosystems.


Denoising is a weak part of RT, but I find with proper lighting it is rarely needed… At least for my use cases.

I have one Foveon camera; any hope for Foveon X3F support outside of RawTherapee? DarkTable does not process them correctly either.

Like the native Fujifilm software, this does not do raw conversion itself. It uses the processor in the camera to do the conversion.

Smart move by Fujifilm. That will be the future of software licensing with AI breaking copyright. Software will come encrypted and only run on secure processors. AI will push us further into an age of cloud, software DRM, and software patents. The rest will be effectively public domain.

AIs will reverse engineer the processing algorithms based on observing a few example inputs and outputs...

Actually I gave it a try … the results are interesting.

I will share it shortly


Doesn't Adobe Lightroom these days also have proper RAW conversion and the Fuji film simulations?

Unless something changed in the last 6 months, the answer is no. Their demosaicing algorithm implementation for Fuji still leads to the worms. You need to use Capture One or dcraw/libraw.

It works for me - oversharpening produces worms, but the denoise alone makes it worth it over Capture One for me.

The implementation used by libraw is just as good. Lightroom on the other hand is trash and wormy.
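
For anyone who wants to try the libraw path, here is a minimal sketch using its Python bindings (rawpy); the .RAF filename is just a placeholder:

```python
import rawpy
import imageio.v3 as iio

# LibRaw selects an X-Trans-aware demosaicer for Fuji RAF files,
# which avoids the "worms" discussed above.
with rawpy.imread("example.RAF") as raw:
    rgb = raw.postprocess(no_auto_bright=True, output_bps=8)  # HxWx3 array

iio.imwrite("example_demosaiced.png", rgb)
```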

Just use GPT5.4 and avoid the drama; it is a better model anyway.

Not all of us enjoy being glazed mercilessly while getting subpar output

I have burnt billions of tokens in GPT 5.4 and I don't know what you are talking about.

It's trash for larger codebases vs Opus unfortunately.

Quite the contrary in my experience. xhigh is the only model + thinking level that can reliably locate the bug.

Except for, y'know, the DoD and killer robots and mass surveillance drama.

Those are bigger models. The serving isn’t going to be cheaper.

Why expect cheaper, then? The performance is also better.


You seem to have insight into the size of OpenAI’s models.

Care to share the parameter counts for them?


HN is in denial, which is understandable

AI is already better at understanding code than 99.99% of humans; the more I use it, the more I believe this is true. It can draw connections between dots far more quickly and accurately than a human ever could.

At the very least, AI is going to be a must, even just as a co-supervisor on your project.

What is in doubt right now is whether AI can manage a codebase fully autonomously without bringing it down, which I doubt it can at the moment. Be it 4.6 or 5.4, they almost always add code instead of removing it; the sheer complexity will explode at a certain point.

But that is my assessment of the models TODAY; who knows where they will end up in 6 months. AI is entering the recursive self-improvement phase; that roadmap is lying in front of our eyes, and what it can and would unlock is truly, truly unpredictable.

I am both intrigued and scared.


> AI is already better at understanding code than 99.99% of humans

Not to nitpick here, but AI does not understand code. AI (LLMs) are token predictors, or at best sophisticated pattern matchers in a huge search space...


The RAG models are very competent at programming. I am worried about my job as a SWE in the near future, but didn't the MIT paper from about a week ago pretty much confirm that width-scaling the model is about to stop (or has already stopped) giving any measurable increase in quality because the training data no longer overfills the model?

Any authentic training data from pre-LLM times is assumed to have been used in training already, and synthetic or generated data gives worse-performing models, so the path of increasing the training data seems to be a dead end as well?

What is the next vector of training? Maybe data curation? Remove the low quality entries and accept a smaller, but more accurate data set?

I think the AI companies are starting to sweat a little, considering the promises they have made, their inability to deliver and turn a profit in their current state, and the slowing improvements.

Interesting times! We are either all out of jobs or a massive market crash is imminent, awesome...


Different architectures, different RL training loops, maybe memory modules [1][2] as part of the architecture, a focus on efficiency, the giant troves of data we're generating by using claude code/gemini-cli/opencode: there's lots of research to be done.

[1] https://research.google/blog/titans-miras-helping-ai-have-lo...
[2] https://github.com/deepseek-ai/Engram


This is true.

When I am using codex, compaction isn't something I fear; it feels like saving your gaming progress and moving on.

For Claude Code, compaction feels disastrous, and it also takes much longer.


Well, I really don't like my handwriting; I would rather avoid it.


I mean, you don't need your first job to be at the top of the top companies. Your first job is to get you into the industry; then you can flourish.

How many juniors are OpenAI or GDM going to hire in a year? Probably double digits at most. The chances are super slim, and they are by nature allowed to be as picky as they want to be.

That being said, I do agree this industry is turning into finance/law, but that won't last long either. I genuinely can't foresee what happens if/when AGI/ASI is really here; it should start generating its own ideas to better itself, and there will be no incentive to hire any human for a large sum anymore, except perhaps a single-digit number of individuals on earth.


The problem is that the lack of experience compounds.

Because AI accelerates the rate of knowledge gain, this gets even faster.


This is definitely the Claude killer OpenAI is cooking.

And so far it has succeeded


You should respect the government's choice. It is elected, after all.


The executive doesn’t pass laws. Congress created the Department of Defense. Only Congress can rename it. The executive being elected is irrelevant to this point. The Constitution actually matters.

