Not sure if AI can have clever or new ideas; it still seems to me that it combines existing knowledge and executes algorithms.
I am not necessarily saying humans do something different either, but I have yet to see a novel solution from an AI that is not simply an extrapolation of current knowledge.
Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things.
My biggest hesitation with AI research at the moment is that they may not be as good at this last step as humans. They may make novel observations, but will they internalize these results as deeply as a human researcher would? But this is just a theoretical argument; in practice, I see no signs of progress slowing down.
This is my take as well. A human who learns, say, a Towers of Hanoi algorithm will be able to apply it next time without having to figure it out all over again. An LLM would probably get there eventually, but would have to do it all over again from scratch the next time. This makes it difficult to combine lessons in new ways: any new advancement relying on that foundational skill means, essentially, climbing the whole mountain from the ground.
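For concreteness, this is the kind of foundational skill I mean: the standard recursive Towers of Hanoi solution, which a human derives once and then just reuses (a minimal sketch, not anyone's production code):

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source peg to target peg, recording each move."""
    if n == 0:
        return
    # Move n-1 disks out of the way, move the largest, then stack them back.
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
```

Once you've internalized the "move n-1 aside, move the biggest, repeat" insight, solving for any n is free; the point above is that an LLM doesn't get to keep that insight between sessions unless it lands in the training set.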
I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it.
We call that Standing On The Shoulders Of Giants and revere Isaac Newton as clever, even though he himself stated that he was standing on the shoulders of giants.
The difference people are neglecting to point out is the experiences we have versus the experiences the AI has.
We have at least 5 senses, our thoughts, feelings, hormonal fluctuations, sleep and continuous analog exposure to all of these things 24/7. It's vastly different from how inputs are fed into an LLM.
On top of that we have millions of years of evolution toward processing this vast array of analog inputs.
Jokes aside, imagine you give LLMs access to real-time, world-wide satellite imagery and just tell them to discover new patterns/phenomena and correlations in the world.
It means extending/expanding something, but the information is based on the current data.
In computer games, extrapolation is finding the future position of an object based on the current position, velocity and the desired time. We do get some "new" position, but the system's entropy/information is the same.
Or if we have a line, we can extend it infinitely and get new points, but this information was already there in the y = m * x + b line formula.
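Both examples above can be sketched in a few lines (hypothetical numbers, just to illustrate the point that the output is fully determined by the inputs):

```python
def extrapolate(position, velocity, dt):
    # Linear extrapolation: the "new" position is entirely determined
    # by the current state, so no information is added to the system.
    return position + velocity * dt

def line_point(m, b, x):
    # Every point on y = m*x + b is already encoded in the pair (m, b).
    return m * x + b

print(extrapolate(10.0, 3.0, 2.0))  # position 2 seconds ahead: 16.0
print(line_point(2.0, 1.0, 5.0))    # "new" point on the line: 11.0
```

Each function is a pure mapping from known quantities; generating more outputs never tells you anything the parameters didn't already contain.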
Complete denial that AI/LLMs can produce novel, good things is an indefensible stance at this point. But the large volume of AI slop is still an unsolved problem, and the claim that "AI will still mostly deliver slop" seems to be almost certainly correct in the near-term.
We've had a few decades to address email spam, and still haven't managed to disincentivize it enough for it to stop being the main challenge for email as a communication medium. I don't think there's much hope that we'll be able to disincentivize the widespread, large-scale creation of AI slop even after more expensive models with higher-quality output are available.
It's quite simple: it has yet to show it can actually be useful, and all the claims that it can have (so far) turned out to be self-delusion if not deliberate lies. When the industry is run by grifters, you shouldn't really be surprised when people stop believing them.
The only reliable raw converter that can handle Fuji color is Capture One. But they have a collaboration with Fuji; I don't believe that conversion algorithm is open source.
But it would be interesting if an AI coding agent could potentially reverse-engineer the algorithm.
I always recommend RawTherapee for serious photography work. In addition to having been (at least originally) written by a complete colour theory geek and featuring a treasure trove of knowledge in the form of its companion RawPedia, it supports a whole host of raw formats, X-Trans RAFs among them (although Foveon X3Fs are regrettably still an open issue).
I appreciate RawTherapee too and used it for a long time, but I started to notice that it really can’t match DPP for rendering Canon raw images. The denoising is nowhere near as good, and it takes a lot of work to make the colors come out as good as DPP, which has processing profiles like “Faithful” that just look great out of the box.
What is DPP? I find it courteous in a conversation when the full name is provided before the first occurrence of an acronym.
I had to look it up, and for those who are as puzzled as I was: it's Canon Digital Photo Professional (RAW image processing, viewing and editing software).
Pentax user here (hobby level); I am not aware of the other brands' ecosystems.
Smart move by Fujifilm. That will be the future of software licensing with AI breaking copyright. Software will come encrypted and only run on secure processors. AI will push us further into an age of cloud, software DRM and software patents. The rest will be effectively public domain.
Unless something has changed in the last 6 months, the answer is no. Their demosaicing algorithm implementation for Fuji still leads to the "worms". You need to use Capture One or dcraw/libraw.
AI is already better at understanding code than 99.99% of humans; the more I use it, the more I believe this is true. It can draw connections between dots far more quickly and accurately than a human ever could.
At the very least, AI is going to be a must, even as a co-supervisor on your project.
What's in doubt right now is whether AI can manage a codebase fully autonomously without bringing it down, which I doubt it can at the moment. Be it 4.6 or 5.4, they almost always add code instead of removing it; the sheer complexity will explode at a certain point.
But that is my assessment of models TODAY; who knows where they will end up in 6 months. AI is entering the recursive self-improvement phase; that roadmap is lying in front of our eyes, and what it can and would unlock is truly, truly unpredictable.
The RAG models are very competent at programming. I am worried about my job as a SWE in the near future, but didn't the MIT paper from about a week ago pretty much confirm that width-scaling the model is about to stop (or has already stopped) giving any measurable increase in quality because the training data no longer overfills the model?
Any authentic training data from before LLMs is assumed to have been used in training already, and synthetic or generated data gives worse-performing models, so the path of increasing the training data seems to be a dead end as well?
What is the next vector of training? Maybe data curation? Remove the low-quality entries and accept a smaller but more accurate data set?
I think the AI companies are starting to sweat a little, considering the promises they have made, their inability to deliver and turn a profit in its current state, and the slowing improvements.
Interesting times! We are either all out of jobs or a massive market crash is imminent, awesome...
Different architectures, different RL training loops, maybe memory modules [1][2] as part of the architecture, focusing on efficiency, the giant troves of data we're generating by using claude code/gemini-cli/opencode: there's lots of research still to be done.
I mean, you don’t need your first job to be at the top of the top companies. Your first job is to get you into the industry; then you can flourish.
How many juniors are OpenAI or GDM going to hire in a year? Probably double digits at most; the chances are super slim, and they are by nature allowed to be as picky as they like.
That being said, I do agree this industry is turning into finance/law, but that won’t last long either. I genuinely can’t foresee what happens if/when AGI/ASI is really here: it should start giving humans ideas to better itself, and there will be no incentive to hire any human for a large sum anymore, except maybe a single-digit number of individuals on Earth.
The executive doesn’t pass laws. Congress created the Department of Defense. Only Congress can rename it. The executive being elected is irrelevant to this point. The Constitution actually matters.