
> However, the literature is unclear on how well LoRA performs relative to FullFT.

I think the literature is clear on that?

"LoRA vs Full Fine-tuning: An Illusion of Equivalence" -- https://arxiv.org/abs/2410.21228v1

Quoting from the conclusions:

> The paper describes the finding that LoRA and full fine-tuning, with equal performance on the fine-tuning task, can have solutions with very different generalization behaviors outside the fine-tuning task distribution. We found that LoRA and full fine-tuning yield models with significant differences in the spectral properties of their weight matrices: LoRA models often contain “intruder dimensions”, high-ranking singular vectors approximately orthogonal to the singular vectors of pre-trained weight matrices. The existence of intruder dimensions correlates with the fine-tuned model forgetting more of the pre-training distribution as well as forgetting more when trained on tasks sequentially in a continual learning setup.
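
Concretely, the intruder-dimension check boils down to comparing the top singular vectors of the fine-tuned weights against those of the pre-trained weights. A minimal sketch of that idea (assuming torch weight matrices; the function name and the 0.5 threshold are illustrative, not the paper's exact procedure):

    import torch

    def intruder_dimensions(w_pre, w_ft, k=10, threshold=0.5):
        # Top singular vectors of the pre-trained and fine-tuned weight matrices.
        u_pre, _, _ = torch.linalg.svd(w_pre, full_matrices=False)
        u_ft, _, _ = torch.linalg.svd(w_ft, full_matrices=False)
        # For each high-ranking fine-tuned singular vector, find its best match
        # among the pre-trained singular vectors (absolute cosine similarity).
        sims = (u_ft[:, :k].T @ u_pre).abs().max(dim=1).values
        # "Intruder dimensions": high-ranking directions that are nearly
        # orthogonal to everything in the pre-trained spectrum.
        return (sims < threshold).nonzero().flatten()

Anything this returns is a high-ranking direction the fine-tuned model learned that has no close match in the pre-trained spectrum.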

I'm surprised they didn't cite this; it's a well known paper.



To say that the 'literature is clear on that' while citing a single paper, which has been rejected from ICLR, is a bit of an overstatement.


> which has been rejected from ICLR

Oh, you mean rejected just like these papers?

Efficient Estimation of Word Representations in Vector Space[1], one of the most influential papers in the space with tens of thousands of citations[2]? Or the RoBERTa[3] paper (dramatically improved upon BERT; RoBERTa and derived models currently have tens of millions of downloads on HF and still serve as a reliable industry workhorse)? Or the Mamba paper[4] (pretty much the only alternative to transformers that actually gets used)? Do you want me to keep going?

Honestly, I find that whether a paper gets rejected or not means diddly squat, considering how broken the review system is and how many honestly terrible papers I have to wade through every time I look through the conference submissions for anything good.

[1] -- https://openreview.net/forum?id=idpCdOWtqXd60

[2] -- https://scholar.google.com/scholar?cites=7447715766504981253

[3] -- https://openreview.net/forum?id=SyxS0T4tvS

[4] -- https://openreview.net/forum?id=AL1fq05o7H


Based.

This guy knows his stuff.


Thanks for this comment.


Even that paper itself does not provide any "clear" conclusions about which method is better.


> I'm surprised they didn't cite this; it's a well known paper.

I'm surprised you copied and pasted all of that without explaining what it means.

Does LoRA perform worse than FullFT, better, or with no statistically significant difference?

You aren't able to tell from what you pasted, are you?


Standard LoRA (W_delta = B@A with standard inits) generally underperforms FullFT, primarily because of "intruder dimensions" (new high-ranking singular vectors that are nearly orthogonal to the singular vectors of the underlying weights), as outlined in the paper.
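
To spell out the notation: "standard LoRA" freezes the base weight W and trains only a low-rank update W_delta = B@A. A rough sketch, assuming torch (the class name and init scale are illustrative, not any particular library's implementation):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base weight W plus a trainable low-rank update W_delta = B @ A."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # FullFT would train these directly
            d_out, d_in = base.weight.shape
            self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # small random init
            self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: W_delta starts at 0
            self.scale = alpha / r

        def forward(self, x):
            # Equivalent to (W + scale * B @ A) x, without materializing W_delta.
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

Because B starts at zero, the update is a no-op at initialization, and whatever gets learned is confined to at most r directions.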

There are techniques like PiCa and SVFT which can mitigate much of the loss, though.


PiCa came out two days ago; how did you find out about it?


The one I was referring to was from this paper, first published in May: https://arxiv.org/abs/2505.20211v1

I don't recall how I found out about it, but it was either paperswithcode or an LLM research session working through the intruder dimensions problem.

In my Stable Diffusion tests, it substantially improves LoRA training speed and fidelity, and I have some experiments that seem to improve on it substantially further by adding learnable rotations of the singular vectors.


If you're going to be snarky, could you at least clarify what the answer is for those of us who don't stay on top of ML research...?


> If you're going to be snarky, could you at least clarify what the answer is for those of us who don't stay on top of ML research...?

The answer is "there's a difference, perhaps", though the GP appeared to imply that LoRA performs worse.

My understanding is that the paper found differences, but did not conclude that those differences make LoRA measurably better or worse, which is not what the GP's post suggested.


The paper does not make any clear conclusions about LoRA vs FullFT performance, beyond "the two methods seem to be learning different things".


Why would they cite a paper that doesn't help the case for their Tinker API, which was released soon after? :)



