
With a UI/UX person involved in the whole thing, preferably. It's just... bad.

Maybe have it run the CLI in compatibility mode when called as `gpg`, but a completely new one when called normally.


In our case the same hook is re-run on the server side; the pre-commit hook is purely there to increase velocity.

... and because most people using git will have to take a second if the hook comes back with "hey, your third commit is incorrect, you forgot the ticket number".


If it is something like a repo for configuration management I can understand that, because it's often a lot of very small changes, so every second commit would be a merge, and it's just easier to read that way.

... for code, honestly no idea


> I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.

You will save your org a lot of pain if you do force it, the same as when you force a formatting style rather than letting everyone do what they please.

You can discuss changing it if some parts don't work, but consistency lowers failures, every time.


Enforcement should live in CI. Into people's dev environments, you put opt-in "enablement" that makes work easier in most cases, and gets out of the way otherwise.

Agreed; my company has some helper hooks they want folks to use, which break certain workflows.

We’re a game studio with less technical staff using git (art and design) so we use hooks to break some commands that folks usually mess up.

Surprisingly most developers don’t know git well either and this saves them some pain too.

The few power users who know what they’re doing just disable these hooks.


It's a good thing you can't force it, because `git commit -n` exists. (And besides, management of the `.git/hooks` directory is done locally. You can always just wipe that directory of any noxious hooks.)

I can accept (but still often skip, with `git push --no-verify`) a time-consuming pre-push hook, but a time-consuming and flaky pre-commit hook is totally unacceptable to my workflows and I will always find a way to work around it. Like everyone else is saying, if you want to enforce some rule on the codebase then do it in CI and block merges on it.
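
For reference, all of these are stock git, no extra tooling needed; deleting the hook file is the blunt option:

    git commit --no-verify      # skip the pre-commit and commit-msg hooks
    git push --no-verify        # skip the pre-push hook
    rm .git/hooks/pre-commit    # or just remove the offending hook locally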


This article is very much "you're holding it wrong"

> They tell me I need to have "proper formatting" and "use consistent style". How rude.

> Maybe I can write a pre-commit hook that checks that for me?

git filter is made for that. It works. There are still caveats (it will format the whole file, so you might end up committing formatting fixes to code that isn't your own).
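
Roughly, the setup looks like this; a sketch only, where the filter name `fmt` and the choice of formatter (black, reading stdin and writing stdout) are just examples:

    # .gitattributes, committed to the repo
    *.py filter=fmt

    # per-clone config; the "clean" filter rewrites file content as it is staged
    git config filter.fmt.clean "black -"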

Pre-commit is not for formatting your code. It's for checking whether the commit is correct: whether it has a ticket ID, or whether the files pass even basic syntax validation.
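
The ticket-ID part of that naturally lives in a commit-msg hook rather than pre-commit; a rough sketch, assuming JIRA-style IDs (the pattern is just an example):

    #!/bin/sh
    # .git/hooks/commit-msg -- git passes the path of the commit message file as $1
    if ! grep -qE '[A-Z][A-Z]+-[0-9]+' "$1"; then
        echo "commit message is missing a ticket ID (expected something like ABC-123)" >&2
        exit 1
    fi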

> Only add checks that are fast and reliable. Checks that touch the network should never go in a hook. Checks that are slow and require an up-to-date build cache should never go in a hook. Checks that require credentials or a running local service should never go in a hook.

If you can do that, great! If you can't (say it's something like a CI/CD repo with a bunch of different languages involved and not every dev has a setup for everything to be checked locally), having to override it once or twice a year is still preferable to committing non-working code. We run local checks for the stuff that makes sense (checking YAML correctness, or decoding encrypted YAMLs with the user's key so they also get checked), but the ones that don't make sense locally go remote. It's faster: a few ms of RTT don't matter when you can leverage a big server CPU to run the checks faster.

Bonus points: it makes the real pain point, interactive rebases, faster, because you can cache the output for a given file hash globally, so existing commits take milliseconds at most to check during a rebase.
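
A rough sketch of what that caching can look like, assuming yamllint as the check; the hook body and cache location are examples, not what we actually run:

    #!/bin/sh
    # .git/hooks/pre-commit -- validate staged YAML, caching results by blob hash
    # so identical content (e.g. replayed during an interactive rebase) is instant
    cache="$HOME/.cache/yaml-check"
    mkdir -p "$cache"
    status=0
    for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.yml' '*.yaml'); do
        blob=$(git rev-parse ":$f")        # hash of the staged content, not the working tree
        [ -e "$cache/$blob" ] && continue  # this exact content already passed
        if git show ":$f" | yamllint -; then
            touch "$cache/$blob"
        else
            echo "YAML check failed for $f" >&2
            status=1
        fi
    done
    exit $status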

> Don't set the hook up automatically. Whatever tool you use that promises to make this reliable is wrong. There is not a way to do this reliably, and the number of times it's broken on me is more than I can count. Please just add docs for how to set it up manually, prominently featured in your CONTRIBUTING docs. (You do have contributing docs, right?)

DO set it up automatically (or as much of it as possible; we have a script that adds the hooks and sets the repo defaults we use). You don't want a new developer to have to spend half a day setting up some git nonsense only to get it wrong. And once something changes, just rerun the script.
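
Something in this spirit works (a sketch; the hook directory and the config defaults are just placeholders for whatever your team standardizes on):

    #!/bin/sh
    # setup-repo.sh -- run once after cloning, and again whenever the hooks change
    git config core.hooksPath .githooks   # use the hooks versioned inside the repo
    git config pull.rebase true           # ...plus whatever other repo defaults you agreed on
    echo "hooks and repo defaults configured"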

Pre-push might address some of the pain points, but it doesn't address the biggest one: it puts the developer in a "git hole" if something is wrong in a commit, because while pre-commit will just... cancel the commit until the dev fixes it, with pre-push they now need to dig out the knowledge of how to edit or undo existing commits.


> they now need to dig out knowledge on how to edit or undo existing commits

This knowledge is a crucial part of using git effectively every day, so if some junior dev has to learn it quickly, it's doing them a favor.
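
For the record, a handful of commands covers most of it (examples, not an exhaustive list):

    git commit --amend       # fix up the most recent commit
    git rebase -i HEAD~3     # reword, squash or drop any of the last three commits
    git reset --soft HEAD~1  # undo the last commit, keeping its changes staged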


That reads differently knowing that one effect of it would be "it will be easier for AI content scrapers to get high-quality data for their overlords, who are currently destroying the economy".

They had ample warning and ignored the license. What are you even on about?



The amount of armchair quarterbacking here is wild.

Then waiting to see how they addressed these points, and what approaches were taken and why?

Here, time was spent thinking through and documenting all the IRC chats, the Twitter thread, the attitude of the SoC manufacturer, etc.

There has to be a backstory to suddenly coming after them 1.5 years later, over an issue that could have been solved in 10 minutes.


Then why didn't Rockchip solve it in 10 minutes?

Bad decision and risk/reward calculation for sure. If it's code that is core to your stuff, and it is GPL'd, it's (technically) very tricky to solve.

But here, as FFmpeg is LGPL and we're talking about one single file, there is even less work needed to fix it.


Yeah, Rockchip seems to have screwed up badly, but as per the GitHub DMCA notice:

https://github.com/github/dmca/blob/master/2025/12/2025-12-1...

> ... the offending repository maintainers were informed of the problem almost 2 years ago ([private]), and did nothing to resolve it. Worse, their last comment ([private]) suggests they do not intend to resolve it at all.

Seems like the reporter gave them a lot of time to fix the problem; then, when it became obvious (to them) that it was never going to be fixed, they took an appropriate next step.


That's bullshit. The FFmpeg devs were well within their rights to even send a DMCA takedown notice, immediately, without asking nicely first.

This is what big corporations do to the little guys, so we owe big corporations absolutely nothing more.

They gave Rockchip a year and a half to fix it. It is the responsibility of Rockchip to take care of it once they were originally notified, and the FFmpeg developers have no responsibility to babysit the Rockchip folks while they fulfill their legal obligations.


Yeah. This is like waiting 90 days before releasing a full disclosure on a vulnerability, and then the vendor complaining "you could have contacted us and given us time, we only had 90 days". Gaslighting 101. Those 90 days give everyone with a lot of resources who is sitting on zero-days (such as Cellebrite) time to play for free.

Deadlines and reminders? They aren't teachers and Rockchip isn't a student; they are the victims here and Rockchip is the one at fault. Let's stop literally victim-blaming them for how they responded.

To be clear: Rockchip is at fault, 100%. I would sue (and obv DMCA) any company who takes my code and refuses to attribute it.

If you immediately escalate to [DMCA / court] because they refuse to fix it, then that's very fair, but doing so suddenly after something like 2 years of silence (if, and only if, that was the case, because maybe they spoke outside of Twitter/X) is odd.


Maybe spend less time policing how other people are allowed to act, especially when you’re speculating wildly about the presence or content of communications

It's a call to push the devs to freely say what happened in the background; there are many hints at that: "I wonder if...?", "What could have happened that it escalated?", "Why were there no public reminders, what happened behind the scenes?", etc., etc. Nothing much; these questions are deliberately open.

Oh. Being rude and suggesting the devs made (in your opinion) a mistake based on your guess at their actions is not going to be an effective way to get them to elaborate on their legal strategy.

Also it’s rude, which is reason enough not to do it.


In the adult world you don't get any warnings when you break the law.

I don't think he stole the entirety of published copyrighted works to make it.

Copyright did evolve to protect corporations. Most of the value from a piece of IP is extracted within the first 5-10 years, so why do we have an "author's life + a bunch of years" term on it? Because it is no longer about making sure the author can live off their IP; it's so corporations can hire some artists for pennies (compared to the value they produce for the company) and leech off that for decades.

It's been good at enabling the clueless to reach the performance of a junior developer, and at saving a few percent of time for mid-to-senior-level developers (at best). Also amazing at automating stuff for scammers...

The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because governments want to hide the recession they themselves created, because on paper it makes the line go up.

> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.

Well then congratulations on being in the 5%. That doesn't really change the point.


I’m a senior developer and it has been hugely helpful for me in both saving time and effort and improving the quality of my output.

You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.


Aren't you doing the same? Assuming you haven't actually measured your productivity or quality of work with & without gen AI.

That would be a terrible assumption to make then.

