Hacker News | Krssst's comments

The end state of genAI could as well be a few billionaires running their enterprises and everybody else being unemployed or working in a factory. Robots are not there yet (far from it), and someone needs to build and maintain the thing, as well as produce food for everyone. High unemployment could drive salaries down and make lots of things unavailable to common people, while making humans cheaper than automation for boring manual work.

That's an extreme scenario, but today's politicians are not very keen on redistributing wealth or preventing an excessive accumulation of economic power that could come to exceed the power of the state itself. I see nothing preventing that scenario from happening.


Yes you are: the article says that permission must be granted by the authorities in general (I guess no war and not active military), and that there are no penalties for breaching it.

As someone mostly outside of the vibe coding stuff, I can see the benefit in having both the model and the author information.

Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).

As long as those two pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be normalised within a project, otherwise the "traceability/statistics" part cannot be applied reliably).
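If a project did normalise on, say, commit trailers, the "traceability/statistics" part becomes a simple parsing job. A minimal sketch in Python, assuming a hypothetical `Model:` trailer convention (the trailer names and messages here are made up, not any project's actual standard):

```python
from collections import Counter

def parse_trailers(message: str) -> dict:
    """Extract 'Key: value' trailer lines from the last block of a commit message."""
    trailers = {}
    last_block = message.strip().split("\n\n")[-1]
    for line in last_block.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            trailers[key.strip()] = value.strip()
    return trailers

# Count which models produced commits, e.g. for later review or statistics.
messages = [
    "Add caching layer\n\nModel: claude-sonnet\nAuthored-by: Alice <alice@example.com>",
    "Fix race in worker pool\n\nModel: gpt-4o\nAuthored-by: Bob <bob@example.com>",
    "Refactor config loading\n\nModel: claude-sonnet\nAuthored-by: Alice <alice@example.com>",
]
model_counts = Counter(parse_trailers(m).get("Model", "none") for m in messages)
print(model_counts)  # Counter({'claude-sonnet': 2, 'gpt-4o': 1})
```

The point is less the specific field names than that a consistent convention makes the data machine-readable at all; a free-for-all does not.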


Yeah, nothing wrong with keeping the metadata - but "Authored-by" is both credit and an attestation of responsibility. I think people just haven't thought about it too much and see it mostly as credit and less as responsibility.

I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.

To me, Claude is not a who, it's an it. Before AI, did you credit your code completion engine for the portions of code it completed? Same thing.

> Before AI, did you credit your code completion engine for the portions of code it completed?

Code completion before LLMs helped me type faster by completing variable names, variable types, function arguments, and that's about it. It was faster than typing it all out character by character, but the autocompletion wasn't doing anything outside of what I was already intending to write.

With an LLM, I give it brief explanations in English and it returns tens to hundreds of lines of code at a time; for some people, perhaps even more than that. Or you might first have a "conversation" with the LLM about the feature to be added, and then, once you've explored what it will be like conceptually, tell it to implement that.

In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.

As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don't need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is, in my opinion, incorrect to claim authorship over any commit that consists for the most part of AI-generated code.
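For what it's worth, git models this split directly: author and committer are separate fields on every commit. A minimal sketch (the model's name and email here are made up, not an official convention):

```shell
# In an existing repo: record the LLM as author, while git records
# the configured user (user.name/user.email) as committer.
git commit --author="Claude <claude@example.invalid>" -m "Add feature X"

# Both fields remain visible afterwards:
git log -1 --format='author: %an / committer: %cn'
```

So "the tool authored it, I committed it" needs no extra convention at all; it is exactly what these two built-in fields express.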


I do see your point. I suppose the question is what authorship entails, or should entail.

True. Might also vary depending on how one uses the LLM.

For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.

While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the lines of a fancy autocomplete, writing the lines of code for something that the user would otherwise have written more or less exactly the same by hand.


This mirrors my thoughts.

I am doing the work. Claude is a tool, and I won't attribute authorship to it.

Future analysis is a valid reason to keep it; that's a good point and I agree with it.

People who don't want to use Apple products are not forced to do so. They can use Android (which has alternative stores, at least for the time being). Though I guess almost everyone logs in anyway. Generally both major mobile OSes have good support from application developers, while on PC almost everyone ends up being forced to use Windows at some point (to use Office, to play games).

And phones have been little spying machines from the start; people are more used to their phone spying on them than to their PC doing the same. I don't think MacBooks require an Apple account, for example.


Once you get NREs set up you don't need a constant uninterrupted supply of replacements as fossil fuels do (we burn them after all).

We'd need replacements as old infrastructure ages out, but it seems much easier to wait out a supply disruption than with oil, since it just means using old equipment while the supply is cut; sure, some might break after a while, but electricity production wouldn't fall immediately.


I can't believe they made an account for that comment. Like each action carries the same weight. Renewables, especially solar, are super low maintenance. When you buy panels, barring some manufacturing defect, you buy them for the life of the project, not for the life of the panel.

Solar lasts so long, it is a one time purchase.


As far as I remember it worked well in Windows 7 and 8 (deterministic, and it showed the programs you expected it to show). From 10 onward it started behaving erratically (around the same time it got Bing integration, but that may be unrelated).

It had problems in 8. I would frequently type my search term and see it was the number one result. I would then arrow or tab down and hit enter to launch that result. Between arrowing down and hitting enter, the result list would update/reorder, and suddenly I was launching some unknown program. Happened all the time.

LTSC cannot be bought by a regular customer, unfortunately. Legally, regular customers are only allowed to use the enshittified version.

You can get access to it, but it's a quest. You need to buy a volume license, which requires at least 5 licenses (about $300). Only then are you eligible to buy an LTSC version.

It doesn't require a corporation or anything, you can do that as a private person. But it IS annoying.


Why not just get the ISO, install it, activate with massgravel, and be done for life?

Because it's illegal, and that matters to some people.

That's true indeed, but Microsoft is not giving us any other option so why not use the good version at home? I mean what is the risk really?

MS has always wanted (and probably still wants) you to pirate Windows instead of jumping to Linux or Mac.

French person here: there absolutely is a difference, at least in the "heard on TV" accent.

Could you be talking about the southern accent where maybe those sound similar?

A pet theory of mine is that people who confuse "est" (sounds like "è", means "[he/she/it] is") and "et" (sounds like "é", means "and") in writing grew up with an accent that does not distinguish between those sounds. (I'm not criticizing the mistake or the accent, but I've always been curious about this particular kind of writing mistake, because those two words sound so different to me.)


I dream of a Firefox extension/feature that could check locally for LLM-generated text and highlight it automatically. It would likely have immense resource usage, but it would be worth it.

I've also dreamt about it. Surely something like this could be made; even with traditional rule-based methods, checking for patterns like "not X but Y" should be possible. Highlight them with different colors for the different patterns, with an overall rating for the page. Another promising avenue is words overused by AIs compared to the general corpus (it may even be possible to narrow down the model used on longer pages).
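A toy sketch of what such rule-based scoring might look like, nothing like a real detector: the patterns and the "overused words" list below are illustrative guesses, not an established corpus:

```python
import re

# Hypothetical phrasing patterns often attributed to LLM output.
PATTERNS = [
    re.compile(r"\bnot (only )?\w+,? but (also )?\w+", re.IGNORECASE),
    re.compile(r"\bit'?s worth noting\b", re.IGNORECASE),
]
# Made-up shortlist of words said to be overrepresented in LLM text.
OVERUSED = {"delve", "tapestry", "multifaceted", "crucial", "nuanced"}

def llm_score(text: str) -> float:
    """Crude 0..1 score: fraction of heuristics that fire on the text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    hits = sum(1 for p in PATTERNS if p.search(text))
    hits += len(words & OVERUSED)
    return hits / (len(PATTERNS) + len(OVERUSED))

sample = "It's worth noting that we must delve into this multifaceted, nuanced topic."
print(round(llm_score(sample), 2))  # 0.57
```

An extension could run something like this per paragraph and map the score to a highlight color; the corpus-comparison idea would replace the hard-coded word list with frequencies learned from known human and AI text.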

> Refreshing to read something that doesn't seem written by AI too (would be ironic given the contents).

As much as I dislike the idea of not writing/checking code I am responsible for, it was a surprise to me to see a few "anti/limited AI in coding" articles that don't pass an LLM detector. (I know those detectors are not perfect, but there's not much else one can do.)

