Hiding all this very important info (which literally affects the user's life) behind an insignificant, boring click!
Even the most paranoid user will give up in certain use cases (like with COVID-19: even if you didn't agree, you needed to travel and work, which made it compulsory).
Every company that uses deceptive techniques like this should be banned in Europe.
WOW, such great work. I myself have been struggling with MinGW just to compile from source. Of course it works much more cleanly than the hated Visual Studio, but when it comes to compiling CUDA, that's it.
Visual Studio, like the majority out there, is invasive and full of bloatware, like you say.
Same struggle with electron.
How do you pair it with CUDA to compile the repos from source?
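For what it's worth, nvcc on Windows only accepts MSVC's cl.exe as its host compiler, so a pure-MinGW toolchain can't drive a CUDA build on its own; a common workaround is installing just the MSVC Build Tools (no IDE) and letting CMake wire them up. A minimal sketch (project and file names hypothetical):

```cmake
# Assumes the MSVC Build Tools and the CUDA Toolkit are installed;
# CMake will locate cl.exe as nvcc's host compiler on Windows.
cmake_minimum_required(VERSION 3.18)
project(cuda_demo LANGUAGES CXX CUDA)

# kernel.cu is a placeholder for the repo's CUDA sources.
add_executable(app kernel.cu)
set_target_properties(app PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
```

Configuring from a "Developer Command Prompt" (so cl.exe is on PATH) avoids the full Visual Studio IDE entirely.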
Even people in category #1 should be concerned. Even if their income is not directly affected, the potential for disruption is clearly brewing: mass unemployment, social and civil unrest.
I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.
I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.
Alternatively: this is an America problem. I'm outside of America and I've been fielding more interviews than ever in the past 3 months. YMMV, but slowed-down hiring as a leading indicator can come from so many things, including companies just waiting to see how much LLMs affect SWE positions.
It's from AI, either directly or indirectly: either the top SWEs using AI are replacing 10 mid-level/junior engineers, or your job is outsourced to someone doing it at half your salary with an AI subscription. Only the top/lucky/connected SWEs will survive a year or two. If you have used any SOTA agent recently or looked at the job market, you would have seen this coming and had a plan B/C in place, i.e. enough capital to generate passive income to replace your salary, or another career that is AI-safe for the next 5-10 years. Alternatively, stick your head in the sand.
I guess I just don’t see that happening right now. I’m at a big public startup and our hiring hasn’t changed much; we still have a ton of work. Claude Code with SOTA models can shortcut some tasks, but I’m still having a hard time saying it’s giving us much of a multiplier, even with plenty of .MDs describing what we want. It can ad-lib some of the stuff, but it’s not AGI yet. In 5-10 years, I have no idea.
In Europe it doesn’t seem too bad right now (for the 15+ yr cohort?). I interviewed at a handful of places and got an offer or two, and my current team and company are hiring about the same as in the last few years.
I feel like that's a rather bad-faith take, so if you're going to make that kind of accusation you better back it up. People can legitimately believe that AI is not going to be the end of the world, and also not be privileged. And people can be privileged, and also be right. Not everything can be reduced down into a couple of labels, and how those labels "always" interact.
#3: You realise that super-autocomplete is an incredible technology, but the hype behind it far exceeds its capabilities, and you're excited about the possibilities it may promise for making your work easier and more enjoyable.
For #1: unless you already have a self-sustaining underground bunker or island, you will be affected, no matter how much savings and total compensation you have. If you went out to buy groceries in the last week, it will affect you.
Can it create employment? How is this making life better?
I understand the achievement, but come on: wouldn't it be something to show if you had created employment for 10,000 people using your 20,000 USD!
Microsoft, OpenAI, Anthropic, xAI: all solving the wrong problems, your problems, not the collective ones.
That’s the most HN reply ever. Obtuse and pedantic.
Tell a struggling undergrad or unemployed person that “employment” is not intrinsically valuable; maybe they’ll be able to use the rhetoric to move a couple of positions higher in a soup-kitchen queue before their food coupons expire.
I'm struggling to even parse the syntax of "WHATEVER LEADS TO REWARD COLLECTIVE HUMANS TO SURVIVE", but assuming that you're talking about resource allocation, my answer is UBI or something similar to it. We only need to "reward" for action when the resources are scarce, but when resources are plentiful, there's no particular reason not to just give them out.
I know it's "easier to imagine an end to the world than an end to capitalism", but to quote another dreamer: "Imagine all the people sharing all the world".
Except resources won't be plentiful for a long while, since AI is only impacting the service sector. You can't eat a service, and you can't live in one. SaaS will get very cheap though...
Didn't you hear? We're heading towards a workless utopia where everything will be free (according to people who are actively working to eliminate things like food assistance for less fortunate mothers and children.)
Obviously a human in the loop is always needed and this technology that is specifically trained to excel at all cognitive tasks that humans are capable of will lead to infinite new jobs being created. /s
When two multi-billion-dollar giants advertise on the same day, it is not competition but rather a sign of struggle and survival.
With all the power of the "best artificial intelligence" at your disposal, a lot of capital, and also all the brilliant minds, THIS IS WHAT YOU COULD COME UP WITH?
What's funny is that most of this "progress" is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.
"post-training shaping the models behavior" it seems from your wording that you find it not that dramatic. I rather find the fact that RL on novel environments providing steady improvements after base-model an incredibly bullish signal on future AI improvements. I also believe that the capability increase are transferring to other domains (or at least covers enough domains) that it represents a real rise in intelligence in the human sense (when measured in capabilities - not necessarily innate learning ability)