Hacker News | trilogic's comments

Great article, thank you.

Hiding all this very important info (which literally affects users' lives) behind an insignificant, boring click! Even the most paranoid user will give up in certain use cases (like with COVID-19: even if you didn't agree, you needed to travel and work, which made it compulsory). Every company that uses deceiving techniques like this should be banned in Europe.


Humanity's Last Exam 44%, SciCode 59, that one 80, this one 78, but never 100%.

It would be nice to see one of these models (Plus, Pro, Super, God mode) score 100% on even one benchmark. Am I missing something here?


The theory of evolution didn't work on the horseshoe crab? Darwin, did you read that? Maybe NASA should read it too :)

Evolution rarely escapes local maxima.

We at Hugston have finally managed to release the new HugstonOne version.

In the video we show briefly how to use it.

We want to inform our users that from version 1.0.9 onward, all Enterprise Editions will be commercial.

Among the thousands of models now supported: Qwen3.5 397B, Qwen Next Coder 80B, Minimax 2.5, GLM5, etc.

However, all previous versions on GitHub and Hugston.com will remain untouched and available for free, as promised.

Feel free to contact us for questions.

Best, the Hugston Team.


Wow, such great work. I myself have been struggling with MinGW just to compile from source. Of course it works much more cleanly than the hated Visual Studio, but when it comes to CUDA compilation, that's it: it's Visual Studio for the majority out there. It is invasive and full of bloatware, like you say. Same struggle with Electron.

How do you pair it with CUDA to compile the repos from source?


You are not worried for one of two reasons:

1. You are not affected somehow (you have savings, connections, you're not living paycheck to paycheck, and you have food on the table).

2. You prefer to avoid trouble in complex matters.

Time will tell; in fact, it's showing already.


Even people in category #1 should be concerned. Even if their income is not directly affected, the potential for disruption is clearly brewing: mass unemployment, social and civil unrest.

I know smart and capable people who have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.

I am presently employed, but I have looked for a job. The market is the worst I've seen in my almost-30-year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.


Agree. I feel like most of the people sounding the alarm have been in the software-focused job hunting market for 6+ months.

Those who downplay it are either business owners themselves or have been employed for 2+ years.

I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.


Alternatively: this is an America problem. I'm outside of America and I've been fielding more interviews than ever in the past 3 months. YMMV but the leading indicator of slowed down hiring can come from so many things. Including companies just waiting to see how much LLMs affect SWE positions.

Alternatively, it's a loud minority.

As an American, I found a new job last year (Staff SW), and it was as easy as falling off a log, with a 26% pay bump.


It's from AI, either directly or indirectly: either the top SWEs using AI are replacing 10 mids/juniors, or your job is outsourced to someone doing it at half your salary with an AI subscription. Only the top/lucky/connected SWEs will survive a year or two. If you have used any SOTA agent recently or looked at the job market, you would have seen this coming and had a plan B/C in place, i.e. enough capital to generate passive income to replace your salary, or another career that is AI-safe for the next 5-10 years. Alternatively, stick your head in the sand.

I guess I just don't see that happening right now. I'm at a big public startup, our hiring hasn't changed much, and we still have a ton of work. Claude Code with SOTA models can shortcut some tasks, but I'm still having a hard time saying it's giving us much of a multiplier, even with plenty of .md files describing what we want. It can ad-lib some of the stuff, but it's not AGI yet. In 5-10 years, I have no idea.

In Europe it doesn’t seem too bad right now (for the 15+ yr cohort?). I interviewed at a handful of places and got an offer or two and my current team and company is hiring about the same as the last few years

I feel like that's a rather bad-faith take, so if you're going to make that kind of accusation you better back it up. People can legitimately believe that AI is not going to be the end of the world, and also not be privileged. And people can be privileged, and also be right. Not everything can be reduced down into a couple of labels, and how those labels "always" interact.

3. You realise that super-autocomplete is an incredible technology, but the hype behind it far exceeds its capabilities, and you're excited about the possibilities it may promise for making your work easier and more enjoyable.

For 1: unless you already have a self-sustaining underground bunker or island, you will be affected, no matter how much savings and total compensation you have. If you went out to get groceries in the last week, it will affect you.

You can get stuff delivered now; you just need a Ring camera and solid locks :)

Delivered by other people in the same financial situation as you of course :)

What would I need the bunker for?

Don't worry about it.

No worries, ladies and gentlemen, AI will solve it. Insert coin, or better, throw another trillion at it and all will be solved.

AGI is here


Ads? Pff...


Can it create employment? How is this making life better? I understand the achievement, but come on, wouldn't it be something to show if you had created employment for 10,000 people using your 20,000 USD!

Microsoft, OpenAI, Anthropic, xAI: all solving the wrong problems, your problems, not the collective ones.


"Employment" is not intrinsically valuable. It is an emergent property of one way of thinking about economic systems.


That’s the most HN reply ever. Obtuse and pedantic.

Tell a struggling undergrad or an unemployed person that "employment" is not intrinsically valuable; maybe they'll be able to use the rhetoric to move a couple of positions higher in a soup-kitchen queue before their food coupons expire.


By employment I mean "WHATEVER LEADS TO REWARD COLLECTIVE HUMANS TO SURVIVE".

Call it what you wish, but I am certainly not talking about coding values.


I'm struggling to even parse the syntax of "WHATEVER LEADS TO REWARD COLLECTIVE HUMANS TO SURVIVE", but assuming that you're talking about resource allocation, my answer is UBI or something similar to it. We only need to "reward" for action when the resources are scarce, but when resources are plentiful, there's no particular reason not to just give them out.

I know it's "easier to imagine an end to the world than an end to capitalism", but to quote another dreamer: "Imagine all the people sharing all the world".


Except resources won't be plentiful for a long while, since AI is only impacting the service sector. You can't eat a service; you can't live in one. SaaS will get very cheap, though...


Robotics has been advancing very quickly recently. If we solve long-term AI action planning, I don't see any limitation to making it embodied.


Didn't you hear? We're heading towards a workless utopia where everything will be free (according to people who are actively working to eliminate things like food assistance for less fortunate mothers and children.)


Who are some of those people?


Obviously a human in the loop is always needed and this technology that is specifically trained to excel at all cognitive tasks that humans are capable of will lead to infinite new jobs being created. /s


When two multi-billion-dollar giants advertise on the same day, it is not competition but rather a sign of struggle and survival. With all the power of the "best artificial intelligence" at your disposal, plus a lot of capital and all the brilliant minds, THIS IS WHAT YOU COULD COME UP WITH?

Interesting


What happened to you?


AI fried brains, unfortunately.


I mean, he has a point; it's just not very eloquently written.


I empathize with the situation, no elegance from them, no eloquence from me :)


What's funny is that most of this "progress" is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.


"Post-training shaping the model's behavior": from your wording, it seems you don't find it that dramatic. I instead find the fact that RL on novel environments provides steady improvements on top of the base model an incredibly bullish signal for future AI improvement. I also believe the capability increases are transferring to other domains (or at least cover enough domains) that they represent a real rise in intelligence in the human sense (when measured in capabilities, not necessarily innate learning ability).


What evidence do you base your opinion on capability transfer on?


> is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.

Sure, but acquiring/generating/creating/curating that much high-quality data is still a significant moat.


>There is no moat besides that.

Compute.

Google didn't announce $185 billion in capex to do cataloguing and flash cards.


Google didn't buy 30% of Anthropic to starve them of compute


Probably why it's selling them TPUs.


Yeah, they are both fighting for survival. No surprise, really.

They need to keep the hype going if they are both IPO'ing later this year.


The AI market is an infinite sum market.

Consider the fact that 7-year-old TPUs are still sitting at near-100% utilization today.


How many IPOs can a company really do?


As many as they want. They can "spin off" and then "merge" again.


A company IPOs only ONCE; the follow-ups are called FPOs (follow-on public offerings).

