Hacker News | new | past | comments | ask | show | jobs | submit | tonyhart7's comments | login

the last thing people need is another foot gun in C code

A 100% Rust kernel is now upstream in Linux 7.4 (kernel.org)
402 points by rust_evangelist 6 hours ago | hide | 156 comments

future seems "safe"


this is why I don't bother with ARIA

"Microsoft is a buffet. You can get anything you want but it’s rare people leave a buffet saying “Man that food was great!”"

Tell this to all the Office alternatives lmao

I tried them all and they all suck; even the strongest competitor (Google Docs/Sheets) feels "lacking"


"Full disclosure, I haven't used microsoft office in a few decades."

and that's the problem; a fair comparison would be like replacing AWS with a Hetzner VPS


So its a good idea :)

Yeah, the problem is Hetzner is actually good, while the Office alternatives all suck

I think it says something that big tech is now so big that these "issues" are too small to be a priority for them

You'd better thank god MS is lazy and incompetent; the last thing we want is for big tech to be innovative and have an even stronger monopoly


GitHub's value is maybe not as apparent as other products'

but GitHub pairs well with MS's other core products like Azure and the VS/VS Code side

MS has a good chance of having vertical integration over how software gets written, from scratch to production; if they can somehow bundle everything into an all-in-one membership like the Google One subscription, I think they have a good chance


I like jujutsu kaisen too

it's called universal basic income

it's inevitable tbh


"What's some frontier research Meta has shared in the last couple years?"

the current Meta outlook is embarrassing tbh; the fact that they have the largest social media dataset on the planet and still can't produce a decent model puts them in quite a "scary" position


Yann was a researcher not a productization expert. His departure signals the end of Meta being open about their work and the start of more commercial focus.

The start?

Llama 4 wasn't great, but Llama 3 was.

Do we all forget how bad GPT 4.5 was?

OpenAI got out of that mess with some miraculous post-training efforts on their older GPT-4o model.

But in a different timeline we are all talking about how great Llama 4.5 is and how OpenAI needs to recover from the GPT 4.5 debacle.


As a counterpoint, I found GPT 4.5 by far the most interesting model from OpenAI in terms of depth and breadth of knowledge, and its ability to make connections and inferences and apply those in novel ways.

It didn't bench well against the other benchmaxxed models, and it was too expensive to run, but it was a glimpse of the future where more capable hardware will lead to appreciably smarter models.


Just because they are not leading current sprint of maximizing transformers doesn't mean they're not doing anything.

It's not impossible that they assess it as a local maximum / dead end and are evaluating/training something completely different, and if it works, it'll work big time.


Just because they have that doesn't mean they're going to use it for training.

"Just because they have that doesn't mean they're going to use it for training."

how noble of Meta, upholding the right moral ethics

/s


A very common thing people do is assume a) all corporations are evil b) all corporations never follow any laws c) any evil action you can imagine would work or be profitable if they did it.

b is mostly not true but c is especially not true. I doubt they do it because it wouldn't work; it's not high quality data.

But it would also obviously leak a lot of personal info, and that really gets you in danger. Meta and Google are able to serve you ads with your personal info /because they don't leak it/.

(Also data privacy laws forbid it anyway, because you can't use personal info for new uses not previously agreed to.)


oh man… just because they have data doesn’t mean they will serve you ads :) Geeeez

I’ve long predicted that this game is going to be won with product design rather than having the winning model; we now seem to be hitting the phase of “[new tech] mania” where we remember that companies have to make things that people want to pay more money for than it costs to make them. I remember (maybe in the mid-aughts) when people were thinking Google might not ever be able to convert their enthusiasm into profitability…then they figured out what people actually wanted to buy, and focused on that obsessively as a product. Failing to do that will lead to failure for companies like OpenAI.

Sinking a bazillion dollars into models alone doesn’t get you shit except a gold star for being the valley’s biggest smartypants, because in the product world, model improvements only significantly improve all-purpose chatbots. The whole veg-o-matic “step right up folks— it slices, it dices, it makes julienne fries!” approach to product design almost never yields something focused enough to be an automatic go-to for specific tasks, or simple/reliable enough to be a general purpose tool for a whole category of tasks. Once the novelty wears off, people largely abandon it for more focused tools that more effectively solve specific problems (e.g. blender, vegetable peeler) or simpler everyday tools that you don’t have to think about as much even if they might not be the most efficient tool for half your tasks (e.g. paring knife.) Professionals might have enough need and reason to go for a really great in-between tool (e.g. mandoline) but that’s a different market, and you only tend to get a limited set of prosumers outside of that. Companies more focused on specific products, like coding, will have way more longevity than companies that try to be everything to everyone.

Meta, Google, Microsoft, and even Apple have more pressure to make products that sanely fit into their existing product lines. While that seems like a handicap if you’re looking at it from the “AI company” perspective, I predict the restriction will enforce the discipline to create tools that solve specific problems for people rather than spending exorbitant sums making benchmark go up in pursuit of some nebulous information revolution.

Meta seems to have a much tougher job trying to make tools that people trust them to be good at. Most of the highest-visibility things like the AI Instagram accounts were disasters. Nobody thinks of Meta as a serious, general-purpose business ecosystem, and privacy-wise, I trust them even less than Google and Microsoft: there’s no way I’m trusting them with my work code bases. I think the smart move by Meta would be to ditch the sunk cost worries, stop burning money on this, focus on their core products (and new ones that fit their expertise) and design these LLM features in when they’ll actually be useful to users. Microsoft and Google both have existing tools that they’ve already bolstered with these features, and have a lot of room within their areas of expertise to develop more.

Who knows— I’m no expert— but I think meta would be smart to try and opt out as much as possible without making too many waves.


My thesis is the game is going to be won - if you define winning as a long term profitable business - by Google because they have their own infrastructure and technology not dependent on Nvidia, they have real businesses that can leverage AI - Google Search, YouTube and GCP - and they aren’t burning money they don’t have.

2nd tier winner is Amazon for the same reasons between being able to leverage AI with both Amazon Retail and AWS where they can sell shovels. I’ve also found their internal Nova models to be pretty good for my projects.

Microsoft will be okay because of Azure and maybe Office if they get their AI story right.

I just don’t see any world where OpenAI comes out ahead from a business standpoint as long as they are sharecroppers on other people’s hardware. ChatGPT alone will never make it worth the trillion-dollar capitalization long term unless it becomes a meme stock like Tesla.


Yeah that’s also about where I land.

never thought I'd say this, but X (Twitter) has had more success integrating their business product with AI (Grok)

I know, I know, Elon is crazy etc., but the Grok example and the way it's integrated with the core product is actually the only approach I can even come up with tbh (other than the character.ai flavor)


Actually haven’t used it at all so that’s a big blind spot in my understanding of the ecosystem.

If I was a Meta shareholder I might well agree with you. But as someone with very little interest in their products so far, I’m very happy for them to sink huge amounts of money into AI research and publishing it all.

I’m just calling balls and strikes. For all I care, the whole lot of them can get sucked down a storm drain. Frankly I think there’s way too much effort and resources being put into this stuff regardless of who’s doing it. We’ve got a bunch of agentic job stealers, a bunch of magic spam/slop generators, and a bunch of asinine toys with the big name LLM stuff: I don’t think that’s a net gain for humanity. Then there’s a bunch of genuinely useful things made by people who are more interested in solving real problems. I’ll care about the first category when it consistently brings more value than garbage “content” and job anxiety to average people’s lives.
