Hacker News | adventured's comments

Which isn't at all accurate. Venture capital specifically exists to fund first, in the pursuit of success later - and the US has been by a dramatic margin the leader in doing that for the past ~60-70 years.

VC still requires startups to find themselves and prove something first. China basically has a program to do X and anyone can sign up to be a part of that program. All are funded and the winners emerge. I’m broadly generalizing that process but that’s not how VC approaches it.

So instead of "Come pitch us your varied and unique ideas and convince us how our investment will 1000x the returns" it's more like "we need this capability in this industry. Here is a pool of money for you to start figuring it out. We'll focus on the more successful companies over time until they can stand on their own and compete internationally."

I can't imagine why China is so dominant in so many areas when they explicitly plan and invest in capabilities they want to have instead of just relying on the market to "naturally" provide these capabilities or constantly relying on the same handful of inept and corrupt companies to deliver on national needs.


China has this process at the city/state level. They can leverage their pegged currency to keep their citizens' purchasing power lower than it should be to fund anything.

A downside is that their domestic consumption is low, all their geographic neighbors view them as a threat (reducing exports long term), and this contributes to high unemployment as productivity increases.


> who was democratically elected by the way

He was everything but democratically elected. He was installed. The Iranian people did not elect Mosaddegh. He was put there by the Shah and the elites of the Majlis, neither of which ever represented the people of Iran. At no point in the past century has Iran had representative government.

For the absurd 'democratically elected' premise to be true, there would have to be actual representative government. There wasn't, there isn't.


He was as democratically elected as the system at the time allowed and spent basically his entire political career on increasing the power of the majlis and getting rid of colonial interests.

The UK spent a lot of resources conspiring against this project, and the project ultimately failed, to a large extent because he had no solution to the blockade that followed the nationalisation of oil production. Perhaps he also did not expect as many members of the Majlis to join the foreign conspiracy as did once the blockade got inconvenient.

It's also not as though democratisation followed under the Shah; rather the opposite: the establishment of rather nasty security services, and a nuclear program that the later revolutionaries inherited.


> increasing the power of the majlis

Right up until he was about to lose an election; then he suspended the vote count and tried to dissolve the Majlis in alliance with the communist party.


Not sure what you mean. In the 1952 election he stopped the vote counting once enough of the Majlis seats were filled that it could legally conduct business, and then tried to form a government, which the Shah blocked. This is what led to the proposal that the Majlis grant him six months of emergency powers.

He stopped the voting when he had enough friendly members, contra the constitution.

They're in the food micro-delivery business. They deliver food from the expo to your table. Short-hop logistics specialists.

It may not be reassuring, but it rather obviously demonstrates that Microsoft has no monopoly on Office.

That's the whole reason for using "pay their fair share" while never actually defining what the fair share is supposed to be: it's solely hit-and-run propaganda. Avoiding actual numbers is a requirement of the politics.

What I'm seeing ad infinitum on HN in every thread on agentic development: "yeah, but it really doesn't work perfectly today."

None of these people can apparently see beyond the tip of their nose. It doesn't matter if it takes a year, or three years, or five years, or ten years. Nothing can stop what's about to happen. If it takes ten years, so what; it's all going to get smashed and turned upside down. These agents will get a lot better over just the next three years. Ten years? Ha.

It's personal-interest bias that's fogging the time horizon; it's desperation and wilful blindness. Millions of highly paid people with their livelihoods being rapidly disrupted are in full denial about what the world looks like just a few years out, so they shift their time markers to months or a year, which reveals just how fast this is all moving.


You aren't wrong, but you’re underestimating the inertia of $10M+/year B2B distributors. There are thousands of these in traditional sectors (pipe manufacturing, HVAC, etc.) that rely on hyper-localized logistics and century-old workflows.

Buyer pressure will eventually force process updates, but it is a slow burn. The bottleneck is rarely the tech or the partner, it's the internal culture. The software moves fast, but the people deeply integrated into physical infrastructure move 10x slower than you'd expect.


Internal culture changes on budget cycles, and right now most companies are being pushed by investors to adopt AI. Have your sales team ask about AI budgeting vs. SaaS budgeting. I think you'll find that AI budget is available and conventional SaaS/IT budget isn't. Most managers are looking for a way to "adopt AI", so I think we're in a unique time.

> people deeply integrated into physical infrastructure move 10x slower than you'd expect.

My experience is yes, to move everyone. To do a pilot and prove the value? That's doable quickly, and if the pilot succeeds, the rest is fast.


I don't think you can guarantee it will get better. I'm sure it will improve from here but by how much? Have the exponential gains topped out? Maybe it's a slow slog over many years that isn't that disruptive. Has there been any technology that hasn't hit some kind of wall?

The broad concern that some people have is misplaced (China doesn't care about the average American home). The narrow concern is extremely plausible: that China would happily use it to target dissidents, for example, or people who have fled China for various reasons. We've seen how aggressive they are in targeting those people over time, including physical kidnappings in the US and elsewhere.

The acquisition of iRobot should be immediately blocked on national security concerns. China would have no problem doing the same if the situations were reversed.


99% of humans are mimics; they contribute essentially zero original thought across a 75-year lifetime. Mimicry is more often an ideal optimization of nature (of which an LLM is a part) than a flaw. Most of what you'll ever want an LLM to do is be a highly effective parrot, not an original thinker. Origination as a process is extraordinarily expensive and wasteful (see: entrepreneurial failure rates).

How often do you need original thought from an LLM versus parrot thought? The vast majority of all use cases globally will only ever need a parrot.


It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

As AI technical agents have gone from an interesting discussion to an extraordinarily obvious outcome, HN has comically shifted negative in tone on AI. They doth protest too much.

I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.


I don't think this is the case; I think what's actually going on is that the HN crowd are the people who are stuck actually trying to use AI tools and are aware of their limitations.

I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.

For those of us who are busy trying to deliver what we've promised customers we can do, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team-lead capacity to take on another intern or junior. So we're really positioned to be taking advantage of all these promises I keep hearing about AI, but in practical terms it saves me, at an architect or staff level, maybe 10% of my time, and for one of our seniors maybe 5%.

So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and will be completely automating absolutely everything, when what I actually spend my day on is the same-old same-old: getting some vendor framework to do what I want so I can pull sensor data out of their equipment, and delivering apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting per user. (I picked that example since at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)


I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.

I worked for a company that was starting to shove AI incentives down the throat of every engineer as our product got consistently worse due to layoffs and the perceived benefits of AI, which were never realized. When you look at the companies that have shifted to 'AI first' and see them shoveling out garbage that barely works, it should be no surprise that people, both those aware of how the sausage is made and those who aren't, are starting to hate it.

> The hollowing out of Silicon Valley is imminent

I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.


That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.

But they do have their hands on your budget, and they are responsible for creating and filling positions.

I’m not anti-AI; I use it every day. But I also think all this hand-wringing is overblown and unbalanced. LLMs, because of what they are, will never replace a thoughtful engineer. If you’re writing code for a living at the level of an LLM then your job was probably already expendable before LLMs showed up.

Except, you know, you had a job, and coming out of college could get one… if you were graduating right now in compsci you'd find a wasteland with no end in sight…

You’re assuming a lot about me that isn’t true, but let’s just say we can’t really know, can we? And I think it’s a bit reductionist to attribute the current job market to LLMs. The market started to suck long before LLMs became useful.

My apologies, I did not mean you as YOU, just the general "you"…

And while we can't know, we can also… kind of know, or look at the data, etc…

IntuitionLabs, “AI’s Impact on Graduate Jobs: A 2025 Data Analysis” (2025): https://intuitionlabs.ai/pdfs/ai-s-impact-on-graduate-jobs-a...

Indeed Hiring Lab, “AI at Work Report 2025: How GenAI is Rewiring the DNA of Jobs” (September 2025): https://www.hiringlab.org/wp-content/uploads/2025/09/Indeed-...


Edit: thanks for the gracious reply. I was probably overly defensive myself.

I didn’t read all of that, but what I gathered is that it’s relying on survey responses about future expectations? And probably being conflated a bit with the end of ZIRP and the effect that had on the market in general. I think it’s rather more likely that tech companies were allowed to play with funny money for a while, driving up demand, and suddenly, now that we are on a rebound from that, people want to point to AI as a scapegoat to avoid saying, “Yeah, we over-hired while it was advantageous and now we are cutting back to prior levels.” I’ve seen first-hand what happens to tech businesses that try to go “all in on AI,” and it isn’t a happy story for the company any more than for the employees.


It's not subtle.

But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.


>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.

This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion. Cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.


There's no scenario where these delivery bots survive US city sidewalks. They will be hijacked, attacked, destroyed, and heavily vandalized. The police will not be able to do anything about it. The business model will not survive the US unless the companies plan to deploy delivery tanks. It'll thrive in safer cities around the world, though.

I'm not sure if you'd consider London to be a safe city but these things won't survive in London either.

People are already pissed off about delivery ebike riders, who disobey laws and ride dangerously. But there's very little you can do about humans. A helpless robot that is causing a hazard to pedestrians? A ULEZ-style strike force will be mobilized to drive them out.

And what about blind and partially sighted people? The place for wheeled vehicles is on roads. If you want to exist in pedestrian areas, then make a robot that can walk.


Well, London is a safe city by US standards.

But putting that aside, the biggest problems these things will have in the UK is a completely different conception of walkability even compared to, say, NYC.

People walk everywhere, pavements are cluttered and crowded, and roads are not grid-structured almost anywhere in the UK. So much so that when US firms do consider testing these things properly in the UK, they will have to pick somewhere like Bath or Worthing or Hove: enough wealthy people to try it, and easy, grid-structured roads. Not many other good candidates.

The second problem they will face is the nature of protest. People won’t vandalise them. There will, however, be extensive civil mischief: people will box them in, mislead them, cover their sensors with googly eyes and woolly hats, put traffic cones on them, and generally make the whole scheme unworkable. And that is if councils don’t outright ban their operators.


Fundamentally I think they should just use the road and keep to the right (in the US), like other slow moving vehicles. They’d probably be fine in bike lanes where they exist.

Maybe they could enter the sidewalk for half a block at a curb cut like a cyclist would do to complete a delivery.


There's a very clear and obvious reason they are on the sidewalk. Bikes aren't "probably fine" even in bike lanes, though, and bikes are at least visible to drivers. These things are too small for bike lanes, let alone an actual lane of the road; they'd just be a small speed bump to most cars.

I guess time will tell, but I think most cities in the U.S. have areas that are affluent / on the "right side of the tracks" where robots could traverse unmolested, and then other lawless no-go areas for robots.

True democracy is throwing these damn things in the river.

Or maybe you can ride them like a bronco?

