
I probably agree with much of the article, but I find this kind of statement really weird:

  we should pursue approaches to intelligence that treat embodiment and interaction with the environment as primary
So pursue it. What does arguing that we should do it accomplish?


> So pursue it.

And they do. But it's also completely normal for researchers to try to convince others to work on the problems they care about.


Convincing people with arguments, so that you end up being more than just one person working on it?

I mean, if someone is arguing that we should work harder to go to space, answering that they should just go ahead and do it themselves is quite far from being a helpful answer, isn't it?


It's not a helpful response, but then 'we should work harder to go to space' is the kind of comment that is generally accepted yet actually quite meaningless.

Why not say 'NASA', 'my colleagues at NASA', 'as a scientist', or 'humanity'? One should at least indicate the group the collective noun refers to, rather than assume it is understood. One shouldn't assume one can speak for everyone, when that is most likely not the case.


So one can say "humanity" but not "we" (implying humanity)? Interesting take.


"We" is highly ambiguous. It ranges from 'me and my dog', to 'humanity', to anything in between. It's of course fine to us once the group has been defined.

The way it invokes the idea of a unified humanity, as if one group (say, scientists or politicians) could speak and decide for everyone, is a psychological trick, imo, in that it presumes a consensus.


$$$


... A need for capital?


This was actually the approach that Vicarious AI took while I was there, and even $250m in VC funding wasn't enough to prove it out, although that may have been a problem of having too much money and not enough focus.

I think the problem (if it can be called that) is that LLMs are useful today, while we still haven't solved the embodiment problem. Embodied approaches need a lot more research before they work well, so the money goes to the LLMs. And while it's pretty obvious that solving embodiment would change society, it's not clear how close we are to doing it. That makes the capital much harder to raise, since it's a much larger risk.


It is shocking that in this day and age people can burn $250M and fail to deliver a robot. Last time I checked, cameras could be bought for a couple of dollars and any SBC has gigaflops of compute power.



