Hacker News

Google operates on a live-code model. Anything running requires some level of active maintenance and support, so there's always an incentive to trim. In this case, the Flash crawler is an obvious thing to trim (what's the point of indexing something that the average user lacks the tools to open and read?).

One thing I do wish they would / could do in situations like this though is open-source the core of that crawler, so if someone else finds a purpose for it, they could run their own on whatever corpus they care to index. But, such is the price of relying on closed-source software for things.
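To make the "run their own" idea concrete: the core job of such a crawler is just finding where pages reference Flash resources. Here's a minimal sketch using only the Python standard library; the page content and URLs are made up for illustration, and a real crawler would obviously also fetch pages, follow links, and feed the .swf files to something that can actually parse them.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class SwfLinkFinder(HTMLParser):
    """Collect absolute URLs of Flash (.swf) resources referenced by a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.swf_urls = []

    def handle_starttag(self, tag, attrs):
        # Flash was typically embedded via <embed src=...> or <object data=...>.
        for name, value in attrs:
            if name in ("src", "data") and value and value.lower().endswith(".swf"):
                # Resolve relative references against the page's own URL.
                self.swf_urls.append(urljoin(self.base_url, value))

# Hypothetical page content and URL, just to show the mechanics:
page = '<object data="game.swf"></object><embed src="/old/banner.swf">'
finder = SwfLinkFinder("http://example.com/arcade/")
finder.feed(page)
print(finder.swf_urls)
```

Pointing this at whatever corpus you care about gives you the link graph into the Flash content; indexing what's *inside* the .swf files is the hard part Google is walking away from.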



Information is still information. Sure, I hate having to deal with Flash, but if the information I need is nowhere on the web except some Flash file or PDF deep in the interwebs, I will take a look.

I'm all for downranking it to the last page, but when your Google results only have two pages total, every single result matters, including that Flash content.


Who is paying to maintain the codebase to dissect Flash as the libraries it depends on change?



