
Todd from bit.ly here.

From day one, we've prized security, transparency, reliability, and openness at bit.ly. Along the way, we've made a number of product decisions based on those tenets. Among those are link permanence (link destinations don't change once created), the avoidance of anything that interferes with user experience (we've never framed, nor will we), and a dedicated focus on spam and malware detection, so that our users can click on bit.ly links with a high degree of confidence.

We take our responsibility as internet citizens seriously, and you'll see this exhibited even in the small details of the ways in which we manage flagged links (you'll notice we never actually disable a redirect, and at most simply insert an interstitial which retains the end destination link).

In the course of analyzing content for spam, malware, and phishing attacks, we rely on a number of systems, both internal and external. Over the course of the past year, a number of spammers have attempted to use various levels of indirection through redirectors (some of which are reconfigurable), in order to obfuscate and cloak their efforts. In fact, the bulk of shortens to bit.ly coming through other URL shorteners have tended to be attempts to spam the system. While our crawlers do of course follow links through redirections, the inclusion of modifiable redirects in the stream, and our analysis of the preponderance of spam attempts via these vectors have made it necessary and appropriate in some cases to block the URL shorteners.

Just to reiterate, the only goal is and always has been to protect the end user clicking on bit.ly links, regardless of the link source. Given that multiple layer wrapping of URL redirectors tends to be an edge case based on inappropriate API usage, confused users, or in the preponderance of cases, attempts to spam, we think this has been a fair approach. As such, you'll note that we did in fact update our interstitial warning pages with language better reflecting the reasoning behind the status. We're happy to see a healthy, vibrant, shortening ecosystem, and have no intention whatsoever to put a damper on other sites in the space.

Some have suggested we simply not shorten URLs already pointing to 3rd party short URLs. While this is a possibility, our API responses and the innumerable clients and scripts that use these methods aren't currently designed with this state in mind. Consequently, any changes would have to be carefully considered.

As with any product, bit.ly is a work in progress, and we're always interested in finding ways to best serve our users, while maintaining the integrity and openness of the product.



That's well-written PR, Todd, but the bottom line is this: your false positives are doing potentially severe damage to third parties. These are people who aren't even using your service.

When these things happen you need a way to fix them quickly, or you will find yourselves in legal hot water sooner or later.

Also, I might suggest to you that you have neither the power nor the authority to be the link police on the internet. What you're doing is engaging in an arms race that A) is impossible to win, and B) has numerous innocent bystanders.

I've never really been one to decry the dangers of link shorteners, but this is a great example of how a link shortener—even with a team of stand-up ethical guys behind it—can be bad for the internet.

We've been thinking about signing up for bit.ly pro, and I have to say this throws a wet blanket over the whole thing.


> We've been thinking about signing up for bit.ly pro, and I have to say this throws a wet blanket over the whole thing.

Whitelabel shouldn't be nearly as bad; as long as you control the domain name, you can always take your ball and go home.


Yes, whitelabel was definitely already a requirement for us.

My comment was more about the moral aspect though.


> While our crawlers do of course follow links through redirections, the inclusion of modifiable redirects in the stream, and our analysis of the preponderance of spam attempts via these vectors have made it necessary and appropriate in some cases to block the URL shorteners.

Why not do as tkaemming suggests and follow the redirections to link to the final endpoint URL?


The intent behind the very statement you quoted was to convey that we do precisely that. However, as also mentioned, in a number of cases modifiable destination redirects are embedded within the chain. In those cases, unless the redirect is crawled on every clickthrough, the integrity of the chain is difficult to assert.


Maybe you're misinterpreting me, because the linked post suggests that bit.ly isn't actually doing what I am suggesting.

My proposal is this: when a user submits a link to bit.ly to be shortened, bit.ly follows the link through 0..n redirections until it finds the final endpoint URL. This final endpoint URL is then stored as the bit.ly link.

Of course, this assumes that you don't care about the modifiable destination redirects in the chain, which maybe you do. In that case, you would only follow redirects whose domains appear on a whitelist of followable shorteners.

Maybe (probably?) there's something I'm missing that makes this infeasible, but it seems like the most logical solution to me.
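For what it's worth, here's a rough sketch of what I mean in Python. The whitelist contents, hop limit, and function name are just placeholders I made up, not anything bit.ly actually uses:

    # Sketch: resolve a submitted URL to its final destination, following hops
    # only through domains on a whitelist of known shorteners.
    from urllib.parse import urljoin, urlparse

    import requests

    KNOWN_SHORTENERS = {"goo.gl", "tinyurl.com", "is.gd"}  # illustrative whitelist
    MAX_HOPS = 5  # guard against redirect loops

    def resolve_destination(url):
        """Follow redirects through whitelisted shorteners; return the final URL."""
        for _ in range(MAX_HOPS):
            if urlparse(url).hostname not in KNOWN_SHORTENERS:
                return url  # not a whitelisted shortener; this is the destination
            resp = requests.head(url, allow_redirects=False, timeout=5)
            location = resp.headers.get("Location")
            if resp.status_code not in (301, 302) or not location:
                return url  # no further redirect to follow
            url = urljoin(url, location)  # Location may be relative
        return url  # hop limit reached; keep the last URL seen

The shortener would then store resolve_destination(submitted_url) as the target instead of the submitted URL itself.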


Isn't that exactly what they do? Some URL shorteners that don't allow modifiable destinations (like goo.gl) are whitelisted; others are not.


In much the same way that we don't frame links, or permanently remove flagged links, we would never return a short link that pointed to anything other than the link requested by the API, or by the user via an interface. It's a simple, deterministic API. The downsides of the proposed approach are far more extreme than any potential upside.


While I see what spohlenz is saying, I think what you've described here is the right way to do it. Once you start changing the target from what I submitted, I would personally be suspicious.


But if the other shortener is returning a 301 'permanent redirect', isn't it fully within the letter and spirit of the HTTP spec to forget it and remember the target?

If the shortener were only returning a 302, then removing it from the chain would be suspect, but with a 301 they are saying 'this link always points here, use it'.
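As a quick illustration (again just a sketch in Python, not anything bit.ly has described), that policy would look something like:

    # Collapse a hop only when it is a 301 ("Moved Permanently"), since the
    # shortener is declaring that the mapping will never change; a 302
    # ("Found", i.e. temporary) is left in the chain untouched.
    from urllib.parse import urljoin

    import requests

    def collapse_if_permanent(url):
        """Return the 301 redirect target, or the submitted URL otherwise."""
        resp = requests.head(url, allow_redirects=False, timeout=5)
        location = resp.headers.get("Location")
        if resp.status_code == 301 and location:
            return urljoin(url, location)  # permanent: safe to remember the target
        return url  # 302 or no redirect: keep the URL exactly as submitted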


I think the point was to remove all of the intermediate redirects and point all click-thrus on the bit.ly link right at the final destination. That way, a spammer reconfiguring their middle-man URL to point somewhere else will have no effect.


You say "any changes would have to be carefully considered". So, are said changes being considered or are they not?


We always consider ways in which to provide the best level of service to our end users and API users, while preventing unintended side effects. There are pros and cons to every approach, and they have to be evaluated with care.


That was very carefully crafted in order to not answer the question at all. Nicely done.


bit.ly is not doing the right thing here. Cut the crap and fix it.


> our API responses and the innumerable clients and scripts that use these methods aren't currently designed with this state in mind

Well, you could just return the original URL when it is already shortened; that way there is no new "state" to worry about.


Of course that IS a new state. If I call bit.ly's shorten function, I expect a bit.ly link back, not the original URL. I can imagine, for example, storing only the identifier in a db rather than the full bit.ly URL. That breaks (without proper checking) if the original URL is returned.
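To illustrate, here's a hypothetical client, not real code anyone ships, that keeps only the path of the returned URL as the identifier. It works for a bit.ly URL but silently stores garbage if the original long URL is echoed back:

    # Hypothetical client that stores only the identifier from the returned URL.
    from urllib.parse import urlparse

    def extract_identifier(returned_url):
        """Naively treat the URL path as the bit.ly identifier."""
        return urlparse(returned_url).path.lstrip("/")

    print(extract_identifier("http://bit.ly/abc123"))          # "abc123", as expected
    print(extract_identifier("http://example.com/some/page"))  # "some/page", silently wrong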


Seems reasonable to me. Just out of curiosity, since this article is placing the blame at bit.ly's feet and not TweetDeck's, how do the other shortener services behave when given a similarly shortened URL? Are they whitelisting?



