Show HN: Find bugs before your users do (bughunt.io)
64 points by pauljohncleary on Oct 13, 2014 | hide | past | favorite | 40 comments


> Crawling http://bughunt.io/ for issues, results will appear below.

> Pending

> Oops, an error occured. Please contact us for support or try again.

;)

If you get "Oops: Request to start crawl failed" and click "Try It Out" again and again, you get a lot of duplicate messages.


The crawler has been absolutely destroyed by HN, so I had to restart it; this is an unfortunate side effect of it being down. We'll add some better error handling on the client to fix this one.


This is one of those projects that has been on my side project list. I'm thinking you may want to play with the pricing a bit, since you could definitely charge more for the first paid plan, and it's a big jump from the $9 plan to the $199 plan.

Are there any types of checks you plan to do in the future that you're not doing now?


Thanks for the feedback on the pricing, it's a very first draft, so will definitely have some rework and testing before we release the paid plans. The second tier is really targeted at SMBs, where the business value of finding a critical bug before customers run into it exceeds $199/month.

We want to do lots and lots of checks that we don't currently do, e.g. linting the JS, correct MIME type checking, screen reader support, SEO, HTML/CSS validation, i18n, intelligent form submission, different browser user agents, responsive design, and external font rendering.

The hard part so far has been getting the crawler right, as some of you can probably see it's still a bit buggy.


When trying to crawl a URL that sends a 302 with a relative URI reference in Location, it fails. E.g. if http://www.example.com sends a 302 with "Location: /en/".


Thank you! I was hunting for a similar bug and I think you've uncovered it.

Relative Location headers are allowed in HTTP/1.1, so we should respect them: http://en.wikipedia.org/wiki/HTTP_location#Relative_URL_exam...
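For anyone curious, resolving a relative Location reference against the originally requested URL is a one-liner with the WHATWG URL class (available in Node and modern browsers). A minimal sketch, not bughunt's actual crawler code:

```javascript
// Resolve a (possibly relative) Location header against the URL that
// was originally requested. `new URL(ref, base)` handles both absolute
// and relative references, so "/en/" resolves against the origin while
// an absolute Location passes through unchanged.
function resolveRedirect(requestUrl, locationHeader) {
  return new URL(locationHeader, requestUrl).href;
}

console.log(resolveRedirect('http://www.example.com/', '/en/'));
// -> http://www.example.com/en/
console.log(resolveRedirect('http://www.example.com/', 'http://other.example/x'));
// -> http://other.example/x
```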


Looks cool. Spotted a bug: clicking the link in the sentence "Crawling http://www.example.com..." doesn't work.


Thanks! Will fix that one ASAP.


Looks cool and it's a useful tool. A couple of things I would suggest: the top bar where you mention funding shouldn't be positioned over the logo and top links. Because there isn't a way to remove it, it makes the header links hard to click (you can only click the bottom half of them).

Also the business plan offers crawling for 50 websites. Is this plan aimed at digital agencies? If it is then that's good but if not then that number is probably too high and more links per crawl might be better?


That top bar is actually a CloudFlare app; it should disappear when you tap/click it?

Thanks for the feedback on the pricing. You may be right; we probably need another plan that handles more links and fewer sites.


Looks nice! But when feeding my site, it seems to ignore the HTTPS scheme, just doing HTTP and getting stuck with the 301 response.

As already suggested, would be nice not needing to specify scheme.


I'm not sure how useful this is. AFAIK it crawls your website and tells you if you have a 404 somewhere, and that's all it does?


At my company I wrote our internal front-end testing tool (based on the awesome nightwatch.js[1]). We do unit testing with Chrome, Firefox, IE and PhantomJS, but we also grab screenshots at a variety of resolutions, check all the links and images (not just that they work, but also their content), test web font rendering, record performance API data, and more. From their feature list I'd guess bughunt.io is planning something quite similar. This is just an MVP. It's an interesting idea.

[1] nightwatchjs.org


I am not anything close to a lawyer, but years ago as a web designer I had a startup client who got in trouble for soliciting the general public for investment. Your bit about looking for funding is more vague than theirs was, and I think investment laws have even changed a bit since then, but might be something to ask a lawyer about.


Did not know that, thanks, it's gone.


Looks nice but it does not work for me, it seems to load forever.

By the way, I've noticed a small bug: the website tries to load http://localhost:35729/livereload.js, so you should remove the livereload script from your production build.


Same here, might be the HN effect, but it just loaded for 15 minutes and then bailed out with "an error occurred".


HN effect, the MVP really isn't designed for this kind of scale... yet!


Good spot, thanks!


Cool idea.

We've been using Pay4Bugs though (http://www.pay4bugs.com), it's a crowdsourced solution. You should get together with them!

Nothing beats having a ton of bug finders with different browser versions or phones... Some things just can't be automated.


Thanks for the shout out! Automated testing is great...the more you can automate, the better! For the things you can't automate, Pay4Bugs (https://www.pay4bugs.com) helps you get lots of human eyes searching for problems in your product quickly and for minimal cost.


Why does it require http://? example.com should work without it.


There's no reason; we'll fix that one.


It's been crawling my site for over an hour... is there a timeout period? Or a way to know if it really is still doing its work? Or did it encounter an error and never return anything to me?


I am not sure what the value of this service is. Simply loading a website in your web browser will verify all of the things bughunt.io does, something you do before pushing to production anyway.


Shameless plug: this is the kind of feature SEO4Ajax [1] already implements for SPAs and Ajax websites.

[1] http://www.seo4ajax.com/


"Privacy policy: Coming soon". That would be great.


Agreed, we'll add one. Does anyone know of any good template resources out there?


http://www.docracy.com/

Really like this idea. Haven't been able to test it out yet due to HN overload. Will check it out again later.

Also, on the pricing tables: headers and some text are acting up with the responsive design on small screens.


If you enter a protocol-less URL (like "google.com"), it doesn't recognise it. Maybe default to HTTP if it's not entered?


That's a problem with using <input type="url">: it's your browser that is rejecting the URL.
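One common workaround is to accept a plain text input and prepend a default scheme before validating or submitting. A hypothetical sketch (the real form could equally try https:// first):

```javascript
// Prepend "http://" when the user omits a scheme, so a bare hostname
// like "example.com" becomes a valid URL before it is sent to the
// crawler. Inputs that already carry http:// or https:// pass through.
function normalizeUrl(input) {
  var trimmed = input.trim();
  if (!/^https?:\/\//i.test(trimmed)) {
    return 'http://' + trimmed;
  }
  return trimmed;
}

console.log(normalizeUrl('example.com'));         // -> http://example.com
console.log(normalizeUrl('https://example.com')); // unchanged
```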


Cool, but how does this differ from all of the SEO tools out there that do the same thing? brokenlinkcheck.com for example?


All of those tools are targeted at ensuring that bots can crawl a site correctly (usually for SEO purposes).

Bughunt aims to find bugs a real user would run into.

It does this by using an actual web browser to crawl (not just HTTP requests). This means it supports Single Page Apps, JavaScript and anything else (including external scripts & images) that the page requires to render correctly.

Bughunt can replace a large chunk of point and click QA regression testing, without ever having to write automated scripts.


Can it crawl non port 80 addresses? My site is in testing and it's on 8080. bughunt.io comes back with no response.


I've never tried that, but I'm going to guess that it won't take the port number into account. We could definitely make that happen though, as that's a very valid use case, thanks!


A bug you may want to fix: double-clicking "Try it out!" sends the request twice and shows the errors twice.
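The usual fix is to ignore clicks while a request is already in flight. A minimal sketch of such a guard (hypothetical names, not bughunt's actual client code):

```javascript
// Wrap an async action so that only one invocation can be in flight at
// a time; extra clicks while a crawl request is pending are ignored.
// `action` is assumed to take a callback it invokes when finished.
function singleFlight(action) {
  var pending = false;
  return function () {
    if (pending) return false;   // ignore the duplicate click
    pending = true;
    action(function done() { pending = false; });
    return true;                 // this click actually fired the action
  };
}

// Usage sketch (startCrawl and button are assumed to exist):
// var onClick = singleFlight(startCrawl);
// button.addEventListener('click', onClick);
```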


Didn't work for me: "Request to start crawl failed" after I entered a web page and clicked "Try it out".


Website is offline. No cached version of this page is available.


>Error 502 Ray ID: 178ae6800de40b14 Bad gateway

Woops.


Error 502, Bad gateway



