The crawler has been absolutely destroyed by HN, so I had to restart it; this is an unfortunate side effect of it being down. We'll add better error handling on the client to fix this one.
This is one of those projects that has been on my side-project list for a while. I'm thinking you may want to play with the pricing a bit, since you could definitely charge more for the first paid plan, and it's a big jump from the $9 plan to the $199 plan.
Are there any types of checks you plan to do in the future that you're not doing now?
Thanks for the feedback on the pricing, it's a very first draft, so will definitely have some rework and testing before we release the paid plans. The second tier is really targeted at SMBs, where the business value of finding a critical bug before customers run into it exceeds $199/month.
We want to do lots and lots of checks that we don't currently do, e.g. linting the JS, correct MIME-type checking, screen-reader support, SEO, HTML/CSS validation, i18n, intelligent form submission, different browser user agents, responsive design, and external font rendering.
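To make one of those concrete, a MIME-type check could boil down to comparing the `Content-Type` a server returns against what the referencing tag expects. This is just a sketch of the idea; the helper name and the expected-type table are illustrative assumptions, not bughunt's actual implementation:

```javascript
// Illustrative MIME-type check: compare a response's Content-Type header
// against what the tag that referenced the resource expects.
// The table below is a small sample, not an exhaustive mapping.
const expectedTypes = {
  script: ["application/javascript", "text/javascript"],
  stylesheet: ["text/css"],
  image: ["image/png", "image/jpeg", "image/gif", "image/svg+xml", "image/webp"],
};

function checkMimeType(tagKind, contentTypeHeader) {
  // Strip parameters like "; charset=utf-8" before comparing.
  const mime = (contentTypeHeader || "").split(";")[0].trim().toLowerCase();
  const allowed = expectedTypes[tagKind] || [];
  return allowed.includes(mime)
    ? { ok: true }
    : { ok: false, message: `expected one of [${allowed.join(", ")}] for ${tagKind}, got "${mime}"` };
}
```

A crawler would run this for every `<script>`, `<link rel="stylesheet">`, and `<img>` response it fetches.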
The hard part so far has been getting the crawler right, as some of you can probably see it's still a bit buggy.
When trying to crawl a URL that sends a 302 with a relative URI reference in Location, it fails. E.g. if http://www.example.com sends a 302 with "Location: /en/".
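For what it's worth, HTTP allows a relative reference in `Location`, so the client is expected to resolve it against the request URL. A minimal sketch of the fix using the WHATWG URL API (the function name is hypothetical):

```javascript
// Resolve a possibly-relative Location header against the URL that was
// requested. new URL(input, base) performs standard relative-reference
// resolution, so "Location: /en/" from http://www.example.com works.
function resolveRedirect(requestUrl, locationHeader) {
  return new URL(locationHeader, requestUrl).href;
}
```

Absolute `Location` values pass through unchanged, since `new URL` ignores the base when the first argument is already absolute.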
Looks cool and it's a useful tool. A couple of things I would suggest: the top bar where you mention funding shouldn't be positioned over the logo and top links. Because there isn't a way to dismiss it, it makes the header links hard to click (you can only click the bottom half of them).
Also, the business plan offers crawling for 50 websites. Is this plan aimed at digital agencies? If so, that's fine, but if not, that number is probably too high and more links per crawl might be better.
At my company I wrote our internal front-end testing tool (based on the awesome nightwatch.js[1]). We do unit testing with Chrome, Firefox, IE and PhantomJS, but we also grab screenshots at a variety of resolutions, check all the links and images (not just that they work, but also their content), test web font rendering, record Performance API data, and more. From their feature list I'd guess bughunt.io is planning something quite similar. This is just an MVP. It's an interesting idea.
I am not anything close to a lawyer, but years ago as a web designer I had a startup client who got in trouble for soliciting the general public for investment. Your bit about looking for funding is more vague than theirs was, and I think investment laws have changed a bit since then, but it might be something to ask a lawyer about.
Looks nice, but it doesn't work for me; it seems to load forever.
By the way, I've noticed a small bug, the website seems to try to load http://localhost:35729/livereload.js so you should check your sources to modify this to the production URL.
Thanks for the shout out! Automated testing is great...the more you can automate, the better! For the things you can't automate, Pay4Bugs (https://www.pay4bugs.com) helps you get lots of human eyes searching for problems in your product quickly and for minimal cost.
It's been crawling my site for over an hour... is there a timeout period? Or a way to know if it really is still doing its work? Or did it encounter an error and never return anything to me?
I am not sure what the value of this service is. Simply loading a website in your web browser will verify all of those things bughunt.io does, something you do before pushing to production anyway.
All of those tools are targeted at ensuring that bots can crawl a site correctly (usually for SEO purposes).
Bughunt aims to find bugs a real user would run into.
It does this by using an actual web browser to crawl (not just HTTP requests). This means it supports Single Page Apps, JavaScript and anything else (including external scripts & images) that the page requires to render correctly.
Bughunt can replace a large chunk of point and click QA regression testing, without ever having to write automated scripts.
I've never tried that, but I'm going to guess that it won't take the port number into account. We could definitely make that happen though, as that's a very valid use case, thanks!
> Pending
> Oops, an error occured. Please contact us for support or try again.
;)
If you get "Oops: Request to start crawl failed" and click "Try It Out" again and again you get a lot of duplicate messages.
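One way to avoid the duplicate requests is an in-flight guard on the client: while a crawl request is pending, repeated clicks reuse the same request instead of starting new ones. A sketch, where `startCrawl` stands in for whatever async call the "Try It Out" button makes (a hypothetical name, not bughunt's actual API):

```javascript
// Wrap an async trigger so repeated clicks while a request is in flight
// don't queue duplicate requests; they all share the one pending promise.
function makeSingleFlight(startCrawl) {
  let pending = null;
  return function trigger(url) {
    if (!pending) {
      pending = startCrawl(url).finally(() => {
        pending = null; // allow a fresh attempt once this one settles
      });
    }
    return pending;
  };
}
```

Disabling the button while the promise is pending gives the same protection with visible feedback to the user.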