This feels like a move away from being the greatest open-source project hosting site toward a walled garden. I was browsing a repository and found an unlinked reference to another project that is probably also on Github, but there is no way to search for it when I'm not logged in (short of knowing the search URL somebody posted earlier). Pretty big UI failure, and I wonder whether this was intentional and, if so, whether they took the annoyed users into account...
I hope this will get more attention than the 6 comments it has right now. I really like Github for just browsing and exploring new projects; there should be no need to be logged in for that. Hopefully this is not just the first step toward a more business-driven Github.
I feel like this is a part of a trend that reflects how github itself is changing - slowly making access to things harder, then finally removing it.
Github used to have an awesome and public way to explore projects (http://web.archive.org/web/20130730125837/https://github.com...) - I remember using it a lot to look for projects to learn to read code, because it allowed me to easily find projects by language. They replaced it with a flashier but, in my experience, much less useful tool that you have no control over - https://github.com/explore.
I know I'm not the only one thinking of moving my repos to another provider or perhaps a self-hosted solution because of this direction.
gitlab.com is a fine compromise, though I'm not a fan of their "private repos for free" model: it means a lot of software that might have had utility if it were default-open will be lost behind default-private repositories.
But if gitlab jumps the shark the way github is doing, anyone in the ecosystem can always self-host a clone, though we would need to reimplement all the enterprise features people want.
In my experience it was the other way around. I signed up with gitlab only because they offer free private repos. Now since I already have an account with them and I don't have to go through the hassle of setting up stuff again, I'm only a couple clicks away from putting some project in the open.
As danmaz74 said, this will fragment the open source scene over multiple platforms, which makes projects harder to discover. Github always felt nice as a home for all these projects; I don't get the need for this move either.
They must know that people contributing to open source would rather not have their work hidden behind some wall. I just wish they had continued their original offering; in my book they were the good guys, helping open source reach new heights while also doing well as a business. I guess I was wrong.
If anything, github's lock-in fragmented "the open source scene". The internet and the web are distributed systems, there is no reason why all code has to be hosted with one proprietary service in order to be discoverable, just as there is no reason for hosting all websites with one proprietary service in order to be discoverable. That's what standardized protocols and interfaces are for.
Well, first, there is lock-in for non-owners of projects: you cannot submit pull requests that pull from git repositories hosted elsewhere, and you cannot submit them without signing in. That is also an incentive to use the service for other things, once you have an account that you need in order to contribute anyway.
Secondly, they do have proprietary features, embrace, extend, extinguish style.
Unless they convert repos to a proprietary format that requires a Github client to use, don't allow git cloning or pushing, and are trying to add Github-only features to git itself, then I think "embrace, extend, extinguish" is a bit overblown.
That's not how it works. Microsoft didn't add Windows-only features to Netscape either. It added them to IE. And then it lured people into using them, making them dependent on those proprietary features and decreasing the interoperability of websites that used those extensions with other browsers.
I don't follow ... patches via HTTP is a repository, patches via email is not?!
But apart from sending changes via email (which is actually a very good way to handle small(ish) changes; it is so good, in fact, that the Linux kernel developers use exactly that: git can generate emails that are human-readable, so you can easily review and comment on patches, and that can also be imported automatically into a repository by the recipient), you can also send pull requests via email. You simply put the URI of your git repository, wherever it is hosted, into an email, and the recipient can pull from it. That is the power of an open, distributed system like the internet.
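To make the email workflow concrete, here is a minimal sketch of the kernel-style round trip using nothing but git itself. All names and paths are made up, and everything happens in a throwaway local directory; in real use the `.patch` file would travel by email (e.g. via `git send-email`):

```shell
# Create a throwaway "upstream" repo and a contributor clone.
dir=$(mktemp -d) && cd "$dir"
git init -q upstream && cd upstream
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "base"
cd .. && git clone -q upstream contributor && cd contributor

# Contributor: commit a change and render it as a human-readable email.
echo "fix" > fix.txt && git add fix.txt
git -c user.email=dev@example.com -c user.name=Dev commit -qm "Add fix"
git format-patch -1 HEAD          # writes 0001-Add-fix.patch

# Maintainer: import the emailed patch directly into their repository.
cd ../upstream
git -c user.email=maint@example.com -c user.name=Maint \
    am "$dir/contributor/0001-Add-fix.patch"
git log --oneline -1              # the contributor's commit, applied
```

Note that `git am` preserves the original author of the patch, so attribution survives the trip through email.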
And to spin this a bit further: actual "hosting" isn't even necessary, technically. Especially once we get rid of IPv4 and NAT, it should be trivial to spin up a git server instance on your workstation (or maybe your smartphone, as that is probably online 24/7) so people can pull from there. Whether that is practical is another question; it well might be, but my point is more to show what is technically possible.
And yes, all of that was possible with a "previously popular solution": git existed before github, obviously, and its popularity is what gave rise to github. Also, all of this applies just the same to other distributed SCMs, of course.
I really wonder what the rationale behind this move is. I wouldn't even have noticed as I'm practically always logged in, but was that search bar lowering the conversion rates to free users, or what?
Google rate-limits unauthenticated sessions. Try using Google over Tor. They are pretty heavy-handed about it, too. Github isn't so small that it can use DDoS as an excuse.
Simply sign your session ids with an HMAC to verify them without any I/O, and require something computationally expensive to create a session (such as a hash-based challenge). Sessions are usually not needed for static pages, so by the time a person searches, their browser will have earned one. A spammer, on the other hand, wouldn't have the computational power to create sessions in bulk. And if a spammer's IP is detected, the challenge difficulty can be increased for that IP.
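A rough sketch of both halves of that scheme, assuming `openssl` and `sha256sum` are available; the key, session id, challenge, and difficulty are all illustrative, not anything GitHub actually uses:

```shell
SECRET="server-side-key"     # hypothetical key known only to the server

# 1. Stateless sessions: verifying a token is recompute-and-compare,
#    with no database I/O at all.
sign() { printf '%s' "$1" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}'; }
SID="session-42"
TOKEN="$SID.$(sign "$SID")"
ID=${TOKEN%%.*}; MAC=${TOKEN##*.}
[ "$(sign "$ID")" = "$MAC" ] && echo "token valid"

# 2. Hash-based challenge: the client must find a nonce whose hash
#    starts with a given prefix before it earns a session; lengthening
#    the prefix raises the cost for a flagged IP.
CHALLENGE="nonce-from-server"
PREFIX="00"                  # toy difficulty: ~256 hashes on average
n=0
while [ "$(printf '%s%d' "$CHALLENGE" "$n" | sha256sum | cut -c1-2)" != "$PREFIX" ]; do
  n=$((n+1))
done
echo "proof of work found: nonce=$n"
```

The server only stores `SECRET` and the per-IP difficulty; everything else is verifiable from the token itself.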
Well, at some point you have to block legitimate traffic. My thought was it doesn't have to be black and white. Maybe require the user to solve a captcha after the nth search from the same IP address? How big are the biggest of botnets? A few hundred thousand computers? Or maybe even require captcha for every unauthenticated search.
> Treat non-logged-in users as second-class citizens. By always serving logged-out users cached content, Akamai bears the brunt of reddit's traffic. Huge performance improvement.
I have ended up using this philosophy in a website I've been working on lately, where people can post puzzles from The Witness. This includes simplifying decisions such as not tracking solved puzzles unless you have a user id, and not allowing navigating to a random puzzle either, as this routine depends on stored solves and upvotes. Supporting not-logged-in users just means extra code for me.
In general, I don't think this is a bad attitude for a website to have.
It is perhaps different in the case of GitHub, a highly depended-on and well established website which is actually removing functionality here. I would still assume good faith about the reason. And mobile login rates are a problem for everyone, I'm pretty sure, not just GitHub.
To make a long story short, you can install our indexing engine on low-cost platforms like $10 VPS plans from DigitalOcean/Vultr or on dated hardware like a 6-year-old Dell laptop:
Right now we are indexing about 5,000 repos. We'll probably stop at 50,000, since we index an order of magnitude more than GitHub does per repo. Our objective isn't really to index as many repos as possible; rather, we are focused on making it insanely easy for others to index whatever repos they want.
I was banging my head against this earlier last week. It was such a jarring change to visit GitHub and not be able to navigate. It gives me pause to see how suddenly an easily-navigable site can become almost impenetrable with the removal of a single search bar. What are they thinking? The "Explore" menu is worse than useless, too.
It is an interesting case study in UX - the difference between a great experience and an awful one can come down to a single, well placed search box.
What is GitHub's rationale behind this jarringly awful, blatantly bad decision?
For me, search is one of GitHub's key features. It's nice to take a look at other people's code, explore other coding styles, and find examples of library usage, which is great for under-documented code.
Unfortunately, it doesn't let you escape special characters that are common in code, doesn't give you the option of preserving whitespace/line breaks in your query, and doesn't let you search in other branches.
Really bad move by Github. I sent them an e-mail complaining about it the very first moment I noticed. I cannot imagine that they didn't expect any reactions; they just seem not to care about the whole open source community. I am already looking for alternatives.
> Off-topic: @dang, can we get an "Upvote this comment if it has less than 1 point, else don't upvote it" button?
The font-color of the comment (aside from links already visited) should be enough to indicate if it has positive or negative karma. Individuals can decide whether to upvote or downvote further at that point.
> The font-color of the comment (aside from links already visited) should be enough to indicate if it has positive or negative karma.
This is by far the biggest problem I have with HN. If there's not enough light in the room, I have problems differentiating between shades of grey/black if the contrast isn't high enough. And it makes downvoted-to-hell comments extremely hard to read because all the mobile apps seem to replicate this, and you can't just ctrl+a to add contrast.
> The font-color of the comment (aside from links already visited) should be enough to indicate...
Yes, this is true. However, consider a comment with 0 points. If two HN visitors see it at roughly the same time and both upvote it, it will end up with 2 points rather than 1. What's more, neither visitor will see the effect of their action until at least one page reload and some time has passed.
Pretty new. I was bitching about it to a friend immediately after I discovered it last week.
Honestly, it makes Github feel much more like those vaguely-shady we-won't-tell-you-anything-substantial-until-you-give-us-an-email-address-that-we-can-sell-to-our-"valued partners" websites.
That's... not really the direction you want to be heading in when you want to be known as a stable, reasonable place for widely-used projects to be hosting their source code.
Yes, this looks new... and terrible, as it further silos discoverability, which should instead be getting easier for open source projects.
Github's mobile experience has been pretty bad in general. Their mobile view doesn't even have the primary feed D: Instead it has "repositories you contribute to", which is probably the most useless thing I could imagine for a logged-in view.
This is not the only problem I have with the mobile version.
Yesterday I tried to find an issue of my own that I had submitted two minutes earlier, to update it with a screenshot from my phone. In the end I had to type in the full URL manually.
I'd love to be able to quickly edit my notes from my phone, but no, I have to switch to the desktop version to edit the file.
Not being able to see my feed is definitely a huge disadvantage.
All in all, I use the desktop version quite a lot. At least it's easy to switch to it.
It's been there some time on mobile, but very new on Desktop. I used to sometimes search for random gems on Github while I was on the toilet and have had to switch to Desktop view for quite some time in order to get a search bar. Now it's also removed from the desktop view.
I personally don't like this, and I hope Github decides to revert these changes, as it doesn't really fit the open spirit. However, if you still want to search and you're a DuckDuckGo user like me, simply use "bangs" in your URL bar. e.g.
My reaction when I wanted to search for a file in a larger Github repo and the search bar was missing: I just cloned the repo and used `git grep` locally.
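For anyone who hasn't tried it, the workaround really is just two commands: clone, then `git grep`. A self-contained toy version (in practice the clone target would be the GitHub repo URL, ideally with `--depth 1` to keep it fast; the repo and function name here are fabricated for the demo):

```shell
# Fabricate a tiny repo standing in for a `git clone --depth 1 <url>`.
dir=$(mktemp -d) && cd "$dir"
git init -q demo && cd demo
printf 'def parse_config():\n    pass\n' > app.py
git add app.py
git -c user.email=a@example.com -c user.name=A commit -qm "init"

# -n prints file:line:match, searching only tracked files.
git grep -n "parse_config"        # app.py:1:def parse_config():
```

Unlike the web search, `git grep` respects punctuation and whitespace exactly, and `git grep <pattern> <branch>` lets you search branches the web UI won't.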
What's the next step? Cloning only allowed for logged-in users?
I just noticed that yesterday... and I think it's a bad idea. One of the great benefits of the original SF and, until now, Github was the ability to discover OSS software. I often knew what I was looking for but had absolutely no idea what the project was named (and on Github it was already a bit harder, since you'd find multiple forks of the same project).
Since I commit and fetch repository data using SSH and a private key, I rarely (if ever) log into the GitHub UI.
I guess I see only downsides from a user perspective - what are the upsides for GitHub? Tracking?
Drupal.org recently did something similar with their issue tracker. Their rationale was that it was a performance issue: there were large numbers of bots scraping the site through the advanced issue search. Requiring login for issue search allowed them to find out who was scraping, and then they could talk to them about what API improvements they needed, etc.
I noticed this a few days ago at work. I don't want to sign in to my personal Github account at work, but I'd still like to be able to search within a repo. I've been searching for code snippets with Google recently with lower (but non-zero) success rate.
Who uses github without being logged in? Why would you use it without being logged in? When I saw the front-page in the wayback machine I realized I hadn't seen the front-page in years. It certainly wasn't that fancy when I signed up.
This plus 2FA is a perfect combination for headaches. I'm already often not logged in while browsing GitHub because 2FA on every login is a pain, and now they're taking away features?
Not comparable, but airpair has a similar trick, asking you to sign up if you want to see their full, non-obfuscated code samples: http://i.imgur.com/pVuVjrY.png
I run a site (http://petihacks.com) with many of these little tricks, if it's of interest to anyone.
That would put me off using the site, let alone registering. Especially if it's meant to be social (i.e. I'm supposed to share links to content) -- I don't want to burden other people with the same hassle.
Facepalm. In this thread: A lot of overreaction from a lot of people who seem to expect the moon on a stick (for free) and who I could easily suspect have never had to run a resource-intensive system before.
Doing search well is computationally hard. More than that, it's next to impossible to cache, because there can be so many variants. All in all, it's the ideal sort of system to attack if you want to DDoS somebody.
Making people sign in is a tiny barrier to entry that stops anons hammering their system with uncacheable search requests.
Yes, there's probably also a business-marketing argument that goes into this but if that's the sort of trade-off we need to keep Github free for open source, indefinitely, I'm all for it.
And again, this is still free so stop being babies. If you don't like it you're very free to clone the repos out and search them yourselves. Want your anons to be able to search your stuff? Host it yourself and manage your own DDoSes.
Then one should run a couple of Elasticsearch clusters: one for guest users to hammer and DDoS, and one for authenticated users.
Not only does this make the search cheaper and easier for the guests (just an index of public repos, so no permissions involved), it makes it easier to continue offering search to authenticated users whilst being able to mitigate L7 DDoS on the authenticated search (just block the users involved in the attack).
Besides, there is a huge SEO benefit to having a searchable, discoverable interface, and a huge attention-retention benefit to keeping users on your site to search once they have arrived. On top of that, Github understands its data structures better than a general search engine does, so it's easier for them to tune complex searches for a codebase, a blog, or issues.
There's really no benefit to hiding search, and having written many community generated content sites the only reason I would hide search is as a stepping-stone to making the content part of a walled garden.
It's fairly inevitable Github will want to do this; I imagine they're looking at users who are not signed in, or who are on free accounts, as yet to be monetised. They would want more activity data to make the case for advertising, recruitment monetisation, plan up-sell, etc.
That's where this makes sense, as a way to gather more user habit data.
I feel like in our present age SEO is vastly overstated.
Google at one point gave you a lot of indicators of which pages people were landing on and why. Now that information isn't shared with site operators, so it's impossible to qualify statements like "an internal searchable interface leads to more Google search traffic".
Maybe it was true previously, maybe it was true even last month, but is it still true with the latest Google update? No one can measure.
Almost everything you mentioned does not make any sense.
1. Search is merely hidden in the GUI for users who are not logged in; those users can still perform searches, as other people have mentioned in the comments.
2. Github Search is definitely cached
3. It has nothing to do with DDoS. Does Google require you to sign in in order to prevent DDoS? Moreover, unless Github is totally walled off, there are tons of ways to DDoS the service. I suggest you learn how DDoS actually works.
2. No disagreement that searching for the same thing can be cached. But that's not the problem. Search for "banana" "banana 2" "banana 3", etc. It's very simple for a botnet to hit a search function like this with unlimited variance, making it impossible to cache against.
3. You think that Github and Google receive the same revenue benefits from providing a free search?
I am a user of GitHub. In part to protect against potential credential leakage through XSS and similar, and in part because I rarely actually need to log in to GitHub for the things that I do with it, I'm rarely logged in to my GitHub account. This change has a significant impact on my "workflow".