* Free/open source components of Phabricator feeling half-finished because Phacility has redirected development resources to paid features
* Issues with Harbormaster (Phabricator's half-baked CI system)
* Developers prefer using regular Git to Phabricator's nonstandard workflow, which requires installing the Arcanist command-line tool (written in PHP) to manage patches.
I am actually a big fan of Arcanist. It rather simplifies the Git workflow management. You simply develop new code in a separate branch from master, and use `arc diff` from said branch to start a Pull Request (called a "diff" in the Arc world). No need to use `git push` and manage upstream branches, simply rely on the diff abstraction. It has many great features including `arc diff --preview` which gives you a link to a preview of your PR so you can see what it looks like in the browser (much better than `git diff`).
Of course, there is a bit of a learning curve and most engineers already know the standard git workflow. That is a disadvantage of course, but only in the short term. If it's a short project, then I would recommend against it. However, if you're in it for the long run, arc is great.
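For the unfamiliar, the workflow described above looks roughly like this (a sketch; arc's exact behavior depends on your Phabricator install, and the branch names are made up):

```shell
# develop on a branch off master
git checkout -b my-feature master
git commit -am "Implement the thing"

# create (or update) a Differential "diff" from this branch;
# no git push or upstream-branch bookkeeping needed
arc diff

# open a browser preview of the diff without submitting it for review
arc diff --preview

# once the review is accepted, land the change onto master
arc land
```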
I feel like the other points (CI, lack of support, etc.) do a much better job of showing Phabricator's weak points versus GitHub or GitLab.
Arcanist is actually a reasonably nice tool. It is certainly far superior to opening pull requests in a web interface.
The only real downside I've found with using phabricator at work is the lack of a decent built in CI system. You can integrate it with external ones fairly well but that seems like something you shouldn't have to do.
I'm partial to Gerrit if you're looking for a git native solution with branch, draft, patch, review, iterate workflows with proper merge/rebase handling.
It is a bit different from "GitHub flow" but mostly a big improvement in an organisation context (vs FOSS.)
MediaWiki has the best docs, but they're a bit dense.
You don't like my casual use of the word cancer? Fine. Personally, I feel "cancer" was a relatively good approximation for the cancer-like changes you need to make to established industry workflows once you begin to use Phabricator.
Would you have preferred I used the word infection? Virus? Aids?
Which one of those metaphors crosses the "sense writing" line for you?
Describing software as "cancer" is something I'd expect to see on 4chan or reddit, not HN. It's hyperbole and your explanation ("relatively good approximation for the cancerous like changes [to] workflows") shows as much.
> Free/open source components of Phabricator feeling half-finished because Phacility has redirected development resources to paid features
There are no closed-source Phabricator components. The email is complaining about development focus being on the wrong parts (the ones paying customers use), rather than the ones the Haskell project uses.
GitLab seems to fit all the open source project's ideals and philosophy. They don't want the burden of managing a dozen different pieces of software or services for their development; they want to spend time coding, not messing with ops. It has to be open source, with no lock-in. And GitLab could even host it for them, without the hassle.
Amazon or Google seems likely to acquire GitLab. Strategically speaking, both companies would be a great fit for GitLab; the only problem is that both absolutely loathe Ruby and Rails.
Or there is another slim possibility: Microsoft decides to open-source GitHub's core.
Generally speaking when software companies get acquired, the product dies.
If a company is being bought by a bigger company, it only means that the profits aren’t good enough for that company to go public, while at the same time a return is owed to its investors. It means that the company played loose with other people’s money and selling the company is the way out.
In other words companies that are sold are companies that are struggling.
And if that’s the case, the product itself will struggle to generate profits post acquisition as well. And I think there aren’t many examples that are an exception to this rule.
We certainly see YouTube or Instagram flourishing as a result of scale. But those are the exception.
GitHub was sold because GitHub wasn’t profitable or hype-driven enough to go public. It remains to be seen if GitHub survives.
But back to a potential acquisition of GitLab, if it ever happens, I believe GitLab will die, with a community fork being the only way out.
I’m writing this because HN has a fetish for acquisitions, which is interesting since to me an acquisition is often a death announcement.
Let's say that's all true. Why would anyone ever acquire a company if, by virtue of being on sale, it's certainly flailing?
I think it's often the case that the product dies, but I honestly believe that's more often than not down to the new parent company having no idea how to run and manage the product and team.
If Microsoft was dumb enough to pay 7+ bn for Github, there's probably a good chance they'd do something stupid again and bust out another fat stack of billions for Gitlab.
Thanks for sharing this! Yes, instead of a version control system that lets people try out different integrations, GitLab provides an opinionated (yet flexible with key integrations and the option to opt out of anything you don't want) way to run the entire software development and deployment lifecycle.
You can learn more about the different stages of the DevOps lifecycle on our Product page [1].
Regarding acquisition, our goal is to go public on Wednesday November 18, 2020. You can read more about our goals and in which order we want to achieve them on our Strategy page [1].
>the only problem is both company absolutely loathe Ruby and Rails.
Google may, but I'm not sure this is true for Amazon. In the 2000s Amazon converted to a platform/services/API company. One of their directives was that teams could use any programming language they wanted to implement services, as long as the only interface to that service was via published API. As such, implementation language doesn't matter as much there, and Amazon's culture seems less picky over it. Has this changed since?
TL;DR: Why GitLab instead of GitHub? From their discussion-group thread:
- Good multi-platform hosted CI, or at least workable integration with other (existing) CI solutions
- Hosted review tool that we don't have to maintain ourselves (though a little bit less good than Phabricator, allegedly)
- Familiar GitHub-like workflow with no requirement to install extra software locally
- Reuse of GitHub credentials
- Realistic path forward for migrating tickets from Trac
CI is becoming an Achilles' heel for GitHub. Personally, I would also add the lack of fine-grained permissions for contributors, the inflexible diff view, and the very basic code review tool.
This is using platforms not supported by Travis, plus custom Docker images. And it integrates with other GitLab pipelines, deploys to Sonatype Nexus, and stores artifacts for later use.
Even if you're not using the advanced features, it's still vastly better than Travis. The Travis stock images are so out of date it's not funny.
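For reference, a minimal `.gitlab-ci.yml` along those lines; the image name and build commands here are made up for illustration:

```yaml
# hypothetical pipeline: custom Docker image + stored artifacts
image: registry.example.com/team/build-image:latest

stages:
  - build
  - test

build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/        # kept for later jobs and available for download

test:
  stage: test
  script:
    - make test
```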
What's the point of moving from one proprietary service to another? One day GitLab will get acquired by another tech giant, or it will stay the same; it doesn't matter too much, because it's already owned by a private company and venture capitalists. They do what they want, and they definitely won't turn down a million-dollar acquisition offer.
GitLab is an open core company [1], meaning that we ship GitLab CE which is open source and GitLab EE that is closed source. We try to be a good steward of the open source project. GitLab EE is proprietary, closed source code but we try to work in a way similar to GitLab CE: the issue tracker is publicly viewable and the EE license allows modifications.
In conclusion (TLDR), GitLab has an open core business model and ships both open and closed source software.
A small note: what's the difference between GitLab, GitHub, etc.? Aren't they companies with a substantially identical target, making money? Don't they offer storage on their own servers (or, even worse, on others' servers in a chain)?
So why the hell, instead of moving from one company to another, don't FOSS devs come back to classic MLs (mirrored offline in a personal maildir) and use hosting, multiple hosts if possible, only as a means to offer a shared repo? Why not even serve the project via the repo itself, like Fossil does?
There is a world outside the web, on our desktops.
Just as a suggestion: try to disconnect your desktop and see what you can or can't do. If you feel "empty", you are in danger.
Classic MLs suck, and GitLab is open source that can be self-hosted. The world has evolved and now offers better environments than 20 years ago. No reason to stay behind.
Well, for me, "evolving" from a standard thing to a monster web app that requires far more resources and offers far less flexibility has a name: involution...
Of course I know that many are limited to webmails, or to obsolete, limiting and limited MUAs from the '90s, but that's again involution.
If you have notmuch/emacs, or (neo)vim + {mutt,pine}, you have a far more advanced computing environment than any web app can even try to match.
"stay behind" today's often means stay in a modern and colorful stone age instead of powerful technology.
Could you elaborate on why that is? The git and linux kernel projects have been using this workflow for decades and it works very well for them.
> The world evolved and offers now better environments than 20 years ago.
I think that's largely subjective. I have experience reviewing code via email and via a Github Enterprise instance.
The GitHub-based review requires far more scrolling and makes it difficult to find the comments that were made and how they were resolved. Plus, it makes the implicit assumption that a comment was resolved if the line commented on was changed in any way, regardless of whether the change is related to the comment. When that happens, it collapses the comment thread and requires me to expand each one to find out what I commented on, then determine whether it was addressed by going to another tab, reading through the entire diff, finding the approximate location of the line I commented on, and checking whether it was changed as I expected.
With the email based workflow, my email client has a built in index of all the comments made on a patch set threaded by commit that I can click on and I can quickly find the comments I made and any responses to them (and whether or not I've already read them). I can easily get different versions of the patch set and run a diff between them to see what changed (either on the entire diff or on each individual commit).
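Incidentally, newer git (2.19+) has a built-in command for exactly that version-to-version comparison. A self-contained sketch in a throwaway repo (the branch names, identities and file contents are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email reviewer@example.com
git config user.name "Reviewer"
git commit -q --allow-empty -m "base"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git config

# v1 of a one-commit patch series
git checkout -qb topic-v1
echo "one" > feature.txt; git add feature.txt; git commit -qm "add feature"

# v2: the same series, reworked after review feedback
git checkout -q "$base"
git checkout -qb topic-v2
echo "two" > feature.txt; git add feature.txt; git commit -qm "add feature"

# compare the two versions of the series, commit by commit
git range-diff "$base" topic-v1 topic-v2
```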
> The world evolved and offers now better environments than 20 years ago. No reason to stay behind.
I believe that if something should be considered an improvement, then it should have all the capabilities of the previous version and introduce new features that were not possible to accomplish in the previous version. It shouldn't make things that were easy to do in the previous version more difficult or impossible to do.
Not the GP, but I contribute to the Git project. I think some of your critique of GitHub Enterprise is correct (although don't you need to also "resolve discussion" if the lines change, or is that just GitLab?); I'm just commenting on the "why not ML" aspect of this.
> With the email based workflow, my email client has a built in index of all the comments made[...]
The reason for why E-Mail based workflows like those used by Linux and Git didn't win over "just host on Git(Hub|Lab)" is because this requires a lot of setup & technical expertise from your contributors that just using a web UI gives you out of the box.
Right off the bat you need to have been subscribed to the list for a significant amount of time to do what you're describing, or if you're lucky (e.g. in the case of the Git project) use some E-Mail archive[1]. You're already looking at maybe a week of setup time for someone who's never used E-Mail in this way (which applies for most devs these days) just to get to the point you'd get in 10 seconds with a search box on GitHub or GitLab.
Does it pay off in a lot of ways? Sure, but at the cost of losing a lot of potential contributors. It's not a big deal for projects like linux.git or git.git, whose contributors are by definition at the tail end of the competency curve when it comes to being comfortable with setting up this sort of thing, but good luck running e.g. some popular WordPress plugin this way.
> The reason for why E-Mail based workflows like those used by Linux and Git didn't win over "just host on Git(Hub|Lab)" is because this requires a lot of setup [...]
That's a reasonable, recurring argument, and my usual answer is: quality vs. quantity. In more detail: we are not talking about hello-world software for first-year high-school students but about large and complex software; if casual contributors do not have a proper email setup, or the knowledge to create one, it probably means they do not have enough IT knowledge in general to be valuable contributors. Email is the base of the FOSS communication infrastructure, together with NNTP news; without it there is substantially no free software and no free "ecosystems", so...
However, there is a point in criticizing the current "sorry state" of email development. I know the bigs of IT do not like email because it is an open standard that guarantees no lock-in, but we as free-software users/devs should help newcomers gain knowledge, and telling today's students "go get mbsync/notmuch/emacs/custom scripts for refile/delete, afew/IMAPFilter for auto-refile, ..." does not help; offering pre-cooked solutions does. Something is happening, from fish to pre-cooked zsh configs to {doom,prelude,spacemacs,...} emacs configs, etc., but it's still not enough.
> although don't you need to also "resolve discussion" if the lines change, or is that just GitLab?
I believe Github recently deployed a feature to do just that, but that hasn't made it over to the enterprise version. In either case, it's something that's been a source of problems for quite a few years.
> Right off the bat you need to have been subscribed to the list for a significant amount of time to do what you're describing
That's one inherent limitation with email lists. Fortunately, both public-inbox and gmane provide a NNTP gateway to allow for access to list archives with an interface that's, for all practical purposes, identical to email.
As an aside, I've always wondered why open source projects like git and linux never adopted NNTP (not necessarily on usenet) as a primary form of communication (and email-CC the contributors whose code you're changing) instead of using an email list.
> this requires a lot of setup & technical expertise from your contributors [...] You're already looking at maybe a week of setup time
I simply don't see how it requires significant expertise or a week to set up. Using a client like Thunderbird for reading the list (or NNTP gateway) doesn't take that much to set up (other than entering the server information and credentials to send replies). Setting up one's git configuration for using git-send-email (to send patches) is only a one time thing as well and isn't any more complex.
Years ago, ISPs commonly provided help pages that gave the server information to set up your client to access one's email account hosted on the ISP and how to access the NNTP server to get on usenet. You didn't need significant technical expertise to follow the instructions on those pages and many non-technical people successfully configured their clients to receive/send their email and participate on usenet newsgroups.
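Concretely, the one-time git-send-email setup plus sending a series is only a handful of commands (the server name and list address here are placeholders):

```shell
# one-time: point git-send-email at your SMTP server
git config --global sendemail.smtpServer smtp.example.com
git config --global sendemail.smtpUser you@example.com
git config --global sendemail.smtpEncryption tls

# then send the last three commits as a patch series to the list
git send-email --to=list@example.org -3
```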
> I simply don't see how it requires significant expertise or a week to set up[...]
I don't just mean the setup required to send a one-off patch. Obviously setting up some random E-Mail client with IMAP is easy. But the sort of setup required to get anything like feature parity with common operations in GitHub's or GitLab's interface.
E.g. when you open a Pull/Merge request on those sites, it's easy to download & apply the patch series locally to test it. For E-Mail client integration you need something that'll "git am" a range of messages. Likewise with "git push" to your topic branch updating the PR/MR. Sure you can do this all manually with git-format-patch and git-send-email, but having something that works smoothly takes a lot of work.
And nowadays the network effects of that setup don't make sense for most contributors, because so few projects still use E-Mail like this. There's a large long tail of contributors to these projects, e.g. the median for patches in git.git per contributor is 2.
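To be concrete, the manual version against a public-inbox archive looks something like this (the archive URL and message-id are made up; `t.mbox.gz` is public-inbox's whole-thread mbox endpoint):

```shell
# grab the whole thread as an mbox
curl -fsSL 'https://public-inbox.example.org/list/<message-id>/t.mbox.gz' |
  gzip -d > series.mbox

# apply the series onto a local review branch
git checkout -b review-topic origin/master
git am series.mbox
```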
> when you open a Pull/Merge request on those sites. It's easy to apply & download the patch series locally to test it. For E-Mail client integration you need something that'll "git am" a range of messages.
That's a good point. git-am is the inverse of git-format-patch, but unfortunately no program has been written as the corresponding inverse of git-send-email. If a program like that existed, then one could get feature parity.
At least with Thunderbird, one can highlight the messages one wants to save and save them to multiple files in a folder.
> Likewise with "git push" to your topic branch updating the PR/MR. Sure you can do this all manually with git-format-patch and git-send-email, but having something that works smoothly takes a lot of work.
Based on what I've read, people typically rebase their patch series and push up a new set of emails as a reply to the original patch series. In Github/Gitlab, people typically make an incremental commit and push it up.
But if one wants to maintain a clean commit history for a feature branch before it's merged, one has to rebase and fold those incremental commits into their corresponding base commits. That's trivial if the feature is implemented in a single commit, but a bit more complex if it's multiple commits. The former case has been addressed by GitHub with their squash-before-merge feature, I believe. The latter case requires manual rebasing (which arguably is more work than just doing it locally before pushing up the next version of the patch set).
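For the multiple-commit case, fixup commits plus autosquash automate most of that manual rebasing. A self-contained sketch in a throwaway repo (the identities, messages and files are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

# a two-commit feature branch
echo "one" > a.txt; git add a.txt; git commit -qm "feat: part 1"
echo "two" > b.txt; git add b.txt; git commit -qm "feat: part 2"

# review feedback on "feat: part 1": record the change as a fixup commit
echo "fixed" > a.txt
git commit -qa --fixup=HEAD~1

# fold the fixup back into the commit it amends, non-interactively
GIT_SEQUENCE_EDITOR=true git rebase --quiet -i --autosquash --root
git log --oneline   # two clean commits remain
```

With an email workflow you'd then re-send the rebased series as v2; on GitHub/GitLab you'd force-push the branch.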
Ahem, notmuch/mu4e offer easy magit integration, so you can view, test, and apply patches straight from a mail message... That's the power of a text-centric UI vs. a graphics-centric UI.
I'll try it out. vim is my main editor, so it will be a bit of a learning curve to get used to emacs :)
As an aside, I have tried out gnus for reading the git mailing list via gmane, but I found it very slow in threading messages (when compared to Thunderbird). I don't know if notmuch/mu4e has the same issue though.
The reason people put their code online is because it is a convenient way to host it, and often platforms like these have some ancillary tools that make the code easy to work with.
There's nothing in the classic mailing list workflow that prevents hosting the code online for others to view. The Linux[1] and git[2] projects mirror their code on Github, for example.
GHC Discussion from November about moving to GitLab/moving away from Phabricator: https://mail.haskell.org/pipermail/ghc-devs/2018-November/01...