Show HN: Xmake, a modern C/C++ build utility (github.com/xmake-io)
186 points by waruqi on April 9, 2019 | 183 comments


A developer posts a well-rounded tool with documentation that was built with a lot of care. The comment thread is mostly about other tools, or people dismissing the work done because another available option is more popular or common.

When did "Show HN" threads become Shark Tank? Can people at least check out and post about the tool itself instead of discussing CMake vs. Ninja vs. Meson?


>When did "Show HN" threads become Shark Tank?

From the 2007 announcement of Dropbox:

https://news.ycombinator.com/item?id=8863

""" 1. For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software. """


Does anybody actually use Dropbox anymore? Feel like it’s a dead product.


""" As of September 30, 2018, we served over 500 million registered users but only 12.3 million paying users. The actual number of unique users is lower than we report as one person may register more than once for our platform. As a result, we have fewer unique registered users that we may be able to convert to paying users. A majority of our registered users may never convert to a paid subscription to our platform. """


Dead meaning what, stable? I still use Dropbox on a daily basis and Paper is now my go-to for note-taking.


I used to, until they blocked the public folder. The sync between my computers was nice, but for me the big feature was copy/pasting (or directly creating) an image or static HTML page(s) with images (or Flash files, at some point) in the public folder, right-clicking in Explorer/Finder (depending on the OS I used at the time), copying the URL, and pasting it on IRC/IM/Reddit/whatever when discussing stuff.

They tried to sell their 'get public URL' wrappers as the replacement for the functionality, but that only covered a tiny aspect (and OneDrive, Google Drive, etc. already offered that stuff) - sharing photos - and even that was slower due to them wrapping the files in some sort of viewer instead of giving the raw file data like the public folder did, while using some sort of hash ID to identify the file instead of the actual filename (this was another convenience lost, since with the public folder I could guess the URL to share without copy/pasting).

I never found anything as convenient as Dropbox's public folder, while the rest of their offering wasn't anything I cared much about, and even then, by that time, both Google's and Microsoft's offerings were better (and Microsoft's was available right out of the box in Windows).


I agree with the public folder issue. In my experience they still sync much better than OneDrive and Google Drive, though. For me the syncing is a lot faster and has fewer issues.

They are also the only one I know of that has a Linux application.

That being said, I don't pay for their premium service because it seems too expensive compared to OneDrive and Google Drive.


It's not dead, but the hype is long gone. Since my free-gigabytes promotion ended, I haven't felt the need to use it. Dropbox still periodically spams me with upgrade offers, which I just ignore.


You may accomplish the behavior shift you desire by posting about the tool yourself. Then people will start supporting, disagreeing, posting questions, etc. Bashing the behavior might not work well.

People tend to stick to familiar topics ... as shown by how the Meson comment below turned into Python 2 vs. Python 3 compatibility on a whim.


Yes, I agree very much.


I really want to use your tool in a future project. I know Lua well enough, and being encumbered at work with a legacy project that requires a crazy build pipeline using gyp and friends makes me eager to try some other stuff.

Thanks for trying to improve the ecosystem around native code building tools. I appreciate it a lot.


Thank you very much for your support.


If you post on Show HN you can expect feedback and discussion about related tools (e.g., in the recent threads about Ghidra, comparisons with IDA Pro and Radare were made). It is up to the readers to value and moderate the discussion(s).


Meson[0] has been gaining in popularity and has migration tools for cmake projects.

Large projects such as systemd and GNOME[1] have migrated or have been migrating for years.

[0] https://mesonbuild.com/

[1] https://wiki.gnome.org/Initiatives/GnomeGoals/MesonPorting


But the Meson installation has many dependencies; it is not very lightweight, since it needs Python and Ninja. Xmake has a built-in LuaJIT and no third-party dependencies, so it is more lightweight.


Python is preinstalled on any major Linux distro. Ninja is tiny, so those don't seem that heavyweight.


Python also has version compatibility issues, such as Python 2.x vs. 3.x. The user also needs to install the specified version of Python.


Any new project will very, very likely be using Python 3. It appears that applies to Meson. `python3` is the expected binary (`python` is still expected to point to Python 2 for the foreseeable future, with `python2` being the explicit binary; some distros have jumped the gun on `python`). Similarly, they have completely different accompanying executables (pip3, wheel3, pydoc3) and completely separate library paths. What sort of compatibility issues have you seen?

The requirements say Python 3.5 or newer. That was released in 2015. Outside of major versions, Python has generally been forward compatible (especially if you're testing). After more than a decade of use, the only grumbles with supporting old Python versions are not getting to use new features (or having to work around fixed bugs).

I say this as someone who still uses Python 2 every day.


The sooner Debian kills the python = python2 association and python3 becomes the default provider of 'python', the better. I think I read that it's finally scheduled for this year.


I don't blame the OS guys. Python has been around for 30 years and many distros built a lot of their utilities around Python 2. Getting rid of Python 2 sounds like a Herculean effort that could only start after Python 3 settled down and required libraries were ported over (or replacement libraries written using more modern conventions).

I'm glad we seem to be past the big inflection point and hope to put Python 2 behind us. But I can see never changing `python` from Python 2 (and just not having one when Python 2 isn't installed).


On the contrary, Arch Linux has ‘python’ as Python 3 and ‘python2’ as Python 2. Although most of the Linux users are on Ubuntu, you can’t ignore those other distributions.


Arch was what I had in mind as an OS that jumped the gun against the advice of the Python devs (at least that's a simplified history). I thought there was another distro, but I wasn't sure if they had rolled that back.

To the point of the comment thread, that would only be an issue if you're maintaining a Python 2 codebase and using modern Arch. If you're creating a Python 3 codebase you're likely calling `python3` and even calling `python` isn't a problem for you.


> Python also has version compatibility issues, such as: python2.x, 3.x.

If that really bothers you then just create a virtual environment and locally deploy whatever version you want to run. That issue ceased to be a concern years ago.


> just create a virtual environment

Yeah, super lightweight ...


I think GP means `python3 -mvenv my_venv` rather than running in a VM or any heavyweight virtual environment. At least that's how I interpret 'virtual environment' in a pythonic context, and they're certainly lightweight enough.
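
Roughly, that's just the following (a sketch, assuming a POSIX shell; the environment name is arbitrary):

    python3 -m venv my_venv      # create the environment (just a directory)
    . my_venv/bin/activate       # put its python/pip first on PATH
    pip install meson            # anything installed now stays inside my_venv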


What I took GGP to be talking about, though, was lightweight in terms of cognitive load. Like "can I just run the thing, please!", aka yak shaving [0].

[0] https://seths.blog/2005/03/dont_shave_that/


In other comments you talk about "lightweight" as cognitive load; I'm curious why Python is any different from other languages.

I haven't used virtualenv very much, but I've been doing similar things for longer than virtualenv has been around. You need a `python` binary on PATH and a PYTHONPATH pointing to where you want to get your libraries from. virtualenv packages this up into a directory in a more user friendly fashion.
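
In shell terms, the manual equivalent is something like this (paths invented for illustration):

    export PATH=$HOME/pytools/bin:$PATH                 # choose which python/pip you get
    export PYTHONPATH=$HOME/pytools/lib/site-packages   # choose where imports come from
    python some_script.py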

Most docs I've seen on Python 2/3 advise choosing one or the other, ideally 3, which implies mixing should either be done very carefully or with a plan to transition to 3.

I get that some accidents of history have caused headaches. The deep adoption by distros has caused problems when people just want to use the Python/libs installed with the OS (as well as when moving to Python 3), authoring packages hasn't been great historically, etc.


That you needed four paragraphs to get your point across kind of emphasises my point.

I don't mean to be rude, but the key to getting people's attention is to get your point across quickly. As soon as somebody starts going "all you have to do is install a frobozz and murtle a plinth", I really just lose interest.

EDIT - doubting myself, I went back up the thread, and this topic kicks off with "meson installation has many dependencies, it is not very lightweight" ("lightweight" operationally defined here for this discussion), to which somebody replies "Python is preinstalled and Ninja is tiny" or words to that effect. "It's not the correct version of Python" is the general gist of what comes next, to which our GGP here says "all you have to do" is set up a Python virtual environment, which is around the point where I start to feel frustrated and the need to go do something interesting.

So to be clear, the discussion is not about Python being lightweight, but Meson, and trying to wave complexity away with yet another layer of complexity (even if it's something as well known as Python virtualenvs) is not reducing complexity but hiding it, IMO.


I honestly wasn't trying to be combative. I tried to make my comment just long enough to give context (showing my understanding of the solution and the assumptions I'm making). I'm not trying to interject my opinion, but see where my assumptions might be wrong.

I agree long wandering threads are difficult to follow.

I have little interest in new C++ build systems, but I will jump in and contribute when I see a familiar problem I want to learn more about, like managing sizable Python codebases. Python 2/3 isn't relevant at all to Meson, and I don't see the docs mentioning any need for version or package management. I've only ever heard of or seen virtualenv needed in project management, never in distributed software. Is your beef with Python or with any software using an interpreted language? Outside of the 2to3 transition and being deeply integrated in many distros, it has the same fundamental problems other interpreted languages have, and they are handled in the same way.


It is if you run it inside a Docker container... no mess, no fuss...


I can't tell if you're joking or not ;)


Depends if someone's already written the Dockerfile for you ;)

But to be fair, I usually just clone a Virtualbox machine because I'm used to the workflow. :)


Actually it is, but nevertheless that's only required by those who for some reason still believe today that Python 2 vs. Python 3 is an issue.


I have python 2 vs 3 issues if not every day, then at least 3 times a week, because of being stuck with a version of third party software that uses Python 2 (ArcGIS 10 series) for part of my code base. Of course Python apologists are now going to say 'oh you're just one guy in a niche situation', and 'yeah that's what you get for not upgrading to the latest version of all software', or any of the dozens of other excuses and blame I get anytime I have to defend why I need to use Python 2. Fact is still that I'm bound by external constraints and that Python versioning is a real problem in my day to day work, and yes that is in 2019. Look, Python isn't much worse or better than most other languages in its niche, but please do away with the 'everything about Python is easy and great' spiel.


In fairness to the Python community by and large, they are very supportive of legacy users and niche configurations. Python 2.x now seems something like 15 years past its sell-by date and is still actively supported. There's a lot of patience there ... people will make the leap when they're ready.

I think the pressure to always be on the latest and greatest is what did for the Ruby and Perl communities, and this is why Python, and to a certain extent JavaScript, are still going strong!


Not disagreeing with you, just clarifying: I'm not even complaining so much about the Python 'development' community (i.e., those writing the interpreter and most of the people writing libraries); what irks me is the 'evangelists', Python users who feel the need to proselytize about things that are obviously not perfect, but claim they are and straight up dismiss any objections or real-world concerns. I'm talking about the people who with a straight face call themselves 'Pythonistas'. Jebus, if so much of your self-image and self-worth is tied up in the programming language you like to use most, you need professional help. Of course all languages have people like this, and most 'communities' that are not software related have similar people too; it's just that posts like this seem to bring out this sort of hangers-on. It probably also has something to do with me turning into Statler or Waldorf over time. Meh.


There are a few different meanings of the word lightweight, and I get a sense you're not using the one that I or GP are using.


I am also annoyed by the Python 2 vs. Python 3 situation, not because of the current incompatibility, however, but because of the potential Python 3 vs. Python 4 incompatibility that the Python developers have shown they would not mind introducing in the future.

If they promise to never break backward compatibility again I might change my stance, though. I mostly try to avoid Python for anything beyond a simple calculator or quick-and-dirty scripts, because I do not want to write any code that may break in 2-3 years without me doing anything wrong.


The switchover to Python 3 has taken 10 years (as originally predicted).

Guido said there would never be a Python 4, though he has handed over the reins, so who knows.

As a Python dev, I wouldn't mind a Python 4 if it fixed some of the older / crustier corners of the API.


    export PIP_REQUIRE_VIRTUALENV=true
It's the only sane way to use Python.


Pretty much everything has moved to Python 3 now.


I can't think of a context in which I've used Meson where it ever felt not lightweight. Python is already installed in every system and ninja is quite small.


Python isn't installed on Windows, and the preinstalled Python version on macOS is stuck at 2.7.10.

Ninja isn't installed on macOS or Windows, and both systems require you to first install a package manager before installing Ninja, or to compile Ninja from scratch.


Ninja on Windows just requires you to download ninja.exe and put it in your PATH.


python and ninja are also bundled in the installer for meson.


AFAIK Meson requires a python installation, which is a non-trivial dependency and on some platforms requires additional manual setup steps.

Having everything in a single standalone executable is vastly preferable IMHO.


Depending on Python is actually much better than depending on other runtimes, such as Java. Firstly, Python is everywhere now. And secondly, you can just 'pip install --user' and use it even without root privileges (which is a great thing if you're in a restricted corporate environment).


It's pretty easy to install Python on Mac (use Brew) or Windows (use WinPython), and most Linux distros include it by default... running an installer is hardly an onerous task.


The Cmake migration tools are not very good.

Meson has no glob support.

Meson does not support any form of remote caching.


Meson doesn't seem to be a significant improvement over CMake beyond syntactic sugar. It uses the exact same, unreliable models as CMake, just with a slightly nicer-seeming syntax and about 242 more dependencies.


I really like Meson, but I've found that the corner cases where the simple syntax doesn't work get really hairy really fast, e.g. producing VST plugins, or linking against something that doesn't have deep integration with pkgtool or such.


Meson looks nice, but it still lacks a way to tell it where your dependencies are installed (like CMake's CMAKE_PREFIX_PATH). You can try to get by by setting the pkg-config path, but that doesn't help for dependencies that don't support pkg-config.
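
For reference, the pkg-config workaround looks roughly like this (the prefix is just an example, and the caveat above still applies):

    # point pkg-config (and therefore Meson's dependency() lookups) at an extra prefix;
    # "build" is just the build directory name
    PKG_CONFIG_PATH=/opt/mydeps/lib/pkgconfig meson build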


You can try xmake's dependency package management.

    add_requires("libuv master", "ffmpeg", "zlib 1.20.*")
    add_requires("tbox >1.6.1", {optional = true, debug = true})
    target("test")
        set_kind("shared")
        add_files("src/*.c")
        add_packages("libuv", "ffmpeg", "tbox", "zlib")


> a way to tell it where your dependencies are installed (like CMake's CMAKE_PREFIX_PATH).

That's not how dependencies are discovered in cmake. Dependencies are added with calls to find_package, and if you have to include dependencies that don't install their cmake or even pkg-config module then you add your own Find<dependency>.cmake file to the project to search for it, set targets, and perform sanity checks.


That's not entirely true; CMAKE_PREFIX_PATH is used in find_package and find_library calls.
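
On the command line that is roughly (the prefix here is made up):

    mkdir -p build && cd build
    # find_package()/find_library() will also search under this prefix
    cmake -DCMAKE_PREFIX_PATH=/opt/mydeps ..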


Why migrate from CMake to meson?


CMake feels a lot like C++ circa 1989: lots of things you can do, but there are problems:

* No standardization or opinionated design, so you can't share your work easily.

* No sane defaults, so your build system is always fragile, difficult to maintain, and done wrong.

* No best practices, so people keep making the same mistakes over and over.

* Misguided attempt to remain compatible with the steaming pile of legacy they've accumulated over the years.

* Bad documentation, so there's no way to learn how to do things better.

* Steep learning curve with limited payoff, so most people don't bother.

Meson does some of these things better. It's still not pretty, but it's nicer to use than CMake.


The first and third points are not true anymore; the new canonical CMake way is targets with attached dependencies, header search paths, compiler flags, and possibly other things. Bad documentation - well, yeah: it's specifically missing first-party best-practices and how-to documentation, and the third-party documentation, as well as the really old first-party documentation in the CMake wiki, often recommends bad old practices.

You forgot to mention that the language is awful (but that usually doesn't get in the way IME).


Xmake will be better, you can try it.


Nobody who has used CMake would ask that so I assume you haven't!

The answer is that CMake is mad and full of gotchas. Think of it like the PHP of build systems. Here is a classic example:

https://cmake.org/cmake/help/latest/command/if.html#variable...


This is a reasonable programming language design choice.

The actual CMake gotchas I dislike enough to avoid it as much as possible include defaulting to cached information even after I make changes, and preferring "smart", opaque, and even hard-to-track-down scripts over explicit user input. It seems optimized for cleverness and conciseness, at the expense of reliability and required user effort.


This is actually a useful comment and points out something I didn't know (about how the CMake if statement resolves variables).


In my experience the primary reason people migrate is that it is significantly simpler for ordinary developers to maintain and configure builds in Meson. They both target ninja but the learning curve for CMake is definitely steeper.


Cmake has tooling and is supported by IDEs. Does Meson have anything comparable?


A better question is does it need to have tooling or IDE support to be worked with?

It's good to have tooling and IDEs for CMake because CMake is complicated and hand-editing the files is very tedious. But if Meson eliminates the tedium of CMake by providing you with different abstractions then you don't actually need the IDEs or tooling.


CMake can generate IDE workspace files, which makes it possible to use Visual Studio almost natively - the "almost" being that the IDE is "read only" with respect to project settings and files - but on a stable project, that feels very close to native.

I have not used Meson, but other build environments (e.g. Make) don't interact as well with IDEs as CMake does (with the exception of Premake, which is mostly dead).


This is not what IDE support means for the most part. The big question is does the IDE understand how the files are compiled well enough for its autocompletion and jump-to-definition features to work. A build system/IDE combo which does not support this is DOA to most users. Sadly, there is no standardisation of the interface between IDEs, build systems, and compilers (though the language server spec from Microsoft is making some headway in this regard), so each of these integrations needs to be rebuilt each time, making development of new build systems extra painful.


> A better question is does it need to have tooling or IDE support to be worked with?

No, that's the wrong question, and one whose only purpose is to deflect attention from its shortcomings.

All build tools need tooling, because when they are adequately integrated into development workflows they are transparent and easy to use. CMake meets that requirement, and until other alternative build systems do, they will always be far more complicated.


In my mind CMake and Meson are tools for engineers to use in solving problems. If one tool needs some support tooling in order to be usable, then I'm intrinsically less interested in using it simply because there's some extra stuff I have to bolt on before it becomes useful. So I don't see why you think it's "the wrong question" here.

Another poster has explained that the IDE support is about IDEs being able to parse CMake files, and I can say that back in the day, before CMake, IDEs would parse the C/C++ directly, using a compiler to output an abstract syntax tree that they would use. So, for example, Eclipse has this notion of "build configurations", which allows you to control how this parsing occurs, which files the parser considers valid, and what symbols are predefined. That is IDE support very much like what you're looking for from CMake. I worked with it for several years at my last job to provide support for other engineers working with a Make-based build system.


cmake has the best integration of any third-party build tool, but if you can lock yourself to one IDE, the ability to click "new file" and have the file created, added to version control, and added to the build system, all in one easy step, is powerful.


> A better question is does it need to have tooling or IDE support to be worked with?

C++ really benefits from it. ctrl+click on a symbol is much more sane than (re)teaching Argument Dependent Lookup rules to all of the engineers in your organization.


CMake is only complicated if you need to make use of its complicated features, which many build systems simply omit. Meson looks nice though.


Oh, the number of hours of my life I spent debugging CMake files of third party libraries.

You are very fortunate if you import libraries that just work. This is also true for "modern" CMake.


Bad code is an unfortunate fact of life. How does that go away? It could be easier debugging, some enforcement of clean code (how is that possible?), the tool being limited so you can't do complex things, or just that so far only good coders have been involved, not the masses of bad coders. My default assumption is the last, but I'm willing to be proven wrong.


Good defaults can take you very far.


What about Bazel? https://bazel.build/


Bazel is very slow and heavy and only Google uses it.


Bazel has been significantly faster than cmake/make on the codebases I've tried it on. It has a bunch of warts, but it has been a better framework to build on than anything else I've encountered.

I'm probably biased as I write and maintain Starlark for C++ codebases on a daily basis.


In my benchmarks Bazel was the fastest of the lot, including Meson with Ninja.


Did you publish your benchmarks anywhere?



What I miss about these tools is some "relatively" straightforward dependency detection and generation.

That is, I have a bunch of .cpp files which need to be compiled into individual executables in a folder bin/. I also have a folder inc/ which contains some headers (.h) and those headers possibly also have some associated TU (.cpp).

Now g++ can already generate a dependency graph of headers for an executable. It is then (with a standard Makefile and some supporting bash scripts) quite straightforward to mangle that graph into a list of translation units (namely those files whose name matches a depended-on header) which must be compiled and linked into the executable.

That is, I can simply create a new "executable file" .cpp file in bin/, include some headers therein and when I say make, the Makefile automagically figures out which (internal) dependencies it needs to link in when linking the executable.
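
The header-tracking half of that is the usual GCC + GNU Make idiom; a rough sketch (the "which TUs to link into which executable" mangling is the custom scripting part and is not shown):

    # have the compiler emit a .d dependency fragment next to each object file
    %.o: %.cpp
            g++ -MMD -MP -Iinc -c $< -o $@
    # pull those fragments back in on later runs (ignored if absent)
    -include $(wildcard *.d)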

Now that I have these "relatively straightforward" scripts and the corresponding Makefile, the incentive to move to another (nicer) build system which would require me to rebuild this infrastructure to fit into this other build system's view of the world is quite low – unless there is some way to do this directly?

Xmake as shown here (and also Meson linked in a sister comment) appear to still require manual selection of dependencies.


> Now g++ can already generate a dependency graph of headers for an executable.

Actually, it cannot; and this should be well known. In practice it emits less than half of the information that it learns from path lookup and that a build system really needs to know.

* https://news.ycombinator.com/item?id=15060146

* https://news.ycombinator.com/item?id=15044438


Xmake can only simplify the management of dependencies and improve usability and maintainability; it cannot fully realize the workflow you describe.


Your workflow is one way to build and link, but not the only way. I might want to build several of those TUs into a static library, link against it, and ship it alongside a few executables.

And when it comes to creating a library, it's difficult to infer which TUs should be pulled in or left out since you'd need to see at least representative samples of the use of that library to be able to infer that.


How would a tool know, from a header dependency, in which source file the implementation for the header lives? C and C++ don't require any relationship between a declaration file and an implementation file. The implementation could be in an entirely differently named source file, or spread over various files, mixed with implementation code from other headers, or included right in the header.


In the common two-step compilation model of C and C++, this information is not needed. When generating object files it is not relevant which .c/.cpp file corresponds to which header files, because that correspondence is not an input to that step. Linking has to happen when any object file changes.


Automatically generating the dependencies is trivial with gcc and GNU make, if you just take care to group files adequately into directories and subdirectories.

I.e. you just have to put all the source files from which you generate object files that will go into the same library in a set of directories which does not contain files that will not go there.

Similarly, all the source files for the object files required for an executable, except those that are in libraries, should be in a set of directories.

The sets of directories need not be disjoint; a given set just must not contain files that must be excluded when linking a certain target, as that would make the build process more complex.

Given these constraints, it is possible to write a universal GNU makefile usable for any project, which will generate all dependencies automatically.

For any executable or library you want to build, it is enough to write an extremely small makefile containing 4 lists (of defined symbols, of source files, of directories with header files, and of libraries) and the name of the generated file and its type (executable, shared library, static library).

At the end you need to include a makefile that is good for any project targeting a certain CPU + operating system combination.

The makefiles per CPU/OS must define a few things, e.g. the compiler used and other utilities, option flags for all of them, the locations of the tools and so on; then you include a single makefile shared by all architectures and operating systems.

I started using this method more than twenty years ago and I have never needed to manually write any dependency information.

Whenever I see the huge, intricate, and impossible-to-maintain makefiles that are too frequently encountered in many software projects, I wonder why one is willing to waste so much time on a non-essential part of the project.

From my point of view, building any large software project easily is a problem that was solved a long time ago by gcc & GNU make, but for reasons that I cannot understand, most people choose not to do it the right way.

Of course, having to use, in 2019, a programming language that does not implement modules by any better method than including header files is even more difficult to understand, but I still must use C/C++ in my work, as there is no alternative for most embedded computers.
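
As a sketch, one of those tiny per-target makefiles could be as small as this (all names here are invented for illustration; the real logic lives in the shared universal makefile):

    TARGET      := myapp
    TARGET_TYPE := executable          # or shared/static library
    DEFINES     := USE_FEATURE_X
    SRC_DIRS    := src/app src/common  # directories scanned for source files
    INC_DIRS    := inc
    LIBS        := m pthread
    include ../build/universal.mk      # the makefile shared by all projects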


Sorry, there are several typos in my message above. For most of them it is obvious which was the correct intended word.

However, one typo can lead to confusion because an entire word is missing. Of the 4 lists that must be written in the makefile, the most important list, as the other lists can be omitted, is the list of directories with source files (not a list of source files).

For simple projects the list will be reduced to a single source directory. Whenever you add, delete or rename source files, there is no need to edit the makefile of the project.

All changes can be taken automatically into account by GNU make, which can be instructed to scan the source directories for source files for all the programming languages that you use.


There's no way to achieve that with today's standard C++ as it requires metadata to access/infer package version numbers.

This will hopefully change with the introduction of C++ modules in C++20, but until then the best option available to C++ programmers is either manually managing third-party libraries or employing dependency management tools such as Conan.


This is about the internal dependencies of a project, and indeed outside the scope of the standard, which does not say that if a function is declared in file abc.h then it is defined in file abc.cpp, that this file is compiled into an object file abc.o, and that abc.o must then be linked when linking anything which includes abc.h.

However, just because it is (like most build system questions) outside the scope of the standard does not mean that it isn't possible to define some project-internal rules about what gets compiled and linked into what, or that the build system cannot apply those rules to take work away from users.

The few external dependencies my project has are installed semiautomatically before any compilation starts.


This is really great work, great documentation. It looks like CMake, but with a full featured scripting language.


thanks!


Does it have any distributed build or caching support? That is my minimum bar for a C++ build system. ccache and distcc/icecc are too limited; you want something integrated with your build system directly.


Distributed builds are being planned, but not yet implemented. You can see https://github.com/xmake-io/xmake/issues/274


For better or worse though, CMake has won! Many IDEs, including Visual Studio, can work directly with CMake files. In addition, even Google, which is famous for doing things its own way, has now added official, first-class CMake support to its open-source C++ library Abseil: https://abseil.io/blog/20190402-cmake-support

If you are writing an open source C++ library, even if you support some other C++ build system, chances are you will also have CMake support as well.

While I have no doubt xmake is easier to use than CMake (just having Lua over CMake's abomination of a language is a great improvement), the fact that so many libraries and tools already support CMake is going to make adoption an uphill battle.


CMake won against the incumbent, which was autotools. Still, it's far from being an enjoyable tool, and the experience is made even worse by its god-awful docs.


Personally, I vastly prefer autotools, both as a user and developer. When I got to the point I needed some kind of build system, I found autotools much easier to learn than cmake.

As a user, I find the experience with autotools to be much nicer as well. For whatever reason, the interface just seems more intuitive. I mean, ./configure --help will tell you basically all you need to know. An underappreciated bonus is that you don't have to install more stuff just to __build__ some program you might not even want. I've run into more than one project that required a specific version of cmake, which, as luck would have it, was not the version packaged with my distro. This leaves you either building another cmake first or finding a tool/library that isn't so persnickety.

Given the choice between trying a project that uses CMake or autotools, I'll choose the autotools-based project every time.


> An underappreciated bonus is that you don't have to install more stuff just to __build__ some program you might not even want.

Sorry, what? I remember hours in my younger years searching for which Debian package provided autowhateverflavoroftheday.sh so that I could build $random internet project.


> > An underappreciated bonus is that you don't have to install more stuff just to __build__ some program you might not even want.

> Sorry, what? I remember hours in my younger years searching for which Debian package provided autowhateverflavoroftheday.sh so that I could build $random internet project.

The whole point of Autotools is that distributed source packages can be built by themselves, without requiring any part of Autotools to be installed. They build even on obscure systems that don't have any working version of Autotools.

If you have to install autoanything to build a random project that uses Autotools, either you are doing something wrong, or the project is using Autotools wrong, or maybe the Debian package is using Autotools wrong.

That said, I know what you mean. I've had to seek out a number of different versions of Autotools just to get some things to build. But that is because a lot of projects and/or distro packaging blatantly uses Autotools differently than it was designed to be used. I don't think Autotools should be blamed for this.


The portability is a very nice feature of autotools, but that distinction between developer sources and distributed sources it was designed around isn't as clear cut or widespread as it used to be. If I start a new C/C++ project today, chances are I expect users to be building from the git repo or a snapshot of it, rather than a semi-cooked tarball.

(I don't know what build system I'd pick these days - probably just write the Makefiles by hand.)


> That said, I know what you mean. I've had to seek out a number of different versions of Autotools just to get some things to build. But that is because a lot of projects and/or distro packaging blatantly uses Autotools differently than it's was designed to be used. I don't think Autotools should be blamed for this.

Yes, it absolutely should be. If a tool is misused, it's generally because it's hard to use correctly. In contrast, if I see a repo with a CMakeLists.txt, I know that it's going to be a simple cmake . && make.


> If a tool is misused, it's generally because it's hard to use correctly.

Citation needed.

Tools get misused all the time. If I use a flat-bladed screwdriver as a pry bar/chisel/whatever, that doesn't mean the flat-bladed screwdriver is hard to use.


I find the docs to be fine...


Now we just need a better DSL that can generate Cmake files.


And CMake itself started as a simple DSL for generating Makefiles. We've gone full circle


CMake is still a DSL for generating makefiles.


Yeah, but what it's not is simple.

IMHO, dealing with dependencies and making a project build in one shot is hard in CMake.

Maybe I did things wrong, though.


> Yeah, but what it's not is simple.

That really depends on what you're trying to do. CMake's happy path for building an executable that depends only on libraries that already support CMake is very straightforward.


I quite like CMake, I find it to be the least bad of the bunch. With recent additions I'd say that the only problem is that many packages still need to pull a CMake module from somewhere to be found because they do not offer pkg-config files.


Here is an official dependency package repository for xmake. https://github.com/xmake-io/xmake-repo


One of the purposes of xmake is to solve the problem of C/C++ dependency packages.


So is Conan's[¹], and Conan is already supported by cmake.

[¹] https://conan.io/


Xmake also supports Conan, as well as vcpkg and Homebrew.


cmake is NOT a build utility. It is a dependency-tracking and configuration utility.


"CMake is an open-source, cross-platform family of tools designed to build, test and package software. CMake is used to control the software compilation process using simple platform and compiler independent configuration files, and generate native makefiles and workspaces that can be used in the compiler environment of your choice."

They would disagree with you.


Cmake is a makefile generator. The output of cmake is a series of makefiles. That's it. If you need to build a project and you don't have make, nmake, jmake or whatevermake on your system, then cmake does nothing to get your project built.


CMake can generate:

* Borland Makefiles

* MSYS Makefiles

* MinGW Makefiles

* NMake Makefiles

* Unix Makefiles

* Watcom WMake

* Ninja

* Visual Studio projects

* Green Hills MULTI

* Xcode projects

And more: https://cmake.org/cmake/help/latest/manual/cmake-generators....


> CMake can generate:

Although you're conflating project transcoders with makefiles, nevertheless that's the whole point of cmake: generate makefiles that are used by some third-party program to actually build the software.


But it still depends on third-party IDEs or build tools, and it will be limited by their features.

There are still many differences in behavior between different IDEs.


No, they won't.

"generate native makefiles" (I prefer Ninja myself) - this is what is used to build the software.


Leveraging compositional abstraction doesn't change what you use the thing for. I use cmake (well I don't often, but when I do) to build my projects. How it accomplishes that is of little concern.


> I use cmake (well I don't often, but when I do) to build my projects.

No, you don't.

Remove make (ninja, msbuild, ...) from your system and see how cmake builds your project.


>see how cmake builds your project

Yes exactly when you remove a piece of the build system the build system stops working.

When I use bazel I need python installed or it doesn't work. That doesn't mean my build system is python. It means my build system takes advantage of python. Same for cmake and make.


I'm sorry, but make is not part of the cmake build system. It predates cmake by decades. It is a true build system used by itself.


Yes. Make is a build system. Cmake is also a build system that leverages make. This means that from the perspective of a user of cmake, make is a component/dependency/part of cmake. make can be used independently, and is absolutely developed independently, of cmake.

But when you remove a dependency of a tool you can expect the tool to stop working. That doesn't mean that the tool doesn't do what it says it does.


Yes, xmake is more similar to SCons than to CMake.


Does it produce hermetic builds?


What are hermetic builds?


Hermetic Builds. ... Our builds are hermetic, meaning that they are insensitive to the libraries and other software installed on the build machine. Instead, builds depend on known versions of build tools, such as compilers, and dependencies, such as libraries.

Kind of like a container for building? I had to look it up myself.


Got it. Xmake does hermetic builds: it does not rely on any third-party tools, nor does it rely on make.

The exception is when remote dependencies are used, and those are optional.


That, and also insensitive to builds you might have done of other versions of the source code, etc. I.e. it's not affected by "files left behind" that would require you to do a clean build and lose incrementality.


Nix does exactly what you want.


How would it do that besides building in a container?


Look into bazel, pants, and similar hermetic build systems.

You pin all dependencies and manage flags and such via the build system.


I don't know if this is what the parent was intending (I'm curious to see other examples), but depending on your definition Nix (nixos.org) seems to fit that need.


Hmm, cflags and cxxflags are command-line options? I would expect them to be defined as part of the build file.


You can also define them in the build file (xmake.lua):

    target("test")
        set_kind("binary")
        add_files("src/*.c", "src/*.cpp")
        add_cflags("-fPIC", "-Dxxx")
        add_cxxflags("-fPIC", "-Dxxx")
 
The command line arguments just give you a quick and easy way to modify cflags.


Not even that. Compiler flags are compiler/platform/library options, naturally, as build options need to be propagated from dependencies.

The people behind cmake already learned that lesson with their modern cmake approach, but it seems the xmake people didn't do a proper review of the state of the art.


I wish CLion supported this as an alternative to CMake for project definitions!


You can try clion/idea plugin for xmake.

https://github.com/xmake-io/xmake-idea


I'm not a fan of Lua. The syntax of xmake.lua reads somewhat like CMake but is easier to understand. What I'd really like to see is a build system in Python (3!); utilizing objects and dictionaries for tasks like this should be a breeze.


Do you consider waf to meet those criteria? https://gitlab.com/ita1024/waf https://waf.io

Or scons? https://scons.org


Does waf still force you to use C++11/14?


I'll check it out, thanks.


Just out of curiosity, what parts of lua do you not care for?


Probably just bad experiences with it in the past and the lack of IDE support. For example, PyCharm makes writing Python a breeze.


Now we have N+1 incompatible build systems.


I think half of those N are cmake itself, considering the number of times I see "requires cmake x.x or above" messages.

I wouldn't find having many build systems such an issue if they'd just add a makefile (and maybe ./configure) to call that build system, giving devs a consistent interface and not having to look up how to do a simple build.


> if they'd just add a makefile (and maybe ./configure)

That's the whole point of cmake. Instead of running autotools' ./configure (which in fact is a whole dance involving autoconf, autoreconf, automake, and whatnot) just run cmake . to get yourself a fancy makefile.


> just run cmake . to get yourself a fancy makefile.

Well, most instructions I've seen start off with "mkdir build && cd build && cmake ../src", which is a bit more complicated than just "make" with a default build dir. I'm not sure why they're all like this; I would have thought supplying a default build directory is something cmake could handle.

On my last and only big project with cmake, we ended up with a makefile anyway to drive all the things cmake couldn't do, or that we couldn't do with cmake due to inexperience, or some combination of the two. So we ended up with make calling cmake calling make, all because it apparently made it easier for IDE users (it didn't, but that was definitely our fault).


I use Make to automate the initial CMake setup step - though by "Make" I probably mean something more like "glorified shell script", as the Makefile in question consists entirely of phony targets. It detects the OS with the usual bunch of ifs, and by default does one of 3 things:

- Windows - generates a VC++ project

- OS X - generates an Xcode project

- Unix (other) - generates Ninja files for Debug/RelWithDebInfo

(Typically there's also the option to generate Ninja files on OS X if you like - good for automated build and/or if you'd just rather use some other editor (which is not unreasonable).)

Once it finishes, on Windows you load "build/vs2017/whatever.sln" into VC++; on OS X you load "build/xcode/whatever.xcodeproj" into Xcode; on Unix (other) you change to "build/Debug" or "build/Release" and run "ninja". And off you go. After that, it all just kind of runs itself.

The Makefile consists of basically stuff like this:

    .PHONY:unix
    unix:
            rm -Rf build/Debug
            mkdir -p build/Debug
            cd build/Debug && cmake -G "Ninja" -DCMAKE_BUILD_TYPE=Debug ../..
            rm -Rf build/Release
            mkdir -p build/Release
            cd build/Release && cmake -G "Ninja" -DCMAKE_BUILD_TYPE=RelWithDebInfo ../..
plus some ifs to ensure the right target(s) are available depending on host OS.

(I've found it beneficial to regenerate everything entirely from scratch each time in the Makefile - ensures you're always working from a clean slate, with no cached variables sticking around from old runs. The odd package does have an unusually time-consuming configuration process, but I've always ended up managing to bypass these somehow - it's possible a future revision of my "process" will have to actually address this properly.)

This process does confuse people that don't read the instructions, as they type "make", some stuff happens, and then nothing. But I've found it to work well enough.


> I'm not sure why they're all like this, I would have though supplying a default build directory is something cmake could handle.

    cmake -H. -Bbuild && cmake --build build

This will do an out-of-source build with the default toolchain on Windows/macOS/Linux. It has the added benefit of being parallel, and it works with Visual Studio/Ninja/Xcode.


If cmake were any good, the makefiles it produces would be portable, and distributing the cmake program itself would be unnecessary.


Portable build files don't appear to be a design goal... if you want those, you probably need a different tool. They do exist.

As for distributing CMake, the intention is presumably that it's installed on the system of whoever's going to build the code, like make, or gcc, or whatever.


CMake is doing more than generating a build file; it's also a configuration tool (detecting the compiler, finding dependencies, etc.).


The dependencies do not change upon distribution, so they can be safely encoded in the makefile. As for "detecting" the compiler, either it is in the PATH or in the CC variable (or analogous), in which case the makefile can work; I honestly do not understand what cmake's task is here. The only use that I can see is creating projects for other build systems, e.g. for Windows. But if you only distribute your code to POSIX systems, and your project is small(ish), then cmake does not really add anything.


The dependencies do change depending on the user. Different versions of libraries will be in different locations, may require different build flags, etc., etc. Likewise, the build may support a range of compilers which require (sometimes completely) different compiler options for a successful build. This is the reason autotools existed in the first place (and it was only intended to even out the differences between 'POSIX' systems in the first place).

In general though, if you're just talking about small projects, I have found the easiest way to incorporate smaller libraries into a build system is to just ignore whatever build system they are using and rewrite the build in the larger build system (even if they are the same tool!). This is mostly because the current state of build systems is so terrible.


> The dependencies do change depending on the user. Different versions of libraries will be in different locations, may require different build flags, etc., etc. Likewise, the build may support a range of compilers which require (sometimes completely) different compiler options for a successful build.

What you say is true, but it can be interpreted as either a positive or a negative. I would say that code which depends on specific versions of a library or on specific compiler options is bad code, and propagating bad code instead of fixing it is not a good idea. Cmake makes it very easy and convenient to ship bad code, as you explain. Thus, it is a force of evil! It allows, even encourages, programmers to be sloppy without short-term visible consequences.


I'm talking about trying to make a shared library versus a static library vs an executable, or include different directories. I'm not talking about depending on '-O2' for correctness. Likewise with different versions of a library in different locations. And even in the case of more obscure flags, the cause is usually bad or incompatible compilers or libraries, not 'bad code' on the part of the project. A build system needs to be able to deal with a large variety of situations, because ultimately the responsibility for making the project build is on the project and the build system, not the user and their environment.


> Now we have N+1 incompatibile build systems.

To be fair, cmake was at one time the N+1th build system, but nowadays it's pretty much the only option in C/C++ land.


Until eventually you have N-k fit-for-purpose build systems. Having competing standards isn't always a bad thing, e.g. if they do slightly different jobs.


What does k mean?


Presumably it's the number which are not fit-for-purpose.


A number such that n-k is less than n+1


If the existing tools were really good enough to meet all user needs, there would be no N+1th build system.


I really love it! When did you start the project?


What rubs me the wrong way is that a lot of build systems have a fatal combination of unfamiliar syntax and complete lack of debuggability.

Conan and Meson seem so much better in that regard.


Conan is orthogonal to the choice of build system, as in fact Conan's main choice of build system is cmake.


great!!


This looks thoughtfully created (and thoroughly documented!). I haven't gone through the entire doc and am not particularly clear on this, but can you cross-build too? How would you run the MSVC linker on Linux?


Cross-building is supported, but you need to install MinGW on Linux if you want to build a Win32 program on Linux.

See https://xmake.io/#/home?id=cross-compilation and https://xmake.io/#/home?id=mingw
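
For MinGW the general shape is roughly the following; see the linked docs for the exact SDK/toolchain options.

    xmake f -p mingw    # configure the target platform
    xmake               # build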


Great!


Don’t ever install software by piping arbitrary remote scripts into bash.


Assuming the script is hosted on an HTTPS site and is a popular tool (like brew or rustup) from a trusted source, why would this be any more dangerous than downloading and installing from a package manager?

What would it take, beyond HTTPS and a well-known site, to make you comfortable doing this?


> Assuming the script is hosted on an HTTPS site and is a popular tool (like brew or rustup) from a trusted source, why would this be any more dangerous than downloading and installing from a package manager?

Most package managers use an offline signature mechanism applied at build time (rpm, dpkg, nix) and do not rely on HTTPS for anything other than protection against eavesdropping.

Relying purely on HTTPS is insecure. Nothing guarantees that your source/script/package did not get hacked or modified between the time you uploaded it and the time your user downloaded it.

This is not a hypothetical scenario; it has already happened in the past with sites like SourceForge.


> why would this be any more dangerous than downloading and installing from a package manager?

Normally the packager and developer are different people so there is a second set of eyes to at least give a cursory glance to the changes. It takes more than a single compromised account to publish malicious changes. There are of course a million exceptions and caveats to this and it's not perfect, but it's better than allowing developers to push code directly.


This is just one of the more convenient installation methods, like Homebrew; of course, you can also compile and install it directly.


Unless you read and thoroughly understand the script first.


Not even then. The server can detect if you pipe to a shell or just download the script: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
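
(Worth noting: that trick only works when the script is fetched again at execution time. If you save it, read it, and then run the saved copy, you run exactly the bytes you inspected. Roughly, with a hypothetical URL:)

    curl -fsSL https://example.com/install.sh -o install.sh   # download once
    less install.sh                                           # actually read it
    sh ./install.sh                                           # run the inspected copy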


“Don’t ever” is a statement that depends on your threat model.

The vast majority of users would trust Homebrew (for example) to not do something like that.


Thanks for that. (Now I hate bash slightly more than I hated it already.)


It has an emoji in the title...I'm sold.



