
I've been building distros for years, and the most wasteful part of it is running zillions of autoconf processes that waste most of each project's build time looking for mostly the same things.

I always thought it was insane. Perhaps it was a good idea 30 years ago when we were actually building for dozens of 'unstable' variants of UNIX with a dozen compilers, but these days?

And yes, MOST projects can be compiled just fine with a two-page Makefile, more often than not with -j for parallel builds, and as a bonus, they won't keep around a dozen turd files.

Oh, and it is supposed to help 'portability', but MOST of my time is wasted trying to fix autotools configs when they invariably break in some new, interesting and arcane way.



I doubt there are many new greenfield projects using autotools popping up. So the question is not really whether one should use autotools, but whether projects should spend effort migrating away from a working autotools setup, which is inevitably also a breaking change for downstream consumers. Framed like that, it is suddenly a much more difficult question, especially as the migration itself can often be quite non-trivial.


A graceful migration path should be implemented over a long period to allow downstream enough time to process the changes.


Maybe run both build systems in parallel for a bit, eh? But yeah, it's going to be a lot of dev hours to remove it. Still, at least there is some motivation now.


> […] looking for mostly the same things.

In case folks don't know, caching exists:

> A cache file is a shell script that caches the results of configure tests run on one system so they can be shared between configure scripts and configure runs. It is not useful on other systems. If its contents are invalid for some reason, the user may delete or edit it, or override documented cache variables on the configure command line.

> By default, configure uses no cache file, to avoid problems caused by accidental use of stale cache files.

> To enable caching, configure accepts --config-cache (or -C) to cache results in the file config.cache. Alternatively, --cache-file=file specifies that file be the cache file. The cache file is created if it does not exist already. When configure calls configure scripts in subdirectories, it uses the --cache-file argument so that they share the same cache. See Configuring Other Packages in Subdirectories, for information on configuring subdirectories with the AC_CONFIG_SUBDIRS macro.

* https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...


But it doesn’t work reliably. If you change the configuration options, you have to clear the cache. The cache contents are not reliably shareable between multiple projects or different versions of autotools.


Autotools was always horrible, but it had a purpose back in the 90s; since then it's entirely vestigial and does nothing useful. It standardizes flags to configure scripts and make targets, yes, but you can follow that standard without actually using autotools. It enables cross-compilation, yes, but by far the biggest roadblock to successful cross-compilation is autotools itself; without it, it's pretty easy.

Meanwhile, these days, actually trying to build on a new platform is harder if the software is using autotools than if it's using plain makefiles. Because for all the noise about feature checking, nobody, including the projects using autotools, actually uses the defines from autotools; they check the OS or architecture instead.


I currently make my living porting software to non-Linux and non-x86_64 cross-built systems, and I can vouch with certainty from lived experience that your assertions are entirely untrue.


> using the defines from autotools

Is there a list of them somewhere?


You can decide what to test for. For example,

AC_CHECK_FUNCS([mlockall])

AC_CHECK_HEADERS([cpuid.h])

will define HAVE_MLOCKALL and HAVE_CPUID_H, respectively.
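
To see where those end up: configure writes them into config.h, and the C code guards on them. A minimal sketch, assuming the two checks above and a generated config.h; the lock_all_memory() helper and its fallback behaviour are made up for illustration:

  /* Sketch of consuming autoconf feature defines from a generated config.h.
     Assumes AC_CHECK_FUNCS([mlockall]) and AC_CHECK_HEADERS([cpuid.h]) ran
     at configure time; lock_all_memory() is a hypothetical helper. */
  #include "config.h"

  #include <stdio.h>

  #ifdef HAVE_CPUID_H
  #include <cpuid.h>           /* only pulled in when configure found it */
  #endif

  #ifdef HAVE_MLOCKALL
  #include <sys/mman.h>
  #endif

  static int lock_all_memory(void)
  {
  #ifdef HAVE_MLOCKALL
      /* Feature present: pin current and future pages in RAM. */
      return mlockall(MCL_CURRENT | MCL_FUTURE);
  #else
      /* Feature absent: degrade gracefully instead of failing to build. */
      fprintf(stderr, "mlockall() not available on this platform\n");
      return 0;
  #endif
  }

  int main(void)
  {
      return lock_all_memory() == 0 ? 0 : 1;
  }

The point of the pattern is that the portability decision hangs off the feature macro rather than off an OS or architecture check.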


Nowadays we have dozens of 'unstable' variants of Linux distributions with two compilers, and dozens of additional scripting and managed languages.

And a couple of BSD derived ones.

As per a Distrowatch statement from 2023:

"There are over 600 Linux distros and about 500 in active development"


And dozens of non-Linux systems. AIX still lives, I still see HP-UX running, QNX is hidden away quietly running many things you take for granted every day, and there are a number of embedded executives that make the world run.

There's a lot more to the world than writing scripts to steal people's bandwidth by pushing ads to web browsers.


Definitely, I was only making the point that even reducing UNIX === GNU/Linux is kind of myopic.


Even reducing Linux == GNU/Linux is pretty myopic. After all, even Alpine is used in a lot of containers.


And how many are based on Debian / Ubuntu? lol.


During the UNIX wars heyday, except for special ones like Apollo (which used a Pascal dialect) or Tru64/QNX, they were also either based on AT&T UNIX or the BSD spinoff.

A few common bases hardly matter, when every distro is a special snowflake with its own set of incompatible changes.


Dealing with a zillion handwritten Makefiles sounds more hellish than dealing with autotools. These Makefiles will be more fickle, too. I'm not seeing a convenience win.


Are we talking about the same autotools? I'm getting flashbacks to running into some weird bash error on line 3563 of my configure file, where the configure script in question was generated by autoconf, automake, aclocal and all that jazz. Or to autoconf mysteriously failing after automake works, or some similar nightmare. When autotools go wrong, trying to fix it feels like trying to breathe while you're held underwater.

I'm sure part of the problem is I don't understand autotools as well as I could. But when "build expertise in autotools" is on the table, stabbing my eyes out with a fork starts to seem like an appealing option.

Suffice it to say, I prefer to work with handwritten makefiles.


I'm not sure what the best solution is, but I agree on the pain dealing with autotools when they fail. Shotgun debugging where you start poking at things randomly can work for some stuff, but never works here.

After working on many different software libraries and frameworks, my firm belief is that you really need to understand the lower layers to use and troubleshoot them effectively.

Best case scenario there are excellent error messages and you can easily review logfiles to understand the root cause of the problem.

But that is rarely the case. Instead you have to have a detailed mental model of the library/framework you're using, and you must be able to quickly picture what it will do internally for the inputs that you give it.

Once you get to that point, many bugs don't appear in the first place because you immediately see that the input/usage doesn't make sense. And the bugs that do happen become much easier to figure out from the output and cryptic error messages you get.

All this to say that it is really unappealing to work on things like bugs in someone else's autotool scripts. I just want to compile the program to run it. I don't want to spend months of my life to understand the inner workings of autotools.


Yes. If you don't know what you're doing, things can seem difficult and you will have problems. This is not a property of the autotools but a property of life in general.


I seem to get on fine in life in general. I think the suckiness of autotools gets to me because it's so utterly unnecessary. Autotools is complex not because it solves a hard problem but because the design is bad. And everyone who interacts with it pays rent for those bad decisions.

Compare it to CMake or even cargo, which fundamentally solve the same problem, faster and more reliably, on more operating systems. And at no extra cost. The opposite: they're easier to configure and use.

Makefiles can be quite elegant. But it really seems like a waste of human potential to put up with such crap software as autotools. How many neurons do you have devoted to it? You could have used the same time and effort learning something that matters or that brings you joy.


My personal experience with CMake is contrary to your claim.

CMake-based compilation breaks far more often for me than Autotools does, and because of the crappy documentation I can rarely fix it myself.

Cargo and other modern package managers OTOH are bliss compared to both CMake and Autotools, but they are usually language-specific, and modern non-language-specific build systems still lead a niche existence.


> Cargo and other modern package managers OTOH are bliss compared to both CMake and Autotools, but they are usually language-specific, and modern non-language-specific build systems still lead a niche existence.

I hear you about CMake.

I think what kills me about all of this stuff is that there's no essential reason we can't have something as nice as cargo for C and C++ code. Compiling C isn't a fundamentally more complex problem than compiling rust or swift. But instead of solving the problem in a clean, generic, cross-platform way that understands package interfaces and boundaries, C/C++ accreted hacky, vendor-specific macros and junk for decades. And then C build tools need to be horribly complicated to undo all of that damage.

Should your code export functions with __declspec(dllexport) (as VC++ demands) or __attribute__((dllexport)) (as gcc expects)? Can't choose? Maybe your library should define its own idiosyncratic DLL_PUBLIC macro which looks at whether _WIN32 or __GNUC__ is defined on the platform, as sketched below. And now the header file (and thus the exposed functions) can only be machine-interpreted if you know which platform / compiler you're building for. What does it do for clang? What should clang do? Aaahhhh, nightmare o'clock.
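
For the record, here is a sketch of that kind of idiosyncratic export macro. DLL_PUBLIC, BUILDING_MYLIB and mylib_frobnicate() are made-up names; real projects each spell this slightly differently:

  /* Hypothetical per-project export macro; every library reinvents this. */
  #if defined(_WIN32)
  #  ifdef BUILDING_MYLIB                 /* made-up "we are the DLL" flag */
  #    ifdef __GNUC__
  #      define DLL_PUBLIC __attribute__((dllexport))
  #    else
  #      define DLL_PUBLIC __declspec(dllexport)      /* MSVC */
  #    endif
  #  else
  #    ifdef __GNUC__
  #      define DLL_PUBLIC __attribute__((dllimport))
  #    else
  #      define DLL_PUBLIC __declspec(dllimport)
  #    endif
  #  endif
  #elif defined(__GNUC__) && __GNUC__ >= 4
  #  define DLL_PUBLIC __attribute__((visibility("default")))  /* ELF */
  #else
  #  define DLL_PUBLIC
  #endif

  /* Every exported declaration then has to be tagged by hand: */
  DLL_PUBLIC int mylib_frobnicate(int x);

Clang defines __GNUC__ too, so in practice it falls into the gcc branches, which is exactly the kind of implicit knowledge the header itself can't express.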

All of that when C could just do what every other modern language does and have a nice machine-readable public attribute that gets interpreted differently based on whether the code is compiled into a dynamically linked library or compiled statically. I know hindsight is 20/20, and I'm hopeful that C++20's modules will eventually help. But there's so much stuff like this to unpick that it'll take decades, if it happens at all.

We can have nice things. It just takes some engineering and a willingness to change.


I never got the hate for Makefiles. Granted, I mostly use them for simple projects, but compiling a C project takes just a few lines (compile .c to .o, link the .o files into an executable, optionally provide some .PHONY convenience targets) and is very readable and hackable. What's not to love?

I'm sure I'm missing something, and it's possible that they don't scale well, but I prefer them to any other build system for small C projects.


With autotools you get DESTDIR, --prefix and a bunch of other things that work the same across all projects. With plain Makefiles everybody rolls their own thing, and you never know what to expect, or you frequently have to implement those things yourself.

That said, autotools, with its multiple layers of file generation, makes debugging rather annoying. And it's generally much easier to fix a broken Makefile than figuring out why autotools goes wrong.


The problem is that you have a wide range of makes with different syntax. It has gotten better nowadays, as GNU make is available pretty much everywhere, but two decades ago you'd have a range of incompatible makes on different UNIX systems, plus a bunch of incompatible makes on Windows, as each compiler would come with its own make.

Assuming you can ignore Windows, you'd typically end up with two makefiles (Makefile and GNUmakefile) and a bunch of includes sharing code that all make variants understand.


You're dealing with a zillion handwritten configuration files anyway, just hidden behind autotools, in ways that most developers are even more clueless about than we are about makefiles.


> Perhaps it was a good idea 30 years ago when we were actually building for dozens of 'unstable' variants of UNIX with a dozen compilers

It wasn't. Back then, what typically got in your way when porting a piece of software was autotools, and fixing it was usually significantly more complicated than adjusting a well-written Makefile would've been. My hate for that autocrap mainly comes from that period.


25 years ago autotools, and their predecessor the Cygnus tools, were a breath of fresh air. Porting stuff was a nightmare (imake, mkmk, many hand-rolled Makefiles that only supported the author's own system) and autotools made it easy, especially if you were running a non-homogeneous collection of Unix and Unix-like systems, including both libc5 and libc6 variants of Linux.


I lived through that era with a zoo of Unix variants to build for (including, but not limited to, AIX, HP-UX, Solaris, IRIX, Tru64 and various Linux flavours), and I was always excited about stuff that just had hand-rolled Makefiles, as that was way easier to fix than the average software using autotools.



