The only reason I have started to use `eww` in Emacs to "read the web" is that people keep pushing that many dark patterns on everybody; it has become unbearable!
I've been trying to understand what Zig's place is in the world so I can decide whether to learn more about it. I like the idea of simplicity in programming languages, but other than that, and given that Go already exists, what is the value proposition here?
In particular:
- LLVM is not enough, let's write our own compiler.
- Interfaces are an overhead, use comptime instead or roll your own vtables.
- In a world where "memory unsafe" languages are under attack... yeah, we don't care about that.
I'm not trolling; these are serious questions from afar that I would love to figure out before investing time in Zig.
One notable thing about Zig (and Andrew) is their willingness to rethink everything, and a lack of fear of digging all the way down and building their own versions of underpinning things. They believe incremental compilation should be an option, so they have to write their own compiler, linker, etc. They're already pushing the boundaries of what new languages can do, and—eventually—will be expected to do.
[Edit: expanding]
For instance, completely platform-independent cross compilation is something Go popularized, but Zig really nailed. (In fact, if you use cgo, the generally accepted method for Go cross-compilation is to use Zig as the C compiler!)
Another interesting thing about Zig is that it happily compiles C code, but brings more modern package management. Loris Cro has described how it would be quite reasonable (and pleasant) to maintain C code using Zig: https://kristoff.it/blog/maintain-it-with-zig/
What does incremental compilation have to do with writing your own compiler from scratch? Doesn't Rust support incremental compilation, as does just about every language out there?
I guess he means that in order to achieve incremental compilation they need to write their own code generation and linker for every architecture and format. That's needed because incremental compilation here doesn't just mean storing object files in a cache directory (it has always worked that way). They also want to cache every analyzed function and declaration, so they have to serialize compiler state to a file. But once analysis is done, LLVM starts code generation from scratch, and that is the time-expensive part, even in debug builds.
Yes, but isn't that an implementation detail?
Shouldn't they prioritize getting to 1.0 (the language itself) and then work on implementation details like that? I mean, it's a monumental task to write a compiler and linker from scratch!
Well, if your compilations turn out to be sub-millisecond, it's not an implementation detail :). As of now it is only supported for x86_64 Linux (ELF only) and it has some bugs; incremental compilation is in its very early stages. Andrew explained in the 2024 roadmap video[1] why they are digging this deep into a multi-platform toolchain (beyond incremental compilation):
- Fast build times (closely related to IC; LLVM makes development iterations very slow, even for development of the compiler itself)
- Language innovations: besides IC, async/await is a feature Andrew determined was not feasible to implement with LLVM's coroutines. Async will likely not make it into 1.0, as noted in the 0.13 release notes; it is not discarded yet, but neither is it a priority.
- There are architectures that don't work very well in LLVM: SPARC and RISC-V are the ones I remember
My personal take is that a language meant to compete with C cannot have a hard dependency on a C++ project. That, and it's appealing to have an alternative to LLVM for when you want to do some JIT but don't want to bring in a heavy dependency.
Ironically all major production C compilers evolved to be written in C++.
Also, if they value compilation speed that much, maybe they shouldn't be so pushy about always compiling from source, with no support for Zig binary libraries.
> Shouldn't they prioritize getting to 1.0 (the language itself)
Nope. Different languages have different priorities and different USPs. For Zig, sub-second / incremental compilation and the cross-compiling toolchain are flagship features. Without those there is no point in releasing 1.0.
They want incremental compilation at the function level. So if you change a function, you recompile just that function. This necessitates a custom linker (indeed a custom linking strategy), and (I think?) a custom compiler.
> - In a world where "memory unsafe" languages are under attack... yeah, we don't care about that.
FWIW Zig does offer spatial memory safety, but does not provide temporal memory safety in the language (e.g. it allows dangling references). It also fixes most of the sloppiness, UB and general footguns in C and C++ (and most memory corruption issues are a side effect of those).
I think Rust is the best option for the `Need to make a project` kind of work.
It is overall better, IMHO, and the ecosystem and safety pay dividends.
But Zig has several nice things (I don't use it directly, but I appreciate them, and it's my way to cross-compile to musl):
* It's truly faster to compile
* It's far better at cross-compiling
* It's far smaller
* comptime is a much better `macro` language. I don't like the ergonomics of Rust's split between the two styles of macros, where proc-macros are a cliff of complications
I think Zig fits the bill for `C is a compiler target`. Whenever I need to integrate with C or generate C, I think it is now better to target Zig.
Simple answer: Anywhere you'd use C, and want a nicer language. So: embedded, operating systems, drivers, compilers, PC applications etc.
I would love to try it out with a serious project, but am waiting on libs like HALs for microcontrollers, GPU API bindings, GUIs etc to mature to a usable point.
Well, I use C++ for that today and don't see much benefit in switching really. Switching to a memory-safe language is something that I can support and even sell to my team, but switching to just a "simpler" language I'm not sure...
The Zig stdlib and comptime features like generics and type inspection go way beyond C (and in parts also beyond C++ and Rust), though; i.e. Zig is much more than "just" a C replacement.
While other communities are already taking direct steps towards safety, the C++ community is still trying to define what safety means. I think it's funny and sad at the same time!
I didn't read the article (just browsed it), but here's the TLDR from the article itself:
```
tl;dr: I don’t want C++ to limit what I can express efficiently. I just want C++ to let me enforce our already-well-known safety rules and best practices by default, and make me opt out explicitly if that’s what I want. Then I can still use fully modern C++… just nicer.
```
As is normal in C++, the defaults are wrong. Developers should "opt in" for unsafe instead of "opt out" of it!
It's "zero cost abstractions over what you would write by hand". If you argue that anyone doing array access should be doing bounds checks when in doubt, a C++ compiler performing bounds checks would still be considered zero(additional)-cost.
I'm all for better tools to help the compiler figure things out.
Here is an example where I can't communicate the invariants to the compiler:
```
#include <algorithm>
#include <vector>

int find_two(std::vector<int> v) {
  // ...
  v.push_back(2);
  std::sort(v.begin(), v.end());
  // No need to check i < v.size(): we know we will find the value 2 somewhere in v.
  for (int i = 0; i < v.size(); ++i) {
    if (v[i] == 2) return i;
  }
  return -1;  // never reached, but the compiler cannot know that
}
```
Note that in C++ you can manually mark code after the loop as unreachable, which would indeed skip the size check.
But that's as bad as not checking bounds in the first place.
I agree about devcontainers. Now you are pushing everyone on the team to use vscode, which is bad on its own. I think docker is fine, but I mostly try to stay away from any project that even mentions vscode (an editor should not be part of any project IMO).
I don't get this. If a project has a devcontainer configuration, you don't have to use it - it's just there if you want to use it. Also the devcontainer format considers vscode an extension, it's not mandatory - it's just that vscode is about the only thing to fully support devcontainers, so it's the natural choice (for now).
It really depends on the audience. I find having an opinionated, but very easy to get started with setup (like vscode + devcontainers) really handy for juniors, or folks that rarely contribute (they might not if setup is painful). The more senior devs, or those with strong opinions, can still use whatever they want.