A modern OS really is a virtual machine: each process perceives that it is running on a single CPU with its own single contiguous bank of memory. Threads make the abstraction a bit leaky, but whatever.
What is interesting is that the operating system virtualizes a machine that doesn't actually exist: fake "hardware" that can execute syscalls like read/write/exit. A VM in the contemporary sense provides exactly the same functionality with a different interface: instead of read/write syscalls you send SATA commands to a disk or commands to a network card, and instead of an exit syscall you get a hardware interface that powers the physical machine down.
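To make that interface concrete, here is a tiny Linux-only sketch of the "machine" a process actually targets: its device I/O and its power button are both just syscalls.

    // The machine the OS virtualizes: I/O and shutdown are syscalls,
    // not SATA commands or a physical power button. Linux-specific sketch.
    #include <unistd.h>

    int main() {
        const char msg[] = "hello from the machine the OS provides\n";
        (void)write(STDOUT_FILENO, msg, sizeof(msg) - 1);  // "device I/O" is a syscall
        _exit(0);                                          // "power off" is a syscall
    }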
Containerization is actually a logical next step from this. Why virtualize a REAL hardware interface only to virtualize a fake one on top of it? The only reason to do that is if you want multiple fake interfaces, e.g. Linux and Windows. When virtualizing a bunch of Linux machines, mostly you just want isolation of your processes. Virtualizing real hardware was a hack because Linux could not isolate processes on its own, so you had to run multiple copies of Linux! Now, with cgroups and other resource namespacing in the kernel, it can isolate resources by itself.
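As a rough sketch of those primitives (Linux-only, needs root or CAP_SYS_ADMIN, and the details here are my own illustration): the same clone(2) namespace flags that container runtimes build on give a child process its own PID, mount, and UTS namespaces directly, no guest kernel required.

    // Minimal namespace-isolation sketch. Run as root, or clone() fails
    // with EPERM.
    #include <sched.h>
    #include <csignal>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    static char child_stack[1024 * 1024];

    static int child_main(void*) {
        // Inside its new PID namespace, the child sees itself as PID 1.
        std::printf("child sees pid %d\n", static_cast<int>(getpid()));
        return 0;
    }

    int main() {
        int flags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | SIGCHLD;
        pid_t pid = clone(child_main, child_stack + sizeof(child_stack), flags, nullptr);
        if (pid < 0) { std::perror("clone"); return 1; }
        waitpid(pid, nullptr, 0);
        return 0;
    }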
The fact that an OS supplies system calls is mostly beside the point – that is a separate concept (not covered in the original article), which we usually call "software libraries", and libraries had been in use long before. Innovation #2 did not cite the standard libraries as the point of an operating system – process isolation is the point.
I definitely agree that hardware virtualization is going the long way around, and that more refined process isolation is the way to go. The Operating System was made for this, and it should continue to do this; there is no architectural need for an additional level of isolation.
> If the profession ever wants to produce anything reliable in C++
This has already been done, so you are obviously wrong.
Exceptions are cute for small amounts of code. But as soon as you get a sizeable codebase, you have zillions of functions just waiting to explode your call stack in ways you could never expect.
Documentation doesn't fix it either. Even if you could somehow guarantee that every single function has a detailed description of every exception it can throw, you won't get devs to read all of that documentation for every function involved in some piece of code that needs a small fix.
> Exceptions are cute for small amounts of code. But as soon as you get a sizeable codebase, you have zillions of functions just waiting to explode your call stack in ways you could never expect.
Exceptions do not "explode your call stack", and any function can fail in ways both documented and undocumented. Consistent use of exceptions that derive from a small and well-chosen set of base classes means that callers can choose where and how to respond to categories of errors, instead of being forced to litter all code with error handling conditionals. I have worked on large applications that made the transition to using exceptions, and when combined with strict RAII and other best practices, the effect is absolutely to make code easier to understand and much more robust.
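A minimal sketch of what I mean (the class names are invented for illustration, not from any particular codebase): callers choose the level at which a whole category of failures gets handled, instead of checking every call site.

    #include <iostream>
    #include <stdexcept>

    // A small, well-chosen set of base classes: callers catch categories.
    struct TransientError : std::runtime_error { using std::runtime_error::runtime_error; };
    struct PermanentError : std::runtime_error { using std::runtime_error::runtime_error; };

    void fetch_record(int id) {
        if (id % 2 == 0) throw TransientError("connection reset, retry later");
        throw PermanentError("record does not exist");
    }

    int main() {
        for (int id : {1, 2}) {
            try {
                fetch_record(id);
            } catch (const TransientError& e) {
                std::cerr << "retryable: " << e.what() << "\n";  // e.g. back off and retry
            } catch (const PermanentError& e) {
                std::cerr << "giving up: " << e.what() << "\n";  // surface to the caller
            }
        }
    }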
The key is "small and well chosen set of base classes" for your exceptions. This is where you got the most benefit, and you probably know it. Without exceptions, you can have a class that represents well defined state such as either success or an error code, with the error code being well defined, returning an object of this class for all methods that need to return success/failure codes, and requiring that it be handled, so compilation fails if callers ignore the return value.
What's great about this is that I might call a method that simply concatenates two strings, that is all it does, and I know this won't fail (OOM can always happen, but good luck recovering from that), so I don't return a status. I can call this method assuming no failure, and no boilerplate is needed. With exceptions, there's always the possibility that an exception can be thrown. What if I call this method from code that needs to close resources? Great, now I absolutely must have RAII for cleanup, which is more complicated than just doing close(x). Also, what are you going to do when using third-party code with its own base exception class? Now, at the upper layers of your code, you have to have a handler for each base class. Without exceptions, someone might have their own status object they return, and I'll need to translate it, or my code won't compile. Your code will compile even though you might not handle that third-party base exception class.
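For reference, the RAII in question is roughly this (a sketch with a made-up wrapper; std::unique_ptr with a custom deleter would do the same job): once any call can throw, the resource needs an owner whose destructor runs during unwinding, rather than a close(x) at the end of the function.

    #include <unistd.h>
    #include <fcntl.h>
    #include <stdexcept>

    // Owns a file descriptor; the destructor closes it even if an
    // exception unwinds the stack past the owner.
    class UniqueFd {
    public:
        explicit UniqueFd(int fd) : fd_(fd) {}
        ~UniqueFd() { if (fd_ >= 0) ::close(fd_); }
        UniqueFd(const UniqueFd&) = delete;
        UniqueFd& operator=(const UniqueFd&) = delete;
        int get() const { return fd_; }
    private:
        int fd_;
    };

    void process(const char* path) {
        UniqueFd fd(::open(path, O_RDONLY));
        if (fd.get() < 0) throw std::runtime_error("open failed");
        // ... anything called here may throw; the descriptor still gets closed ...
    }

    int main() {
        try { process("/etc/hostname"); } catch (const std::exception&) {}
    }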
Of course, most of the time code just propagates the error up, so that adds boilerplate when you don't have exceptions.
I think the talk of using exceptions vs returning a status is a red herring. They have their positives and negatives, and we can argue this until the cows come home. The most important thing is to use a set of well-defined success/error codes; whether they come from exceptions or a returned status isn't really important. I've seen codebases with C++ exceptions working just fine, and I've seen C++ code with no exceptions that worked well. The commonality between the two is well-defined error codes.
Exceptions are cute for small amounts of code. But as soon as you get a sizeable codebase, you start to understand the huge advantage they give: your code shrinks by an order of magnitude vs the same code with manual error propagation.
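A toy illustration of that shrinkage (the functions are made up): with status returns, every intermediate layer repeats the check-and-propagate dance; with exceptions, only the layer that can actually do something about the failure mentions it.

    #include <stdexcept>
    #include <cstdio>

    // Manual propagation: every layer re-checks and re-returns.
    bool read_config(int* out) { *out = 42; return true; }
    bool parse_config(int raw, int* out) { if (raw < 0) return false; *out = raw * 2; return true; }
    bool load(int* out) {
        int raw = 0;
        if (!read_config(&raw)) return false;       // propagate
        if (!parse_config(raw, out)) return false;  // propagate again
        return true;
    }

    // Exceptions: intermediate layers contain only the actual logic.
    int read_config_or_throw() { return 42; }
    int parse_config_or_throw(int raw) {
        if (raw < 0) throw std::runtime_error("bad config");
        return raw * 2;
    }
    int load_or_throw() { return parse_config_or_throw(read_config_or_throw()); }

    int main() {
        int v = 0;
        if (load(&v)) std::printf("status version: %d\n", v);
        try { std::printf("exception version: %d\n", load_or_throw()); }
        catch (const std::exception& e) { std::printf("failed: %s\n", e.what()); }
    }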
It was the server which allowed customers to download Google software onto their computers.
If your download server was slow and randomly disconnecting transfers for no good reason, resulting in potential customers giving up, wouldn't it be important?
You're blowing things way out of proportion because you don't have any idea what you're talking about.
It was a server involved in a fraction of Google's downloads, comprising an even smaller fraction of Google's total egress. It took URLs of one form and redirected them to URLs of another form, where other C++ servers written by a staffed team (mine) redirected them to the actual C++ servers (again, deployed and maintained by my team) that delivered the downloads.
haberman wasn't talking about the state of Google's C++ codebase in 2005; he was talking about the state of Google's C++ codebase in 2014. This server was written years ago, using libraries that were years old at the time it was written. It wasn't maintained, and no team was responsible for it. The core libraries on which it was based were replaced by the libraries haberman lauded.
The only reason you can even try to use this server as an example is because you have no clue what you're talking about.
The different examples are all pretty much just different ways of writing the same thing. And then another different way of writing the same thing is presented, with "db:" prepended. Huh.
The only thing that is substantially different between any of these is the engine/protocol/adapter/whatever section. Perhaps that field should always refer to the protocol type and not the driver, but I consider that problem pretty negligible.
The db: prefix must (according to the article) always be there. What he's describing is the URI part, which follows RFC 3986. It's not two formats, just one: db:<URI>.
Yes, but for convenience (and compatibility with other proposals), I think it would be okay for a given implementation to recognize well-known engines and treat them as valid DB URIs even in the absence of a db: prefix. I've added notes to the documentation (https://github.com/theory/uri-db) to that effect.
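Roughly this kind of lenient handling (just a sketch; the function names are made up): strip an optional db: prefix, then read the RFC 3986 scheme as the engine name.

    #include <iostream>
    #include <string_view>

    // Drop a leading "db:" if present.
    std::string_view strip_db_prefix(std::string_view uri) {
        if (uri.rfind("db:", 0) == 0) uri.remove_prefix(3);
        return uri;
    }

    // The RFC 3986 scheme of the remaining URI is the engine name.
    std::string_view engine_of(std::string_view uri) {
        uri = strip_db_prefix(uri);
        auto colon = uri.find(':');
        return colon == std::string_view::npos ? std::string_view{} : uri.substr(0, colon);
    }

    int main() {
        std::cout << engine_of("db:postgresql://user@host:5432/mydb") << "\n";  // postgresql
        std::cout << engine_of("postgresql://user@host:5432/mydb") << "\n";     // postgresql
    }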
Because it's more than that. It wasn't originally to speed up run time, but compile time. That history is wildly fascinating to me. We tend to take compile time for granted, unless we are bitching about Scala, or compiling C++ at Google.
The author also addresses the importance of history. Voodoo knowledge _does_ pervade modern computer science, and I for one am happy to see something different.
But is it really? I'm inclined towards anonymouz's and adamnemecek's idea here: Dr Richard clearly says indices are offsets, no? It's nice the author gets all sentimental about it, but that doesn't make him right.
If you ask the average first-world human being "What are Macs good at?", they usually say something about ease of use and multimedia stuff. It's just what people think.
Now, on my Mac, I do exactly zero multimedia-related things. So I agree that the multimedia thing isn't necessarily true, but that's what I hear about Macs from non-technical people.