I'm very excited about this! Minoca is an interesting system, and I applaud any attempt to make driver-writing less inherently horrible.
Minoca OS has been around for a while, but the news is that they're GPLv3. I think that's a great thing! The MIT license is good for software that wants to permeate through everything, but for building a community, the GPL is a good idea.
It seems that for any operating system to be successful, it has to carry around POSIX compatibility like an extremely expensive entry pass. I wonder when we will leave that behind? Or if we ever will? I'm glad POSIX is just a layer in Minoca, and not the base of the system, because these days it really should just be treated like a big wad of glue.
PS: I love the object manager. I don't see any particularly ground-breaking networking stack, though. A plan9-inspired networked file system approach would have been amazing, but it seems this project is content with today's more typical approach. Perhaps it is just trying to be less opinionated about network structure than plan9 was?
PPS: I'm terrible at organizing a comment. Maybe I need a blog.
Seems to me like POSIX compatibility is actually a cheap pass that gives you access to a huge software environment. Right on the homepage they mention already having packages for Python, Ruby, Git, Lua, and Node... would that, and thousands of other packages, be feasible without a workable POSIX layer?
> would that, and thousands of other packages, be feasible without a workable POSIX layer?
I think my problem is the core concept. POSIX stands for Portable Operating System Interface (and X stands for Xtreme?). In an age where we spin up entire operating systems to start a single application, why are we defining portability at the operating system level when network portability works so much better?
Keep in mind, this is also an age where systems like Qubes OS can make separate VMs cooperate with each other.
The only sell for POSIX I can think of is performance, and I don't know if I buy it anymore. Why do programs have to be cross-compatible when the concept of an operating system no longer means owning the hardware?
> why are we defining portability at the operating system level when network portability works so much better
A lightweight POSIX-capable system has real value today. Operating systems need to be in more places than just the data center. IoT devices don't have the resources to run a VM or any other fancy containerized environment. POSIX was designed to be used on systems with comparable resources to what many embedded processors now have. It makes sense to leverage the existing codebase where possible.
> POSIX was designed to be used on systems with comparable resources to what many embedded processors now have.
But not comparable environments. Most IoT environments are very small parts of very big systems, and POSIX defines a system with a teletype and a line editor.
I seriously doubt the value of the existing codebase. Saying code made for a server (like most existing Unix code!) is fine for IoT feels wrong, and not just because IoT devices have kilobytes of memory while servers can have gigabytes or terabytes.
I don't think that a universal OS can work well when we know that universal programming languages, universal data transfer protocols, and everything else universal didn't. Imagine if we were still doing everything in PL/I. We recognize now that different programming tasks need different programming environments, yet we still don't think that different programs need different runtime environments. It's just strange to me.
> Saying code made for a server (like most existing Unix code!) is fine for IoT feels wrong,
There was a time when people ran Linux/SunOS/Ultrix/etc. with tiny amounts of RAM and swap. It's not unusual for an embedded device to have 8, 16, 32 MiB, or more RAM available. Many traditional *nix programs can run unchanged on such hardware.
I think the parent's comment was more along the lines of: just because an embedded device _can_ run a Unix doesn't mean it should have to adopt the semantics and history of the Unix environment that POSIX mandates. In particular, the model of TTYs, filesystems, Unix-style file I/O, Unix-style virtual memory / mmap, etc. It's not that this stuff is expensive on modern embedded systems (which, as pointed out, often have the hardware capacity of high-end systems from 20 years ago); it's that these capabilities are either not needed, or they impose a conceptual model of what a computer and operating system have to be that doesn't match what the machine is designed for.
There's a whole market of suppliers that's been going strong for decades, especially for safety-critical embedded systems with untrusted Linux/POSIX apps in separate partitions. QNX is one of the best examples as far as commercial adoption goes.
EDIT to add the QNX Desktop Demo that came on a single floppy. Throwing it in since you were mentioning resource-constrained systems further down. A floppy is 1.44 MB, with base QNX running in the ROMs of embedded systems. You can scale such architectures up or down however you wish. :)
I agree with you in general but IIRC Lua only depends on the C standard library and does not need POSIX to run. There are even versions of it that can run inside the kernel!
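For reference, here's a minimal sketch of embedding Lua from plain C, assuming a stock Lua 5.x with its usual headers installed; nothing below touches POSIX, only the C standard library underneath:

    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    int main(void) {
        /* Create a Lua state and load the standard Lua libraries. */
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        /* Run a chunk of Lua; everything underneath is ANSI C. */
        if (luaL_dostring(L, "print('hello from lua')") != 0) {
            fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
        }

        lua_close(L);
        return 0;
    }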
The best thing about POSIX at this point is backward compatibility, which is nothing to be sneered at!
But it also carries a lot of (in retrospect) bad habits and decisions from the 60s and 70s, as well as a tendency toward redundancy, due to competing standards that were unified and the need for some backward compatibility.
Now, not everyone agrees on what is good and what is bad, so some experimentation in this area is good for everyone. Examples of what bother me include the ludicrousness of ioctl(), messed-up / redundant semaphore semantics (ditto for IPC), primitive memory management semantics, fork() (a great hack for its time, but since it's followed by exec() 99.9999% of the time, it should be split into separate address-space and thread management), the outdated, simultaneously simplistic and baroque security model(s), and various I/O issues too many to go into in an HN comment.
But my loathed feature is undoubtedly someone else's sacred cow. As I said, letting more flowers bloom is in everybody's interest.
- (Correct) IO is ridiculously non-portable and painful in so many ways that it isn't even funny anymore
- Locks are ridiculously non-portable and painful to the point where you're better off just using "mkdir" (see the sketch after this list)
- POSIX is stuck in the "everything is bytes and we slap an encoding on it some of the time" era thinking. This makes it painful and hard to implement proper text handling in many instances. This also leads to a lot of bad behaviours.
- Memory management is IMHO lacking from a user-space perspective. For example, it's practically impossible to implement a cooperative memory cache on top of it. To the best of my knowledge no OS has the necessary interfaces, though.
- ioctl as you mentioned
- SysV/POSIX IPC is so bad that no one ever bothered actually using it for anything
- Personally I think it's a misleading API (conceptually; see the text example above, for instance), almost to the point of deceptiveness. It's very easy to write correct-looking programs that behave far from intended, especially in edge cases. IMHO code using it is practically unreviewable in everything but the most trivial cases. Non-portability is practically guaranteed; you have to test every platform. Portable code usually turns out to be quite ugly due to platform deficiencies and minor API incompatibilities.
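On the locking point above, a minimal sketch of the "mkdir" trick; it relies only on mkdir(2) being atomic and failing with EEXIST when the directory already exists (the lock path here is just an example):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Example lock path; in practice pick one per protected resource. */
    #define LOCK_DIR "/tmp/myapp.lock"

    int acquire_lock(void) {
        /* mkdir() either creates the directory (we now hold the lock)
           or fails with EEXIST (someone else already holds it). */
        if (mkdir(LOCK_DIR, 0700) == 0)
            return 0;
        if (errno != EEXIST)
            perror("mkdir");
        return -1;
    }

    void release_lock(void) {
        rmdir(LOCK_DIR);
    }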
> POSIX is stuck in the "everything is bytes and we slap an encoding on it some of the time" era thinking.
I actually view this as a feature. Encoding/decoding of data should be an application level thing, not an OS-level thing. As far as the OS is concerned, data should be bytes.
(Of course, it is true that, since POSIX defines a terminal spec, it has to at least specify how bytes are mapped to characters that print on the terminal. But I would rather see that removed altogether, so a terminal becomes just another application, than have an OS try to muck about with encodings.)
Applications have for the most part proven that they cannot be trusted to get text encoding and decoding right, especially not in any consistent way. Operating systems definitely should make it possible to deal with the raw byte streams, but the default and preferred method of text handling should be a standard higher-level interface.
> Applications have for the most part proven that they cannot be trusted to get text encoding and decoding right, especially not in any consistent way.
That's because text encoding and decoding is a mess. Operating systems doing it doesn't make it any less of a mess; it just inserts the mess deeper into everything. For example, look at all the quirks and edge cases in file name handling between different OS's, simply because nobody is willing to just admit that to the OS, file names should be sequences of bytes, which are easy to share between machines running different OS's.
The basic issue is that text encoding and decoding exists because bytes have meanings. But unless/until we invent artificial intelligence, computers can't deal with meanings (because the meanings are not simple computable functions of the bytes). And OS's, particularly, should not even try. Applications might have to try, but the cost if they get it wrong is much less.
Regardless of whether operating systems get involved in tasks like re-encoding text, they really should at least carry along the metadata about encodings whenever they're handling bytes that represent strings. Completely ignoring the problem and leaving it up to applications further up the stack just ensures that there will be incompatible competing standards for how to tell applications how to decode the string data they get from the OS. You don't want some apps trying to write filenames in UTF-8 while others use UTF-16, but allowing it to happen silently is even worse.
> Completely ignoring the problem and leaving it up to applications further up the stack just ensures that there will be incompatible competing standards for how to tell applications how to decode the string data they get from the OS.
I think it's naive to think operating systems aren't going to fragment in order to offer "features" (and lock-in), and then papering over all that fragmentation has to happen in the application anyway, unless there's a standard. And if there's a standard, the application itself can deal with it.
> Regardless of whether operating systems get involved in tasks like re-encoding text, they really should at least carry along the metadata about encodings whenever they're handling bytes that represent strings.
I have no problem with this as long as the metadata itself is just additional bytes. But if the metadata needs to be decoded in order to figure out how to decode it, we have a problem... :-)
That's untenable. A higher-level API for strings with encodings needs to get the OS involved in the semantics to at least some extent, or else it merely obfuscates the problem instead of solving it. If the OS provides a way to store strings with a metadata field representing the string encoding, but doesn't define which bit pattern means UTF-8, then all of that extra complexity at best serves to call attention to the fact that encoding matters, but it does nothing to help applications ensure that they correctly interpret data created by a different application. If you're going to give your platform official APIs to address the very real problem of handling string encodings, then they ought to be useful enough to truly make it less of a problem. And since none of this actually precludes also including low-level byte-oriented APIs, there's no justification for stopping with a super-minimalist half-solution.
> If the OS provides a way to store strings with a metadata field representing the string encoding
You're missing my point. The OS should provide a way to store bytes. That's it. The meaning of the bytes is up to the application. If, to the application, the bytes represent text with a certain encoding, then it's up to the application to figure out how to translate the bytes, possibly using other stored bytes to decide. The OS doesn't need to get involved in any of this.
> it does nothing to help applications ensure that they correctly interpret data created by a different application
This is already a solved problem, and it isn't solved by OS's. It's solved by standards. For example, every web browser constantly has to correctly interpret data created by a different application. It can do so because HTML, CSS, JS, etc. are all standards that define how the bytes sent from the server to the client are to be interpreted. The browser doesn't even have to care what OS it's running on; all the OS is doing is giving it network sockets and a place for local data storage.
> If you're going to give your platform official APIs to address the very real problem of handling string encodings
If "platform" means "OS", then no, I'm not. If "platform" means "application framework", then sure, but an application framework is not the same thing as an OS. The fact that many OS's insist on also being application frameworks does not make the two things the same.
> If "platform" means "OS", then no, I'm not. If "platform" means "application framework", then sure, but an application framework is not the same thing as an OS. The fact that many OS's insist on also being application frameworks does not make the two things the same.
As I said originally, we tried that, and it doesn't work. Even within the context of a single locale, .NET applications will happily emit UTF-16 to be consumed by a Python script expecting all strings to be UTF-8, and with only byte-oriented APIs there's no side channel to convey that there's a mismatch that needs to be reconciled. Extending this problem from file and pipe contents to filenames is moving in the wrong direction. Operating systems absolutely should get involved in helping applications safely and usefully exchange information; that doesn't destroy the concept of an application framework, it just means that your OS is more than a hypervisor.
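Not the .NET/Python scenario itself, but a tiny C illustration of the same byte-level mismatch: the UTF-16LE bytes for "hi" are perfectly valid bytes, yet a consumer assuming UTF-8 (or any NUL-terminated text) silently mangles them, and nothing in a byte-only API signals the disagreement:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* "hi" encoded as UTF-16LE: 'h', 0x00, 'i', 0x00 */
        const char utf16le[] = { 'h', 0x00, 'i', 0x00 };
        /* The same text in UTF-8 is just the two ASCII bytes. */
        const char utf8[] = { 'h', 'i' };

        /* A consumer treating the UTF-16 stream as ordinary
           NUL-terminated text sees only "h" and stops. */
        printf("seen as a C string: \"%s\" (length %zu)\n",
               utf16le, strlen(utf16le));
        printf("utf-8 is %zu bytes, utf-16le is %zu bytes\n",
               sizeof utf8, sizeof utf16le);
        return 0;
    }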
Very late post but I can't resist paraphrasing the old joke about regular expressions: some people, whenever they see an application-level problem, think "Oh, I'll just get the OS to solve it!" Now they have two problems.
> As far as the OS is concerned, data should be bytes.
If the OS knows the type of those bytes, it can do things like implement global garbage collection, intelligent caching, intelligent snapshotting &c. It can also enforce invariants across all user code.
Well, I think that OSes could do a lot more (and kernels a lot less … but that's a different story). Why _shouldn't_ an operating _system_ do an awful lot to ensure user safety, resource utilisation &c.?
Well, I'm not going to disagree except to say that there are posix_spawn and vfork. I don't think anyone thinks the IPC solutions on offer are great but perhaps we should lower our expectations on that.
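For what it's worth, a minimal sketch of posix_spawn standing in for the usual fork()+exec() pair (error handling kept short; /bin/ls is just an example program):

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "ls", "-l", "/", NULL };

        /* Spawn the child in one call, without first duplicating this
           process's address space the way fork() does. */
        int err = posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn failed: %d\n", err);
            return 1;
        }

        int status;
        waitpid(pid, &status, 0);
        return 0;
    }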
I mean, it sounds like his objection is that an operating system that doesn't have POSIX compatibility is dismissed out of hand.
If we think of POSIX compatibility as something which is required in order for an operating system to be viable at any level, it means that a lot of the energy in pioneering a new OS gets devoted to building up this compatibility simply so it can get some exposure.
If we really want to encourage a fresh approach and disseminate new ideas, we need to move away from criteria like this and look more specifically at the principles and work being put forth and whether or not those are objectively successful or have merit. That's a much better way to move forward.
I think if you were building an OS as an academic research exercise and publishing papers based on it, no one would object to your results on the basis of a lack of POSIX compatibility.
The criteria are different if you're writing an OS for practical use with wide adoption, though. There, backward compatibility is rightly considered a positive attribute: there is much existing software in the world, and the entire point of a practical OS is, after all, to run the software that people use.
Exactly the same situation exists with CPUs. If you come up with a new micro-architecture that you actually want to sell to customers, you'd better come to the party with a C compiler and a ported OS or three.
Interix, which later became SFU (Windows Services for UNIX) and then finally SUA (Subsystem for UNIX-based Applications), was initially developed in the 90s by Softway Systems as an optional add-on subsystem to the Windows family of OS's.
For the record, it did not ship built-in or as part of the kernel. Even within Microsoft, it seemed the project was in perpetual life-support-mode.
Still, I do feel a small pang of sadness. There was once a time in the early 2000s when I would have loved to see this chimerical oddity succeed.
And we now have the Windows Subsystem for Linux, which I believe is supposed to be entirely new, but carries on the strange saga of Windows and POSIX userspace coming together.
You can use the Plan9 file system on Linux afaik. Still nobody cares.
Mounting my sound card across the network sounds like a nice hack. It is more important that sound does not stutter though. That has soft real time requirements, which Plan9 does not address.
> You can use the Plan9 file system on Linux afaik. Still nobody cares.
Actually, start QEMU with some special arguments and a directory path, and the Linux guest inside will be able to see the given directory as a read-write 9P filesystem, mountable with a single command. (New files get QEMU's UID unless QEMU is run as root.)
> Mounting my sound card across the network sounds like a nice hack. It is more important that sound does not stutter though. That has soft real time requirements, which Plan9 does not address.
An incredibly good point, and why Plan 9 has zero adoption. :(
I have to say... the code is beautiful: Full English descriptive variable names and nearly every function is documented. As AngularJS has shown, cleaner, consistent code and architecture style = more contributors.
This looks like the NT kernel coding style, especially with the pointless typedefs for things like PVOID and the PascalCased function names with prefixes like `IopPerformIoOperation`. I looked up the Minoca founders on LinkedIn and, behold, they both worked as engineers on the Windows NT team. :)
NT source code typically (and I say typically, because it's a mish-mash of tens of thousands of people's work over 35 years of development) has local variable names in first-letter-lower camel case. If these guys worked on Windows, it's kind of surprising they didn't follow that convention.
I think it actually makes more sense, since in English we capitalize names (variables) but lowercase generic nouns (types). I agree that it is unusual, though.
Oh please. You have to start somewhere and this is where they're starting. By comparison, Linus' first announcement was:
Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby,
won't be big and professional like gnu) for 386(486) AT
clones. This has been brewing since april, and is
starting to get ready. I'd like any feedback on things
people like/dislike in minix, as my OS resembles it
somewhat (same physical layout of the file-system
(due to practical reasons) among other things).
I've currently ported bash(1.08) and gcc(1.40),
and things seem to work. This implies that I'll get
something practical within a few months, and
I'd like to know what features most people would want.
Any suggestions are welcome, but I won't promise I'll
implement them :-)
Linus (torv...@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a
multi-threaded fs. It is NOT protable (uses 386 task
switching etc), and it probably never will support
anything other than AT-harddisks, as that's all I
have :-(.
I don't think this is a first release announcement; the OS has been around for several years. This is the announcement that they are releasing the code under GPLv3. Previously it was closed source.
It's amazing, the submissive tone and lack of confidence in that message, compared to the holier-than-thou attitude Linus presents these days. I would never have guessed he wrote that.
I don't think it's lack of confidence. It's just a realistic acknowledgement that it's a small start and in fact was unlikely to be useful. But he wanted feedback on how to make it useful.
I think Linus is just a practical person who is good at assessing reality. He doesn't get caught up in grand visions without action.
I don't think he is holier than thou now either. He's just busy and forcefully trying to get contributors to come to grips with things he has learned by experience on his project.
I've never really heard Linus preach about other people's projects, except where he intends to do better, as with svn. His advice is limited to the kernel as far as I can tell, and it's perfectly rational for him to be opinionated about that, because he has skin in the game.
In contrast, Stallman will preach about other projects -- in fact that is his main purpose. So I can see why people would call him holier than thou, but I don't see it with Linus. I do appreciate Stallman to a great degree too.
They want an OS that's smarter about power management and easier to maintain. They describe their high-level objectives in their FAQ:
Why is a new operating system necessary?
...The design requirements of today's devices have drastically evolved in areas of power management, security, serviceability, and virtualization... By starting from a clean slate, Minoca OS is able to incorporate those core tenets into the very fabric of the operating system...
How is Minoca OS different from other operating systems?
...One of the most noticeable differences at the kernel level is the uniform driver model, which provides a maintainable interface between the kernel core and device drivers... Another very noticeable difference is our strong emphasis on the kernel development environment. Being able to step through code in the kernel, boot environment, and even firmware was something we felt was critical to quickly developing and maintaining kernel-quality code...
We took a look at the operating systems out there, and realized it had been over 25 years since the major operating systems had been written. 25 years is a long time to accumulate baggage, not to mention the leaps and bounds by which the hardware has evolved during that time.
>Under the hood, Minoca contains a powerful driver model between device drivers and the kernel. The idea is that drivers can be written in a forward compatible manner, so kernel level components can be upgraded without requiring a recompilation of all device drivers.
This sounds really smart and it looks great overall <3
RHEL/CentOS defines a stable driver ABI, and even has tools that devs can use to check that their binary drivers don't use any symbols outside the ABI.
SUSE does too. As far as I'm aware, most stable distributions use a kABI checker. This is the same reason that Android doesn't have many kernel updates -- because proprietary driver authors don't feel like keeping up to date with kABI changes.
There is nothing that prevents a monolithic kernel from using the same model -- no theoretical barrier to it, in any case. Linux doesn't do this because its developers don't want it to.
> Can we achieve parity with what operating systems are used for in today’s world, but with less code, and with fewer pain points? Can we do better? We’d like to try.
What you're seeing is the serial output. If you plug in an HDMI monitor, you'll hopefully already be at the shell prompt. Email me (chris@minocacorp.com) if you run into further issues.
Hey Chris! Very excited about the potential for a lightweight embedded OS.
I think I understand my confusion now, I was hoping for a shell on the serial port. Most of the work I do with BBB's is headless. Does the exclamation point mean it's finished booting to userspace? Do you think you could rig up the serial port with a shell?
How does Minoca handle GPIOs, and other things like RTCs and SPI? I'm sure you know of device tree and sysfs drivers for GPIO, but I can write you a quick overview!
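As a reference point, here's roughly what the (legacy) Linux sysfs GPIO interface looks like from C; pin 60 is just an example number, and newer kernels prefer the gpiod character-device API instead:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a short string to a sysfs node, e.g. an export or value file. */
    static void write_str(const char *path, const char *val) {
        int fd = open(path, O_WRONLY);
        if (fd < 0) { perror(path); return; }
        write(fd, val, strlen(val));
        close(fd);
    }

    int main(void) {
        /* Export GPIO 60 (example pin), set it as an output, drive it high. */
        write_str("/sys/class/gpio/export", "60");
        write_str("/sys/class/gpio/gpio60/direction", "out");
        write_str("/sys/class/gpio/gpio60/value", "1");
        return 0;
    }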
Perhaps a stupid question but where is the architecture doc? (I think that should be the starting point, especially if you want help from a community.)
I'm also interested in what theoretical results from the last 25 years are being used. Some references to papers would be nice. Since the biggest hurdle is separation and thus security, I can imagine the OS has an embedded theorem prover, but I see no mention of it.
I happen to have an interest in OS development myself: it seems that two things have combined to make bespoke OS development practical again. First, the world has mostly standardised on amd64, UEFI &c. None of these standards may be terribly good (e.g. I've always found x86 and its descendants tasteless compared to the 68000, MIPS and others), but they're good enough, and now there's a critical mass of community support out there for them.
Second, readily-available virtual machines have made it easier than it ever was to rapidly iterate on an OS concept.
My own interest is in non-POSIX, non-Unix-inspired, non-consumption-oriented systems: I think that there's some huge headway to be made in building computers meant to augment the human brain, rather than to just reproduce the past or serve as an entertainment-enabler. But Minoca OS sounds pretty neat in itself. Kudos to the guys for putting in the work, and kudos for releasing it as open source. I hope that they're able to make some money from it too — good work deserves to be rewarded.
It looks like they have basically adopted the coding conventions of Windows API code, down to the /*++ --*/ comments, the style of declaring functions, uppercase types, etc. Not that it is a bad thing, just something I noticed.
Also things like KeCrashSystemEx look very much like KeBugCheckEx in Windows. There certainly is a lot of inspiration.
Edit: I see from LinkedIn the OP is actually an ex-MSFTer who worked on Windows' HAL, so this makes sense. I wonder, though, if this will present legal issues? Could someone say that it is an issue that the API is clearly inspired by the Windows API?
Windows Environment - If you're looking to develop on Windows, you may find this repository helpful, as it contains a native MinGW compiler, make, and other tools needed to set up a Minoca OS development environment on Windows.
Nice to see that development on Windows is supported :)
I would love to see architecture and comparison to other kernels described!
Also, I wonder if this can be made to run on Cortex M4 with "MPU" but not "MMU" hardware? Something to run on the Teensy 3.6 would be interesting.
BTW: Binary compatible function driver interfaces are great! They served us well on BeOS. The Linux "recompile everything under the sun" approach really gets in the way for many real world situations.
Finally, I'm sick and tired of POSIX I/O. It's a really old model, and a really bad match to modern hardware and use cases. Someone needs to develop and popularize the callback/messaging based kernel/application interface of the future, complete with something other than the main() entry point...
Have to agree about the good work part. Sadly, my personal opinion is that I really, really dislike the NT coding style. That makes me wonder how coding style affects people's will to contribute. Compare this to the Linux kernel coding style and they're on opposite ends of the scale; I wonder how it will affect contributions.
Since Minoca seems to be a Corp., does anyone know what their business strategy will be? I could imagine a) a paid enterprise edition, b) paid support or c) paid consultancy (as in "contract the actual maintainers to implement device drivers for your servers").
Whenever a project is corporate-backed, I like to know the business plan upfront before I consider contributing to it.
Right now it seems that the business model is the MySQL one: provide the code as GPLv3 and sell licenses if someone wants to use it in a proprietary context.
And correct me if I am wrong but I think the corp is just the 2 developers right now.
>We at Minoca are trying to make open source work as a business model. One of the ways we're doing that is by offering Minoca OS source for sale under more proprietary licensing terms. To do this Minoca needs to own the copyright to its source. In order to support this business model while also allowing community contributions, we ask that contributors sign a Contributor Assignment Agreement. We're using Harmony Agreements. Before submitting patches, please fill out the CAA for individuals or companies.
Question to authors: do you have any graphics layer in-place? Or at least thoughts about its future architecture?
I am thinking about porting my Sciter [1] HTML/CSS UI Engine to an OS that can be used on small (IoT) devices. I think that HTML/CSS as a UI definition/declaration language is quite convenient.
OP here. Sciter looks cool, nice site too. Currently Minoca has support for a basic framebuffer, which we use to display our green terminal. Our biggest obstacle to having a GUI now is the lack of accelerated graphics drivers. We'd like to add them, but it will be a very large undertaking and probably require cooperation from one of the major graphics vendors.
If I were developing a new OS, I certainly wouldn't have a default text terminal...
That just looks gross.
But my interests differ. I would focus on the graphical user interface as my number one concern.
It would always run in an OpenGL-type graphics mode, where you could spawn any number of text terminal-style windows, but I would definitely skip a full-screen text mode. It's just never needed.
Yes and no. There are so many different parts to an operating system that you quickly favor the text console because it's so easy to do. If you are going to work on a GUI first thing, you might as well forgo the operating system bit and just write it as a program in some other OS while you perfect it.
MacOS Classic was GUI-first. There was no terminal; the closest thing to a console was the MacsBug debugging interface for inspecting assembly instructions.
And won't be under GPLv3. If your goals are to prevent others from locking up your code in proprietary products, and to give your code to the world, then this license is for you. If your goal is "widespread circulation," this is not the license you want.
Linux is not GPLv3 for a reason[1]. Not to mention it's only a kernel, not an entire OS. Basing your OS on a Linux kernel doesn't expose your other code to the GPL. Writing apps to run on a Linux system doesn't expose your code to the GPL.
The Minoca OS folks are asking for the community to contribute code to their OS that's licensed under GPLv3. GPLv3 supporters are necessarily a subset of open source supporters. I contend that the subset is small enough that a Minoca OS licensed under GPLv3 will continue to experience a lack of widespread circulation.
A barebones Debian install of debian-testing or debian-unstable, running no services, takes about 68 MB of RAM, whether with an i386 or amd64 kernel, and with a very recent v4.7 or v4.8 series kernel... how much more lightweight do you need?
But in embedded space, 68 MB RAM just for the system is pretty big. Many systems have a fraction of RAM of that. 1 - 256 kB RAM is common. Of course embedded systems can also have gigabytes of RAM. It's a wide spectrum, so as small as possible is preferable.
It depends; if gp has actual things to back-up what they're saying, then I'd actually like to read them. I'm interested in monokernels because of Haiku, but I'm definitely not an expert.
OTOH, somebody releases a whole OS and the response is "Snore! It isn't a microkernel." - Ok, so...why?
Start at "The Paper" here to skip past Linus vs Tannenbaum politics stuff. He describes the common counterpoints and shows with evidence, including existing systems, that they're not as big a deal as people say.
Animats and I think QNX is probably the best of the commercial ones in balancing all kinds of tradeoffs. It's been used for decades as a self-healing RTOS with good performance. The first link is their description of it, with the second a demo of a product with QNX inside showing how fast it can be on non-desktop hardware.
An open-source one aimed at reliability & legacy compatibility that you can play with. It took UNIX decades to get reliable despite all the labor, but Minix 3's foundation did it with a handful of people over a few years. That's saying something.
Another FOSS one that aims at high-security integrating many best-of-breed components from CompSci like Nitpicker GUI and seL4 microkernel. First link is descriptive slides with second the actual site. This one is still new so will have bugs.
Finally, it's worthwhile to throw in an exemplary one from capability-security that further isolates things with self-healing properties. Based on commercially successful KeyKOS system on mainframes. No longer maintained but docs and GPL code still available for study or revival. Paper also describes other capability kernels.
So, there's a few days' worth of reading and a few years' worth of thinking to do. Hope it helps shed light on why almost every safety- or security-critical system that ever did well in reliability or security was a microkernel-based system. These days, high-assurance is looking at eliminating even that, with CPUs with built-in security, compiler techniques for automated safety/security, and DSLs for easy formal verification of OS or system components. Until that's finalized, and while using traditional hardware, it's best to build on methods that have already worked for decades.
Appreciate it. I try to stay evidence-driven. :) For extra data, Google Gernot Heiser with "L4," microkernel, or OKL4 terms + "evaluation" or "performance." He published lots of comparisons as they put it on lots of phones.
Took a look at the github page. Sounds really impressive, lads. I only wish it were a microkernel and licensed with an 'or later'. In fact, given how well copyleft works for system software, I'd consider AGPL. Anyway, it definitely deserves a closer look.
The goals for this new OS are quite vague and uninteresting. It seems set to repeat most of the problems with existing OSes.
Let's take the package manager for example. Wouldn't it be great if instead of yet another package manager that is a glorified system of install scripts... we had a declarative functional system ala NixOS? That's just scratching the surface.
"Ford already made a car! Making another one is a waste of time!"
"IBM already made a computer! Making another one is a waste of time!"
"Nokia already made a phone! Making another one is a waste of time!"
People choose projects they're passionate about. The majority will likely be a waste of time in the grand scheme of things. Who cares? It's what they want to work on. Is it likely to overtake Linux? No. Is it likely to gain enough adoption to merit taking a look at it? Maybe.
You don't even know whether the way it's architected would provide a better way of doing something that existing operating systems cannot, and yet you're immediately writing it off?
I think I have already heard this somewhere: "<something something> should be enough for everybody".
In fact, I wish we had more diversity as far as operating systems are concerned. For example, BeOS/Haiku would seem to me a saner choice for a desktop than an OS primarily designed for servers. (Windows and Mac OS used to be purely desktop OSes, but not any more.) RISC OS is another interesting example.
I didn't say 100% of anything he says is vitriolic spew. There is no denying the man is a genius with C, but he's certainly built a reputation with his attitude.
Yet Another OS With the Vague Impossible-To-Evaluate Benefit of 'Newness'
edit: not to suggest I have a problem with people trying new things, but rather...why not write up a little essay on the architectural decisions you've made and their impact before releasing it publicly?
Their competition is DSL, Arch, Raspbian and other small/IoT OSes. Not to say their product is worse. But why are they forking instead of supporting existing technologies?
Not saying their work isn't useful. But there's an argument to be made for "why"?
Because taking away mindshare instead of consolidating efforts hurts all parties. Competition is often good when done for very good reasons, iojs vs nodejs, proprietary goals vs purely open source. Their goal is the same goal as many other projects however, instead of supporting those projects they are releasing their own which will likely have to go through a LOT of growing pains to compete with current projects. Simply releasing your own OS (written by two guys) seems naive at best and arrogant at worst.
I felt this way as well (since the mid-nineties) until a couple of weeks ago, when I read _Social Architecture_ by Pieter Hintjens. It has a strong ideological flavour. He stresses that you should focus on building a community, with the code and platform as happy side effects of that community.
For that reason, you want the GPL, because it forces everything back into the community. He gives the example of a friend of his who slaved on a BSD project that got forked, with the friend having nothing to show for it, and of NT's adoption of much of the code from the BSD sockets API.
LLVM has taken off because the code is cleaner and more modular than GCC's. Also, Apple paid for all the boring grunt work. Maybe Apple would have adopted LLVM under the GPL, maybe not.
> He stresses that you should focus on building a community, with the code and platform as happy side-effects of that community.
Exactly, so license shouldn't matter that much. I'd probably be a lot more interested in it were it BSD licensed, as it is now I have no real desire to play with this new OS.
"This software is distributed under the terms of the GNU General Public License version 3 (GPLv3). Minoca offers alternative licensing choices for sale. Contact info@minocacorp.com if you or your company are interested in licensing this software under alternate terms."
It looks like the developers want people modifying their code to contribute back, either by giving back the changes or by paying them (they offer an option to purchase a more permissively-licensed version).
Indeed. The GPL makes sense in the short term, but in the long run a permissively licensed project forces its branches to contribute back to avoid divergence. The GPL makes it harder for commercial entities to invest in it, and a permissively licensed project also poses bigger competition to, e.g., Linux.