Minoca OS: A new open source operating system (minocacorp.com)
711 points by EvanGr on Oct 31, 2016 | hide | past | favorite | 187 comments


I'm very excited about this! Minoca is an interesting system, and I applaud any attempt to make driver-writing less inherently horrible.

Minoca OS has been around for a while, but the news is that they're GPLv3. I think that's a great thing! The MIT license is good for software that wants to permeate through everything, but for building a community, the GPL is a good idea.

It seems that for any operating system to be successful, it has to carry around POSIX compatibility like an extremely expensive entry pass. I wonder when we will leave that behind? Or if we ever will? I'm glad POSIX is just a layer in Minoca, and not the base of the system, because these days it really should just be treated like a big wad of glue.

PS: I love the object manager. I don't see any particularly ground-breaking networking stack, though. A plan9-inspired networked file system approach would have been amazing, but it seems this project is content with today's more typical approach. Perhaps it is just trying to be less opinionated about network structure than plan9 was?

PPS: I'm terrible at organizing a comment. Maybe I need a blog.


Seems to me like POSIX compatibility is actually a cheap pass that gives you access to a huge software environment. Right on the homepage they mention already having packages for Python, Ruby, Git, Lua, and Node... would that, and thousands of other packages, be feasible without a workable POSIX layer?


> would that, and thousands of other packages, be feasible without a workable POSIX layer?

I think my problem is the core concept. POSIX stands for Portable Operating System Interface (and X stands for Xtreme?). In an age where we spin up entire operating systems to start a single application, why are we defining portability at the operating system level when network portability works so much better?

Keep in mind, this is also an age where systems like Qubes OS can make separate VMs cooperate with each other.

The only sell for POSIX I can think of is performance, and I don't know if I buy it anymore. Why do programs have to be cross-compatible when the concept of an operating system no longer means owning the hardware?


> why are we defining portability at the operating system level when network portability works so much better

A lightweight POSIX-capable system has real value today. Operating systems need to be in more places than just the data center. IoT devices don't have the resources to run a VM or any other fancy containerized environment. POSIX was designed to be used on systems with comparable resources to what many embedded processors now have. It makes sense to leverage the existing codebase where possible.


> POSIX was designed to be used on systems with comparable resources to what many embedded processors now have.

But not comparable environments. Most IoT environments are very small parts of very big systems, and POSIX defines a system with a teletype and a line editor.

I seriously doubt the value of the existing codebase. Saying code made for a server (like most existing Unix code!) is fine for IoT feels wrong, and not just because IoT devices have kilobytes of memory and servers can have gigabytes or terabytes.

I don't think that a universal OS can work well when we know that universal programming languages, data transfer protocols, and everything else didn't. Imagine if we were still doing everything in PL/I. We recognize now that different programming tasks need different programming environments, but we still don't think that different programs need different program environments. It's just strange to me.


> Saying code made for a server (like most existing unix code!) is fine for IOT feels wrong,

There was a time when people ran Linux/SunOS/Ultrix/etc. with tiny amounts of RAM and swap. It's not unusual for an embedded device to have 8, 16, 32 MiB, or more RAM available. Many traditional *nix programs can run unchanged on such hardware.


I think the parent's comment was more along the lines of: just because an embedded device _can_ run a Unix doesn't mean it should have to adopt the semantics and history of the Unix environment that POSIX mandates. In particular, the model of TTYs, filesystems, Unix-style file I/O, Unix-style virtual memory / mmap, etc. It's not that this stuff is expensive on modern embedded systems (which, as pointed out, often have the hardware capacity of high-end systems from 20 years ago); it's that these capabilities are either not needed, or impose a conceptual model of what a computer and operating system have to be that doesn't match what the machine is designed for.


There's a whole market of suppliers that's been going strong for decades, especially for safety-critical embedded systems with untrusted Linux/POSIX apps in separate partitions. QNX is one of the best examples as far as commercial adoption goes:

http://www.qnx.com/content/qnx/en/products/neutrino-rtos/neu...

Example of embedded product in case you wonder about performance cost of all that isolation, healing, and context switching:

https://youtu.be/vPo6gl8N0wM?t=1m20s

EDIT to add the QNX Desktop Demo that came on a single floppy. Throwing it in since you were mentioning resource-constrained systems further down. A floppy is 1.44MB, with base QNX running in the ROMs of embedded systems. You can scale such architectures up or down however you wish. :)

http://crackberry.com/heres-how-qnx-looked-1999-running-144m...


> and X stands for Xtreme?

The X in POSIX is for unIX. Kinda like a pun on all the *nix things that were floating around back then (Xenix, Minix, AIX, Sinix, Ultrix, IRIX, ...)


See "The origin of the name POSIX" https://stallman.org/articles/posix.html


I agree with you in general but IIRC Lua only depends on the C standard library and does not need POSIX to run. There are even versions of it that can run inside the kernel!


Correct.


The alternative would be to provide a hypervisor and boot Linux on top.


> It seems that for any operating system to be successful, it has to carry around POSIX compatibility like an extremely expensive entry pass.

Curious - what are your main objections with POSIX? Is there another system you prefer?


The best thing about POSIX at this point is backward compatibility, which is nothing to be sneered at!

But it carries a lot of (in retrospect) bad habits and decisions from the '60s and '70s, as well as a tendency to redundancy due to some competing standards that were unified and the need for some backward compatibility.

Now not everyone agrees on what is good and what is bad, so some experimentation in this area is good for everyone. Examples of what bother me include the ludicrousness of ioctl(), messed-up/redundant semaphore semantics, ditto for IPC, primitive memory management semantics, fork() (a great hack for its time, but since it's 99.9999% of the time followed by exec(), it should be split into separate address-space and thread management), outdated and simultaneously simplistic and baroque security model(s), and various IO issues too many to go into in an HN comment.

But my most loathed feature is undoubtedly someone else's sacred cow. As I said, letting more flowers bloom is in everybody's interest.


I think the main issues with POSIX are:

- (Correct) IO is ridiculously non-portable and painful in so many ways that it isn't even funny anymore

- Locks are ridiculously non-portable and painful to the point where you're better off just using "mkdir"

- POSIX is stuck in the "everything is bytes and we slap an encoding on it some of the time" era thinking. This makes it painful and hard to implement proper text handling in many instances. This also leads to a lot of bad behaviours.

- Memory management is IMHO lacking from a user space perspective. For example, it's practically impossible to implement a cooperative memory cache on this. To the best of my knowledge no OS has the necessary interfaces, though.

- ioctl as you mentioned

- SysV/POSIX IPC is so bad that no one ever bothered actually using it for anything

- Personally I think it's a misleading API (conceptually; see the text example above), almost to the point of deceptiveness. It's very easy to write correct-looking programs that behave far from intended, especially in edge cases. IMHO code using it is practically unreviewable in everything but the most trivial cases. Non-portability is practically guaranteed; you have to test every platform. Portable code usually turns out to be quite ugly due to platform deficiencies and minor API incompatibilities.


> POSIX is stuck in the "everything is bytes and we slap an encoding on it some of the time" era thinking.

I actually view this as a feature. Encoding/decoding of data should be an application level thing, not an OS-level thing. As far as the OS is concerned, data should be bytes.

(Of course, it is true that, since POSIX defines a terminal spec, it has to at least specify how bytes are mapped to characters that print on the terminal. But I would rather see that removed altogether, so a terminal becomes just another application, than have an OS try to muck about with encodings.)


Applications have for the most part proven that they cannot be trusted to get text encoding and decoding right, especially not in any consistent way. Operating systems definitely should make it possible to deal with the raw byte streams, but the default and preferred method of text handling should be a standard higher-level interface.


> Applications have for the most part proven that they cannot be trusted to get text encoding and decoding right, especially not in any consistent way.

That's because text encoding and decoding is a mess. Operating systems doing it doesn't make it any less of a mess; it just inserts the mess deeper into everything. For example, look at all the quirks and edge cases in file name handling between different OS's, simply because nobody is willing to just admit that to the OS, file names should be sequences of bytes, which are easy to share between machines running different OS's.

The basic issue is that text encoding and decoding exists because bytes have meanings. But unless/until we invent artificial intelligence, computers can't deal with meanings (because the meanings are not simple computable functions of the bytes). And OS's, particularly, should not even try. Applications might have to try, but the cost if they get it wrong is much less.


Regardless of whether operating systems get involved in tasks like re-encoding text, they really should at least carry along the metadata about encodings whenever they're handling bytes that represent strings. Completely ignoring the problem and leaving it up to applications further up the stack just ensures that there will be incompatible competing standards for how to tell applications how to decode the string data they get from the OS. You don't want some apps trying to write filenames in UTF-8 while others use UTF-16, but allowing it to happen silently is even worse.


> Completely ignoring the problem and leaving it up to applications further up the stack just ensures that there will be incompatible competing standards for how to tell applications how to decode the string data they get from the OS.

I think it's naive to think operating systems aren't going to fragment in order to offer "features" (and lock-in), and then papering over all that fragmentation has to happen in the application anyway, unless there's a standard. And if there's a standard, the application itself can deal with it.


> Regardless of whether operating systems get involved in tasks like re-encoding text, they really should at least carry along the metadata about encodings whenever they're handling bytes that represent strings.

I have no problem with this as long as the metadata itself is just additional bytes. But if the metadata needs to be decoded in order to figure out how to decode it, we have a problem... :-)


That's untenable. A higher-level API for strings with encodings needs to get the OS involved in the semantics to at least some extent, or else it merely obfuscates the problem instead of solving it. If the OS provides a way to store strings with a metadata field representing the string encoding, but doesn't define which bit pattern means UTF-8, then all of that extra complexity at best serves to call attention to the fact that encoding matters, but it does nothing to help applications ensure that they correctly interpret data created by a different application. If you're going to give your platform official APIs to address the very real problem of handling string encodings, then they ought to be useful enough to truly make it less of a problem. And since none of this actually precludes also including low-level byte-oriented APIs, there's no justification for stopping with a super-minimalist half-solution.


> If the OS provides a way to store strings with a metadata field representing the string encoding

You're missing my point. The OS should provide a way to store bytes. That's it. The meaning of the bytes is up to the application. If, to the application, the bytes represent text with a certain encoding, then it's up to the application to figure out how to translate the bytes, possibly using other stored bytes to decide. The OS doesn't need to get involved in any of this.

> it does nothing to help applications ensure that they correctly interpret data created by a different application

This is already a solved problem, and it isn't solved by OS's. It's solved by standards. For example, every web browser constantly has to correctly interpret data created by a different application. It can do so because HTML, CSS, JS, etc. are all standards that define how the bytes sent from the server to the client are to be interpreted. The browser doesn't even have to care what OS it's running on; all the OS is doing is giving it network sockets and a place for local data storage.

> If you're going to give your platform official APIs to address the very real problem of handling string encodings

If "platform" means "OS", then no, I'm not. If "platform" means "application framework", then sure, but an application framework is not the same thing as an OS. The fact that many OS's insist on also being application frameworks does not make the two things the same.


> If "platform" means "OS", then no, I'm not. If "platform" means "application framework", then sure, but an application framework is not the same thing as an OS. The fact that many OS's insist on also being application frameworks does not make the two things the same.

As I said originally, we tried that, and it doesn't work. Even within the context of a single locale, .NET applications will happily emit UTF-16 to be consumed by a Python script expecting all strings to be UTF-8, and with only byte-oriented APIs there's no side channel to convey that there's a mismatch that needs to be reconciled. Extending this problem from file and pipe contents to filenames is moving in the wrong direction. Operating systems absolutely should get involved in helping applications safely and usefully exchange information; that doesn't destroy the concept of an application framework, it just means that your OS is more than a hypervisor.
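That mismatch is easy to reproduce. A sketch (not any particular .NET program, just the same byte streams standing in for one):

```python
# Producer side: text serialized as UTF-16 with a BOM, as .NET's
# default string-encoding conventions often produce.
payload = "héllo wörld".encode("utf-16")

# Consumer side: a script that assumes every byte stream is UTF-8.
try:
    text = payload.decode("utf-8")
    mismatch_detected = False
except UnicodeDecodeError:
    mismatch_detected = True

# The UTF-16 BOM (0xFF 0xFE) is not valid UTF-8, so this particular
# stream at least fails loudly; byte streams that happen to be valid
# in both encodings instead decode silently into mojibake.
```

With byte-only channels, the loud failure here is the *good* case; nothing in the API forced either side to declare its encoding.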


Very late post but I can't resist paraphrasing the old joke about regular expressions: some people, whenever they see an application-level problem, think "Oh, I'll just get the OS to solve it!" Now they have two problems.


That's an integration problem and the solution is to pipe it through something that knows enough to do the conversion.


> As far as the OS is concerned, data should be bytes.

If the OS knows the type of those bytes, it can do things like implement global garbage collection, intelligent caching, intelligent snapshotting &c. It can also enforce invariants across all user code.


To me this means you have an application, not an OS--or perhaps an application that also happens to be an OS. (Emacs comes to mind...)


Well, I think that OSes could do a lot more (and kernels a lot less … but that's a different story). Why _shouldn't_ an operating _system_ do an awful lot to ensure user safety, resource utilisation &c.?


> have an OS try to muck about with encodings

Hardware encode/decode often requires DMA capabilities. There are many optimizations that kernel mode can bring.


x86 has string instructions that do not require DMA. Nobody needs to hardware accelerate string decoding and encoding though...


> Hardware encode/decode

Of text?


but is it webscale?


I'll also disagree on the "everything is bytes and we slap an encoding on it some of the time"... But add there:

- User level security, instead of app level or task level, or whatever.

- Aged IPC primitives that assume too much, to the point that modern hardware has to be built around them (and is slower because of that).

- Added later, non-core network support, leading to bad integration.

- Added later, non-core, or sometimes never added encryption support, leading to bad integration.


Well, I'm not going to disagree except to say that there are posix_spawn and vfork. I don't think anyone thinks the IPC solutions on offer are great but perhaps we should lower our expectations on that.
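For what it's worth, posix_spawn really does collapse the fork-then-exec pair into a single call, and Python exposes it directly (os.posix_spawn, Python 3.8+), so a quick sketch is easy. Paths assume a POSIX system:

```python
import os

# posix_spawn(3) creates a child process without an explicit
# fork()+exec() pair in user code -- the "split address-space and
# thread management" complaint upthread is what this API sidesteps.
pid = os.posix_spawn("/bin/sh", ["sh", "-c", "exit 0"], os.environ)

# Reap the child and convert its wait status to an exit code.
_, status = os.waitpid(pid, 0)
exit_code = os.waitstatus_to_exitcode(status)  # 0: child exited cleanly
```

It doesn't fix fork()'s semantics, of course; it just gives the common fork-then-exec case an interface that the kernel can implement without the full address-space copy dance.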


Is there any interest in developing a newer, better standard or is this something we're going to be stuck with forever?


I mean, it sounds like his objection is that an operating system that doesn't have POSIX compatibility is dismissed out of hand.

If we think of POSIX compatibility as something which is required in order for an operating system to be viable at any level, it means that there's a lot of energy in pioneering a new OS devoted to building up this compatibility simply so it can get some exposure.

If we really want to encourage a fresh approach and disseminate new ideas, we need to move away from criteria like this and look more specifically at the principles and work being put forth, and whether or not those are objectively successful or have merit. That's a much better way to move forward.


I think if you were building an OS as an academic research exercise and publishing papers based on it, no one would object to your results on the basis of a lack of POSIX compatibility.

The criteria are different if you're writing an OS for practical use with wide adoption, though. There, backward compatibility is rightly considered a positive attribute - there is much existing software in the world, and the entire point of a practical OS is, after all, to run the software that people use.

Exactly the same situation exists with CPUs. If you come up with a new micro-architecture that you actually want to sell to customers, you'd better come to the party with a C compiler and a ported OS or three.


Microsoft has produced several successful operating systems without POSIX compatibility.



Interix, which later became SFU (Windows Services for UNIX) and finally SUA (Subsystem for Unix-based Applications), was initially developed in the '90s by Softway Systems as an optional add-on subsystem to the Windows family of OSes.

For the record, it did not ship built-in or as part of the kernel. Even within Microsoft, it seemed the project was in perpetual life-support-mode.

Still, I do feel a small pang of sadness. There was once a time, in the early 2000s, when I would have loved to see this chimerical oddity succeed.

[1] https://en.wikipedia.org/wiki/Interix


And we now have the Windows Subsystem for Linux, which I believe is supposed to be entirely new, but carries on the strange saga of Windows and POSIX userspace coming together.


You can use the Plan9 file system on Linux afaik. Still nobody cares.

Mounting my sound card across the network sounds like a nice hack. It is more important that sound does not stutter though. That has soft real time requirements, which Plan9 does not address.


> You can use the Plan9 file system on Linux afaik. Still nobody cares.

Actually, start QEMU with some special arguments and a directory path, and the Linux guest inside will be able to see the given directory as a read-write 9P filesystem, mountable with a single command. (New files get QEMU's UID unless QEMU is run as root.)

> Mounting my sound card across the network sounds like a nice hack. It is more important that sound does not stutter though. That has soft real time requirements, which Plan9 does not address.

An incredibly good point, and why Plan 9 has zero adoption. :(


I ran `cloc` on the minoca/os repository, here are the results:

  github.com/AlDanial/cloc v 1.70  T=22.43 s (84.3 files/s, 63498.9 lines/s)
  -----------------------------------------------------------------------------------
  Language                         files          blank        comment           code
  -----------------------------------------------------------------------------------
  C                                 1023         296251         262652         530443
  C/C++ Header                       438          93833         117592          69529
  Assembly                            95           9588           7773          13383
  make                               270           2236           6402           4784
  Bourne Shell                        38            799           1198           2787
  JSON                                 4              4              0           1367
  Pascal                               4            232             28            718
  Python                               4            194            257            589
  yacc                                 2             99              4            481
  awk                                  3             25             11            263
  Markdown                             3             27              0            172
  Windows Resource File                3             18              0             86
  lex                                  2             44            142             84
  Perl                                 1              7             10             35
  -----------------------------------------------------------------------------------
  SUM:                              1890         403357         396069         624721
  -----------------------------------------------------------------------------------


Shameless plug of `loc`, a rust implementation of `cloc` that is 100+ times faster:

https://github.com/cgag/loc


Has it any remarkable features other than "it's written in Rust"?


100x faster sounds like a remarkable feature to me.

(I haven't tried this yet, though I do use tokei, which is faster but not this fast. Been meaning to try it out.)


> that is 100+ times faster

is that not enough?


Depends on how slow cloc is. 100x faster than "already very fast" is not going to be a perceptible benefit.


The readme talks about dragonfly BSD's codebase; almost two minutes for cloc, just over one second for loc. That's a noticeable difference.


But how often do you cloc the BSD codebase?

I do clocs maybe once a week. So waiting for 2 minutes isn't really a huge issue here. (Well, it's less than 2 minutes for my code bases).

Smells like premature optimization ;)


This is totally true as well.


lol.

I too get tired of seeing all this "take X and write it in rust".

OTOH it's a new area and the younger crowd gets to try and make a name for themselves in something that isn't already fully established.

But seriously, I've never thought to myself "cloc is too slow" and even if I did, I'd run it overnight.


... in Rust!


Aw maaaan. No homebrew yet?


tl;dr, ~625k source lines of code, mostly C, with a bunch of C++ and Assembly too?


And about 2:1 code to comment ratio in the C side, which seems pretty good to me (without poking at other projects for comparison)
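The cloc numbers above bear that out; pulling just the C row:

```python
# C row from the cloc table above
code_lines = 530443
comment_lines = 262652

ratio = code_lines / comment_lines
# roughly 2.02 lines of code per comment line, i.e. about 2:1
```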



I don't think there's any C++, it's just referring to the C headers above.


I have to say... the code is beautiful: Full English descriptive variable names and nearly every function is documented. As AngularJS has shown, cleaner, consistent code and architecture style = more contributors.

Take a look here for example: https://github.com/minoca/os/blob/master/kernel/io/iobase.c#...


Why the initial capital on local variable names? I don't think I've ever seen that in C code. (Usually types and exported functions are capitalized.)

It kind of makes the code look like Pascal. Nothing wrong with that, really.


This looks like the NT kernel coding style, especially with the pointless typedefs for things like PVOID and the PascalCased function names with prefixes like `IopPerformIoOperation`. I looked up the Minoca founders on LinkedIn and, behold, they both worked as engineers on the Windows NT team. :)


NT source code typically (and I say typically, because it's a mish-mash of tens of thousands of people's work over nearly 30 years of development) has local variable names in first-letter-lower camel case. If these guys worked on Windows, it's kind of surprising they didn't follow that convention.


Usually it's uppercase with a lowercase "Hungarian notation" prefix. Without the prefix, it starts with uppercase.


I think it actually makes more sense, since in english we capitalize names (variables), but lowercase generic nouns (types). I agree that it is unusual, though.


You basically just said that the Windows kernel code is beautiful, because they are using exactly the same coding style...


To be fair, the Windows kernel code is beautiful in a lot of ways.


Tcl has some nice code done in a similar style:

https://github.com/tcltk/tcl/blob/master/generic/tclEvent.c


Oh please. You have to start somewhere and this is where they're starting. By comparison, Linus' first announcement was:

  Hello everybody out there using minix -
  I'm doing a (free) operating system (just a hobby,
  won't be big and professional like gnu) for 386(486) AT
  clones.  This has been brewing since april, and is
  starting to get ready.  I'd like any feedback on things
  people like/dislike in minix, as my OS resembles it
  somewhat (same physical layout of the file-system
  (due to practical reasons) among other things).

  I've currently ported bash(1.08) and gcc(1.40),
  and things seem to work.  This implies that I'll get
  something practical within a few months, and

  I'd like to know what features most people would want.
  Any suggestions are welcome, but I won't promise I'll
  implement them :-)

                Linus (torv...@kruuna.helsinki.fi)

  PS.  Yes - it's free of any minix code, and it has a
  multi-threaded fs. It is NOT protable (uses 386 task
  switching etc), and it probably never will support
  anything other than AT-harddisks, as that's all I
  have :-(.
Good luck, guys. Post some docs to read.


Actually there's plenty of docs to read: http://www.minocacorp.com/doc/1375/api/KERNEL_API/

I haven't found something like an architecture overview of the kernel itself, but the docs story isn't bad for a two-person team. Kudos.



lol gregg, so this is what you do in that cave


I'll second this. I'm sure the same comments here could be made about Linux when it started, and I don't see any point in negativity.

Good luck to these people, I hope this becomes one of the major OSes in ten years.


Hosting Node and Python already is a hell of a first release announcement.


I don't think this is a first release announcement, it's been around for several years. This is the announcement that they are releasing the code under GPLv3. Previously it was closed source.


Minoca was previously posted about 6 months ago [0], but it was not open source at the time.

I guess this is kind of like a re-release after not generating much buzz and seeing a lot of comments asking for open source.

[0] https://news.ycombinator.com/item?id=11662057


Amazing, the submissive tone and lack of confidence in that message, compared to the holier-than-thou attitude Linus presents these days. I would never have guessed he wrote that.


I don't think it's lack of confidence. It's just a realistic acknowledgement that it's a small start and in fact was unlikely to be useful. But he wanted feedback on how to make it useful.

I think Linus is just a practical person who is good at assessing reality. He doesn't get caught up in grand visions without action.

I don't think he is holier than thou now either. He's just busy and forcefully trying to get contributors to come to grips with things he has learned by experience on his project.

I've never really heard Linus preach about other people's projects, except where he intends to do better like svn. His advice is limited to the kernel as far as I can tell, and it's perfectly rational for him to be opinionated about that, because he has skin in the game.

In contrast, Stallman will preach about other projects -- in fact that is his main purpose. So I can see why people would call him holier than thou, but I don't see it with Linus. I do appreciate Stallman to a great degree too.


> So I can see why people would call [Stallman] holier than thou

Well he is a saint...

https://stallman.org/saint.html


Good at assessing reality??

The same Linus who fundamentally did not believe in source code management for, what, 20 years, because it made developers "soft"!?


He (a) changed his mind, (b) wrote git.


How he reconciled (b) with his generational ignorance of SCM would be interesting to hear from him.

(a) better late than never!


Did the kernel work for those 20 years? Are git and bitkeeper fundamentally different than his options at the time?

I would consider the possibility that he knows something that you don't.


You might infer that the tone he uses in some emails is for a specific purpose, not because he isn't aware of how to be polite.


Thanks for injecting some sanity :)


Ok, but why are they doing this?


They want an OS that's smarter about power management and easier to maintain. They describe their high-level objectives in their FAQ:

Why is a new operating system necessary?

...The design requirements of today's devices have drastically evolved in areas of power management, security, serviceability, and virtualization... By starting from a clean slate, Minoca OS is able to incorporate those core tenets into the very fabric of the operating system...

How is Minoca OS different from other operating systems?

...One of the most noticeable differences at the kernel level is the uniform driver model, which provides a maintainable interface between the kernel core and device drivers... Another very noticeable difference is our strong emphasis on the kernel development environment. Being able to step through code in the kernel, boot environment, and even firmware was something we felt was critical to quickly developing and maintaining kernel-quality code...

http://www.minocacorp.com/support/faq/


We took a look at the operating systems out there, and realized it had been over 25 years since the major operating systems had been written. 25 years is a long time to accumulate baggage, not to mention the leaps and bounds by which the hardware has evolved during that time.


Because they can!


For learning?


>Under the hood, Minoca contains a powerful driver model between device drivers and the kernel. The idea is that drivers can be written in a forward compatible manner, so kernel level components can be upgraded without requiring a recompilation of all device drivers.

This sounds really smart and it looks great overall <3


That's the "port driver"+"mini port driver" model of Windows.


Not surprising, considering the Minoca founders both worked on the Windows kernel team, according to LinkedIn. :)


Honestly, the inside looks ridiculously like the NT kernel.


So true. It really is my biggest problem with the monolithic Linux kernel.


RHEL/CentOS defines a stable driver ABI, and even has tools that devs can use to check that their binary drivers don't use any symbols outside the ABI.


SUSE does too. As far as I'm aware, most stable distributions use a kABI checker. This is the same reason that Android doesn't have many kernel updates -- because proprietary driver authors don't feel like keeping up to date with kABI changes.


There is nothing that prevents a monolithic kernel from using the same model -- no theoretical barrier to it, in any case. Linux doesn't do this because its developers don't want it to.


That's a fantastic initiative.

> Can we achieve parity with what operating systems are used for in today’s world, but with less code, and with fewer pain points? Can we do better? We’d like to try.

I appreciate the uncertainty!


Any images out there for Beaglebone? Quite interested!

Edit: Haaa, use your eyes and you shall see!

http://www.minocacorp.com/download/#beaglebone-black

Edit: Boo :( I haven't gotten it to boot yet. Boot messages are as follows:

    Minoca Firmware Loader
    Boot Device: 00000008
    Launching bbonefw.bin.
    Jumping to 82000000...
    !


Chris from Minoca here.

What you're seeing is the serial output. If you plug in an HDMI monitor, you'll hopefully already be at the shell prompt. Email me (chris@minocacorp.com) if you run into further issues.


Hey Chris! Very excited about the potential for a lightweight embedded OS.

I think I understand my confusion now: I was hoping for a shell on the serial port. Most of the work I do with BBBs is headless. Does the exclamation point mean it's finished booting to userspace? Do you think you could rig up the serial port with a shell?

How does Minoca handle GPIOs, and other things like RTCs and SPI? I'm sure you know of the device tree and sysfs drivers for GPIO, but I can write you a quick overview!


Perhaps a stupid question but where is the architecture doc? (I think that should be the starting point, especially if you want help from a community.)

I'm also interested in what theoretical results from the last 25 years are being used. Some references to papers would be nice. Since the biggest hurdle is separation and thus security, I can imagine the OS has an embedded theorem prover, but I see no mention of it.


This link was posted elsewhere in the thread:

http://www.minocacorp.com/documentation/developers/knowledge...


"Minoca OS was written by two developers, Evan and Chris, over the course of a few years."

Big kudos.

Going to look at it when time allows


How cool!

I happen to have an interest in OS development myself: it seems that two things have combined to make bespoke OS development practical again. First, the world has mostly standardised on amd64, UEFI &c. None of these standards may be terribly good (e.g. I've always found x86 and its descendants tasteless compared to 68000, MIPS and others), but they're good enough, and now there's a critical mass of community support out there for them.

Second, readily-available virtual machines have made it easier than it ever was to rapidly iterate on an OS concept.

My own interest is in non-POSIX, non-Unix-inspired, non-consumption-oriented systems: I think that there's some huge headway to be made in building computers meant to augment the human brain, rather than to just reproduce the past or serve as an entertainment-enabler. But Minoca OS sounds pretty neat in itself. Kudos to the guys for putting in the work, and kudos for releasing it as open source. I hope that they're able to make some money from it too — good work deserves to be rewarded.


The code gives me flashbacks to working with the Windows API.

Cool project though and it's nice that they already have a handful of drivers.

EDIT: To clarify, I'm not saying it's bad at all, they're just using a very verbose naming convention that I don't particularly like working with.


It looks like they have basically adopted the coding conventions of Windows API code, down to the /*++ --*/ comments, style of declaring functions, uppercase types etc. Not that it's a bad thing, just something I noticed.

Also things like KeCrashSystemEx look very much like KeBugCheckEx in Windows. There certainly is a lot of inspiration.

Edit: I see from LinkedIn the OP is actually an ex-MSFTer who worked on Windows' HAL, so this makes sense. I wonder, tho, if this will present legal issues. Could someone say that it is an issue that the API is clearly inspired by the Windows API?


While there are the usual legal bizarreries ( http://blog.smartbear.com/apis/api-copyright-and-why-you-sho... ), there's no reason this should be any different than copyright of code. Design-inspired is not design infringement.


http://www.minocacorp.com/community/

Windows Environment - If you're looking to develop on Windows, you may find this repository helpful, as it contains a native MinGW compiler, make, and other tools needed to set up a Minoca OS development environment on Windows.

Nice to see that development on Windows is supported :)


I wish them well, but I'm rather surprised to see no support for 64-bit architecture. Seems an odd place to start.


well, like they wrote, they want to support small, power limited devices as well.


Today even smartphones are ARM64. They are powerful, yet power limited.


Look at the tag at the end of the article. It's targeting Internet of Things devices, which will be a lot more power constrained than smartphones.


I would love to see architecture and comparison to other kernels described!

Also, I wonder if this can be made to run on Cortex M4 with "MPU" but not "MMU" hardware? Something to run on the Teensy 3.6 would be interesting.

BTW: Binary compatible function driver interfaces are great! They served us well on BeOS. The Linux "recompile everything under the sun" approach really gets in the way for many real world situations.

Finally, I'm sick and tired of POSIX I/O. It's a really old model, and a really bad match to modern hardware and use cases. Someone needs to develop and popularize the callback/messaging based kernel/application interface of the future, complete with something other than the main() entry point...


> Finally, I'm sick and tired of POSIX I/O.

How do you feel about kqueue or I/O completion ports? Are those something like the "interface of the future", or just greasing the old machinery?


NT coding style and some similarities with naming. Very clean looking in what I've seen so far. Nice work guys.


Have to agree about the good work part. Sadly, my personal opinion is that I really, really dislike the NT coding style. That makes me wonder how coding style affects people's will to contribute. Compare this to the Linux kernel coding style and they're on opposite ends of the scale; I wonder how it will affect contributions.


Windows can do this with its subsystems (Win32, POSIX, OS/2, Linux). Perhaps they will reuse some of that idea so your preference might be swayed.


Since Minoca seems to be a Corp., does anyone know what their business strategy will be? I could imagine a) a paid enterprise edition, b) paid support or c) paid consultancy (as in "contract the actual maintainers to implement device drivers for your servers").

Whenever a project is corporate-backed, I like to know the business plan upfront before I consider contributing to it.


Right now it seems that the business model is the MySQL one: provide the code as GPLv3 and sell licenses if someone wants to use it in a proprietary context.

And correct me if I am wrong but I think the corp is just the 2 developers right now.


They are hiring [0], so they have money from somewhere?

[0] http://www.minocacorp.com/careers/


From the CONTRIBUTING.md:

>We at Minoca are trying to make open source work as a business model. One of the ways we're doing that is by offering Minoca OS source for sale under more proprietary licensing terms. To do this Minoca needs to own the copyright to its source. In order to support this business model while also allowing community contributions, we ask that contributors sign a Contributor Assignment Agreement. We're using Harmony Agreements. Before submitting patches, please fill out the CAA for individuals or companies.


Question to authors: do you have any graphics layer in-place? Or at least thoughts about its future architecture?

I am thinking about porting my Sciter [1] HTML/CSS UI Engine to an OS that can be used on small (IoT) devices. I think that HTML/CSS as a UI definition/declaration language is quite convenient.

[1] http://sciter.com


OP here. Sciter looks cool, nice site too. Currently Minoca has support for a basic framebuffer, which we use to display our green terminal. Our biggest obstacle to having a GUI now is the lack of accelerated graphics drivers. We'd like to add them, but it will be a very large undertaking and probably require cooperation from one of the major graphics vendors.


Once you get at least one accelerated graphics driver implemented, would you design your own GUI framework/toolkit, or use an existing one like Qt?


How does this compare to Minix 3? That's a microkernel-based OS, so drivers are isolated, and it's also pretty small.


Interesting take!

If I were developing a new OS, I certainly wouldn't have a default text terminal...

That just looks gross.

But my interests differ. I would focus on the graphical user interface as my number one concern.

It would always run in an OpenGL-type graphics mode, where you could spawn any number of text terminal-style windows, but I would definitely skip a full-screen text mode. It's just never needed.

Does that make sense?


Yes and no. There are so many different parts to an operating system that you quickly favor the text console because it's so easy to do. If you're going to work on a GUI as the first thing, you might as well forgo the operating system bit and just write it as a program on some other OS while you perfect it.


MacOS Classic was GUI first. There was no terminal; the closest thing to a console was the MacsBug debugging interface for inspecting the assembly instructions.


Since it runs well on ARM, you guys might want to consider reaching out to the Raspberry Pi organization for collaboration.


Sounds like it could be interesting, looking forward to seeing more. Is it a monolithic kernel or a microkernel? Hybrid? :P


"...it’s simply not in widespread circulation."

And won't be under GPLv3. If your goals are to prevent others from locking up your code in proprietary products, and to give your code to the world, then this license is for you. If your goal is "widespread circulation," this is not the license you want.


Is Linux more widely used than any of the BSD's?

Yes.

So the license doesn't seem to be much of an impediment if the functionality advantages are big enough.

This meme "seems" right, but I see less evidence for it in the field.


Linux is not GPLv3 for a reason[1]. Not to mention it's only a kernel, not an entire OS. Basing your OS on a Linux kernel doesn't expose your other code to the GPL. Writing apps to run on a Linux system doesn't expose your code to the GPL.

The Minoca OS folks are asking for the community to contribute code to their OS that's licensed under GPLv3. GPLv3 supporters are necessarily a subset of open source supporters. I contend that the subset is small enough that a Minoca OS licensed under GPLv3 will continue to experience a lack of widespread circulation.

1 - https://www.youtube.com/watch?v=PaKIZ7gJlRU


A barebones install of debian-testing or debian-unstable, running no services, takes about 68MB of RAM, whether in i386 or amd64 kernel versions, and with a very recent v4.7 or v4.8 series kernel... how much more lightweight do you need?


On a PC, 68 MB is nothing.

But in embedded space, 68 MB of RAM just for the system is pretty big. Many systems have a fraction of that; 1-256 kB of RAM is common. Of course embedded systems can also have gigabytes of RAM. It's a wide spectrum, so as small as possible is preferable.


Exciting times. First http://github.com/fuchsia-mirror/ and now this


I would like to see a built-in natural-language interface so I can, for example, "ps -elf | grep something" with a voice command.

In the future we will need these interfaces, so if this is built into the OS that would be a nice advantage :)


"need". I don't think you understand what the term really means.


Well if you want to play that game, we don't "need" any computers at all.


Very little information on their site about the actual architecture of the system. That's a red flag.

I'd be interested if it was based on a post-Liedtke microkernel, but I suspect it's yet another boring monolith.


[flagged]


I have such links. I just can't tell if you're being serious or sarcastic.


It depends; if gp has actual things to back-up what they're saying, then I'd actually like to read them. I'm interested in monokernels because of Haiku, but I'm definitely not an expert.

OTOH, somebody releases a whole OS and the response is "Snore! It isn't a microkernel." - Ok, so...why?


Start with this paper which lays the reliability benefit out pretty well with specific examples:

http://cs.furman.edu/~chealy/cs75/important%20papers/secure%...

Start at "The Paper" here to skip past the Linus vs Tanenbaum politics stuff. He describes the common counterpoints and shows with evidence, including existing systems, that they're not as big a deal as people say.

http://www.cs.vu.nl//~ast/reliable-os/

Example from high-assurance world that has many features & assurance activities a FOSS attempt at secure microkernels should consider copying:

http://www.ghs.com/products/safety_critical/integrity-do-178...

Animats and I think QNX is probably best of commercial ones in balancing all kinds of tradeoffs. It's been used for decades as a self-healing RTOS with good performance. First link is their description of it with second a demo of a product with QNX inside showing how fast it can be on non-desktop hardware.

http://www.qnx.com/content/qnx/en/products/neutrino-rtos/neu...

https://youtu.be/vPo6gl8N0wM?t=1m20s

Open-source one aimed at reliability & legacy compatibility you can play with. Took UNIX decades to get reliable despite all the labor but Minix 3's foundation did it with a handful of people over a few years. That's saying something.

http://wiki.minix3.org/doku.php?id=www:documentation:feature...

Another FOSS one that aims at high-security integrating many best-of-breed components from CompSci like Nitpicker GUI and seL4 microkernel. First link is descriptive slides with second the actual site. This one is still new so will have bugs.

https://archive.fosdem.org/2015/schedule/event/genode_os_sec...

https://genode.org/about/

Finally, it's worthwhile to throw in an exemplary one from capability-security that further isolates things with self-healing properties. Based on commercially successful KeyKOS system on mainframes. No longer maintained but docs and GPL code still available for study or revival. Paper also describes other capability kernels.

https://www.cs.ucsb.edu/~chris/teaching/cs290/doc/eros-sosp9...

So, there's you a few days worth of reading and a few years worth of thinking to do. Hope it helps shed light on why almost every safety- or security-critical system that ever did well in reliability or security was a microkernel-based system. These days, high-assurance is looking at eliminating even it with CPU's with built-in security, compiler techniques for automated safety/security, and DSL's for easy formal verification of OS or system components. Until that's finalized & while using traditional hardware, it's best to build on methods that already worked for decades.


Neat, a bunch of these had escaped my radar, even.

I'm glad to see you around; A bit of hope to contrast with all the zero-research-done yet anti-microkernel naysayers.


Appreciate it. I try to stay evidence-driven. :) For extra data, Google Gernot Heiser with "L4," microkernel, or OKL4 terms + "evaluation" or "performance." He published lots of comparisons as they put it on lots of phones.


You're right to mention these. Spent several evenings reading Heiser's blog and NICTA SSRG papers a few months ago.

I'm more positive about microkernels these days; Activity is increasing and milestones are being reached.


This is impressive. I'll give this a shot :-)


Very cool. Is it possible to run it in VirtualBox/VMware?


You need design help?


Would be amazing to see .NET Core run on Minoca


Is it able to self-host?


Why's it written in C? :(


What would you rather?


Took a look at the github page. Sounds really impressive, lads. I only wish it were a microkernel and licensed with an 'or later'. In fact, given how well copyleft works for system software, I'd consider AGPL. Anyway, it definitely deserves a closer look.


The goals for this new OS are quite vague and uninteresting. It seems set to repeat most of the problems with existing OSes.

Let's take the package manager, for example. Wouldn't it be great if, instead of yet another package manager that is a glorified system of install scripts, we had a declarative functional system à la NixOS? That's just scratching the surface.


this should be interesting. POSIX support? I'm game.

I hope they don't try to compete with Linux. just do their own thing and see where it goes. usually the competitive edge kills projects like these.

happy to see new kernels popping up!


can it run on raspberry pi?


Yes, they have pre-built images for RPi: http://www.minocacorp.com/download/


[flagged]


This is a really narrow minded point of view.

"Ford already made a car! Making another one is a waste of time!"

"IBM already made a computer! Making another one is a waste of time!"

"Nokia already made a phone! Making another one is a waste of time!"

People choose projects they're passionate about. The majority will likely be a waste of time in the grand scheme of things. Who cares? It's what they want to work on. Is it likely to overtake Linux? No. Is it likely to gain enough adoption to merit taking a look at it? Maybe.

You don't even know if the way it's architected would provide a better way of doing something that existing operating systems cannot, and yet you're immediately writing it off?


I think I have already heard this somewhere: "<something something> should be enough for everybody".

In fact, I wish we had more diversity as far as operating systems are concerned. For example, BeOS/Haiku would seem to me a saner choice for a desktop than an OS primarily designed for servers. (Windows and Mac OS used to be purely desktop OS, but not any more.) RISC OS is another interesting example.


[flagged]


We detached this flagged subthread from https://news.ycombinator.com/item?id=12840413.


Do you think that 100% of Linus's emails are vitrolic spew? He writes kind emails too, you know. Even today.


I didn't say 100% of anything he says is vitriolic spew. There is no denying the man is a genius with C, but he's certainly built a reputation with his attitude.


MINHOCA


YAOSWVITEBN

Yet Another OS With the Vague Impossible-To-Evaluate Benefit of 'Newness'

edit: not to suggest I have a problem with people trying new things, but rather...why not write up a little essay on the architectural decisions you've made and their impact before releasing it publicly?


Just because you're not the target demographic doesn't mean the demographic doesn't exist.


If the demographic exists, they wouldn't know it from reading this blog post.


Their competition is DSL, Arch, Raspbian, and other small/IoT OSes. Not to say their product is worse. But why are they forking instead of supporting existing technologies?

Not saying their work isn't useful. But there's an argument to be made for "why"?


Because they can. There's your why.

Now you get to answer the "why not?".


Also, as they put it themselves:

> We wanted to see if with 25 years of hindsight and a clean slate we could create something interesting and unique in the operating systems space.


Because taking away mindshare instead of consolidating efforts hurts all parties. Competition is often good when done for very good reasons (iojs vs nodejs, proprietary goals vs purely open source). Their goal is the same as many other projects'; however, instead of supporting those projects they are releasing their own, which will likely have to go through a LOT of growing pains to compete with current projects. Simply releasing your own OS (written by two guys) seems naive at best and arrogant at worst.


Also, GPL3? Not my favorite license. (MIT/BSD are the best IMO)


I felt this way as well (since the mid-nineties) until a couple of weeks ago when I read _Social Architecture_ by Pieter Hintjens. It has a strong ideological flavour. He stresses that you should focus on building a community, with the code and platform as happy side-effects of that community.

For that reason, you want the GPL, because it forces everything back into the community. He gives examples: a friend of his who slaved on a BSD project that got forked, with the friend having nothing to show for it, and NT's adoption of much of the BSD sockets code.


GPL doesn't force anything back into the community. It only requires freedom for users, not code for upstream.


So why has llvm/clang taken off as well as it has? It's got a huge developer community and no one is forced to contribute back.


LLVM has taken off because the code is cleaner and more modular than GCC's. Also, Apple paid for all the boring grunt work. Maybe Apple would have adopted LLVM under the GPL, maybe not.


> He stresses that you should focus on building a community, with the code and platform as happy side-effects of that community.

Exactly, so license shouldn't matter that much. I'd probably be a lot more interested in it were it BSD licensed, as it is now I have no real desire to play with this new OS.


Can you explain why you prefer BSD as a contributor (I'm assuming you're not running an OS company) so much that you won't help a GPL project?


Seconded on the BSD license being the way forward.


"This software is distributed under the terms of the GNU General Public License version 3 (GPLv3). Minoca offers alternative licensing choices for sale. Contact info@minocacorp.com if you or your company are interested in licensing this software under alternate terms."

https://github.com/minoca/os/blob/master/LICENSE

Basically, the old MySQL business model.


It looks like the developers want people modifying their code to contribute back, either by giving back the changes or by paying them (they offer an option to purchase a more permissively-licensed version).


Indeed. GPL makes sense in the short term, but in the long run a permissively licensed project forces its branches to contribute back to avoid divergence. GPL makes it harder for commercial entities to invest in it, and a permissively licensed project also poses bigger competition to e.g. Linux.


"lean, maintainable, modular, and compatible with existing software." - i wonder how long that will last.



