Hacker News: adamretter's comments

I'm really surprised I didn't see any mention of LwDITA yet. It can be expressed in XML, HTML, or Markdown. For us it is the sweet spot between the too little provided by Markdown and the too much provided by DITA or DocBook.


Hmmm... How do you mean?


It was just a dig at XML's reputation for being unpleasant.


Aggh! Sorry. I have contacted the site's admin to let them know. In the meantime there is also a LinkedIn Group - https://www.linkedin.com/groups/2043439/

There is also a registration page with Ticket Source - https://www.ticketsource.co.uk/booking/select/ngdvzeggmorw


That video is from 2013. Is anyone aware of an update covering the last ten years, 2013 to 2023? I would like to understand whether seL4 is still considered "current", or whether there have been newer developments since then that are worth considering. I have searched around a bit, but apart from Google Fuchsia's Zircon, unikernels like UniKraft, other L4 spin-offs, and XNU, I am not finding much about newer modern microkernels.


There was an seL4 summit last year:

https://www.youtube.com/@seL4/videos

Anyway, the trend has been that regular mainstream kernels steadily adopt more microkernel-like features when it can be shown not to harm performance too much. macOS/iOS aren't technically microkernels, but they incorporate Mach into the core, and a typical system will have thousands of possible servers that can be reached via Mach. Those servers are all sandboxed pretty heavily too, so you get the security benefits. The core filesystem and networking stack do still run in kernel mode because there aren't many benefits to moving them (pushing them to user space doesn't remove them from the TCB), but over time more and more stuff has been kicked out to user space. The same can be seen in Windows, where over time more subsystems get extracted to user space servers.

Linux has a less well-defined architecture than Apple's platforms, and there are far fewer services reachable via D-Bus than on macOS, but the same trends can be seen there too, with support for direct user space access to devices, FUSE, user space schedulers, eBPF, and so on.

So I don't think there's much interest in pure microkernels now. Linux has got flexible enough that you can make it as micro-kernelly as you want, but the current balance seems about right for nearly all use cases. The stuff that remains in-kernel generally isn't a big source of vulnerabilities; moving it to userspace wouldn't help much anyway, but would reduce performance a lot.


I don't see this trend with Linux. FUSE is very old and does not seem to get much traction. User space schedulers: where are they used? eBPF is more like the other way around: people want to run more stuff inside the kernel.

Honestly, I feel that Linux server users are performance freaks and will kill for 0.1% performance. So it's very unlikely that they'll trade anything for it. They don't need stability; they'll just recreate the server if necessary. They need only the absolute minimum of security (otherwise they would use VMs instead of containers).


It is definitely there with containers all over the place, killing away any performance benefits of a monolithic kernel.


On Linux it's more about what can be done. Agree that server users don't care about microkernels.


For high-speed networking, exokernel concepts are now being used in the form of DPDK (user space) and eBPF/XDP (user code dynamically verified and loaded into kernel space). Exokernels aimed to move kernel functionalities not into a bunch of separate processes like microkernels, but into libraries. In the late 1990s, I worked on such a system which unfortunately fell victim to the dotcom crash.

https://en.wikipedia.org/wiki/Exokernel


QNX 8.0 was just released. The version bump represents a rewritten microkernel.


Is QNX 8 seL4 based?


QNX predates the first L4 release by at least 10 years. Unless they had a major rewrite I wouldn't assume so.


Interestingly, QNX designers have learned and applied one of the same lessons as L4 designers: asynchronous messaging is messy regarding resource management and slower than well-executed synchronous messaging. QNX and L4 both use synchronous messaging for the vast majority of tasks.


I find it unfortunate, since I think async should be the default model for communication. Similar to message passing with shared memory as an optimization, I wonder if async messaging with sync messaging as an optimization is feasible. Async in general does make reasoning about the program more difficult.


My intuition is similar to yours, but I trust people who have done the thing more than your or my intuition.

Sync seems to require very responsive receivers, which is a desired property anyway, so maybe the downside isn't that great.
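The trade-off can be sketched in ordinary user-space code (a toy model using threads and queues, not seL4's or QNX's actual API): a synchronous "call" blocks the client on a per-call reply box, so at most one message per client is ever in flight and nothing needs unbounded kernel buffering.

```python
import queue
import threading

def server(requests: queue.Queue) -> None:
    """Toy IPC server: take a request, do the work, unblock the caller."""
    while True:
        payload, reply_box = requests.get()
        if payload is None:  # shutdown sentinel
            break
        reply_box.put(payload.upper())  # reply releases the blocked client

requests: queue.Queue = queue.Queue()
threading.Thread(target=server, args=(requests,), daemon=True).start()

def call(payload: str) -> str:
    """Synchronous rendezvous: send, then block until the reply arrives."""
    reply_box: queue.Queue = queue.Queue(maxsize=1)
    requests.put((payload, reply_box))
    return reply_box.get()  # the client cannot run ahead of the server

print(call("ping"))  # PING
```

An async mailbox, by contrast, would let clients keep posting into `requests` without ever blocking, which is exactly where the resource-management mess starts: someone has to hold all those in-flight messages.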


Mach (with Hurd) went down this rabbit hole and utterly failed. Mailboxes were the cause of the disaster.


The comment I was replying to seemed to imply that QNX 8.0 is a full rewrite. I'm not sure how relevant that statement was here, unless the rewrite is seL4 based.


seL4 is still being worked on. There are recent changes to the way time is tracked. I would say that in terms of research seL4 is still up there. The current trend is very much on verification of user space, and also verification chains down to RISC-V.


Probably the biggest development in seL4 since this is the MCS (mixed criticality systems) addition, which provides capabilities for budgeting CPU usage to give guarantees for components that need higher priority. There are some videos by Gernot Heiser on YouTube covering it.


I don't know how complete it is -- it doesn't list DeVault's Helios -- but various projects are listed at http://www.microkernel.info/


I just thought: if L4Ka::Pistachio was a thing back in the day (the C++ rewrite, the new thing when I did my masters in Karlsruhe), someone must have written a microkernel OS in Rust by now, and here it is: https://www.redox-os.org/ . So sad that Liedtke died so early; I really wonder what L5 would have looked like.


OKL4 is the most widely used L4 spinoff, and Kernkonzept's L4Re based on Dresden's Fiasco comes much closer than seL4. https://l4re.org/

I don't consider seL4 current, more like academic research.


Motūrus OS (https://github.com/moturus/motor-os) has a newer microkernel.


seL4 remains the state of the art.


I set this up to provide RDP access to 10 Ubuntu LXDE VMs that we used for students on a training course. It worked very well in the browser for the most part, but it isn't yet quite as smooth as using the Microsoft Remote Desktop client. Very impressive though :-)


What were you using for the RDP server on Ubuntu? Was it xrdp?


Have you considered Meta's RocksDB as an option?


I wrote a time-traveling database (where you can query a table/row as of a specific point in time and join it to data at another point in time; we used this for AI training to predict future behavior in users) completely from scratch, built on Hadoop/HBase. That was the coolest work project ever, btw. I understand RocksDB is fairly similar... however, I want to stay as far away from those kinds of APIs as possible. I have scars from dealing with HBase, writing query planners, and figuring out how to do performant joins in a white-room type environment. No. Thank. You.

It was fun at the time, but I don't want to go near it ever again.
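For a flavour of what an "as of" lookup involves, here is a toy sketch with made-up data (not the poster's actual system): each row keeps its versions sorted by timestamp, and a point-in-time query binary-searches for the latest version at or before the requested time.

```python
import bisect

# Hypothetical version history for one row, sorted by timestamp.
timestamps = [10, 25, 40]
values = ["trial", "active", "churned"]

def as_of(t: int):
    """Return the row's value as of time t, or None if it didn't exist yet."""
    i = bisect.bisect_right(timestamps, t)  # rightmost timestamp <= t
    return values[i - 1] if i else None

print(as_of(30))  # active
```

A temporal join is then just performing this lookup for each side of the join at its own requested timestamp.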


[RocksDB](https://rocksdb.org/) isn’t a distributed storage system, fwiw. It’s an embedded KV engine similar to LevelDB, LMDB, or really SQLite (though that’s full SQL, not just KV).


Yes, it's based on the same paper as HBase, IIRC.


To be perhaps overly detailed: Hbase is an open source approximation of bigtable. Bigtable _uses_ leveldb as its per-shard local storage mechanism; Rocks is a clone+extension of leveldb.

Bigtable and hbase are higher level and provide functionality across shards and machines. Level and rocks are building blocks that provide a log-structured merge tree storage and retrieval mechanism.
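The log-structured merge tree read path can be sketched minimally (a hypothetical toy that ignores compaction, bloom filters, and the on-disk SST format): writes go to an in-memory table that is periodically flushed to an immutable sorted run, and reads check the memtable first, then runs from newest to oldest.

```python
memtable: dict = {}
runs: list = []  # each run is a dict frozen at flush time, newest last

def put(k: str, v) -> None:
    memtable[k] = v
    if len(memtable) >= 2:  # tiny flush threshold, for illustration only
        runs.append(dict(sorted(memtable.items())))  # flush as a sorted run
        memtable.clear()

def get(k: str):
    if k in memtable:          # freshest data lives in the memtable
        return memtable[k]
    for run in reversed(runs): # then newest run wins over older runs
        if k in run:
            return run[k]
    return None

put("a", 1); put("b", 2)   # second put triggers a flush
put("a", 3)                # newer value shadows the flushed one
print(get("a"), get("b"))  # 3 2
```

Real engines periodically merge (compact) overlapping runs so reads don't have to probe an ever-growing list, which is where the "merge" in LSM comes from.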


> Bigtable _uses_ leveldb as its per-shard local storage mechanism

Ah, that's probably what I'm conflating with it then.

Thanks for the information.


By default RocksDB uses a BytewiseComparator to sort the keys in the SST files. However, RocksDB allows you to provide any comparator you wish, so ultimately it will depend on the performance of the comparator that you implement.
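In RocksDB the real mechanism is subclassing the C++ `rocksdb::Comparator`, but the effect of the key order is easy to illustrate in plain Python: bytewise order sorts "10" before "9", while a custom comparator can impose, say, numeric order instead.

```python
keys = [b"9", b"10", b"2"]

# Default bytewise order, like RocksDB's BytewiseComparator:
print(sorted(keys))                        # [b'10', b'2', b'9']

# A custom "numeric" order, like a user-supplied comparator would give:
print(sorted(keys, key=lambda k: int(k)))  # [b'2', b'9', b'10']
```

A common alternative to paying for a comparator on every key comparison is to encode keys so that the default bytewise order already matches the order you want (e.g. fixed-width big-endian integers).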


Back around 1996 I was working for a non-profit rural digital project in Devon, Southwest England, called Project COSMIC. If I recall correctly there were 2 main types of ISDN in the UK at that time (a) on-demand, which was similar to dial-up, and needed an ISDN modem, or (b) leased line ISDN which was always on 24x7 and terminated in a Router. I was lucky enough at COSMIC to have access to a 64 Kbit/s ISDN leased line. As I recall it was insanely expensive, I think it was about £16,000 / year in 1996, and that doesn't include the initial install cost where they had to drag a cable across the fields and install poles.

On the end of COSMIC's leased line, after the Bay Networks router, we had a simple Ethernet hub, a Windows NT 4.0 Server, and several Windows 95 desktops. We used IIS to serve websites and email for customers from the NT server, and all the machines had public IP addresses without a firewall - the Internet was a different place back then!

A few years later ~1999 they upgraded to a 128 Kbit/s ISDN leased line, I think costs had decreased and the additional capacity did not cause a large jump in price. It was brilliant! Many games of Quake were hosted ;-)

Later, around 2004, I worked for a company that had a leased line, but it used an entirely different technology, LES10 (10 Mbit/s), which had a range limitation: you had to be within perhaps 10 km of your ISP.

Of course, just before that, around 2000/2001, some parts of the UK started to get ADSL trial roll-outs. I was lucky enough to be at university in one of the first areas (Derby), and we were able to get a 1 Mbit/s connection for about £50 / month, which we shared among the 5 of us in our student house. After that I didn't see ISDN around much, and LES10 was a pretty niche use case anyway.

Today I am happily sat on the end of a 100 Mbit/s consumer microwave connection for about €30 / month.


> 64 Kbit/s ISDN leased line

That would have been Kilostream, which wasn't part of the ISDN system; Kilostream pre-dated ISDN. The reason for its high cost was that you could throw as much data as you wanted at it 24 hours a day, and it was a point-to-point dedicated circuit, i.e. you couldn't "dial up" different locations the way you could with ISDN.


@teh_klev That's interesting. I thought I also remembered the DSU being branded with a BT ISDN logo, but perhaps I am mistaken; it was a long time ago now!


Personally I love my Redis shirt and my RocksDB shirt - both excellent materials and colours.


This looks great :-) However reading the How it works section, it states:

"Each form needs an email address to be set for completed forms to be sent to when they’re submitted"

So... erm, is an email inbox needed for integration? How very 1990s!

In addition to email, for use by developers in other UK GOV departments, surely an HTTP PUT of an XML or JSON document to a user-nominated URL (with a provided auth token) would have been trivial to achieve?
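As a sketch of that suggested integration (the endpoint URL and token here are entirely hypothetical, and nothing is actually sent on the wire):

```python
import json
import urllib.request

# A submitted form, serialized as JSON and PUT to a department-nominated
# URL with a bearer token. Made-up data and endpoint, for illustration.
form_data = {"name": "A. Citizen", "issue": "missing manhole cover"}

req = urllib.request.Request(
    url="https://api.example.gov.uk/forms/123",  # hypothetical endpoint
    data=json.dumps(form_data).encode("utf-8"),
    method="PUT",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token-issued-to-the-department>",
    },
)
# urllib.request.urlopen(req) would perform the actual request.
print(req.method, req.get_header("Content-type"))  # PUT application/json
```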


It looks like this service is intended to replace the kind of random "notify us of a missing manhole cover" type forms that are found in their thousands on government websites. For those types of applications, emailing the form to a relevant mailbox is probably the correct thing to do, and in many cases it's the way the existing forms already work. Only a small fraction of government services will have their own custom backend application supporting them.


I get the sentiment, but an email is just a standardised queue. What were you going to do after the HTTP PUT? My guess is put it onto a queue.


It's a standardized queue with a lot of features too...

You can set up filters and forwarding rules. It doesn't require a programmer or special knowledge to do so.

The queue can be viewed by one or many humans.

Messages don't have to be dealt with in-order.

read/unread state, stars, labels, drafts all allow construction of advanced workflows with no specialist knowledge.

Sure, all these things can be done better with special software, but there is a massive benefit to something that all your existing untrained and probably-not-well-paid employees can set up themselves.


As far as I know, the audience for gov.uk forms is teams that have no developers. So for that audience, email is very much the right approach.

Often the thing you are replacing is "fill in this PDF and email it to this inbox", so it allows people to improve things for external users without changing their workflow.


Old school HTML Forms don't require an active backend. They can be a purely static site.

> surely HTTP PUT of an XML or JSON document to a user nominated URL

This is the bit that's more tricky than a static site. And then you need to do something with the data.


The main intent here is for non-technical folks to be able to replace the tens of thousands of low volume PDF forms, not for technical ones to use it while building actual services.


You can optionally pass an onSubmit function that will return the fax number that the form should be submitted to after printing.


It is really cool, but at the same time I cannot stop smiling when the government is telling you about best practices for how to use web technologies. It is just funny.

