This is very similar to HPX, the general purpose C++ runtime system for parallel and distributed applications by the Stellar Group - https://github.com/STEllAR-GROUP/hpx.
I recently saw the excellent video presentation by Hartmut Kaiser on this https://www.youtube.com/watch?v=5xyztU__yys and a lot of the concepts in folly futures are quite similar. However, the most striking thing about HPX was that all the building blocks are serializable; the presenter mentioned that this is so you can serialize a thread, move it to a different machine, and run it there.
>> Bitcoin is not a protocol in any meaningful sense of word. It is a single C++ codebase that you have to be bug-for-bug compatible with.
I don't want to discuss what a protocol is and whether Bitcoin is a real protocol or not. But it's important to notice that it's more sensitive to bugs and unspecified/unexpected behavior than normal software.
If Chrome is not bug-compatible with IE6, you can complain, mark one of them as unsupported, make two versions of your page, pick the common denominator, ...
The same happens with Microsoft Word and OpenOffice Writer, the different implementations of LaTeX, email support for Unicode characters, C/C++ compilers, Python/PyPy/IronPython/Jython, ...
With Bitcoin, each bug difference is a hard fork waiting to happen.
On the other hand, the bug-for-bug compatibility requirement is not unique to Bitcoin. If your Doom fork is not bug-for-bug compatible with the original Doom, replays will go out of sync.
The discussion between Andrei Alexandrescu (Facebook) and Ian Lance Taylor (Google) is highly intellectual, especially the part about how each language addresses concurrency.
"D has resolutely exited C++'s shade because it is good
at things C++ is not good at, in addition to being good at things C++
is good at. Also, Go would be tenuous to frame as a better C because
it cannot do everything C does - e.g. unsafe memory access and manual
memory management, which are needed in certain systems - and it
interoperates poorly with C."
In short, he states that D can do unsafe memory access and manual memory management. Go can't.
Note that the discussion in question dates back to 2010, before the existence of cgo. Go now has quite good C interoperability. And Go can now do unsafe memory access and manual management using the "unsafe" package.
Yes, if I was to learn a systems programming language, I would only pick from those that allow manual memory manipulation. Either that or the ability to trivially integrate with C functions that can do manual memory manipulation.
I fear you are confusing systems programming language and operating systems programming language (or conflating).
Go is a great systems programming language especially for modern concurrent systems — http servers etc. It talks to C easily so you can integrate all kinds of system level code into your apps. Having a Garbage collector makes it trivial to write long-running daemons.
Modern operating systems have traditionally been written in C (and C++) plus some assembly language code. D can do everything C & C++ can do but it would still no doubt need the assembly language code.
Go lacks manual memory management. Some say that this would be a barrier to writing an operating system, while others don't. The fact is that you would have to use some assembly language code to talk to the hardware anyway, and you might need to add some manual memory management via that assembly layer too. After that, I'm sure the garbage collector would make the OS more reliable and potentially a little quicker in places.
Either way I don't see why Go gets criticised for needing a bootstrap layer when operating systems written in C and C++ also need this.
There are tons of user-level applications that need manual memory management and that don't have anything to do with operating systems, like for instance video encoding/decoding/streaming or high-frequency trading.
What really bothers me about Go is not really that it's garbage collected, but rather that its garbage collector sucks so badly. Really, not all garbage collectors are created equal.
For example I'm working on a startup and we've been integrating with various bidding exchanges for serving targeted ads. All the bidding exchanges want the response to be generated in under 100ms, which includes the network roundtrip. This means on the server-side, the average must not be higher than 10ms per request, preferably lower. Scala on the JVM can handle it, but when I tried out Go, it was a disaster ... as that garbage collector stops the world and it's totally unpredictable, so you end up with spikes of latency that can upset your partners and given enough incoming requests, it can also blow up your buffers/queues, crashing your servers. It's also non-compacting, but that's a given, as it's not even fully precise yet.
Which is the reason why integrations with bidding exchanges are usually written in C++ too (in case it's not clear, we are talking about B2B web services). We've gone with the JVM because it provides a good productivity/performance balance, but eternal vigilance is needed in profiling the memory allocation patterns and tuning the garbage collector to handle the load. And Go requires even more tuning. Which is why I sometimes fantasise about a high-level language that allows for manual memory management, as things would be so much easier ... although I'm rooting more for Mozilla's Rust than I am for D.
Sociomantic is a Germany-based company that does online real-time bidding and is using D to do it: you can find more info in their blog post about DConf 2013, with links to two talks about their company and how they dealt with the GC issues you mention.
Interesting that you chose Scala. I have built and managed Jetty-based adservers that talk to bidding exchanges and wondered how the system would perform if the adserver were written in Scala instead. Going with Jetty initially was the right choice, as no one on the team was an expert in Scala, and the base assumption was that Jetty is battle-tested, has good documentation, and is already optimized to perform well from the get-go. So we simply chose Jetty. I never got a chance to build an equivalent system in Scala to compare performance. It would be great if you could shed some light on how you went about building highly performant adservers in Scala and scaling them. Does coding in Scala make it much easier to deal with concurrency (for development and debugging), keep a small codebase, help with rapid iteration, and in general make it fun to program and manage the servers? Do you have a blog post or a write-up on this topic? I encourage you to write one if you don't have one already. Good karma, and an opportunity to showcase your engineering chops to hire other great engineers.
I wrote simulation software to test a product at my previous job; it mostly used Akka, but also used AHC, a plain Java HTTP client library, without problems. Size-wise, the code periodically gained features, but by refactoring often it never grew beyond 2500~3000 LOC. Readability was never harmed; Scala - contrary to what some people think - allows for very clear code. The people who inherited the code are working on adding features. Obfuscated code can be written in PHP if you want (or are unable to do otherwise). Another piece of Scala software was an internal web service written using Unfiltered on Jetty. It's so stable I sometimes just forget it's there.
I think that Objective-C would qualify as a reasonably high-level language and offers a choice between manual and automatic, but very predictable memory management.
> Modern operating systems have traditionally been written in C (and C++) plus some assembly language code. D can do everything C & C++ can do but it would still no doubt need the assembly language code.
Actually, unlike the C and C++ language specifications, in D support for inline assembly is part of the language specification.
> I fear you are confusing systems programming language and operating systems programming language (or conflating).
> Go is a great systems programming language especially for modern concurrent systems — http servers etc. It talks to C easily so you can integrate all kinds of system level code into your apps. Having a Garbage collector makes it trivial to write long-running daemons.
> Modern operating systems have traditionally been written in C (and C++) plus some assembly language code. D can do everything C & C++ can do but it would still no doubt need the assembly language code.
> Go lacks manual memory management. Some say that this would be a barrier for writing an operating system while others don't. The fact that you would have to use some assembly language code to talk to the hardware and you might need to add some manual memory management via assembly language code. After that I'm sure the garbage collector would make the OS more reliable and potentially a little quicker in places.
While I am on the D and Rust field, I support Go's ability to do this.
Go is no different from Oberon in systems capabilities. And Oberon was used to write quite a few desktop systems in use at ETH Zurich during the mid-to-late 90s.
The OS bootloader and the kernel package for hardware interactions were written in assembly, with the remaining parts in Oberon.
>I fear you are confusing systems programming language and operating systems programming language (or conflating).
Systems programming doesn't seem to be a very well defined term. My understanding is that it is certainly not application programming and it requires pretty tight management of hardware resources. That includes things like operating systems, database systems, embedded systems, networking software like firewalls, etc.
I guess it's a difference in terminology. I've never heard anyone refer to writing an HTTP server as "systems programming"; I'm used to "systems programming" meaning the same thing as "operating systems programming". Sometimes people refer to writing higher level OS components (like init or libc) as "systems programming" as well, but I would think that a server that isn't a core part of an OS would just fall under the category of "server programming".
Go can do unsafe memory access, but doing things like pointer arithmetic or casting of a memory blob to an arbitrary struct type is way more painful than it is in C/C++.
In a lot of contexts, this is a feature (makes terrible code smells in normal code easier to see), but depending upon the type of coding you do, you do sometimes find yourself wishing it were a bit easier when dealing with things like graphics APIs where you just want to lock a texture, update some bits in-place, and then unlock it. Sometimes what might be a couple lines of C code are 10s of lines of Go code where you either do crazy gymnastics with the unsafe package, or juggle things in and out of byte buffers.
Rust has safe manual memory management, at the cost of a learning curve. In D you must use the garbage collector if you want memory safety, but it avoids all the complexity of lifetimes and uniqueness. This difference makes the two languages feel pretty different, even if at their core they're pretty similar.
D is more like C++ without macros and with a GC, plus some engineering features (unit testing as a language feature? weird). Rust looks different and behaves differently. For example, an idiomatic foreach block in Rust is syntax sugar for passing a lambda, which is more friendly to parallelism, while in C++/D it is syntax sugar for a classic for-loop with an iterator, which is more efficient in a non-parallel environment.
You might think that unit testing as a language feature is weird. But our experience with it is that this minor-seeming feature has been an enormous success.
It has literally transformed writing D code, and for the better. There's another little feature, -cov, which will tell you which lines of code are covered by the unit tests.
It's hard to overstate the improvement these engender.
> For example, idiomatic foreach block in Rust is a syntax sugar of passing lambda, which is more friendly to parallelism
This changed with the recent 0.8 release: `for` is now syntax sugar for using the Iterator trait, which optimises exactly like C++ (the vector iterator even vectorises when LLVM can do it).
Rust has actually moved to have iterators that are very similar to (a subset of) D's ranges.
I actually think I do. Go does offer a couple of primitives for poking into memory, but it is my opinion that a systems language needs much more refined control over memory layout and allocation.
Still, I think you raised a valid question in the first part of your message, though the second is a bit offensive. I also thank Andrei for his clarification, and I use the opportunity to congratulate him on this milestone for the D language. I like D, and even though I use C++ at work, I know I'd enjoy using D, especially for the features that let my code "see" and "build" the other parts of the code.
I think that a lot of confusion has been caused by Go using the term "systems language," because (as I understand it) Go doesn't mean it in the same way that C++ and D do. The Go folks seem to be thinking systems in the sense of large networks of computers and the like (the kind of stuff that Google typically does), whereas the C++ and D folks are thinking systems in terms of stuff like operating systems. What Go is trying to do does not necessarily require low-level primitives (though it can benefit from them), whereas what C++ and D are trying to do does require such primitives.
The problem is that deciding they can use "systems language" (a term they have since dropped) because "system" means "a set of connected things or parts forming a complex whole" results in every language falling under "systems language."
Others have been using it to distinguish languages suitable for writing operating systems/drivers years before Go introduced this confusion.
When I hear "systems programmer" I think computer-to-computer. Therefore when I hear "systems language" I think computer-to-computer, even absent Go's use of the term. I would not be disappointed if a systems language were not appropriate for creating operating systems.
Now does it allow unsafe memory access, or does it not?
And does the unsafe package allow you to build a C-style manual allocator or not (regardless of whether it integrates with Go's new or make operators)?
However, like many flaws with HN's underdeveloped software, you can't actually do that on HN because the parser includes the trailing > in the URL. So your best bet here is with whitespace.
I'd argue that the world has moved on since 1998. We live in a world full of URL parsers of various abilities, and doggedly enclosing URLs in angle brackets because of a memo written so long ago it also references gopher seems stubborn to no obvious gain. Given the state of the modern internet, I definitely wouldn't consider angle brackets to be the best way to delimit URLs.
> What is certain is that D's type system is expressive enough to allow libraries to reject during compilation embarrassments such as transporting pointers over the network.
How does the language do this? Doesn't that have more to do with the design of the program itself? It certainly is an interesting security idea, much stricter than sending remote procedure calls or serializing objects and sending them across the network.
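As I understand it, in D the serialization library inspects the type at compile time (static introspection in templates) and simply fails to compile for pointer-carrying types. A weaker, runtime analogue can be sketched in Go with reflection; `hasRawPointer` and `send` below are hypothetical names of my own, not any real library's API:

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// hasRawPointer walks a type and reports whether it contains a raw
// pointer, which would be meaningless on another machine.
func hasRawPointer(t reflect.Type) bool {
	switch t.Kind() {
	case reflect.Ptr, reflect.UnsafePointer, reflect.Uintptr:
		return true
	case reflect.Struct:
		for i := 0; i < t.NumField(); i++ {
			if hasRawPointer(t.Field(i).Type) {
				return true
			}
		}
	case reflect.Array, reflect.Slice:
		return hasRawPointer(t.Elem())
	}
	return false
}

// send stands in for a network send that refuses pointer-carrying types.
// Unlike D's version, this rejects at runtime, not during compilation.
func send(v interface{}) error {
	if hasRawPointer(reflect.TypeOf(v)) {
		return errors.New("refusing to transport pointers over the network")
	}
	return nil // serialize and transmit for real here
}

type record struct {
	ID   int64
	Name string
}

type node struct {
	Next *node
}

func main() {
	fmt.Println(send(record{1, "x"})) // accepted
	fmt.Println(send(node{}))         // rejected: contains a pointer
}
```

The difference the quoted text is pointing at is that D moves this check from runtime to the compiler, so the embarrassing program never builds at all.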
This really removes any hurdle I might have had to inspecting the assembly code out of sheer laziness about opening up Visual Studio and starting to debug. Awesome work. If you add some kind of interactivity to the generated assembly, it could be made even more visual.
If you download it to any of the desktop apps, I think you should be able to convert it without a serial number; I've done it several times before and I don't have a Kindle. Here[1] are some Calibre plugins that automate it fairly easily. You could also use the Cloud Reader if you don't want to go through the hassle.
My issue (well, one of my issues) is that I'm on a rather pristine new work computer, and while I do use Calibre at home, I don't want to put it on this machine. Additionally, I don't want to deal with the hassle of creating a Kindle account and installing an application that I would literally use once.
In comparison, if it was a DRM-less epub, or even a PDF, it would be the work of maybe 30 seconds to SCP it to my iPad, open it in iFile, and load it into iBook. No extra applications, no obscure steps to remove the DRM.
The note about the Calibre plugins is appreciated, though, and I'll be sure to keep it in mind if this comes up again. It was more a point about the fact that I don't recall ever having an experience where DRM so blatantly removed value from a product that it was the difference between consuming it and walking away.
That's fair; I also find trying to get books off of the Amazon store irritating. I personally have a nook, but I'll buy eBooks from any store. Regardless, I have to either go through the Kindle desktop app or (ugh) Adobe Digital Editions before I can get it into Calibre, and it's always more of a hassle than it should be.
It appears that Apple has decided to partner with Amazon in order to best Google's Play strategy. I say this because Amazon Prime Video is now officially available on the iPad. Both of these developments have also come after Google launched the Nexus 7. Maybe the talks have been going on for a long time, but I would say this definitely pushed them into making a decision.
It's clear that Amazon is reacting to the Nexus 7. Once Kindle Fire sales slowed down, making more of their content available on iOS (note that Cloud Player has also recently launched on iOS) is an obvious way to make their ecosystem more attractive than Google's.
That being said, I doubt Amazon and Apple are working together. There still aren't any buy buttons in any of Amazon's apps (Kindle, Cloud Player or Instant Video). I'd expect Amazon to insist on some flexibility there as a condition of any partnership.
More importantly, there are also serious antitrust implications to Amazon and Apple working together against Google: They're established players in digital music, digital video and ebooks and, especially combined, are dominant in all three. Ganging up on Google (which is a relatively new entrant in all three) would be a sure-fire way to attract the attention of the DoJ. There isn't even an argument that the DoJ wouldn't find out or wouldn't be able to gather evidence. They're already pursuing a case against Apple for facilitating agency pricing. Any new arrangement Apple makes with Amazon would inevitably come out as part of that. I'd be shocked if either company's general counsel were stupid enough to let talks even exist, let alone go anywhere.