What will be the legacy of Go? (cheney.net)
101 points by clessg on Nov 14, 2015 | 107 comments


Go was written to solve an internal problem at Google. Google needed efficient server side applications. Python was too slow, and C++ was too hard to write, too buggy, and a pain for string work. Google then developed good-quality libraries for all the things one needs for such applications. That made it an excellent language for server side work.

As a language in general, Go is practical and mediocre. This may be a strength. Go lacks generics, and thus has to over-use introspection. In practice, while "interface{}" implies extra run-time work, it tends not to end up in compute bound code. (Two notable exceptions: sorting and database access.) So that turned out to be less of a hassle than expected. Also, from Google's internal perspective, they're mostly using their own data structures, and type-specific code may not be a big problem.
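
To make the sorting point concrete, here is a minimal sketch of what pre-generics Go looks like (the byAge type and the data are made up): every comparison goes through an interface method call, which is the run-time cost being referred to.

    package main

    import (
        "fmt"
        "sort"
    )

    type byAge []int

    func (a byAge) Len() int           { return len(a) }
    func (a byAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    func (a byAge) Less(i, j int) bool { return a[i] < a[j] }

    func main() {
        ages := byAge{42, 7, 19}
        sort.Sort(ages) // dynamic dispatch on Len/Swap/Less per comparison
        fmt.Println(ages)
    }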

Go's approach to concurrency hand-waves too much on the data sharing issue, and has soundness problems, but at least the language talks about concurrency and has some reasonable primitives. C and C++ don't even provide syntax for talking about concurrency.


The C section is so wrong it's unbelievable, which is why I created the pastebin below. There were plenty of good languages before C, not just asm. C wasn't designed: it was BCPL and B tweaked to work on the PDP-11. C wasn't even meant to be portable.

http://pastebin.com/UAQaWuWG

It helps to understand what it exactly is, and then the rest (like ditching it) becomes more obvious. P-code was probably the first portable assembly language with good results. Or LISP from the functional side.


Yeah, here is some additional info:

This is the typical C fanboism that sells C as the AD of systems programming.

For those who don't want to go through the article, here are some well-known systems programming languages that existed before C had any relevance outside AT&T.

- Algol on B5000 (1961)

- PL/I on System 360 (1964)

- PL/M on CP/M and AS/400 firmware (1972)

- Mesa on Xerox PARC Alto (1970)

- XPL on System/360 (1971)

- HAL/S on AP-101 (1970)

- JOVIAL on 465L (1970)

- CPL on Titan and Atlas (1963)

- BCPL on IBM 7094 (1970)

But let's pretend C (1972) was the first one.

Interestingly enough, just like they did with Go, C dropped most of the features used by the other systems programming languages of the day.


Appreciate the list. I'll incorporate this into my links in the future somehow. You should leave off CPL in the future, as it was designed but never implemented. The inability of EDSAC to compile it was the very reason BCPL and later C existed.


Most of those languages are as close to dead and buried as can be, except maybe on an academic's or aging hobbyist's bookshelf, or perhaps running on a ghost machine in a forgotten room at a large corporation or government office. The majority of people who programmed in those languages in their heydays have long since retired.

Sure, there were languages before C that did Important Things, but C works as a starting point of discussion because most programmers today have at least heard of it, even if they never use it.

Aside from NASA or JPL, who would even start a new project in any of those languages, because of one important paradigm?


Read my link... just the numbered list if you're in a hurry, because it takes only a minute or two. Any discussion of C vs other languages should be able to explain why C is the way it is and why a replacement should be different. If C wins, it should be due to being the best design for systems programming. Any information about C should also be accurate.

As pjmlp said, the article rewrites history by falsely claiming it was the first HLL and that it was designed for portability. Neither was true. Additionally, competing languages (even on the PDP-11!) had better maintainability, safety/security, and ability to code in the large. Many were portable, with a few performing as well. BCPL was whatever could compile on a 1960s computer: nothing more. C was BCPL w/ structs and byte-orientation.

So, should we start with one of the better languages that resemble good languages of today to define attributes for systems programming? Or should we start with a semi-language that had no design, tons of problems, and all so it could compile on a 1960's machine? I think the former is the obvious choice. That those supporting the latter lie about C's origin, design, competition at the time, and so on to push it instead is... more disturbing.

Hence, me countering it readily.

"Aside from NASA or JPL, who would even start a new project in any of those languages, because of one important paradigm?"

There were plenty of languages from that time still available. The best for modern 3GL users were Wirth's line: industrial Pascals, Modula-2, Modula-3 (especially), and the Oberons. All were used to write OSes on minimal hardware with more readability and safety than C. Ada was rougher but safer and did the job too. LISP- and ML-derived languages only got better and better over time, with Racket and OCaml being the best today. I've even seen OCaml used with an 8-bit runtime (!). There were also macro-assemblers that focused on semi-HLLs for ultra-efficiency and optimization. LLVM comes to mind. ;)

This stuff didn't fade into obscurity or survive only on obscure platforms. Good versions stayed around in industrial form for decades with little adoption, while people kept building on something that was never designed for all its uses: it was just for compiling on an EDSAC in the 1960s.


You need to re-read the OP's linked article.

It's not an academic paper or even anything like.

It's. Notes. From. A. Presentation.

It's not the entire history of compiler languages and the lineages and branchings thereof which make Game of Thrones look like Jane and Dick.

If you're going to spend four sentences speaking about a core language that is still in active use and that the audience will have heard of, then C is a far better goto language than any of the ones listed above.

> All were used to write OS's on minimal hardware with more readability and safety than C.

Awesome. That decade abended years ago. They are no longer germane to a discussion at a conference in 2015, unless you're deep diving in language lineage.

Which the author clearly wasn't.

Edit: my bad. It was seven sentences.


That would be all fine and dandy if the presenter hadn't lied to the audience:

"Before C there was assembly language...."

is clearly misinformation given to the audience.

Just because it wasn't a scientific paper doesn't make it less wrong.

The presenter has clearly led everyone in the audience who doesn't know better to believe that C was the first high-level systems programming language.

Had he said that C was the systems programming language that won the hearts of systems programmers, we wouldn't be discussing this.

This misinformation has been happening since the mid-'90s.


""Before C there was assembly language" is technically correct, unless you know something I'm unaware of.

That there were other languages between those two isn't important in the context of creating an example in one presentation, and it for sure is not factually wrong.

Just so we're absolutely crystal-clear here, how, exactly, would you have phrased the paragraph about C, without wandering off into the endless maze of computer language lineages, but keeping the same general intent, in the context of the remainder of the presentation?


It is factually wrong because there were options for programming without assembly before C existed.

One such example is the Burroughs B5000 system with Algol in 1961. This is just one example among many, available to anyone who cares to do a little research.

I would have phrased it like this:

"Before C there were other systems programming languages, but for various reasons they lost to C and now it is widespread through the industry as we know it."

Followed by his note on how all computers in the room have their OSes coded in C.


Okay, but would phrasing it that way add ANYTHING to the point he's trying to make? Would it help explain his point? (And it's not factually wrong, you're just deliberately misreading it so you can flog the dead horse about the ancient lost civilizations of programming languages before C.)


It wouldn't fool his audience into thinking that C was the very first systems programming language to exist.

However, telling the truth is worthless, a waste of time, and doesn't contribute to the learning of younger audiences.

Please excuse me, I have some books to burn.


"C certainly wasn’t the first high level language, but it was one of the first to take portability seriously"

That's totally wrong. The C philosophy and structure were almost 100% due to BCPL, with minor changes. BCPL was portable by accident due to its simplicity. They just got rid of every good feature of a systems language to the point that the remainder, barely a language, could be put on just about anything.

Thompson tweaked BCPL into B to make it work on his terrible PDP-7. That wouldn't work on a PDP-11, so Ritchie made more tweaks, producing C. That still couldn't handle the UNIX alpha, so they added structs. That's C 1.0. A year later, people started porting it because it was barely a language and could fit on their hardware. All of that is in my link about its history, drawn from documents and the people who made it.

http://pastebin.com/UAQaWuWG

"Before C there was assembly language"

Also wrong, and it contradicts his own claim about HLLs before C. As pjmlp listed, there were numerous systems languages in production before C was designed. It's not like we made a huge leap of faith into structured HLLs with C. We actually cut out the maintenance, reliability, readability, integration, etc. capabilities of existing languages (esp. CPL) to iterate into C. And, later on with good hardware, people started adding many of those back because C was garbage that led to all kinds of problems.

We can certainly accept a need to learn C language and tooling for working with existing OS or app code. That's called legacy system effect. However, we should never push false justifications for that which detract from superior language design that would've enabled more robust, maintainable, productive development. We can accept both the what and the why of C at the same time.

As a side effect, more people might explore features of C alternatives to build the next, best systems language. Wirth has been doing that for years. Ada still gets updated, with one version able to prove the absence of many runtime errors at compile time. Clojure is showing LISP's advantages, and LISP was once used for OSes with incredible flexibility and robustness. So on and so forth. Gotta counter the C disinformation and misinformation so enough people know why they should do the opposite of it, so some [more] eventually will.

Note: There's also the value of removing inaccuracies in historical writings that have negative effects. Mere editing and revision. I hear that's important to some people, too.


I think it's funny you disregard historical accuracy in a presentation about the legacy of a language where the author looks "at historical examples to set the stage" with an image of "the History of Programming Languages." You're really grasping at straws trying to argue actual truth or historical evidence aren't relevant in such a post. Worse, any reader reading such things would assume the person did some research on it.

Then, from that point, almost everything the author says about C's history, purpose, and effect was wrong. The only thing correct is that everything in the room at the time was [probably] written in C. Which is actually meaningless, given it's a historical accident as far as the language itself goes.

So C's legacy is that most mainstream OSes are written in C. The reason? Thompson and Ritchie used C for UNIX, plus shared it with lots of people with poor hardware. That's it. It wasn't designed, it wasn't intentionally portable, it wasn't better than anything then or now... nothing really about C itself justified it sticking around. Purveying such myths makes people think it's technically superior and that we should keep investing in it and similar languages for system use. Very damaging myth. So, I counter it everywhere I see it.

"If you're going to spend four sentences speaking about a core language, that is still in active use, that the audience will have heard of then C is a far better goto language than any of the ones listed above."

I can bring up C without lying about its history, portability, and so on. The author was probably misled by others rather than intentionally doing it, though. Hence the need for these comments.

"That decade abended years ago. They are no longer germane to a discussion at a conference in 2015, unless you're deep diving in language lineage."

Many are still around, in commercial use, and some are updated. Still better than C at robust systems programming. I agree the author doesn't really need to go into them, nor did I ask that in my original comment. Just that any statements be accurate in a post about history and legacy. This is why I didn't gripe about "C++ also codified the ideas of zero-cost abstractions," which existed before C++. Yet C++ brought them to widespread attention. Probably the author's intent, and so no gripe from me.

Didn't you consider the fact that I left the rest of the write-up alone in my comment? That there might be a reason I had laser-like focus on the inaccuracies in the C entry rather than attacking the presentation as a whole? I get what the article is about, and that's why it should maintain historical accuracy more than most.


No, that is called rewriting history.

Maybe we should totally ignore what the Greeks, Romans and Egyptians among other civilizations brought to mankind, because we weren't born in those days.


That says something about human nature: do we care about the genes or the face that carries them?


Thanks for the illuminating link.

Nevertheless, I unfortunately do not see an easy solution in the near future that avoids C/C++ much as I would love to ditch them. Let us leave aside the question of legacy code and focus on future systems projects.

General issue: C and C++ have an advantage in the sense that we understand their many problems well due to their long history, and they have formal specs that are far more carefully written than most others, where documentation is spotty and formal specs are often lacking.

Example: I have not studied Rust closely, but it seems to me that there are sections that essentially must be placed under "unsafe" for performance or low level access. The oft-repeated claim is that it minimizes the surface, since it localizes the unsafe sections. I have not seen any hard, formal guarantees in either the spec or implementation as to what is precisely meant by this.

I am very interested to know how I can reason that the unsafe section has zero side effects outside of it. Such a statement in general must be false, since the unsafe block clearly communicates with the rest of the program, else it is dead code. And if the interface is sandboxed (an imperfect solution), then it implies some kind of performance penalty across the barrier, implying additional cognitive load for a developer since he/she has to reason about the possible performance impacts for such things, among others like array bounds checks. Lastly, if the guarantee regarding the extent of the "unsafe" block is as complicated as I suspect it to be, the load is increased further. Of course, the load comment is really relevant to Rust vs C and not Rust vs C++.

Specifics:

1. Go - garbage collector, large runtime/binary sizes, portability issues (AIX: https://groups.google.com/forum/#!topic/golang-nuts/IBi9wqn_...).

2. Rust - too new, general points above.

3. D - lacks the same energy as the above projects, so likely fares worse than the above. Furthermore, due to its closer ties to C++, its rather incremental nature resulted in a significant loss when C++11/C++14 came around.

4. Others - too many to list; focused on the above due to their frequency on Hacker News.


Focusing on Rust "unsafe" as some sort of way to ding its safety drives me crazy.

"unsafe" blocks in Rust is designed to be just like the compiler backend. We could have built things like Vec::new and vector indexing into the code generator, like Go or Java did. Then we wouldn't need the "unsafe" construct in the language (except to interface with kernel32.dll and/or syscalls). But that would make our lives harder for absolutely no extra safety gain. It's precisely the same thing as implementing things in the compiler backend, but it's safer than doing that, because we get to write code in a stronger type system than that of raw LLVM.

More succinctly: What makes the compiler backend safe while unsafe blocks are not? Can you name one thing that makes the compiler backend safer than code written in unsafe blocks?


I did not "ding" it, but asked for an honest, transparent response if available. It can only help the Rust community to make such things clear. I focused on it because it is what I found when probing Rust, since after all there can be "no free lunch".

The compiler backend may be viewed as a smaller component that needs to be trusted. General code lies on top of it, and there is a big difference due to code volume.


The quantity of unsafe code in the standard library represents far, far less code than that of the compiler. Furthermore, unsafe blocks don't turn off the type system or anything; they only let you do four extra specific actions (possibly only 3 someday). The correctness of the compiler can be leveraged to assist in determining the correctness of the standard library, and living in a typical library structure makes the code easier to understand and audit than if it were tangled up with compiler logic.


I don't understand what "there is a big difference due to code volume" means.


I was referring to all usage of Rust by clients across the world versus a single compiler and reference standard library. Many of these clients will need "unsafe" blocks, and the combined length of these "unsafe" blocks will exceed that of the standard library and compiler assuming Rust adoption is high.

This is what concerns me: I wished Rust's "unsafe" blocks could have been exclusively confined to some things in the reference language compiler and standard library. Unfortunately, it seems like many reasonable systems applications still need access to unsafe blocks for reasons of performance, low-level access, etc., and such needs are not uncommon given the target systems audience.


  > I wished Rust's "unsafe" blocks could have been 
  > exclusively confined to some things in the reference 
  > language compiler and standard library.
This sadly isn't possible in a language at the systems level. If you don't provide an escape hatch then users will just use the C FFI as an escape hatch, which results in far more potential unsafety than Rust's `unsafe` blocks.


> I can reason that the unsafe section has zero side effects outside of it

There is basically a core set of invariants that `unsafe` code must hold, and you can reason about it in those terms.

http://doc.rust-lang.org/nightly/nomicon/ talks a lot about this.


"General issue: C and C++ have an advantage in the sense that we understand their many problems well due to their long history, and they have formal specs that are far more carefully written than most others, where documentation is spotty and formal specs are often lacking. "

See, even that is misleading. The C spec has a ridiculous amount of undefined behavior and dark corners that can mess up developers. C++ probably does too, more than I know. Their specs are actually so hard to formalize that people got Master's degrees, etc., for pulling off part of that after decades of trying. Whereas many LISPs, MLs, and Wirth languages (e.g. the Oberon/Modula lines) were described quite succinctly, including code. The C and C++ specs are so horrible you can't even be sure what your code will do even if you write it to spec!?

"Example: I have not studied Rust closely, but it seems to me that there are sections that essentially must be placed under "unsafe" for performance or low level access."

Others are addressing the point about Rust, which I'm ignorant about. What I can say is the generic rule on this issue. Most safe-by-default languages wrap unsafe behavior behind function calls which still have type/interface checks. If that code behaves correctly, you get to leverage the type system of the safer language to make sure it works with the rest correctly. If it behaves incorrectly, one of two things happens: local damage where an incorrect result is obtained, or an application crash or hack. You wouldn't be using the unsafe code unless you thought it was necessary. So doing safe + a little unsafe adds no risk vs going all unsafe in C, etc. Yet it counters many risks. So it's a good tradeoff even if unsafe code can have unpredictable effects on the rest.
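
To illustrate that generic rule with a made-up Go sketch (nothing from Rust here): the unsafe conversion lives in one small function with a typed signature, and the invariant it relies on is written down next to it. Callers only ever see the safe interface.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // bytesToString reinterprets the slice header as a string header to
    // avoid a copy. The invariant: the caller must not modify b afterwards.
    // All of the risk is confined to this one body.
    func bytesToString(b []byte) string {
        return *(*string)(unsafe.Pointer(&b))
    }

    func main() {
        fmt.Println(bytesToString([]byte("hello")))
    }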

Note that one can attempt to further mitigate, without sandboxing, by writing that unsafe code in a way amenable to static analysis or through testing of what it does. One can use something like Frama-C or SPARK Ada to model that one part to make it as bulletproof as possible. Only then include that algorithm in the otherwise safe program. And again wrap it with interface safety to counter even more risk.

C and C++ will always have the advantage of sheer momentum. However, there are always alternatives available that lack the problems above. They don't have to have the GC, portability, and size issues, as Ada and Wirth's languages always showed. I mean, Wirth's Pascal/P was safer than C, efficient (although not maximally so), and more portable given the backend was a simple stack machine anyone could implement. It was ported to 70+ architectures that differed a lot. The ports of the Oberon compiler and OS to each new piece of hardware took 1-2 undergrads under 2 years each time due to good design, modularity, and simplicity. What kind of effort have the UNIXes and C compilers taken to port to each new architecture? ;)

Anyway, I agree that if you want max contributions from existing programmers, C is likely the best choice for OS development. Or at least for the kernel of something like the JX Operating System or VerveOS, where the rest is done safely. If you want the best results, then avoiding C and using safer alternatives is best. You'll get so much more done w/ higher robustness in the time you save debugging. :)


I agree with the point regarding C++'s spec, but don't fully buy the point about C's spec. C's spec is surprisingly succinct. C code written to spec is not harder to think about than any other language I have worked with. I have had as much trouble getting my Python/MATLAB/Julia code to dance the way I want it to as I have with C. What is different is that if you get it wrong, all hell breaks loose in terms of security issues.

Regarding the wrappers behind unsafe behavior, in addition to some of your ideas, a lot can be done even in C, see e.g. netstrings http://cr.yp.to/proto/netstrings.txt, and other safer interfaces. It requires thought, but such thought also needs to be devoted to designing other languages. Unfortunately, the C/C++ standards committees rarely accept such slower, safer extensions, forcing clients down the dark road of third-party libraries and the endless choices available there, some of which are horrible and actually worse than the stdlib. I believe a lot of the problem is that there is heavy disagreement as to what interface is best, see e.g. strlcpy and its adoption. Getting a large committee on board with solving something is a monumental problem, even if all acknowledge that the current situation is terrible :).

"Best results" is a very loaded term, and I tend to avoid it due to the large number of dimensions to it, the most common being the classic performance and security axes. Nevertheless, we mostly agree on the key points, with some differences in the details.

TL;DR: I have not found something representing a Pareto improvement over C. All improvements trade off some aspects, and it is thus sometimes not clear that there is a better alternative to C. My stronger claim is that the above is true even if one ignores legacy issues.


Very much appreciated the Olve Maudal talk about C's origins. Thanks a lot!


Very welcome and good to see the info getting well-received. Can't get rid of support for this monstrosity (C) until people see exactly what and why it is.


I don't react as strongly (was that the main motivation behind your research?), although C's weaknesses really do allow for too many nasty issues in code we rely on. What astonishes me is how something which seems a crude plagiarism became the mainstream of low-level and fast programming for so long. Even UNIX... I sense a strong pragmatism about market, ecosystem, network effects and the like. I left the video thinking Unix/C was the WordPress/PHP of its domain and era....


It was a partial motivation. Many have stars in their eyes, seeing C as the best-designed systems language, to the point that modifications aren't considered for bare metal. One motivation is countering that with the cold, hard truth about its alleged design and real history. Other people already see its issues but might want to learn more or enjoy the full context. This reinforces the move away from such features where possible, along with providing evidence to deliver in future conversations supporting that. Either benefit is a fine result to me.

"What astonishes me is how something which seems a crude plagiarism became the mainstream of low-level and fast for so long. Even UNIX ..."

Richard Gabriel explained that well in his Worse is Better essays. Hence why you see me mention it a lot. Has much to do with ease of participation, economics, and group dynamics. Things like C, UNIX, and early OSS made it easy to get started to gain a critical momentum. Past that, it's basically all momentum and its effects. One usually can't counter momentum so much as create an alternate momentum or divert flows off the existing momentum. Hence, co-development of radical approaches like SAFE or JX OS's plus legacy-supporting approaches like Nizza Security Architecture or Cambridge CHERI processor & CHERIBSD. Yet, the full momentum of UNIX and C remain as more gets piled onto them despite the cost.

https://www.dreamsongs.com/WorseIsBetter.html

Note: Ignore his emotional rants tied to specific tech to focus on the effects of the New Jersey approach on market take-up. Truthfully, I think he discovered the time-to-market effect of technology and a method of achieving it before it became a common thing people wrote about. Then again, I haven't studied the history of that enough to be sure.


I think Go will ultimately be remembered for reaffirming the importance of single binary deployments.

The world of deployment machinery, e.g. build scripts, containers, and extravagant tools, is all unnecessary with Go. And I suspect future languages will learn from this.

The language itself, not so much. It doesn't offer anything new that Java 1.0 with Quasar doesn't already have.


I think the single most important 'thing' Go offers is his third point: the removal of the thread. The ability to very easily run multiple concurrent pieces of code cannot be overstated. I have my grievances with Go as I do with every other language, but what makes me more or less automatically start most new projects in Go is that I know that at some point I will probably be sending the same HTTP request to N servers, or some variation of a similar task, and even if it can be made to work in any other language, the ease with which it is done in Go keeps me coming back.

As an aside, I think the composition-over-inheritance argument deserves a mention. The io.Reader and io.Writer interfaces (and their close friends io.ReadCloser, io.ReadWriter, etc.) have completely transformed how I approach tasks that involve shifting data through a number of whatevers.
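
A minimal sketch of what I mean (the file name is made up): gzip decompression feeding a hex dumper, where every stage only knows about io.Reader / io.Writer.

    package main

    import (
        "compress/gzip"
        "encoding/hex"
        "io"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("data.gz") // hypothetical input file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        zr, err := gzip.NewReader(f) // an io.Reader wrapping an io.Reader
        if err != nil {
            log.Fatal(err)
        }

        dumper := hex.Dumper(os.Stdout) // an io.WriteCloser wrapping an io.Writer
        defer dumper.Close()

        if _, err := io.Copy(dumper, zr); err != nil {
            log.Fatal(err)
        }
    }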


What do you mean by "removal of the thread?" Go is multithreaded and is vulnerable to classic thread-safety issues, like races and reentrancy.


The Go runtime library contains a user-space scheduler which maps lightweight Go threads to heavyweight OS threads in an M:N manner. By the way, the Go runtime library allows running user code in only one OS thread per process; all other OS threads must be in the state of a blocking system call. So no mutexes / interlocked instructions are required to access global data from goroutines, but this comes at the cost of reduced parallelism compared to C/C++ (which the author of the original article seems to be unaware of).


I doubt this is totally true. Up to Go version 1.4, the process would use only 1 core/CPU by default, but you could change that with:

    numCPU := runtime.NumCPU()
    ret := runtime.GOMAXPROCS(numCPU)

From 1.5, the default value of GOMAXPROCS is the total number of CPUs on the machine.

So it's not 'reduced parallelism compared to C/C++'; rather, the concurrency infrastructure (channels, etc.) makes it transparent to the programmer. I don't have to worry about creating a thread pool myself like in C/C++ and Java. So overall there is a net gain, in my understanding. Am I wrong anywhere in this?
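
As a rough sketch of the "no hand-rolled thread pool" point (the URLs are made up), fanning the same request out to N servers is just goroutines plus a WaitGroup; the runtime multiplexes them onto OS threads:

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        urls := []string{"http://server-1/ping", "http://server-2/ping"}
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                resp, err := http.Get(u)
                if err != nil {
                    fmt.Println(u, err)
                    return
                }
                resp.Body.Close()
                fmt.Println(u, resp.Status)
            }(u)
        }
        wg.Wait()
    }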


Wow, uh no. You're 100% wrong, unless I'm misunderstanding you. You can absolutely have user code running on N threads simultaneously (where N is the number of cores of your machine). And you definitely do need mutexes.

Your statement holds only if GOMAXPROCS is 1, which granted used to be the default, but was always able to be increased, and the default now is the number of cores on the machine.
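
A toy sketch of why (counts and names made up): with GOMAXPROCS > 1 these goroutines really do run in parallel, and without the mutex this counter is a data race that `go run -race` flags immediately.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var mu sync.Mutex
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock() // remove this and the final value becomes unpredictable
                    counter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // 2000 with the mutex in place
    }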


Ah, now I see these things changed recently. I wrote the original comment because at the time of Go 1.1 or 1.2, my colleague wrote racy code which accessed a global variable without mutexes. I tried to educate him to use mutexes, by reducing his code to the minimal version demonstrating the race condition, but failed. Then I dug into the source code of the Go runtime library and discovered that only one thread could be in the running state at any moment (other threads could be in the state of a blocking system call).

So, things have changed since that time. OK, nice to see Go evolving. Interesting how much code in production got broken during the upgrade to Go 1.5 due to this backwards-incompatible change. The overall situation seemed totally reasonable to me at that time: Go is a language for the average programmer, so it should prefer safety over runtime performance; and coding cowboys who are comfortable with mutexes, interlocked instructions, memory barrier instructions, and lock-free data structures will use C/C++ anyway.


But that wasn't true either at the time of Go 1.1, nor ever, I believe. It was the _default_ behavior to have a single core running at a time, but you could just tweak that with the [GOMAXPROCS](https://godoc.org/runtime#GOMAXPROCS) variable, documented everywhere. (That's what changed in Go 1.5; now the default for GOMAXPROCS is the number of cores in the machine.)


He's referring to interacting with goroutines rather than system threads directly.


Which Go neither invented nor seriously popularized, unless we are to rewrite history to ignore the actor systems in common use on both the CLR and the JVM (to say nothing of Erlang et al, but "popularized" rather rules that out) well before Go's public release.

Unless the qualifier is "popularized among Ruby and Python people," and, well, sure, but the number of Prometheuses to bring fire to those folks is large and ever growing.


Go didn't invent co-routines, but the primitives of channels and the select statement that go along with them, while not groundbreaking, amazingly simplify a lot of concurrency patterns that can get overly bloated and/or difficult to reason about in other languages.

I have simply never seen another language that makes it as easy as Go does to reason about parallelism/concurrency (maybe Erlang).
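
For anyone who hasn't seen it, a minimal sketch of the channel/select combination being described (names and durations made up): wait on whichever worker answers first, or give up after a timeout.

    package main

    import (
        "fmt"
        "time"
    )

    func worker(name string, d time.Duration, out chan<- string) {
        time.Sleep(d)
        out <- name
    }

    func main() {
        a := make(chan string, 1)
        b := make(chan string, 1)
        go worker("a", 10*time.Millisecond, a)
        go worker("b", 20*time.Millisecond, b)

        select {
        case r := <-a:
            fmt.Println("first:", r)
        case r := <-b:
            fmt.Println("first:", r)
        case <-time.After(time.Second):
            fmt.Println("timeout")
        }
    }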


Languages don't need to make parallelism and concurrency easy if they're sufficiently expressive by themselves. Go bakes in things that a competent programmer doesn't need baked, while ignoring things that make a competent programmer better at their jobs (yes, it's the generics problem again!). Making channels primitives isn't a positive to me when it's something I can implement as well or better in userland--which I can, even in a language as middling and Java-1.1y as Go.

Go's channels are effectively a locking message queue (Java's had one of these since at least Java 5) plus a (usually global) thread pool. Not too long ago I implemented the moral equivalent in C++11, using only the standard library and in a unit-testable format, in forty-eight lines. Selecting across them is nice syntactic sugar, but is likewise able to be mimicked in plenty of other languages. Or, alternatively, I can use way more pleasant abstractions like Akka or Celluloid in not-Go languages (sending up Erlang when Akka provides a very similar experience in Scala or Kotlin--even Java, if you're using Java 8--is...curious).

I guess you can make an argument for TOOWTDI, but I don't find that to be persuasive when the OW in question is middling.


This is an attitude I've encountered before & can't really get my head around. Channels are a really bad implementation of a queued message concurrency pattern that has been standard in other languages for years.

The select pattern maps directly to any number of interrupt-style programming abstractions that have been available in every language I've programmed in over the last 15 years.

Quite simply, I find the Go concurrency story primitive to the point of being painful. I'd love to figure out why my opinion on that is so far outside the common refrain.


While channels definitely aren't anything new, and are just a renamed locking queue structure, why do you say they're "a really bad implementation"?


They don't scale well because of lock contention, the syntax is missing standard concepts like timeouts, and the behavior around edge cases such as initialization, closed channels, nil values, etc. is bizarre.


Aren't they "lock-free" though?


Not in the least. They are a thin veneer on top of a giant mutex.


That's quite disturbing :P Any idea why they didn't implement channels as a lock-free list, ala java.util.concurrent.ConcurrentLinkedQueue?


I don't presume to speak for Go's developers, but I would guess because it's not that big a deal, especially in a cooperative environment. "If I can't obtain the mutex, yield" is a perfectly defensible thing, and is easier to write and probably to maintain than a lock-free list.

Having Java's concurrent stuff is nice, don't get me wrong, but I can understand it not being the biggest priority in the world to go rebuild that later. As I've mentioned elsewhere in this thread, there are much bigger beefs.


What other mainstream programming language has a queue pattern built right into the language itself?


What is the value of building something like that into the language? Why should a language have a "queue pattern" built into the language itself? Go has to have channels as a primitive to be usable in 2015 because its lack of generally-accepted features makes it impossible to do the same in userland. Same with its lists, same with its maps. I don't need it built into C++ or Scala or Java/Kotlin or C# or D, because these languages aren't unwilling to let me do it myself (but in all cases there are standard libraries to help me do it, even in the cases where it is not expressly already available).

You are implicitly casting as something to be praised one of the greater missteps of Go.


Missteps is a bit harsh, don't you think? Like them or not, channels and goroutines are quite integral to the Go language. And I think it's perfectly clear it's put in on purpose, and that it does guide the design of software written in the language.

You could perhaps compare it to Python, where async functionality has a solid and well established userland implementation in Twisted, but where asyncio still made a big splash around the community. Language constructs matter.


You misunderstand me. There is nothing in Go channels, syntactically or semantically, that is improved by being in the language itself, except insofar as the language does not provide meaningful and useful abstractions to its users to allow them to do it. I don't care why they say they did it, I care that it isn't very good and that I can't effectively replace it because the core developers don't give me the tools to do what is trivial in any other statically-typed language I see in common use. Channels and goroutines exist as core language features because the language is inexpressive because Go fundamentally does not trust end-use programmers to do smart things--so core developers had to do it instead.

"Misstep" was the kindest phrasing I had for the kind of trainwreckish design decisions and institutional reification of developer mediocrity that get you to what you're defending.

Your chosen tools have contempt for you, and it mystifies me as to why you would defend them for their failures.


Scala, Java & C# are all languages I've used professionally that have concurrent queues built into the standard library.


It would be quite unfair to have the "removal of the thread" be a legacy of Go, given that this is not something Go invented.


Well, to be fair, Go doesn't bring anything "new" to the table either.


Precisely this.

While we're talking about what it popularizes, there's also the sense of epistemic closure that I've never seen to the extent that I do in the Go community. You're right in that it doesn't bring anything new to the table; that's its developers' intent, but its fans have tried to turn it into some kind of revelation. Which it's not, of course, it's Java 1.1 with a somewhat nicer syntax. And that's not a crime, it's not unforgivable--but it's also nothing special, and by god am I so very tired of its partisans holding it up as the best thing since sliced bread because they don't know what the rest of the world looks like.

(This expressly acknowledges the--rather few, IMO--Go fans who do understand the rest of the world and use it for their own reasons; I think they're hurting themselves, but I respect the choice.)


> The world of deployments e.g. build scripts, containers, extravagant tools are all unnecessary with Go.

I'm not arguing that Go doesn't make deployment easier, but it doesn't make all these things unnecessary, as a server is only one component of an app among many others.

I'd like to see more languages supporting some sort of web-app packaging though (I know I can do something like this with Python, i.e. bundle everything in a single executable). I think it is an elegant solution.


https://github.com/GeertJohan/go.rice/

lets you bundle resources into the executable. It's not part of the language but I don't think Python's is either (?)


Windows has been doing that for as long as it has existed.

Also, UNIX systems like AIX have support for it.


>The language itself not much. It doesn't offer anything new that Java 1.0 with Quasar doesn't already have.

It offers lots of abstractions Java didn't/doesn't: lots of concurrency features, ad hoc interfaces, closures, stack resizing, type switches, multiple returned values, slices.
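
For example, a small sketch showing two of those (the values are made up): a type switch and multiple returned values.

    package main

    import (
        "fmt"
        "strconv"
    )

    func describe(v interface{}) string {
        switch x := v.(type) { // type switch
        case int:
            return "int: " + strconv.Itoa(x)
        case string:
            return "string: " + x
        default:
            return fmt.Sprintf("other: %v", x)
        }
    }

    func main() {
        n, err := strconv.Atoi("42") // multiple returned values
        if err != nil {
            panic(err)
        }
        fmt.Println(describe(n), describe("hi"))
    }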


The legacy of Go will be a movement by many developers to statically typed, compiled languages after spending many years with Python, Ruby, and Javascript.

Although it is not groundbreaking, it will continue to be used because it takes the ease of dynamically typed languages and gives you fast compilation.


Perl: the language that showed us brevity.

Despite all its various layers of madness, it made a generation of programmers more productive than they'd known was possible. Tasks that used to be major undertakings often boiled down to simple scripts. Python, Ruby and many other languages since have borrowed from and built upon the brevity that Perl first showed us and it's unlikely that people will ever go back to the verbosity that used to be common for text processing and sys admin tasks.


> Go is still young, with a long productive life ahead of it [...] compared to the number of people who will use Go during its lifetime, we are but a tiny fraction

An assumption lurking in the last paragraph. Remember JavaFX and Dart, both languages backed by large corps who then pulled their support. Go could be at the peak of its popularity, about to tank. We don't know.


I'm thinking of Turbo Pascal for DOS. It was very popular, and then it sank to obscurity almost overnight, basically because it didn't have any answer to "how do I write a Windows program in this?" Its niche had been eliminated.

I don't think Go will vanish unless "network servers that run in text mode on unix" somehow stop being a thing. There is no Google product whose change of strategy could strand Go (unlike the others you name). If somehow all computing moves to phones and tablets, it could be stranded, but I can't imagine that happening.


> "how do I write a Windows program in this?"

Turbo Pascal for Windows. It had two releases, 1.0 and 1.5, and introduced the Object Windows Library, later adopted by Turbo and Borland C++ for Windows.

It was replaced by Delphi with its Visual Control Library afterwards.

The problem was Borland management going nuts and not having a proper version for UNIX systems, which were gaining market share back then.


Was it not reborn as Delphi? It was always Pascal. But I do agree, a non-visual service lang is probably a thing.


Delphi was really not so bad. Back in High School we first learned with Pascal, then Delphi. It was pretty simple to put together simple Windows GUI Programs. Never really used it outside of school though.


> It was pretty simple to put together simple Windows GUI Programs.

It was one of the most powerful Windows (GUI) programming tools back then; it was easy to create beautiful UI components, and it had a great third-party components market. Plus it had incredible compilation speed.



Not supported by the Android team, so don't expect to see it here:

http://developer.android.com/index.html

http://developer.android.com/ndk/index.html


It's still in its early days at Google, though:

https://github.com/golang/mobile

https://github.com/golang/go/wiki/Mobile


I will change my opinion when I see the Android team change their official statement that only Java matters, as communicated at Google IO 2014.


> Remember JavaFX and Dart, both languages backed by large corps who then pulled their support. Go could be at the peak of its popularity, about to tank. We don't know.

Ehh... to be fair, I don't ever recall JavaFX and Dart getting so much organic "love" from developers. Go has at least made it past the point where said corporation is trying to make it a "thing", which is more than you can say about JavaFX and Dart.


I don't get what you are saying about JavaFX.

It has replaced Swing as the official Java UI framework; all new UI features since Java 7 are being done in JavaFX.

Swing joined AWT as legacy code and bug fixes only.


> > JavaFX and Dart, both languages

> I don't get what you are speaking about JavaFX

From context you know I meant JavaFX Script, discontinued by Oracle in 2010.


Dart won't be built into Chrome, but dart2js works fine and usage at Google is increasing. Saying that Google "pulled support" isn't accurate.


I think Go was Google's alternative to buying Java and/or Sun. As such, and given the recent concerns about Oracle's investment in the JVM (they laid off some Java evangelists, and it's not at all clear to me why Oracle will be willing to fund JVM development), I think Go is going to be with us a very long time.


Seems like FUD. 549 contributors according to GitHub. Open Source. Fork it. https://github.com/golang/go


"I mean, who doesn’t want to be simple ? And what better way to frame a debate as simple; good, complexity; obviously bad. Could we say then that simplicity will be Go’s lasting legacy ?"

I think this misses the point. Complexity isn't bad; only incidental complexity is bad. Inherent complexity exists in the domain itself and is unavoidable. If this inherent complexity is not expressible by abstractions in the core language itself, then it must be expressed in libraries, or user code, or in the way end users configure and use the program.

Go is interesting because it created abstractions for concurrency in the core language itself. Go is a more complex language because of it, but the user code of concurrent programs is less complex because of it.

(On a side note, C++17 is getting a more abstract version of goroutines. Thanks, Go.)


Co-routines go back to Modula-2 (1978) and the first COBOL optimizing compilers; Go didn't invent anything there.


More recent inspirations: Win16/32 API has "Fibers" (used in WinWord), Lua has coroutines.


Minor nit re: Rails legacy. IMO a major reason it was so popular was because of Ruby. It's certainly not the right hammer for every nail but the syntax is one of the best to work with ever. Whenever I can find a project where writing a little Ruby makes sense I use it. Because I enjoy it :)

While it's useful to me and dominates what I code in atm, I don't necessarily like writing Go. Better than C/C++, sure, but I wouldn't go so far as to say I enjoy it.


Have you tried Crystal? I used to think I loved Ruby for its syntax, until I tried Crystal, then I found out that Ruby's dynamic typing and the sublime object model are just as important, if not more important, than the syntax.


C, C++ and Go are going strong. All operating system kernels we currently use are in C, as well as most low level drivers and libraries. Most PC client applications are in C++ including most video games. Common web servers are in C. They will be around for many more decades.

Go already replaces Java and .NET as a leaner, more suitable server language in some companies. Node.js and Go apps (+ JS in the web browser) replace older CRUD web apps written in Ruby/Python/Perl. Most languages will stay around for a long time, although less popular for beginners.

Java and .NET will have a harder time. On the one hand, dynamically typed languages like Lua, JavaScript and Hack/PHP with their JIT compilers are already (almost) as fast and allow developers to be more productive, while still offering optional typing when/where speed matters. On the other hand, new statically compiled languages are getting more approachable (Go, Rust, Swift), are faster and use less memory.

Paradigm shifts surfaced exotic languages and trends. So Erlang/Elixir, Go, Node.js and others are very popular for today's web services.


Java and .NET have AOT compilers, so compilation to native code is not a distinguishing point.

They have support for generics and FP-like programming that Go will never get.

Go will never have the access to the system resources on Windows and Windows Store like .NET does.

Go will never have access to Android frameworks like Java does.

Go doesn't have a UI toolkit like JavaFX and XAML.

All companies selling IoT SDKs are pairing Java with C stacks.

Go doesn't have the IDE tooling support that Java and .NET enjoy.

Java and .NET having a hard time? Only if one is living in a bubble.


Some of your info is a bit outdated.

Speaking of Go, it's apparently commonly used for server-side services/apps. Well, the same can be said for Java and C#/F#/VB.NET.

All the applications we use on the client side are C/C++/Objective-C/web apps (exceptions are maybe Eclipse/IntelliJ; mind that Visual Studio is C++). Java on Android is an exception for legacy reasons (Google bought the Android company, and we pay the price by using 2x-4x as powerful hardware (CPU cores & memory) compared to the iPhone to get similar UX & latency). Even the Win10 start menu is still a C++ application for performance reasons. AOT is helpful for certain execution paths, like JIT, and isn't magic. Even the Win8/10 calculator app has a loading splash screen; the FirefoxOS calc app starts faster. XML-based UI languages work, but aren't that great (e.g. resizing the Win10 start menu isn't what I would call responsive design compared to what we know from HTML5 & CSS3).

Beside that you wrote a good comment above: https://news.ycombinator.com/item?id=10568915


> All the applications we use on the client side are C/C++/Objective-C/web apps.

Not every company is the same.

Since when does Apple have server frameworks for Objective-C?

WebObjects was re-written in Java.

My current customer uses everything native, with web apps just for small CRUD maintenance tasks of a few DB systems.

> Visual Studio is C++

Visual Studio is a mix of C++ and .NET code, its UI infrastructure was completely rewritten in VS 2010.

> Even the Win10 startmenu is still a C++ application for performance reasons.

Have you seen the code? How sure are you that it isn't .NET Native?

> Even the Win8/10 calculator app has a loading splash screen,

Anyone who knows WP dev knows those splash screens are optional.

> the FirefoxOS calc app starts faster.

Which is behind WP and BlackBerry in sales.


One advantage of Go is that more code is dependent on the network, which is where Go shines. Go looks like a hammer when the network looks like a nail. For example, Go was one of the first to start adding HTTP/2 support. If Go is always there for network tools, it will creep into areas that perhaps many would not have foreseen, like security systems that were previously done in C and C++.


> I think a strong contender could be a lack of inheritance; Go took away subtypes. [emphasis mine]

Go took away subtypes? News to me.


I'm not sure Go will have a legacy in 50 years. If you gave me even odds I'd go with none.


Care to explain why? This comment doesn't really say anything without an explanation.


Most programming languages don't have a legacy. They're just forgotten. That's the default. So far I've not seen anything from Go to make me think it will be more than a footnote.


How about Docker and other tools in the container space?

Go is a great systems language. The stdlib is thorough, and handling IO and pipes is safe and fast.

Amazon is using it for the mission-critical ecs-agent that orchestrates their container service.

Perhaps containers will be forgotten too. But my guess is that we will be seeing some Go binaries controlling very important parts of Linux and distributed systems in the far future.


Probably the most important feature of a systems language is the ability to predict and manage resource usage, and even though garbage collection is pretty good these days, it's still a feature that will turn a good number of people off from considering Go to be their go-to systems language. Some might not even consider it a systems language for having it, let alone a great one. It is mostly a feature of convenience, after all.


Not to mention the all-too-common bug of including a "defer" in a loop body. (Which often happens through "innocuous" refactoring.)
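
A minimal sketch of that pitfall (the file names and the copy-to-discard body are made up): the deferred Close calls only run when the function returns, so every file stays open for the whole loop. The usual fix is to move the loop body into its own function.

    package main

    import (
        "io"
        "io/ioutil"
        "os"
    )

    func processAll(names []string) error {
        for _, name := range names {
            f, err := os.Open(name)
            if err != nil {
                return err
            }
            defer f.Close() // BUG: not closed until processAll returns
            io.Copy(ioutil.Discard, f)
        }
        return nil
    }

    func main() {
        processAll([]string{"a.txt", "b.txt"}) // hypothetical file names
    }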


Go is problematic as a systems language. Calling it safe is odd when it glaringly allows null. It's safe like Java is safe, which means that the memory will always be safe to use, but the program will still die due to bugs that a compiler could have helped prevent.

As a comparison, Rust has no runtime, no garbage collector, is a pure systems language, and also helps developers be truly safe. Based on my experience with Go, I still don't understand where it fits: it's a compiled language, but has a runtime and a GC. So it's somewhere between C++ and Java; it's less verbose and faster to program in, but if that's the desire, then perhaps the better choice will be Kotlin in the long run.


> which means that the memory will always be safe to use

In another post [1], the author of the article claims that Go will not "tolerate unsafe memory access".

A reference to a variable could be sent across a channel and nil'd from the sending goroutine before being dereferenced in the recipient goroutine, causing a crash. Doesn't this invalidate the "safe memory" claims, or are concurrency-related memory issues not considered when making that claim?

[1] http://dave.cheney.net/2015/07/02/why-go-and-rust-are-not-co...


Dereferencing nil is not an unsafe memory access in Go, just like a null pointer exception in Java isn't unsafe. Both are well-defined operations (sure, they yield errors, but they do so with 100% reliability and complete control); they're not undefined behaviour like in C or C++.

Unsafe memory access refers to things like accessing an array out of bounds, or reading from a pointer that has already been freed. (I suspect a data race could actually cause an array to be accessed out of bounds in Go, but I'm not sure.)
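
A tiny illustration of that distinction (the slice and index are made up): the out-of-range access is caught by a bounds check and surfaces as a well-defined panic, never a silent read of someone else's memory. Dereferencing a nil pointer fails the same way.

    package main

    import "fmt"

    func main() {
        defer func() { fmt.Println("recovered:", recover()) }()

        s := []int{1, 2, 3}
        i := 5
        fmt.Println(s[i]) // panics: index out of range -- defined behaviour, not corruption
    }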


Thanks for the clarification! What if instead of the sender goroutine setting the pointer to nil, the GC frees the object that the pointer was pointing to before the recipient goroutine reads it? Wouldn't that be considered "reading from a pointer that has already been freed"?


That can't happen. The GC won't free the object until nothing references it, and in your proposed scenario either the channel or receiving goroutine will have a valid reference.


Thanks for the explanation. I didn't realize Go's GC also worked for pointers/references.


Let me qualify that: work for them across goroutines


Great points and comparison to Rust.

I'll tweak my view. Go is a pragmatic systems language.

Its design and tooling are better than scripting languages' for systems glue.

There will be many places where it's not suitable vs C or Rust.


> Perhaps containers will be forgotten too.

OT, but my guess is that we'll move towards deploying applications as part of a unikernel instead of running them inside operating systems designed for multi-user usage. Multiple applications may be run on physical hardware using some sort of hypervisor, but I don't see the point in having all this overhead just to wall your application off with Docker.

With that said, I do use Docker, but only because it's a step up from what I've previously used. Long term I'm looking towards deploying my applications with a unikernel.


Docker


As a slightly improved Perl that really didn't change anything about languages.



