
I work at Google, but let's be honest: Java and C++ continue to be the only game in town for existing and new projects, Go occupies only a very tiny niche in our code base, mostly for the reasons that Chuck mentioned: a lot of the parallelism value that Go claims is already widely available, battle tested and very robust in Java and C++.


The parallelism features of Go are a side-effect of the concurrency features, which are far more about how you structure your code than about how it runs.

Go's main feature is what it removes. I got stuck with C for a long time because nothing better with the same simplicity came along. Languages march forward with increasing complexity while offering very little in return for it. Go is a reaction to that and a solution to that problem.


"Java and C++ continue to be the only game in town for existing and new projects"

That is simply not true. Yes, Go occupies a small niche, but there are growing numbers of people using Go for real things at Google.


From my understanding, Go doesn't make anything possible that isn't already available in Java/C++, but it attempts to make those things easier and less error-prone.

The big problem with parallel/concurrent/distributed programming is that it is very hard to do correctly, efficiently, and scalably with current languages (and at the same time, keep the code maintainable). It has been bolted on, so to speak, in many different ways: OpenMP, OpenCL, IPP, the list goes on. Each of those abstractions serves a certain purpose and is useful at a certain scale, but is less useful outside of that, and generally you need to worry about far too many low-level details.

OpenMP: loops are easily parallelized, but due to data sharing, writing code that scales to a large number of cores is very hard. It supports some other primitives like task parallelism, but in a limited way. At least you don't need to manage threads manually.

OpenCL: designed for huge numbers of cores, but very much focused on GPU-based development, makes it very inflexible on general purpose CPUs.

IPP: Work queue primitives that automatically distribute load over multiple cores. This is the most advanced approach, but it fits very clumsily into C++ using arcane template syntax, which quickly results in spaghetti code.

A language that truly supports scalable, high-performance parallelization without callbacks all over the place would be great. I'm not sure how far Go delivers on this promise, but it's certainly a step in the right direction. That's to be expected from a company like Google, which understands these problems.



