What tax benefits this move might have is my question as well. It seems like if this were purely about internal organization they could have effectively reorganized Google in the same way without making Alphabet (although I'm not an executive so I could certainly be missing something here).
Yeah, if we're talking about JavaScript, the explanation should start with promises. It's a pattern lots of people know, and it's a very real-world example. No need to front-load the explanation with a bunch of complication before people even know why they should care.
I suspect some people (not saying this author) butcher the explanation intentionally. Makes it seem like an ineffable topic that only the smartest programmer can understand.
I agree, and I actually think a better word for `bind` is `then`.
For anyone unfamiliar, the `then` of Promises corresponds to the monadic `bind` or `flatMap` (if the given function returns a Promise) or the functorial `map` (if the given function returns a plain value).
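A quick sketch of those two behaviors with nothing beyond standard Promises (the values here are just illustrative):

```javascript
const p = Promise.resolve(2);

// Functorial `map`: the callback returns a plain value,
// and `then` wraps it back up in a Promise.
p.then(x => x + 1).then(v => console.log(v)); // logs 3

// Monadic `bind`/`flatMap`: the callback returns another Promise,
// and `then` flattens it instead of yielding Promise<Promise<number>>.
p.then(x => Promise.resolve(x * 10)).then(v => console.log(v)); // logs 20
```

Note that `then` silently picks the right behavior based on what the callback returns, which is exactly why it plays both the `map` and `bind` roles.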
A monad, then, is just the interface shared between Promises (`then` and `resolve`), collections (`flatMap` and `wrap`), etc.
The hard part for me was visualising the pattern for more complicated types like parsers (functions from strings to results) and continuations.
I kind of agree with this... In many monads binding creates a time-dependent sequentiality!
But there are some that do not. For instance, the "reverse state monad" would not make sense with 'bind' named 'then' since... as the name implies, state flows backwards through time in it!
if (cond1)
    return val1;
if (cond2)
    return val2;
if (cond3)
    return val3;
return defaultVal;
is simpler and better than
if (cond1) {
    return val1;
} else if (cond2) {
    return val2;
} else if (cond3) {
    return val3;
} else {
    return defaultVal;
}
or something like that. In a certain sense this is objectively false as the second one uses fewer features than the first (a statement we can make formal using monads, but that's unrelated). I'd argue this is doubly true because whenever non-linear flow control begins to be used pervasively it is hard to know what code will be executed and under what conditions. With nesting at least these conditions are obvious.
In any case, we can solve this conundrum with an Option or Maybe type, which encapsulates the imperative behavior of short-circuiting failure directly and limits its scope.
rather than (if I understand the monadic way of doing things correctly)
ra <- fa()
rb <- fb(ra)
rc <- fc(rb)
How is the scope of the short-circuiting limited? We will obtain a None result in case of error, but we still do not know when the error occurred. Granted, we can add some guards, but we can do that with promises too. Since the return statements are always at the end of each separate function, shouldn't the scope not be a problem?
I am kind of new to this so please bear with me. Thanks.
The idea is that from the "inside", as we are using do-notation, we merely think of ourselves as working in an imperative language where failure might occur and short-circuit things, while from the "outside" we see the type of this computation as Maybe and can examine whether or not it actually failed. The "inside"/"outside" dynamic is the scope delimitation.
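To make the inside/outside distinction concrete, here is a minimal Maybe sketch in JavaScript (the names `Just`, `Nothing`, `bind`, and `safeDiv` are illustrative, not from any library):

```javascript
// A value is either Just(x) or Nothing.
const Just = value => ({ isJust: true, value });
const Nothing = { isJust: false };

// `bind` short-circuits on Nothing, like an early return.
const bind = (m, f) => (m.isJust ? f(m.value) : Nothing);

// "Inside": each step may fail.
const safeDiv = (a, b) => (b === 0 ? Nothing : Just(a / b));

// "Outside": the caller only sees a Maybe and can inspect it.
const result = bind(bind(Just(10), x => safeDiv(x, 2)), y => safeDiv(y, 0));
console.log(result.isJust); // false -- the failure never escapes the Maybe
```

The short-circuiting is scoped in the sense that it cannot leak: no exception propagates, and the only way failure is observed is by inspecting the resulting Maybe value.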
> Is there any reason to use monads in (let's say) Javascript rather than promises?
Not intrinsically, IMO. The main value of monads is the shared interface, and its utility is contingent on tools that recognise that interface.
In that respect, it's as useful as providing e.g. a `map` method for promises, arrays, dictionaries, etc. Though in my experience JavaScript rarely seems to be written with this kind of generic interface in mind (e.g. there's no coherent interface mandated between classes in the standard library).
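For illustration, a sketch of what such a uniform `map`-like interface could look like in plain JavaScript (the helper names are hypothetical; only `Array.prototype.map`, `Promise.prototype.then`, and `Object.entries`/`fromEntries` are standard):

```javascript
// The same "apply f inside the container" idea, spelled differently
// for each container because JavaScript mandates no shared interface.
const mapArray = (xs, f) => xs.map(f);
const mapPromise = (p, f) => p.then(f); // `then` acts as Promise's map
const mapObjectValues = (o, f) =>
  Object.fromEntries(Object.entries(o).map(([k, v]) => [k, f(v)]));

console.log(mapArray([1, 2, 3], x => x * 2));       // [2, 4, 6]
console.log(mapObjectValues({ a: 1 }, x => x * 2)); // { a: 2 }
```

Each container needs its own ad hoc wrapper precisely because nothing in the standard library guarantees a common `map` method across them.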
Yep, when I see "flat management" I really hesitate to apply. Especially as a minority in tech, I would rather find a hierarchical place with good managers who value good arguments than trust the herd.
Unless you have a team of unusually conscientious engineers, "democratic" or "flat" decision-making often translates to who shouts the loudest.
Not to mention engineers often want to complete the project in the most technically exciting way possible, rather than the way that will most benefit the product. Flat structure + too many engineers like that and you end up with this http://www.smbc-comics.com/?id=2597, written in Haskell.
> Yep, when I see "flat management" I really hesitate to apply. Especially as a minority in tech, I would rather find a hierarchical place with good managers who value good arguments than trust the herd.
If I were a minority or worse at getting along with others I'd definitely avoid flat management schemes as well.
> Unless you have a team of unusually conscientious engineers, "democratic" or "flat" decision-making often translates to who shouts the loudest.

Agreed.
> Not to mention engineers often want to complete the project in the most technically exciting way possible, rather than the way that will most benefit the product.
Also sadly true and even has a name: Resume Driven Development
Aww, why pick on Haskell here? You know Bump[0] used Haskell. Point being that Haskell can be a choice which is a choice that most benefits the product. I recently had experience writing an app to deal with medical data sets that really benefited from correct by construction or wholemeal[1] programming.
Yeah, the Haskell part is mostly tongue in cheek :P I wouldn't use it for just any project, but I can see how the benefits of Haskell could outweigh its difficulties in some scenarios. Personally, I like functional programming, but I'm not sure it's suitable for the masses (perhaps some day? https://www.youtube.com/watch?v=oYk8CKH7OhE).
>Not to mention engineers often want to complete the project in the most technically exciting way possible, rather than the way that will most benefit the product.
Well, this has a habit of happening everywhere, no matter who is running the show.
Repeated major failures would be something like regularly failing to deliver or making a critical error multiple times: a pattern of behavior that doesn't fit the culture, NOT a one-off mistake. A great example of this is the engineer who accidentally leaked House of Cards a week early. The only consequence was that a "process" was removed to make doing what he was trying to do easier. I don't think firing him was ever considered; he didn't have a pattern of carelessness, and this just happened to be a mistake.
I'm told the only "fuck up" that will get you fired on the spot for the first occurrence is sexual harassment.
My experience is fairly consistent with that of others I know; there are of course a few exceptions. Though please do bear in mind I have a severe observation bias here: I interact with significantly more people who want to work at Netflix than people who don't.
Tolerance of failure is different for ICs and Managers, partly because Managers have the ability to really screw things up in ways that aren't technical and that affect a bunch of other people in the company.
The shortest IC tenure I've seen here was the result of being a brilliant jerk, which interviewers (myself included) did not catch during the interview cycle. The shortest Manager tenure I've seen was noticeably shorter than that, and was the result of pissing off their engineers.
I've not seen any IC here screw up on a technical level in such a way as to get fired for a first or even second screwup -- it really takes a pattern. My shortest IC termination was six months from hiring to termination.
As someone from a group that's underrepresented in tech, a co-op structure would make me hesitant to apply. I would be concerned that as a minority my voice would be lost. At least with a hierarchical structure I sign on knowing who the decision makers are and what their approach is (and thus I can find a place where the decision maker values solid arguments rather than whether the majority agrees).
Most co-ops (including ours) are run by a representative democracy. The difference between a co-op's hierarchy and that of a privately-owned corporation is that the co-op gives you transparency into the decision-making process.
I've never felt I've known whether the decision makers at my previous employers valued solid/logical argument... because I've never been party to the top level decision-making process unless I was a part of it.
I would like to know how they gauge performance too. Some cultures are about the perception of productivity, some are about actual productivity. Taken to an extreme, either case can make for a miserable work environment.
That is precisely the point. People with similar qualifications should have similar salaries. The "not great" trend implies salaries were correlated with something other than merit and qualifications.
That's not true. There is a market for labor that is constantly in flux based on supply and demand. Say you were hired when there was excess supply and limited demand: you might have settled for a reasonably middle-of-the-curve salary. Fast forward to today, when it's much harder to hire, and all of a sudden fresh grads can command salaries equal to what someone with 2-3 years of experience was previously offered.
In addition, if you are hurting for someone in order to hit a deadline, it might just make sense to pay unreasonably as long as the numbers still work out.
>Fast forward to today where it's much harder to hire and all of a sudden, fresh grads can command salaries that might be equal to what someone with 2-3 years of experience was previously offered.
Perhaps -- and existing salaries should be adjusted to reflect that.
Exactly. If companies don't adjust existing salaries to reflect that, then they are basically telling their employees to quit because the best way to get a "raise" is to switch companies.
But of course they won't, because salary information isn't available, so it is difficult for employees to make these decisions.
In other words, companies can systematically distort the free market in labour by withholding information about the true salaries it offers from its employees.
The door doesn't swing both ways, though: people don't get a salary cut when supply later exceeds demand. If you wanted a salary system that truly responded to job-market fluctuations, you'd have to accept the possibility of a pay cut too.
Inertia can often set in especially if you're someone who is not very vocal about asking for a raise or keeping track of how in-demand your skills are.
How about: People with similar performance should have similar salaries. I've seen qualified and capable people add no value, and less qualified people hustle to perform well.
Results are hard to quantify. You can jump on a successful project and skew your contribution factor. But what if you are working on important tooling that requires difficult problem-solving? That tooling might not be directly quantifiable the way, say, an advertisement framework is, yet it may be more valuable in the long term.
I agree. I also agree that using a spreadsheet to determine this is quite difficult. I'm not saying that Google is in the right here. I'm just saying that Google's own metric for software engineer levels is not a statement of ability.
The public schools I attended were far more economically, culturally, ideologically, and ethnically diverse than both my private college and post-college work environments (and you can throw in gender diversity too wrt my work environments). I would not have met kids like this otherwise -- certainly not if I had been in a homeschool group socializing with the children of parents who had similar philosophical views about schooling.
Public education is certainly in need of big reforms, but I'm grateful for the diversity I was exposed to attending one. It's a big part of what keeps me grounded in the ridiculous Silicon Valley bubble.
What is your programming interview like? If it's whiteboarding algorithms, it's likely to be eliminating a lot of good people who get nervous, and selecting for people who are good at studying interview questions and coding under pressure in an artificial environment. There is good evidence it doesn't predict job performance well, see: http://www.wired.com/2015/04/hire-like-google/ and https://twitter.com/mxcl/status/608682016205344768
We completely agree. We do all of our interviews via screen share, allowing candidates to use whichever language and environment they're most comfortable with. We also give a selection of problems, one of which is algorithm based, allowing candidates to choose which they prefer.
Ultimately, though, people are aware they're being watched and assessed under timed conditions, so it's going to be somewhat stressful. If we think someone is so nervous they're clearly not able to code at all, we'll offer a take-home test as well before making a final decision.
As a matter of interest, does your company also ask managers to go through a management test as part of the hiring process? Let them run a team for twenty minutes or an hour and evaluate them on that basis? If not, why not? :-)
I think "take-home" style tests are the only reasonable way to evaluate this kind of technical competency. They are the closest to realistic. Throw people a small problem of the kind they'll actually be asked to work on. Let 'em deal with it--including documenting their solution!