
Here's the interview I published with the creator! http://www.readwriteweb.com/archives/how-a-google-engineer-b...


> So I made a demo and started showing my co-workers what it was and realized that some of them had never seen the Game of Life before.

So let me get this straight: there are Google software engineers who passed the whole three-month interview process, which some consider the toughest in the world, and they had never heard of Conway's Game of Life?

This makes me feel a little better about myself now.


As a small counterpoint, when I interviewed at Google, one of the interviewers asked me how I might design an app to play the Game of Life, starting with simple cases and scaling up to boards with millions and millions of cells (so it was secretly about designing a distributed computation system, I think).


I would bet that 95% of Google SWEs have heard of the Game of Life, but you can be an exceptionally good software engineer without that knowledge. It's usually considered recreational mathematics, which is a small or non-existent part of most core university CS curriculums, and not everyone in the field explores such topics outside of their formal work or studies.

Also, their interview process is usually not 3 months. Most engineers have a 45-minute phone interview or two, followed by a day of onsite interviews lasting 4-5 hours.


It's possible that not all of the co-workers he showed were engineers.


'Knowing' algorithms is hardly great.

Many of those who cleared it might be the kind of people who read interview questions on programming forums every day and have mastered the art of gaming interviews.


The only reason I've heard of it is that they had the Martin Gardner essay collections at my library growing up.


I find it interesting that your metric of smart is whether or not someone has knowledge of the Game of Life.


I think this is less about IQ and more about culture. (And probably not even about looking down on other people.)


The comment makes no mention of the word "smart", just that Google's interview process is long and hard.


It's hinted at by saying that the author feels better about himself now (implied superiority). Whether the particular trait that he feels superior about is "smart" is left up to your imagination!

I've heard of the Game of Life, but I couldn't tell you the rules or describe it exactly. I never bothered to look into it. Judge away!


My interpretation would be that he saw Google employees as omniscient demigods and now has a more realistic view.


Hehehe


> I can’t talk about the way we actually implemented it.

It's all there, right? In obfuscated, minimized, but still-readable JavaScript.


I guess, in the same sense that the implementation of any local binary executable is "all there" because you could theoretically figure out what all the machine code does. (But granted, I'm sure that reverse-engineering obfuscated JS is generally much easier than reverse-engineering a binary.)


Found it and beautified it as much as I could. Still really tough to read: http://cl.ly/K74W


     RWW: So it took two months to do this?

     PD: It’s not a trivial problem. It’s tricky. 
         I’m quite proud of the way I did it. 
         I can’t talk about the way we actually implemented
         it.   (laughs)

I wonder how they implemented it so that it renders so efficiently.


I figured that by caching the results of common patterns appearing in blocks of 2x2, 3x3, 4x4, etc. cells that are surrounded entirely by 'off' cells, you can easily skip many repetitive calculations:

  ----    ----
  -oo- =\ -oo-
  -oo- =/ -oo-
  ----    ----

  -----    -----    -----
  --o--    -----    --o--
  --o-- =\ -ooo- =\ --o--
  --o-- =/ ----- =/ --o--
  -----    -----    -----

  ------    ------    ------
  -oo---    -oo---    -oo---
  -oo--- =\ -o---- =\ -oo---
  ---oo- =/ ----o- =/ ---oo-
  ---oo-    ---oo-    ---oo-
  ------    ------    ------

Since repeating patterns tend to turn up a lot on their own in CGoL, they would end up cached, and a block of n cells could be advanced in one lookup rather than requiring O(8n) neighbor checks. That seems to be how Hashlife works: http://en.wikipedia.org/wiki/Hashlife
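
For a rough idea of what that kind of caching looks like, here's a minimal sketch in plain JavaScript. The names and the string-key scheme are my own invention for illustration, not anything from Google's implementation:

  // Cache mapping a small block's contents to its next generation.
  // Keys are the block's rows joined into a string, e.g. "010/010/010".
  var cache = {};

  // Naive one-step rule application for a small square block
  // (cells outside the block are treated as dead).
  function step(block) {
    var size = block.length, next = [];
    for (var y = 0; y < size; y++) {
      next.push([]);
      for (var x = 0; x < size; x++) {
        var live = 0;
        for (var dy = -1; dy <= 1; dy++)
          for (var dx = -1; dx <= 1; dx++)
            if ((dy || dx) && block[y + dy] && block[y + dy][x + dx])
              live++;
        next[y].push(live === 3 || (live === 2 && block[y][x]) ? 1 : 0);
      }
    }
    return next;
  }

  // Memoized step: identical blocks are only ever computed once.
  function cachedStep(block) {
    var key = block.map(function (row) { return row.join(''); }).join('/');
    if (!(key in cache)) cache[key] = step(block);
    return cache[key];
  }

Hashlife takes the idea much further: it hashes quadtree nodes so the cache is shared across every scale, and it can jump many generations at a time.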


Here is a CoffeeScript implementation of Hashlife

https://github.com/raganwald/cafeaulife



Where doesn't it give appropriate credit?

1. The description line for the Github package reads "Gosper's HashLife in CoffeeScript"

2. The readme says: "Cafe au Life implements Bill Gosper's HashLife algorithm."

3. The docs say: "Cafe au Life is based on Bill Gosper's brilliant HashLife algorithm," linking to http://en.wikipedia.org/wiki/Hashlife


Well, I could be wrong, but I swear I did an 'F3' on the page earlier to search for Gosper, and it seemed like it wasn't there then.

Apologies for not being able to use a browser and falsely accusing you!


Credit for ideas is important. I'd rather 100 people falsely accuse me than have one omission slip through. Thanks for caring.


FYI: It has its own home page: http://recursiveuniver.se


I think listlife is a simple and efficient algorithm: http://dotat.at/prog/life/life.html

Some years ago I implemented something similar in JavaScript: http://pmav.eu/stuff/javascript-game-of-life-v3.1.1/

The benchmark can be really slow in some steps.


I've seen a detailed explanation of a very sophisticated implementation of GOL in java (using caching, cycle detection and similar things), but I can't find it right now. Damn.

Edit: found it http://www.ibiblio.org/lifepatterns/lifeapplet.html

I tend to think of cellular automata optimization as being related to data compression. This is also a simple concept with no simple solution, and what solutions are best depends on the type of data being processed. In Conway's Life, patterns tend to be blobby.

For blobby universes, one should probably consider dividing the universe up into blocks approximately the size of the blobs. For Life, 4x4 to 8x8 seems reasonable. I chose the upper bound, 8x8, for reasons of convenience: there happen to be 8 bits in a byte. I strongly considered 4x4, but it didn't work out as nicely.

You should put the blocks in some kind of list, so that you waste zero time in the empty parts of the universe.

Already, note a complication: New elements in the list must be introduced if the pattern grows over a block's boundaries, but we have to know if the block's neighbor already exists. You can either do a simple linear search of the list, or binary search, or keep some kind of map. I chose to make a hash table. This is solely used for finding the neighbors of a new block; each existing block already keeps a pointer to its neighbors, as they will be referenced often.
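
A hypothetical sketch of that neighbor-wiring in JavaScript (the map keyed by block coordinates is my own framing, not necessarily how the applet does it):

  // Map from "bx,by" block coordinates to block objects. Consulted
  // only when a new block is created, to wire up neighbor pointers.
  var blockMap = {};

  function key(bx, by) { return bx + ',' + by; }

  function addBlock(bx, by) {
    var block = { x: bx, y: by, neighbors: [] };
    blockMap[key(bx, by)] = block;
    // Link to any of the 8 surrounding blocks that already exist;
    // afterwards, neighbor access is pointer-chasing, not hashing.
    for (var dy = -1; dy <= 1; dy++) {
      for (var dx = -1; dx <= 1; dx++) {
        if (!dx && !dy) continue;
        var n = blockMap[key(bx + dx, by + dy)];
        if (n) {
          block.neighbors.push(n);
          n.neighbors.push(block);
        }
      }
    }
    return block;
  }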

There must also be an efficient algorithm within the blocks. I chose to primarily blaze straight through each block. There are no inner loops until all cells in a block are processed. Also, fast lookup tables are employed. I look up 4x4 blocks to determine the inner 2x2.
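
Such a table is cheap to precompute. A sketch of how one could be built, assuming a 4x4 block packed into a 16-bit integer with bit 4*y+x per cell (my packing convention, not necessarily the applet's):

  // Precompute the inner 2x2 result for every possible 4x4 block.
  var TABLE = new Array(65536);

  function bit(block, x, y) {
    return (block >> (4 * y + x)) & 1;
  }

  for (var b = 0; b < 65536; b++) {
    var result = 0;
    for (var y = 1; y <= 2; y++) {        // the four inner cells
      for (var x = 1; x <= 2; x++) {
        var live = 0;
        for (var dy = -1; dy <= 1; dy++)
          for (var dx = -1; dx <= 1; dx++)
            if (dy || dx) live += bit(b, x + dx, y + dy);
        if (live === 3 || (live === 2 && bit(b, x, y)))
          result |= 1 << (2 * (y - 1) + (x - 1));
      }
    }
    TABLE[b] = result;
  }

Stepping a block then reduces to shifting and masking to assemble each 16-bit index, plus one table read per inner 2x2.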

Note: CA programs typically consist of 2 main loops (plus a display loop), because CA rules operate on the cells in parallel, while the microprocessor is conceptually serial. This means that there must be two copies of the universe, effectively, so that no important info is destroyed in the process of creating the next generation. Often these 2 copies are not symmetrical. It was a great struggle for me, since almost every time I took something out of one loop to make it faster, I had to add something else to the other loop! Almost every time, that is; the exceptions to that rule lead to the best optimizations. In particular, there are good tradeoffs to be considered in bit-manipulations: shifting, masking, recombining to form an address in the lookup table....

It can also be considered that sometimes the contents of a block may stabilize, requiring no further processing. You could take the block out of the list, putting it in a "hibernation" state, only to be re-activated if a neighboring block has some activity spilling into it. These blocks would take zero processing time, just like a blank region of the universe.
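
The stability test can be as simple as comparing a block's contents before and after a step. A hypothetical sketch (the field names are mine; 'current' and 'next' are packed integers as in the table sketch above, so equality is a plain comparison):

  function updateActivity(block) {
    if (block.next === block.current) {
      block.quietGenerations++;
      // Several identical generations in a row: hibernate the block.
      if (block.quietGenerations > 2) block.hibernating = true;
    } else {
      block.quietGenerations = 0;
    }
  }

  // A neighbor spilling live cells into a dormant block wakes it.
  function wake(block) {
    block.hibernating = false;
    block.quietGenerations = 0;
  }

The same comparison against the generation before last would catch the period-2 oscillators mentioned next.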

Period-2 oscillators might also not be very difficult to detect, and remove from the processing time. This might be worthwhile in Life, because the blinker is the most common kind of random debris. Higher period oscillators are much more rare. It is also possible that gliders could be detected and simulated. You will get diminishing returns from this kind of optimization, unless you take it to an extreme (cf. HashLife).

Also, a block of cells that's completely empty might not be worth deallocating and removing from the hash table for a while. That takes some processing time, which could be significant in the case of an oscillator moving in and out of its space repeatedly. Only when memory gets low should the oldest blocks from the "morgue" be recycled.

When the program is fast enough, it should be considered that it isn't worth displaying generations any faster than the eye can see, or at least not much faster than the refresh rate of the monitor. Especially in windowed environments, display time can be a real bottleneck.


Meeeeee, too. Me, too. Believe me, I tried.


Can you talk about what is difficult about it? I know that Windows game of life programs can use hashes of positions to compute new steps even for massive worlds very efficiently. Is the JS difficulty with the graphics on the canvas? Or something else?


He wouldn't give me any specifics at all, but I can tell you that the challenge was related to the extremely low tolerance for lag on a search result page that's treated as law company-wide. He had to do it without slowing down the search page AT ALL, for anybody. Check it out; it even works on mobile.


Oh, I thought you meant you tried to implement something like this and had trouble. LOL. Well, thanks.


Great read. Thanks. :)


Glad you liked it. I'm glad people are finding out about this and think it's as cool as I do.


Thanks for that, Matt! I've replaced the promo video with the actual demo in the post.


The guarantee that App.net won't kneecap its developers is that its developers are paying customers.


If a service doesn't kneecap its developers, yet those developers have no userbase because nobody uses the service, does it make a sound?


I don't know. Twitter said they were going to launch annotations in the API, too, and that never materialized.


Delivery is always up in the air, isn't it?

One argument for app.net appeared to be "controlling my own data" so I thought I would point people toward what Twitter has said about giving users the ability to download all of their tweets soon.

I hope they deliver but I'm not too worried about it if they don't. I currently archive my tweets using IFTTT: Delicious.com for the ones with links (http://ifttt.com/recipes/332), Dropbox for the whole stream (http://ifttt.com/recipes/37991).


It's not for common users. It's for people who want to pay for this service. Developers will build federation out to Twitter to reach those who don't want to participate.


The whole spec is there on GitHub for your perusal and criticism, though. https://github.com/appdotnet/api-spec


That doesn't have to happen. Even though the conversation has barely begun, most of the people who are behind this are already concentrating on federation with Twitter. That's how the reach problem will be solved.


Fair points, but the fact is that the app was selling more when it was visible in search. When it dropped out of search, it stopped selling.


From the chart it seems like the sales were much more drastically affected by external promotion (being featured by Apple, mentioned on TUAW, and cross promoted from the other app) than any search algorithm changes.

In my opinion, considering how many timer apps there are in the App Store already, making $5k in a little over a month is a pretty good result. That's a nice little passive income stream of $127/day for an app he probably doesn't have to put that much more work into (although I don't think those numbers will be sustainable).


Key points from an interview with Google engineering director Peter Magnusson during Google I/O, in which he explained Google's cloud direction. He insists that the future is in fully managed services. Do you agree?


RWW has just posted a stark counterpoint to this article from another guest contributor. http://www.readwriteweb.com/start/2012/06/is-there-a-better-...

