> So I made a demo and started showing my co-workers what it was and realized that some of them had never seen the Game of Life before.
So let me get this straight: there are Google software engineers who passed the three-month interview process, which some consider the toughest in the world, and they have never heard of Conway's Game of Life.
This makes me feel a little better about myself now.
As a small counterpoint, when I interviewed at Google, one of the interviewers asked me how I might design an app to play the Game of Life, starting with simple cases and scaling up to boards with millions and millions of cells (so it was secretly about designing a distributed computation system, I think).
I would bet that 95% of Google SWEs have heard of the Game of Life, but you can be an exceptionally good software engineer without that knowledge. It's usually considered recreational mathematics, which is a small or non-existent part of most core university CS curriculums, and not everyone in the field explores such topics outside of their formal work or studies.
Also, their interview process is usually not 3 months. Most engineers have a 45-minute phone interview or two, followed by a day of onsite interviews lasting 4-5 hours.
Many of those who cleared it might be the kind of people who read interview questions on programming forums every day and have mastered the art of gaming interviews.
It's hinted at by saying that the author feels better about himself now (implied superiority). Whether the particular trait that he feels superior about is "smart" is left up to your imagination!
I've heard of the Game of Life, but I couldn't tell you the rules or describe it exactly. I never bothered to look into it. Judge away!
I guess, in the same sense that the implementation of any local binary executable is "all there" because you could theoretically figure out what all the machine code does. (But granted, I'm sure that reverse-engineering obfuscated JS is generally much easier than reverse-engineering a binary.)
RWW: So it took two months to do this?
PD: It’s not a trivial problem. It’s tricky. I’m quite proud of the way I did it. I can’t talk about the way we actually implemented it. (laughs)
I wonder how they implemented it so that it renders so efficiently.
I figured that by caching the results of common patterns appearing in blocks of 2x2, 3x3, 4x4, etc. cells that are surrounded by all-'off' cells, you can easily skip many repetitive calculations.
Since repeating patterns tend to turn up a lot on their own in CGoL, they would end up cached, and a large block of n cells could be computed in one lookup rather than requiring O(8n) time. That seems to be how Hashlife works: http://en.wikipedia.org/wiki/Hashlife
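For the curious, here's roughly what that memoization looks like in TypeScript. This is my own sketch with my own names and representation; it's not Hashlife proper (which recurses on nested quadrants) and certainly not Google's code:

```typescript
// Hypothetical sketch of memoized block evolution, the core idea behind
// Hashlife. Representation and names are mine.

type Block = number[][]; // rows of 0/1 cells

const keyOf = (b: Block): string => b.map(row => row.join("")).join("|");
const cache = new Map<string, Block>();

// Naive one-generation step for a block, treating everything outside it
// as dead (reasonable for blocks surrounded by 'off' cells).
function evolveSlow(b: Block): Block {
  const h = b.length, w = b[0].length;
  const next: Block = b.map(row => row.map(() => 0));
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let n = 0; // live-neighbor count
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const yy = y + dy, xx = x + dx;
          if (yy >= 0 && yy < h && xx >= 0 && xx < w) n += b[yy][xx];
        }
      }
      // B3/S23: birth on exactly 3 neighbors, survival on 2 or 3.
      next[y][x] = n === 3 || (n === 2 && b[y][x] === 1) ? 1 : 0;
    }
  }
  return next;
}

// Memoized wrapper: a blinker computed once is a map lookup forever after.
function evolve(b: Block): Block {
  const k = keyOf(b);
  let next = cache.get(k);
  if (!next) {
    next = evolveSlow(b);
    cache.set(k, next);
  }
  return next;
}
```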
I've seen a detailed explanation of a very sophisticated implementation of GOL in java (using caching, cycle detection and similar things), but I can't find it right now. Damn.
I tend to think of cellular automata optimization as being related to data compression. This is also a simple concept with no simple solution, and what solutions are best depends on the type of data being processed. In Conway's Life, patterns tend to be blobby.
For blobby universes, one should probably consider dividing the universe up into blocks approximately the size of the blobs. For Life, 4x4 to 8x8 seem reasonable. I chose the upper bound, 8x8, for reasons of convenience: there happen to be 8 bits in a byte. I strongly considered 4x4, but it didn't work out as nicely....
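To make the byte convenience concrete, here's one hypothetical packing (my naming, not necessarily the author's): one byte per row, one bit per cell, so a whole 8x8 block fits in a Uint8Array of length 8.

```typescript
// Hypothetical 8x8 block: one byte per row, one bit per cell.

type Block8 = Uint8Array; // length 8: rows top to bottom

function getCell(block: Block8, x: number, y: number): number {
  return (block[y] >> (7 - x)) & 1;
}

function setCell(block: Block8, x: number, y: number, alive: number): void {
  if (alive) block[y] |= 1 << (7 - x);
  else block[y] &= ~(1 << (7 - x));
}

// A whole row can be tested for emptiness with one comparison, and
// shifting a row left or right is a single bit operation.
const isEmpty = (block: Block8): boolean => block.every(row => row === 0);
```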
You should put the blocks in some kind of list, so that you waste zero time in the empty parts of the universe.
Already, note a complication: New elements in the list must be introduced if the pattern grows over a block's boundaries, but we have to know if the block's neighbor already exists. You can either do a simple linear search of the list, or binary search, or keep some kind of map. I chose to make a hash table. This is solely used for finding the neighbors of a new block; each existing block already keeps a pointer to its neighbors, as they will be referenced often.
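A sketch of that bookkeeping, under assumed names (LifeBlock and getBlock are mine): the hash table is touched only when a block is created; after that, neighbors are followed by pointer.

```typescript
// Hypothetical block index: map lookups only on creation, cached neighbor
// references thereafter.

interface LifeBlock {
  x: number;            // block coordinates, in units of blocks
  y: number;
  cells: Uint8Array;    // 8x8 bitmap, one byte per row
  north?: LifeBlock;    // cached neighbor pointers, wired once
  south?: LifeBlock;
  east?: LifeBlock;
  west?: LifeBlock;
}

const blocks = new Map<string, LifeBlock>();
const keyOf = (x: number, y: number): string => `${x},${y}`;

// Find or create the block at (x, y); the map is consulted only here.
function getBlock(x: number, y: number): LifeBlock {
  let b = blocks.get(keyOf(x, y));
  if (!b) {
    b = { x, y, cells: new Uint8Array(8) };
    blocks.set(keyOf(x, y), b);
    // One-time lookups; the hot loop never touches the map again.
    b.north = blocks.get(keyOf(x, y - 1));
    if (b.north) b.north.south = b;
    b.south = blocks.get(keyOf(x, y + 1));
    if (b.south) b.south.north = b;
    b.west = blocks.get(keyOf(x - 1, y));
    if (b.west) b.west.east = b;
    b.east = blocks.get(keyOf(x + 1, y));
    if (b.east) b.east.west = b;
  }
  return b;
}
```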
There must also be an efficient algorithm within the blocks. I chose to primarily blaze straight through each block. There are no inner loops until all cells in a block are processed. Also, fast lookup tables are employed: I look up 4x4 blocks to determine the inner 2x2.
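Here's one possible construction of such a table. The bit packing is my guess; the point is just that a 4x4 neighborhood is 16 bits, so all 65536 cases can be precomputed, each yielding the next-generation state of the inner 2x2.

```typescript
// Hypothetical 4x4 -> inner 2x2 lookup table. Bit i of the input encodes
// cell (x, y) of the 4x4 where i = y * 4 + x; the 4-bit result encodes
// the next generation of the inner 2x2 (rows 1-2, columns 1-2).

const TABLE = new Uint8Array(1 << 16);

function buildTable(): void {
  for (let bits = 0; bits < 1 << 16; bits++) {
    let result = 0;
    for (let y = 1; y <= 2; y++) {
      for (let x = 1; x <= 2; x++) {
        let n = 0; // live neighbors, all inside the 4x4 by construction
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++) {
            if (dx === 0 && dy === 0) continue;
            n += (bits >> ((y + dy) * 4 + (x + dx))) & 1;
          }
        }
        const alive = (bits >> (y * 4 + x)) & 1;
        if (n === 3 || (n === 2 && alive === 1)) {
          result |= 1 << ((y - 1) * 2 + (x - 1));
        }
      }
    }
    TABLE[bits] = result;
  }
}

buildTable();
// Usage: const inner2x2 = TABLE[some4x4Bits]; one array read replaces
// four full neighbor counts in the hot loop.
```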
Note: CA programs typically consist of 2 main loops (plus a display loop), because CA rules operate on the cells in parallel, while the microprocessor is conceptually serial. This means that there must be two copies of the universe, effectively, so that no important info is destroyed in the process of creating the next generation. Often these 2 copies are not symmetrical. It was a great struggle for me, since almost every time I took something out of one loop to make it faster, I had to add something else to the other loop! Almost every time, that is; the exceptions to that rule lead to the best optimizations. In particular, there are good tradeoffs to be considered in bit-manipulations: shifting, masking, recombining to form an address in the lookup table....
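The two-copies point in miniature (WIDTH, HEIGHT, and evolveInto are placeholders of mine):

```typescript
// Generation N is read-only while generation N+1 is written, then the
// buffers swap roles. No copying of the universe is ever needed.

const WIDTH = 256, HEIGHT = 256; // arbitrary size for this sketch
let current = new Uint8Array(WIDTH * HEIGHT); // generation N: read only
let scratch = new Uint8Array(WIDTH * HEIGHT); // generation N+1: write only

function step(evolveInto: (src: Uint8Array, dst: Uint8Array) => void): void {
  evolveInto(current, scratch);             // never writes into 'current'
  [current, scratch] = [scratch, current];  // swap; O(1), no copy
}
```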
It can also be considered that sometimes the contents of a block may stabilize, requiring no further processing. You could take the block out of the list, putting it in a "hibernation" state, only to be re-activated if a neighboring block has some activity spilling into it. These blocks would take zero processing time, just like a blank region of the universe.
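A minimal sketch of that hibernation idea, assuming an active list and a wake call triggered when a neighbor's activity spills over an edge (all names are mine):

```typescript
// Hypothetical hibernation: a block whose bitmap didn't change this
// generation drops out of the active list until a neighbor wakes it.

interface SleepyBlock {
  cells: Uint8Array;
  active: boolean;
}

const activeList: SleepyBlock[] = [];

// Call after stepping a block; invariant: active blocks are in activeList.
function maybeHibernate(b: SleepyBlock, changed: boolean): void {
  if (!changed && b.active) {
    b.active = false; // costs zero time now, like a blank region
    activeList.splice(activeList.indexOf(b), 1); // a real version might
    // use a linked list to make removal O(1)
  }
}

// Called when a neighboring block's activity spills into this one.
function wake(b: SleepyBlock): void {
  if (!b.active) {
    b.active = true;
    activeList.push(b);
  }
}
```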
Period-2 oscillators might also not be very difficult to detect, and remove from the processing time. This might be worthwhile in Life, because the blinker is the most common kind of random debris. Higher period oscillators are much more rare. It is also possible that gliders could be detected and simulated. You will get diminishing returns from this kind of optimization, unless you take it to an extreme (cf. HashLife).
Also, a block of cells that's completely empty might not be worth deallocating and removing from the hash table for a while. That takes some processing time, which could be significant in the case of an oscillator moving in and out of its space repeatedly. Only when memory gets low should the oldest blocks from the "morgue" be recycled.
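One possible reading of that "morgue" (names are mine, and the bookkeeping for reviving a buried block is omitted):

```typescript
// Hypothetical morgue: empty blocks keep their hash-table slot with a
// death timestamp; only the oldest are recycled when memory runs low.

interface Corpse { key: string; diedAt: number; }
const morgue: Corpse[] = []; // oldest first, since deaths append in order

function bury(key: string, generation: number): void {
  morgue.push({ key, diedAt: generation });
}

// A real version would also pull a block out of the morgue if an
// oscillator re-enters it; that case is omitted here.
function recycleIfLow(blocks: Map<string, unknown>, maxBlocks: number): void {
  while (blocks.size > maxBlocks && morgue.length > 0) {
    blocks.delete(morgue.shift()!.key); // finally leave the hash table
  }
}
```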
When the program is fast enough, it should be considered that it isn't worth displaying generations any faster than the eye can see, or at least not much faster than the refresh rate of the monitor. Especially in windowed environments, display time can be a real bottleneck.
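A common way to do that in a browser (my sketch, not the doodle's code) is to batch several generations per painted frame and let requestAnimationFrame cap drawing at the monitor's refresh rate:

```typescript
// Compute faster than you display: several CA generations per painted frame.

const GENERATIONS_PER_FRAME = 4; // tuning knob, chosen arbitrarily here

// Stubs for this sketch; a real app would step the CA and paint a canvas.
const step = (): void => { /* advance the universe one generation */ };
const draw = (): void => { /* render the current state to the canvas */ };

function frame(): void {
  for (let i = 0; i < GENERATIONS_PER_FRAME; i++) step();
  draw(); // touch the screen once per refresh, not once per generation
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```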
Can you talk about what is difficult about it? I know that Windows game of life programs can use hashes of positions to compute new steps even for massive worlds very efficiently. Is the JS difficulty with the graphics on the canvas? Or something else?
He wouldn't give me any specifics at all, but I can tell you that the challenge was related to the extremely low tolerance for lag on a search result page that's treated as law company-wide. He had to do it without slowing down the search page AT ALL, for anybody. Check it out; it even works on mobile.
One argument for app.net appeared to be "controlling my own data" so I thought I would point people toward what Twitter has said about giving users the ability to download all of their tweets soon.
I hope they deliver but I'm not too worried about it if they don't. I currently archive my tweets using IFTTT: Delicious.com for the ones with links (http://ifttt.com/recipes/332), Dropbox for the whole stream (http://ifttt.com/recipes/37991).
It's not for common users. It's for people who want to pay for this service. Developers will build federation out to Twitter to reach those who don't want to participate.
That doesn't have to happen. Even though the conversation has barely begun, most of the people who are behind this are already concentrating on federation with Twitter. That's how the reach problem will be solved.
From the chart, it seems the sales were affected much more drastically by external promotion (being featured by Apple, mentioned on TUAW, and cross-promoted from the other app) than by any search algorithm changes.
In my opinion, considering how many timer apps there are in the App Store already, making $5k in a little over a month is a pretty good result. That's a nice little passive income stream of $127/day for an app he probably doesn't have to put that much more work into (although I don't think those numbers will be sustainable).
Key points from an interview with Google engineering director Peter Magnusson during Google I/O. He explained Google's cloud direction and insists that the future lies in completely managed services. Do you agree?