Couldn't they just write all locations to a public S3 file, maybe split into geo squares depending on your viewport? The client would just keep polling the relevant S3 file(s). Effectively unlimited bandwidth (at a cost, yes..)
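Roughly what I mean, as a sketch. The tile size, key layout, and bucket name are all made up; the idea is just that the client derives S3 keys from its viewport and polls those objects:

```python
import math

# Hypothetical key layout: one object per 1x1-degree grid square,
# e.g. "locations/40/-74.json" (all names illustrative).

def tile_key(lat, lon, deg=1.0):
    """S3 key for the grid square containing a point."""
    return f"locations/{math.floor(lat / deg)}/{math.floor(lon / deg)}.json"

def viewport_keys(lat_min, lat_max, lon_min, lon_max, deg=1.0):
    """All grid-square keys covering a viewport bounding box."""
    keys = []
    for lat in range(math.floor(lat_min / deg), math.floor(lat_max / deg) + 1):
        for lon in range(math.floor(lon_min / deg), math.floor(lon_max / deg) + 1):
            keys.append(f"locations/{lat}/{lon}.json")
    return keys

# The client would then poll something like
# https://<bucket>.s3.amazonaws.com/<key> for each key on a timer.
```

Writers only ever touch a fixed set of objects, and every client fetch is a plain GET that S3 can serve at whatever scale you're willing to pay for.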
Back when S3 and EC2 were brand spanking new services, and EC2 didn't have persistent disks, I got to see a quite well-designed analytics pipeline. The inbound tracking HTTP requests went into a pool that aggregated them locally in MySQL, then wrote out the database files to S3. A separate pool would pull down these files and merge them into a local MySQL database for the web app UI. So kind of like a very simplistic map-reduce atop S3 using MySQL MyISAM files.
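A toy sketch of that shape, with sqlite3 standing in for MySQL and a local directory standing in for S3 (all table names, paths, and the hit schema are made up for illustration):

```python
import glob
import os
import sqlite3

def aggregate(hits, out_path):
    """Ingest node: roll up raw tracking hits into a local db file,
    then 'upload' it (here: just leave it in the shared directory)."""
    db = sqlite3.connect(out_path)
    db.execute("CREATE TABLE counts (page TEXT PRIMARY KEY, n INTEGER)")
    for page in hits:
        db.execute(
            "INSERT INTO counts VALUES (?, 1) "
            "ON CONFLICT(page) DO UPDATE SET n = n + 1",
            (page,),
        )
    db.commit()
    db.close()

def merge(bucket_dir, ui_db_path):
    """UI node: pull down each uploaded db file and merge its counts
    into the serving database, returning the merged totals."""
    ui = sqlite3.connect(ui_db_path)
    ui.execute("CREATE TABLE IF NOT EXISTS counts (page TEXT PRIMARY KEY, n INTEGER)")
    for path in sorted(glob.glob(os.path.join(bucket_dir, "*.db"))):
        src = sqlite3.connect(path)
        for page, n in src.execute("SELECT page, n FROM counts"):
            ui.execute(
                "INSERT INTO counts VALUES (?, ?) "
                "ON CONFLICT(page) DO UPDATE SET n = n + excluded.n",
                (page, n),
            )
        src.close()
    ui.commit()
    return dict(ui.execute("SELECT page, n FROM counts"))
```

The ingest pool is the "map" side (many independent partial aggregates as files), the merge pool is the "reduce" side, and S3 is the shuffle in between.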
It was impressively simple to keep running, and they easily scaled it to handle massive spikes related to sports events and such. The only real downside was the delay before new data hit the UI layer, but this was built out while a lot of people still had WebTrends installs kicking around, so it was perfectly acceptable for their customers.