
Could you elaborate, please?

Hmm.

This was a long time ago, so we didn't have GPUs or fancy rendering h/ware. We addressed every pixel individually.

So a radar image was painted to the screen, and then the next update was painted on top of that. But that just gives the live radar image ... we wanted moving objects to leave "snail trails".

So what you do for each update is:

* Decrement the existing pixel;

* Update the pixel with the max of the incoming value and the decremented value.

This then leaves stationary targets in place, and anything that's moving leaves a trail behind it so when you look at the screen it's instantly obvious where everything is, and how fast they're moving.

Ideally you'd want to decrement every pixel by one every tenth of a second or so, but that wasn't possible with the h/ware speed we had. So instead we decremented every Nth pixel by D and cycled through the pixels.

But that created stripes, so we needed to access the pixels in a pseudo-random fashion without leaving stripes. The area we were painting was 1024x1024, so what we did was start at the zeroth pixel and step by a prime number size, wrapping around. But what prime number?

We chose a prime close to (2^20)/phi. (Actually we didn't, but that was the starting point for a more complex calculation)

Since phi has no good rational approximation, this didn't leave stripes. It created an evenly spread speckle pattern. The rate of fade was controlled by changing D, and it was very effective.

Worked a treat on our limited hardware (ARM7 on a RiscPC) and easy enough to program directly in ARM assembler.
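
In modern Python, the scheme described above might look roughly like this (the decrement D, the per-update pixel budget, and using round(N/phi) directly as the stride are illustrative stand-ins, not the values actually used):

    import numpy as np
    from math import gcd

    W = H = 1024
    N = W * H                        # 2**20 pixels
    PHI = (1 + 5 ** 0.5) / 2
    STRIDE = round(N / PHI) | 1      # ~648055; odd, hence coprime with 2**20
    assert gcd(STRIDE, N) == 1       # so the walk visits every pixel before repeating

    D = 4                            # decrement per visited pixel (controls fade rate)
    BUDGET = N // 64                 # pixels we can afford to touch per update

    screen = np.zeros(N, dtype=np.int32)   # flattened persistence buffer
    cursor = 0

    def update(incoming):
        # One display update: fade a speckled subset of pixels, then blend
        # in the new radar frame with a per-pixel max so live echoes stay bright.
        global cursor
        for _ in range(BUDGET):
            cursor = (cursor + STRIDE) % N
            screen[cursor] = max(int(screen[cursor]) - D, 0)
        np.maximum(screen, incoming.reshape(-1), out=screen)
        return screen.reshape(H, W)

Calling update(frame) once per sweep, with frame a 1024x1024 array of echo intensities, gives the stationary blips plus the fading trails described above.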


Thanks for the story.

What's decrementing a pixel ?

I(x,y,t+1) = I(x,y,t) - c ?


Exactly that ... for a given pixel, reducing the existing level/brightness by some value (usually 1 by default) or by a fixed percentage.

Ah! Now I understand.

I was stepping out with my wife for a day out and had read your reply only cursorily. That reading had left me quite puzzled -- "I would have used an exponentially weighted moving average (EWMA) over time for the trails. Why is \phi important here in any form? Is \phi the weight of the EWMA?".

Now I get it; decrementing the pixels was quite peripheral to the main story.

The main story is that of finding a scan sequence that (a) cycles through a set of points without repetition and (b) without obvious patterns discernible to the eye.

In this, the use of \phi is indeed neat. I don't think it would have occurred to me. I would have gone with some shift-register sequence with cycle length 1024 * 1024, or a space-filling curve on such a grid.

This becomes even more interesting if you include the desideratum that the minimum distance between any two temporally adjacent pixels must not be small (to avoid temporal hot spots).

Finding the minimax -- the minimum distance over temporally adjacent pixels, maximized over all (1024 * 1024)! sequences -- might be intractable.

Another interesting formulation could be that, for any fixed k x k sized disc that could be drawn on the grid, the temporal interval between any two "revisit" events needs to be independent of the disc's position on the grid.

I think this is the road to the low-discrepancy sequences of quasi-Monte Carlo.
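
A quick way to sanity-check property (a), and to get a feel for the minimum spacing between temporally adjacent pixels (the quantity inside the minimax above), is a toy scan like the following; the stride and the sample count here are illustrative:

    from math import gcd, hypot

    W = H = 1024
    N = W * H
    PHI = (1 + 5 ** 0.5) / 2
    STRIDE = round(N / PHI) | 1      # odd, hence coprime with 2**20

    assert gcd(STRIDE, N) == 1       # (a): the scan hits every pixel exactly once per cycle

    # Smallest Euclidean distance between temporally adjacent pixels over a
    # prefix of the scan (running the full 2**20-step cycle works too, just slower).
    min_dist = float("inf")
    idx = 0
    for _ in range(100_000):
        nxt = (idx + STRIDE) % N
        dx = nxt % W - idx % W
        dy = nxt // W - idx // W
        min_dist = min(min_dist, hypot(dx, dy))
        idx = nxt
    print(min_dist)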


I used to be skeptical about this and would try to find alternative explanations: cognitive biases, coincidences, search requests from another device routed through a common wifi.

However, I have changed my mind through a lengthy process of attrition of possible explanations.

Recently my wife was around her friend who was having a vertigo spell. We talked about it when we met. None of us searched about it. Lo and behold my YouTube feed has videos on how to mitigate vertigo.

It's possible that information was transferred across the two phones that came into close proximity, the owner of one of which has a history of vertigo. But even that is a stretch: why transfer 'vertigo' specifically?


>>Recently my wife was around her friend who was having a vertigo spell. We talked about it when we met. None of us searched about it. Lo and behold my YouTube feed has videos on how to mitigate vertigo.

Again, while the simplest explanation is the most tempting one, we just have to consider that Google has an absolutely stunning amount of information on any of us. Like, it definitely knows your friend is your friend. It knows what your friend searched for recently, and it knows you met and spent some time together. So of course it makes sense to show you videos about some stuff that it marked as "interesting" for them. They are probably getting videos for stuff that you have looked up recently, whether you talked about it or not.


Yes, as you would note, this was one of my hypotheses. However, it is on shaky ground.

This friend has suffered from vertigo chronically; it was not a new, one-off episode. My wife's and her friend's phones have been in close proximity many, many times before. It's certainly odd that Google would recommend vertigo only after a vertigo spell happened in the presence of my wife. None of the three searched for vertigo.

Phone motion sensors detecting a vertigo spell? Well, that's a possibility, but I doubt Google would be running such a detector 24x7; it seems too expensive, unless the opportunity to show a timely ad is lucrative enough to cover the cost.

Although none of the three searched for vertigo, the friend may have searched for her pharmacy to refill her meds.

This is not the only incident. I have come to believe what I now believe about this eavesdropping after a long period of whittling away competing hypotheses. I would usually file these incidents under confirmation bias, but they have happened just too many times.

A quantitative Bayesian analysis would have been the right thing to do. On that count I am delinquent. I will, however, grant you this: human intuition is terrible at Bayesian analysis and tends to see significant patterns when there are none.


The reason Occam's razor is useful is that it draws the one line connecting two points, rather than any squiggle that passes through them.

Is that really true ?

The golden ratio is very specific, whereas any proportion that is vaguely close to 1.5 (equivalently, 3:2) gets called out as an example of the golden ratio.

The same tendency exists among wannabe-mathematician art critics who see a spiral and label it a logarithmic spiral or a Fibonacci spiral.


Certainly some art critics and artists over-apply and over-think so-called 'golden' geometry. What I think is happening is very simple... that artists avoid regularity (e.g. two lights of the same color and intensity, exact center placement, exact placement at thirds, corner placement, two regions at the same angle, two hue spreads of equal sides on opposite sides of the RYB hue wheel etc etc). These loose 'rules' of avoidance can be confused with 'rules' of prescription such as color harmony, golden section etc.

> especially when it’s taught by Anant Sahai, who’s a delightfully classic hardass immigrant professor ...

That's a rather odd way to begin an article. I understand the contributions of immigrant and temporarily-immigrant skilled professionals -- I have been the latter myself -- and yet this grates a little.

If one wants to express gratitude or compliments, I think there are better ways.


That's right.

A church does not have to be a church of faith; it can well be a church of reason.

What matters is that people with shared values get to spend time together on a regular basis without getting into the status games that might eventually show up no matter what the church is.


I've tried a few types of churches of reason and they are pretty sad, honestly. Hard core, dedicated, non-religious person here, so I'm not saying that people should go to Church, but I've never seen anything approximating a Church of Reason that would have satisfied my (admittedly minimal) social desires.


I hear you. For me, the things that have worked are those built around a hobby -- travelling to the wooded hills, astronomy, music recitals, caring for strays / abandoned pets.



Interesting suggestion! Thanks!


A traumatic childhood almost always messes with how one attaches to people. A small, exceptional fraction somehow manage to remain unaffected.

When attachment styles get warped, behaviors that were self-protective in childhood become self-defeating in adult life. The person is quite oblivious to all this because those behaviors and fragile modes of attachment feel perfectly normal -- it is like growing up under a different g (acceleration due to gravity).

It feels like: I am right, it's the others who are wrong, unfair, greedy, needy, flaky, stupid.

For me, this book [0] was very helpful for understanding what's going on in and around me.

[0] https://www.goodreads.com/book/show/9547888-attached


I am always in awe when people are able to manage such unsavoury baggage. That's some tough going.


Yes.

Our lopsided emphasis on individualism, and our definition of economic efficiency that does not account for the value of mental health, have been detrimental to our connections, roots, community, family, etc.

We said: let the mom-and-pop stores die; their replacements provide the same value more efficiently. Let community bonds die; they intrude upon our individual destiny.

But we did not correctly account for the value provided by those that we chose to replace. So it is not surprising that we find ourselves here.

Could it have played out any other way? I doubt it. Our world is an underdamped system, so we will keep swinging towards the extremes until we figure out how to make it critically damped. The other serious problem is that the feedback is so laggy, and lag is a big deal in feedback control loops.


The world has become a much bigger place. You used to know whom to avoid; the default was that someone was acceptable. Now the ones to avoid move around and it's all too likely that a newcomer is such a person.


> Now the ones to avoid move around and it's all too likely that a newcomer is such a person.

This seems a wild generalization to make, though I guess "be suspicious of newcomers" is a little biologically hardwired. What's your epistemology for believing "newcomers" are "the ones to avoid"?


The problem is the bad guys move around a lot more than the good guys.

That's not an explanation, that's a restatement of the claim.

I think it's still likely that most new people you'll encounter aren't malicious. I have to wonder what your mental image of a 'newcomer' looks like.


> Our lopsided emphasis on individualism

This reads like that pattern where people assign blame for all issues to whatever thing they happen to not like. The US is the least individualistic it has ever been, but there was much more community and less loneliness in the past. That makes it pretty obvious that the issue here isn't "individualism".


I am not from the US but your observation, if correct, would offer a counterexample worth thinking about.

You are saying that, in the past, more resources were spent supporting individuals than supporting communities, and yet communities were stronger. That sure would be an interesting thing to understand if true. My interest is certainly piqued; it seems too good to be true, though.


This.

I also think that a lot of the waste can be done away with by using application-specific codecs. Yes, even gzip compresses logs and metrics by a lot, but one can go further with specialized codecs that home in on the redundancy much more quickly (than a generic lossless compressor eventually would).

However, to build these, one can't have a "throw it over the 3rd-party wall" mode of development.

One way to do this for stable services would be to build high-fidelity (mathematical/statistical) models of the logs and metrics, then serialize only what is non-redundant. This applies particularly well to numeric data, where gzip does not do as well. What we need is the analogue of JPEG for the log data type.
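
As a toy illustration of the gap between a generic compressor and even a trivial domain-aware pass (the format and the numbers here are made up, not a codec we actually run), delta-encoding scrape timestamps before gzip already exposes most of the redundancy:

    import gzip
    import struct

    def varint(n):
        # LEB128-style unsigned varint
        out = bytearray()
        while True:
            b = n & 0x7F
            n >>= 7
            out.append(b | (0x80 if n else 0))
            if not n:
                return bytes(out)

    def delta_encode(timestamps):
        # store each timestamp as a varint delta from its predecessor
        out = bytearray()
        prev = 0
        for t in timestamps:
            out += varint(t - prev)
            prev = t
        return bytes(out)

    # A regular 15 s scrape interval with a little jitter -- typical metrics data.
    ts = [1_700_000_000 + 15 * i + (i % 3) for i in range(10_000)]
    raw = b"".join(struct.pack("<Q", t) for t in ts)     # naive 8 bytes per sample

    print(len(gzip.compress(raw)),                 # generic compressor on raw bytes
          len(gzip.compress(delta_encode(ts))))    # domain-aware pass first, then gzip

The same idea extends to the values themselves (delta-of-delta, XOR of floats, dictionary-coding label sets), which is where a hand-built codec can pull well ahead of a one-size-fits-all compressor.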

At my workplace there has been political buy-in for the idea that if a log / metric stream has not been used in 2-3 years, then throw it away and stop collecting it. This rubs me the wrong way because so many times I have wished there was some historic data for my data-science project. You never know what data you might need in the future. You do, however, know that you do not need redundant data.


Or what Upton Sinclair said.

