
This doesn't look very idiomatic, with assert() and __int__ etc. Please don't try to learn Go from this.

Edit: to be a little more constructive, do the Go tour instead.


I think you missed the point of this project. The __int__ is just a placeholder you are expected to replace with the real value.

The assert() is just there to make the exercise fail until you have the right value for the variable it's testing.


I agree and think this is a nice and novel way to play around with Go. The use of double underscores to highlight terms to replace is clear and easy to read. And the included assert() is perfectly valid as well, since the FAQ entry about assertions is strictly talking about a language builtin... not about the general idea of asserting something in tests.
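
For anyone who hasn't looked at the project, a made-up example in that style (not taken from the project itself; assert here stands in for whatever helper the project defines, since Go has no builtin assert):

    package koans

    import "testing"

    // assert is a tiny helper the exercises are assumed to provide;
    // it simply fails the test until the expression is true.
    func assert(t *testing.T, ok bool) {
        if !ok {
            t.Fatal("fill in the blank")
        }
    }

    // __int__ is the placeholder the learner replaces with the real value.
    const __int__ = 0

    func TestSliceLength(t *testing.T) {
        s := []int{1, 2, 3}
        assert(t, len(s) == __int__) // replace __int__ with 3 to make this pass
    }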


That is what I was wondering: http://golang.org/doc/faq#assertions

> Why does Go not have assertions?

> Go doesn't provide assertions. They are undeniably convenient, but our experience has been that programmers use them as a crutch to avoid thinking about proper error handling and reporting. Proper error handling means that servers continue operation after non-fatal errors instead of crashing. Proper error reporting means that errors are direct and to the point, saving the programmer from interpreting a large crash trace. Precise errors are particularly important when the programmer seeing the errors is not familiar with the code.

> We understand that this is a point of contention. There are many things in the Go language and libraries that differ from modern practices, simply because we feel it's sometimes worth trying a different approach.

I was wondering how the "go test" command works as well; here is the start of an answer: http://golang.org/doc/faq#How_do_I_write_a_unit_test
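
For reference, a minimal test looks something like this (the names are invented) - go test picks up any TestXxx function in a file ending in _test.go:

    // adder_test.go
    package adder

    import "testing"

    func Add(a, b int) int { return a + b }

    // go test finds this because it has the TestXxx(t *testing.T) shape.
    func TestAdd(t *testing.T) {
        if got := Add(2, 3); got != 5 {
            // no assert: report the failure and keep going
            t.Errorf("Add(2, 3) = %d, want 5", got)
        }
    }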


> programmers use them as a crutch to avoid thinking about proper error handling and reporting

Funny, because asserts in C are used to find bugs but not for "error handling and reporting".


I tend to use Go's panic() in many situations where, in C, I would make assertions.
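
Something like this sketch, for conditions that can only be false if the program itself has a bug (not for ordinary runtime errors):

    package main

    import "fmt"

    // checkInvariant plays the role a C assert() would: it is only for
    // conditions that can never be false unless the program itself is buggy.
    func checkInvariant(ok bool, msg string) {
        if !ok {
            panic("invariant violated: " + msg)
        }
    }

    func main() {
        n := len([]int{1, 2, 3})
        checkInvariant(n >= 0, "length can never be negative")
        fmt.Println("ok")
    }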


The point is that they are commonly abused to provide error handling and reporting.


That is what the Go developers claim. I don't see this abuse very often in C, either.

If a speed-sensitive function in C is documented to work for integers in the range 0 < x < 256, then it makes sense to put in an assert() as a courtesy to the users of the function.

It does not make sense in every situation to do a range check and return an error, or worse, "panic" (the latter didn't work for the Ariane 5).


But you're giving an example where you think an explicit panic is justified.

Losing asserts doesn't eliminate either option. It simply requires you to have explicit error handling or a panic. This is the point.
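
Taking your 0 < x < 256 example, the explicit-error route is roughly this sketch (names invented; the panic route is the same check with a panic instead of a return):

    package main

    import (
        "errors"
        "fmt"
    )

    // scale is documented to work only for 0 < x < 256; instead of an
    // assert, the range check is part of the contract and returns an error.
    func scale(x int) (int, error) {
        if x <= 0 || x >= 256 {
            return 0, errors.New("scale: x out of range (0, 256)")
        }
        return x * 4, nil
    }

    func main() {
        if v, err := scale(300); err != nil {
            fmt.Println("caller handles it:", err)
        } else {
            fmt.Println(v)
        }
    }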


I wondered the same thing. Noise in the picture should also make it extremely hard to predict. For cryptographic use one would only need 256 bits of entropy to seed a CSPRNG, and I can't see how this amount of entropy wouldn't be present in a normal photo.
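
Roughly the idea, as a sketch (the file name is just a placeholder): hash the raw photo bytes down to a 256-bit seed, which could then key a stream-cipher-based CSPRNG. The photo only needs to contain at least 256 bits of unpredictable sensor noise for this to be sound.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
    )

    func main() {
        // photo.jpg is a placeholder; any freshly captured image would do.
        img, err := os.ReadFile("photo.jpg")
        if err != nil {
            panic(err)
        }
        // Compress whatever entropy the sensor noise provides into 256 bits.
        seed := sha256.Sum256(img)
        fmt.Printf("seed: %x\n", seed)
        // seed[:] could now key a CSPRNG, e.g. a ChaCha20- or AES-CTR-based DRBG.
    }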


Telegram should be considered insecure. See this post for a nice summary: http://security.stackexchange.com/a/49802


That seems more like an ad-hominem than anything.

Their secret chats specifically seem like they would be entirely secure.


Steve Gibson did a pretty detailed analysis of Telegram on Security Now #444, recorded 25 Feb 2014. He concluded that Telegram's security was inadequate and recommended Threema as an alternative.

http://twit.tv/show/security-now/444

https://threema.ch/en


Thanks for the link to the Security Now show; good stuff, and it does a good job of explaining the reservations in the crypto community around Telegram. I'm now looking at other options.

I do like that Telegram launched with an API, but ya, it sounds like security isn't something that can be trusted. Thanks for the informative link.


Putting someone's credentials into question is not necessarily an ad hominem.

But in any case, you seem to have missed this bit: "The protocol they invented is flawed. Here is a nice blog post explaining why."


It would be interesting if you could find a reputable cryptographer to agree with either of those sentences.


Interesting product. Have you considered writing an output plugin for Heka (https://github.com/mozilla-services/heka), so that people could use the parsers and client side aggregators etc written for Heka with your service?


We'd certainly be open to that if asked. We practice Complaint Driven Development [0] when it comes to API integrations and the like -- we prioritize what our customers ask for.

For our core experience, we went with a custom agent because that allowed us to viciously simplify the setup process. But we're very much open to working with other tools as well.

[0] http://blog.codinghorror.com/complaint-driven-development/


"In fact, Tetlock and his team have even engineered ways to significantly improve the wisdom of the crowd"

Does anyone know what these methods might be?


One example briefly mentioned was providing feedback to respondents to let them know when they were getting things right or wrong.


And how would that work?

Take their example question - 'Will North Korea launch a new multi-stage missile before May 1, 2014?' That's a yes/no question, but none of the participants knows the answer; they are just trying to forecast the future based on the balance of probabilities. So if NK does launch a missile, the people who answered 'no' were wrong, but the reasoning that led them to pick 'no' could still have been sound. Giving feedback to these people and telling them they were wrong does not magically improve their predictions.

Perhaps a simpler example could be used: 'Will the next roll of this die score 3?' It's obviously more likely that some other result will happen, but if the 3 does come up, you can't say that all those people who said 'no' are worse at predictions...


Even trickier - it sounds like the participants are giving probabilities. So if you guessed some possibility was 42% likely and then it did in fact happen - are you right or wrong? I don't think there is an answer to that question.


PredictionBook [1] does it that way. You get a chart of how many times you assigned a certain probability and what percentage of those predictions actually came true.

[1] http://predictionbook.com/


Simple math applied to a large enough set of predictions will let you know whether or not your probabilities are accurate. It is the same as playing poker - every hand you are essentially betting on probabilities multiple times (okay, right now I think I have a 50% chance of winning this hand, the pot is X and I have to put in Y to keep playing, so I'm going to continue). In the short term it is very hard to know whether you are putting money in when you "should" because of variance.

In the long term the variance disappears and you can see that, for example, you are winning an estimated 2 bets per 100 hands, which means you are "more correct" than the people you are playing against at estimating the odds. It's a little more complicated than that of course, because you are also using bets as weapons to drive better hands out through bluffing and semi-bluffing, etc., but the general point holds: with a large enough sample size, comparing actual results against your predicted results shows whether or not your probabilities hold up.
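
The "simple math" part, as a sketch: score each forecast against what actually happened and average. The Brier score below is one standard way to do it (the example forecasts are invented):

    package main

    import "fmt"

    // brier returns the mean squared error between forecast probabilities
    // and outcomes (1 if the event happened, 0 if it didn't).
    // Lower is better; always guessing 50% scores 0.25.
    func brier(probs []float64, happened []bool) float64 {
        var sum float64
        for i, p := range probs {
            o := 0.0
            if happened[i] {
                o = 1.0
            }
            sum += (p - o) * (p - o)
        }
        return sum / float64(len(probs))
    }

    func main() {
        probs := []float64{0.9, 0.42, 0.1, 0.7}
        happened := []bool{true, true, false, false}
        fmt.Printf("Brier score: %.3f\n", brier(probs, happened))
    }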


Works great for poker, where you play lots of hands. Works less well for the situation in the article. Just how many North Korean missile launches do you need a person to predict before you know whether they are lucky or not?


Doesn't matter, as long as they predict a large enough sample of events. It just requires the assumption that they are equally accurate on every type of event, which probably isn't true, but is close enough for statistics.


If I understand your question correctly, you should look into PAR2


I'm very interested in this. Can you name some of these strategies?


Two strategies I was thinking of were technical peer reviews and formal proofs. Of course, these aren't mutually exclusive either with each other or with automated testing, and these are all generic terms that each cover a multitude of specific implementations.

All three have a strong track record of finding bugs when implemented well. All three also add significant overheads, so there is a cost/benefit ratio to be determined. The relative costs of implementing each strategy will surely vary a lot depending on the nature of any given project. The benefit for any of them would likely be significant for a project that didn't have robust quality controls in place, but you'd get diminishing returns using more than one at once.

I could easily believe that skilled developers had evaluated their options and determined that for their project some other strategy or combination of strategies provided good results without routine use of automated testing and that the additional overhead of adding the automation as well wasn't justified.


I've been thinking about something similar. I don't see how timed expiration would conflict with the two most important features - the filling mechanism and the replication of hot items. Am I missing something that would make timed expiration impossible?


Yeah, on a first pass at the problem you seem right.

The CAS must have an authoritative node (my mind wanders thinking about replication and failover), but the key it protects - with the version baked in - can surely be replicated?


If you have a bug in another application on the server running the cache that causes it to grow its memory use, your cache would suddenly disappear/underperform, and the failure could cascade on to the system that the cache is in front of. If instead you let the offending program crash because the cache is using regular memory, this would not happen. Just a thought.


If the cache uses regular memory, you hit swap...

I think the failure mode of a system-level cache is rather better than per-application islands that can conflict.


If you hit swap, again only the offending application or instance is punished, not everyone else (for instance by pummeling a backend database server that other services are using as well)


Actually not to my thinking:

If program A hits swap, it means that cold pages are written to swap so that A can get those pages; this initial writing is done by program A, it's true. But A may not be the cause of the problem; A is just the straw that breaks the camel's back.

And those pages that got written to swap likely belong to others, and they pay the cost when they need those pages back...

In my practical experience, when one of my apps hits swap, the whole system becomes distressed. It is not isolated to the 'offender'.

You can of course avoid swap, but with your OS doing overcommit on memory allocations, you are just inviting a completely different way of failing and that too is hard to manage. You end up having to know a lot about your deployment environment and ring-fence memory between components and manage their budgets. If you want to have both app code and cache on the same node - and that's a central tenet of groupcache - then you have to make sure everything is under-dimensioned because the needs of one cannot steal from the other; your cache isn't adaptive.

That's why I built a system to do caching centrally at the OS level.

I hope someone like Brad is browsing here and can make some kind of piercing observation I've missed.


I understand Google solves that problem by not enabling swap on their servers.


That's rather common. If swapping can harm your application, then don't swap. On a machine where slowdown is tolerable (temporarily, on a desktop), swap is fine. On a machine whose entire purpose is to serve as a fast cache in front of slow storage, swapoff and fall back to shedding or queuing requests at the frontend.


Without any specific knowledge of Google's practices, I can say this is certainly true - this is standard nowadays.


That is my experience as well. In my thought experiment the 'offender' would be a server instance, not a process running among other applications on a single machine. Applications that hit swap often have memory leaks, and hitting swap is then just a matter of time. Creating a cascading failure may be preventable however.


Values that must expire could perhaps use a truncated time value as part of the key? It's not as flexible as regular expiration, because the entries would only expire when you cross the point in time where the truncated time value changes, which could be a problem in a lot of applications (sudden surges of requests every x seconds).
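
Roughly like this sketch (the bucket size and key format are arbitrary):

    package main

    import (
        "fmt"
        "time"
    )

    // expiringKey appends the current time, truncated to the ttl, to the key.
    // All lookups within one ttl window share a key; when the window rolls
    // over, the old entries are simply never asked for again.
    func expiringKey(key string, ttl time.Duration) string {
        bucket := time.Now().Unix() / int64(ttl.Seconds())
        return fmt.Sprintf("%s:%d", key, bucket)
    }

    func main() {
        // With a 5-minute ttl, this key changes every 300 seconds.
        fmt.Println(expiringKey("user:42:profile", 5*time.Minute))
    }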


[deleted]


Read the article for reasons for using groupcache.

