I agree and think this is a nice, novel way to play around with Go. The use of double underscores to highlight terms to replace is clear and easy to read. And the included assert() is perfectly valid too: the docs about assert are strictly talking about a builtin, not about the general idea of asserting something in tests.
> Go doesn't provide assertions. They are undeniably convenient, but our experience has been that programmers use them as a crutch to avoid thinking about proper error handling and reporting. Proper error handling means that servers continue operation after non-fatal errors instead of crashing. Proper error reporting means that errors are direct and to the point, saving the programmer from interpreting a large crash trace. Precise errors are particularly important when the programmer seeing the errors is not familiar with the code.
> We understand that this is a point of contention. There are many things in the Go language and libraries that differ from modern practices, simply because we feel it's sometimes worth trying a different approach.
That is what the Go developers claim. I don't see this abuse very often in C, either.
If a speed-sensitive function in C is documented to work for integers in the range 0 < x < 256, then it makes sense to put in an assert() as a courtesy to the users of the function.
It does not make sense in every situation to do a range check and return an error, or worse, to panic (the latter approach didn't work out for Ariane 5).
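Since Go has no assert builtin, the closest equivalent of that courtesy check is a small panic-based helper. This is only a sketch with made-up names (`assertRange`, `scale`), showing the C idiom transplanted to Go, not an endorsed Go pattern:

```go
package main

import "fmt"

// assertRange documents and enforces the precondition 0 < x < 256.
// In C this would be assert(x > 0 && x < 256); here it is a plain
// panic, which a release build could swap for a no-op.
func assertRange(x int) {
	if x <= 0 || x >= 256 {
		panic(fmt.Sprintf("assertRange: x=%d out of (0, 256)", x))
	}
}

// scale stands in for a speed-sensitive function documented to
// work only for integers in the range 0 < x < 256.
func scale(x int) int {
	assertRange(x)
	return x * 2
}

func main() {
	fmt.Println(scale(10))
}
```

The check costs one comparison per call, which is usually cheap enough to leave in even for hot code paths.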
I wondered the same thing. Noise in the picture should also make it extremely hard to predict. For cryptographic use one would only need 256 bits of entropy to seed a CSPRNG, and I can't see how that amount of entropy wouldn't be present in a normal photo.
Steve Gibson did a pretty detailed analysis of Telegram on Security Now #444, recorded 25 Feb 2014. He concluded that Telegram's security was inadequate and recommended Threema as an alternative.
Thanks for the link to the Security Now show; it's good stuff and does a good job of explaining the crypto community's reservations about Telegram. I'm now looking at other options.
I do like that Telegram launched with an API, but ya, it sounds like security isn't something that can be trusted. Thanks for the informative link.
Interesting product. Have you considered writing an output plugin for Heka (https://github.com/mozilla-services/heka), so that people could use the parsers and client side aggregators etc written for Heka with your service?
We'd certainly be open to that if asked. We practice Complaint Driven Development [0] when it comes to API integrations and the like -- we prioritize what our customers ask for.
For our core experience, we went with a custom agent because that allowed us to viciously simplify the setup process. But we're very much open to working with other tools as well.
Take their example question - 'Will North Korea launch a new multi-stage missile before May 1, 2014?' That's a yes/no question. But none of the participants knows the answer. They are just trying to forecast the future based upon the balance of probabilities. So if NK does launch a missile, the people who answered 'no' were wrong, but the reasons upon which they came to pick 'no' could still have been right. So giving feedback to these people and telling them that they were wrong does not magically aid/improve predictions.
Perhaps a simpler example could be used: 'Will the next roll of this die score 3?' Well, it's obviously more likely that some other result will happen. But if the 3 does come up, you can't say that all those people who said 'no' are worse at predictions...
Even trickier - it sounds like the participants are giving probabilities. So if you guessed some possibility was 42% likely and then it did in fact happen - are you right or wrong? I don't think there is an answer to that question.
Simple math applied to a large enough set of predictions will let you know whether or not your probabilities are accurate. It is the same as playing poker - every hand you are essentially betting on probabilities multiple times ("okay, right now I think I have a 50% chance of winning this hand, the pot is X, and I have to put in Y to keep playing, so I'm going to continue"). In the short term it is very hard to know whether or not you are putting money in when you "should", because of variance. In the long term the variance disappears and you can see that, for example, you are winning an estimated 2 bets per 100 hands, which means you are "more correct" than the people you are playing against at estimating the odds. It's a little more complicated than that of course, because you are also using bets as weapons to drive better hands out through bluffing and semi-bluffing, etc., but the general point holds: with a large enough sample of predictions compared against actual results, your probabilities will either hold up or they won't.
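The "simple math" here is essentially a proper scoring rule. One common choice is the Brier score, sketched below with made-up numbers: a single 42% forecast that comes true is neither right nor wrong, but across many events a consistently lower score separates well-calibrated forecasters from lucky ones.

```go
package main

import "fmt"

// brier computes the mean squared error between stated
// probabilities and outcomes (1 if the event occurred, 0 if not).
// Lower is better; a perfect forecaster scores 0 and a coin-flip
// guesser hovers around 0.25.
func brier(probs []float64, outcomes []bool) float64 {
	var sum float64
	for i, p := range probs {
		o := 0.0
		if outcomes[i] {
			o = 1.0
		}
		d := p - o
		sum += d * d
	}
	return sum / float64(len(probs))
}

func main() {
	probs := []float64{0.42, 0.9, 0.1, 0.7}
	outcomes := []bool{true, true, false, true}
	fmt.Printf("Brier score: %.4f\n", brier(probs, outcomes))
}
```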
Works great for poker, where you play lots of hands. Works less great for the situation in the article. Just how many North Korean missile launches do you need a person to predict before you know they are lucky or not?
Doesn't matter, as long as they predict a large enough sample of events. It just requires the assumption that they will be equally accurate on every type of event, which probably isn't true, but it's close enough for statistics.
Two strategies I was thinking of were technical peer reviews and formal proofs. Of course, these aren't mutually exclusive either with each other or with automated testing, and these are all generic terms that each cover a multitude of specific implementations.
All three have a strong track record of finding bugs when implemented well. All three also add significant overheads, so there is a cost/benefit ratio to be determined. The relative costs of implementing each strategy will surely vary a lot depending on the nature of any given project. The benefit for any of them would likely be significant for a project that didn't have robust quality controls in place, but you'd get diminishing returns using more than one at once.
I could easily believe that skilled developers had evaluated their options and determined that for their project some other strategy or combination of strategies provided good results without routine use of automated testing and that the additional overhead of adding the automation as well wasn't justified.
I've been thinking about something similar. I don't see how timed expiration would conflict with the two most important features - the filling mechanism and the replication of hot items. Am I missing something that would make timed expiration impossible?
Yeah, on a first pass at the problem you seem right.
The CAS must have an authoritative node (my mind wanders thinking about replication and failover), but the key it protects - with the version baked in - can surely be replicated?
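Baking the version into the key might look something like the sketch below (names are hypothetical). The point is that while the CAS on the version counter needs an authoritative node, each versioned key maps to an immutable value, and immutable entries are safe to replicate anywhere:

```go
package main

import "fmt"

// versionedKey bakes a version number into the cache key. Once
// "config@v7" exists its value never changes, so that entry can be
// replicated freely; only the bump from v7 to v8 needs the
// authoritative node's compare-and-swap.
func versionedKey(name string, version uint64) string {
	return fmt.Sprintf("%s@v%d", name, version)
}

func main() {
	fmt.Println(versionedKey("config", 7))
}
```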
If you have a bug in another application on the server running the cache that causes it to grow its memory use, your cache would suddenly disappear/underperform, and the failure could cascade on to the system that the cache is in front of. If instead you let the offending program crash because the cache is using regular memory, this would not happen. Just a thought.
If you hit swap, again only the offending application or instance is punished, not everyone else (for instance by pummeling a backend database server that other services are using as well).
If program A hits swap, it means that cold pages are written out to swap so that A can get physical pages; this initial writing is done by program A, it's true. But A may not be the cause of the problem; A is just the straw that breaks the camel's back.
And those pages that got written to swap likely belong to others, and they pay the cost when they need those pages back...
In my practical experience, when one of my apps hits swap, the whole system becomes distressed. It is not isolated to the 'offender'.
You can of course avoid swap, but with your OS doing overcommit on memory allocations, you are just inviting a completely different way of failing and that too is hard to manage. You end up having to know a lot about your deployment environment and ring-fence memory between components and manage their budgets. If you want to have both app code and cache on the same node - and that's a central tenet of groupcache - then you have to make sure everything is under-dimensioned because the needs of one cannot steal from the other; your cache isn't adaptive.
That's why I built a system to do caching centrally at the OS level.
I hope someone like Brad is browsing here and can make some kind of piercing observation about what I've missed.
That's rather common. If swapping can harm your application, then don't swap. On a machine where a slowdown is tolerable (temporarily, on a desktop), swap is fine. On a machine whose entire purpose is to serve as a fast cache in front of slow storage, swapoff and fall back to shedding or queuing requests at the frontend.
That is my experience as well. In my thought experiment the 'offender' would be a server instance, not a process running among other applications on a single machine. Applications that hit swap often have memory leaks, and hitting swap is then just a matter of time. Creating a cascading failure may be preventable however.
Values that must expire could perhaps use a truncated time value as part of the key? It's not as flexible as regular expiration, because the entries would only expire when you cross the point in time where the truncated time value changes, which could be a problem in a lot of applications (sudden surges of requests every x seconds)
Edit: to be a little more constructive, do the Go tour instead.