The overall fatality rate is actually lower, since that listed rate only applies to symptomatic patients. Given their 35% estimate for asymptomatic infections, you arrive at 0.26%.
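For what it's worth, the arithmetic behind that 0.26% figure, assuming the 0.4% rate among symptomatic cases and (as a simplification) essentially zero deaths among the asymptomatic:

```rust
fn main() {
    // Assumed inputs from the CDC estimate discussed in this thread:
    let symptomatic_ifr = 0.004_f64;   // 0.4% fatality rate among symptomatic cases
    let asymptomatic_share = 0.35_f64; // 35% of infections assumed asymptomatic

    // Overall infection fatality rate, assuming ~no deaths among asymptomatic cases:
    let overall_ifr = symptomatic_ifr * (1.0 - asymptomatic_share);
    println!("{:.2}%", overall_ifr * 100.0); // prints "0.26%"
}
```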
I am available if your company needs consulting services for Haskell. I can help with development, design, maintenance, and training. If you would like to learn more, please email me at consulting@jamesparker.me.
Thanks. I'm no longer with the project, but I'll mention it to the person now maintaining it. Last I heard, they were looking to migrate to Java or integrate some of its functionality into another system.
I've been working on PKAuth, which addresses this problem (not at the TLS level though). I wrote a short blog post [1] with a demo video about the underlying protocol. I'd appreciate any feedback!
Since Baltimore City is getting more money per student, why aren't they able to properly maintain their buildings [1]? Poor management? Corruption? Lack of heating and working bathrooms is inexcusable.
That might be true generally (and cities like New York and DC have really high per capita spending for that reason), but Baltimore is quite a bit less expensive than the surrounding suburbs. You can buy a gorgeously renovated townhouse in a nice neighborhood for $400-500k, a fraction of what it would cost in Montgomery County.
How expensive would it be to put space heaters in every classroom? Even window heaters are only $500. Surely a school district that spends $17m a year on maintenance can come up with that. My guess is bureaucracy/mismanagement is the real problem. The larger a school district is, the more bureaucracy.
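A rough back-of-envelope on that claim; the classroom count here is a pure guess on my part, only the $500 heater price and $17m budget come from above:

```rust
fn main() {
    let heater_cost = 500.0_f64;            // window heater, per the figure above
    let classrooms = 1_000.0_f64;           // hypothetical district-wide classroom count
    let maintenance_budget = 17_000_000.0;  // $17m/year maintenance spending

    let total = heater_cost * classrooms;
    let share = total / maintenance_budget * 100.0;
    // Even heating every one of those classrooms is a small slice of the budget.
    println!("${:.0} total, {:.1}% of the maintenance budget", total, share);
}
```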
Suburban areas in the US aren't old enough to get hit with infrastructure maintenance costs like the urban areas are. This is a really interesting article about the profitability and maintainability of growth: https://www.strongtowns.org/the-growth-ponzi-scheme/
Baltimore has much older infrastructure and buildings than any of its suburbs. That alone would lead to higher costs, not to mention the higher baseline costs of being an urban metro in the first place.
He was also concerned that there's no security proof in the whitepaper. To me, it seems like you could feasibly launch an attack with less than a majority of the computing resources.
My understanding is you only need 33% of the hash power at any given time. Since PoW is only done as part of sending transactions, it probably takes less hash power than you'd think to cause problems.
>Why does everyone repeat that Byzantine consensus requires maximum 33% of participants to be dishonest?
Not 33% of participants, 33% of the hash power, which could just be one participant with a pile of GPUs, ASICs, or "JINN" chips lol. That's the claim made by the IOTA author, anyway.
Right now it wouldn't surprise me if someone could amass 90+% of IOTA hash power anyway.
Okay, but it was more of a general question. I see Hashgraph and others always saying that at most 33% of participants can be dishonest. But with unforgeable message signatures that limitation doesn't apply.
Virtual Voting in hashgraph requires a 2/3 agreement.
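For context, that 2/3 figure is the classic Byzantine fault-tolerance bound: with n participants and f Byzantine ones, agreement is only guaranteed when n >= 3f + 1, i.e. honest nodes must form a strict >2/3 supermajority. A minimal sketch of the counting:

```rust
// Maximum Byzantine faults tolerable among n participants (n >= 3f + 1).
fn max_tolerated_faults(n: u32) -> u32 {
    (n - 1) / 3
}

// True when `votes` is a strict supermajority: votes > 2n/3.
fn has_supermajority(votes: u32, n: u32) -> bool {
    3 * votes > 2 * n
}

fn main() {
    assert_eq!(max_tolerated_faults(4), 1);    // smallest network tolerating 1 fault
    assert_eq!(max_tolerated_faults(100), 33); // hence the oft-quoted "33%"
    assert!(has_supermajority(67, 100));
    assert!(!has_supermajority(66, 100));      // exactly 2/3 is not enough
}
```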
Of course PoW provides some protection against Sybil attacks, but the reality is that with enough hash power the network can be overtaken. (Which is why Hashgraph is a closed network.)
First, citing a paper written three weeks ago about an unproven currency is a bit of a stretch.
Second, I didn't say it was necessary to order transactions, I said that is what it is used for, which is correct. You are replying to a point that I didn't make.
You said proof of work is ABOUT ordering transactions. I was trying to say that it's not. It's used for other things: namely as a way to determine the next miner, like leader election in consensus protocols. It also adds a lot of computation on top of the transactions to show that the miner is heavily invested in the ecosystem and thus serves as an economic incentive. It has almost NOTHING to do with ordering transactions. Transactions are ordered by the blockchain, and everyone has to verify them anyway.
So much misinformation, where to begin. IOTA is using Keccak/SHA-3, and they are developing a new kind of LIGHTWEIGHT cryptographic primitive together with the world leaders in this field.
Unfortunately that doesn't disable the "Recommended by Pocket" crap on the New Tab Window. I have Firefox installed on 8 different machines and the option to remove "Recommended by Pocket" in the New Tab Preferences only appears in half of them.
If you're missing the option then you can open about:config and set "browser.newtabpage.activity-stream.feeds.section.topstories" to false to get rid of it. I also blew away the "browser.newtabpage.activity-stream.feeds.section.topstories.options" key that contains all of the configuration crap for pocket.
Unfortunately, neither this nor the OP's settings are synced in your profile, so you have to change it on all of your machines. :(
Sure; I'm just complaining about the _existence_ of `browser.newtabpage.activity-stream.feeds.telemetry`. I'm not comfortable having tracking so near user data.
Now that I'm not on a mobile and can actually look at the code, it looks like it's defined at [1]. I must be reading TelemetryFeed.jsm wrong, though, because that says addSession() holds on to the URL (as .page) and createPing() puts it into the ping...
Tracing back through the code, this is only triggered with a URL by the RemotePages watcher, which notifies when the URL matches one of a whitelist. The only whitelisted URLs currently are about:home, about:newtab, and about:tabcrashed.
> I have Firefox installed on 8 different machines and the option to remove "Recommended by Pocket" in the New Tab Preferences only appears in half of them.
The gear you're referring to opens the "New Tab Preferences" screen that I was referring to.
That's very nice. It's approaching the QNX level of microkernel design.
One unusual feature of QNX is that the kernel doesn't parse strings anywhere. There's a "resource manager", but that's a process. Programs register with the resource manager for a piece of the pathname namespace ("/dev", "/fs", etc.) and then get open requests sent to them when a pathname starts with their part of the namespace. Parsing creates a large attack surface, and getting it out of the kernel is a win.
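A sketch of what that prefix routing looks like; note this is purely illustrative and not the real QNX API (the actual registration happens via resmgr_attach and friends), just the shape of "longest matching prefix wins" dispatch done in user space:

```rust
use std::collections::HashMap;

// Hypothetical user-space resource manager: maps pathname prefixes
// to the server processes that registered them.
struct ResourceManager {
    routes: HashMap<String, &'static str>, // prefix -> server name
}

impl ResourceManager {
    fn register(&mut self, prefix: &str, server: &'static str) {
        self.routes.insert(prefix.to_string(), server);
    }

    // Route an open() request to whichever server owns the
    // longest prefix matching the requested pathname.
    fn route(&self, path: &str) -> Option<&'static str> {
        self.routes
            .iter()
            .filter(|(prefix, _)| path.starts_with(prefix.as_str()))
            .max_by_key(|(prefix, _)| prefix.len())
            .map(|(_, server)| *server)
    }
}

fn main() {
    let mut rm = ResourceManager { routes: HashMap::new() };
    rm.register("/dev", "devmgr");
    rm.register("/fs", "fsmgr");
    assert_eq!(rm.route("/dev/ser1"), Some("devmgr"));
    assert_eq!(rm.route("/fs/home/readme"), Some("fsmgr"));
    assert_eq!(rm.route("/proc/123"), None); // nobody registered /proc
}
```

The point is that all of this string handling lives in an ordinary process; the kernel only ever shuffles fixed-shape messages between address spaces.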
QNX tries to avoid variable-length objects in the kernel. Messages are variable length and copied by the kernel, but from one user space to another, not queued in kernel space. Most of the ways a kernel can run out of memory are avoided in QNX. If the kernel is out of resources, some system calls return errors, but the kernel doesn't crash.
If you're doing a kernel in Rust, it's helpful to think that way. Rust doesn't handle out-of-memory conditions well.
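To be fair, while Rust's standard collections abort the process on allocation failure by default, the standard library does expose a fallible path via try_reserve (stable since Rust 1.57). A minimal sketch:

```rust
use std::collections::TryReserveError;

// Allocate a zeroed buffer of `len` bytes, returning an error
// instead of aborting if the allocator can't satisfy the request.
fn read_sized(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(len)?; // fails gracefully if allocation is refused
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    assert!(read_sized(1024).is_ok());
    // An absurdly large request errors out rather than crashing:
    assert!(read_sized(usize::MAX).is_err());
}
```

Whether that's enough to write a whole kernel in is another question, since most std APIs still take the aborting path.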
Do you happen to know of languages that do handle out-of-memory conditions well? That seems like an interesting topic. From what I understand of how it's done in C, I wouldn't call that "well", but C does at least provide the mechanisms for doing it (which is more than can be said for all languages). Language-level features (or coding styles in C that could be implemented at a language level elsewhere) that provide increased and intuitive control would be interesting.
Ada is one of the few languages that takes out-of-memory conditions seriously. The exception Storage_Error is raised.
Java, C# and Microsoft's common runtime have out-of-memory exceptions, but I'm not sure how reliable they are in a limited-memory environment. They're more like "GC isn't helping much" exceptions.
https://www.nature.com/articles/d41586-020-01738-2
Edit: I misread your qualifier of "under 50 age group". Overall fatality rate seems to be around 0.4% according to the CDC site.