prohor's comments

GCP also has sustained-use discounts, which are very convenient: unlike AWS's reservations, you don't commit to anything up front. You get up to a 30% discount for steady usage while keeping the full flexibility of on-demand.
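
If I remember the tiering right, each successive quarter of the month is billed at 100%, 80%, 60% and 40% of the base rate (treat those numbers as my recollection of the model, not gospel). A quick sketch of the resulting discount:

    # Rough sketch of GCP sustained-use discount math; the tier rates are my
    # recollection of the documented model, not pulled from any official API.
    TIERS = [1.00, 0.80, 0.60, 0.40]  # rate for each successive 25% of the month

    def effective_rate(usage_fraction):
        """Average billing rate for an instance running usage_fraction of the month."""
        billed, remaining = 0.0, usage_fraction
        for rate in TIERS:
            chunk = min(remaining, 0.25)  # each tier covers 25% of the month
            billed += chunk * rate
            remaining -= chunk
            if remaining <= 0:
                break
        return billed / usage_fraction

    print(effective_rate(1.0))  # ~0.70 -> the full "up to 30%" discount
    print(effective_rate(0.5))  # ~0.90 -> 10% off for half-month usage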

For cloud pricing comparisons, see Cloudorado: https://www.cloudorado.com/


I have the same feeling that note notation is way too complex. I would love to see something like this gain traction.


What I miss in U2F tokens is a small display that would show the details of the transaction being approved.

Imagine you have malware on your PC: it can submit a different transaction than the one you see in the browser, while the URL still matches. Having the transaction summary on the token would be the last verification point where you can still spot that something is wrong.
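
As a sketch of the idea (invented helper names, and an HMAC standing in for the asymmetric signature a real token would use), the crucial property is that the signature covers exactly the bytes the device itself displayed:

    # Illustrative sketch only - not a real U2F/FIDO extension.
    import hashlib
    import hmac

    DEVICE_KEY = b"secret-stored-only-on-token"    # hypothetical device secret

    def show_on_device_display(text):
        print(f"[device display] {text}")          # stand-in for the token's screen

    def wait_for_physical_button_press():
        return input("[device button] approve? (y/n) ") == "y"

    def token_sign(transaction):
        summary = f"pay {transaction['amount']} to {transaction['to']}"
        show_on_device_display(summary)            # the user verifies on the token,
        if not wait_for_physical_button_press():   # not on the compromised PC
            raise RuntimeError("user rejected transaction")
        return hmac.new(DEVICE_KEY, summary.encode(), hashlib.sha256).digest()

Malware can still submit whatever transaction it likes, but it cannot make the device display one summary and sign another.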


I have a Ledger Nano S that I use for cryptocurrency and it does basically this. It won't sign transactions unless you approve them from the device, and the little screen on the device shows the address(es) to which you're sending.

https://www.ledger.com/products/ledger-nano-s

It's $100, which is probably too much for your average user, but cheap enough that it's got to be feasible for a U2F kind of thing in a few years.

I guess even the addition of the screen, though, kind of necessitates using a cord so you can see that screen, which makes it less clean than my Yubikey Nano (which is far less obtrusive). But I think we're getting closer.


Thanks. Yes, this is the kind of thing I have in mind. Does it show on the screen which operation you are confirming with U2F? Or just "U2F authentication"? The cord makes it far less convenient, unfortunately.


I use it for GPG, SSH (via gpg-agent) and U2F. These are official applications that you can install on the device to do the above. It doesn't show the operation, just the website requesting access.


I'm not sure if it supports U2F. If it does, I haven't used it. It just seems to prove that what you were conceptually describing can exist, and at a not-completely-unreasonable price point.


SSH and GPG sadly don't display what requested the operation; it might be a protocol limitation. Same deal for FIDO.


The screen has to be driven by the crypto chip if you want it to be properly 'secure', and most chips can't do that. At least not the ones you can buy off the shelf, unfortunately.


If you have malware on the PC you can still be MITM'd when you actually, say, access your bank.


Not if the data is displayed by the crypto chip after an encrypted round trip between the bank and the chip. Yes, technically an attack is still possible, but it requires physical access to the device.


Ledger and Trezor have a display, though I don't know whether their U2F UX is any good.


The release also improves sandboxing on Linux:

https://www.bleepingcomputer.com/news/security/firefox-57-br...

Sandboxing for Windows was introduced in version 54.


Really nice analysis. It is a pity that open access to data like this is still the exception, not the rule. I wish I had such data for my city.


Agreed - just check a cloud comparison; AWS is rarely at the top: https://www.cloudorado.com/cloud_server_comparison.jsp


Well, if you're going to use bad figures, then sure, AWS won't win. The default size there is 768MB RAM, 1 CPU, and 50GB disk... which it says AWS will provide for $54, whereas in actuality a t2.micro with those specs only costs $14, lower than all the listed prices (which are all clearly out of date).

Not to mention all the big names missing from that list. For some reason Dimension Data makes the list (and it's woeful, from experience), but there's no Digital Ocean, OVH, Hetzner, etc...


As per my other reply: A t2.micro does not allow you to use more than 10% of the vCPU on a sustained basis. Any use over that needs to be earned, and you only earn 6 credits (for one minute each) per hour.


Wow thanks for sharing this link. Didn't know about this.

One thing I noticed, though, is that the pricing seems a bit biased; for example, for AWS it recommends an m1.small with 1GB RAM and 20GB of storage at $35 a month ... However, a t2.micro would give you the same specs for $10.79.


Not quite the same: you don't own the whole core on the t2 and will get CPU throttled.


> you don't own the whole core

Moving the goalposts here. 'Not owning the whole core' is the default in the cloud.


For the other instances you get a specific number of units of processing capacity, and you can use 100% of it continuously if you like. For the micro instances, you get a base level and build up credits towards bursts, and cannot maintain 100% utilization continuously. It's very much different, and not the default. To quote Amazon:

> "A CPU Credit provides the performance of a full CPU core for one minute. Traditional Amazon EC2 instance types provide fixed performance, while T2 instances provide a baseline level of CPU performance with the ability to burst above that baseline level. The baseline performance and ability to burst are governed by CPU credits."

A t2.micro's baseline is only 10% of the vCPU's performance. Anything above that needs to be "earned" at a rate of 6 credits per hour. The t2.micro can accumulate a maximum of 144 CPU credits (plus the 30 initial credits, which do not renew), each good for 1 minute of 100% use.

So in other words, you can on average only use 100% of the CPU for 6 minutes per hour.
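
A tiny simulation makes the credit math concrete (the 6 credits/hour, 10% baseline and 144-credit cap are the figures quoted above; the accounting itself is just my sketch):

    # Sketch of t2.micro CPU credit accounting, per the figures quoted above.
    EARN_PER_HOUR = 6.0    # credits earned per hour (1 credit = 1 min at 100%)
    MAX_CREDITS = 144.0    # accrual cap for a t2.micro
    # Baseline check: 6 credits/hour = 6 minutes of full CPU per 60 = 10%.

    def simulate(load, hours, credits=30.0):     # 30 = initial launch credits
        """Run at `load` (0..1) for `hours`; return the remaining credit balance."""
        for _ in range(hours):
            credits += EARN_PER_HOUR             # earn...
            credits -= load * 60.0               # ...and burn (60 minutes at `load`)
            # Clamping at 0 stands in for the real behavior: once credits run
            # out, the instance is throttled down to the 10% baseline.
            credits = min(max(credits, 0.0), MAX_CREDITS)
        return credits

    print(simulate(load=0.10, hours=24))  # 30.0 -- at the baseline, balance holds
    print(simulate(load=0.50, hours=24))  # 0.0  -- drained within 2h, then throttled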


m1.smalls are also ancient; the current-generation m4 is more than a year old at this point.

Odd site.


That's a very handy site; previously I had mostly been using http://www.ec2instances.info/ and http://www.gceinstances.info/

Thanks for pointing it out!


For me the difference seems to be in handling a large number of distinct logs. In Kafka, every log partition is a separate file, and moreover it is kept open. So storing many logs means writing to many files, which eventually becomes random write IO; you may also hit open-file limits. You can multiplex logical logs within each Kafka log, but then you unnecessarily read the other logs as well.

Keeping SSTables makes the writes sequential, and the reads reasonably sequential too, as long as you have enough RAM to buffer multiple records of each log so that they form contiguous blocks in the flushed file.

Actually, you could get a very similar result using Cassandra, which also uses SSTables. The difference is that Cassandra keeps merging files, which generates much more IO traffic than the clients do: Cassandra will typically need around 16x more IO for merging than the actual data write rate. You can limit it a bit if you create time-sharded tables.
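
As a back-of-the-envelope illustration of where a multiplier like 16x can come from under size-tiered compaction (the 4-way-merge model and the sizes are my assumptions, not measured numbers):

    # Back-of-the-envelope: compaction IO vs. the incoming write rate.
    # Assumptions (mine): size-tiered compaction with 4-way merges, and each
    # merge both reads and rewrites the data, so each tier costs ~2x its volume.
    import math

    def compaction_io_multiplier(total_bytes, flush_bytes, fanout=4):
        tiers = math.log(total_bytes / flush_bytes, fanout)  # merge generations
        return 2 * tiers                                     # read + write per tier

    # e.g. 64MB memtable flushes accumulating into a 4TB table:
    print(compaction_io_multiplier(4 * 2**40, 64 * 2**20))   # ~16x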


Six years ago I submitted a cloud computing comparison - https://www.cloudorado.com/ . It hit the first page with 38 comments. There was a nice spike in traffic, the likes of which I've never seen since, but it faded quickly. Now traffic mostly comes from search and some links that popped up here and there. The site is live and provides revenue (though nothing spectacular; a fraction of what I need to live on).


A similar plugin has existed for Firefox for years. There is also one for Chrome that uses the same algorithm, so it generates the same passwords. It keeps the user name and the version of the password for each domain in the regular Firefox password storage.

https://addons.mozilla.org/pl/firefox/addon/password-hasher/
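
The general scheme behind such plugins (my illustrative sketch, not the actual Password Hasher algorithm) is to derive each site password deterministically from a master secret plus the domain and a bumpable version number:

    # Illustrative sketch of deterministic per-site passwords; NOT the actual
    # Password Hasher algorithm, just the general idea behind such plugins.
    import base64
    import hashlib

    def site_password(master, domain, version=1, length=16):
        seed = f"{master}:{domain}:v{version}".encode()
        return base64.b64encode(hashlib.sha256(seed).digest()).decode()[:length]

    # The same inputs always give the same password, in any browser:
    print(site_password("correct horse battery", "news.ycombinator.com"))
    # Bumping the version rotates the password without a new master secret:
    print(site_password("correct horse battery", "news.ycombinator.com", version=2))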


I just wonder: if the same key is used both to unlock the password manager and as 2FA ... is it still 2FA? I mean, with the token you get both access to the password and the second factor for a service.

