Yeah, America has a totally irrational credit scoring system. It makes no sense whatsoever. If I never needed credit, then I don't have a history, and thus I have a bad score... go figure.
If the password is too short, an attacker can brute force the hash by simply trying all 6-character combinations of allowed password characters, provided that the hashes get leaked or stolen, which happens quite a lot these days. (Or if the website is careless enough to let an attacker try all these combinations online without throttling them or locking the account after the xth wrong attempt.)
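For scale, a rough back-of-envelope (the 10^10 hashes/second figure is an assumption for a single GPU against a fast hash like MD5; a slow hash like bcrypt changes the picture completely):
# keyspace of 6-character passwords over the 94 printable ASCII characters
echo $((94**6))                     # 689869781056 combinations
# at an assumed 10^10 hashes/second, time to exhaust the whole space:
echo "$((94**6 / 10**10)) seconds"  # prints 68, i.e. about a minute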
Not that I'm a fan of USB Micro, but there are cables for sale that have a reversible USB Micro plug, as well as a reversible USB plug on the other end. I have no experience with their durability, though.
No, the regulation is that all chargers use USB; the other end of the cable doesn't matter. The situation that arose out of that is that you can charge pretty much any phone with any USB charger, which was the intent.
I'd have liked Apple to choose a USB standard for the other end of the cable. To be honest, though, the Lightning connector and its counterpart look a lot more robust than USB-C or Micro USB, and USB-C is also quite a bit bigger than Lightning. So perhaps some day we'll see USB adopt Lightning as a standard, if that's at all possible. The iPad Pro's Lightning port supports USB 3 speeds, so we know Lightning can do USB 3.
The build quality is definitely good. However, I'm very displeased with the temperature sensor: it seems to be accurate to no better than 1 degree, and the software adds a further tolerance in deciding when to turn the AC/heating on or off, so I find myself constantly setting the temperature manually because I'm either freezing or getting too hot. A solution would be remote temperature sensors, which people have been asking for since the beginning, but that never happened. I believe there are now third-party solutions that hook into Nest, perhaps via IFTTT, but that turns the Nest into just a thermostat with a remote on/off switch, which is basically the only feature that mostly works for me. (Yeah, even that isn't always robust.)
Yeah, I lost my PS4 preorder over that issue: K-mart just cancelled my preorder because they thought my email address was suspicious. Mind you, this was at a time when you'd have to wait months for the machine to be back in stock. I spoke on the phone with their manager, and they said there was nothing they could do about it. So I've never shopped there since.
Because when you start confusing pattern recognition and neural-net training with "intelligence", "learning", and "consciousness", everybody who doesn't know enough about the technology will get the wrong expectations. It's even worse that people working on AI are having these wrong expectations themselves, so yes, a winter is coming. Besides that, the AI we have today is used all wrong: to spy on people even more and to consolidate central services and the big players.
I'd be careful about drawing conclusions from those tests. We know that the size of a container image does not directly determine how much RAM it consumes. Therefore, there must be something the bigger images are running that consumes memory and that isn't running in the "smaller" images. The key would be to find the culpable process or daemon(s).
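For what it's worth, hunting for the culprit is straightforward (a sketch; docker stats needs a reasonably recent Docker, and a minimal container's busybox ps may not support the sort flag):
# per-container memory usage at a glance
sudo docker stats --no-stream
# inside a container: processes sorted by resident memory, biggest first
ps aux --sort=-rss | head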
It could well be due to things like shared libraries. A larger distro will have more options enabled, causing more shared libraries to be linked into the same running processes, and thus more shared libraries to be fully loaded into memory.
A smaller distro might even statically compile most things - Alpine does. If you dynamically link shared libraries, the whole library is loaded into memory to serve the process. If you statically link, only the parts of the library that are actually used are included in the binary.
Statically linked binaries can't share the library in memory between each other like dynamically linked binaries can, but if all your processes are running in separate containers, they won't share those libraries anyway (unless they're all in a VM and the rarely used "samepage merging" is enabled for the VM).
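To make the static/dynamic difference concrete on a Linux box (the two binaries here are just examples; pick whatever is installed):
# a dynamically linked binary lists its shared library dependencies
ldd /usr/bin/memcached
# a statically linked one (Alpine-style) has none
ldd /bin/busybox   # prints "not a dynamic executable" when static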
Finally ... simplicity has knock-on effects. Making things simpler and smaller (not easier), and reducing the number of moving parts in the implementation, makes further cleanup easier.
That's not really how it works. Both executables and shared libraries are mapped into the virtual address space of the process, and then only the parts that are actually used are faulted (read) into physical memory, at page granularity. So yes, there is some bloat due to unused functionality, but it's not as bad as the entire library being loaded into memory.
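You can observe this on Linux by comparing what a process has mapped against what has actually been faulted in (1234 is a placeholder pid):
# total mapped (virtual) vs. resident (faulted-in) memory, in kB
awk '/^Size:/ {v+=$2} /^Rss:/ {r+=$2} END {print "mapped:", v, "kB  resident:", r, "kB"}' /proc/1234/smaps
The mapped total is typically many times the resident total, which is exactly the gap between VIRT and RES in top.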
That's an awful lot of conjecture. I'd wager that most of what you would actually be running in a container would not have its memory usage significantly affected by the presence or absence of optional shared libs. I'm with the parent on this; such claims warrant research.
Not really; it was an educated guess, followed by a description of how binaries and libraries work on modern Unix systems.
Here's a quick demo based on the trivial example of the memcached Docker images I mentioned in another thread:
vagrant@dockerdev:/host/scratch/janus-gateway$ sudo docker run --name=mc_big --detach --publish=11212:11211 --user=nobody sylvainlasnier/memcached /usr/bin/memcached -v -m 64 -c 1024
67c0e406245d341450c5da9ef03cbf60a8752433a4ace7471e2a478db9a62e07
vagrant@dockerdev:/host/scratch/janus-gateway$ sudo docker run --name=mc_small --detach --publish=11213:11211 --user=nobody ploxiln/memcached /bin/memcached -v -m 64 -c 1024
11037b69acfbc0de7601831634751cd342a7bafe9a25749285bc2c2803cc1768
vagrant@dockerdev:/host/scratch/janus-gateway$ top -c -b -n1 | grep 'COMMAND\|memcached'
  PID USER     PR NI    VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
 5984 nobody   20  0  316960 1192  768 S  0.0  0.1 0:00.02 /usr/bin/memcached -v -m 64 -c 1024
 6091 nobody   20  0  305256  780  412 S  0.0  0.0 0:00.00 /bin/memcached -v -m 64 -c 1024
Notice the significant difference in RES (resident set size) and SHR (shared memory). Less trivial processes will have more shared libraries and bigger differences here. Multiply this kind of result by all the contained processes; it adds up.
Sorry, I was responding to your post in the context of logician's "an important concern" assertion. You and jabl are correct technically of course.
Within the context of "an important concern", though: the difference in RES and SHR between the two is about ~330 kB. I suspect most people wouldn't find that significant, particularly given memcached's common use cases.
But it's unboundedly wasteful; why does nobody get that? There is no correlation between the proof-of-work energy spent and the actual value of the transactions. Even worse, proof of work is almost exponentially getting worse over time.
> Even worse, proof of work is almost exponentially getting worse over time.
Huh? Exponentially? The difficulty level is rising exponentially, but that's owing to continued investment in specialized hashing hardware. I certainly don't think that the actual amount of electricity being put into hashing is still growing exponentially. It would only grow exponentially if the price of Bitcoin continued to appreciate exponentially, and that hasn't been the case in years.
The network burns precisely as much energy as is needed to throttle the rate of coin issuance (roughly 25 BTC per 10 minutes). It is bounded by the intrinsic and speculative value of Bitcoin.
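Back-of-envelope on that bound (the $400/BTC price is purely an assumed figure for illustration):
# block subsidy: 25 BTC per ~10-minute block, 6 blocks/hour, 24 h/day
echo "$((25 * 6 * 24)) BTC/day of mining revenue"   # 3600 BTC/day
# at an assumed $400/BTC, a rational miner's ceiling on power spend:
echo "\$$((25 * 6 * 24 * 400)) per day"             # $1440000 per day
Whatever the real price, the point stands: the energy spend is capped by the market value of the block reward, not driven by transaction volume.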