The website allegedly claims that it can't take down users' content because it's decentralized; would the DMCA request then go through to the individual user? I can see that being a problem with negligent people, but I expect the company would probably step in regardless and take down videos.
Shout out to Seagate: every drive that my friends and I have bought from them has eventually failed. Good to see that they fail in non-consumer use too, not just for me! Stick to the Western Digitals.
Yev here from Backblaze -> The Western Digital drives that you've purchased will fail too. That's part of the whole point of these reports: all of the drives eventually fail or reach a state where we have to replace them; it's not any one specific manufacturer. That's part of why having a backup is so important. Even the SSDs in newer machines will eventually go wonky.
I appreciate your diplomatic language, but the time to failure does matter, and consumers don’t have the same cost structure that incentivizes replacing working drives the way you do.
FWIW I share GP’s experience with Seagate. I had quite a few of them, ranging in size from 500 gigs to 2TB. Every last one of them died relatively quickly, while most of my Hitachis, Toshibas, and WDs from that era still work.
Seagate earned a permanent boycott from this customer.
Beyond that, this data isn't particularly useful to me, because a lot of variables differ between running drives in a data center and in a desktop or home NAS. I like reading these reports because they're interesting, but I make purchasing decisions based on other factors: ease of dealing with consumer support (when I need a drive replaced), noise (it's in my house), heat tolerance (my house is warmer than a data center), and sensitivity to things like vibration (my case is far less stable than an enterprise storage cluster).
That being said, I occasionally use these data to break ties or see variations between generations of drives.
I just purchased a newer WD Red (WD80EFAX) to replace an older HGST (pre-WD if memory serves) drive. That drive (and another of the same model I purchased elsewhere to test) is not recognized by the UEFI on the motherboard. The OS probes the drive and fails to negotiate DMA transfers.
Turns out there is a publicly available KB article that mentions known bugs with transfer-rate negotiation on some of their SATA3 drives. Of course, WD won't say which drives. There's even a utility referenced in the article that you can use to disable SATA3 support, but WD won't make the utility publicly available.
Meanwhile their support is stuck in a "have you tried power cycling the computer" loop.
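For anyone hitting similar link-speed weirdness, here's a rough sketch of how you might check what a drive actually negotiated, using smartctl. It assumes smartmontools is installed and /dev/sda stands in for the affected drive; if the drive never enumerates at all, this obviously won't help.

    import re
    import subprocess

    # Rough sketch: report the SATA link speed a drive actually negotiated.
    # Assumes smartmontools is installed; /dev/sda is a placeholder device path.
    def negotiated_sata_speed(device: str = "/dev/sda") -> None:
        out = subprocess.run(
            ["smartctl", "-i", device], capture_output=True, text=True, check=False
        ).stdout
        # smartctl prints a line such as:
        #   SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
        match = re.search(r"SATA Version is:\s*(.+)", out)
        if match:
            print(f"{device}: {match.group(1).strip()}")
        else:
            print(f"{device}: no SATA version line found")

    if __name__ == "__main__":
        negotiated_sata_speed()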
If memory serves Western Digital / HGST / SanDisk were among the first to jack up prices after the Thailand disasters. Fuck em.
Meanwhile, is the difference in failure rate between Seagate and Western Digital even statistically significant? For most comparisons you're looking at a fraction of a percent.
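For what it's worth, one could sanity-check that with a standard two-proportion z-test against the published failure and drive counts. A rough sketch in Python, with made-up placeholder numbers rather than Backblaze's real counts:

    import math

    # Rough sketch: two-sided two-proportion z-test for H0 "failure rates are equal".
    def two_proportion_z_test(fail_a: int, n_a: int, fail_b: int, n_b: int) -> float:
        p_a, p_b = fail_a / n_a, fail_b / n_b
        pooled = (fail_a + fail_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

    # Made-up placeholder counts, not Backblaze's real numbers:
    # 240 failures out of 20,000 drives vs 225 out of 25,000 (1.2% vs 0.9%).
    p = two_proportion_z_test(240, 20_000, 225, 25_000)
    print(f"p = {p:.4f}")  # a small p suggests the gap isn't just noise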
I feel like there might be a difference - do drives fail more if they're not continuously running? When I was working in a lab we saw close to a 70% failure rate for drives that had been sitting in a box for more than a couple of years.
Do you think the Seagate Barracuda deserves to take the "death star" title from HGST? I think HGST has redeemed themselves with sub-1% failure rates for years, while Seagate managed to put out a 30% failure rate drive.
The "Death Star" title really belongs only to that one specific model (75GXP) produced by IBM and its spectacular failure mode (shredding the magnetic media off the platter almost entirely in some cases)
I've never gotten to hear one fail, but they were how I learned never to use drives from a single batch in a RAID array back in the day. (Most of the time these days I never match models in a single array and try to avoid matching manufacturers; I've gotten paranoid over the years after successive RAID failures.)
Thankfully it was just stressful - we had backups, but drives in our array started failing one by one, at about a week's interval. Unfortunately it took something like 4 days to rebuild the array each time, so we wasted a lot of time shifting writes elsewhere so that our last backup plus the redirected writes stayed recent enough in case a second drive failed. We got off easy, but the time we spent probably cost us more than setting up a more redundant system in the first place would have.
I'm not sure about today, but 4 years ago we bought 120 HP notebooks from Verizon (they had cell modems in them) to use with a project. We had a horrendous failure rate: 100 hard drives in the first six months. They were Seagate drives. We ended up just replacing the remaining 20 hard drives as a preventative measure.
Then I stand corrected. Is there any way to avoid drive failures, or should we accept the fact that they will break eventually? Also, the data is interesting and appreciated.
Yev here -> There's no way to avoid drive failure. We circumvent drive failure issues by using our vault architecture (https://www.backblaze.com/blog/vault-cloud-storage-architect...). It's essentially like a giant RAID++. Anything mechanical will break down eventually, which is why we try to push people towards having backups - the more copies of the data you have, the less chance there is of all of those media failing simultaneously and resulting in loss.
There's no real way to avoid failure in any particular piece of hardware. Eventually, everything from the HDD to the computer it's in will fail. The only way to ensure data durability is to back up across multiple independent physical sites. Even then, there's never a guarantee - it's just that the chance of data loss decreases to levels where other issues start being more problematic, e.g. global nuclear war.
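To put rough numbers on that: if each copy independently had, say, a 2% chance of being lost in a given year (a purely illustrative figure), the chance of losing every copy shrinks geometrically with the number of copies. A toy sketch - real-world failures are correlated (same site, same batch), so treat it as optimistic:

    # Toy calculation: chance of losing every copy, assuming each copy has a 2%
    # chance of loss per year and losses are independent (they aren't, really --
    # correlated failures make the true risk higher than this).
    P_SINGLE_LOSS = 0.02  # illustrative figure, not a measured rate

    def p_all_copies_lost(p_single: float, copies: int) -> float:
        return p_single ** copies

    for copies in range(1, 5):
        print(f"{copies} -> {p_all_copies_lost(P_SINGLE_LOSS, copies):.2e}")
    # 1 -> 2.00e-02, 2 -> 4.00e-04, 3 -> 8.00e-06, 4 -> 1.60e-07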
I really wish that consumer OSs would mandate RAID (mirroring at a minimum), requiring additional steps to install on unprotected storage. This should have been implemented even back during the early days of consumer hard drives, or at a minimum there should be a warning on boot that "Your data is stored on temporary storage -- go [here] to configure redundancy". At least something that puts it front and center that storing data on a single hard drive will eventually lead to data loss.
It would also be nice to have a similar warning: "Your data has not been backed up in [x] days".
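The check itself wouldn't need to be much more than this rough sketch (the marker-file path and the 7-day threshold are hypothetical; a real backup tool would report its own status):

    import time
    from pathlib import Path

    # Rough sketch of the warning described above. The marker file and 7-day
    # threshold are hypothetical; a real backup tool would report its own status.
    BACKUP_MARKER = Path("/var/backups/last-backup-timestamp")
    MAX_AGE_DAYS = 7

    def days_since_last_backup(marker: Path):
        if not marker.exists():
            return None
        return (time.time() - marker.stat().st_mtime) / 86_400  # seconds per day

    age = days_since_last_backup(BACKUP_MARKER)
    if age is None:
        print("Warning: no backup has ever been recorded.")
    elif age > MAX_AGE_DAYS:
        print(f"Warning: your data has not been backed up in {age:.0f} days.")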
Ugh, god why? Why must people in our industry daydream about over-complicating things all the time? Very few use-cases benefit significantly from mirrored disks. The amount of data a user actually cares about having a backup of is often significantly smaller than the OS and related garbage on a disk that they can do without. Besides which, a local mirror isn't a good backup anyway!
My thinking is that this would serve a similar purpose to the trend for web browsers to warn users of insecure websites -- the more "in your face" the warnings are, the more incentive there is for providers to be secure.
I've had friends / relatives experience drive failure a few times in the past, and the look of horror they have when there is very little I can do to help them recover their photos etc. is something that I hate seeing.
And having the OS give a simple warning (that can be dismissed, with a "don't show me this any more" checkbox) would not over complicate things, and may end up saving some people's data.
Really no different than the current warning that Windows has, when you don't have antivirus installed. Or the fasten seatbelt signal that comes on the dashboard of your car when you start it. Or the flashing red light that gets added to some stop sign controlled intersections.
Also it would be nice if there was a standard way that backup software could inform the OS of backup status (this way it would serve as a secondary check in case the backup software's internal reporting fails to notify the user of bad backups). Just a little nice-to-have.
(I'm not advocating for this to be a legal requirement, just a nice feature if any OS vendor wants to add it).
> My thinking is that this would serve a similar purpose to the trend for web browsers to warn users of insecure websites -- the more "in your face" the warnings are, the more incentive there is for providers to be secure.
Another thing I hate about the industry today: being hostile to the user and trying to force them to use their computer how you want them to use it.
> I've had friends / relatives experience drive failure a few times in the past, and the look of horror they have when there is very little I can do to help them recover their photos etc. is something that I hate seeing.
Then teach them about proper backups. It doesn't take a rocket scientist to understand "don't keep all your eggs in one basket". Or if you're going to implement some stupid forced user-hostile scheme at least use something that actually qualifies as a backup.
> And having the OS give a simple warning (that can be dismissed, with a "don't show me this any more" checkbox) would not over complicate things, and may end up saving some people's data.
Here's what will happen: the user will dismiss the dialog without even reading it. We have decades of experience showing us this. Users have learned that "warning" dialogs are meaningless precisely because of crap like this. Oh yeah, and they're super annoying.
> Really no different than the current warning that Windows has, when you don't have antivirus installed.
I've looked into RAID at home but came to the conclusion that there are plenty of easier/cheaper ways to do backup. I just use USB drives, have a couple rotating Time Machine backups plus Backblaze. (Plus, when I think of it once a year or so, I have one more copy of my main data disk that I keep in a fire box.)
I don't really use Windows but you can do something similar--though in my experience it's not as simple.
I don't really care if I avoid any downtime. So long as I have belt and suspenders backups, I'm pretty comfortable.
If we're talking about "protecting" consumers, I think it might be wiser for all OS vendors to provide a free tier of version-controlled cloud storage.
Backup is much more important than RAID in most situations. RAID doesn't protect from user error, virus, fire, etc.
I think Windows does remind you to create a backup (but not very loudly, and it accepts local backups to another drive, which is a poor solution).
Unfortunately, almost everything is built as cheaply as possible. For example, I would prefer ECC RAM as standard. It's very cheap to produce (one additional chip per RAM stick), but Intel wants to force people who want reliability to pay for a Xeon CPU, and most people don't care.
More specifically, adopt a 3-2-1 scheme for the data you care about: store it in three places, two onsite but on different media, and one offsite. Backblaze is probably as good a choice as any for offsite storage; Tarsnap is also really popular here.
Good question and nope. They're simply too expensive for us. As they get lower in price they do become more interesting, but their $ to density is just nowhere near where we'd need it to be in order to continue providing our service at the rate we do. Some of our boot drives are becoming SSDs though, and they are in the "raw" data that we publish.
As a teenager, I helped a friend revive a Seagate drive after it bricked due to faulty firmware. If I recall correctly, my friend had actually installed some firmware updates for the drive, but had not installed one recently enough to avoid the problem. We had to run wires to contacts on the board to allow us to run commands in a terminal (in Windows) on the only machine we could find with a serial port. When it worked, we felt like hackers from a movie!
I think the drives in external USB enclosures are the most likely to fail. It seems that they put their lower-quality stuff, or at least drives that didn't quite pass QC, into those, because they know the huge majority of them will have some data put on them once and then never be touched again.
If you rip them apart they are the same as their regular laptop drives. Same with WD; I ripped a Green drive out of one. Users in forums have recommended buying certain external drives and ripping them apart to save money - certain high-capacity WD Red drives cost less when they were bundled into a "NAS" enclosure.
How do you know? As I said in my comment, they may be drives that didn't quite meet the qc threshold.
All drives usually ship with a certain number of bad sectors. An excessively high number of bad sectors can mean there are quality issues with that specific drive. They may be 'binning' the drives that test more poorly into the external USB products.
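If you do shuck drives, one cheap sanity check is to read the reallocated-sector count from SMART before trusting the drive. A rough sketch, assuming smartmontools is installed and /dev/sdb is a placeholder for the shucked drive (raw-value formatting varies by vendor, so the parsing here is deliberately naive):

    import subprocess

    # Rough sketch: read the reallocated-sector SMART attribute (usually ID 5,
    # "Reallocated_Sector_Ct") from a drive. Assumes smartmontools is installed;
    # /dev/sdb is a placeholder for the shucked drive.
    def reallocated_sectors_raw(device: str = "/dev/sdb"):
        out = subprocess.run(
            ["smartctl", "-A", device], capture_output=True, text=True, check=False
        ).stdout
        for line in out.splitlines():
            fields = line.split()
            # Attribute-table rows start with the numeric attribute ID.
            if fields and fields[0] == "5":
                return fields[-1]  # raw value, as reported (format varies by vendor)
        return None

    raw = reallocated_sectors_raw()
    print("Reallocated sectors (raw):", raw if raw is not None else "attribute not found")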
Yes, that is what I was getting at. I almost exclusively "shuck" my drives like that for my server. But I have gotten a seemingly high number of failures over the years.
Dissenting opinion: I've owned and used a lot of Seagate drives. I have had a total of one fail on me. Given that they are usually less expensive than competing drives, I think they're fine. At the rate of disk size growth, I end up replacing the disks for size reasons before they fail anyway.
My experience is the opposite. 100% of the Seagate drives I have purchased for personal use (total: 4) experienced failures within the warranty period, and of the drives I had a hand in purchasing professionally at least half needed replacement within the warranty period (~64).
I think Seagate is fine if you are willing to overbuild and deal with the refurb process within the warranty period. I'm not, and will not personally buy a Seagate product again unless their reputation improves (similar to what happened with IBM/HGST post-deathstar). It's not worth the time or aggravation, to me.
All I can say is my experience differs. The only Seagate drives I've had that kind of failure rate with were the 3TB ones that were out, IIRC, around the time of The Great Disk Shortage.
I've still got my 1TB Seagate up and running from around six years ago. No problems with that one, but all the 3TB drives I've encountered seemed to fail quite quickly. Maybe it's the specific model that I got, but the higher capacities, in my case, tend to be more faulty.
I used to work for a long-defunct storage manufacturer; we reached the point where we skipped either even or odd (I can't remember which) firmware versions on the Seagate drives. RMA counts would always be higher on one of the two.
Call it paranoia, but I see this "WD is amazing, Seagate is shit" sentiment everywhere, yet the data doesn't back it up. Is this guerrilla marketing by Western Digital?
I think there's some selection bias, as Seagate drives are fairly abundant in OEM systems where the OEM included the lowest-tier drive possible to shave off some cost; those drives may be less well built and more susceptible to damage over time from vibration/impacts than higher-tier drives.
I suspect that most of the time, after a hard drive failure, the drive gets replaced with a higher-tier drive (typically from another manufacturer like WD, as the person feels burnt by Seagate for having had an HDD failure), which may prove more reliable.
Well, JFYI, starting in 2009 (the last user posted in the thread a few days ago) there was the great Seagate 7200.11 Bricking Season - unrelated to the quality of the actual hardware; the issues were firmware-related - which killed so many disks (most of which could be revived) that I'd guess nearly everyone either had one, knew someone with the issue, or at least heard someone talking about it.
By any metric, a thread on an all-in-all "niche" technical board with almost 5,000 posts and nearly 4,700,000 views should mean that a lot of people experienced the issue:
There was a certain line of Seagate 3TB drives that seemed particularly prone to failure, as reflected in consumer reports as well as Backblaze's statistics at the time:
I got burned pretty badly by those, as I bought 12. I now avoid Seagate if possible because I'm pretty sure they knew their disks were faulty. Even if they didn't know, they didn't handle the whole fiasco well.
Funny how everyone chimes in with their Seagate horror stories. My experience with them isn't much better.
But another fun story: I had a PSU blow up a couple years ago in a machine with three WDs and three HGST. All WDs were dead after that, the others worked flawlessly. Probably not a large enough sample size for any definite conclusions but at least it put a failure mode on my radar that wasn't there before.
I've only ever had one drive failure. It was a 3 TB Seagate. I remember it failing after about three years, so after the warranty had already expired.
My oldest drive is a 500 GB Western Digital from 2008 that's still operational today. I imagine its end is near, but I've thought about that for a couple of years now.