Scharon Harding said: "The longevity of HDDs is also another reason for shoppers to still consider HDDs over faster, more expensive SSDs."

HA! Take that, Suddenly Self-Destructing Drives!
"Obviously mechanical drives. Has anyone done any sort of life testing on SSD?"

Facebook has that data internally I'm sure, but they haven't published any studies that I've seen.
"HA! Take that, Suddenly Self-Destructing Drives!"

Anecdotal evidence, but I've had two SSD failures total. And for those failures, I think that was a bad Sandisk lot, because I had two other drives from the same line that are still kicking since 2017 but were manufactured at a different time period. My 14-15 year old Crucial C300 is still puttering along just fine. Of course, I have some even older WD Velociraptor 300GB drives that are still working too. That said, cheaper HDDs have had a lifespan of about a decade for me and I simply wouldn't trust them to reliably backup my data past that point. In contrast, my MLC based Crucial and Samsung drives still have 95%+ health and those are all 10+ years old now.
"Interesting claim. I have yet to have an SSD fail. Back when I was using HDDs, I had one fail every year or two (I typically have 3-5 drives in my main personal PC). The 7200 RPM drives seemed to fail a lot more, regardless of brand, so before making the transition to SSDs (~13 years ago?), I was sticking with those."

If you look through the Backblaze numbers, the failure rate for a lot of recent hard drives is more like 1-2% per year, so with modern hard drives that would be vanishingly unlikely unless you happened to buy a bunch of bad drives somehow.
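A rough back-of-the-envelope on that 1-2% figure (my own sketch, not from the article, assuming independent failures and a constant annualized failure rate):

```python
# Chance of seeing at least one failure among a handful of drives,
# assuming independent failures at a constant annualized failure rate (AFR).
def p_at_least_one_failure(afr: float, drives: int, years: float = 1.0) -> float:
    p_one_survives = (1.0 - afr) ** years   # one drive makes it through the period
    return 1.0 - p_one_survives ** drives   # complement of "every drive survives"

for afr in (0.01, 0.02):
    print(f"AFR {afr:.0%}, 4 drives, 1 year:   {p_at_least_one_failure(afr, 4):.1%}")
    print(f"AFR {afr:.0%}, 4 drives, 10 years: {p_at_least_one_failure(afr, 4, 10):.1%}")
```

At 1-2% AFR, four drives average out to roughly one failure per decade or two, so a failure every year or two really does point to bad luck, a bad batch, or a much higher per-drive rate.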
The problem with Backblaze's figures is they only have one type of workload - backup. So as much as it might tell you what backup does to disks, I don't think you can extrapolate this to other uses.
Interesting claim. I have yet to have an SSD fail. Back when I was using HDDs, I had one fail every year or two (I typically have 3-5 drives in my main personal PC). The 7200 RPM drives seemed to fail a lot more, regardless of brand, so before making the transition to SSDs (~13 years ago?), I was sticking with those.
Obviously, my sample size is pretty small, but the high failure rate I was experiencing with HDDs (On 24/7 but definitely closer to a typical consumer workload, otherwise) was just as much reason for me to transition as performance.
"A LOT of people used to cite Backblaze stats as a reason to not buy any kind of storage made by Seagate, and it was always annoying to me. Thankfully I don't see it as much now."

I've had the highest failure rates with 7200 rpm performance HDDs. Most anything 5400 rpm was slower, but cheap and reliable. I've had zero failures with upper-tier or enterprise 7200 rpm drives, though (e.g. RE4, IronWolf Pro, WD Gold, etc.).
I've had 3-5 HDDs in every computer I've owned (starting in the late '90s; most were scavenged or bought used, and I replaced my computers every year or so, so that's a lot of drives), and I think I've had ... four HDD failures, if I count the ones that worked when I took them out but were dead a decade later. Live HDD failures, as in a HDD that failed while it was in use? One.
"Obviously mechanical drives. Has anyone done any sort of life testing on SSD?"

FWIW, Backblaze has: https://www.backblaze.com/blog/ssd-edition-2023-mid-year-drive-stats-review/
Full disclosure: I’m the delivery guy for hard drives to Backblaze. I always drop the Seagate drives a couple times before taking them in.
“It’s a good idea to decide how justified the improvement in latency is,” Doyle said.
"Anecdotal evidence, but I've had two SSD failures total. And for those failures, I think that was a bad Sandisk lot, because I had two other drives from the same line that are still kicking since 2017 but were manufactured at a different time period. My 14-15 year old Crucial C300 is still puttering along just fine. Of course, I have some even older WD Velociraptor 300GB drives that are still working too. That said, cheaper HDDs have had a lifespan of about a decade for me and I simply wouldn't trust them to reliably backup my data past that point. In contrast, my MLC based Crucial and Samsung drives still have 95%+ health and those are all 10+ years old now."

I've had great luck with a Samsung SSD that is going on 10 years as an OS drive.

But I've steered clear of everything Sandisk since their portable SSD issues that they wouldn't acknowledge for the longest time. Then when Sandisk did acknowledge it, they claimed to fix it via firmware, but many people still reported the same failure issues.
"I've had the highest failure rates with 7200 rpm performance HDDs. Most anything 5400 rpm was slower, but cheap and reliable. I've had zero failures with upper-tier or enterprise 7200 rpm drives, though (e.g. RE4, IronWolf Pro, WD Gold, etc.)."

Must not have been doing this for long, then. Enterprise 7200 drives were god-awful about 15 years ago: while our 10K and 15K drives were at low single-digit failure rates, the enterprise 7200 drives were more like 10-13%, with Seagate drives from HP going above 100% AFR for a few years for us.
"My 320mb (megabyte) IDE IBM hard drive from 1994 still boots Win3.1 and loads Doom II just fine. Who says older drives are unreliable?"

This is the kind of thing people used to say at work just before a drive let out the click of death or bled blue goop.
There's a strange periodicity to the graph that makes me wonder if we're seeing a methodological problem that's creating aliasing. Why would drives fail most often at "odd.5" ages, and least often at "even.5" ages? I'd expect the individual ticks to show more random noise relative to their neighbors, and less of a modulated two-year sinusoid.
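For context on how such a curve is usually built, here's a minimal sketch of my own (not Backblaze's code) using the drive-days formula they describe in their drive-stats posts. If drives enter the fleet in large purchase batches, each age bucket ends up dominated by a few cohorts, so cohort-specific quirks can show up as apparent structure along the age axis:

```python
from collections import defaultdict

# Toy records: (drive_age_in_days_on_that_day, failed_that_day).
# In a real dataset every drive contributes one row per day it was powered on,
# so the counts below would be in the millions.
observations = [(400, False), (401, False), (950, True), (951, False)]

drive_days = defaultdict(int)
failures = defaultdict(int)
for age_days, failed in observations:
    bucket = age_days // 91                      # ~quarter-year age buckets
    drive_days[bucket] += 1
    failures[bucket] += int(failed)

# Annualized failure rate per bucket: failures / (drive-days / 365) * 100.
for bucket in sorted(drive_days):
    afr = failures[bucket] / (drive_days[bucket] / 365) * 100
    lo, hi = bucket * 91 / 365, (bucket + 1) * 91 / 365
    print(f"age {lo:.2f}-{hi:.2f} yr: AFR {afr:.1f}%")
```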
"My 320mb (megabyte) IDE IBM hard drive from 1994 still boots Win3.1 and loads Doom II just fine. Who says older drives are unreliable?"

When I worked at IBM, Boca Raton, where the PC was made, hard disks were called "hard files".
"I've still got a pair of intel X25M 160GBs striped in a machine that is booting from them, working great."

I've got an original 32GB X25-E drive in use in a laptop that I ran Home Assistant on; that workload moved over to an RPi, but the laptop still runs Linux. Its original life was as a log partition for a database server, so it was used hard and put away wet until it got retired to my personal use =)
"One of the biggest differences between home use and datacenter use is the variations in temperature that occur at home. Drives that are in use 24/7 are going to sit in a much narrower temperature range than those that sit dormant for long periods, and then get spiked under a heavy load occasionally."

That, and at home PCs are not running 24/7, so they constantly get shut down and started again.
"Interesting claim. I have yet to have an SSD fail. Back when I was using HDDs, I had one fail every year or two (I typically have 3-5 drives in my main personal PC). The 7200 RPM drives seemed to fail a lot more, regardless of brand, so before making the transition to SSDs (~13 years ago?), I was sticking with those.

Obviously, my sample size is pretty small, but the high failure rate I was experiencing with HDDs (On 24/7 but definitely closer to a typical consumer workload, otherwise) was just as much reason for me to transition as performance."

I've had failures with both, but since moving spinning rust to an always-on NAS that never moves and is hooked up to clean power, they've lasted longer. It also helps that the NAS does more diagnostics, making upcoming failures easier to catch earlier; the failure rate is still lower, but the impact of a failed drive is lower too. The SSDs, however, will just die with zero warning. It's why I run them in a RAIDZ1 configuration plus regularly back up that pool to the HDD-backed pool.
"Fucking VERY justified. I will NEVER have another platter drive in a normal end-user computer. Do I use them in my NAS, and as external backups for my NAS? Sure, but that is a very different use case."

My favorite upgrade is to swap someone's 5400 rpm drive for an SSD. It's like a brand-new computer. Most of those are gone now, but it was wizardry when I could tell someone to give me their laptop and by tomorrow it would be better than new.
"The problem with Backblaze's figures is they only have one type of workload - backup. So as much as it might tell you what backup does to disks, I don't think you can extrapolate this to other uses."

True. But I do think it's a good analog for readers who are in the Data Hoarder/Home NAS market.
"Interesting claim. I have yet to have an SSD fail. Back when I was using HDDs, I had one fail every year or two (I typically have 3-5 drives in my main personal PC). The 7200 RPM drives seemed to fail a lot more, regardless of brand, so before making the transition to SSDs (~13 years ago?), I was sticking with those.

Obviously, my sample size is pretty small, but the high failure rate I was experiencing with HDDs (On 24/7 but definitely closer to a typical consumer workload, otherwise) was just as much reason for me to transition as performance."

I had 2 SATA SSDs fail in the past year. Surprisingly, none of my 17 desktop-attached drives have. And of course my backup server, which still had a 2007-dated SATA HDD that did fail, but its other 6 drives, all HDDs, are performing well with various POH numbers in the 30K+ range. The one that died was over 80K.
I've still got a pair of intel X25M 160GBs striped in a machine that is booting from them, working great.
One unfortunate limitation is the size of the data set, the size of the drives, and the type of drives.
"Obviously mechanical drives. Has anyone done any sort of life testing on SSD?"

It's old data, but I remember Techreport (RIP) doing endurance testing: https://techreport.com/review/the-ssd-endurance-experiment-theyre-all-dead/
"I know it's only a sample size of like 20 or so for me, but I have yet to see an SSD that I've owned or been adjacent to fail."

I go back to about 2015 or so with SSDs of various types and have had at least a dozen fail over that span. IMO, and only backed up by my experience, heat and power are the enemies of SSDs. My 30+ HDDs are lasting well; aside from a failure every year or two, they are mostly trouble-free, and the ones that failed since 2020 were all reasonably new and under warranty. Temps are the real enemy of HDDs: keep them under 35C and all will be good. SSDs are not as easily monitored for temps, and as such this can easily go undetected as a source of failure. Electrical spikes, both up and down, can easily kill an SSD. Brownouts occur much more commonly than you would expect; their variance is usually mitigated somewhat by the utility company, but they sneak in and can kill sensitive electronics. A UPS is mandatory IMO.
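On the "SSDs are not as easily monitored for temps" point, here's a minimal sketch of one way to poll a drive's temperature, assuming smartmontools 7.x is installed (for JSON output), the script runs with enough privileges, and the drive actually reports a temperature; support and field names vary by vendor and by SATA vs. NVMe:

```python
import json
import subprocess

def drive_temperature(device: str) -> int | None:
    """Best-effort read of a drive's current temperature (deg C) via smartctl."""
    result = subprocess.run(
        ["smartctl", "--json", "-A", device],   # -A prints the device attributes
        capture_output=True, text=True, check=False,
    )
    data = json.loads(result.stdout or "{}")
    # smartctl's JSON output exposes a normalized "temperature" object when it
    # can find one; if the drive doesn't report it, this returns None.
    return data.get("temperature", {}).get("current")

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/nvme0"):       # adjust to your own devices
        print(dev, drive_temperature(dev))
```

Pointing something like this at a logger (or a UPS-triggered shutdown script) is the kind of monitoring that catches a heat problem before it becomes a dead drive.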
How do these HDD rates compare to SSDs? It has to be higher, but does a similar dataset exist for SSDs? And yes, I am aware that you can wear out an SSD by using it enough.
"Obviously mechanical drives. Has anyone done any sort of life testing on SSD?"

They did some, but the last time was September of 2023. I asked about it in the comments of their 2024 year-end article, and Stephanie Doyle answered with the following:
There are some funky things about us reporting on SSDs. Some of it has to do with the ways drives are tracked internally at Backblaze, and some of it has to do with the ways we use SSDs in our drive fleet. I wouldn't say we've given up on the report, but we've paused for the moment to make sure we're providing meaningful, comparable data.
"You really don't need to, Seagate did it already."

My most recent HDD failure was a Seagate drive, a Barracuda 7200.9 1TB. It died this past summer with 70K+ PoH. Some do last better than others.
Edit: grammar