Hidden truth about SSDs

  • SSDs offer many performance advantages over HDDs, but at a higher price. The real problem is the limited number of times you can write to an SSD. Some time ago I read about an endurance test - they wrote to SSDs continuously to test their lifespan, and the drives survived hundreds of terabytes. The problem is that this is not how you actually use an SSD. Typical use is not continuous sequential writes - it's lots of small random writes, usually with part of the disk occupied by the system.

    When you change a file on an HDD, it gets overwritten in place. When the file grows, the rest is appended next to it, or wherever free space is - that's why HDDs get fragmented and you experience performance drops. SSDs don't suffer from fragmentation, but they have another problem. In an SSD, cells are organized into blocks. When you change a file, you don't just change a few bits - everything in the block is written into a different block, and the old block is erased later. That's why you need to leave some free space on an SSD - so there are free blocks to write to, and blocks don't have to be erased before every write.

    This has huge implications. Even if you change just one letter in a file, you use up a whole block. The actual writes to the disk are much higher than it would appear, and the drive dies faster. But to not end on a pessimistic note: if your SSD hasn't failed during the first month (and it isn't a triple-level-cell Samsung), it will serve you for the next 5 years without problems.
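    The block-rewrite effect described above can be sketched in a few lines. This is a toy model, not any real controller's logic, and the 512 KiB block size is an assumption for illustration - real firmware buffers and combines writes, so actual amplification is lower.

```python
# Toy model of block-level write amplification: changing even one byte
# forces the drive to program a whole new block.
BLOCK_SIZE = 512 * 1024  # assumed erase-block size (512 KiB)

def nand_bytes_written(host_bytes_changed, block_size=BLOCK_SIZE):
    """Flash bytes actually programmed when the host changes
    host_bytes_changed bytes, assuming whole-block rewrites."""
    blocks_touched = -(-host_bytes_changed // block_size)  # ceiling division
    return blocks_touched * block_size

print(nand_bytes_written(1))  # one changed byte still costs 524288 flash bytes
```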

  • The good news is that the real world looks way better than the theory in this case - something that rarely happens.

    My personal statistic is that in the last 15 years I have replaced something like 400 HDDs (luckily not all on my own PC :lol:), but in the last 5 years I have replaced exactly ZERO SSDs

  • So wait 10 years and 'profit optimization' by manufacturers will bring you many SSDs to replace 😉
    I know of some very problematic SSDs.
    SSDs' main advantage is the lack of moving parts - but have you ever tried to recover data from one? 😛
    All SSDs are on a death counter - you can't cheat physics.

    My main point was that SSDs write much more data than you would think (and that synthetic benchmarks are useless).

  • @hondac:

    SSDs' main advantage is the lack of moving parts - but have you ever tried to recover data from one? 😛

    Luckily that was not needed, as I said.

    Then, the theory says that when the spare cells are exhausted the SSD becomes read-only, not dead. But, again, no real world test.

    An SSD can also die suddenly, because of a problem other than wear, like any other electronic equipment. But HDDs are subject to this worst-case scenario too.

    My main point was that SSDs write much more data than you would think

    Well that was discussed over and over when the first "affordable" SSDs started to be available.

    Given the lack of real-world tests, I took those considerations very seriously; 5 years later I can say I partly wasted some of my time.

    In short, data is safe only when a backup is available, no matter which storage is involved. And SSDs are more reliable than expected. Then it's up to the individual user and his personal preference and experience to evaluate the pros and cons.

  • Moderator

    Over the years, I have had a single thumb drive (solid state, obviously) fail on me, and eight HDD failures, just dealing with my family's computers. I had BOTH HDDs in my wife's old tower fail, one right on the heels of the other. Certainly, there are molecular/electronic limitations on SSDs, but I have never had one fail (yet) and I am here to bear witness to the fact that the mechanical limitations related to HDDs are at least as limiting, if not more so. Generally, if you drop an SSD, nothing really bad is likely to happen. If you drop an HDD, you may spend up to a couple thousand dollars trying to retrieve the data from it, and still fail (just ask my daughter, after my granddaughter tripped over the charging cable of her work laptop and brought it crashing to the floor).

    So, long story short, the furious writing caused by browsing with the Blink engine is, in my opinion, equally likely to contribute to the eventual failure of ANY kind of drive subjected to it. It's simply a bad, bad programming design and needs to be rectified.

  • @The_Solutor:


    SSDs' main advantage is the lack of moving parts - but have you ever tried to recover data from one? 😛

    Then, the theory says that when the spare cells are exhausted the SSD becomes read-only, not dead. But, again, no real world test.

    I'll have to ask about this. SSDs suffer from something called read disturb: reading a cell introduces errors in nearby cells, so the data eventually needs to be rewritten. An SSD at the end of its life would therefore only be readable for a certain amount of time.

  • @The_Solutor:

    Then, the theory says that when the spare cells are exhausted the SSD becomes read-only, not dead. But, again, no real world test.

    Unlike phase-change media, which become more stable with more write cycles, flash cells can't hold their data for long at the end of their life. So once you have exhausted almost all write cycles, the flash memory will simply forget its contents very soon and turn all bits into 1. So much for "read-only usage".

    In real life this problem showed up on Samsung 840 SSDs about 2 years after their introduction: older files, written a long time ago, take longer and longer to read, as the controller has more and more trouble guessing the correct values of the triple-level cells, with error correction working hard.

    These are 120-250 GB SSDs with 5-10 TB written to them. That means we have flash cells with just a few dozen cycles (multiplied by 3) on them, which already lose their content too fast.

  • @jtsn:


    Then, the theory says that when the spare cells are exhausted the SSD becomes read-only, not dead. But, again, no real world test.

    Unlike phase-change media, which become more stable with more write cycles, flash cells can't hold their data for long at the end of their life. So once you have exhausted almost all write cycles, the flash memory will simply forget its contents very soon and turn all bits into 1. So much for "read-only usage".

    No, this isn't a good description of what happens (or of what should happen).

    A 120 GB SSD has 128 GB of flash. You see just 120 because 8 GB are meant as spare cells and are unavailable to the system.

    Each time a cell nears its end of life, it is replaced by a new one taken from the spare 8 GB.

    An exhausted SSD is not a completely worn device. It's just a device with no more spare cells.

    The controller will obviously be aware of this, and should block any further write operation, turning the disk into a read-only device.
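    The spare-area arithmetic above can be checked quickly. The figures (128 GB of raw flash, 120 GB visible) are the ones from this post; real drives vary in how much they reserve.

```python
# Overprovisioning arithmetic for the example above.
raw_gb = 128      # raw flash on the drive
visible_gb = 120  # capacity exposed to the system
spare_gb = raw_gb - visible_gb
print(spare_gb)                        # 8 GB held back as spare blocks
print(f"{spare_gb / visible_gb:.1%}")  # ~6.7% overprovisioning
```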

  • The SSD controller performs wear leveling by writing data to different blocks instead of using the same block over and over again. In theory, all the flash memory should then fail around the same time. Well, not fail exactly - I think the controller logic would determine that writing to a block results in too many errors, and would retire the block from data writes.

    I think a lot of the spare cells are used for critical data (logical-to-physical address maps and other lookup tables), not as spare space. But different controllers do things differently.

    Flash memory cells develop errors for a lot of reasons. Read disturb is one of them. Cell leakage is another. So, basically, writing to flash is bad for the memory, reading the cells is bad for the memory, and not doing anything to the cells is bad for the memory.

    So, errors will accumulate on flash memory cells after the SSD becomes a read-only device. Reads will cause errors, and since the data cannot be rewritten, errors will also accumulate due to cell leakage. Eventually the data on the SSD will become unreadable. I don't have any idea how long this would take.
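    The wear-leveling idea from this post can be sketched as follows. This is illustrative only, not a real flash translation layer: each write goes to the least-erased live block, a block is retired at an assumed endurance limit, and the drive refuses writes once nothing is left.

```python
import heapq

ENDURANCE = 3  # assumed P/E cycle limit per block (tiny, for illustration)

class WearLeveler:
    def __init__(self, n_blocks):
        # min-heap of (erase_count, block_id): least-worn block pops first
        self.heap = [(0, b) for b in range(n_blocks)]
        heapq.heapify(self.heap)
        self.retired = set()

    def write(self):
        """Direct a write to the least-worn block, charging one erase cycle."""
        if not self.heap:
            raise IOError("no writable blocks left - drive is read-only")
        erases, block = heapq.heappop(self.heap)
        erases += 1
        if erases < ENDURANCE:
            heapq.heappush(self.heap, (erases, block))  # still usable
        else:
            self.retired.add(block)  # this was its last write: retire it
        return block

wl = WearLeveler(4)
writes = [wl.write() for _ in range(8)]
print(sorted(set(writes[:4])))  # first 4 writes spread across all 4 blocks
```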

  • tl;dr backup your data even if you have a fancy solid state drive

    I will also add that the only way to actually prevent data storage fatigue would be to use media which can only be written to once… which would be very inefficient.

  • I was a very early adopter with SSDs; I've even used the hybrids. I'm not sure how long ago that was, but my first one was a very expensive SSD of only 20 GB or so. Since then, the OSes have become more adept at handling functions such as TRIM, and Linux does this pretty well (a bit differently than Windows), so long as you avoid buying the drives that are on the blacklist.

    Anyhow, I've not had a single one fail and I must be on 7 or 8 years of use by now. Obviously those drives are no longer active but they lived their lives and now sit idle somewhere without any failure. The MTBF is pretty out of whack where real world use comes into play. I've had each one last with nary an issue. The price is worth it when the increased speed is considered. I mean, yeah, after all I once had to pay $400 USD for 4 MB of RAM (that's not a typo). Heck, at one point I had to pay an ungodly sum just to add a couple of memory chips so that I could type in lowercase letters. (TRS-80 model II anyone?)

  • My observations since having one.

    SSD reliability has improved, but if you go for the biggest drives they are often abusing the limits of cell density.
    The newest cutting-edge drives can fail due to the jump to another cell technology which has not matured.
    From Google's recent study of their SSD racks, it seems to be the heating up during write cycles that causes most degradation.
    SSDs simply fail differently, and more likely with no warning, as you will not usually get a slow-down first, or hear a clicking sound.
    From the first day you use it, it is always worth running a reliable SMART monitor to keep an eye on changes in the drive over time.

    As noted already, disabling services like Indexing is very useful.

    My own 120GB Corsair SSD is nearly 2 years old.
    "Overprovisioning" has been enabled to extend the life and improve performance.
    Power on hours: 9417 (1 year 27 days 9 hours)
    SSD life left: 90%
    Reads: 6.2 TB
    Writes: 8.4 TB

    Depending on your OS and usage, the read/write ratio will vary.
    Linux users should see a much longer SSD life, but unfortunately you may have to use Windows to configure the drive with the official tools.
    Do get the tools from the manufacturer if possible, as they contain useful features.

    If you have enough RAM, you can cache temp files entirely in RAM. This is much faster and avoids disk wear.
    I have an ASRock MoBo so if I had more RAM I could use XFast to create a RAM disk and move the main Temp folder abusers.

    I recommend downgrading your old HD to be a secondary internal drive.
    You can get hybrid drives, but why waste your old drive if it still works?
    Create a TEMP partition on the old drive and tell Windows and all your programs to use folders in that partition.
    If the old drive is a reasonable speed, try moving the Swapfile/Pagefile to this TEMP partition, and test a few high memory programs to see if performance is still good.

    Having a separate partition for TEMP and caching has a big benefit to all OSs, as it avoids fragmentation of your system.
    Your browsing, downloading, un-packing and installation process will not make a mess in the partition where your software lives.
    Any stubborn malware that has been cached by your browser or java etc. can easily be got rid of by quick-formatting the TEMP partition.

    While I am on the topic of fragmentation, I have to differ with the wide-held belief that fragmentation is not an issue with SSDs.
    Correct, it does not affect the speed, but it does waste huge amounts of space, with part-used blocks and bloated MFT.
    Defragging and reclaiming well over 2 GBs of unusable blocks on my 120GB SSD after the first 6 months was a blessing. It is almost full now so I should probably defrag again.
    Depending on your budget and the size you get, space may become a problem before you expect it.

    The recent AusLogics defrag tool has specific options for use with SSD, including TRIM and optimisation of Windows settings for better SSD life.
    This is a good idea to enable, as Windows does not care that you now have an SSD and so continues to thrash and abuse the drive.




  • That sounds to me like SSD controller failure, which is VERY common with SSDs after a few years - especially Sandforce-based ones, but others also. I've had two SSDs fail (an OCZ Agility 1 and a Corsair Force F120). Disconnect your SSD and try it in another computer to make sure it is not something else (e.g. your motherboard), but my guess is no computer will detect or be able to use your SSD, since it has simply died.

  • https://en.wikipedia.org/wiki/Write_amplification
    It's something not mentioned in reliability tests: in real usage, writing less than 20 GB of data can result in more than 100 GB of actual writes. And many manufacturers void their warranty after a certain number of data writes - the promised terabytes don't look so great when you count in write amplification.
    I've got an Intel SSD with a SandForce controller and my write amplification is over 4.5 (Vivaldi isn't helping…).
    It's a paradox - SSDs beat HDDs precisely on small random writes, yet it's exactly those writes that kill them faster.
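    The write amplification factor (WAF) mentioned here is simply flash writes divided by host writes. The figures below echo the 20 GB / 100 GB example above and are illustrative, not measurements.

```python
# WAF = data written to the NAND / data written by the host
host_writes_gb = 20   # what the OS asked to write
nand_writes_gb = 100  # hypothetical controller-reported flash writes
waf = nand_writes_gb / host_writes_gb
print(f"{waf:.1f}x")  # 5.0x
```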

    Currently manufacturers are trying to maximize profit. Prices are stable while manufacturers move to triple-level cells, which are cheaper but slower and die much faster (and can't keep data for too long). The best time for buying SSDs is over. Almost everyone produces SSDs now - but they care only about the results in one benchmark, not about quality (after all, it's the data that matters most on a drive, not how it fares in benchmarks...).

  • Still theory, not real world facts.

    We will see some stats about newer SSDs in the next years.

    But even if it were true, newer SSDs are so much more affordable than older ones that I can't see the problem.

    If an SSD today costs 1/4 of what it did two years ago, a user can afford 4x redundancy for the same money - the same good old idea behind RAID, where the "I" originally stood for Inexpensive.

    The endurance tests posted earlier were theory - not real usage. Write amplification is a fact.
    And Vivaldi abusing disks with TopSites is also a fact (I have yet to experience it in the latest snapshot, but there are many topics about it already).

    In technology you usually get more over time for less money. But now we're getting less for the same price.
    I wrote that prices are stable - but quality decreases. It's really hard to find any cheap MLC drive now. Most successors to MLC drives are TLC: worse performance and durability for the same price. Better to buy the older drives while you still can.
    Remember what happened with the Samsung 840 EVO (a TLC drive)? The solution was a firmware that rewrites data to different cells after some time (limiting lifespan even more).

    I'm not saying SSDs are bad - I've got my own system on one - but consumers should be wary of what they are buying. Newer models are not always better, and sometimes manufacturers even quietly replace fast components with slower ones, like Kingston did with the V300.

    We'll see what 3D evolution brings.

  • Could you create a subforum "Hardware" and move this topic there?
    Place it near the already existing subforum "Software".

  • @hondac:

    I wrote that prices are stable


    Ehm… no

    I just got an M.2 1 TB drive for little more than 200 euro

