Programming and SSDs

Programming, for all ages and all languages.
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Programming and SSDs

Post by OSwhatever »

We are now in an era where spinning hard disks are a thing of the past and most computers ship with SSDs instead. This raises the question of how suitable these drives are for programming, since their erase cycles are limited. In operating system development, and in other kinds of programming such as building Android, each iteration produces megabytes of data, in some cases even gigabytes.

I tend to recompile a lot; it's easy to make mistakes, right? Each compile cycle produces a lot of files and also creates a new image file, so every recompilation writes megabytes of new data to the SSD.

The question is: are SSDs suitable for what I'm doing, or will I wear out the SSD before its normal life cycle ends? You can do the math and realize it's probably not that bad, but still. Does anyone have experience with this? Do you think my concern is unjustified? My computer has an extra SD-card slot; would buying an SD card and using that instead be a better idea?
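For what it's worth, the "do the math" part can be sketched in a few lines. Every number below is an assumption (build size, build frequency, drive endurance rating); plug in your own:

```shell
# Back-of-envelope SSD wear estimate. All inputs are made-up examples;
# substitute your own build size, build rate, and your drive's rated TBW.
BUILD_MB=500        # data written per rebuild (object files + image)
BUILDS_PER_DAY=40   # heavy recompile habit
TBW=150             # advertised endurance in TB, typical for a 250 GB drive

DAILY_GB=$(( BUILD_MB * BUILDS_PER_DAY / 1024 ))
DAYS=$(( TBW * 1024 / DAILY_GB ))
echo "~${DAILY_GB} GB/day -> rated endurance lasts ~$(( DAYS / 365 )) years"
# prints: ~19 GB/day -> rated endurance lasts ~22 years
```

Even with a fairly pessimistic build habit, the rated endurance outlasts the warranty by a wide margin under this simplified model.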
simeonz
Member
Posts: 360
Joined: Fri Aug 19, 2016 10:28 pm

Re: Programming and SSDs

Post by simeonz »

My personal experience with building on SSDs is limited to a short period (1-2 years) at an office where I worked. I don't believe any SSDs failed in that time, even though the use was intense, but I cannot speak for longer spans.

However, you can look up the manufacturer's advertised TBW (TeraBytes Written) to estimate how much use you will get. It is usually in the tens to hundreds of TB, which translates to thousands of builds and years of service. Unfortunately, some manufacturers cheat by reporting much higher write output in the drive's SMART stats (probably post-write-amplification figures). You can check real tests on this website to see if your model lives up to its advertised TBW.

Also note that some commodity SSDs may have annoying and severe Drive Life Protection, like my boot SSD for example.

As a general rule of thumb, MLC drives (if you can find that info) hold up better than TLC drives, and 3D-NAND better than planar NAND. Also, sequential writes are easier on the drive (given a reasonable FTL), whereas random writes cause write amplification and thus increased flash wear. It also depends on your free space. Basically, if you keep enough of your drive free (several dozen GB), your file system will be able to keep file allocations unfragmented, and since build output should be sequential I/O, no write amplification will occur. Even random I/O (such as small object files) is mitigated by enough free space, assuming TRIM commands work with your OS/storage driver. Lastly, make sure you have enough RAM, because paging out memory is random I/O, and some builds (especially parallel builds) can eat your memory fast.
Last edited by simeonz on Tue May 30, 2017 11:47 pm, edited 1 time in total.
bluemoon
Member
Posts: 1761
Joined: Wed Dec 01, 2010 3:41 am
Location: Hong Kong

Re: Programming and SSDs

Post by bluemoon »

Hard disks are not obsolete; they have just shifted from primary storage to, for example, the role of a tape drive, where their greater capacity and GB per dollar shine.

As for compiling code on an SSD: yes, it shortens the drive's life due to wear. I take these steps to counter that:
1. Reduce the generation of intermediate objects (incremental builds, proper dependency setup).
2. Use tmpfs / a RAM disk. On a 32GB machine I slice off 512MB-1GB for that purpose, and so far it's been enough for any of my projects. It can't hold huge things like building GCC, though.
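For example, the RAM-disk setup might look like this (the mount point, size, and build invocation are illustrative, not a recommendation):

```shell
# Illustrative only: mount a 1 GB tmpfs as a scratch directory for build output.
sudo mkdir -p /mnt/rambuild
sudo mount -t tmpfs -o size=1G,mode=1777 tmpfs /mnt/rambuild

# To make it survive reboots, add a line like this to /etc/fstab:
#   tmpfs  /mnt/rambuild  tmpfs  size=1G,mode=1777  0  0

# Then point the build's output there, e.g. for an out-of-tree build:
#   make O=/mnt/rambuild
```

Intermediate objects then never touch the SSD; only the final artifacts you copy out of the tmpfs get written to flash.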
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Programming and SSDs

Post by Rusky »

SSD wear hasn't been an issue for years. Drive controllers all have flash translation layers that do block remapping, operating systems all know how to issue the TRIM command, and the drives themselves have improved. They actually last longer than most HDDs at this point.
eryjus
Member
Posts: 286
Joined: Fri Oct 21, 2011 9:47 pm
Libera.chat IRC: eryjus
Location: Tustin, CA USA

Re: Programming and SSDs

Post by eryjus »

Rusky wrote:SSD wear hasn't been an issue for years. Drive controllers all have flash translation layers that do block remapping, operating systems all know how to issue the TRIM command, and the drives themselves have improved. They actually last longer than most HDDs at this point.
That's what I understood as well. I believe the actual bit-failure rate is quite high (as with USB thumb drives), but the manufacturers plan for this with tons of extra room.
Adam

The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal

"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
Boris
Member
Posts: 145
Joined: Sat Nov 07, 2015 3:12 pm

Re: Programming and SSDs

Post by Boris »

If you are worried,
Compile in a ram disk.
Or tell your OS to do aggressive disk caching.
onlyonemac
Member
Posts: 1146
Joined: Sat Mar 01, 2014 2:59 pm

Re: Programming and SSDs

Post by onlyonemac »

I still use a mechanical disk for my main data storage (which includes code compilation). With all the system files on the SSD I get most of the performance benefits of an SSD, but I don't trust the technology to reliably hold my personal, irreplaceable data (yes, I have backups, but that's not the point). I don't doubt that code compilation, especially disk-heavy compilation that makes my hard disk thrash like crazy, would be faster on an SSD, though at the expense of disk lifetime.
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
iansjack
Member
Posts: 4683
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: Programming and SSDs

Post by iansjack »

onlyonemac wrote:I don't doubt that code compilation, especially disk-heavy compilation that makes my hard disk thrash like crazy, would be faster on an SSD though, at the expense of disk lifetime.
And this thrashing is reducing the lifetime of your mechanical disk. SSDs nowadays are more reliable, more efficient, and should last longer than mechanical disks, whatever the usage.

The only reason not to use one is cost.
Sik
Member
Posts: 251
Joined: Wed Aug 17, 2016 4:55 am

Re: Programming and SSDs

Post by Sik »

Also, a lot of programs love to write data to cache files, and I somehow doubt that heavy compilation is going to be any worse than what those programs are doing.
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: Programming and SSDs

Post by SpyderTL »

I just watched a video where they discussed this topic at length. Essentially, most manufacturers set their warranty period so that the hardware can handle 10 to 20 GB of read/write activity every day for the length of the warranty.

Also, since the hardware tries to remap data to spread wear across cells, it matters how full the drive is. If you only have 100 MB free, that area will be used far more than the rest of the drive, shortening its life. So keeping more of your drive free should increase its overall life.

I also found tests where they got anywhere from 200-500 GB of read/write activity a day for (the equivalent of) 10 years before the SSD started complaining. I doubt that I've ever owned a HDD that would perform that well...
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
simeonz
Member
Posts: 360
Joined: Fri Aug 19, 2016 10:28 pm

Re: Programming and SSDs

Post by simeonz »

SpyderTL wrote:I also found tests where they got anywhere from 200-500 GB of read/write activity a day for (the equivalent of) 10 years before the SSD started complaining. I doubt that I've ever owned a HDD that would perform that well...
The tests probably used sequential I/O, unless they were testing enterprise SSDs: for example, performing appending writes and then removing the written files in whole before repeating the process. This reduces the wear significantly. (It is also the wrong way to test worst-case sustained write performance.)

Whether or not random writes are a common workload is another story. But the reason this matters is that when you perform random writes, the effective volume of data written to flash is multiplied by (total space) / (TRIM-reclaimed space + manufacturer-reserved space). This amplification cannot be avoided by the FTL. Sequential I/O is garbage-collected without write amplification, assuming a moderately intelligent FTL. That is why TBW should be rated at maximum allocation, based on the original write volume, not the post-amplification volume. Indeed, the low-end TBW at the moment is equivalent to 20 GB per day for commodity models.
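To make that ratio concrete, here is a toy calculation; the drive capacity, reclaimed free space, and reserve are all made-up numbers:

```shell
# Toy write-amplification estimate for random writes, using the ratio above:
#   WAF = total_space / (TRIM-reclaimed space + manufacturer reserve)
# All capacities below are invented examples.
TOTAL_GB=256        # drive capacity
TRIMMED_GB=48       # free space the filesystem has TRIMmed back to the drive
RESERVE_GB=16       # manufacturer over-provisioning (~7%)

WAF_X10=$(( TOTAL_GB * 10 / (TRIMMED_GB + RESERVE_GB) ))  # scaled by 10 for one decimal
echo "approx write amplification: $(( WAF_X10 / 10 )).$(( WAF_X10 % 10 ))x"
# prints: approx write amplification: 4.0x
```

So in this simplified model, a 256 GB drive with only 64 GB of reclaimable space would wear roughly 4x faster under purely random writes than the raw write volume suggests.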

@OP
So, basically, looking at the numbers, I have to agree that HDDs have lower durability. The specs show they have DWPD (disk-full writes per day) or TB/year ratings at or below those of SSDs; commodity HDDs occasionally have a slight lead. Going by the specs so far, SSDs are the top tier of the endurance market. Also, HDDs suffer from modes of failure not present in SSDs. With technologies like 3D-NAND or non-NAND memory, the SSD lead will only widen.

Still, for life span, I would make sure to have enough RAM, which is a good idea anyway, and optionally keep some free space.

Here is an article comparing the endurance of several enterprise HDDs and SSDs. Note that these SSD tests are performed with sequential I/O, which exceeds the rated TBW significantly.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: Programming and SSDs

Post by Solar »

OSwhatever wrote:The question is: are SSDs suitable for what I'm doing, or will I wear out the SSD before its normal life cycle ends? You can do the math and realize it's probably not that bad, but still. Does anyone have experience with this?
Indeed I have. Everyone in our department of software engineers has a workstation with an SSD. Several years of nine-to-five use, five days a week, and I haven't heard of a single SSD-related failure so far.

And over the years, I have taken more than one system back home (the office workstations get replaced with newer models after a couple of years). My kids are minecrafting the crap out of the old workstations, and the SSDs are still just fine.

(That being in addition to the numbers provided by others. Perhaps it makes you rest easy to have some anecdotal evidence added.)
Every good solution is obvious once you've found it.
onlyonemac
Member
Posts: 1146
Joined: Sat Mar 01, 2014 2:59 pm

Re: Programming and SSDs

Post by onlyonemac »

Sik wrote:Also a lot of programs love to write data in cache files and I somehow doubt that heavy compilation is going to be any worse than what those programs are doing.
That's why /home, /var, /run, and /tmp are all on the mechanical disk.
Nable
Member
Posts: 453
Joined: Tue Nov 08, 2011 11:35 am

Re: Programming and SSDs

Post by Nable »

onlyonemac wrote:That's why /home, /var, /run, and /tmp are all on the mechanical disk.
Modern systems have /run and /tmp in RAM+swap via tmpfs.
onlyonemac
Member
Posts: 1146
Joined: Sat Mar 01, 2014 2:59 pm

Re: Programming and SSDs

Post by onlyonemac »

Nable wrote:
onlyonemac wrote:That's why /home, /var, /run, and /tmp are all on the mechanical disk.
Modern systems have /run and /tmp in RAM+swap via tmpfs.
Swap is also on the mechanical disk. (Also, I've found that they're in RAM *and* they get written to disk to save RAM, so mounting tmpfs over another mounted partition ensures that they are written to the underlying mechanical disk partition, not the SSD.)