Wednesday, January 6

Measuring SSD performance

First off, I'd like to express my undying love for newegg.com.  It's a paradise of geekiness, and I'm not totally sure how the world got along without it prior to 2001.

Ok, let me compose myself here and get down to business.  I just upgraded my main desktop, the one I do all my work from, to a new Phenom II X4 965 BE, added an 850W power supply, and finally made the jump to a solid state disk, opting for a 30GB OCZ Vertex Turbo.

Of all the improvements, the most noticeable by far was the SSD.  Sure, my machine now compiles code while its four cores, running at the stock 3.4GHz, barely acknowledge my existence. Sure, I've been working for the last 10 minutes and completely forgot that the machine was transcoding my entire library of videos and music in the background.

The thing I notice is the nearly instantaneous response I get when I ask for something.  Applications open instantly. Boot time is ridiculously fast.  My computer can now beat my TV from cold boot to working environment.  It takes me 36 seconds to reboot this thing from the time I say "restart" to the time I'm back at a working screen.  POST to working desktop is about 16 seconds.

You may say, "But you're running Linux, why would you ever have to reboot?"  The fact is, booting is a very disk-intensive process, and it's something almost everyone is familiar with.  So even though I reboot very rarely, maybe once a month, it's still a reminder that this disk is smokin' fast.

There are some theoretical drawbacks.  SSDs have a limited number of write cycles, and once those cycles are used up, you end up with unusable space on your disk.  It's sort of like the argument that a processor die can crack from thermal expansion if you turn your machine off and on.  When was the last time you actually had a die crack? My father, the miser that he is, shuts off his machine every single time he gets up, to save on power.  He has a 6-year-old Dell Dimension Craptaculous with dust coming out of every air hole, and his processor is still working.  If you know enough to be worried about thermal damage, you probably aren't the type of person who goes more than 6 years between processor upgrades.

The same is true of the SSD.  By the time this thing succumbs to its write cycle limit, we're going to be storing data in crystals and you won't be able to buy a hard drive smaller than a terabyte.

You may have also noticed that the disk is fairly small.  I have always been a fan of small boot drives.  As recently as 2 years ago, I was still running a 10GB boot drive at an awful 5400 RPM.  Even so, seek time was greatly reduced simply because there was less space to seek across.

As with anything, there are advertised numbers and actual numbers.  My drive is advertised at 100 to 145 MB/s sequential write and up to 240 MB/s sequential read.  Below are the actual numbers I get using hdparm.  Can you find the SSD?

$ sudo hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  460 MB in  3.01 seconds = 152.81 MB/sec 

$ sudo hdparm -t --direct /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads:  640 MB in  3.00 seconds = 213.32 MB/sec

$ sudo hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads:  216 MB in  3.01 seconds =  71.78 MB/sec

$ sudo hdparm -t --direct /dev/sdb
/dev/sdb:
 Timing O_DIRECT disk reads:  250 MB in  3.00 seconds =  83.32 MB/sec

$ sudo hdparm -t /dev/sdc
/dev/sdc:
 Timing buffered disk reads:  228 MB in  3.01 seconds =  75.69 MB/sec

$ sudo hdparm -t --direct /dev/sdc
/dev/sdc:
 Timing O_DIRECT disk reads:  198 MB in  3.03 seconds =  65.41 MB/sec

$ sudo hdparm -t /dev/sdd
/dev/sdd:
 Timing buffered disk reads:  172 MB in  3.02 seconds =  57.02 MB/sec

$ sudo hdparm -t --direct /dev/sdd
/dev/sdd:
 Timing O_DIRECT disk reads:  176 MB in  3.02 seconds =  58.33 MB/sec

Here's more using dd:
$ sudo dd if=/dev/sda1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 1.18009 s, 178 MB/s

$ sudo dd if=/dev/sdb1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 2.3639 s, 88.7 MB/s

$ sudo dd if=/dev/sdc1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 3.04812 s, 68.8 MB/s

$ sudo dd if=/dev/sdd2 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 3.4337 s, 61.1 MB/s
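
The hdparm and dd numbers above are all reads.  For a rough sequential write number, a scratch-file test like this is one option (the mount point is just an example, and conv=fdatasync forces the data to actually hit the disk before dd reports a speed; don't point of= at the raw device or you'll wipe the filesystem):

$ sudo dd if=/dev/zero of=/mnt/ssd/ddtest bs=1M count=512 conv=fdatasync
$ sudo rm /mnt/ssd/ddtest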

While I don't appear to be getting the promised 240 MB/s, the SSD still laid waste to all the other disks in the box, which are all 7200 RPM WD drives, new as of last April.  In all honesty, the test would be a lot fairer if I had a VelociRaptor to compare against, but from what I've read, it's still no contest.

So if you are looking into things and thinking now may be the time to try out an SSD with your next upgrade, I say, don't think about it again. Just do it.  The numbers speak for themselves.  That said, there are some things you can do to get the most out of your drive. Obviously, this applies to Linux users... if you're using Windows I'm sure there's some kind of freeware optimizer out there.

1. Most filesystems are, by default, set up for traditional hard drives.  In /etc/fstab, set the noatime and nodiratime options on your SSD, as in the example below.
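
A minimal example entry might look like this (the device name and ext4 are just placeholders for whatever your SSD root actually is):

/dev/sda1    /    ext4    defaults,noatime,nodiratime    0  1
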
2. While I did a lot to discourage you from using the write cycle argument as a reason not to get one, there are still some things you can do to cut down on the number of writes you actually make.  If you're still on the fence and thought to yourself, "What about log and tmp files?", worry no more.  Just create ramdisks for them in your fstab.  My first reaction to this idea was, "You aren't consuming memory with log files on my machine."  While I might not suggest this for a server, I did some checking on sizes: my log files were never over 6MB and the others were 5-10K.  Nothing to worry about on a modern system.  This is probably not a bad idea on any system with more than a couple gigs of memory.

tmpfs    /var/log     tmpfs    defaults    0  0
tmpfs    /tmp         tmpfs    defaults    0  0
tmpfs    /var/tmp     tmpfs    defaults    0  0
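
If the memory use still bothers you, tmpfs also accepts a size cap so a runaway log can't eat your RAM (the 64m figure here is just an example):

tmpfs    /var/log     tmpfs    defaults,size=64m    0  0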

3. Manage your swappiness.  Add the following to your /etc/rc.local file.  This will discourage a lot of swapping.  It makes your machine more memory-dependent, but, again, if you're going to drop $100+ on a solid state drive, you're more than likely running with more than 2GB of memory.

sysctl -w vm.swappiness=1
sysctl -w vm.vfs_cache_pressure=50
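
If you'd rather not touch rc.local, the same settings can also go in /etc/sysctl.conf so they get applied at boot:

vm.swappiness=1
vm.vfs_cache_pressure=50

You can check the live values at any time with "sysctl vm.swappiness" and "sysctl vm.vfs_cache_pressure".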

Source:  http://ubuntuforums.org/showthread.php?t=1183113

You'll notice I didn't take all of their suggestions; I considered these the most obviously relevant to the SSD itself.

