Yeah, I'm trying it. The duties of husband, father, son to aging parents, and geek simply demand more than 16 waking hours. No, this blog is not about to become an Uberman blog... although I'd probably get more hits that way.
As I sit here at 5 am EST, I am about 5 days into the adjustment period. It is difficult... not the most difficult thing I have ever done, but it is definitely not easy. In doing this I have discovered some things about myself, which are always fun.
1. I am not as undisciplined as I thought. I just really have to want something. The idea of only having to sleep 2 hours a day and being able to maintain that over long periods of time is extremely enticing.
2. It is friggin cold in my office at 5 am.
3. I like sleeping in. I like the feeling of knowing I'm supposed to get out of bed and rolling over instead.
4. My hardest nap to get up from is the one from 4am to 4:20am. I usually need every alarm at my disposal to do it.
5. In the 5 days I have done this, my time required to fall asleep has greatly decreased. I am now falling asleep in 1-4 minutes instead of 20 minutes to an hour.
What I can't find are references to ill effects on health. I have seen a few testimonials, but no hard evidence. I think it is worth studying.
Monday, January 25
Friday, January 22
The Door to Nowhere
Ever seen something like this:
[Photo: a door opening onto nothing, via Epic Fails]
It's one of those things you wonder... what were they thinking when they designed something silly like that?
You see examples of this all the time in code... hey, I'll admit it, I've *done* stuff like this before. You build an opening for extension to something that can't possibly exist in your application.
This is one of the reasons I believe lean software development is so difficult. Anyone can walk by this door and notice that it has absolutely no conceivable use besides making it easier for someone to fail at suicide. However, layer upon layer of abstraction sits in our code today with absolutely no conceivable use except that it's "abstract" and "extensible". Architects draw pictures of nicely abstracted applications, neatly separated concerns, and built-in abstraction for extensibility. They look at the picture, pat themselves on the back, and hand it off to a peon developer.
Lord Architect: "Here developer, all you need is right here, just code it up. It shouldn't take you longer than an hour."
Programmer: "Why do I have to build three classes just to build a calculator that can add and subtract?"
Lord Architect: "Because around here we use interfaces for everything so it is extensible."
Programmer: "But you have a method in the Calculator interface called, checkSpelling... isn't it a calculator?"
Lord Architect:"We design code for re-usability and extensiblity. This fits with the long term design concept and it's already been approved. We can remove it later, but write it like that for now."
Bottom line: building software is not like building a bridge or, as in the example, a mall with doors that make sense. However, there are striking similarities in the finished product; the difference is that, in software, the doors to nowhere are a lot harder to recognize until you get your hands deep into the code.
While not a silver bullet, there is a way to avoid a pitfall like this, and it comes from the book Practices of an Agile Developer: architects must write code. It is a lot easier to see the stupidity of an idea as it is being built than it is to work with it after the fact.
An addition of my own: developers must challenge the architect when something doesn't make sense. Most developers are just as qualified to be architects as the architects themselves, and they can often see waste as they are building it. I guarantee the construction workers who installed this door said, "This is pretty stupid." If you are in an environment where architects design applications but don't code, the onus is even greater upon the developer to raise a concern when something doesn't make sense. Architects, no matter what they think they are, are not gurus or gods of programming. If the design doesn't make sense, they should be open to the idea that what they designed won't work in practice. A mature, professional architect should not even want a programmer who is just a robotic extension of themselves.
In addition, if you are a programmer, you are expected to think, not just type as fast as you can to get the architect's design down on paper. It is unacceptable for a developer to say, "This is stupid," but then build it anyway. If they ask a question, maybe there actually is a reason, and they can be much more satisfied with the solution. Working together and fostering an environment of open communication, where people give and take constructive criticism, is essential to producing applications that do not succumb to problems like the Door to Nowhere.
Monday, January 18
Measuring SSD performance: Follow up
After writing up this post I decided to go out and see what else I could do to speed things up. I saw a suggestion for tweaking sreadahead, but ureadahead is what we use in Karmic and it already detects and optimizes for SSDs.
I did, however, see numerous suggestions that adding elevator=0 to the grub boot parameters will increase performance even more.
Here are my results:
$ sudo hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 462 MB in 3.00 seconds = 153.85 MB/sec
$ sudo hdparm -t --direct /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads: 642 MB in 3.00 seconds = 213.91 MB/sec
According to that, the improvement is minimal, but the dd test gives slightly different results:
$ sudo dd if=/dev/sda1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 1.04065 s, 202 MB/s
Wow, 178 to 202 MB/s. And it does feel even snappier in my environment. To implement this tweak, open /etc/default/grub and make the following change:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=0"
Then update the grub configuration:
sudo update-grub
Note: DO NOT make changes to /boot/grub/grub.cfg. That file is overwritten every time update-grub is run, which happens whenever an update is made that the bootloader needs to know about (new kernel, etc.).
Wednesday, January 6
Measuring SSD performance
First off, I'd like to express my undying love for newegg.com. It's a paradise of geekiness; I'm not totally sure how the world did without it prior to 2001.
OK, let me compose myself here and get down to business. I just did an upgrade of my main desktop, the one I do all my work from: a new Phenom II X4 965 BE, an 850W power supply, and finally a Solid State Disk, a 30GB OCZ Vertex Turbo.
Of all the improvements, the most noticeable was the SSD by far. Sure, my machine now compiles code while its 4 cores, running at the stock 3.4GHz, barely acknowledge my existence. Sure, I have been working for the last 10 minutes and completely forgot that the machine was transcoding my entire library of videos and music in the background.
The thing I notice is the nearly instantaneous response I get when I ask for something. Applications open instantly. Boot time is ridiculously fast. My computer can now beat my TV from cold boot to working environment. It takes me 36 seconds to reboot this thing from the time I say "restart" to the time I'm back at a working screen. POST to working desktop is about 16 seconds.
You may say, "But you're running Linux, why would you have to reboot?" The fact is, booting is a very disk-intensive process and it's something almost everyone is familiar with. So even though I reboot very little, maybe once a month, it's still a reminder that this disk is smokin' fast.
There are some theoretical drawbacks. SSDs have a limited number of write cycles; once those cycles are used up, you get unusable space on your disk. It's sort of like the argument that processor dies can crack from thermal expansion if you turn your machine off and on. When was the last time you actually had a die crack? My father, the miser that he is, shuts off his machine every single time he gets up, to save on power. He has a 6-year-old Dell Dimension Craptaculous with dust coming out of every air hole, and his processor is still working. If you know enough to be worried about thermal damage, you probably aren't the type of person who goes more than 6 years between processor upgrades.
The same is true of the SSD. By the time this thing succumbs to its write-cycle limit, we're going to be storing data in crystals and you won't be able to buy a hard drive smaller than a terabyte.
You may have also noticed that the disk is fairly small. I have always been a fan of small boot drives. As recently as 2 years ago, I was still running a 10GB boot drive at an awful 5400 RPM. Still, seek time was greatly reduced just by the fact that there was less space to seek.
As with anything, there are advertised and actual performance numbers. My drive is advertised at 100 to 145 MB/s sequential write and up to 240 MB/s sequential read. Below are the actual numbers I get using hdparm. Can you find the SSD?
$ sudo hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 460 MB in 3.01 seconds = 152.81 MB/sec
$ sudo hdparm -t --direct /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads: 640 MB in 3.00 seconds = 213.32 MB/sec
$ sudo hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads: 216 MB in 3.01 seconds = 71.78 MB/sec
$ sudo hdparm -t --direct /dev/sdb
/dev/sdb:
 Timing O_DIRECT disk reads: 250 MB in 3.00 seconds = 83.32 MB/sec
$ sudo hdparm -t /dev/sdc
/dev/sdc:
 Timing buffered disk reads: 228 MB in 3.01 seconds = 75.69 MB/sec
$ sudo hdparm -t --direct /dev/sdc
/dev/sdc:
 Timing O_DIRECT disk reads: 198 MB in 3.03 seconds = 65.41 MB/sec
$ sudo hdparm -t /dev/sdd
/dev/sdd:
 Timing buffered disk reads: 172 MB in 3.02 seconds = 57.02 MB/sec
$ sudo hdparm -t --direct /dev/sdd
/dev/sdd:
 Timing O_DIRECT disk reads: 176 MB in 3.02 seconds = 58.33 MB/sec
Here's more using dd:
$ sudo dd if=/dev/sda1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 1.18009 s, 178 MB/s
$ sudo dd if=/dev/sdb1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 2.3639 s, 88.7 MB/s
$ sudo dd if=/dev/sdc1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 3.04812 s, 68.8 MB/s
$ sudo dd if=/dev/sdd2 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 3.4337 s, 61.1 MB/s
While I don't appear to be getting the promised 240MB/s, it still laid waste to all the other disks in there, which are all 7200RPM WD drives, new as of last April. In all honesty, the test would be a lot more fair if I had a Velociraptor to compare to, but from what I've read, it's still no contest.
So if you are looking into things and thinking now may be the time to try out an SSD with your next upgrade, I say, don't think about it again. Just do it. The numbers speak for themselves. That said, there are some things you can do to get the most out of your drive. Obviously, this applies to Linux users... if you're using Windows I'm sure there's some kind of freeware optimizer out there.
1. Most filesystems are, by default, set up for traditional hard drives. In /etc/fstab, set the noatime and nodiratime options on your SSD.
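As a sketch, an SSD root entry might end up looking like this (the UUID and filesystem type below are placeholders, use your own):

# /etc/fstab (UUID and filesystem type are illustrative)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 noatime,nodiratime,errors=remount-ro 0 1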
2. While I did a lot to discourage you from using the write-cycle argument as a reason not to get one, there are some things you can do to cut down the number of writes you actually make. If you're still on the fence and thought to yourself, "What about log and tmp files?", worry no more. Just create ramdisks in your fstab setup. My first reaction to this idea was, "You aren't consuming memory with log files on my machine." But I did some checking on sizes: the main log file was never over 6MB and the others were 5-10K. Nothing to worry about on a modern system. While I might not suggest this for a server, it is probably not a bad idea on any desktop with more than a couple gigs of memory.
tmpfs /var/log tmpfs defaults 0 0
tmpfs /tmp tmpfs defaults 0 0
tmpfs /var/tmp tmpfs defaults 0 0
3. Manage your swappiness. Add the following to your /etc/rc.local file. This will discourage a lot of swapping. It makes your machine more memory-dependent, but, again, if you're going to drop $100+ on a solid state drive, you are more than likely operating with more than 2GB of memory.
sysctl -w vm.swappiness=1
sysctl -w vm.vfs_cache_pressure=50
Source: http://ubuntuforums.org/showthread.php?t=1183113
You'll notice I didn't take all their suggestions. I considered these to be the most obvious in relation to the SSD itself.
Monday, January 4
Update on TypeMatrix progress
This is pretty much the last update on this. It has now been a solid two months since I received my TypeMatrix keyboard, and I decided to post an update based upon something I noticed the other day. Very simply, I noticed that I was consistently and accurately finding all the keys, including the symbol keys, and that my typing speed has nearly returned to normal. Just to verify, I went through the typing tests in gtypist and found that my speed was consistently hovering around 95 WPM. My previous speed was 102, so I am at 93% of my previous typing speed... and those speeds include symbols. Considering that I have started going back and forth between traditional keyboards and the TypeMatrix again, that is not bad. I am continually progressing in speed, so I believe I will very quickly overtake my old speeds.
In my opinion, this makes the move to TypeMatrix complete. It's been two months and several thousand lines of code, emails, Facebook updates, and whatnot... this does not seem promising for my ability to learn a completely new layout (moving to Dvorak), which is a bit disappointing. Given that the TypeMatrix only moves key locations around, and it still took this long, Dvorak may just be a project for my kids rather than myself.
PGP Encryption for Java
Setting up Java to work with an encryption provider can be difficult. From the day that PGP was declared a weapon, setting up encryption became a lot harder. The saga of PGP is a fairly entertaining story and worth a read.
These steps should bring you from nothing to a working cryptographic development environment on a Linux machine using Eclipse and Maven.
1. Create a new maven project from the quickstart archetype
2. Make sure the project is set up to build to 1.6 both in the POM and the Eclipse builder
3. If you do not already have the gpg package installed on your machine: sudo apt-get install gnupg
4. Execute gpg --gen-key (this will walk you through the process of generating a new GPG key).
5. Create a keys directory in your project and move the files generated during key generation there.
6. On an Ubuntu machine, the basic Java policy files are stored in /etc/java-{major_version_number}-sun/security. Go to the downloads page for the correct version of Java, find the JCE Unlimited Strength Jurisdiction Policy Files (https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jce_policy-6-oth-JPR@CDS-CDS_Developer), unzip them, and move the jar files into that security directory with some type of tag in the name to mark them as the unrestricted files. Then move the current files from JAVA_HOME/jre/lib/security to the same directory and tag them as restricted.
7. Create a symbolic link to the unrestricted files in the JAVA_HOME/jre/lib/security directory. A quick way to verify the swap took effect is shown below.
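To confirm the unrestricted policy files are actually being picked up, here is a minimal sanity check of my own (PolicyCheck is a hypothetical class, not part of any of the above); with the restricted files still in place, AES reports a cap of 128 bits:

import javax.crypto.Cipher;

public class PolicyCheck {
    public static void main(String[] args) throws Exception {
        // With the unlimited policy installed this prints Integer.MAX_VALUE;
        // with the restricted policy it prints 128.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max AES key length: " + max);
        System.out.println(max > 128 ? "Unlimited policy is active"
                                     : "Still running the restricted policy");
    }
}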
Sunday, January 3
If you do this... well, there's no fixing you.
I can't believe it took me over a year to write about exceptions given the name of this blog, but here we go. From the "really? *facepalm*" department.
Ron White has a saying that I mostly agree with: "You can't fix stupid." To make this technically correct, what I think he meant to say is, "You can't fix stupid people." Stupid things you can certainly fix. Developers make stupid mistakes all the time... that's one reason we have static typing, compile-time checking, and static code analysis. Then there's stuff like this where you just have to shake your head. (This is actual code, not a contrived example.)
try {
//contents that throw an exception
} catch (LocalException e) {
e.printStackTrace();
} catch (SystemException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
I'm sorry, but there is no other way to say it: that is stupid, and it should require no explanation why. There's absolutely no reason for that, even in test code or sandbox/mess-around code. Does it have a major impact? Not really. If it's not high volume and it's unlikely that an exception will be thrown inside the try, then probably no one will notice. Most importantly, it doesn't fail... but to say it works is like saying a flat tire "works". Make no mistake, this is broken.
Besides the fact that you should NEVER print a stack trace into nothing, in the event an exception is actually thrown, you force the program through three separate catch blocks whose output is exactly the same [wrong] thing.
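For the record, here's a minimal sketch of a version that doesn't throw the stack trace into the void; it uses java.util.logging, though any logging framework works the same way (Example and doWork are just illustrative names):

import java.util.logging.Level;
import java.util.logging.Logger;

class Example {
    private static final Logger LOG = Logger.getLogger(Example.class.getName());

    void doWork() {
        try {
            // contents that throw an exception
        } catch (Exception e) {
            // one catch, and the stack trace goes somewhere useful
            LOG.log(Level.SEVERE, "operation failed", e);
        }
    }
}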
I'm a big fan of writing code as simply as possible. I have been known to over-complicate things before and I've usually paid for it. But this isn't simple, it's either lazy or stupid; it might be both. I'm not sure which is worse... and I don't think there's a fix for either.