Month: October 2009


Posted by – October 28, 2009

Paul De Palma is right about one thing: no man is qualified to say for sure why any woman does anything at all, much less why she avoids Computer Science. I therefore find it surprising that all of the articles I’ve read on this subject were written by men.

Now, I am not a woman. (I hope this revelation does not surprise you.) As a man, I can unequivocally opine that this male-dominated discipline wants more women. We want it enough to hem and haw and write articles about Title IX and nature vs. nurture. Women are doing none of these things. Could it be that they just don’t care?

Twenty Years of FAIL: The Common Password

Posted by – October 20, 2009

What would you expect of a man who infiltrated dozens of US military and government computer systems in the mid-1980s? Would it sharpen your interest to learn that he sold his discoveries to the KGB? Or that, as the first black-hat ever caught and prosecuted, he was in many ways a pioneer in his field?

Would you believe me if I told you that he mostly just guessed passwords?

Oh, sure, he had the GNU Emacs movemail privilege-escalation vulnerability that gives Cliff Stoll’s The Cuckoo’s Egg its distinctive title; props for that one. But to even use it, he first needed to guess a password to gain at least minimal access to a system.

How does one guess a password? The hacker, Markus Hess, mostly tried common choices, guest accounts, and default passwords. In some cases, he used an automated program to guess all of the words in the dictionary. In others, he found the password stored on the system in regular old text, plain for anyone to read.
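The dictionary technique is simple enough to sketch. This hypothetical snippet (the wordlist and the stolen hash are invented, and a modern hash function stands in for whatever the 1980s systems actually used) shows why any password that appears in a dictionary falls instantly:

```python
import hashlib

# A few hypothetical entries from a cracking wordlist.
wordlist = ["letmein", "dragon", "sunshine", "password"]

# Suppose we've obtained a password hash from the target system.
stolen_hash = hashlib.sha256(b"password").hexdigest()

def dictionary_attack(target_hash, words):
    """Hash each candidate word and compare it to the stolen hash."""
    for word in words:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(stolen_hash, wordlist))  # prints "password"
```

Four guesses, no cleverness required. A real cracking run differs only in the size of the wordlist.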

We know better today, of course. We have best practices. Had they been known and followed in the ’80s, Markus Hess would have been far less successful than he was. Passwords can be quite secure.
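Those best practices mostly boil down to length and randomness, which machines are much better at than people. A sketch using Python’s standard secrets module (not something Hess’s victims had available, obviously):

```python
import secrets
import string

def generate_password(length=16):
    """Pick each character uniformly at random from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k#9Qv}x... -- different every run
```

A 16-character password drawn from ~94 symbols has on the order of 10^31 possibilities, far beyond any dictionary. Which leads directly to the problem below: nobody can remember one.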

There you have it, then. Blog over. Use good passwords. Eat your vegetables. The end.


By the way, nobody follows best practices. A recent study found that only about 4% do. Even today, the most common MySpace passwords are "password1", "abc123", "myspace1", and "password". Anecdotally, everyone I’ve ever met, and I mean everyone, has at most four passwords they use for everything.

I’ve become convinced that there’s really nothing to be done. Secure passwords are impossible to remember. People will write them down, use the same ones everywhere, email them to each other, and generally make criminals’ lives easier. It’s not their fault; it’s a failing of the human brain. Twenty years after Markus Hess, passwords still fail, and they will continue to fail until something better replaces them.

Something must be found to replace the password, something secure that humans can actually use. As an optimist, I hope it can be done; I’ll be thinking on it. At least, the part of my brain not devoted to remembering long lists of secure passwords will.

Attack of the scare quotes: Android is “open”

Posted by – October 13, 2009

Google’s Android platform promised to be the most moddable mobile phone system ever created—and, in many ways, it is. With no other phone is it even possible to tweak the source code of the OS and publish a modified distribution, as is possible with Android. However, as modder Steve Kondik recently discovered, the userland apps that ship with Android are not freely licensed, so a customized firmware distro cannot include them. Structuring their licensing in this way provides no advantage to Google that I can see; even if they do not publish the source code to these apps, they could at least allow distribution under a freeware license. That they do not suggests to me that the carriers must want to control which apps come bundled for marketing purposes. I can only hope this is so, and that all carriers will one day become naught but dumb pipes.

An IPv6 Pipe Dream

Posted by – October 6, 2009

Update 24 July 2012

I wrote this post before I knew very much about the guts of IP-based networking. I now know more, enough to know that this pipe dream has a fundamental flaw: it is not technically feasible to assign a static IP address to every device at the time of manufacture, due to a single inconvenient fact: geography.

How does a server in England know how to send a packet to a client in Argentina? All it has to go on is a destination IP address, and it is not desirable (or even very possible) to maintain a global mapping of every device’s IP address to its current location. So, IP addresses are apportioned according to geography, such that, given only an IP address, a router knows where to start the process of delivering a packet to its destination.
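That "start the process" step is longest-prefix matching: a router holds a table of address blocks and forwards toward the most specific block containing the destination. A toy version with Python’s ipaddress module (the prefixes and next-hop names here are invented for illustration):

```python
import ipaddress

# A hypothetical routing table: address block -> where to send the packet next.
routes = {
    "0.0.0.0/0": "default-gateway",           # matches everything
    "190.0.0.0/8": "link-to-south-america",   # a broad regional block
    "190.2.0.0/16": "link-to-argentina-isp",  # a more specific block within it
}

def next_hop(dest):
    """Choose the most specific (longest) prefix that contains dest."""
    addr = ipaddress.ip_address(dest)
    best = max(
        (ipaddress.ip_network(p) for p in routes if addr in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

print(next_hop("190.2.14.7"))  # link-to-argentina-isp
print(next_hop("8.8.8.8"))     # default-gateway
```

Because blocks are handed out geographically, the /16 above can stand in for "Argentina" without any global per-device lookup table.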

(As an aside, there is an inequality inherent in this system: developed nations have more IP addresses available to them than do developing nations. This is one of the problems that IPv6 will solve.)

Eventually, the packet will arrive at a router that knows the MAC address of the machine that is being targeted, and can send the packet to it. This is why the apparently redundant MAC address exists, why it can be assigned at the time of manufacture, and why the IP address cannot.

This is also why a machine that has a static IP address cannot move. If I take my laptop to a different country (or even a different city), its IP address must change. Therefore, in order for every computer to be a server, some service must exist that can point a domain name to an ever-changing IP address. And, in fact, such services do exist.
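Conceptually, such a service is just a mutable name-to-address mapping that the roaming machine rewrites whenever its address changes. A bare-bones in-memory model (hostname and addresses invented):

```python
# A minimal model of a dynamic-DNS-style service.
registry = {}

def update(hostname, new_ip):
    """Called by the roaming machine each time its IP address changes."""
    registry[hostname] = new_ip

def resolve(hostname):
    """Called by anyone who wants to reach the machine by name."""
    return registry.get(hostname)

update("mylaptop.example.net", "203.0.113.7")    # at home
update("mylaptop.example.net", "198.51.100.42")  # after moving cities
print(resolve("mylaptop.example.net"))  # 198.51.100.42
```

The real services add authentication and DNS plumbing, but the core trick is exactly this: one mutable record, updated by the mover, read by everyone else.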

So the point I was trying to make in the post below is pretty moot. I’ll leave it up to remind me not to be too arrogant with what I think I know.

The original post

Sometime in the mid-to-late 1990s, the smart folks at the Internet Engineering Task Force (IETF) realized that the Internet was soon to outgrow its britches. How so? Well, the version of the Internet Protocol in use then (and now), IPv4, only allows for 32-bit addressing. That means that only a little over four billion devices can have a globally recognized IP address.

My hope is that most of you already know all that, but just in case: the Internet is running out of IP addresses. Sometime in 2011, there won’t be any more. Fortunately, the solution, IPv6, has been around since 1998. With its 128-bit addresses, IPv6 will allow 3.4 × 10^38 devices to have an IP address. With that many addresses, we will never run out, for all practical purposes. Unfortunately, the Internet is not yet ready to use it, and nobody can agree on how to start.
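The arithmetic behind those two paragraphs is easy to check:

```python
# IPv4: 32-bit addresses.
ipv4_total = 2 ** 32
print(f"{ipv4_total:,}")    # 4,294,967,296 -- "a little over four billion"

# IPv6: 128-bit addresses.
ipv6_total = 2 ** 128
print(f"{ipv6_total:.2e}")  # 3.40e+38
```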

Again, my hope is that most of you already know that, too. These are things that every user of the Internet should know. But, just in case, I provide you with that two-paragraph summary in order to lead into what I actually want to talk about.

The consequences of address scarcity

The Internet is designed with scarcity in mind. In order to avoid running out of addresses too quickly, most devices are not assigned an address on a permanent basis. You, a lowly Internet peon, can never be sure of what your IP address is going to be tomorrow, because it could very well change.

This is not important if all you want to be is a client. Only servers need an address that never changes. In the early days of the Internet, this was not an issue. However, increasingly, end-user devices want to act as servers, and the lack of a static IP address makes this quite painful to do.

A MAC, you say?

MAC stands for Media Access Control. A Mac is not a MAC. Tell your friends.

Ahem. The MAC protocol does a lot of things, most of which I don’t understand. However, one thing I do know is that every device that is capable of connecting to the Internet has a unique identifier, called a MAC address. This 48-bit number is assigned to the device at the time of manufacture. It’s like your computer’s social security number.

(By the way, 48-bit addressing gives about 281 trillion addresses, since 2^48 ≈ 2.8 × 10^14. We will eventually run out, I suppose, though I’ve not heard any rumblings to that effect.)

With the advent of IPv6, we will have the ability to do exactly the same thing with IP addresses. Every single network device could come from the factory with its own globally unique IP address permanently assigned.

A server for everyone

I’ll let that sink in for a bit. Sink, sink, sink…aaahhh. I ask you, is your mind not blown? It’s not? Well, then.

Let’s say you have a file to send over the Internet that’s too big to email. That shouldn’t be too hard; most people can’t get attachments larger than 20 MB. Well, turn on file sharing and email your IP address to your friend, and you’re set.

Well, actually, you can do that today (provided your IP address doesn’t change in the middle of the transfer, that is). It’s kind of a pain, but it works. Better would be to just tell your friend your domain name, which you could have pointing to your computer’s IP address.

That’s the least of what you could do. Apple’s Back To My Mac service allows you to access your computer from anywhere, almost as though you had a static IP address. To pull it off, they had to have a server (with a static IP address) keep track of what address your computer happens to be using at the moment, and you have to buy a MobileMe subscription for $100 a year. Most of what MobileMe does could be implemented without the server (and the service fee) if everyone had their own IP address.

That’s right—remote syncing wouldn’t need a server. Anything peer-to-peer would also no longer need a server. BitTorrent’s various nascent hacks to skirt the requirement for a tracker would be much easier to do, and more reliable. In fact, anything that requires a peer-to-peer connection could be massively decentralized, very reliable, and impossible to shut down.

And that’s just scratching the surface.

Everyone wins. Really!

That last paragraph is the stuff of nightmares—if you happen to be a member of the RIAA or MPAA. But it shouldn’t be, won’t be once they realize that, were this dream to become reality, they could finally, at long last, pin an act of piracy on an individual with a high degree of accuracy.

Hmm, yeah. Not so great for privacy and anonymity. If you wear a tin foil hat, this proposal is not for you.

ISPs probably lose, too, at least in the short term. Their expensive infrastructure is built on the assumption that people are going to download far more than they upload. That reality is changing, even now, and networks are breaking under the strain. What would happen if every device could be a server?

They’d figure it out, that’s what. They’re going to have to anyway, even if this wish doesn’t come true. Oh, yeah, did I mention? This is never going to happen. It’s just not that likely.

But, you know…forever is a long time, and I want this to happen. Do you want it to happen? Let’s all want it to happen! If enough of us do, then it actually will.

I Hope to Die Before My Data

Posted by – October 1, 2009

The computer and the Internet have conquered information management as thoroughly as a Mongol horde, and the Church of Jesus Christ of Latter-Day Saints is not exempt. The enormous and growing Internet Gospel Library and the digitization of genealogical research are proof enough of that. Nevertheless, the Church’s granite vaults stuffed with microfilm records starkly demonstrate that computers still cannot provide a permanent store of information. The world’s vast digital archives teeter constantly on the brink of erasure, and only diligent and redundant copying keeps entropy at bay.

I have no doubt that this problem will be solved. Many are trying; for example, the still-in-development M-ARC disc promises to store data for 1000 years. We will sort it out, and when we do, the conquest of the computer will be complete.