Saturday, April 15, 2006

The Law of Unintended Consequences

In this post, I'm going to take a different tack on computer security. I want to approach security from a more broadly philosophical perspective. I am going to use computer security to illustrate one of the most interesting issues in the world today. I'm also giving my first nod to an approach to thinking about life that has been rattling around inside me for several years and which has been articulated in a book I am reading right now.

While poking around Tom Peters' website, I found a reference to a blog that intrigued me. The blog belongs to Virginia Postrel and promotes one of her books, The Future and Its Enemies. The book has helped me tie together ideas I have held for a long time about freedom, the opportunities of the free market and the fundamental philosophical difference between people who believe the future should be controlled and those who believe the future is best experienced by allowing it to flourish without artificial constraints. These are big ideas that I'm not quite ready to handle in my blog, but Postrel does address the dynamic of unintended consequences, and that dynamic is directly relevant to computer security.

There are two major philosophical battles at play in the world. When you consider the battles more deeply, it is not a reach to say that they are both essentially the same issue. One is a battle that initially sounds like a geek conflict but in reality is much deeper and broader than that. This battle involves the conflict between open systems and closed systems. The computer world is currently grappling with the nature of this conflict as open source software (OSS) like Linux seeks a higher philosophical ground than proprietary software like Windows. Both sides have zealous adherents who assert that virtue is inherently in their camp. I tend to be of the opinion that both open and closed source software have merit, and companies like Sun Microsystems are synthesizing both ideologies into what could very well prove to be a coherent and profitable business plan.

OSS advocates assert that because the code for OSS is available for anyone to inspect and modify, the software is more secure. The assumption is that exposing the code to a broader community of programmers yields a more rigorously developed and tested codebase than is possible with proprietary software development. By far, most software in use in the world today is proprietary, which means that companies view the code itself as a corporate asset and therefore keep it secret. That status as an asset is a direct driver of economic value: because proprietary source code is unique and unavailable to the rest of the computer industry, it creates a competitive advantage.

For Microsoft, this is particularly true because Microsoft has massive levels of ubiquity around the world for both its Windows and Office products. Were Microsoft to expose its code for these programs, it would lose competitive advantage and a substantial amount of revenue. In spite of what adherents on both sides of the philosophical spectrum claim, I do not believe that either approach to software development is inherently more robust or secure. This is due to one sticky reason: humans are involved in both approaches, and humans always bring imperfections into what they do. Always.

Open and closed systems are in conflict in other areas of the world. Islamic jihads are waged by closed-system ideologues against open societies like America, where opportunities are, in essence, available to all people. Certainly, there are still inequities, but in comparison to closed societies, open societies offer substantially more opportunity. The egalitarian ideal of equal opportunity for all players in a society is substantially more likely in open societies than in closed ones.

Another major philosophical battle is between static and dynamic systems. This is Postrel's primary thesis in The Future and Its Enemies. She categorizes the Republican and Democratic parties as being essentially the same in that both have vested interests in controlling society, primarily in the name of safety and stability. She names these people stasists. Stasists fear the dynamic, open-ended nature of the future and seek to mitigate its potential through rules, policies and bureaucracies (e.g. legislation, unions, agencies like Homeland Security, government subsidies, welfare, Social Security, etc.).

Dynamists are much more open to the potential of the future. Their optimism is not blind; they are aware of the risks of an open, minimally-constrained future. Yet, they believe that the most opportunity for higher-quality life is in an open-ended future. Life is not optimized through stasist planning that seeks to minimize risk, but is achieved through iterations of trial and error. Optimization is not a matter of control of the future but of adapting to the sometimes dizzying proliferation of choices.

At any given point in time, we make decisions in the Now that we hope will carry well into the Future, without perfect knowledge of what that future will bring. Sometimes they are good decisions. Sometimes they are decisions that carry unintended consequences. As adverse conditions arise, we adapt and step into another set of iterations, where we experiment, fail and resolve problems.

OSS ideologues assert that the nature of the open source development process is inherently more secure than the proprietary approach to software development. They point to Microsoft's well-known historical travails with security flaws as proof of this assumption. The underlying reasoning is the belief that exposing the code to a global community of programmers yields robust code that has been vetted for weaknesses. What I have never seen anyone address is the corollary: exposing the code also gives exploitive programmers perfect knowledge of how the code works. This perfect transparency gives exploitive programmers an advantage that exploitive programmers for proprietary programs do not have, because the closed nature of the code occludes details about how the software actually works.

Whereas OSS programmers are part of a potentially global community of programmers, proprietary software is developed by a smaller group of people united under a corporate banner. Proprietary programmers have the advantages of corporate resources, a (hopefully) coherent vision for not only the current iteration of software but also second, third and fourth generations of the software, consistent best practices and, of course, intimate knowledge of how their applications work. Because the code is not transparent through open source distribution, exploitive programmers lack complete insight into how proprietary programs work.

Both OSS and proprietary systems have advantages and disadvantages. Neither is inherently more secure than the other. They share a common element and it is this element that undermines either side's claims that virtue rests with them: both sides are populated by humans.

Humans have a tendency toward developing imperfect creations. It is the nature of the human condition to be fallible. An open approach to design that favors iterative experiments of trial and error and an emphasis on learning tends to yield creations that are better designed. Both open source software and proprietary software can employ an open approach to development. But even with this approach, the law of unintended consequences still comes into play.

The law says that when people attempt to manipulate complex systems to generate specific outcomes, the planned outcomes often fail to materialize because the system's complexity generates results that could not be anticipated. One of the principles of complex systems is that as the number of variables in a complex system increases, the ability to manage that complex system decreases. For this reason, the likelihood of unintended consequences increases as well.
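This growth is easy to see with a little arithmetic. The toy sketch below is my own illustration, not anything from Postrel: it counts how the number of pairwise interactions among n components grows quadratically, and how the number of possible on/off configurations of those components grows exponentially, which is why exhaustively anticipating every outcome of a complex system quickly becomes infeasible.

```python
# Toy illustration: how interaction counts explode as components are added.

def pairwise_interactions(n: int) -> int:
    """Number of distinct pairs among n components: n choose 2."""
    return n * (n - 1) // 2

def binary_states(n: int) -> int:
    """Number of on/off configurations of n components: 2^n."""
    return 2 ** n

for n in (10, 100, 1000):
    print(f"{n} components: {pairwise_interactions(n)} pairs, "
          f"{binary_states(n)} possible states")
```

With just 10 components there are 45 pairwise interactions and over a thousand configurations; at 1000 components the state space is astronomically beyond any planner's ability to enumerate.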

Some examples of the law of unintended consequences:

The United States armed Afghan fighters and trained them to battle the Soviets as part of the Cold War. After 9/11, the US fought the Taliban in Afghanistan, where the Taliban used weaponry the US had supplied in the 1970s and 80s.
The US supported Saddam Hussein in his war against Iran, then fought him in two later wars.
Apple attempts to protect its market share by keeping its OS closed. Apple made this decision before it had achieved sufficient ubiquity in the marketplace and as a result, has been little more than a niche player in the computer industry.

Postrel says this about the law of unintended consequences:
Unintended consequences really have to do with naive people believing that there are no holes [in a complex system]. It's very easy to seduce yourself into thinking that you've got everything under control. The reality is it's almost never true. Clever people will always come up with ideas no central rulemaker has conceived. In a dynamist's world, rules must allow for adaptation, change and recombination.
Operating systems, applications and networks are highly complex adaptive systems (CAS). Complex systems are inherently adaptive because there are many factors of variability that feed back on each other. Think of ecosystems or the global economy: many players with self-interest and many external factors that each player must adapt to in order to survive and thrive.

Computers, people and networks all combine to form a highly complex adaptive system. All players on the network have myriad motivations: economic, sexual, educational, ego, exertion of power, etc. Consequently, the sum value of the internet and all private networks is staggering when measured by the currencies of money, sexuality, education, ego and power.

Security threats exist because the internet possesses so much value and not just in financial terms. The operating systems on the net are complex and the applications on the net are not only similarly complex but their interactions with other nodes on the net are also complex. The interaction between operating systems and individual nodes (home computers, work computers, servers, etc.) creates tremendous levels of complexity.

The complexity and variety of networks are what drive unintended consequences in operating systems and applications. No development methodology can anticipate all the variety that surfaces within complex adaptive systems like the internet and local networks. The open source development approach cannot anticipate network complexity and the ingenuity of self-motivated, independent players who seek to exploit weaknesses in computers in order to gain whatever currency is important to them. Neither does the proprietary development methodology have the ability to contain these threats. In other words, neither methodology completely mitigates the Law of Unintended Consequences. The result? Both open source and proprietary systems will experience successful security attacks.

For this reason, the claims of OSS zealots that their methodology is inherently more secure are profoundly fallacious. It is impossible to develop an operating system or application that is 100% secure 100% of the time. The network is too complex to ensure infallible security.

I said earlier that one of the key philosophical debates is between stasists, who seek to create ever-increasing numbers of rules to stabilize the outcomes of the future, and dynamists, who are more open to the ambiguity and uncertainty of the future. Dynamists place more value on iterative experiences of trial and error, learning and adaptation. They see value in experiencing the variety that emerges from complexity and uncertainty. The differences between stasists and dynamists are profound.

Stasists insist that one software development methodology must be more secure than another. Dynamists look at both open source and proprietary development and see value in both. They recognize that the best approach to computer security is to abandon the illusion that any one development methodology is inherently better than the other and see instead that they are simply different. In both cases, what must happen is what in fact does happen: each OS and application developed by each methodology must respond to security exploits by developing patches that adapt to exploited weaknesses. Doing so is not an admission of weakness but a practical response to what is true about complex adaptive systems.

I rail against OSS zealots because I do not respect their dishonesty and pride. Their hatred of Microsoft blinds them to the realities of complex systems. Their propaganda actually undermines the effectiveness of their platform because, as is happening now, businesses are gaining a better understanding that what was promised about OSS has largely not materialized. This is particularly true as OSS gains more and more ubiquity, because with ubiquity comes a more appealing target. Their arrogance offends me because there is little reason in it. Just emotion. Computers are more than an ideology to me, and I'd like to think that some balance and sanity can be brought to the discussion. With about 40 hits a day, my blog isn't likely to make a dent, especially after long, ponderous entries like this. Nevertheless, this is one attempt to do that.