Wednesday, July 19, 2006

The Internet Is Enormous Tangled Up Tubes

Watch this one first:



And now watch this one:

Friday, May 12, 2006

What Exactly Does "AD Integration" Mean?

A year or so ago, we bought an application for managing help desk requests. One of our requirements was Active Directory integration. The product we selected said they integrated with AD. The project was a bit rushed so we didn't conduct due diligence like we should have and we just bought the application.

For this application, AD integration actually meant that periodically (every 24 hours by default), the application would query AD and get the current user base and then replicate the same information in its own database. This means the application didn't authenticate through AD; it maintained its own permissions database.

Perhaps there are good reasons for designing the application this way, but to me it seems unnecessary. The whole point of AD is to manage user access to network resources. Why does it make sense to create a redundant permissions database? It creates more overhead for system administrators, and it adds complexity to troubleshooting because the application's home-brewed authentication is another layer that has to be ruled out. Whenever I see solutions like this, my initial reaction is: these guys don't know how to query AD in code.
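To make the staleness problem concrete, here's a minimal Python sketch. Everything here is hypothetical — plain dicts stand in for Active Directory and for the application's replicated copy — but it shows why checking a local replica is not the same as authenticating against the directory:

```python
class HelpDeskApp:
    """Hypothetical app that replicates AD instead of querying it live."""

    def __init__(self, ad_directory):
        self.ad = ad_directory
        self.app_db = dict(ad_directory)  # snapshot taken at last sync

    def sync(self):
        """The periodic (e.g. every-24-hours) replication job."""
        self.app_db = dict(self.ad)

    def can_log_in(self, user):
        # The app consults its OWN database, not the directory.
        return self.app_db.get(user, False)

def ad_can_log_in(ad_directory, user):
    """What true AD integration would do: ask the directory directly."""
    return ad_directory.get(user, False)

# A user's account is enabled when the app last synced...
ad = {"jsmith": True}
app = HelpDeskApp(ad)

# ...then the account is disabled in AD (say, a termination this morning).
ad["jsmith"] = False

app_says = app.can_log_in("jsmith")     # stale replica still says yes
ad_says = ad_can_log_in(ad, "jsmith")   # a live check says no
```

Until the next sync runs, the replicated database happily authenticates a user the directory has already disabled — which is exactly the window a redundant permissions store opens up.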

Yet, in direct contradiction to my theory is Microsoft's ISA Server 2004. Internet Security & Acceleration Server is used to manage user access to network and internet protocols. ISA uses various kinds of object definitions to accomplish this: protocols, applications, ports, users, domains, URLs, etc. You'd think the User objects would be AD objects, because ISA is a Microsoft application.

But no. You can't write access rules directly to AD objects like users or security groups. You have to create an ISA group, populate the group with AD users or security groups and then use the ISA User object to define who is affected by a rule. This is not AD integration.

SQL Server has a similarly schizophrenic authentication model. You can run SQL Server in mixed mode, which allows either Windows authentication or SQL authentication. With SQL 2005 and later releases of some of their applications (e.g. CRM 3.0), Microsoft seems to be moving away from SQL authentication toward Windows credentials. This is as it should be.

My point in this post is to be aware of what various technical terms mean, because important tech terms often have two definitions: the marketing meaning and the technical meaning. My mantra is to always distrust the brochure and the sales guy. Don't assume that important technical terms mean what you think they ought to mean. Always clarify the definition, because salespeople tend to sell to marketing definitions, not technical ones.

Tuesday, May 02, 2006

ROI Gibberish and A Video to Illustrate It



I have argued before in my blog that for technology, ROI and TCO calcs are accounting gibberish that companies use to justify the costs to acquire new technology, but which are largely useless because they don't lead to meaningful information. I have a couple of components to my argument against ROI and TCO calculations for technology buys:


1. The actual dollar values of benefit gains rest on assumptions with varying probabilities of accuracy, where "accuracy" means how closely the assumptions map to the actual values of costs and benefits (and often the total cost and value picture isn't known at the time the ROI and TCO calcs are made). Further, these calculations tend to look only at benefits. They ignore, for example, the loss of productivity during implementation and during the users' learning curve, as well as other factors that can shrink the net benefits of the project.

2. ROI and TCO calcs are most accurate when direct costs are used. They are less accurate when indirect costs are applied. For costs like software licensing fees and hardware acquisition, direct costs are pretty solid. But there are other costs like hourly consulting fees that can easily go over budget. Further, these calcs can be hampered by the opportunity cost of the support provided by existing staff to the technology project in question: when staff is supporting the project, they cannot do their regular jobs. For disciplined companies, the cost of validating the ROI after the payoff period has expired adds costs to the project and diminishes the expected return (see item 3).

3. Most companies do not bother to measure the actual ROI of a project after implementation and after the projected payoff period. This implies that the ROI and TCO calculations are merely exercises to mollify the budget-keepers. If a company truly valued its capital and the need to satisfy a capital hurdle rate (ours is 12%), then they would also have the discipline to validate the ROI after the payoff period has passed. To me, the fact that most companies don't do this means that the ROI and TCO calcs are either acts of obeisance or they are simply traditional, yet vacuous corporate exercises.

4. Individuals who want the project approved adjust their assumptions not to reflect reality but to produce the numbers needed to win approval. It's simply a matter of working the numbers backwards.

Check out this four-minute CNET video on performing the TCO calculation. It's a wonderful illustration of my objections to these common exercises. There are two key problems with the claims made in this particular presentation.

Common metric for decisions on all potential projects: She asserts that the value of the ROI calc is that it allows a business to evaluate all potential cash outlays by using the same metric: namely, the value of expected benefit in relation to the up-front costs.

This is not true in all cases and is particularly sketchy with technology purchases, because the benefits of technology, and the dollar value of those benefits, are difficult to measure. In contrast, the ROI of a forklift is more tangible because the dollar value of its benefits is easier to measure (e.g. increased ability to move inventory to assembly, which helps increase inventory turns, etc.).

She completely glosses over the value of benefits with a straight face: In her example, she says something like, "We are going to experience an increase in productivity that will drive revenue gains of $80,000/yr for three years."

This is exactly the kind of statement that completely undermines the value of ROI calcs. She doesn't describe how to go about calculating the value of the expected benefits. Specifically, how will the expected benefits translate into measurable dollars? The dollar value of benefits magically appears in ROI spreadsheets as a function of assumptions about those benefits, and no one ever challenges the ROI calculation even though everyone knows the assumptions are usually inaccurate and incomplete.
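For the sake of illustration, here's what her claimed $80,000/yr over three years looks like when discounted at a 12% hurdle rate like ours. The $175,000 up-front cost is my invention (she supplies only the benefit side), which is itself the point:

```python
def npv(rate, upfront_cost, cash_flows):
    """Net present value: discounted inflows minus the up-front outlay."""
    return -upfront_cost + sum(cf / (1 + rate) ** t
                               for t, cf in enumerate(cash_flows, start=1))

HURDLE = 0.12  # our capital hurdle rate

# Her claim, taken at face value (up-front cost is my hypothetical figure):
claimed = npv(HURDLE, upfront_cost=175_000, cash_flows=[80_000] * 3)

# The same project with a 20% haircut on the revenue assumption:
deflated = npv(HURDLE, upfront_cost=175_000, cash_flows=[64_000] * 3)
```

At face value the project clears the hurdle; with a modest 20% haircut on an unmeasured assumption, it fails. The approval decision hinges entirely on a number nobody knows how to verify.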

The reality is that most companies do not have a clear idea of the dollar value of benefits from technology acquisitions because tech buys are largely intangible. Further, the broader the scope of a tech buy, the more systemic its effects will be: implementing an ERP application for a whole company will touch more business processes, with more potential for broad benefit or catastrophic disruption, than, say, implementing AutoCAD in the engineering department. The intangibility of tech buys can be made more concrete by conducting baseline analyses of the costs of business processes and by redesigning business processes to derive benefit from the tech buy. Of course, most companies don't have the discipline for this kind of validation.

Some tech spends have variable benefit value based on whether the technology is fully utilized. For example, if your company uses Office primarily for memos, localized number crunching (i.e. individuals running their calculations without regard to other departments) and databases, there isn't likely a whole lot of value in upgrading from Office 2003 to Office 2007. But if you take Office 2007 and integrate it with SQL 2005, SharePoint and Microsoft Business Scorecard Manager to distribute key performance indicator metrics to the company, you will experience incredible value for the cost of upgrading to Office 2007.

The bottom line is this: for tech spends, it is difficult to accurately measure the dollar value of expected benefits. ROI calculations are hampered by this difficulty. TCO calcs are hampered by the presence of direct and indirect costs, as well as costs that are ignored for various reasons. For example, cold room electrical consumption may be viewed as an overhead cost, which makes it difficult to evaluate the cost of electricity as a decision-making factor in choosing one server or another. This is why Sun's marketing strategy of selling SunFires on the basis of lower power consumption and lower heat throw is not likely to connect with a lot of companies. I address this issue toward the end of this article.

The bottom line is that in many, if not most cases, decision-makers buy software solutions because they have an intuitive or hopeful sense that the software spend will generate real process and dollar benefit. I have seen decision-makers tweak assumptions known to be erroneous simply to get the ROI over the hurdle rate so they can get their project approved. In other words, they adjust the numbers at the back end of the calculation in order to justify their desire and hope.

Risk analysis then is also intuitive and is measured by comparing the risk of failure with the comfort level among decision-makers that they have made a good selection and that the project will, in the end, be successful at delivering value.

Technology is bought because of emotion. It's not a purely financial decision.

As much as accountants would like it to be otherwise, technology spends are primarily emotional. Technology is bought because of intuition, desire, covetousness, a need to appear significant and a deep hope that something can make business pain go away.

Sunday, April 30, 2006

Sun Marketing, Innovation and Commoditization

I have a couple friends who work for Sun. One is well-placed up in the Sun food chain and the other has been heavily involved in Java standards for a number of years. Both are loyal to Sun.

Below is an edited snippet of email conversation between one of my friends and me regarding, at first, the departure of Scott McNealy as CEO, and then a discussion about the strategy of Sun as it has been unwinding over the past year or so. I asked him what he thought of Sun's strategy as it relates to cheaper servers. His response was, "The low-power server is a big deal, I think. If you are just buying 1 of them, you don't care. If you are going to buy 100 of them, you care a lot: the air handling costs really do add up. It is certainly a unique strategy to try to market a server (not just a chip) as low-power, but I think that we are doing so because we were listening to customers."

I'm including my response because it sums up my current analysis of Sun:

"but I think that we are doing so because we were listening to customers.."

That's an interesting dynamic, for three reasons:

1. Are Sun's customers listening to Sun? http://blogs.sun.com/roller/page/jonathan?entry=the_dell_premium

2. In the book, The Innovator's Dilemma, the argument is made that innovators who listen to customer feedback can actually inhibit innovation, because customers tend to think only in terms of incremental performance improvements and incremental cost reductions. Customers typically conceive only of existing technologies, and the demand they create drives downward price pressure along with incremental performance/feature improvements. For an innovative company like Sun, listening to customers could (not saying it will) inhibit innovation. Sun needs to be careful not to be pulled into Dell's commodity server market, because I don't think Sun has anywhere near Dell's manufacturing efficiency. That would mean losses for Sun. Sun cannot compete against Dell on price. Neither should Sun compete against Dell with the server as a commodity, because that conflicts with Sun's innovation DNA.

3. Last year, when I was visiting my buddy Scott, we had dinner with a Sun exec. He made an interesting observation: he said that Sun has been an incredible innovator from the beginning but that Sun was terrible at marketing. He contrasted Sun with Microsoft, who he characterized as primarily a marketing machine and a technology innovator secondarily. He might have even scoffed at the notion of anything from Redmond as being innovative. I silently disagreed but listened to his point.

Bringing these three together might lead to the following idea: Sun may be taking marketing more seriously because they recognize that marketing is important to capturing more market share. I read Sun press releases and Schwartz' blog and both sources tell me that Sun is successful at large computing deals, particularly with developing nations and academics. Listening to customers is part of that but so is leading customers to better technology.

The links in Jonathan's blog tell the story of the University at Buffalo's decision to go with Dell servers even though (if I remember correctly) they were Sun customers as well. They couldn't afford to power and cool the Dell servers, so they could only run a portion of them. What is interesting about this article is that the university didn't consider electricity consumption and cooling needs as part of the total decision-making process. I think this is in large part because most accounting types want ROI and TCO calculations, and those calculations don't consider cooling and powering servers. Plus, as I wrote here: http://dereynolds.blogspot.com/2006/04/technology-and-cost-of-capital.html ROI and TCO calcs are accounting gibberish that frequently have little value in IT decision-making.

The question to ask here is: why? I think the answer is that a lot of decision-makers aren't thinking about power and cooling issues because they view electricity and cooling as overhead expenses, not direct costs. Overhead expenses are difficult to include in these kinds of calculations because they are... well, overhead, and overhead is hard to tie to a specific purchase. You can get around it with an overhead allocation, but that's not entirely accurate. I think Jonathan refers to a per-square-foot cost for chilled server rooms, but again, that allocation rate is based on assumptions about cooling and power needs, labor, cost of real estate, etc.
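The math that gets buried in overhead isn't hard; here's a back-of-the-envelope Python sketch (every figure is illustrative, including the cooling multiplier, which is a rough assumption rather than a measured number):

```python
def annual_power_cost(watts, price_per_kwh, cooling_overhead=0.5):
    """Electricity to run one server 24x7 for a year, plus a rough
    multiplier for the extra energy needed to remove its heat."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh * (1 + cooling_overhead)

# Illustrative mid-2000s figures, all assumed:
commodity = annual_power_cost(watts=450, price_per_kwh=0.10)
low_power = annual_power_cost(watts=250, price_per_kwh=0.10)

# One server: the difference looks like noise. A hundred servers: real money.
fleet_savings = (commodity - low_power) * 100
```

Per box, the difference is a couple hundred dollars a year, which is exactly why it vanishes into overhead. Across a hundred boxes it's tens of thousands of dollars a year, which is exactly Sun's pitch.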

My point is that listening to customers is good, and it's good that Sun is taking customer input more seriously, but what Sun really needs to do is sensitize decision-makers to the value of lower power consumption, particularly, as you mentioned, with 100 or 1,000 servers. The mistake Sun marketing is making, however, is trying to pitch the new servers as eco-friendly. The only people who care about eco-friendliness are the bearded, Birkenstock-wearing, Prius-driving IT guys.

My theory right now is that businesses see the server room as a commodity. They see value-add from IT purchases on the desktop, in terms of what the desktop enables from a functionality and business-process perspective. This is because software lives on the desktop, where users see the functionality. Once the philosophical and business decision of platform is made (Windows, Linux, OS/400, Solaris, etc.), it is easy to view the hardware of the server infrastructure as a commodity. This is due to two reasons: decision-makers see only the desktop as evidence of IT, which makes the cold room invisible (except when a server goes down); and there is relative parity of price:performance ratios across server competitors and their products.

So, Sun is trying to compete on a selection criterion that isn't part of the thought process for a lot of organizations. Because of the emphasis on ROI in purchase decisions, IT guys need to justify the up-front costs of server purchases, because it is assumed that the back-end costs of server ownership are equal. The notion of eco-friendliness is not valuable to most people. Dollars are. But from a marketing standpoint, it's a bit of a trick to get people to consider electricity and cooling factors, not only in terms of cost but also in terms of capacity (do we have enough electricity and cooling to take these boxes live?), because these costs are simply seen as overhead, not direct costs.

I think Sun has come up with a very cool innovation: more computing power, less electrical consumption, less heat generation. The challenge is to get existing and new customers to see it as a value-add feature/benefit set that puts Sun ahead of Dell instead of just something nice you do for the environment.

Saturday, April 22, 2006

Two Standards of Fair Use of Intellectual Property



There are two standards of conduct in the IT industry: those of Microsoft and those of everyone else but Microsoft. Nearly every action that Microsoft takes in the industry is contested but when Microsoft's competitors take the same action, there is no vilification or acerbic attacks. When Microsoft buys functionality and wraps the new capability into Windows, they are accused of suppressing innovation. When Larry Ellison buys PeopleSoft or Sun Microsystems buys StorageTek, they are adding shareholder value by improving their competitive capabilities. Some will even say non-Microsoft companies are buying functionality to extend innovation into the marketplace.

It's a double standard that the industry exploits over and over again.

ZDNet has been publishing articles that have pushed my buttons lately. I don't consider ZDNet to be overtly biased toward open source or against MS, but in the past week or so, I have read stories -- usually "analyses" -- that have done a pretty good job getting me cranked up. As I write on Saturday, I'm a little wound up after reading this analysis of the implications of virtualization software.

The author, Charles Cooper, seems to have two components to his argument: a) virtualization implies that Microsoft ought to let the licensing of virtual servers slide a bit to the advantage of the IT department, and b) Microsoft ought to do this because it was accused, but never fully convicted, of predatory antitrust business practices.

The business value of virtualization accrues to the wise IT shop that sees an economic, operational and administrative advantage in virtualizing servers. Virtualization is cool because, as my friend Mike wisely pointed out in response to my article calling ROI and TCO gibberish calculations for most technology, virtualization actually does allow a meaningful ROI calculation: you can count the real physical boxes eliminated in favor of a few beefy boxes housing numerous virtual servers.
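Mike's point is easy to sketch in Python, because for once the inputs are physical boxes you can count rather than assumed productivity gains (the specific figures below are invented):

```python
def consolidation_savings(boxes_retired, cost_per_box_per_year,
                          big_hosts, cost_per_host_per_year):
    """Annual saving from replacing many physical servers with a few
    beefy virtualization hosts. Every input here is directly countable
    and priceable, unlike the soft 'benefit' assumptions in most ROI calcs."""
    old_cost = boxes_retired * cost_per_box_per_year
    new_cost = big_hosts * cost_per_host_per_year
    return old_cost - new_cost

# Hypothetical shop: 20 aging servers at $3,000/yr each (power, maintenance,
# rack space) collapsed onto 2 hosts at $12,000/yr each:
saving = consolidation_savings(20, 3_000, 2, 12_000)
print(saving)  # 36000
```

No assumptions about revenue gains or productivity, just retired hardware and its carrying costs, which is why this is one of the rare tech ROI calcs worth doing.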

The author seems to be implying that, since virtualization software runs on an OS, the extent of license compliance ought to be solely for the host OS. This is an argument predicated on physical use. But virtualization is a logical use, not a physical one: VMware and Microsoft's Virtual Server enable the creation of logical servers, and each of those logical servers provides the same functionality as a physical server. Whether that functionality is provided on a physical server or a virtual one is irrelevant to the cost of licensing.

Cooper attempts to build a connotative case against Microsoft using the double-standard of fair use of IP. He refers to a largely meaningless DoJ case against Microsoft in the first part of this decade to imply that the notion of charging customers for a Windows Server license for every instance of Windows Server is evidence of Microsoft's predatory anticompetitive behavior. Cooper insinuates that such a requirement is an economic artifact belonging to misguided libertarians (me being one) and evil capitalists who haven't yet appreciated the economics implied by virtualization.

But here's the question to ask to penetrate the author's accusatory thesis: Would he be challenging Microsoft's requirement of 1 license per Windows Server prior to the existence of VMWare?
The answer is No because the absence of virtualization technology would mean the policy would not be questioned.


This allows you to conclude that the issue is not whether licensing ought to slide for a virtual server but whether an IT department derives value from both physical and virtualized boxes.

It's hard to imagine a more sophomoric closing statement than what Cooper says here: "Last time I checked Microsoft was not in the philanthropy racket. Any company that tries to get out of paying for the full costs of virtualization will find itself on the receiving end of a sweet lawsuit, courtesy of Bill Gates & Co. "

Lovely. IT shops are not charities, and they should expect none of the absurd "philanthropy" Cooper implies Microsoft ought to grant them. There is honor in being a profitable company. There is also honor in paying for the licenses a company uses. Cooper's thesis has no merit.

Thursday, April 20, 2006

Seed the Hobbyist Programmer



Microsoft has decided to continue its free release of Visual Studio Express beyond their original plan of one year. According to ZDNet and other tech RSS feeds, VSE has been downloaded 5M times since November 2005.

Download Visual Studio Express here.

When combined with SQL Server 2005 Express, VSE provides a great learning platform for secondary school students as well as college students. VSE and SQLE also provide a powerful and affordable (how about $0?) development environment for hobbyist programmers who don't want to pay $260 for Visual Studio 2005 Standard.

SQL 2005 Express improves on MSDE in several ways. First, the maximum database size is 4GB versus MSDE's 2GB. Second, SQLE can run on a box with up to 1GB of RAM and one processor. Third, SQL 2005 SP1 adds graphical interaction with VSE, letting you design databases without needing SQL Enterprise Manager.

With VSE and SQLE 2005 available for free, Microsoft is seeding the interests and development of future programmers. This is a wise move on Microsoft's part. As I have argued in other articles, software is where innovation and value are delivered. I think the OS platform is primarily a philosophical and business decision, but once that decision is made, the hardware and OS are commodities. It's the software that creates value (one could argue that the OS has implicit value because the chosen OS yields a set of innovation that can only be used on the chosen platform). Kevin Kelly argued that the value of a network is a function of the number of nodes on the network. In order to sustain high numbers of Microsoft nodes, there needs to be innovative software driving demand for the Microsoft platform.

Wednesday, April 19, 2006

This Is A Truly Wonderful Open Source Indictment



For at least the past three years, I've heard numerous rants from friends and online contacts about how insecure IE is and how inherently secure Firefox is. I've debated them with a bias not toward defending Microsoft but toward security that is platform-indifferent. I won't rehash my arguments here; you can search my blog and find those articles yourself.

In this incredible news item, Mozilla users are urged to upgrade their early-model versions of Firefox and Mozilla derivatives because they have significant security weaknesses that can be exploited.
Users have been urged to upgrade to the latest versions of Mozilla's software to protect themselves from a series of critical security holes.

The Computer Emergency Readiness Team (CERT) warned on Monday that earlier versions of Firefox, and other Mozilla software based on Firefox code, contain a clutch of vulnerabilities that expose users to attack.

The Mozilla Foundation released a new version of Firefox last week, version 1.5.0.2, which it said contained fixes for several security flaws.

According to security firm Secunia, there are a total of 21 flaws in the older versions of Firefox, such as Firefox 1.5, some of which it described as critical.
How utterly humiliating for all the people who have arrogantly mocked IE users and Windows advocates on Slashdot and IT news boards, insisting that their browser was clearly more secure than IE. All their arguments have been completely undermined. Now all they have left is, "Yeah? Well, my browser has better functionality than your browser does!"

I have this glowing, warm feeling inside right now. Vindication feels wonderful. It's a beautiful day.

Comments About Novell and Sun

Just days after I made fun of a Senior Analyst for imagining IBM, Oracle and Novell coming together to defeat Microsoft, my friend Mike sent me this article on Larry Ellison's potential interest in Novell.

I wrote back to him:

"It's interesting but buying Novell to get into open source seems to me to be similar to buying a 1998 Chevy Lumina to race the Indy 500. Ellison has a tendency to buy and absorb into the Oracle brand, so it's possible he has no plans to maintain the Novell brand; he may just want SUSE as an asset that can be rebranded as Oracle Linux.

What interests me about this is what database Oracle would use. Would they support an open-source database like MySQL, or would they develop an open-source version of Oracle?"




In a note to a friend of mine in Dallas, I made the following comments about Sun:

I just don't think Sun is going to be able to get their poop in a group any time soon to please the institutional investors' desire for profitability and the market's desire for innovation that means something to customers. I think Mr. McNealy needs to step down and let someone else try; he hasn't led the company well since the 2000 burst. I'm not sure if Schwartz is ready yet for CEO duties but he does seem to be leading Sun's thinking. Jonathan is definitely the less strident and more articulate Sun exec.

I think the big problem with Sun is that it is innovating in back office hardware, which is largely a commodity space. I'm not sure I follow the business logic of leading their marketing with entry-level x86 servers. With few exceptions (the iPod, say), hardware is a commodity. Sun has a couple of innovative takes on servers, namely running thermally cooler and demanding less energy, but the problem is that it's hard to market those criteria when TCO calculations don't typically include cooling and power costs. (TCO calcs are acts of accounting gibberish anyway, but no one wants to admit that.) I think most people view the server as a commodity with a short lifespan, and therefore the costs of powering and cooling servers are treated like the costs of powering and cooling cubicles: it all just goes to overhead electrical expense.

I think that most of the interesting innovation occurs on the software side rather than hardware and I think there is more market demand for that kind of innovation because software is what drives business value. The computing platform is a philosophical and business choice, i.e. do we go with Solaris, Windows, Linux, Unix, whateverix, but once that decision is made, a server is a server. Software, however, is what creates value for a computing infrastructure.

Sun is still primarily a hardware company hoping to make some back-end money on servers and services by giving away Solaris and by selling tape backup systems (????). I think Sun is trying to innovate in the wrong area, and the StorageTek acquisition seems to me an expression of Sun's hesitancy about whether its direction is on track: why invest $4B (net $3B, because StorageTek had $1B in cash) to supply commodity storage services? What I do not see in Sun's marketing or in Jonathan Schwartz's blog are PR pushes that bring Sun software to the forefront. Why should a business choose Solaris? Where is the value? Why (God, why??) should a business choose StarOffice just because it is cheaper? See? Sun is innovating a commodity product and yet shipping commodity software in a market that expects software innovation.


Microsoft is doing some hella cool stuff. I saw some technology in Dallas that blew my mind. Office 2007 is bringing in functionality that is incredible. Office is tightly integrated into SQL, Windows and SharePoint portal in a way that allows companies to deliver information and analytics that are truly exciting. One session I went to demo'ed this stuff and twice the audience erupted in applause and "Oh my God"'s and "Wow"'s. After watching this demo, the relevance of Office costing $300/user completely disappeared from my mind. It was the first time I'd ever seen value in Office beyond, "I have to buy this to keep compatibility with everyone else in the world." My thought was, "How can a company not buy Office after seeing this?" Yes, it requires initial setup and implementation but the value it can deliver is truly exciting.

Tuesday, April 18, 2006

BMW Improves iPod Interface



I believe it was in 2003 or 2004 that BMW and Apple announced that BMWs would be pre-wired to interface with iPods. Much ado was made of it, but the execution left much to be desired.

Essentially, the interface enabled the iPod to emulate the six-CD changer that is optional for BMWs. This meant you had to define five playlists that were analogs of five CDs. Certainly, you could create long playlists, but truly enjoying an entire music collection on a 30GB iPod was seriously hampered by this limitation. The design required constant interaction with iTunes to modify playlists to take advantage of the iPod's capacity. It was a neat idea with lame execution.

I've been struggling to find the ideal interface for my music player now that I have shifted from the Dell that died to the 30GB iPod Video. I used to be content with the Belkin FM transmitter, but after it fell to an untimely death at the hands of spilled Beaner's, I replaced it with the iRiver transmitter. But then I found out about Harman Kardon's most excellent Drive and Play system and coveted it for a while. I was on the cusp of buying it, and I even had my CFO's approval (that would be my wife), but for some reason I decided to wait. Perhaps my BMW radar was finely tuned to BMW NA headquarters in New Jersey and knew to wait for this design. Or maybe my decision not to buy the H-K system had nothing to do with this announcement.

In any case, I'll "suffer" through the iRiver FM transmitter until July.

The following is quoted from the BMW press release that announces a new interface available in July 2006.
The new BMW Interface for iPod will be available for owners of the new BMW 3 Series Sedans and Sports Wagons as well as the 5, 6, and 7 Series. It will also be available for the new M5 Sedan and M6 Coupe. It will enable audiophiles to bring their entire music collections with them, plug directly and effortlessly into a superior sound system while maintaining uncompromised control over their driving experience. Since the new Interface is compatible with SIRIUS satellite radio as well as the recently introduced HD Radio, owners will be able to enjoy a broad selection of high fidelity broadcast music sources as well. The original BMW iPod Adapter will continue to be available for 2002 and later BMW models: X3, X5, Z4, and previous generation 3 Series.

...The seamless integration of iPod makes it effortless for drivers to control their music through their existing audio system and the multifunction steering wheel. The new BMW Interface for iPod enables drivers to easily access their entire music library, shuffle songs, skip between tracks and adjust volume -- all of this with no loss of sound quality or driving control.

The new Interface is compatible with all iPods with a dock connector, including the iPod nano and the fifth generation iPod. The BMW iPod Interfaces integrate the iPod through a direct connection in the BMW glovebox, providing outstanding sound quality and constant power to the iPod all while your iPod remains protected and out of view.

The new BMW iPod Interface will be available for customers to purchase at BMW centers beginning in July 2006. Pricing has not been determined.

BMW


Sunday, April 16, 2006

Apple Boot Camp Won't Make A Bit of Difference



I have a good friend who is a long-time, die-hard Mac fanatic. He sent me an email titled Look Out Bill! and a link to Apple's Boot Camp beta. Boot Camp is an Apple OS X program that allows a Mac owner to set up a dual boot system that runs Windows XP.

This is my response:

A). Bill has nothing to worry about from Boot Camp and he isn't going to lose money. It still requires an XP license so if anything, he's going to make money. But not very much, because...

B). Just about the only people who will do this are existing Mac owners or people who haven't yet purchased a desktop computer. What existing PC owner is going to buy an Apple so he can run Windows XP? It makes no sense. And Mac owners who do this are admitting that the selection of software titles for the Mac is paltry in comparison to Windows. Plus, there is a little bit of irony in this: one of the most appealing aspects of the Mac is how little technical skill it demands of its users. For the average user, a dual boot computer is not easy to set up or use. So, Apple is kinda violating the "easy interface" advantage with a dual boot environment.

When you find someone who has successfully ported Mac OS X to the Wintel platform, then that will be something worth having. I'd love to run Mac OS X on my existing hardware. There was a Japanese guy who did it last fall/winter after much tinkering. It requires a boatload of gyrations and software: VMware, some open source stuff, some Apple stuff, etc. When it is easy to do, then I'll do it. I'll even pay the $100 for a legit copy of OS X.

The only problem is that Steve doesn't want to sell OS X to Wintel users (foolish man). He wants to bundle the OS with the box. Steve still believes that the hardware is cool. It's not. It's just a pretty box with a processor, RAM, a hard drive and ports to the outside world. A box is a box. Some are prettier than others but in the end, it's just a box.

To me, Steve's unwillingness to sell a Wintel-portable version of OS X is an implicit admission that Apple boxes are over-priced and that, were it not for Mac OS, Apple would have even less share of the computer market than they do now. Put a graphically boring OS like Windows XP on a Mac box and the market yawns. The Mac sizzle comes from the OS. The iPod is the only hardware that Apple makes that means anything. For that matter, the iPod is just about the only hardware that anyone makes that means anything. All other hardware is nothing but a commodity. What makes the Mac great is OS X. The box is entirely trivial. And over-priced. The market has largely rejected Mac desktop hardware for the past 20 years. Boot Camp won't change that.

This is interesting from another perspective. It seems to suggest that the halo effect of the iPod on Mac computers hasn't driven the Mac sales Apple hoped it would. Were the halo effect powerful, there would be no need to create an incentive to buy Mac hardware with dual boot capability because people would be buying Macs hand over fist. And Boot Camp better be free because if people have to pay for it, they won't. They will look at a $600 Mini + $100 for Windows XP + $75 for Boot Camp and go, "Well this is dumb. Why do I want to pay nearly $800 for a computer with no mouse, monitor or keyboard to run XP when I can just buy a complete Dell that runs XP for $400?" Even if Boot Camp is free, the argument is not compelling for the user who wants WinXP compatibility.

There is one simple factor that kills Mac sales even when most iPod users are Windows users: Windows iTunes.

Windows iTunes obviates the need to buy a Mac.

But if Steve were to cut off Windows iTunes, he would choke the life out of iPod sales. People aren't going to pay $300 for an iPod + $100 for almost necessary accessories like FM transmitters and cases then go spend another $700++ for a Mac Mini to support the iPod. Steve needs Windows iTunes to drive iPod sales. Boot Camp won't change that either.

I don't see Boot Camp selling much at all.

Now Discover Your Strengths



Tanya and I are reading a book together. It's called Now, Discover Your Strengths. Its thesis is that in American culture, and corporate HR culture in particular, we tend to focus on identifying and rectifying performance deficiencies. The authors argue that we are better off by understanding our strengths and capitalizing on them while only paying passing attention to what we are weak in. They believe that millions of training dollars are wasted on trying to build capabilities into people that they don't innately possess instead of helping people reinforce their strengths.

It's a thesis I deeply believe in. When I do public speaking coaching, I tell people to focus on what they already do well. If you focus too much on avoiding the things you do badly, then your presentation becomes more about not doing something than about releasing yourself to the audience so that they see you as well as your topic.

The book includes a strengths assessment provided by Gallup. My results are described below:

[Strength Summary screenshots; click to enlarge]

My five themes: Intellection, Strategic, Input, Communication and Ideation.

Initial Impressions of Novell Linux Desktop



I am going to post my impressions throughout the day as I experiment with Novell Linux.

Here's my first delightful discovery: 39 urgent updates and probably hundreds of suggested updates.
I thought Linux was inherently more secure than Windows...

Novell Linux 10 Critical Updates


And here is a screen capture of Novell's email client, inexplicably called Evolution. It sure reminds me of some other prominent email client.

Novell Linux 11 Evolution-Outlook

Novell Linux 12 Evolution-Outlook



One of the open source arguments I've heard against Microsoft is that they suppress innovation. In contrast, the open source movement lionizes itself by saying that they are the true innovators. Yet, when I look at Open Office and Novell's Evolution I see not innovation but artless, blatant mimicry of Microsoft's GUI and functionality.

This leads me to conclude that the open source camp expects people to just eat their assertions and arguments. We are supposed to simply accept the idea that the open source development model is inherently more secure, that Microsoft suppresses innovation and that open source unleashes it. I've challenged the open source advocates in my life to show me examples of how Microsoft suppresses innovation and the only remotely satisfying answer has been, "Well, Microsoft doesn't develop its own solutions; they go out and buy someone else's product and rebrand it as their own."

So, apparently if a company develops all its own functionality, that's innovative, but if a company goes out and buys technology, that's just corporate relabeling. How come Oracle doesn't get eviscerated by the open source crowd for buying PeopleSoft's functionality? What about Sun's acquisition of StorageTek? Oh yeah, I forgot: only Microsoft suppresses innovation by buying functionality. Everyone else is giving innovative solutions a chance by bringing them into the fold of a larger company. Can you spell myopic double-standard? I knew you could.

Sorry kids: Microsoft is producing some of the coolest, most innovative and most secure software out there. All I have ever seen from the open source crowd is high-horse posturing that doesn't deliver substance, and unoriginal mimicry. I just don't buy any of the core arguments offered by most open source zealots. Only Sun's COO Jonathan Schwartz offers a take on open source that is stated not in anti-Microsoft terms but as a positive statement of the value of open source, particularly as open source relates to intellectual capital rights in developing countries.

-----------------------------------------
A visitor made the following comment on my primary blog:

"Your comments show that you trully do not understand software engineering, or the open source development model.

One obvious reason why outlook/evolution or open office/microsoft office have the same skin is to present a commen interface that users can understand. They are quite different underneath.

You seem to have skipped over the fact that the security through obsurity model has failed and will continue to fail. "

-------------------------------------------
To which I responded (with added edits from the original post to improve clarity):

The commenter's argument is incomplete and it's representative of the standard open source zealotry playbook. He tosses out some common talking points but doesn't offer a cogent response.

GUIs have design philosophies. The intent in a design is to expose the underlying functionality to the end user. Well-designed GUIs present functionality in an intuitive manner, while unintuitive ones, like Novell's god-awful GroupWise, let people "do email" but not do it well. The GUI of Office 2007 is a radical departure from the traditional menu-driven GUI's exposure of functionality. The ribbon is an amazing design that presents the user with exactly what they need at the moment. The contrast between the GUIs of GroupWise and Outlook, for example, reveals a significant difference between design philosophies. This difference is experienced in the way functionality is exposed; one has a level of quality better than the other.

So, to say that Evolution and Open Office are designed to give a user a familiar interface and yet have them be "very different" underneath is disingenuous. This is because it's a straw man: most people cannot assess the quality of code in either an open source app or a commercially developed app. But what people do see is the functionality that is exposed through the GUI.

Further, the underlying code is irrelevant to the user experience. Pretty code or code that flows from a favored ideology is completely irrelevant to the end user. What matters to the user is the experience of interacting with the program. How easy is it? How intuitive is it? How natural is it? If people fail to code with the end user experience in mind, they are completely missing the whole point of writing software in the first place.

If the commenter is implying the classic open source argument that open source code is inherently more secure than proprietary code, then he needs to catch up to 2006. This is a stale accusation that had merit with Windows 2000, Exchange 2000 and IE 5. The current versions of Microsoft products are substantially more secure, and as open source has gained prominence, it has become a more enticing attack target. Go to www.secunia.com and then talk to me about inherently more secure code. Secunia lists a stunning array of security issues across the IT industry and in doing so proves that security holes exist in all applications and always will. Inherent security by virtue of a development model is an illusion and a misrepresentation. And as I have commented in my blog numerous times, if anything, the exposed nature of open source code actually makes it more of a security risk because hackers have a 100% accurate map of what to exploit.


The commenter is correct: the obvious reason for the similarity is to give users a familiar interface. What an incredible concession on the commenter's part! It is an implicit admission that open source has no innovation to offer the end user. How can an application be innovative and mimic Microsoft Office and Outlook at the same time? The very notion of innovation involves a creation that is different from its predecessors in design, conception or functionality. Mimicry cannot lead to innovation.

The commenter is absolutely correct: they are quite different underneath. The open source versions of Office are actually subsets of Office. The open source product offerings are wanna-be applications that don't even come close to the functionality of MS Office.

Open source zealots claim to have the market cornered on innovation. I don't see it. All they do is parrot Microsoft. From a marketing perspective, this makes great sense. But open source continually claims the moral high ground by asserting Microsoft suppresses innovation. They disparage Microsoft for buying functionality and use this as evidence to ostensibly support their assertion that Microsoft suppresses innovation.

So, I challenge the open source crowd: SHOW THE WORLD WHAT INNOVATION LOOKS LIKE. Quit replicating what MS does and write something unique. Oh and make sure it integrates with the corporate intranet and back end servers and the rest of the office suite to provide great functionality like Office 2007, Windows, SharePoint and SQL 2005 provides. Develop a killer app that is not only cool but is also desired by the open market.

And don't whine that MS has an advantage because they know all the internal hooks and API's into the platform. MAKE YOUR OWN DAMN HOOKS. INNOVATE! CREATE! ADAPT! BE GENUINELY CREATIVE. BACK UP YOUR WORDS WITH SUBSTANCE.

I don't really understand the commenter's mention of security by obscurity. He certainly cannot be referring to Microsoft security since the entire Microsoft product suite is under constant attack, which paradoxically, makes it more secure than less ubiquitous products. He might be referring to open source programs though since they do not weather the same intensity of security attacks as do Microsoft products.

I don't think the commenter understands what security through obscurity means. His mention of it makes no sense in the context of the article.

Baseless Gushing Over Open Source's Potential to Erode Windows



So, apparently Novell had their BrainShare conference recently. Novell, a company with no direction if there ever was one, gushed about its new Novell SUSE Desktop build. Novell reminds me of the scene in Monty Python and the Holy Grail where the Black Knight gets completely pruned in a joust, only to claim it's a flesh wound. Novell hobbles along on bleeding stubs of legs and touts how it is coming out to take command of the desktop, if only 47 variables happen to line up properly.

I've not read Jon Oltsik's column before but it's amazing to me that people with his kind of reasoning can be selected as featured writers. I guess when you need material to advocate a weak, poorly-thought-out position, you take what editorial content you can get. Here's a high-level summary of his argument:

1. Microsoft used to be considered inferior to OS/2, WordPerfect and Lotus 1-2-3.

2. Microsoft overcame that problem by "offering good enough technology, superior pricing and attractive bundling."

3. Microsoft gained its dominant position as a result.

4. At BrainShare, Novell unveiled a beta of their SUSE desktop, a would-be Windows XP hegemony-killer that is certain to have broader appeal.

5. The SUSE desktop comes loaded with OpenOffice and says Oltsik, "...I'm sure some of the bells and whistles Microsoft bakes in are missing, but there aren't any obvious functionality gaps. In other words, it's good enough for the majority of employees whose jobs depend on doing basic stuff."

6. The new desktop has improved interoperability with Windows.

7. It's cheaper than Windows XP and therefore offers a better value and lower TCO.

Then Oltsik's article takes an utterly comedic turn:
Novell isn't capable of leading the Linux desktop charge on its own, but there are plenty of others in the industry more than willing to help. IBM could certainly move the market if it evangelized Linux and offered hand-holding migration services in the process. (Author's note: It would be somewhat Shakespearian to think that a combination of IBM, Lotus and Novell would lead a successful Linux desktop assault.) There's no love lost between Microsoft and Oracle, so I'm sure Larry Ellison could be persuaded to support this effort. Intel and AMD want to sell boxes, so Linux desktops are just fine.
I laughed out loud when I read this. It was so naively honest and the implications of what he admits are apparently lost on him.

Novell doesn't have the brawn necessary to appeal to the market in any meaningful way.

Oltsik fantasizes a consortium between Novell, IBM, Oracle, Intel and AMD to unseat Microsoft. "If only this would happen, Bill Gates would be undone..."

Oltsik's argument is predicated on the alignment of quite a few stars to unseat Microsoft. The man even reveals that he doesn't fully understand Microsoft's offerings when he asserts that Open Office is roughly equivalent to Microsoft Office and that the gaps between the two are inconsequential. While he is correct that for small and home offices, Open Office may be an adequate suite, he does not appreciate that for mid-sized businesses and corporations, Microsoft Office provides substantially more value than Open Office (see my articles from the past two weeks on what Microsoft is doing with integrating Office, Windows, SharePoint and its business applications).

Novell is a paragraph and a footnote in the history of IT. They have been marginalized by Windows and their only solution to the loss of market share is to pin their hopes on a phantom chance that the open source movement can propel them back to relevance.

As I write this, I have a remote access session to my computer at work and I am downloading a VMware image of the Novell SUSE Desktop. When I get back to work tomorrow, I will be able to play with it. I will be able to assess just how easy it is to integrate into a domain, add printer drivers and so on. I am anxious to find out if SUSE Desktop truly does integrate in a Windows environment or if it has limited interoperability and is therefore meaningful only to small businesses using a SOHO network with no AD domain.

Details Matter In Advertisements



My trip to Dallas two weeks ago was to a Microsoft conference called Convergence 2006. The focus of the conference was a suite of business enterprise management applications that are scaled based on the size of a business. Applications formerly named Axapta, Navision, Great Plains, CRM and Solomon have all been remarketed under the Microsoft Dynamics brand.

I'm very excited about what Microsoft is doing with Dynamics. The level of integration between the Dynamics suite and other Microsoft platforms like Windows Server, Windows XP, SharePoint Portal and Office 2007 is stunning. What these embedded technologies enable businesses to do is deliver role-based information in a timely, if not real-time, manner. While it is possible to piece together third-party applications on Windows Server and XP, I believe that Microsoft is tying together servers, clients, Office and enterprise functionality with a level of integration that other solutions will be hard-pressed to touch.

Microsoft is focusing on rebranding the suite by pointing out that the integration enables real-time delivery of information. The following print ad was in Time magazine this week [click to enlarge].

DSC00507


The problem I have with this ad is that it depicts paper charts on the wall. Paper charts on the wall do not convey real-time, integrated, role-based data. They convey a sense of kludgy, outdated information that is distributed not based on a role but based on an interoffice distribution list. Microsoft Marketing may be trying to contrast the paper-chart mode with the real-time method of an integrated, role-based system, but I don't think the approach is effective because there's ambiguity about the place paper charts play in the use of Microsoft Dynamics. It isn't clear from the picture whether people are relying on posted paper charts or the Dynamics reports on their screens. This ambiguity doesn't lead a customer to conclude that Dynamics is truly a real-time solution.

Bill Gates Keynote At Convergence



After a bit of a late night, I got up at 5:45 so that I could get showered and breakfasted early. I wanted to get a decent seat at Bill Gates' keynote this morning. I achieved my objective. I was in the middle section in the 15th row.

Bill is not an ultra-dynamic speaker. However, he is efficient as a thinking speaker. By this I mean that his words come out cleanly and he is able to communicate his ideas with few words. This is in marked contrast to two speakers I endured yesterday: gobs and gobs of words coming out without much communicative content. And yes, for those who know me, I recognize that this is a hypocritical comment for me to make. However, comparing Gates' style of speaking with the styles of some of his executives made me realize that I need to cultivate the ability to articulate ideas concisely. Gates was masterful at it. Bill was thoughtful and excited, and he spoke with an economy of words that deeply impressed me.

Microsoft is doing some incredibly innovative stuff. With each passing year, they continue to develop better ways of connecting people with information and this is done by connecting systems and data together in ways that are quite powerful. I have been blown away by some of the things I have seen so far at the conference.

This morning, Bill demoed technology that wowed the crowd and it aroused an emotional reaction from me. He showed three contexts: home, work and travel.

DSC00421

DSC00422

At home, he showed a home computer that was oriented in portrait mode and hung on a wall. On the screen were TV, family scheduling activities, pictures, notes, etc. He navigated the screen with touches, a la Minority Report's virtual 3D hand manipulation, with the key difference that no extra gear was needed. He saw an MSNBC news item that interested him, so he marked it as one to Track. The info was moved to his phone to allow him to follow the story as he commuted.

He was also able to track the actual real-time location of his children, which brought appreciative snickers from the parents in the audience.

DSC00428

At work... oh my. At work, he walked up to a desk that had three large, ugly panels. But as Bill set his phone down on the desk, a wave of predictive realization washed over the audience as they understood that Bill was about to boot up a computer with near-180-degree monitors that were probably 2x3'. It was a wrap-around monitor setup that was amazing.

As soon as he logged in, his MSNBC news story followed him in. He was on a conference call and dropped the MSNBC item onto the video conference screen and all attendees on the call had access to the story.

At the airport, he dropped his phone onto a table that recognized the phone and gave Bill the opportunity to log into his phone and his data. He did this with his fingerprint. The table then displayed a desktop for his phone. Think of this in terms of those portable keyboards for PDAs but with a touch-sensitive monitor. Kind of like a Tablet PC. Bill had received a business card from someone at a meeting, so he put the card on the table and it scanned the contents and put an object representing the card on the desktop. Bill then dragged it to the icon for his phone and the table beamed the card into his Contacts in Outlook. The crowd actually applauded.

It was truly exciting. I felt I was seeing a glimpse into a very real, not-so-distant future. Even as I write this, I am excited. It was a breathtaking demonstration of technology that was actually useful.

Google's Dissonant Views on Information



I am wrapping up the lingering details of rebuilding my computer at work. I'm configuring some of the features of Google Desktop. Just now, I received the following notice:
Please read this carefully. It's not the usual yada yada.

When you use Advanced Features, you may be sending non-personal usage information and information about websites you visit to Google.

For example, Google Desktop sends Google information about the news pages you visit in order to personalize the news you see in Sidebar. We use other non-personal usage data, including crash reports, to help improve Desktop's performance. Please note that none of this data actually tells us who you are; we use it merely to improve Desktop's ability to give you the information that's most relevant to you.

To learn more about our privacy protections, read our Privacy Policy.
Now this is an interesting thing to say when you compare it to two other actions taken by Google:

1. Google resisted the US Attorney General's request to provide anonymized search terms, ostensibly in order to better protect children from child pornographers and (the real reason) to execute a warrantless search for terrorist activities. Google rightly told the Feds to get bent because a). there's no probable cause for the search and b). it doesn't take a whole lot of thought or creativity to come up with search terms that might typically be used by child porn seekers.

2. Google acquiesced to the Chinese government to filter search results for searches originating from Chinese IPs.


According to the notice I received this morning, I can feel safe that Google doesn't track anything related to me personally (... yet I need to log in to my Google account... hmmm), but it's an invasion of privacy when the Feds want the same kind of information. So, if the data collected by Google is so anonymous, why not give it up?

I recognize that I've already answered my own question when I asserted that the Feds' request for data was an unreasonable search with no probable cause but it does seem a bit discordant for Google to tell me their data collection is harmless.


The filtering of Chinese use of Google is disturbing to me. Here is an explanation of the decision from Google's official blog. If you want to skip all the blah blah blah, here's the short version:
We decided to cater to the Chinese government's desire to curtail the free exchange of information by limiting certain kinds of search results so that we can expand our market presence in China. We value revenue over freedom.
Google users in China today struggle with a service that, to be blunt, isn't very good. Google.com appears to be down around 10% of the time. Even when users can reach it, the website is slow, and sometimes produces results that when clicked on, stall out the user's browser. Our Google News service is never available; Google Images is accessible only half the time. At Google we work hard to create a great experience for our users, and the level of service we've been able to provide in China is not something we're proud of.

This problem could only be resolved by creating a local presence, and this week we did so, by launching Google.cn, our website for the People's Republic of China. In order to do so, we have agreed to remove certain sensitive information from our search results. We know that many people are upset about this decision, and frankly, we understand their point of view. This wasn't an easy choice, but in the end, we believe the course of action we've chosen will prove to be the right one.

Launching a Google domain that restricts information in any way isn't a step we took lightly. For several years, we've debated whether entering the Chinese market at this point in history could be consistent with our mission and values. Our executives have spent a lot of time in recent months talking with many people, ranging from those who applaud the Chinese government for its embrace of a market economy and its lifting of 400 million people out of poverty to those who disagree with many of the Chinese government's policies, but who wish the best for China and its people. We ultimately reached our decision by asking ourselves which course would most effectively further Google's mission to organize the world's information and make it universally useful and accessible. Or, put simply: how can we provide the greatest access to information to the greatest number of people?
So, Google justifies their decision by saying that they want to deliver a quality of service that makes the Google engine available on a reliable and rapid basis. They are saying that performance and market penetration were the primary values behind a decision to restrict Chinese citizens' access to the very information that would enhance the penetration of democratic and capitalistic impulses within Communist China.

I would argue that a more effective way of arousing desire for freedom than warring against phantom terrorist threats is to penetrate closed societies with information and ideas that encourage thinking about freedom and a desire for it. Certainly, there is a place for war. However, as I have commented before, the Bush Administration's justification for war today is to propagate freedom in the Middle East. Not only is this a significant departure from the initial basis of justification (secure WMDs before they are used against us) but it is also a fatuous argument: how does killing a country's people encourage democracy?

So, here is Google as a company. I believe that Google is one of the most important -- if not the most important -- assets on the internet and they are choosing revenue and, ostensibly, Quality of Service to justify catering to Beijing's desire to suppress freedom and innovative thought. Google stock is trading at upwards of $380/share. They are a cash cow company and more praise to them for that! I love successful innovative, capitalistic companies.

Yet with affluence comes a degree of responsibility to the disenfranchised. Could not Google subsidize the penetration of free information into China with revenue from open societies? Could not Google, in the case of countries whose governments oppress their people, use their tremendous assets to actively subvert oppressors like Beijing by finding creative ways to give QoS and fully-disclosed search results to the oppressed people of China?

Google's justification in the blog entry I cited above sounds more like a marketing plan than it does a mission for an innovative company whose sole purpose of existence is to get people connected with powerful and useful information. It is a noble-sounding excuse that seeks to get us to forgive their unwillingness to subvert oppression. Instead, it is a hollow statement of marketing corpo-speak.

When taken together, Google's positions on the information it pulls from my computer, the information it withholds from the Feds and the information it intentionally restricts from Chinese citizens seem profoundly discordant to me. It seems to me that Google has given up an opportunity to be an innovator for the democratic process in exchange for Chinese yuan that nicely convert to American dollars.

OpenOffice Assessment: Pointless



I'm done with my OpenOffice test. It might not have even been two weeks. I just don't like OpenOffice. OO doesn't offer enough to compel me to switch. There's not one damn thing that's innovative about it, even though the open source zealots say that companies like Microsoft suppress innovation. All OpenOffice achieves is a third-rate copy of Microsoft Office.

In order to be even marginally compelling, OpenOffice has to mimic MS Office capabilities, menus, commands and file structure. Why? Because without MS Office mimicry, OpenOffice cannot attract acolytes merely by being free. There is better value in Office's GUI and it offers substantially better functionality than OpenOffice. Sun has tried to level the playing field by adding MS Office-compatible macros to StarOffice, the for-pay version of OpenOffice. But this rests on the notion that the power of Office lies in its scripting capabilities. While macros are certainly useful and cool, the better value in Office is its ability to integrate with SharePoint and SQL Server. The open source suites cannot begin to touch this functionality.

This means that OpenOffice only holds appeal for small businesses who do not yet appreciate the value of MS Office. However, at some point, as businesses grow beyond simple letters and spreadsheets, they will have to abandon OpenOffice for MS Office because the open source version cannot scale to the needs of a growing company. At some point in a company's growth path, integration with the Windows platform will actually mean something to the business because of the value that integration delivers. At that point, OpenOffice will be abandoned.

But this is only an issue for people who opt for OpenOffice because of the price. Most other consumers and business people will not want to deal with the subset of functionality, the juvenile GUI and the unfamiliar ways of doing things. They will make the better decision and buy Office.

So, all I really got out of using OO is reinforcement that ubiquity is indeed one of the most powerful -- if not the most powerful -- dynamics in the computing industry. Ubiquity drives standards, stability and norms, and it eliminates the risk associated with adopting less popular platforms.

So, it's time to press the button:


Amazon Continues to Blow My Mind

This weekend, I am researching what is called a public key infrastructure (PKI) on Windows Server. Because of a government contract we have, we need to give people the ability to encrypt their email. Windows Server's Certificate Services let us build a PKI that enables us to do this.

I've played around with PKIs on my home network and you can actually come to my server and get a digital certificate. But I need a level of knowledge that extends deeper than "I've dinked around with it a bit."

This being the case, I went to Amazon to look for books on PKI. I found a Microsoft Press book specifically on PKIs (which was exactly what I wanted). What really amazed me, though, was a new Amazon feature called Statistically Improbable Phrases. You can see this section by going to the book link here. Let me quote Amazon's description of what Statistically Improbable Phrases tells a user:
Amazon.com's Statistically Improbable Phrases, or "SIPs", are the most distinctive phrases in the text of books in the Search Inside!™ program. To identify SIPs, our computers scan the text of all books in the Search Inside! program. If they find a phrase that occurs a large number of times in a particular book relative to all Search Inside! books, that phrase is a SIP in that book.
SIPs are not necessarily improbable within a particular book, but they are improbable relative to all books in Search Inside!. For example, most SIPs for a book on taxes are tax related. But because we display SIPs in order of their improbability score, the first SIPs will be on tax topics that this book mentions more often than other tax books. For works of fiction, SIPs tend to be distinctive word combinations that often hint at important plot elements.
Click on a SIP to view a list of books in which the phrase occurs. You can also view a list of references to the phrase in each book. Learn more about the phrase by clicking on the A9.com search link.
This is simply incredible to me. The sheer processing power and comparative indexing required to pull off this highly useful reference is mind-blowing. Staggeringly brilliant and useful.
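Amazon hasn't published its exact algorithm, but the description above boils down to a ratio: how often a phrase appears in one book versus how often it appears across the whole corpus. Here is a minimal sketch of that idea in Python. The function name, the add-one smoothing and the minimum-count threshold are my own assumptions for illustration, not Amazon's implementation:

```python
from collections import Counter

def sip_scores(book_phrases, corpus_phrases, min_count=3):
    """Rank phrases by how overrepresented they are in one book
    relative to a whole corpus (a rough SIP-style improbability score).

    book_phrases / corpus_phrases are lists of phrase strings."""
    book = Counter(book_phrases)
    corpus = Counter(corpus_phrases)
    book_total = sum(book.values())
    corpus_total = sum(corpus.values())

    scores = {}
    for phrase, n in book.items():
        if n < min_count:
            continue  # skip phrases too rare in the book to be meaningful
        book_rate = n / book_total
        # add-one smoothing so a phrase absent from the corpus
        # doesn't cause a division by zero
        corpus_rate = (corpus.get(phrase, 0) + 1) / (corpus_total + len(corpus))
        scores[phrase] = book_rate / corpus_rate

    # highest improbability first, like Amazon's ordering
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Run against a tax book, a phrase like "alternative minimum tax" would score high only if this book uses it far more than other tax books do, which matches Amazon's description of ordering by improbability.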

Amazon and Google are the two most deeply, meaningfully useful sites on the web.

Fortune Magazine: Why Globalism Matters to You



In the November 28, 2005 issue of Fortune is an article I think is important for people to read. It strips away the mystique of hatred for Wal-Mart and points the reader toward why globalism is a force that is sweeping away old economics. The "problem" of globalism is caused by consumers like you and me. We like stuff cheap. Wal-Mart has merely capitalized on global economies better than any other company.

Please read this article.

Dave




Executives at Wal-Mart are worried that Robert Greenwald’s new documentary film about the company—Wal-Mart: The High Cost of Low Price—could become a cult hit on the order of Michael Moore’s anti-GM rant, Roger & Me. So my first piece of advice to CEO Lee Scott and his team is: Stop worrying about the movie. It’s a jeremiad—a ham-handed snore with none of the humor, craft, or story sense that made Moore’s film so engaging. The people who already hate you will love it, but nobody else will be able to sit through it. My second piece of advice is to worry deeply about what the film represents. It’s a response to the great social disrupter of our time—the emergence of a friction-free global economy. This new film, awful though it may be, is a cry from the hearts of people being wrenched from the old world into the new and not liking it. There are millions of them, and they will demand to be heard in the media, the markets, and government. And the world’s largest corporation is, inevitably, the most inviting target they can find.

Why they’re unhappy is no mystery. In the new world it’s possible to coordinate supply chains and distribution networks with precision and efficiency never before imagined. Result: big-box retailers with extremely low prices. Wal-Mart’s critics (including the new movie) dwell heavily on how the company heartlessly drives small-town stores out of business. One never hears the obvious problem with that allegation: that Wal-Mart can’t drive anyone out of business. Only customers can do that, and millions of them happily drive right past those little stores because they’d rather pay lower prices. Of course it isn’t just Wal-Mart that draws them. Home Depot and Lowe’s have been death for small hardware stores, Zales for mom-and-pop jewelry shops, Sports Authority for the old sporting goods retailers. They’re all using the plunging cost of computing power and telecommunication to create previously impossible business models that give customers what they want. That trend is not going to stop.

The new world also makes it impossible for employers to pay people as they used to. Maybe the most important part of the new world for many Americans is the advent of a genuinely global labor market, in which workers around the world compete. Of course nobody in Mumbai can directly take the job of a retail clerk on the floor of a Wal-Mart. But a lot of labor is fungible; a given person could work in a store or factory or office. So global competition for workers in factories or info-based jobs, where work can be offshored, pushes down the pay of millions of others—bad news for Wal-Mart employees and potential employees.

A big chunk of the documentary concerns the fact that many Wal-Mart workers don’t get very good medical coverage—or any at all. Again, welcome to 2005. Everybody’s medical coverage is getting stingier because in a global economy, where U.S. workers compete with those in Datang and Wal-Mart competes for capital with every other business on earth, American companies can’t continue paying the world’s highest health-care costs. Don’t blame Wal-Mart; blame America’s inability to devise a national health plan that takes the burden off employers.

The film includes a few allegations of illegal conduct by Wal-Mart managers, and obviously nothing can excuse that. The big question is whether such behavior is systemic, as the film suggests but doesn’t prove. Until there’s better evidence, one should be agnostic on this question, which is not the same as giving Wal-Mart the benefit of the doubt. The company’s growth has been slowing, and it’s under pressure from investors to improve results. As that pressure gets transmitted down to stores, it’s easy to imagine managers doing things they shouldn’t.

If that’s happening and Wal-Mart doesn’t fix it, the results could be dire. This is a battle, and nothing ordains that Wal-Mart must win. The forces of discontent could enable competitors to find toeholds and over time reduce it to just one of America’s several major retailers. What’s critical to realize is that it wouldn’t really matter. This film’s greatest disservice is to tell people, as it does in its closing sequence, that victory consists of stopping Wal-Mart. That’s a delusion. The only true victory will be adapting to the world that’s coming, like it or not and regardless of who brings it.

The Attack Paradox: Why Windows Is Safer Than Linux

Mac Malware Door Creaks Open

Having been married for a while, I usually try to avoid "I told you so" because it's not a very effective way to build good will. Nevertheless, I have been saying for two or three years that when/if the open source and Mac OS X operating systems get more market presence, they are going to be attacked and exploited. I have often challenged the rabid Slashdot claims that Windows is inherently insecure and OSes like Linux, Unix, Mac OS and Solaris are inherently more secure. When I have done so on Slashdot, I've been eviscerated with lots of open source groupspeak, stale talking points and plenty of arrogant passion but not a whole lot of balance, fairness or reason.

Probably my best commentary on this debate can be found at [ My Authoritative Perspective - Which Is More Secure? ] In this article, I argue that while it is true that software manufacturers have a responsibility to write secure code, it is also the responsibility of sysadmins to keep their workstations and servers updated. Attacks by known viruses succeed against systems that have not been updated, and that is not Microsoft's or Linux's or Mac OS X's fault.

What delights me about the linked article about Mac malware is that it supports what I've been saying: I don't believe that one OS is more secure than another. And, I have also stated what I call the Attack Paradox: The fact that MS has been so vigorously attacked over the years actually means the Windows platform is more secure than others because its weaknesses are identified by hackers and patched by Microsoft. This then diminishes the attack opportunity for Windows. This is a concept that is hard to conceive at first: how can a platform that constantly has security issues be more secure than one that doesn't?

Windows and IE are the primary targets of exploits. Each time Microsoft is successfully attacked, their programmers develop patches to fix the weakness. This provides two opportunities that wouldn't be available had the attack not succeeded: 1) they get better insight into their code, their coding methodology and their supporting frameworks; and 2) they gain more and more insight into the pathology of the hacker. This knowledge informs their coding after the attack. Platforms that aren't as vigorously challenged, because they are not as ubiquitous as Windows, unquestionably have as-yet-undiscovered weaknesses, but the absence of frequent attacks means their programming teams aren't learning as much valuable information.

I predict that Linux and Mac OS will be proven to have many as-yet unexposed security issues. As hackers become more interested in Macs and Linux boxes, we will see a sharp rise in the number of exploits developed for these "inherently more secure" OSes.

Until that happens, the Mac malware article seems to support my opinion that Anything-But-Microsoft operating systems are not inherently more secure. The Linux development model does not channel the talents of its programmers toward producing inherently secure code.

I told you so.

Saturday, April 15, 2006

Why Do You Use IE?



The Inside Microsoft blog has this article. It's a short article so I have quoted it below:
Most Firefox fans were able to cite specific things they liked about the browser, but those who used Explorer, for the most part, fell back on the “it’s all I know” argument, presenting what could be a huge marketing opportunity for Firefox.

Ummm, no, it doesn't present a great marketing opportunity for Firefox. Here's why:

When you interview geeks and normal computer users about their browser preference, you will get two different answers. Geeks are aware of the security risks of surfing the web, particularly pernicious places like warez, hacking and porn sites. Consequently, geeks ought to understand that there was a significant difference between the security of Firefox and the security of IE. (I say was because Firefox has had a number of security weaknesses exposed as the browser has gained popularity. I won't rehash my arguments about inherent security claims here.) Because geeks are aware of malware threats, they take seriously the notion of safe browsing. Firefox has earned an (undeserved) reputation as the condom of the internet: if you want to surf safe, use Firefox. If you don't care whether you catch a disease, surf IE.

Ordinary end users, in contrast to knowledgeable geeks, have little to no awareness of malware. They may be vaguely aware that it is "risky" to surf the net but they don't understand malware, how it gets into their systems and how it works. They only become aware of malware after their system has already been debilitated. End users simply don't understand the relationship of a browser to the threat of malicious software.

Consequently, there is little marketing opportunity, because to get people to desire Firefox, they first must understand how malware works and why a browser might make a difference in securing their surfing experience. Users must have both an understanding of the risk and the willingness to go through the process of downloading and installing Firefox. Yes, for the knowledgeable computer user this is simple, almost trivial. For the average home user, however, there is just not enough incentive to bother with Firefox when IE is already there.

I think there are three types of people who insist on Firefox:

1. People who still believe dogmatically that FF is inherently more secure than IE.

2. People who prefer the functionality of FF to IE, e.g. tabbed pages.

3. People who have geek friends who have insisted that FF is "better" than IE.

I would admit that on a fresh build of Windows XP with no service packs nor IE updates installed, FF may be more secure than IE. However, with a current build of Windows XP SP2 and all current Windows updates, there is no appreciable security differential between IE and FF. The only security risk posed by IE is on a system that is not maintained.

Let me flip it the other way: If you had a box with Windows XP SP2 and all current patches for Windows and IE, that box's IE would be more secure (notice I didn't say inherently more secure) than an unpatched Windows machine running unpatched FF.

So, the issue is not whether a browser is safer than another. The issue is whether a user or administrator keeps their systems conscientiously updated. A well-maintained computer system has minimized its attack surface and is therefore more secure than an unpatched system. The browser is a negligibly relevant factor in overall system hardness when a system is conscientiously maintained.

The Paradox of Consumer Sensitivity to Fuel Prices



I have had a theory about consumer sensitivity to fuel prices for a few years. My theory is this: people will protest gasoline prices significantly more often than they will protest natural gas/heating prices. Sensitivity to gasoline prices is greater than heating fuel prices because consumers pay for gasoline more frequently than they pay for heating fuel.

I fill my tank about 1.3 times per week, every week of the year. That means my awareness of gasoline price changes is reinforced through five or six refuelings per month, nearly 70 events per year. As a result, I have a finely tuned sensitivity to the rise and fall of gas prices, and so do you. We notice minor fluctuations, especially increases.

Now, consider heating fuel prices. There are three factors that diminish our sensitivity to natural gas prices:
- we only pay the bill once a month
- we only pay during the 4-5 months of cooler weather
- there is a 7 - 8 month span of time where we don't pay a heating bill

So, we have 5 months during which we pay once a month and after winter, we have 7 months where we aren't reminded of winter heating costs. For these reasons, we aren't as sensitive to heating costs as we are to gasoline costs.

The question though, is: Which fuel has the greater impact on our finances over the course of a year?

For auto fuel consumption, I make the following assumptions:
- Avg. fuel economy: 23 miles/gallon
- Avg. Miles driven/year: 15,000 (~1250 miles/month)

So, if gas during Month One is $1.86 and Month Two increases to $2.40, we get these values:

Month One Cost: $101.09
Month Two Cost: $130.43

So, gas for Month Two cost only about $29 more than Month One. Suppose the price stayed at that higher level for the next 11 months: the total extra spent would be 11 months * $29/month = $319.

For natural gas consumption for a typical winter:
One CNN Money page reports that natural gas costs will increase 64% over last year to heat the average home for an average winter. Last year, it cost on average $957 to heat an average home during the winter. This year, the estimated total will be $1,568.

This is $611 more than last year.

So, not only will average home owners pay $611 more to heat their homes during the winters, they will pay that extra $611 over the course of 4 - 5 months.

Contrast this with gasoline, even with the generous assumption that the higher price persists month after month.

- Gasoline: +$29/month, or +$319 for a whole year
- Natural gas: ~+$122/month, or +$611 for 5 months of heating
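The arithmetic above can be reproduced with a few lines of Python. The mileage, mpg and pump-price figures are the post's own assumptions, and the heating totals come from the CNN Money estimates cited earlier:

```python
# Gasoline: the post's assumptions
miles_per_month = 15_000 / 12          # ~1,250 miles/month
mpg = 23
gallons = miles_per_month / mpg        # ~54.3 gallons/month

month_one = gallons * 1.86             # about $101.09
month_two = gallons * 2.40             # about $130.43
extra_gas_per_month = month_two - month_one   # about $29
extra_gas_year = 11 * 29               # $319 if the higher price persists

# Natural gas: CNN Money's winter estimates
extra_heating = 1568 - 957             # $611 more than last winter
extra_heating_per_month = extra_heating / 5   # about $122 over a 5-month season

print(round(month_one, 2), round(month_two, 2), extra_gas_year, extra_heating)
```

Note that $611 spread over five billing months works out to roughly $122 per month, about four times the monthly gasoline increase.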

And this is the paradox of consumer sensitivity to energy prices. We complain about gasoline prices to the point where it is considered a factor in people's judgment of President Bush's effectiveness yet we completely ignore the cost of natural gas!

To review: consumers are more sensitive to gasoline prices than to natural gas prices, even though natural gas carries the greater total cost and impact to a household budget. The frequency with which people refill their tanks sensitizes them to fluctuations in gas prices; people consider a 10-cent jump per gallon significant, even though it has minimal impact on their budget. In contrast, people have little sensitivity to natural gas prices because they pay once a month, for only a few months of the year, with long stretches where no significant gas bill arrives. Yet the more meaningful cost increase is in natural gas.

And nary a peep of protest is heard from people.

Fascinating.

Open Source R&D




One of my esteemed readers made the following comment about open source R&D on one of my OpenOffice comments:
To me, open source SHOULD have a more intuitive interface as well as features. OO clearly is missing that boat... ok missing the dock.... ok it is in the middle of a desert. As I see it, R&D dollars shouldnt be an issue as more open source geeks are the type who would say "this sucks, I am going to modify it" then turn the code over the the originating "owner" Where as I would think non-OS would run into too much red tape before doing something cool.
There are at least three problems when it comes to the open source development of an application suite with consistent functionality and an intuitive, visually enticing GUI:

1. lack of R&D dollars
2. lack of coherent development focus
3. lack of sustained, consistent usability testing


Lack of R&D Dollars:

The use of labor to produce something always has a cost associated with it, usually money. Open source software development is no exception. OS software development is approached either from a hobbyist perspective, as implied above, or by a for-profit company like Sun, IBM or Novell that distributes the result for free to its user base. Since corporations are usually in the business of making money, there is usually a strategy of revenue generation at the back end (e.g. consulting services, hardware) to compensate for the price of Free at the front.

However the software is distributed, what is inescapable is the front-end costs associated with software development. Until we make software that can write other software without human intervention, software will always cost money to make. This is true for the fundamental reason that people have this relentless dependence on food, shelter and clothing and investors have a relentless desire for returns on investment.

Why do OpenOffice and StarOffice have such kludgy GUIs? In part because they are developed from the beginning as something given away for free or very cheaply (Sun charges ~ $35/set for StarOffice; OpenOffice is completely free). Because businesses are essentially oriented around generating revenue and profit, there is a limited amount of R&D dollars that can be spent on software intended to be low-cost because the expenditure is up front and the actual revenue generated through consulting or training services or hardware sales or maintenance contracts can't be entirely predicted. This implies a difficult decision about how much to invest in an open source project so that the final result has some degree of market acceptance but which doesn't require a company to overextend itself on the revenue it assumes will be generated by back-end services.

On the other side of OS development is the hobbyist programmer. Their primary investment is time. Presumably, they don't need money to code open source projects because they make money in some other manner. Yet the hobbyist programmer simply cannot invest the time needed to develop a solid GUI because they flat out lack the time and work capacity to do so. I suspect that OS programmers are motivated more by pride and a belief in the GPL than a desire for money, yet pride and ideology may not be enough to maintain a sufficient pool of adequately talented programmers to see a large-scale open source project through to completion. Similarly, it is difficult for an open source project manager to rely on hobbyist programmers, who will have varying levels of consistency and follow-through. Because the hobbyist programmer does not have their livelihood tied to an open source project, they have little extrinsic motivation to follow through.

In contrast are corporate development projects that are well-funded (at least in the case of Microsoft this is true). While corporate programmers can lack the intrinsic motivation that is probably more prevalent on open source projects, they may be motivated not only by "finish the project or else..." but by the resources available to them that hobbyist programmers would lack.

Lack of Coherent Development Focus:

In a pure hobbyist development model, there is no coherent development focus guiding decisions about functionality, GUI aesthetics and under-the-covers methodology. This is significant because there needs to be some method for concentrating unaffiliated hobbyist programmers so that they develop a product that functions and meets customer needs. It is rare for a person with strong programming skills to also possess an appealing sense of aesthetics and an awareness of what makes GUIs effective. So, yes, the open source model allows a hobbyist programmer to look at something in a product and say "Hey, I don't like that; I'm going to change it," but this does not mean the changes make sense or will lead to a product with high user acceptance. In fact, if too many hobbyists have that response to what they see and they are not guided by a development focus, the changes become chaotic and any potential quality in the development will fall.

In a corporate environment that develops OS apps, there is more of a coherent focus, but again, R&D efforts for software destined to be free will be limited by budgeted cash. Most likely, however, a company like Sun Microsystems or IBM will already have internal development methodologies that can be applied to open source projects. The corporate-sponsored open source project will be much better positioned to create good software than a loose affiliation of hobbyists because there will be more internal compliance with a development model.

But again, the intent of a corporation is to generate revenue and profit. Spending money on development of an application that will be given away for free is a bet on the ability to generate revenue later on back-end services. So, for example, Sun doesn't view the code for OpenOffice as a corporate asset that holds measurable economic value. They see it as a magnet that will help induce alternate revenue streams. In contrast, Microsoft views the proprietary code of Office as a corporate asset with measurable economic value, not only at the front end but also in Office's ability to integrate with other technologies like Exchange, SQL Server and SharePoint, all of which generate up-front revenue.

Microsoft has a robust and vigorous development model. They have a solid development platform (.NET), an application architecture that separates code into three layers (user interface, business logic and data abstraction) and a lifecycle development methodology. My reader is correct in saying that corporations often inject "red tape," otherwise known as politics, into the mix, but overall I would say that a well-managed software project can be designed to circumvent politics and corporate policy.

Lack of Sustained, Consistent Usability Testing:

In both cases, the open source corporation and the hobbyist programmer, there will be a lack of sustained usability testing. Why? Because it is expensive. Hobbyists lack not only the dollars but also the coordination with other hobbyist programmers needed to do it. Corporate open source projects are limited in what they can spend on usability testing and user acceptance.

Usability testing requires iterative development cycles, where each cycle receives usability feedback that then leads to another cycle of decision-making and development. Microsoft has invested boatloads of money into Office usability and has made some wise decisions about Office integration with other MS products (e.g. SharePoint). Hobbyist programmers simply do not have the labor and dollar resources to field-test their changes with users.

Certainly, as my contributor pointed out, there is a lot of internal friction to overcome within a large corporation to get changes out efficiently. However, there is a tremendous investment in Office. Office and Windows are easily Microsoft's primary sources of revenue, and the ongoing development of Office indicates that they do not take their dominance lightly. They want to maintain their position, so they pump millions into R&D so that it does not erode. That erosion is, in my opinion, almost a certainty, because I am convinced the browser will replace Windows as a computer OS. Nevertheless, Microsoft has a substantial stake in the security of Office's dominance, and so they make significant investments in user acceptance.

In my opinion, software development cannot escape the need for a centralized layer of decision-making; someone must have the authority to make a decision after everyone has voiced their ideas. Software with expectations of broad user acceptance needs a top-level source of cash and decision-making authority. All one has to do is surf download.com for examples of the quality of software most hobbyist programmers churn out and compare it to Office 2003. Then compare hobbyist software to OpenOffice and notice the similarities. OpenOffice has a hobbyist feel to it. Using OO leaves one feeling a bit hollow, as if something is missing.

That something is exactly what you get when a product has ample R&D dollars, a consistent and coherent development methodology, and iterative cycles of usability testing leading to high user acceptance. OpenOffice is what happens when those three factors are lightly weighted.

Bottom line: Open Source simply cannot touch the sophistication of corporate software development.