Tuesday, October 24, 2006

MAVERICK RICHARD STALLMAN KEEPS THE FAITH

When Richard Stallman gave Bill Gates the finger in front of Stanford's computer science building, I got nervous. No, it wasn't the real Bill Gates -- it was just his name, engraved in giant letters over the main entrance to the 2-year-old, Gates-funded building. But it didn't seem like a Stanford thing to do. The campus is immaculately manicured, dotted with picture-postcard palm trees and squeaky-clean students. It's just not a flipping-the-bird kind of place.


It didn't strike me as a Richard Stallman kind of place, either. Stallman is a legendary hacker, the founder of the free software movement, a MacArthur "genius grant" recipient and a programmer capable of prodigious exploits. But on this day in Palo Alto he looked unkempt and off-kilter. I had already spent a good part of the afternoon watching in bemused silence as he painstakingly examined his long, stringy brown hair for split ends. I was also mesmerized by his piercing green eyes, radiating the power of an Old Testament prophet. I feared his wrath.


We had come to Stanford in search of a place where Stallman could download his e-mail. Two hours away from catching a long flight to New Zealand -- partly for vacation, partly to continue proselytizing his free software "mission" -- Stallman was jonesing for one last connection to the Net. Being Richard Stallman, he figured he could just drop in on the computer science department at Stanford. He hadn't visited for several years, but he was good friends with equally legendary Stanford professor John McCarthy -- the man who invented the Lisp programming language and coined the term "artificial intelligence." Stallman himself programmed the multipurpose Emacs editing tool, a kind of nuclear-powered Swiss Army knife favored by top-notch programmers and computer scientists. Surely some Emacs acolyte would be delighted to help the one and only Richard Stallman grab his e-mail.


First we tried to sneak in through a side door of the Gates building. Over a lunch of ribs, duck, trout and popcorn shrimp at Palo Alto's MacArthur Park restaurant, Stallman had told me that he didn't despise Bill Gates as much as other free software guerrilla fighters do. But he clearly wasn't eager to legitimize Gates' stature by walking submissively through his totemic gate. Free software and Microsoft don't mix. There had to be a better way.


Except there wasn't. The path to McCarthy's office from the side door entrance was obscure. We sucked in our guts and headed for the main gate.


"Hey," Stallman called out to a graduate student opening the door in front of us, "is it the tradition here to give Bill the finger whenever you go through these doors?"


The student looked over his shoulder, twitched a nervous smile and disappeared inside. Stallman shrugged -- and right there on the spot decided to start his own protest movement. As we entered the building, out came what the ancient Romans used to call the "digitus impudicus." Stallman flashed me a sly grin. I glanced around, looking for security.


Over the course of about half an hour in the building, Stallman encouraged three other people to join his campaign. No one signed on unreservedly, but two recognized him right away -- one from a conference some six years earlier and another from his picture in a recent Forbes magazine article celebrating the surprising commercial success of the free software (or, as it is now more commonly called, "open source") movement.


Would the movement to deride Gates have as much success? Stallman didn't know and didn't care. As he pointed out to me repeatedly through the course of our afternoon together, he doesn't do things because they are socially acceptable or strategically appropriate. Success is not his measure for accomplishment. He does what he does because he thinks it is the morally correct, or simply fun, thing to do. And he brooks no compromise.

Monday, July 24, 2006

The Challenge of the Multi-site Nonprofit

Why is it more difficult for nonprofit organizations than for, say, retail chains to run efficient multi-site operations? A recent Harvard Business Review story concluded that nonprofits waste $100 billion a year through inefficient fundraising and dispersal practices and clumsy administrative operations.


The problem, according to Harvard Business School professors Allen Grossman and V. Kasturi ("Kash") Rangan, rests in inevitable tensions and battles for power that arise between national headquarters and local operations.


Not helping the problem is the fact that many nonprofits employ management techniques developed for for-profit companies. "We say this is not a good approach," said Grossman. Instead, nonprofits need their own management practices that recognize the unique characteristics of the nonprofit enterprise.


Grossman and Rangan presented their findings—and some possible solutions—at the Faculty Research Symposium held at HBS on May 20.


Whether a particular nonprofit organizational structure favors central or local control, inevitable tensions develop between national headquarters and local operations, Grossman said. These disputes often grow out of the characteristics that differentiate nonprofits from for-profit concerns:
The real value creators for nonprofits are the dispersed units, where money is raised and good deeds accomplished. With for-profits, headquarters is usually where the value is created.
A constant power struggle takes place between local and national leaders.
Wide use of volunteer labor makes worker motivation more of an issue.
Up to 60 percent of a nonprofit CEO's time is spent fundraising, time that could be spent building a more effective organization.
The strong emotional environment around nonprofits can challenge rational decision making.
Nonprofits often have a cultural opposition to structure.
Nonprofits lack an established theory of management practice.


These characteristics lead to a number of disputes between headquarters and affiliates, according to Grossman and Rangan. For example, who controls the donor list and money raised? Is it the affiliate that actually raises the funds, or the national organization that provides the overall brand and direction? Are affiliates delivering the level of service defined by the national organization? Does the affiliate adequately represent the values and goals of the national brand? Do affiliates receive an appropriate level of national services from the fees they pay?


Traditionally, these strains have fueled the centralization-versus-decentralization debate in the nonprofit community, Grossman told his audience. And that's the wrong approach to take—it loses the value proposition created by a national organization with local units. Instead, the debate should be reframed with autonomy and affiliation as the key dimensions.


In their research, Grossman and Rangan looked at the system behaviors of five nonprofits: Outward Bound (where Grossman once served as CEO), Planned Parenthood, Habitat for Humanity, SOS Kinderdorf, and The Nature Conservancy. Each was mapped on two dimensions—one that exerts forces toward unit autonomy and the other influencing the degree of organizational affiliation. (The Nature Conservancy and Habitat for Humanity were the highest affiliation organizations; Planned Parenthood and Outward Bound were high autonomy organizations; SOS Kinderdorf ranked lowest on the autonomy scale, and about in the middle for affiliation.)


The point isn't whether an autonomy-biased organization is better than a more affiliation-driven one, but rather to identify "levers of influence" that system managers can employ to move their organization toward the desired balance between the two forces.


"Headquarters should undertake actions to enhance system value and then sustain it, and affiliates should maximize local resources to enhance their credibility and increase their voice in the running of the system," Grossman and Rangan wrote in a working paper on the subject. "The key for management is to develop a governance system that accommodates this tension in a constructive rather than a destructive fashion."


For unit autonomy and organizational affiliation to coexist constructively, organizations must have in place a clear process for deciding who will perform the functions the system requires, and a process that commits operating units to adhere to system-wide decisions.


At the seminar, Rangan cautioned that there is no single cookie-cutter approach that solves the problems of all nonprofits. For example, if a meal kitchen has 100 strong local operations, there is little need for national staff to come in, other than to provide a few services.


Among their conclusions, Grossman said, are:
A multi-site nonprofit's value proposition must be real, and consistently communicated externally and internally.
Strong unit autonomy and strong systems can be compatible and mutually supportive.
Lack of understanding of multi-site dynamics leads to a waste of money and resources.


More research is underway on several issues. To what degree are corporate structure and strong unit autonomy incompatible? How much strong national leadership is required to complement local program customization? And how important is national leadership in determining the success of a multi-site nonprofit?

Saturday, June 10, 2006

VMWare - Virtualization

VMWare - Virtualization: the new business model for low-cost virtual machines

Friday, March 24, 2006

The Secret of How Microsoft Stays on Top

Perhaps no technology company outside of IBM has been able to keep on top of the industry as much as Microsoft. What's more, Bill Gates & Co. have achieved this success during times of incredible technological transformation, precisely the periods when titans are most vulnerable to being knocked off by disruptive technologies.


To understand the way Microsoft manages IP, you have to go back to the roots of the company. Back in the late 1970s, its first products were aimed at helping other programmers develop applications for the computing hardware of the day. It focused on developing programming platforms, in contrast to most other firms, which focused on stand-alone applications. It was an approach that permeated both its tools business (the software it provided to other programmers for developing applications) and its operating-system business (the software on which those applications would run).


It was during these early days that Microsoft began to invest in creating libraries of programming "components": building blocks of intellectual property that could be used to develop different software applications. The original impetus was the need to provide programmers with pre-defined interfaces through which they could access commonly used functions and features. Why reinvent the wheel if someone else had already worked out what it should look like? In essence, Microsoft began codifying knowledge and embedding it in a form that could be leveraged, both by itself and others. But it got to decide which components to "expose," and which to keep hidden, providing a mechanism through which its core intellectual property could be protected.
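
To make the "expose some interfaces, hide the rest" idea concrete, here is a minimal Python sketch. The class and method names are invented purely for illustration; Microsoft's actual components were compiled libraries reached through published programming interfaces, not Python classes.

```python
# Hypothetical illustration of a reusable "component" that exposes a small
# public interface while keeping its internals hidden. All names here are
# invented for this sketch; this is not Microsoft's component framework.

class SpellChecker:
    """A shared component: any application can call the exposed methods."""

    def __init__(self, dictionary):
        self._dictionary = set(dictionary)   # internal detail, kept hidden

    # Exposed interface: stable entry points client code may rely on.
    def check(self, word):
        return word.lower() in self._dictionary

    def suggest(self, word):
        # Internal heuristic; callers see only the documented behavior,
        # so the implementation can change without breaking clients.
        return [w for w in self._dictionary if w.startswith(word[:1].lower())]


# Two different "applications" can reuse the same component instead of
# reimplementing the functionality themselves.
checker = SpellChecker(["platform", "program", "component"])
print(checker.check("Platform"))   # True
print(checker.suggest("pro"))      # e.g. ['platform', 'program']
```

The point of the sketch is the division of labor: client code depends only on the exposed methods, while whatever sits behind them remains the component author's business.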


As the company expanded, Microsoft formalized this component framework and developed a "programming model" to go along with it—in essence, defining the way that applications should interact with its preexisting software components. It extended the model to its application business, sharing increasing amounts of code between products like Word and Excel. Over time, as more and more partners signed up to use the model, developing applications for Microsoft's operating systems and using Microsoft's tools in the process, the power of the platform became evident. It was a win-win relationship—the community of development partners received benefits in terms of enhanced productivity, while Microsoft's position was strengthened through the deployment of products that were complementary to its own. This made it tough for competitors. They were not just going head-to-head with Microsoft's products—they were also competing against the repository of knowledge accumulating in Microsoft's component libraries.


By now, you will see that Microsoft was building a rather unique resource. Its approach to software "componentization" allowed the firm to leverage intellectual property across multiple product lines. And it also made it attractive for third-party firms to leverage Microsoft's platform, as opposed to others. But how did this allow the firm to respond effectively to technological change? First, it had an established base of knowledge that could be brought to bear on newly emerging opportunities. Second, it had a well-defined process through which new intellectual property could be codified and integrated into this knowledge base in a way that ensured compatibility with its existing components. And third, it established processes to evolve this knowledge base to ensure it reflected changes in the broader technological context. For example, the programming model was updated in the early 1990s to reflect the increasing use of networks. Then later in the 1990s, Microsoft once again began "re-architecting" its component base to facilitate the delivery of "Web services," applications that can be activated remotely over the Internet.


Putting this all together, we see that much of Microsoft's long-term success can be attributed to investments that have created "dynamic capabilities" for responding to technological change. These investments include: the process of software componentization through which it captures and embeds intellectual property in an accessible form; the component libraries that result from this process, which form a vast repository of knowledge that can be leveraged across its product lines; a programming model that allows developers, both inside and outside the firm, to access these components through well-defined interfaces; and the process through which both its software components and programming model are updated to reflect developments in the broader technological context.


Microsoft has been criticized as a company that relies more on predatory tactics than great products and innovation to succeed. What can you say about Microsoft's product development performance over the years?


We analyzed the development performance of Microsoft products for the past fifteen years. Our aim was to come up with an objective measure of performance—one that was unrelated to arguments about market power, monopoly position, or predatory tactics. This meant we excluded any consideration of measures like market share or profitability, and focused instead on the ratings given to Microsoft products by independent reviewers. We found that Microsoft products were consistently rated highly when compared to competitive offerings, a result that held true across different product categories and over time. On average, Microsoft products "won" more than two-thirds of the competitive reviews we examined. Indeed, only once in fifteen years did Microsoft products fail to win more than 50% of these reviews. Given the number and diversity of competitors they faced in each different product category, this consistently high performance is striking.


We also evaluated Microsoft's response to a "technological transition"—a major change in the industry that required the firm to rethink its strategy. We chose to examine the rise of the World Wide Web, given that this transition brought about the rise of a new product category—the Web browser. Microsoft therefore needed to develop a product based on technologies with which it had little previous experience. Our analysis focused on Microsoft's first two internal browser development projects, comparing their performance to a sample of Internet software projects completed at the same time. We discovered that Microsoft's projects exhibited significantly higher productivity than the sample average. Furthermore, we found that the resulting products were rated as equal to or higher in quality than competitive offerings. These results often surprise people, given the perceived wisdom that incumbents have difficulty responding to major technological changes.


Microsoft was originally late in its embrace of the Internet. Yet Bill Gates was able to quickly change strategy to allow the company to become a top competitor in selling Internet-related technologies and services. How did Microsoft accomplish this?


In any industry subject to rapid technological change, a firm faces two big challenges. The first is recognizing the threats (and opportunities) presented by newly emerging technologies. The second is mounting an effective response to those threats. Microsoft appears to have solved both problems, giving it the ability to adapt quickly to changing circumstances. The way it has tackled each, however, differs in nature.


In terms of recognizing potential threats, Microsoft has built-in "sensing" mechanisms to keep abreast of what is happening in the broader technological context. Much of this ability comes from their tools division, which tracks the needs of the many developers worldwide who write for Microsoft platforms. When these developers find attractive alternatives to Microsoft technologies—as they did when the Internet first emerged—it's not long before the tools division starts to hear about it. You also have to realize that Microsoft has several thousand developers inside the company who are constantly examining the potential of new technologies—"lead users" if you like. When all these sources start telling you the same thing, it's hard not to pay attention. Even if it takes a while to work out exactly what should be done.


In terms of responding to potential threats, Microsoft consistently plays to its strengths—its overall platform strategy, its existing knowledge base, and its process of componentization. For example, when developing the new Internet Explorer browser, the development team opted to leverage its existing programming model, despite the fact that this would initially slow the project down. From this point on, competitors in the browser space faced a formidable challenge—they were competing not only against the Explorer team, but also against the continual improvements made to Microsoft's underlying platform over its many years of existence.


What should company leaders everywhere take away from your research in terms of how to compete in the middle of a technological revolution?


Our research highlights two major themes. The first is the importance of taking a proactive approach to managing the development of a firm's intellectual property. We're not talking about patenting strategies here, but rather the set of processes that contribute to building and evolving a firm's knowledge base. These processes fall into four categories: creation/codification; integration/assimilation; application/exploitation; and evolution/adaptation. Inside Microsoft and other successful firms we've studied, managers give careful consideration to how each of these activities is conducted. In doing so, they pay explicit attention to the way these activities interact with processes that leverage the resulting intellectual property assets (e.g., product development).


The second theme that emerges from our work is the importance of architecture. This theme emerges at multiple levels—in the design of Microsoft's products, its platforms, and its intellectual property. At the product and platform level, the key idea is that in today's networked economy, no firm can remain an island. Technological innovations are increasingly brought to the market by networks of firms, each focused on only specific pieces of the overall puzzle. Competition takes place both between competing platforms and between products that build on top of these platforms. Managers must therefore make explicit choices about the technology architectures they adopt, deciding what to "design/make" themselves, and what to rely upon others to provide.


With regard to developing intellectual property, our work demonstrates the need for an architectural framework that defines how the various building blocks of IP should fit together. Without such a framework, these efforts are likely to be fragmented and difficult to integrate. At Microsoft, this role is performed by its programming model, which describes the interfaces through which its software components can be accessed. Critically, this model is designed to be flexible enough to facilitate future evolutions in content, as required to reflect changes in the broader technological context.
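
As a rough illustration of what "flexible enough to facilitate future evolutions" can mean in practice, here is a short Python sketch of an interface contract that is extended without breaking existing clients. The names and the versioning scheme are invented for this example; they are not Microsoft's programming model.

```python
# Hypothetical sketch of an interface contract designed to evolve.
# All names are invented; this is not Microsoft's actual programming model.
from abc import ABC, abstractmethod


class DocumentComponent(ABC):
    """Stable contract that client applications program against."""

    @abstractmethod
    def render(self, text: str) -> str:
        ...


class DocumentComponentV2(DocumentComponent):
    """A later revision adds capabilities while honoring the old contract,
    so existing clients keep working and new clients can opt in."""

    @abstractmethod
    def render_remote(self, url: str) -> str:
        ...


class PlainRenderer(DocumentComponentV2):
    def render(self, text: str) -> str:
        return text.upper()                  # trivial stand-in implementation

    def render_remote(self, url: str) -> str:
        return f"(would fetch and render {url})"


def legacy_client(component: DocumentComponent) -> str:
    # Written against the original interface; unaffected by the extension.
    return component.render("hello")


print(legacy_client(PlainRenderer()))        # HELLO
```

The design choice being illustrated is additive evolution: new capability arrives as an extension of the existing contract rather than a replacement of it, which is what lets a large base of existing components and clients survive a shift in the technological context.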


Gates has said, and history suggests, that Microsoft one day will fail. What will be the company's downfall?


If we knew the answer to this question, we'd be rich!


Slightly more seriously, the main threat probably comes from competing platforms—alternative systems that enable large numbers of developers to form competing innovation ecosystems. These other platforms, promoted by competitors such as Sun and IBM, are currently strong alternatives to Windows and the Microsoft Developer Network. One of the most interesting is the Linux/open source platform. This platform has recently become associated with IBM, which has invested resources in its development and extension, and used it to promote complementary hardware, software, and services. However, this is less a story of sudden dramatic failure and more a story of ongoing competition at the platform level. The presence of competing platforms like Linux requires that Microsoft continue to invest in its IP base and integrate new innovations into its own platform. If it fails to do this, it will be certain to lose out to alternatives.

Thursday, January 12, 2006

Why Evolutionary Software Development Works

Given the importance of software, the lack of research on the best ways to manage its development is surprising. Many different models have been proposed since the much-cited waterfall model emerged more than 30 years ago. Unfortunately, few studies have empirically confirmed the benefits of the newer models. The most widely quoted references report lessons from only a few successful projects.


Now a two-year empirical study, which the author and colleagues Marco Iansiti and Roberto Verganti completed last year, reveals thought-provoking information from the Internet-software industry—an industry in which the need for a responsive development process has never been greater. The researchers analyzed data from 29 completed projects and identified the characteristics most associated with the best outcomes. (See "Four Software-Development Practices That Spell Success.") Successful development was evolutionary in nature. Companies first would release a low-functionality version of a product to selected customers at a very early stage of development. Thereafter work would proceed in an iterative fashion, with the design allowed to evolve in response to the customers' feedback. The approach contrasts with traditional models of software development and their more sequential processes. Although the evolutionary model has been around for several years, this is the first time the connection has been demonstrated between the practices that support the model and the quality of the resulting product.
Research on the Internet-Software Industry


A study of projects in the Internet-software industry asked the question "Does a more evolutionary development process result in better performance?" The study was undertaken in stages. First, the researchers conducted face-to-face interviews with project managers in the industry to understand the types of practices being used. Next, they developed metrics to characterize the type of process adopted in each project. Finally, the metrics were incorporated into a survey that went to a sample of Internet-software companies identified through a review of industry journals. The final sample contained data on 29 projects from 17 companies.
The most remarkable finding was that getting a low-functionality version of the product into customers' hands at the earliest opportunity improves quality dramatically.


To assess the performance of projects in the industry, the researchers examined two outcome measures—one related to the performance of the final product and the other to the productivity achieved in terms of resource consumption (resource productivity). To assess the former, the researchers asked a panel of 14 independent industry experts to rate the comparative quality of each product relative to other products that targeted similar customer needs at the time the product was launched. Product quality was defined as a combination of reliability, technical performance (such as speed) and breadth of functionality. Experts' ratings were gathered using a two-round Delphi process (in which information from the first round is given to all experts to help them make their final assessments). To assess the resource productivity of each project, the researchers calculated the lines of new code developed per person-day, adjusted for differing levels of product complexity. Analysis of the data uncovered four practices critical to success.
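
The study does not spell out how the complexity adjustment was made, but the productivity measure reads roughly as new lines of code per person-day, scaled by a complexity factor. A minimal sketch of that arithmetic, with an invented weighting scheme purely for illustration:

```python
# Rough sketch of the resource-productivity measure described above:
# new lines of code per person-day, scaled by a complexity factor.
# The weights below are invented for illustration; the study's actual
# adjustment method is not described in this excerpt.

COMPLEXITY_WEIGHT = {"low": 0.8, "medium": 1.0, "high": 1.3}


def resource_productivity(new_loc, person_days, complexity="medium"):
    raw = new_loc / person_days                  # unadjusted lines per person-day
    return raw * COMPLEXITY_WEIGHT[complexity]   # give harder projects more credit


print(resource_productivity(new_loc=12000, person_days=400, complexity="high"))
# -> 39.0 adjusted lines of new code per person-day
```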