Thursday, May 7, 2009

Thirty Years of Domain Engineering

“Thirty years?” At least some readers will be asking themselves that question about the title of this post right now. “But domain engineering isn’t more than about ten or fifteen years old! The conferences on product family engineering only started in the mid-nineties!”

In his wonderful book A Short History of Nearly Everything, Bill Bryson cited a quotation that went something like this: The history of a new idea generally passes through three phases. First, the idea is not understood by anybody. Second, it is finally understood and accepted after years and years. Third, it is then attributed to the wrong person.

James Neighbors introduced the concept of domain engineering in his PhD thesis in 1980. (By the way, aside from coining the expression “domain analysis” in his thesis, he also wrote about “domain specific languages”, nearly thirty years before a special issue in IEEE Software on DSLs was enthusiastically received by a large practicing community.) He expanded on that work over the next four years with his Draco system, which essentially introduced the domain engineering process as it is known today in the product line community. Draco was so innovative that colleagues who were reimplementing it in other places were still grappling with its subtleties years later.

Yet when I speak with people in the software product lines community, many admit that they have never even heard of Neighbors and his work (even though it was published in standard, widely available channels). When they are told about it, they offer an explanation of why the product lines work is somehow different (it usually has something to do with “commercial focus” or the like). But the explanations are generally not very convincing, at least to me.

It is ironic indeed that in the field of software reuse, of all fields, history, too, is being forgotten. In precisely the field that preaches not reinventing the wheel, too many of us are doing just that, by not knowing what has been done before. And that is too bad not just for reasons of correct attribution, but also because we are depriving ourselves of some great work. Some of the best work ever done in computer science was done early on. Alan Perlis once said that the programming language Algol 60 was “a great improvement over most of its successors.” A lot of it is still with us today – Lisp was invented in the 1950s and is still going strong. Many very deep concepts were invented then, although not all of them panned out – “call by name” in Algol 60, for example. But even many of the concepts that didn’t pan out were simply ahead of their time, bound to come back when the world was ready (through better technology, a different mindset, or whatever).

A lot of things that were explored in the early days of reuse are coming back now, such as introducing systematic reuse into organizations. That’s proof of their viability. The mindset is there now, the technology is more powerful than it was twenty years ago when it was first tried. But that doesn’t mean those earlier efforts were without merit. Why give up the chance to benefit from the insights of those who went before? Aside from the issue of giving credit where credit is due, we’re doing a disservice to ourselves by ignoring our past.

Wednesday, April 22, 2009

Agile + Reuse = Efficient Projects

A couple of years ago I wrote a paper with some reflections on how the dynamics of the capital markets could help illuminate the dynamics of agile software development projects. Briefly: the Efficient Market Hypothesis explains how the market ideally will reflect all information available to investors. One point that is often overlooked about this is that an efficient market also reflects all implications for the future based upon the current information available. (In other words, if you think a stock price will rise in two weeks you buy it now, you don’t wait two weeks to do it.)

I then elaborated the idea of an Efficient Project: agile developers try to construct software that way. Just look at the so-called XP Customer Bill of Rights: “You can cancel at any time and be left with a useful working system reflecting investment to date.” The system reflects all the information available to date.

Furthermore, the YAGNI (You Aren’t Going to Need It) principle of XP says that the system should reflect only the information available to date. Don’t implement anything that isn’t implied by the requirements you have now. Don’t try to second-guess the future.
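
As a minimal, hypothetical illustration (the reporting example and all the names are invented, not taken from any particular project): suppose the only requirement today is “export the report as CSV.” YAGNI says to implement exactly that, rather than a speculative exporter framework for formats nobody has asked for yet.

```python
import csv
import io

# Speculative design (what YAGNI warns against): a pluggable exporter
# framework with format negotiation and an options object, none of which
# today's requirement implies.
#
#   class ReportExporter:
#       def export(self, report, fmt="csv", options=None): ...

# YAGNI version: implement only what the current requirement implies.
def export_report_csv(rows):
    """Render a report (a list of dicts with identical keys) as CSV text."""
    if not rows:
        return ""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()


print(export_report_csv([{"item": "widget", "qty": 3}, {"item": "gadget", "qty": 5}]))
```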

There’s even a parallel in there to the markets: momentum investing is one of the causes of market inefficiency; it causes bubbles, with people investing beyond what the current information (e.g. company revenues) implies. “Momentum implementation” occurs when projects implement features beyond what is called for by the requirements.

This is great advice. It’s a way of keeping systems from accumulating useless functionality. However, I think there has been a tendency in the agile community to go too far with it. Probably because the agile movement began with projects whose requirements were truly unstable, a habit has grown up of treating the future as entirely unpredictable.

But the future is in fact rarely entirely unpredictable. The only project with entirely unpredictable requirements would be one in which you sit down in front of a white sheet of paper and say, “I’m going to do … something – anything.”

The fact is that there is a continuum of predictability of the future, of requirements. But the agilists have made it a bit too easy on themselves in that respect, and tend not to look hard enough for those aspects of the system which are predictable.

And that’s where reuse comes in. Reuse is all about being able to predict the future. In some ways it is the mirror image of agile. It says, “I can predict the future in these important ways, and I can implement a system that reflects the implications of this future.” It's the "You Are Going to Need It" of software engineering.

And this isn’t just dreaming. One of the most successful examples of this is product line development. The people at Nokia lay out an entire vision for future software in their phone families, and implement the corresponding system now. The product line developers also study where the future is unpredictable, say, in terms of features, and even then they try to constrain the unpredictability, to squeeze out whatever partial predictability they can. For example, they introduce variation points in feature models that say, “okay, we’re not sure exactly what a system’s features will be, but we can get it down to variations on this theme and implement based on that.”
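
To make the idea of a variation point a bit more concrete, here is a minimal, hypothetical sketch (the feature names and the ProductLine/VariationPoint classes are my own invention for illustration; real feature models and product line tooling are far richer):

```python
# Hypothetical sketch: a tiny feature model with one variation point.
from dataclasses import dataclass


@dataclass
class VariationPoint:
    name: str
    variants: frozenset  # the allowed alternatives: "variations on this theme"


@dataclass
class ProductLine:
    mandatory: frozenset     # features every product in the family gets
    variation_points: tuple  # the places where products are allowed to differ

    def configure(self, choices):
        """Derive one concrete product: all mandatory features plus one
        permitted variant for each variation point."""
        features = set(self.mandatory)
        for vp in self.variation_points:
            choice = choices[vp.name]
            if choice not in vp.variants:
                raise ValueError(f"{choice!r} is not a permitted variant of {vp.name!r}")
            features.add(choice)
        return features


# We don't know exactly which messaging stack a given phone will need,
# but we can constrain it to a known set of variants and build for that.
phones = ProductLine(
    mandatory=frozenset({"calls", "contacts"}),
    variation_points=(VariationPoint("messaging", frozenset({"sms", "mms", "email"})),),
)
print(phones.configure({"messaging": "mms"}))  # e.g. {'calls', 'contacts', 'mms'}
```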

It’s tough analysis to do, and there’s no silver bullet, but it’s rewarding when it works. Yet agilists tend to exclude this type of analysis, due to a mindset that focuses on unpredictability.

This is not to say that agile and reuse are opposed. On the contrary, they are probably the two most important software engineering techniques we have, and can work together to balance the unpredictability and the predictability of systems and their features.

Surprised to hear this? You shouldn’t be. Speaking of silver bullets, probably the most important software engineering paper ever written was No Silver Bullet by Fred Brooks. (Martin Fowler once told me that no other paper had had so much impact with so few words.) Brooks said that the software development problem was essentially intractable, and that there were only a few truly powerful tools to combat it. He mentioned two in particular: one was software reuse; another was the idea of “growing a system” – essentially incremental, agile software development.

Why did he single out reuse and agile? Because he said that essentially the only solution to the software problem was to write less software. Reuse is the way to write less software when the future is predictable. Agile is the way to write less software when the future is not predictable. Leave either one out and you end up writing more software than you should. Use them together and you have an efficient project - a project in which the amount of software written is not too much and not too little.

Reuse and agile – if they were good enough for Brooks, they should be good enough for us.

Tuesday, April 14, 2009

Agile and Reuse

There was an interesting discussion over on the Yahoo XP Discussion List over the last few days on the topic of "reuse across projects." One thing that strikes me is that nearly all of what was said during that discussion has been said many times before. This in itself is not necessarily a problem, but it does leave me with the impression that many remain unaware of the reuse community and, in a kind of ironic twist, "reinvent the wheel" of discussion around reuse.

The agile community has a particularly uncomfortable relationship with reuse. I can testify to this on the basis of discussions reaching all the way to the top -- yes, the top -- of the community, where skepticism was expressed. In the discussions over the past few days, the idea of "emergent reuse" was cited with approval. But what is that, if not the notion that reuse only makes sense after several exemplars have been made? Once again, the wheel of thinking about reuse is reinvented.

Borrowing an anecdote from another time: I once saw Jean Sammet speaking at the History of Programming Languages conference. Defending COBOL from its detractors, she noted that only COBOL had a truly complete facility for I/O. The other languages punt (and it's true: just look at C and Ada, which farm I/O out to libraries). She said, "And you know why? Because it's hard, that's why." Simple as that.

I defend the agilists all the time with that anecdote. I tell people that agile may, in its essence, "only" be iterative software development, but that detracts nothing from the fact that they were the ones to finally make iterative software development happen. Why? Because it's hard, and that's why people didn't do it before. It's hard to plan iterations, to time-box them, to re-plan, etc. But the agilists simply rolled up their sleeves and did the hard work of figuring it out and putting it into practice.

The agilists should put this attitude to work and realize that reuse isn't practiced often enough for the simple reason that it's hard. It's hard to distill that perfect interface that makes software easily reusable. It's hard to provide the robustness and elegance needed to make reuse work.

Agilists are always inviting other communities to become familiar with what they're doing before judging them. I think the agilists should become familiar with the reuse community ... even better, participate in it. Come to the Eleventh International Conference on Software Reuse in Washington this September. We can talk about it.

Thursday, April 9, 2009

The Contractual Process

A few days ago, during an agile workshop, I was explaining the concept of optional scope contracts in agile processes, and as I listened to myself talk, the word “waterfall” suddenly came to mind. I had never thought of conventional contracting in those terms before, even though that is clearly what it is.

What I hadn’t been thinking clearly about was the fact that contracting also has a process, which usually tracks or mirrors the software development process, but doesn’t have to. But we don’t see that until we radically change the software process away from the classic waterfall process.

The waterfall process for software development has a number of well-known problems. The agile process (basically an iterative process) arose in response to the need to get risk and scope under control, allowing the developer to reassess the state of development continuously and intervene to make decisions.

But when agile processes are adopted, it’s actually the exception more than the rule that the contracting process is changed, too. The contracting process stays waterfall (requirements up front, etc.). We end up with a mismatch between the two processes. If people were to think this way, in terms of processes, maybe they would start “getting it” about the possibility of doing contracting in different ways – effectively, an agile contracting process.

Wednesday, April 8, 2009

Reconciling IT Governance and Quality

After the Total Quality Management (TQM) wave that swept over the industry at large during the 1980s, and the success of ISO 9000 for the software industry in the 1990s, the quality imperative has continued its march in the new millennium with a spike in popularity of Six Sigma. Today it is not unusual to see a company’s commercial brochure highlight its special commitment to quality, as a way of identifying itself as a “quality company.”

Is there anything wrong with being a “quality company”?

Six Sigma dates from 1986, but it was popularized along with the ISO 9000 movement in the software quality sector in the 1990s and early 2000s, so it is a bit early to assess the long-term financial performance of companies that have embraced these movements as a governing objective. A useful comparison can be made, however, to the performance of companies that embraced Total Quality Management, a phenomenon that has been with us for some time now. The figure above shows the financial performance of several TQM companies relative to their peers on the Standard & Poor's 500 during a full decade in which TQM adoption was at a peak. Among them were several technology leaders, such as Xerox, IBM, and General Motors—all of whom pioneered software systems respected for their high quality (yes, even GM).

Although it would clearly be an exaggeration to say that the financial performance of these companies was disastrous during that period, it is equally clear that the results were not what might have been hoped for, given the undeniable technical superiority of the systems that were produced under their rigorous quality programs. What explanation might exist for the inability of companies who adopted a quality-oriented governing objective to generate financial results as impressive as their technical results?

Some reflection reveals that quality is well-suited as an operational framework, but it does not offer an economic framework for strategic decision-making. Some of the most critical decisions a company faces have little to do with its quality program. At the same time that General Motors was embracing TQM, it embarked on a multi-year program of investment in factory automation and robotics, spearheading software innovations such as the Manufacturing Automation Protocol (MAP). In retrospect, though, this was an ill-advised allocation of precious company resources, and it certainly contributed to GM's underperformance of the market by a full ten percent during that period. It is also now generally recognized that IBM, while pursuing TQM in those same years, paid a heavy premium for its acquisition of Lotus Development Corporation.

In other words: both GM and IBM had great quality programs, but terrible strategic programs, and the bad strategy won out. Now look at how each of those two companies is faring today: IBM is thriving, while GM fights for its very survival. It certainly isn't their quality programs that are making the difference: it is their competitive strategies - one very successful, the other not.

Another, more subtle problem with a “quality strategy” is related to the very fact that programs like ISO 9000 have become so well-accepted: in many markets (for example, aerospace and defense), quality certification has become mandatory for participation – a “union card” for market entry, thereby levelling the field for all players and reducing dramatically the possibilities for building competitive advantage based on quality. Indeed, in these markets, any benefits from quality tend to accrue to customers.

Yet another problem is the “corporate culture” that sometimes arises around quality. A colleague of mine relates that he first began to suspect problems with TQM as a corporate culture when his company worked with Eastman Kodak and observed its operations. G. Newman has described the problem this way: “...the fadmongers [of TQM] have converted a pragmatic, economic issue into an ideological, fanatical crusade. The language is revealing. The terms of quality as an economic issue are analysis, cost, benefit, and tradeoff. The terms of quality as a crusade are total, 100 percent quality, and zero defects; they are the absolutes of zealots. This language may have its place in pep talks ... but once it is taken seriously and literally, we are in trouble.” When quality becomes a company-level obsession, elaborate (and expensive) bureaucratic infrastructures too often arise, with inevitable adverse financial consequences.

Quality only adds business value if customers are willing to pay more for higher quality. But in some sectors of the software industry, technical innovation is valued over quality per se. In fact, a December 2005 article in the Wall Street Journal noted that a quality management process can actually hinder innovation in many cases. “For stuff you’re already good at, you get better and better,” Michael Tushman, a management professor at Harvard Business School, was quoted as saying. “But it can actually get in the way of things that are more exploratory.”

Given these considerations, the practical consequences become evident. Quality is not suitable as a top-down governing strategy, to be followed in any and all cases and contexts. It is up to the business strategist to determine—in his own particular context—whether quality should become a competitive weapon in pursuit of business value.

Wednesday, March 18, 2009

The Public Value of Monetary Integration

Recently there have been rumblings again about whether the introduction of the Euro was a good thing (Paul Krugman mentioned it in his column last Monday in the New York Times).

Several years ago, when I was living in Germany, I attended a conference where I found myself talking to a professor and his wife about the recent fall of the Berlin Wall. The recently united Germany was in the middle of a crisis of second thoughts. The former West Germany, in particular, was greatly irritated at the "Ossies" wanting a free ride all the time ("Do you know what they do? They come up to the cashier in a supermarket and refuse to pay, saying 'I suffered all these years, now I deserve this!'"). But the professor and his wife were adamant: reunification was absolutely the right thing to do.

"It will do something priceless: it will keep them from ever going to war. That's worth the entire cost of reunification."

They have a saying in Italian, Conosco i miei polli - I know my chickens. That's the way many people feel about European history: again and again, European countries have gone to war with each other. They could do it again - I know my chickens. Monetary integration is one more step toward making that harder to do in the future.

Put another way: the Public Value of monetary union is huge. Sure, the financial cost may turn out to be enormous, maybe even a net loss. But the political value is enormous, and makes it all worth it.

Sunday, March 15, 2009

Public Value

There’s a lot of help out there for those of us in private enterprise who are wondering about how to measure the value of our IT operations. But what about those in public or non-profit organizations?

You might want to consider the concept of Public Value, first articulated by Moore at Harvard in 1995 and further elaborated since by international entities such as the World Bank, European entities such as the IDABC, and private organisations, the Gartner Group in particular. A number of frameworks for measuring the Public Value of IT have been proposed, all of which tend to share three conceptual elements (a rough sketch of how they might be captured follows the list):
  • Financial and organizational value – this element is closest to the classic techniques of value determination already known in the private sector, such as measures of financial Return on Investment, as well as more qualitatively valued improvements in architecture and organisation.
  • Political value – this element assesses the value of achieving policy-related goals, such as the degree of implementation of laws and directives related to IT-readiness.
  • Constituent value – this element captures the value of an improved end-user experience, in terms of decreased administrative burden, more inclusive public services, and so forth. Think of the Standard Cost Model for reduction of administrative burden and similar initiatives.
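
As a purely illustrative sketch (the dimension names are taken from the list above, but the weights, the 0–10 scale, and the class itself are my own assumptions, not part of any of the frameworks mentioned), the three elements might be captured in a simple scorecard like this:

```python
from dataclasses import dataclass


@dataclass
class PublicValueAssessment:
    # Assumed 0-10 scale for each dimension; real frameworks define their own indicators.
    financial_organizational: float  # e.g. ROI-style measures, architectural improvement
    political: float                 # e.g. degree of implementation of laws and directives
    constituent: float               # e.g. reduction of administrative burden

    def weighted_score(self, weights=(0.4, 0.3, 0.3)):
        """Collapse the three dimensions into one indicative number (assumed weights)."""
        dims = (self.financial_organizational, self.political, self.constituent)
        return sum(w * d for w, d in zip(weights, dims))


# Example: a hypothetical e-government portal scored on each dimension.
portal = PublicValueAssessment(financial_organizational=6.0, political=8.5, constituent=7.0)
print(round(portal.weighted_score(), 2))  # 7.05
```
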
From my point of view, the best thing about the concept of Public Value is that it correctly separates the different dimensions of value into something you can actually work with. I’ll get back to this later.