1 Confronting the software crisis

The notion of a software crisis, or a software gap, emerged at the end of the 1960s. It was believed that the accomplishments of software fell far short of its ambitions, in terms of user expectations, performance and cost (David and Fraser, quoted in Naur and Randell 1969:120). The crisis stemmed from the difficulties encountered when building large, complex systems. Hardware was evolving at an unprecedented pace, one that software development could not match. Edsger W. Dijkstra addressed the subject in his 1972 ACM Turing Award lecture:

[The primary cause of the software crisis is] that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now [that] we have gigantic computers, programming has become an equally gigantic problem. In this sense the electronics industry has not solved a single problem, it has only created them, it has created the problem of using its products. To put it another way: as the power of available machines grew by a factor of more than a thousand, society’s ambition to apply these machines grew in proportion [...]. The increased power of the hardware, together with the perhaps even more dramatic increase in its reliability, made solutions feasible that the programmer had not dared to dream about a few years before. And now, a few years later, he had to dream about them and, even worse, he had to transform such dreams into reality! Is it a wonder that we found ourselves in a software crisis?

A NATO-sponsored software engineering conference was held in Garmisch, Germany, in 1968, and the purported crisis figured prominently in the discussions. The very term “software engineering” was provocatively coined for this conference—it was argued that software development was not yet a mature branch of engineering, and that the field had to evolve to earn the engineering label (Seidman 2008; Naur and Randell 1969:13).

Software reuse was rare at the time. Throughout the 1960s, a section of the venerable magazine Communications of the ACM was dedicated to disseminating algorithms, but only in source code form (written in the programming language ALGOL 60), and each algorithm was meant to be adapted manually to the target language and machine (Perlis 1966). Developers thus constantly reinvented the wheel when building systems.

In an effort to counter the crisis, Douglas McIlroy (1969:138) introduced the notion of software components in an invited address at the NATO conference. Instead of developers reinventing basic functionality with each new software project, reusable software components would be used instead, in much the same way the hardware industry was using pre-fabricated components:

We undoubtedly produce software by backward techniques. We undoubtedly get the short end of the stick in confrontations with hardware people because they are the industrialists and we are the crofters. Software production today appears in the scale of industrialization somewhere below the more backward construction industries. I think its proper place is considerably higher, and would like to investigate the prospects for mass-production techniques in software.

McIlroy lamented that developers started software projects by thinking about what to build rather than what to use. To provide these ready-made components, he advocated that a software components industry be founded, supplying best-of-breed software parts:

[The] purchaser of a component [...] will choose one tailored to his exact needs. He will consult a catalogue offering routines in varying degrees of precision, robustness, time-space performance, and generality. He will be confident that each routine in the family is of high quality—reliable and efficient.

McIlroy’s understanding of software components was quite different from the contemporary understanding. Components in McIlroy’s vein were akin to the procedural libraries of today, and could be distributed in source code form only, contrary to the modern expectation that components may be distributed in binary form.

Another leading light in this field is Brad Cox, who introduced the Software IC (integrated circuit) concept in the mid-1980s (Persson 2002:35). In contrast to McIlroy’s understanding of components, a Software IC was to be available in binary form. Cox’s ideas received much attention, but the Software IC concept was marred by the requirement that all components be written in the Objective-C programming language, which Cox had also designed.