
Hans Mulder

March 30, 2017


Co-author: Klaas Meijer

The Software Assembly Line

Digital & IT


From x hours per function point to x function points per hour

Automating software production increases productivity. Yet progress in this area is only sporadic. Why? What is holding things back? On the other hand, it is only a matter of time: we are on the eve of a breakthrough, after which we will speak in terms of ‘function points per hour’ instead of ‘hours per function point’. [Illustration Marc Kolle]

In most industries, productivity develops such that the more units are produced, the lower the cost per unit becomes: the surcharge for each extra product coming off the assembly line gradually shrinks. Building the thousandth car is thus a lot cheaper than building the first one. According to a study by Professor Manny Lehman and the Standish Group, we have yet to see this kind of productivity growth in software projects. Indeed, realizing and implementing the thousandth software project in a single IT landscape often costs more time and money, and yields less benefit, than a single stand-alone project. In other words, software development is governed by Lehman’s law of increasing complexity. This axiom is comparable to the ‘law of increasing (relative) costs’, which dictates that investing ever more resources yields progressively smaller returns.

Productivity growth along the lines of other industries still seems a long way off for the software industry. IT hardware producers, on the other hand, have managed to reduce processor costs over the last few decades even as hardware capacity has increased. This gain is captured by Moore’s Law, which states that the number of transistors on a computer chip doubles roughly every two years as a result of technological progress. In networks a similar dynamic is at play, known as Metcalfe’s Law: the value of a network grows disproportionately as the network grows, roughly with the square of the number of connections. A handful of telephones in the entire world would be worth very little; once the entire population has a phone, their value is incalculable. Research over the past few years (among others at the University of Antwerp) has shown that software development productivity rises sharply once a large part of the building process is automated. As yet, however, the software field has not harnessed this potential and, consequently, the concomitant increase in productivity has not been realized.

 

Conditions

The question is: why not? Is it because no one in the software industry is seeking a technological innovation that would render the current revenue model, in which they charge by the hour, untenable? Or perhaps industry workers are simply in no hurry to look for productivity-boosting innovations that are also liable to put them out of a job. What would happen to a company if its productivity suddenly increased tenfold? Over the last few decades, many organizations have built their entire company structure around the production and acquisition of software: information managers, architects, designers, analysts, builders, testers, managers and buyers. It is highly likely that part of that structure would become redundant. The first question that arises, then, is: who would put themselves out of a job? Which supplier is going to help install a solution that is ten times cheaper, thereby scuttling their own revenue model? What will happen to outsourcing contracts when it turns out that the lion’s share of the work consists of specification and communication with the business, and that the best place to do this is in fact on site? These questions all point to yet another puzzle, namely that of the conditions conducive to the successful adoption of innovation. Clearly, these and many further questions will have to be answered before the desired innovation can become reality.

It may seem as though it is only a matter of time until this kind of software development makes its grand entrance, but people were saying exactly the same thing about the electric car just a few years ago. An 1898 issue of Kampioen (a Dutch recreation and driving magazine) confidently announced: ‘It is foreseeable that the vehicle of the future will be largely powered by an electrical driving force.’ It was not until the arrival of the lithium-ion battery at the beginning of the twenty-first century, combined with an elaborate network of charging points and government subsidies, that the electric car actually became a reality for ordinary drivers.

 

Breakthrough

Meanwhile, we seem to be on the verge of a real breakthrough. Evolvable software, without technical debt, with reuse and consequently with high productivity, has become viable. However, it is also clear that, just as with the electric car, there will be a struggle before the technical, commercial and knowledge infrastructure reaches the critical mass required to transform the software industry. A way of working has recently emerged in which fine-grained, crystallized software building blocks (rather like Lego bricks) are assembled by software robots on a software assembly line. The approach itself is hardly new: it was presented at the NATO Software Engineering Conference in 1968 and has been called ‘very promising’ for years on end, yet until recently it never proved sustainable.

Either the proliferation of building blocks caused too much clutter or, quite the opposite, the building blocks turned out to be too large, limiting sustainability and adaptability. In other words, the introduction of building-block designs and the generator approach led to problems, commonly known as technical debt, with the sustainability and adaptability of the system. With fine-grained ‘standardized’ blocks, however, software robots can generally assemble the software without adding the extra complexity that causes technical debt. The result is a software assembly line analogous to the production lines of other industries.
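To make the notion of fine-grained, standardized building blocks a little more tangible, here is a minimal sketch in Java (our own illustration; the names Block and Assembler are hypothetical and not taken from any particular product): every block exposes the same small contract, so an automated assembly step can chain any number of blocks without bespoke glue code.

    // Illustrative sketch only: each building block does one thing and exposes the same contract.
    import java.util.List;
    import java.util.function.UnaryOperator;

    interface Block extends UnaryOperator<String> { }      // one concern per block

    final class Trim implements Block {
        public String apply(String in) { return in.trim(); }
    }

    final class UpperCase implements Block {
        public String apply(String in) { return in.toUpperCase(); }
    }

    final class Assembler {
        // The "robot" part is purely mechanical: blocks are chained, never rewritten.
        static Block assemble(List<Block> blocks) {
            return input -> {
                String result = input;
                for (Block b : blocks) result = b.apply(result);
                return result;
            };
        }
    }

Because every block honors the same contract, adding a block never forces changes to the blocks that already exist, which is the kind of property that keeps extra complexity, and thus technical debt, from accumulating.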

Such a software development method allows builders and users to develop software through evolutionary prototyping. The software robots monitor whether a new building block technically fits the landscape within the architecture, like the next piece of a puzzle. These software robots then construct, or in technical terms ‘expand’ (which is not the same thing as generating code from functional specifications), the automatically tested software. Combine this approach with existing semi-finished products, such as business rule engines, business process engines and current best practice around continuous delivery pipelines, and it suddenly becomes more realistic to start thinking in terms of function points per hour rather than hours per function point.
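What such a ‘fit check’ by a software robot might look like can be sketched as follows (again a deliberately simplified illustration of our own; BlockDescriptor and Landscape are hypothetical names, and the article does not prescribe any implementation): a candidate block is only admitted if everything it depends on already exists in the landscape.

    // Illustrative sketch only: the puzzle-piece test before a block enters the landscape.
    import java.util.List;
    import java.util.Set;

    record BlockDescriptor(String name, List<String> dependsOn) { }

    final class Landscape {
        private final Set<String> existingBlocks;

        Landscape(Set<String> existingBlocks) { this.existingBlocks = existingBlocks; }

        // A block "fits" only if all of its dependencies resolve within the landscape.
        boolean fits(BlockDescriptor candidate) {
            return existingBlocks.containsAll(candidate.dependsOn());
        }
    }

Only after such a check would the robot expand the block into automatically tested code and add it to the landscape.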

There are a number of factors at play here. On the one hand, the software industry now has a range of mature products that can be used as semi-finished components. On the other hand, a similar maturity has emerged on the scientific side: software without technical debt, based on Normalized Systems theory (see text box), and the DEMO methodology of Jan Dietz for organizational design and the standardization of requirements. What we are seeing is a ‘best of breed’ approach that consolidates the lessons of the past forty years with scientific, research-led knowledge. So no Big Design Up Front and no grand designs, but rather a theoretical foundation, measurement, teamwork, improvement, delivering client value and taking small steps forward.

 

Normalized Systems

The term ‘Normalized Systems’ refers to a theoretical framework for the development of evolvable information systems, as coined by Herwig Mannaert, Jan Verelst and Peter De Bruyn in 2016. The framework is based on an analysis of the modular structure of software architectures, using concepts such as stability and entropy drawn from systems theory and thermodynamics. In programming languages, this modular structure consists of instances of constructs such as procedures, functions, classes, services and, more recently, aspects. The framework comprises four stability-related principles, which indicate when combinatorial effects arise. A combinatorial effect exists when the magnitude of the impact of a modification to the software architecture depends on the size of the information system. Combinatorial effects represent a highly unfavorable situation in software architecture: they cause information systems to become ever harder to maintain over their life cycle, until they can no longer be adjusted cost-effectively and have to be replaced by an information system with equal functionality. In this sense, the principles of Normalized Systems confirm Lehman’s law of increasing complexity.
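A combinatorial effect can be made tangible with a small, hypothetical example of our own (it is not taken from the theory’s formal treatment): suppose every module in a system calls a logging routine directly, and that routine gains an extra parameter.

    // Illustrative sketch of a combinatorial effect: one change ripples through N modules.
    final class Logger {
        // The new 'severity' parameter changes the signature every caller depends on.
        static void log(String message, String severity) {
            System.out.println("[" + severity + "] " + message);
        }
    }

    final class OrderModule {
        void placeOrder()  { Logger.log("order placed", "INFO"); }   // must be edited...
    }

    final class InvoiceModule {
        void sendInvoice() { Logger.log("invoice sent", "INFO"); }   // ...and so must every other calling module
    }

The cost of this single change grows with the number of modules, i.e. with the size of the information system, which is exactly the situation the stability principles are meant to rule out, for instance by shielding such a cross-cutting concern behind a separate, stable interface.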

 

Independent

Furthermore, combinatorial effects have a negative influence on other quality factors such as reusability and testability. The objective is therefore to build information systems that are free of combinatorial effects. To that end, the theoretical framework includes, as a second component, five elements around which the basic functionality of information systems can be built. An application then consists of N instances of these elements. These instances are parameterized copies of the five elements and can therefore be created by a form of code generation performed by software robots, referred to as ‘expansion’.
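The idea of expansion, i.e. creating parameterized copies, can be sketched in a few lines (a deliberately naive illustration of our own; real expanders generate far richer structures, and the name expandDataElement is purely hypothetical): one template plus a set of parameters yields one instance, and N descriptors yield N instances.

    // Illustrative sketch only: an "expander" fills a fixed template with parameters.
    import java.util.Map;

    final class Expander {
        // Produces the skeleton of one element instance for a given entity and its fields.
        static String expandDataElement(String entity, Map<String, String> fields) {
            StringBuilder src = new StringBuilder("public class " + entity + " {\n");
            fields.forEach((name, type) ->
                src.append("    private ").append(type).append(" ").append(name).append(";\n"));
            return src.append("}\n").toString();
        }
    }

    // Example: Expander.expandDataElement("Customer", Map.of("name", "String", "age", "int"))
    // returns the source of a Customer skeleton; repeating this for N descriptors yields N instances.

The point is not the template itself but that the instances are mechanical copies: people write descriptors, robots write the repetitive code.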

Consequently, applications can be developed at a very high rate. Since it can be verified that the elements themselves contain no combinatorial effects, the application will also be free of combinatorial effects and will therefore be highly evolvable and reusable. The theoretical framework of Normalized Systems is entirely independent of programming languages, packages and frameworks: the elements can be built in any technological combination of programming languages, packages and frameworks. After all, the framework can be applied to any system of modular structures. The essence is that the systematically identified factors hindering evolvability, i.e. the combinatorial effects, are removed from these structures, leaving behind an ‘evolving modularity’, regardless of the technology.