2 Software Engineering Methods and Practices

The goal of this chapter is to present, at an abstract level, how the way of working to develop software is organized and, to some extent, which additional means are needed (e.g. notations for specifications). Thus, the goals are for the reader to attain the ability to

  • identify the challenges in software development, covering a wide range of aspects such as how to proceed step by step, the people involved, and the methods used
  • identify the essential elements found in all methods and practices, which have been defined in the Essence standard
  • identify various key concepts of some commonly used software development methods created during the last four decades
  • explain the motivation behind the initiative to create the Essence standard as a basic and extendable foundation for software engineering

This will also take the reader briefly through the development of software engineering.


2.1 Software Engineering Challenges

From Smith’s specific single person view of software engineering, we move to take a larger world view in this chapter and the next. We will return to Smith’s journey in chapter 4. From 2012 to 2014, the IEEE Spectrum published a series of blogs on IT hiccups[1]. All kinds of bloopers and blunders occur in all kinds of industries. Just to name a few:

  • According to the New Zealand Herald, the country’s police force in February 2014 apologized for mailing over 20,000 traffic citations to the wrong drivers. Apparently, the NZ Transport Agency, which is responsible for automatically updating drivers’ details and sending them to the police force, failed to do so from 22 October to 16 December 2013. As a result, “people who had sold their vehicles during the two-month period… were then incorrectly ticketed for offenses incurred by the new owners or others driving the vehicles.” In New Zealand, unlike the U.S., license plates generally stay on a vehicle for its life.[2]
  • The Wisconsin State Journal reported in February 2013 that “glitches” with the University of Wisconsin’s controversial payroll and benefits system had resulted in US $1.1 million in improper payments, which the university may likely end up having to absorb. This was after a news report in the previous month indicating that problems with the University of Wisconsin’s payroll system had resulted in $33 million in improper payments being made over the past two years.[3] These highlighted problems may seem amusing, but they are no laughing matter if you happen to be one of the victims. What is more surprising is that such problems can be prevented, yet they almost inevitably do occur.

2.2 The Rise of Software Engineering Methods and Practices

Just as we have compressed Smith’s journey from a young student to a seasoned software engineer in a few paragraphs, we will attempt to compress some 50 years of software engineering into a few paragraphs. We will do that with a particular perspective in mind: what resulted in the development of a common ground in software engineering – the Essence standard. A more general description of the history is available in an appendix to this book.

However, the complexity of software programs did not seem to be the only root cause of the so-called “software crisis”. Software endeavors and product development are not just about programming; they are also about many other things, such as understanding what to program, how to plan the work, how to lead the people, and getting them to communicate and collaborate effectively.

For the purpose of this introductory discussion, we define a method as providing guidance for all the things you need to do when developing and sustaining software. For commercial products “all the things” are a lot. You need to work with clients and users to come up with what the system is going to do for its users – the requirements – and you need to design, code and test. However, you also need to set up a team and get them up to speed; they need to be assigned work, and they need a way of working.

These things are in themselves ‘mini-methods’, or what many people today would call practices. There are solution-related practices, such as working with requirements, working with code and conducting testing. There are endeavor-related practices, such as setting up a well-collaborating team and an efficient endeavor, as well as improving the capability of the people and collecting metrics. There are of course customer-related practices, such as making sure that what is built is what the customers really want.

The interesting discovery we made more than a decade ago was that even though the number of methods in the world is huge, all these methods seemed to be just compositions of a much smaller collection of practices – maybe a few hundred such practices in total. Practices are, as we say, reusable, because they can be used over and over again to build different methods.

To understand how we as a software development community have grown our knowledge in software engineering, we provide a description of historical developments. Our purpose with this brief history is to make it easier for you to understand why Essence was developed.

2.2.1 There are lifecycles

From the ad hoc approach used in the early years of computing, came the waterfall methods; actually, it was not just one single method – it was a whole class of methods. The waterfall methods described a software development project as going through a number of phases such as Requirements, Design, Implementation (Coding), and Verification (i.e. testing and bug-fixing) (see Figure 6).

Figure 6         Waterfall Lifecycle

While the waterfall methods helped to bring some discipline to software engineering, many people tried to follow the model literally, which caused serious problems, especially on large complex efforts. This was because software development is not as simple as this linear representation indicates.

A way to describe the waterfall methods is this: What do you have once you think you have completed the requirements? Something written on ‘paper’. (You may have used a tool and created an electronic version of the ‘paper’, but the point is that it is just text and pictures.) But since the requirements have not been used, do you know for sure at this point that they are the right ones? No, you don’t. As soon as people start to use the product being developed based on your requirements, they almost always want to change them.

Similarly, what do you have after you have completed your design? More ‘paper’ describing what you think needs to be programmed. But are you certain that it is what your customer really intended? No, you are not. However, you can easily claim you are on schedule, because you can simply write less, and with less quality.

Even after you have programmed the design, you still don’t know for sure. None of the activities you have conducted so far provide proof that what you did is correct.

Now you may feel you have done 80%. The only thing you have left is testing. At this point it almost always falls apart, because what you have to test is just too big to deal with as one piece of work: it is the code coming from all the requirements. You thought you had 20% left, but now you feel you may have 80% left. This is a common, well-known problem with waterfall methods.

There are some lessons learned here. Believing you can specify all requirements upfront is just a myth in the vast majority of situations today. This lesson has led to the popularity of more iterative lifecycle methods. Iterating means you can specify some requirements and build something meeting these requirements, but as soon as you start to use what you have built, you will know how to make it a bit better. Then you can specify some more requirements, and build and test these, until you have something that you feel can be released. But to gain confidence you need to involve your users in each iteration to make sure what you have provides value. These lessons gave rise to a new lifecycle approach called iterative development, a lifecycle adopted by the agile paradigm now in fashion (see Figure 7).

Figure 7         Iterative Lifecycle

New practices came into fashion. The old project management practices went out of fashion, and practices relying on the iterative metaphor became popular. The most prominent practice was Scrum, which is still very popular and which we will discuss in more depth in part 3 of the book.

2.2.2 There are technical practices

Since the early days of software development, we have struggled with how to do the right things in our projects. Originally, we struggled with programming because writing code was what we obviously had to do. The other things we needed to do were ad hoc. We had no real guidelines for how to do requirements, testing, configuration management, project management and many of these other important things.
Later new trends became popular.

The Structured Methods Era

From the late 1960s to the mid-1980s, the most popular methods separated the software to be developed into the functions to be executed and the data that the functions would operate upon – the functions living in a program store and the data living in a data store. These methods were not far-fetched, because computers at that time had a program store, for the functions translated to code, and a data store. We will just mention two of the most popular methods at that time: SADT (Structured Analysis and Design Technique) and SA/SD (Structured Analysis/Structured Design). As a student, you really don’t need to learn anything more about these methods. They were used for all kinds of software development. They were not the only methods in existence; there were a large number of published methods available, and around every method there were people strongly defending it. From this time in the history of software development the methods war started. And, unfortunately, it has not yet finished!
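The structured-era separation of functions from data can be illustrated with a small sketch. This is not taken from SADT or SA/SD themselves (which were notations, not code); it is merely a hypothetical example, in modern syntax, of the underlying metaphor: free-standing functions operating on one shared data store.

```python
# Illustrative sketch of the structured-era metaphor: all data lives in a
# shared "data store"; free-standing functions read and modify it directly.

data_store = {"orders": [], "total": 0.0}  # the shared data store

def add_order(order_id: int, amount: float) -> None:
    """A 'function' in the program store, operating on the shared data."""
    data_store["orders"].append(order_id)
    data_store["total"] += amount

def order_count() -> int:
    """Another function reading the same shared data."""
    return len(data_store["orders"])

add_order(1, 99.50)
add_order(2, 12.25)
print(order_count())        # 2
print(data_store["total"])  # 111.75
```

Note how any function anywhere in the program may reach into `data_store` and change it; this is exactly the interconnection of program and data that, as described below, became a weakness of this generation of methods.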

Every method brought with it a large number of practices covering requirements, design, testing, defect management, and so on.

Figure 8         SADT basis element.

Each had its own blueprint notation or diagrams to describe the software from different viewpoints and at different levels of abstraction (for example, see Figure 8 on SADT). Tools were built to help people use the notation and to keep track of what they were doing. Some of these practices and tools were quite sophisticated. The value of these approaches was of course that what was designed was close to the realization – to the machine: you wrote the program separately from the way you designed your data. The problem was that program and data are very interconnected, and many programs could access and change the same data. Although many successful systems were developed applying this approach, there were far more failures. The systems were hard to develop and even harder to change safely, and that became the Achilles’ heel for this generation of methods.

The Component Methods Era

The next method paradigm shift[4] came in the early 1980s and had its high season until the beginning of the 2000s.

In more detail, this paradigm shift was inspired by a new programming metaphor – object-oriented programming – and the trigger was the new programming language Smalltalk. However, the key ideas behind Smalltalk were derived from an earlier programming language, Simula 67, released in 1967. Smalltalk (and Simula 67) were fundamentally different from previous generations of programming languages in that the whole software system was a set of classes embracing their own data, instead of programs (subroutines, procedures, etc.) addressing data types in some data store. Execution of the system was carried out through the creation of objects, using the classes as templates, and these objects interacted with one another by exchanging messages. This was in sharp contrast to the previous model, in which a process was created when the system was triggered, and this process executed the code line by line, accessing and manipulating the concrete data in the data store. A decade later, around 1990, a complement to the idea of objects received widespread acceptance, inspired in particular by Microsoft. We got components.
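The contrast with the previous generation can be sketched briefly. The example below is hypothetical (the class and names are invented for illustration, and Python rather than Smalltalk is used), but it shows the object-oriented metaphor described above: each class embraces its own data, and execution proceeds by creating objects from classes used as templates and sending them messages.

```python
# Illustrative sketch of the object-oriented metaphor: data lives inside
# the object and is reached only by sending the object messages.

class Account:
    def __init__(self, owner: str) -> None:
        self._owner = owner    # the object's own data...
        self._balance = 0.0    # ...not entries in a shared data store

    def deposit(self, amount: float) -> None:
        """A message the object responds to by updating its own data."""
        self._balance += amount

    def balance(self) -> float:
        """Another message; the only way to read the balance."""
        return self._balance

# An object is created using the class as a template...
acct = Account("Smith")
# ...and collaborators interact with it purely through messages.
acct.deposit(100.0)
print(acct.balance())  # 100.0
```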

In simple terms, a software system was no longer seen as having two major parts – functions and data. Instead, a system was a set of interacting elements – components. Each component had an interface connecting it with other components, and over this interface messages were communicated. Systems were developed by breaking them down into components, which collaborated with one another to implement the requirements of the system. What was inside a component was less important, as long as it provided the interfaces needed by its surrounding components. Inside a component could be programs and data, or classes and objects, scripts, or old code (often called legacy code) developed many years ago. Components are still the dominating metaphor behind most modern methods. An interesting development of components that has become very popular is microservices, which we will discuss in part 3.

With components, a completely new family of methods evolved. The old methods with their practices were considered out of fashion and were discarded. What evolved was in many cases similar practices, with some significant differences, but with new terminology. In the early 1990s about 30 different component methods were published. They had a lot in common, but it was almost impossible to find the commonalities, since each method author created his or her own terminology.

Figure 9         A diagram (in fact a use-case diagram) from the Unified Modeling Language standard

In the second half of the 1990s, OMG (a standards body called the Object Management Group) felt that it was time to at least standardize how to represent software drawings, namely the notations used to develop software. This led to a task force being created to drive the development of a new standard. The work resulted in the Unified Modeling Language (UML, see Figure 9), which will be used later in the book. This development basically killed all methods other than the Unified Process (marketed under the name Rational Unified Process (RUP)). The Unified Process dominated the software development world around the year 2000. Again, a sad step, because many of the other methods had very interesting and valuable practices that could have been made available in addition to some of the Unified Process practices. However, the Unified Process became fashionable, and everything else was considered out of fashion and more or less thrown out.

Over the years many more technical practices arrived, beyond the ones supported by the 30 component methods. More advanced architectural practices, or sets of practices, evolved, for example for enterprise architecture (EA), service-oriented architecture (SOA), product-line architecture (PLA), and recently architecture practices for big data, the cloud, mobile internet and the internet of things (IoT). At the moment, it is useful to see these practices as pointers to areas of software engineering interest at a high level of abstraction. Suffice it to say that EA was about large information systems for, e.g., the finance industry; SOA was about organizing the software as a set of possibly optional service packages; and PLA was the counterpart of EA for product companies, e.g. in the telecom or defense industry. More important is to know that, again, new methodologies sprang up like mushrooms around each one of these technology trends. With each new such trend, method authors started over again and reinvented the wheel. Instead of “standing on the shoulders of giants”[5], they preferred to stand on another author’s toes. They redefined already adopted terminology, and the methods war just continued.

The Agile Methods Era

The agile movement – often referred to simply as agile – is now the most popular trend, embraced by the whole world. Throughout the history of software engineering, experts have always been trying to improve the way software is developed. The goal has been to compress time scales to meet ever-changing business demands and realities. If agile were to have a starting date, one could pinpoint the time when 17 renowned industry experts came together and penned the words of the agile manifesto. We will present the manifesto in part 4 of the book, along with how Essence contributes to agile. But for now, it suffices to say that agile involves a set of technical and people practices. Most important is that agile emphasized an innovative mindset, with the result that the agile movement continually evolves its practices.

Agile has evolved the technical practices utilized with components. However, its success did not come from introducing many new technical practices, even if some new practices became popular with agile, such as continuous integration, backlog-driven development and refactoring. Continuous integration suggests that developers integrate their new code with the existing code base and verify it several times daily. Backlog-driven development means that the team keeps a backlog of requirement items to work on in coming iterations. We will discuss this practice in more detail when we discuss Scrum in part 3 of the book. Refactoring means continuously improving existing code, iteration by iteration.
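Refactoring is easy to show in miniature. The example below is hypothetical (the functions and data are invented for illustration); it shows the essential property of refactoring: the structure of the code improves while its behavior stays exactly the same.

```python
# Illustrative sketch of refactoring: same behavior, better structure.

def total_price_before(items):
    # Original code: a hand-rolled loop and an unexplained magic number.
    t = 0
    for i in items:
        t = t + i["price"] * i["qty"]
    t = t + t * 0.25
    return t

TAX_RATE = 0.25  # intention-revealing name replaces the magic number

def total_price_after(items):
    # Refactored in a later iteration: clearer names, no duplication.
    subtotal = sum(item["price"] * item["qty"] for item in items)
    return subtotal * (1 + TAX_RATE)

cart = [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]
# Behavior is preserved - the defining property of a refactoring.
assert total_price_before(cart) == total_price_after(cart)
print(total_price_after(cart))  # 31.25
```

In agile practice such small improvements are made continuously, typically protected by automated tests (as the assertion above hints), so that each iteration leaves the code base a little better than before.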

Agile rather simplified what was already in use to assist working in an iterative style and providing releasable software over many smaller iterations, or sprints as Scrum calls them.

2.2.3 There are people practices

As strange as it may sound, the methods we employed in the early days did not pay much attention to human factors. Everyone understood of course that software was developed by people, but very few books or papers were written about how to get people motivated and empowered to develop great software. The most successful method books were quite silent on the topic. It was basically assumed that, in one way or another, this was the task of management.

However, this assumption changed dramatically with agile methods. Before, there was a high reliance on tools, so that code could be automatically generated from design documents such as UML diagrams. With agile methods, programming was reevaluated as a creative job. The programmers, the people who actually created working software, were ‘promoted’, and coding again became a prestigious task. Pre-agile, coding was downgraded and other roles, such as project manager, analyst and architect, were more prestigious.

With agile many new practices evolved, for instance self-organizing teams, pair programming and daily standups.

A self-organizing team includes members who are more generalists than specialists – most know how to code even if some are experts. It is like a soccer team – everyone knows how to kick the ball, even if some are better at scoring goals and someone else is better at keeping the ball out of the goal.

Pair programming means that two programmers work side by side developing the same piece of code. The expectation is that code quality is improved and total cost reduced. Usually one of the two is more senior than the other, so this is also a way to improve team competency.

Daily standup is a practice intended to remove impediments that team members have, as well as to retain motivation. Every morning the team meets for 15 minutes to go through each member’s situation – what he/she has done and what he/she will be doing. Any impediments are brought up but not addressed during the meeting; these issues are discussed in separate meetings. This practice is part of the Scrum practice discussed in part 3.

Given the impact agile has had on the empowerment of the programmers, it is easy to understand that agile has become very popular. Moreover, given the positive impact agile has had on our development of software, there is no doubt it has deserved to become the latest paradigm.

2.2.4 Consequences

There is a methods war going on out there. It started 50 years ago, and it still goes on – jokingly we can call it the Fifty Years’ War, and there is no truce yet, even today. There are no signs that this will stop by itself.

  1. With every major paradigm shift, such as the shift from Structured Methods to Component Methods and from the latter to Agile Methods, the industry basically throws out all it knows about software development and starts all over with new terminology that has little relation to the old. Old practices are viewed as irrelevant and new practices are hyped. Making this transition from the old to the new is extremely costly to the software industry in the form of training, coaching and changes of tooling.
  2. With every major technical innovation requiring a new set of practices, for instance cloud computing, the method authors also ‘reinvent the wheel’. Though the costs are not as huge as in the previous point – since some of the changes are not fundamental across everything we do (it is no paradigm shift), and thus the impact is limited to, for instance, cloud development – there is still foolish waste.
  3. Within every software development trend there are many competing methods. For instance, back in the early 1990s there were about 30 competing object-oriented methods. Recently there have been about 10 competing methods for scaling agile to large organizations; some of the most famous are the Scaled Agile Framework (SAFe), Disciplined Agile Delivery (DAD), Large Scale Scrum (LeSS) and Scaled Professional Scrum (SPS). They typically include some basic widely used practices, such as Scrum, user stories or alternatively use cases, and continuous integration, but the method author has ‘improved’ them – sarcastically stated. There is reuse of ideas, but not reuse of original text, so the original inventor of a practice feels he or she has been robbed of his or her work; there is no collaboration between method authors – instead they are at “war” as competing brands.

Within these methods, there are some practices that are specific to each method. Usually there are useful practices in all of these famous methods. The problem is that all these methods are monolithic, not modular, which means that you cannot easily mix and match practices from different methods. If you select one of these methods, you are more or less stuck with it. This is not what teams want, and certainly not what their companies want. It is, of course, what most method authors whose method is selected like, even if it was never what they intended.

Typically, every recognized method has a founding parent, sometimes more than one. If successful, this parent is raised to guru status. The guru more or less dictates what goes into his/her method. Thus, once you have adopted a method, you get the feeling you are in a method prison controlled by the guru of that method. Ivar Jacobson, together with Philippe Kruchten, was once such a guru governing the Unified Process prison. Jacobson realized that this was “the craziest thing in the world” – something we all think is unworthy of any industry, in particular one as huge as the software industry. To eradicate such unnecessary method wars and method prisons, the SEMAT initiative was founded.

2.3 The SEMAT Initiative

As of the writing of this book there are about 20 million software developers[6] in the world, and the number is growing year by year. It can be guesstimated that there are over 100,000 different methods to develop software, since basically every team has developed its own way of working, even if they didn’t describe it explicitly.

Over time the number of methods has been growing much faster than the number of reusable practices. In itself this is no problem; in fact, it is what we want to happen, because we want every team or organization to be able to set up its own method. The problem is that until now we have not had any means to really do that. Until now, creating your own method has invited the method author(s) to reinvent everything they wanted to change. This has occurred because we haven’t had a solid common ground that we all agreed upon to express our ideas. We didn’t have a common vocabulary to communicate with one another, and we didn’t have a solid set of reusable practices from which we could start creating our own method.

In 2009, several leaders of the software engineering community came together, on the initiative of Ivar Jacobson, to discuss the future of software engineering. Out of this, the Software Engineering Method and Theory (SEMAT) initiative was founded, with two other leaders as co-founders: Bertrand Meyer and Richard Soley.

The SEMAT (Software Engineering Method and Theory) call for action in 2009 stated as follows:

“Software engineering is gravely hampered today by immature practices. Specific problems include:

  • The prevalence of fads more typical of fashion industry than of an engineering discipline.
  • The lack of a sound, widely accepted theoretical basis.
  • The huge number of methods and method variants, with differences little understood and artificially magnified.
  • The lack of credible experimental evaluation and validation.
  • The split between industry practice and academic research.

We support a process to re-found software engineering based on a solid theory, proven principles and best practices that:

  • Include a kernel of widely-agreed elements, extensible for specific uses
  • Addresses both technology and people issues
  • Are supported by industry, academia, researchers and users
  • Support extension in the face of changing requirements and technology.”

This call for action was signed by around 40 thought leaders from most areas of software engineering and computer science; 20 companies and about 20 universities have also signed it, and more than 2,000 individuals have supported it. You should see the “specific problems” identified earlier as evidence that the software world has severe problems. When it comes to the solution, “to re-found software engineering”, the key words here are “a kernel of widely-agreed elements”, which is the focus of this book.

It was no easy task to get professionals around the world to agree on what software engineering is about, let alone how to do it. It led, of course, to significant controversy. However, the supporters of SEMAT persevered. Even though the world is getting more complex and there is no single answer, there ought to be some common ground – a kernel.

2.4 Essence: The OMG Standard

After several years of hard work, the underlying language and kernel of software engineering were accepted as a standard by the Object Management Group (OMG) in June 2014, and given the name Essence. As is evident from the call for action, the SEMAT leaders realized at the very start that a common ground of software engineering (a kernel) needed to be widely accepted. In 2011, after having worked together for two years and having produced part of a proposal for a common ground, we evaluated where we were and understood that the best way to get this common ground widely accepted was to get it accepted as a formal standard from an accredited standards body. The choice fell on OMG. However, it took three more years to get it through the process of standardization. Based upon experience from the users of Essence, it continues to be improved by OMG through a task force assigned to this work.

In the remainder of this part of the book, we will introduce Essence, the key concepts and principles behind it, and the value and use cases of Essence. This material is useful for students and professionals alike. Readers interested in learning more should see references [2]–[5].

What should you now be able to accomplish?

After studying this chapter, you should be able to

  • identify the challenges in software development
  • identify the important paradigms being applied by software development methods and practices
  • identify various key concepts of the methods from the structured era (SA/SD), the component era (UML/RUP), and the agile era (e.g. Scrum)
  • explain what the Essence standard is about
  • explain the motivation behind the initiative to create the Essence Standard

Again, we point to additional reading, exercises and further material at www.software-engineering-essentialized.com.


[1] http://spectrum.ieee.org/riskfactor/computing/it/it-hiccups-of-the-week

[2] http://spectrum.ieee.org/riskfactor/computing/it/new-zealand-police-admits-sending-20-000-traffic-tickets-to-the-wrong-motorists

[3] http://spectrum.ieee.org/riskfactor/computing/it/it-hiccups-of-the-week-university-of-wisconsin-loses-another-11-million-in-payroll-glitches

[4] Wikipedia: A paradigm shift, as identified by American physicist and philosopher Thomas Kuhn, is a fundamental change in the basic concepts and experimental practices of a scientific discipline.

[5] From Wikipedia: The metaphor of dwarfs standing on the shoulders of giants … expresses the meaning of "discovering truth by building on previous discoveries".

[6] https://www.infoq.com/news/2014/01/IDC-software-developers