Agile Software Engineering Skills

Julian Michael Bass
University of Salford
Salford, UK
For Bizunesh, Alfie, Rosa and Jill.
In memory of Kibe, Alfie and Beryl.
Preface
The skills you learn from this book will help establish your career in software
development. You can learn skills for working in self-organising teams, developing
software increments and facilitating agile processes. I see a continuing need for
an introductory book that draws together this wide range of modern technical,
collaboration and software process skills.
The book is aimed at early career software development practitioners. You might be in work, or you might be a student, wanting to work with others to create beautiful computer programs. Working with people is challenging. For some, it is more challenging than for others. Learning to work in a team, with other people who share all our own frailties, idiosyncrasies and foibles, is an important part of life. The hands-on approach in this book will help equip you for success in a software development team.
Book Structure
Agile methods comprise three sets of related ideas: roles, artefacts and ceremonies.
Consequently, this book comprises three parts, dedicated to People, Product and
Process.
Part I, which focuses on People, describes project roles and the skills you need to
perform each role. This includes members of self-organising teams, scrum masters,
product owners and activities for managing other stakeholders.
I talk about the skills needed to create Product artefacts in Part II. You can learn
the skills you need to create agile requirements, architectures, designs as well as
development and security artefacts.
The agile development Process, which you can use to coordinate your work with others, is described in Part III. I introduce the skills you need to facilitate an incremental process and to use software tools for version control and automated testing. These processes can improve product quality, and the tools automate aspects of your development work.
I discuss some more advanced topics in Part IV. These topics include large
projects comprising multiple cooperating teams, automating deployment, cloud
software services and evolving live systems.
Exercises
You can’t learn new skills just by reading about them. You have to read, practise, evaluate, reflect and read some more. By practise, I mean applying the ideas and putting them to use. Each chapter has exercises. These exercises are important to help you
acquire the skills you need. Some exercises are performed alone; for some, you will
need to work in a group. Performing the exercises is, perhaps, the most important
part of the book.
Hints, tips and further advice about tackling the exercises are presented at the
end of each chapter. I recommend you plan your approach to each exercise (but do not look at the hints or tips). Then actually carry out the exercise (still without looking at the hints or tips). Reflect on what happened. What went well? What could have gone better? Make some notes about what happened. Only then look at the hints, tips and advice at the chapter end.
You could start at the beginning and read through to the end. But, you don’t have
to. You should read the Introduction in Chap. 1, first. But then you have a choice,
depending on your interests and current skills. You could just carry on to Chap. 2 in
Part I on People. Or, perhaps you could start with Chap. 7 in Part II on Product. Or
maybe, you could start with Chap. 13 on Process in Part III.
Descriptions at the start of each book part give a brief overview of the contents.
Chapter abstracts help you gain a more detailed sense of the overall flow of the
book. Reading the book part introductions and chapter abstracts would be a top-
down approach, which focuses on the holistic structure or organisation of the book.
You could use the top-down approach to plan which book parts and chapters you
would like to explore first.
Alternatively, you could dive straight into one of the chapters that interests you.
This is a bottom-up approach. The bottom-up approach favours starting by getting
into the detail of one interesting issue. I recommend that you work through the
exercises provided in each chapter. The chapter summaries will help you review and
reflect on the skills you have learned.
Learning Journal
You should try to make your learning explicit and deliberate. One way to do this
is by using a learning journal. I suggest you create a journal for each of the three
main parts of the book: People, Product and Process. Use the learning journal to
capture your newly acquired skills and experiences, as well as to reflect on your
own learning process.
The Tabby Cat project integrates and applies the skills from each chapter into
a single case study. The Tabby Cat project was provided by Red Ocelot Ltd., a
software start-up company associated with the University of Salford. You can think
of this as a worked example. The project is to build software for displaying activity
on a source code repository. You can read each Tabby Cat project chapter when you
finish reading each Part. Or, you could read the chapters as a sequence from Chap. 6
and then Chaps. 12 and 17.
Prior Knowledge
You should already have the skills to implement software solutions to simple
classroom problems. I assume you know how to code. Or I should say, I make
no attempt to teach you how to code. By which I mean, you should have already
learned the basics of one or two programming languages, at least for a semester or
two.
You should be able to use the syntax of variables, operators, statements and flow control in your chosen language. I assume you can create object-oriented classes, and their run-time instances, that interact with each other and encapsulate data. You can probably already use data structures, such as collections, and maybe you have learned how to build a simple database-driven website.
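To make this assumed starting point concrete, here is a minimal sketch, written in Python purely for illustration (the book itself is not tied to any language), of the kind of basics just described: a small class with encapsulated data, run-time instances held in a collection and some simple flow control. The class and attribute names are invented.

```python
class Task:
    """A unit of work with an encapsulated completion flag."""

    def __init__(self, title, estimate_hours):
        self.title = title
        self.estimate_hours = estimate_hours
        self._done = False  # encapsulated state, changed only via complete()

    def complete(self):
        self._done = True

    def is_done(self):
        return self._done


# A collection of run-time instances that interact through method calls.
backlog = [Task("Write login page", 6), Task("Set up database", 4)]
backlog[0].complete()

# Simple flow control over the collection.
remaining = [task.title for task in backlog if not task.is_done()]
print(f"{len(remaining)} task(s) remaining: {remaining}")
```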
This book is about applying the programming skills you have to your first few
projects. If you don’t have these skills, you can use this book alongside learning
basic programming. Either way, this book will help you acquire the collaboration
and agile process skills you need for success.
Acknowledgements
Thanks to Amr Hamed who suggested including a case study and provided
thoughtful feedback on several chapters. I also want to thank Salford students,
including Liam Sutton, who provided specific feedback on earlier drafts of chapters
from this book.
Contents

Part I People

2 Self-Organising Teams
  2.1 Introduction
  2.2 Self-Organising Teams
    2.2.1 Attributes of Self-Organising Teams
  2.3 Groups and Teams
    2.3.1 Building Team Performance
  2.4 Agile Principles
    2.4.1 Sustainable Pace
    2.4.2 Collective Code Ownership
  2.5 Forming Teams
    2.5.1 Accelerating Team Formation
    2.5.2 Handling Difference and Conflict
    2.5.3 Accelerating Norming

4 Managing Stakeholders
  4.1 Introduction
  4.2 Managing Upwards
    4.2.1 Set Expectations
    4.2.2 Confess to Catastrophe
    4.2.3 Share Success
    4.2.4 Unreasonable Demands
  4.3 Managing Outwards
  4.4 Contracts
    4.4.1 Contracts and Change Requests
    4.4.2 Time and Materials Contracts
    4.4.3 Outsourcing Contracts
    4.4.4 Offshoring Contracts
    4.4.5 Academic Contracts
    4.4.6 Negotiating Contracts
  4.5 Communication Quality
    4.5.1 Audience
    4.5.2 Narrative
    4.5.3 Language
    4.5.4 Process
  4.6 Communication Tools
    4.6.1 Reports
    4.6.2 Presentations
    4.6.3 Blogs and Wikis
    4.6.4 Videos
  4.7 Exercises
  4.8 Hints, Tips and Advice on Exercises
  4.9 Chapter Summary
  References

5 Ethics
  5.1 Introduction
  5.2 What Went Wrong?
    5.2.1 Algorithms and Inequality
    5.2.2 Platforms and Fake Markets
    5.2.3 Errors, Faults and Failures
    5.2.4 Criminal and Unethical Behaviour
  5.3 Copyright and Patents
  5.4 Professional Bodies
    5.4.1 BCS Codes of Conduct
    5.4.2 ACM Codes of Ethics
    5.4.3 Problems with Codes of Ethics
  5.5 Activism
    5.5.1 Whistle-Blowing
    5.5.2 Unions

Part II Product

7 Requirements
  7.1 Introduction
  7.2 Types of Requirements
    7.2.1 Functional Requirements
    7.2.2 Non-functional Requirements
    7.2.3 Incremental Requirements
  7.3 Requirements Quality
    7.3.1 Requirements Precision
    7.3.2 Requirements Consistency
    7.3.3 Requirements Completeness
  7.4 Use Cases
    7.4.1 Use Case Diagrams
    7.4.2 Use Case Descriptions
  7.5 User Stories
  7.6 User Story Mapping
  7.7 Personas
  7.8 Exercises
  7.9 Hints, Tips and Advice on Exercises
  7.10 Chapter Summary
  References

8 Architecture
  8.1 Introduction
  8.2 Architecture in Agile
    8.2.1 Refactoring
Chapter 1
Introduction and Principles
Abstract This book is divided into three main parts: people, product and process.
Each of those parts is summarised by a case study chapter, the Tabby Cat project,
which runs through an agile software development process. This chapter explains
the book structure and introduces some key principles of agile software engineering.
1.1.1 People
When you first learn to program a computer, most of your learning will take place alone. You will acquire the technical skills you need to create a software product. But, here’s the thing: building software systems is something people do in teams. Software engineering is a team sport. As systems grow larger, few people can wait the time it would take for one person to build the whole thing alone. Either you enlist the help of some friends or you work as part of a commercial team to build a product. In this book, you will learn how to collaborate with colleagues to create working software. Figure 1.1 illustrates the organisation of chapters within the parts of the book.
So, Part I of the book is about people. You will have an opportunity to gain
personal, teamwork and organisational skills for working with people. Skills for
self-organising teams are in Chap. 2. We also need people in roles around our self-
organising teams that aid, support and enable our software development. The skills
used in these roles are discussed in Chap. 3. The teams and their facilitators need to
manage a range of other interested parties, called stakeholders. Stakeholders could
be customers, executives or outsiders to the development process. The skills you
need for managing stakeholders are in Chap. 4. The behaviour and consequences of
the technology sector are increasingly attracting the attention of regulators. Whistle-
blowers, from within technology corporations, have revealed examples of their
employers’ anti-competitive practices and consumer harms. The issues of ethics are
discussed in Chap. 5.
Fig. 1.1 The organisation of chapters within the book parts: People (Part I, Chaps. 2–6), Product (Part II, Chaps. 7–12) and Process (Part III), together with the advanced chapters on Large-Scale Agile (Chap. 18), Cloud Deployment (Chap. 19) and Technical Debt (Chap. 20)
1.1.2 Product
The technical skills you need to build products and systems from working code are discussed in Part II of the book, as shown in Fig. 1.1. When you start work building
a product, you need to know what it is supposed to do. You can learn about the
skills and techniques for recording and managing requirements in Chap. 7. Maybe
someone (such as a boss, client or some other stakeholder) is going to tell you what
software you are supposed to create. But often what you are told is not detailed
enough or sufficiently clear for you to go ahead and get working. In this case, we
need to embark on a process of requirements discovery.
In the rest of Part II, you can learn about creating a software product from requirements. We use feature-driven development. A feature is some client-valued function that the software must perform. Features provide end-to-end functionality, including front-end (user experience) and back-end (logic and storage) code.
The scope of a product is the number of features, or requirements, your software is to provide. We can increase the scope of our project by building new software features that fulfil more requirements. We can reduce our planned scope by lowering the number of features we aim to create. We think about a feature as a thin slice through the layers of a business information system. Normally a feature is small enough that we can implement it in a few days.
Individual features can be collected into larger groups of business-related functions,
sometimes called feature sets or epics. Designing and building software as a series of
features allows us to stay focused on what our users (or clients) actually want. They
can give us feature-by-feature feedback, so we all know that what we are producing
is what is needed. Feature-driven development also facilitates tracking (a benefit to
us and the people providing the funding).
Features are collected into larger groups of business-related functions, forming
increments. Hence, we advocate an incremental approach to software development.
The idea is that we deliver our software as a series of phases or stages. So, an
increment is a code release forming some part of a larger working system.
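As a rough illustration of these ideas, and not code from the book itself, the sketch below models features with invented names and effort estimates and groups them into an increment that could form a release.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Feature:
    """A client-valued, end-to-end slice of functionality."""
    name: str
    estimate_days: float  # a feature should normally take only a few days


@dataclass
class Increment:
    """A group of related features released together as working code."""
    name: str
    features: List[Feature] = field(default_factory=list)

    def total_effort(self) -> float:
        return sum(feature.estimate_days for feature in self.features)


# Invented features for a product that displays source code repository activity.
first_release = Increment("First public release", [
    Feature("Show recent commits for a repository", 3),
    Feature("Filter commit activity by author", 2),
])
print(first_release.total_effort())  # -> 5
```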
You can learn about the skills to create a high-level software architecture in
Chap. 8. Here, you can learn about architectural styles. Next, you learn skills
of software design in Chap. 9. We explore some common object-oriented design
patterns. After that, implementation skills are described in Chap. 10. Finally, in
Part II, you learn about the skills you need for building secure systems in Chap. 11.
1.1.3 Process
The advanced skills chapters stand alone. Consequently, you can read them as a
sequence from Chap. 18 to 21. Alternatively, you can jump ahead to the advanced
skills chapters that correspond to each part. Chapter 18 is about people and how
multiple teams are coordinated on larger projects. Chapters 19 and 20 are about
product and how to deploy software-as-a-service applications to the cloud and
manage evolution in live software products, while Chap. 21 is concerned with
process and how to automate deployment using DevOps.
Hence, this book explores three main areas of software development skills: the people aspects of team working, the technical aspects of software product development and the process skills needed for a systematic and repeatable approach to software development.
(Figure: scope (features, increments, releases), resources (time, budget, people, teams) and quality (processes, version control, testing))
The Tabby Cat project is a case study that applies the skills from each chapter [6].
The Tabby Cat software is used to display developer activity on a source code
repository. Tabby Cat is described in Chaps. 6, 12, and 17 and was provided by
Red Ocelot Ltd., our own software start-up which itself emerged from our industrial
collaboration [9].
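To give a flavour of what such a tool involves, here is a minimal sketch, not the Tabby Cat source code, that lists recent commit authors for a public repository using the GitHub REST API and the third-party requests library.

```python
import requests  # third-party library: pip install requests


def recent_commit_authors(owner, repo, limit=10):
    """Return author names for the most recent commits on the default branch."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    response = requests.get(url, params={"per_page": limit}, timeout=10)
    response.raise_for_status()
    return [item["commit"]["author"]["name"] for item in response.json()]


if __name__ == "__main__":
    # Any public repository works; unauthenticated requests are rate-limited by GitHub.
    for author in recent_commit_authors("julianbass", "github-explorer"):
        print(author)
```

Running the snippet prints one author name per line; a fuller tool would layer a user interface and richer activity views on top of data like this.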
I have taught software design to commercial clients in Europe, South Asia and North
America. Further, the book is based on research conducted with software practition-
ers and experts from around the world. Over one hundred practitioner interviews
have informed this book. This research has been published in international peer-
reviewed conferences and journals, notably [1, 2, 5] and more recently [8, 10]. For
more details about the research methods employed, see Appendix A.
This research has enabled several industrial collaborations. These collaborations
have focused on agile innovation processes and cloud-hosted software service
deployment. The collaborations have resulted in the conceptualisation, design and
deployment of several software products.
The skills you learn in this book can provide a lifetime of fulfilling and creative
work. You could stay local, or travel the world. Your software could aid health
and wellbeing. Your solutions could build communities, strengthen inclusion and
support diversity.
These skills can get you a job. But these skills, with the right dedication and
commitment, can also serve you well if you want to become a freelancer, or a
technology entrepreneur.
With these skills, you can build products to create commercial revenue. Of
course. But, you can also use software to improve people’s livelihoods, wellbeing
and life chances. Let’s help make the world a better place, one software product at
a time.
References
1. Bass, J.M.: How product owner teams scale agile methods to large distributed enterprises.
Empir. Softw. Eng. 20(6), 1525–1557 (2015). https://doi.org/10.1007/s10664-014-9322-z,
http://link.springer.com/article/10.1007/s10664-014-9322-z
2. Bass, J.M.: Artefacts and agile method tailoring in large-scale offshore software development
programmes. Inf. Softw. Technol. 75, 1–16 (2016). https://doi.org/10.1016/j.infsof.2016.03.
001, http://www.sciencedirect.com/science/article/pii/S0950584916300350
3. Bass, J.M.: http://www.agileskillsbook.com (2022)
4. Bass, J.M.: Julianbass - overview (2022). https://github.com/julianbass
5. Bass, J.M., Haxby, A.: Tailoring product ownership in large-scale agile projects: managing
scale, distance, and governance. IEEE Softw. 36(2), 58–63 (2019). https://doi.org/10.1109/
MS.2018.2885524
6. Bass, J., Monaghan, B.: Tabby Cat GitHub Explorer. Red Ocelot Ltd (2022). https://github.
com/julianbass/github-explorer
7. BCS, The Chartered Institute for IT: University of Salford – HackCamp (2021). https://
www.bcs.org/deliver-and-teach-qualifications/university-accreditation/practice-highlights/
university-of-salford-hackcamp/
8. Rahy, S., Bass, J.M.: Managing non-functional requirements in agile software development.
IET Softw., 1–13 (2021). https://doi.org/10.1049/sfw2.12037
9. Red Ocelot Ltd: Enhancing digital agility (2022). https://www.redocelot.com
10. Salameh, A., Bass, J.M.: An architecture governance approach for agile development by
tailoring the Spotify model. AI Soc. (2021). https://doi.org/10.1007/s00146-021-01240-x
11. The University of Salford: HackCamp 2020: Computer science and software engineering on
Vimeo (2020). https://vimeo.com/395147780
Part I
People
The overall design of this book is around people, product and process. Parts II and III
are, more or less, stand-alone. So, if your main interest is in the technical product,
you could skip to Part II. Also, if the development process is your main concern,
then you might want to skip ahead to Part III. The skills required for some more
advanced agile software engineering topics are described in Part IV.
Chapter 2
Self-Organising Teams
Abstract When groups of people come together to carry out a shared task,
they form teams. Small self-organising teams are the core building block of any
meaningful software development effort. If we need more people, we use multiple
small teams. This chapter describes the skills you need to create self-organising
teams. You will learn about agile principles and how to energise and support teams.
I discuss the benefits of teams comprising diverse skills and virtual teams where
members work remotely.
2.1 Introduction
Agile methods are now the norm for business information system development [4, 5]. Furthermore, most people working on agile software projects work in small self-organising teams. In this chapter, I will explain how teams are formed, what distinguishes groups from teams and how team performance can be enhanced.
Fig. 2.1 A software engineering division organised into skill-based groups (requirements analysis, software development, testing) compared with one organised into cross-functional teams (Team 1, Team 2, Team 3)
Teams learn to learn when they change the way they look at problems, review and refine their best working methods and reconsider the best outputs to deliver. Agile practices that help teams learn to learn include retrospective workshops and stand-up meetings; there is more on these practices in Chap. 13.
Self-organising teams establish their own goals and keep on evaluating them-
selves such that they are able to devise newer and better ways of achieving
those goals [19]. This self-evaluation is often aimed at improving productivity and
product quality. We find that productivity and product quality are often conflicting
objectives, and achieving the right balance between them requires constant attention.
Diversity in the team is an asset. Teams that develop a single view of the world are at a disadvantage when confronted with a changing environment. Teams are more effective when diverse members interact amongst themselves, leading to a better understanding of each other’s perspectives [11].
There are several characteristics that define teams [13]. Let’s very briefly explore
these characteristics:
• Team size
• Skills portfolio
• Common purpose (including shared performance or quality goals)
• Common approach
• Mutual accountability
Teams are small, typically comprising seven plus or minus two members. Larger groups of people have difficulty interacting constructively as a team. Teams are used to bring together group members with complementary skills, as just discussed in relation to Fig. 2.1. Teams where everyone has exactly the same skills are less effective than teams with a mix of skills.
Teams are created around a common purpose or a shared set of goals. A mean-
ingful purpose establishes aspiration for the team. Often teams are characterised by
shared commitment to a set of performance or quality goals. Also, effective teams
are committed to a common approach. This is the way they will work together
to achieve their shared goals. Finally, team members are jointly accountable for
outcomes. A team also shares responsibility for its own successes and failures.
There are some techniques we can use to build high-performance teams. But we need to be cautious here. Teams are different and team members are different. Also, of course, the contexts in which teams work differ from each other. Consequently, there is no rule book we can follow that automatically creates a high-performing team. We know elite teams when we see them. But we cannot magically turn any group into an outstanding team.
Bearing in mind that caveat: what can we do to help create a highly functioning
team? As I mentioned, here are some techniques that can help build team perfor-
mance [8, 13]:
• Establish urgency and direction.
• Select members based on skills and potential (not personalities).
• Pay attention to first meetings and actions.
• Set rules for acceptable behaviour.
• Focus on a few short-term performance tasks and goals.
• Update facts regularly.
• Spend (lots of) time together.
• Exploit positive feedback.
Teams tend to work more effectively if there is a shared sense of direction and an
urgency of purpose. The more strongly the team members feel that sense of direction
and urgency, the more effective the team is likely to be.
Effective teams select team members based on their skills. In general, teams need
technical, problem-solving and interpersonal skills. Some team members need to
specialise in each of these three areas. Being more focused on delivery of working
code, you might want to make sure you have:
• Front-end, human-computer interaction and user experience skills
• Back-end, web services and database implementation skills
• Testing and evaluation skills
• Deployment skills to place working code on servers accessible by users
Consequently, in a HackCamp or Hackathon setting, it is much better to choose
team members based on skills, rather than friendship groups.
The set-up phase of the group is very important. Early meetings set the tone for
performance of the group. A calm yet purposeful and collaborative atmosphere is to
be encouraged. Setting the right tone at early stages is healthy and important.
2.4 Agile Principles
2.4.1 Sustainable Pace
There is a school of thought that you can write excellent software by ‘pulling an all-
nighter’. The contention is that you can solve challenging problems, with logically
coherent solutions, in the dead of night or during early hours of the morning (ideally
after an energetic and entertaining night out). Let me share with you a naughty
secret: I have, in the past, been known to work on software development at unseemly
hours.
But honestly, I don’t think ‘pulling an all-nighter’ is a good idea. I certainly don’t think it is a good idea when your boss, through omission or design, prevails on you to
work all night. Okay, so maybe when writing software for self-learning or for your
own entertainment, all-night coding might be okay. A hackathon can be a great way
to learn and often involves a short and intensive burst of activity. But for professional
software development, this is not really the way to go.
This realisation, that all-night coding is not ideal, partly informs the concept of
sustainable pace. The idea is that software engineering is a creative activity that
should be conducted when people are awake, alert and fully focused on the job.
This is the idea that creative work needs to be conducted in normal office working
hours and not involve long periods of evening or weekend working. The implication
is that carefully implemented software development processes enable a sustainable
approach to code creation, sustainable over weeks, months and years.
2.4.2 Collective Code Ownership
Code can be written by individuals but, according to the concept of collective code
ownership, should be a resource belonging to the whole team. It is argued that we
should not put names on the modules of software we write. There should be no
impediment to making changes or corrections to code written by others. The team,
as a whole, stands or falls by the software created by its members.
2.5 Forming Teams
There are many models of the processes that happen when people come together to work in small groups. Perhaps the most well-known is Tuckman’s [20], comprising five stages:
• Forming
• Storming
• Norming
• Performing
• Adjourning
This model is obviously a bit of a simplification but has stood the test of time
remarkably well. During the forming phase, group members create a team with
clear structure, goals, direction and roles so that members begin to build trust.
During the storming phase, frustrations and perhaps conflicts build up. The team
often needs to refocus on its goals, perhaps breaking larger goals down into smaller,
more achievable steps. As the team moves into the norming phase, team members
begin to resolve the differences between their initial expectations and the reality
of the team’s experience. Team members often notice more frequent and more
meaningful communication amongst team members and an increased willingness to
share ideas or ask for help. During the performing phase, there is significant progress
towards team goals, and team members show high commitment to the team’s goals.
Finally, during the adjourning phase, team members complete their deliverables
(final software, test execution, reports and so on), evaluate performance with a
particular focus on identifying ‘lessons learned’ and celebrate the contributions and
accomplishments of the team.
Given this model of small group development, there are steps we can take that
will support this process. During the early stage of coming together as a group,
we can undertake several activities to help establish shared goals and build trust.
The exercises in Sect. 2.9, at the end of this chapter, can help you during this team-
forming stage.
There are three main principles behind techniques for handling disagreements. When negotiating solutions to differences of opinion:
1. Focus on the problem.
2. Avoid focusing on personalities.
3. Seek solutions that maximise benefit to more of the participants [7].
Taking a vote could seem like a good solution, but the majority within the team
may not be as well informed as an expert. So discussion and learning, while working
towards consensus, is often a better way to reach agreement.
We will look at the technical tasks performed by teams in Part II. But, in software
development, several collaboration and communication-focused activities within
the self-organising team role have been observed [12]. These activities may be
performed by different team members or by the same team member at different
times. An attribute of more experienced and adept team members could be the
ability to perform more of these activities on behalf of the team.
2.6.1 Mentor
We will find out more, in Chap. 3, about the scrum master role. The scrum master
helps inculcate the use of agile methods in the team. But other members of self-
organising teams must also mentor each other. Perhaps someone new joins the team
and needs advice to get started and feel welcome. Perhaps someone already in the
team needs to learn a new technology. Mentors guide and support team members,
help them become more confident about using agile methods and encourage the
ongoing use of agile practices.
2.6.2 Co-ordinator
2.6.3 Translator
Translators are needed who can make the meaning of the business language used by customers clear to technical team members. Participants in the self-organising team need to contribute to improved communication between these two
domains. Understanding the business domain of the project is the primary role of
the product owner, as we will see in Chap. 3. But self-organising teams benefit from
gaining this understanding too.
2.6.4 Champion
Champions are team members that advocate for agile methods with senior man-
agement within their organisation. We want senior executive support for the
self-organising agile team. It is hard for agile to flourish without senior management
support. The champion is adept at explaining agile benefits using language and
evidence that is convincing for senior executives.
2.6.5 Promoter
The promoter is a proponent of agile methods with customers. The promoter secures
customer involvement and collaboration to support the efficient functioning of the
self-organising team. Customers play a vital role in identifying and prioritising
requirements, while the promoter ensures that the team gets all the support needed
from customers.
2.6.6 Terminator
Sometimes, teams find themselves with a member who is not a force for good, someone who is perhaps persistently unproductive or whose negative behaviour threatens the wellbeing of the rest of the team. Self-organising teams
are often happy to take damage limitation steps to cover for a team member who is
‘having a bad day’. But if a team member is causing problems over a long period,
then more drastic action may be needed. In the most extreme case, members of the
self-organising team may engage external stakeholders to get support for removing
someone from the team.
There are a number of steps we can take (or tactics we can adopt) to maximise the chances of team success.
2.7.3 Launch
The launch phase of a virtual team is particularly important. A kick-off phase serves
five main purposes [10]:
It is useful to be aware of another type of team that exists within the software
development ecosystem. These are teams with members that do not contribute
to shared work tasks but that nevertheless share ways of working, membership
rituals and often shared goals. A community of practice is a voluntary, often rather
unstructured, group that supports and facilitates knowledge and experience sharing.
In the agile development culture developed at the music streaming service Spotify, the concept of Guilds was introduced, which to some extent formalises the community of practice as part of the development process [18]. The Guilds tend to
be organic and emergent. Guilds vary in size, mission, membership and activities.
2.9 Exercises
Start by creating a learning journal for Part I People, if you haven’t already. Use this
learning journal to keep notes on the things you learn. You can also use the learning
journal to plan your future skills development activities. What are your priorities?
The journal should, eventually, include a section for each book chapter.
It is better not to look at the hints, tips and solutions chapter, at this stage. First
actually perform the exercises (but, still, do not look at the hints or tips). Then
reflect. Only after that, look at the hints, tips and advice in Sect. 2.10.
But, this might be too tied to specific technologies and not sufficiently focused
on the objectives you are trying to achieve. SFIA also has six categories of
professional skills, but they are more focused on the goal or purpose:
• Strategy and architecture
• Change and transformation
• Development and implementation
• Delivery and operation
• Skills and quality
• Relationships and engagement
These categories are divided into sub-categories and around 100 different
skill areas. For example, the development and implementation category
includes:
• Systems development
– Systems development management
– Software design
– Software development
– Database design
– Network design
– Testing
• User experience
– User experience analysis
– User experience design
– User experience evaluation
• Installation and integration
– Systems integration and build
– Software configuration
– Systems installation
I like SFIA (and used it in industry to help me create job descriptions) because
you can use it to learn about how to develop new aspects of your skill set.
For each category, there is a breakdown of how the skill is applied at
the different levels. For example, in the Systems development category and
Programming/software development sub-category:
• Level 2 is described as ‘Designs, codes, verifies, tests, documents, amends
and refactors simple programs/scripts. Applies agreed standards and tools,
to achieve a well-engineered result. Reviews own work’.
References
15. Morgan, G.: Images of Organization, 1st edn. SAGE Publications, Thousand Oaks, California
(2006)
16. SFIA Foundation: SFIA (2018). https://www.sfia-online.org/en
17. Slack: Where work happens (2019). https://slack.com/intl/en-gb/
18. Smite, D., Moe, N.B., Levinta, G., Floryan, M.: Spotify guilds: how to succeed with knowledge
sharing in large-scale agile organizations. IEEE Softw. 36(2), 51–57 (2019). https://doi.org/10.
1109/MS.2018.2886178
19. Takeuchi, H., Nonaka, I.: The new new product development game. Harv. Bus. Rev. 64(1),
137–146 (1986)
20. Tuckman, B.W., Jensen, M.A.C.: Stages of small-group development revisited. Group Organ.
Stud. 2(4), 419–427 (1977). https://doi.org/10.1177/105960117700200404
Chapter 3
Agile Roles
Abstract Self-organising teams create software. Scrum masters and product own-
ers provide an environment in which teams can work. The scrum master facilitates
team working, mentoring team members and removing impediments. The product
owner engages with clients and markets to define and prioritise requirements. This
chapter explores the scrum master and product owner roles in detail and the skills
needed for them to perform their activities.
3.1 Introduction
In Chap. 2, I discussed self-organising teams and how they work. In this chapter,
we will focus on roles outside the self-organising team and introduce the skills you
need to perform these roles. The scrum master and product owner support the work
of self-organising teams.
Scrum masters facilitate the scrum process on behalf of the team, monitor team
status and remove impediments [7, 8]. Practitioners consider the role as central to
the success of the scrum method [2]. You can learn more about the ceremonies that
involve scrum masters later, in Chap. 13.
Research that investigates what scrum masters actually do [2, 6] identifies five
coordination activities: process anchor, stand-up facilitator, impediment remover,
sprint planner and integration anchor. We can now discuss each of these activities in
turn.
The process anchor mentors team members in scrum method use. The agile process
is described in Chap. 13. The idea is to nurture, encourage and perhaps gently cajole
people to learn and use agile methods appropriately. This means understanding how
agile methods work, by creating transparency about who is doing what.
The scrum master is not a team leader or supervisor in the conventional sense. The scrum master does not tell people what to do. However, the scrum master has to encourage sometimes recalcitrant team members to give their productive best.
Agile teams often like to get into a rhythm, known as a cadence. That means
everyone knows what to expect when a week starts with sprint planning or ends
with a customer demonstration. People can plan their work, and perhaps even their
social lives, around this iteration cadence.
The iteration planner activity helps select and estimate requirements for implemen-
tation. Iteration planning is described in Sect. 13.2. Everyone on the team is involved
in iteration planning for half a day, or a day, at the start of each iteration; see
Fig. 3.1a. But, the important point to make here is that the scrum master facilitates
the iteration planning process.
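As a very simplified illustration of what iteration planning produces, the sketch below selects the highest-priority backlog items that fit an assumed team capacity. The item names, story points and capacity figure are all invented; real planning is a team conversation, not an algorithm.

```python
# Candidate backlog items: (priority, name, estimate in story points).
# A lower priority number means the product owner values the item more.
backlog = [
    (1, "User can log in", 5),
    (2, "User can reset a forgotten password", 3),
    (3, "Admin can export a usage report", 8),
    (4, "Improve page-load time", 5),
]

capacity = 12  # assumed team capacity for this iteration, in story points

sprint_backlog, committed = [], 0
for priority, name, points in sorted(backlog):
    if committed + points <= capacity:  # take the item only if it still fits
        sprint_backlog.append(name)
        committed += points

print(f"Selected {committed}/{capacity} points: {sprint_backlog}")
```

In practice, the priorities, the estimates and the capacity figure all emerge from the planning discussion that the scrum master facilitates.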
The impediment remover eliminates work blockages for team members. Imped-
iments are often varieties of missing information or lacking knowledge. So, a
scrum master needs to find out who has the elusive knowledge and convey that
to the blocked team member. This process of information gathering requires a combination of networking skills (knowledge of who knows what) and diplomacy (to convince people who are too busy or self-regarding to part with information).
Fig. 3.1 (a) Iteration structure: sprint planning; design, build and test features; daily (stand-up) coordination meetings; customer demonstration; retrospective; and deployment of working code. (b) Feature life cycle
3.3 Product Owner/On-site Customer
In extreme programming, the product owner’s equivalent is the on-site customer, who is available to the team on a full-time basis [4]. The product owner role is
formally defined in scrum [8]. Product ownership plays a central role in the overall
software development process [5].
I now want to look at some of the activities product owners perform as part of
their role [3]. Taken together, these activities comprise the product owner role.
In the product grooming activity, the product owner gathers, or elicits, requirements
from business clients in business-to-business contexts. You can learn more about
requirements in Chap. 7. The product owner needs to interact with customers
in order to gather the requirements. For business-to-consumer applications, the
product owner needs to develop detailed awareness of market trends and competitor
behaviour.
However, simply compiling a list of requirements is not sufficient; the require-
ments must also be prioritised according to their value to the business.
3.3.2 Prioritiser
In the prioritiser activity, the product owner ensures that requirements bring
maximum value to the business. In each iteration, the product owner decides
which requirements from the product backlog are going to be most important for
implementation in the next iteration. Sometimes, this involves choosing to prioritise
the needs of one customer group or segment over another. Product owners become
experienced in assessing and prioritising the needs of different segments of the
customer base.
3.3.3 Release Master
In the release master activity, the product owner manages release plans and approves
software source code for release to customers. Early iterations may not have
sufficient code to deploy; see Fig. 3.2. Approving releases requires a decision about
the quality of software (is it good enough to give to customers?) and the scope of
the software (is there enough useful functionality to give to customers?).
Fig. 3.2 Release planning across iterations: iterations 1 and 2 (plan, requirements, design, build, test) deploy no code; from iteration 3 onwards, code is deployed
3.3.4 Communicator
In the communicator activity, the product owner connects onshore and offshore
stakeholders in the project team to manage geographical distribution. Geographical
distribution is not an ideal attribute for a project team. We would prefer everyone to
be located together in the same site. The ease of communication and movement of
digital goods means that geographical distribution has become a feature of software
development programmes. The product owner, in the communicator activity, uses
audio and videoconferencing and online collaboration tools.
3.3.5 Traveller
In the traveller activity, the product owner spends time with geographically remote
stakeholders gathering first-hand knowledge of their needs and priorities. For
example, a product owner based offshore will sometimes spend time (between 1
and 3 months, depending on the scale of the project) on the client site at the start of
the project, becoming familiar with any special features of the client’s requirements.
The traveller activity is important for supporting development teams because the travelling product owner is based at the customer site and can get answers to questions.
3.3.6 Intermediary
The product owner, in the intermediary activity, interfaces with senior executives,
driving software development programmes and disseminating domain knowledge to
teams. Domain knowledge is understanding of the business domain or sector of the
application software being created. For example, it might be an application in travel,
financial services or retail. To perform the intermediary activity, product owners need to have extensive experience of the business domain of the particular system.
Our research has also identified a set of product owner behaviours. These are traits that product owners display that are seen as desirable by their line managers. The three main product owner behaviours we identified are favouring face-to-face interactions, understanding and focusing on real goals, and making product owner teams well defined [1].
As has been suggested, for large projects, the product sponsor, intermediary,
technical architect and other members form a product owner team. The process
of building the product owner team should be explicit and well defined. Product
sponsors should create well-defined processes for product owner team building,
induction of new members and succession planning.
We’ll look at large-scale projects in more detail in Chap. 18. But, there are a couple
of roles it is worth thinking about now: product sponsor and technical architect.
coherent approach. Hence, the overall system architecture can be much more finely
balanced and complex.
3.6 Exercises
These exercises will help you practise the agile roles discussed in this chapter.
Don’t look at the hints, tips and solutions chapter, at this stage. First actually do
the exercises, then look at the advice in Sect. 3.7.
If you notice gaps in the skills available, then you can attempt training
or professional development in these areas. A knowledgeable proxy product
owner can fill gaps in support activities meant to be performed by your actual
product owner.
• Everyone in the team privately writes three sticky notes: ‘things we should
continue to do’.
• Collect all the sticky notes (which should be anonymous) together on a
blank whiteboard (physical or virtual).
• Everyone writes three sticky notes: ‘potential areas for learning or
improvement’.
• Collect all the sticky notes together on a blank whiteboard.
• Spend a few minutes, as a group, reviewing all the sticky notes.
• Try to collect the ‘potential areas for learning or improvement’ into groups
or categories. Look for themes.
• Choose the top three ‘potential areas for learning or improvement’. The
top three are likely to be areas of consensus or at least mentioned on more
than one sticky note.
• Create one action point for each of the top three ‘potential areas for
learning or improvement’.
You should encourage implementation of the three action points during the
coming iteration. The scrum master should remind the team members about
the action points during the iteration, to help learning and improvement.
After you look at these hints and tips, make some further notes about the
quality of your solutions. Try to identify areas where you performed well and
those where you might benefit from further reading or other learning.
In this chapter, I have explained the scrum master and product owner roles. I have
described how the scrum master facilitates teamwork, mentors team members and
removes impediments. We do not advocate having a team leader when using agile
methods. In contrast, the scrum master facilitates the self-organising team discussed
in Chap. 2.
The product owner, in contrast, defines and prioritises requirements. The product
owner reviews demonstrations of working code at the end of each sprint and decides
if code quality is sufficient for release to customers. The exercises have focused on
facilitating stand-up meetings, customer demonstrations and sprint planning.
Should we do our best to build a good solution? Yes, of course. Should we tell our
boss if we are failing to achieve this goal? Well, yes. It might not be pleasant. But
we need to be able to communicate good news, bad news and technical decisions.
In Chap. 4, we will learn about managing other people that are interested in the
software development process, people we call stakeholders.
References
1. Bass, J.M., Haxby, A.: Tailoring product ownership in large-scale agile projects: managing scale,
distance, and governance. IEEE Softw. 36(2), 58–63 (2019). https://doi.org/10.1109/MS.2018.
2885524
2. Bass, J.: Scrum master activities: process tailoring in large enterprise projects. In: 2014 IEEE
9th International Conference on Global Software Engineering (ICGSE), pp. 6–15 (2014). https://
doi.org/10.1109/ICGSE.2014.24
3. Bass, J.M.: How product owner teams scale agile methods to large distributed enterprises.
Empirical Softw. Eng. 20(6), 1525–1557 (2015). https://doi.org/10.1007/s10664-014-9322-z
4. Beck, K., Andres, C.: Extreme Programming Explained, 2nd edn. Addison Wesley, Boston
(2004)
5. Hoda, R., Noble, J., Marshall, S.: The impact of inadequate customer involvement on self-
organizing agile teams. Inform. Softw. Technol. 53(5), 521–534 (2011). https://doi.org/10.1016/
j.infsof.2010.10.009
6. Noll, J., Razzak, M.A., Bass, J.M., Beecham, S.: A study of the scrum master’s role. In: Product-
Focused Software Process Improvement, pp. 307–323. Lecture Notes in Computer Science.
Springer, Cham (2017)
7. Schwaber, K., Beedle, M.: Agile Software Development with Scrum, 1st edn. Pearson, Upper
Saddle River (2002)
8. Schwaber, K.: Agile Project Management with Scrum, 1st edn. Microsoft Press, Redmond
(2004)
Chapter 4
Managing Stakeholders
4.1 Introduction
Your team members, and the solutions you create, will benefit from skills you
acquire in managing relationships with other people outside your team. Here, I am
thinking about relationships with bosses, clients, academic supervisors or others
who have an interest in your work.
Of course, Chap. 2 has explored skills needed to manage relationships within
your self-organising team. In addition Chap. 3 discussed scrum master and product
owner roles. Here, we consider managing upwards, managing outwards, contracts
and communication skills.
best. A deep understanding of the problem we are trying to solve will significantly
improve our chances of success. In managing upwards, there are four main issues to
consider: expectations, crises, successes and unreasonable demands.
Set realistic expectations about what you are going to achieve. Don’t promise to deliver things you can’t fulfil. It is better to set expectations low and then over-deliver than to set high expectations and fail to deliver. Identify risky areas: areas where you are uncertain or that involve a high degree of novelty. Make these areas of risk public. You never know, some risky things might not be important to your stakeholders. Once you point out that they are risky, your client might remove them from the project scope.
Hopefully, you will never have a catastrophe. But if something goes wrong, it is
better to confess sooner rather than later. The idea is to give your boss, client or
other stakeholders as much time as possible to help you plan a recovery strategy.
Trying to hide a mistake or misstep is a risky strategy. It is dishonest and you might
get found out. Better to work with your stakeholders and try to come up with a way
forward.
Make sure you share your successes. When you achieve a technical breakthrough, tell people about it. Hopefully, your customer, boss or academic supervisor will be pleased to share your success, by which I mean they will be happy for you. It is important for them to understand the amount of work required to achieve that success.
Fend off unreasonable demands. It might be that your client thinks your team is superhuman or that your team is willing to work 24 hours a day, 7 days a week. Perhaps your client does not know whether it is even possible to create an entire enterprise resource planning system during a weekend hackathon.
So, you need to educate and inform. Explain how much work is involved. You
need to create a detailed breakdown of each work item involved. Each work item
needs its own estimate of effort, see Sect. 13.2. In this way, you can encourage your
customer, boss or academic supervisor to prioritise the work items they really care
about and de-emphasise those that are less important.
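For illustration only, a hypothetical breakdown for a small booking feature might read: 'search available slots' (2 days), 'reserve a slot' (3 days), 'send confirmation email' (1 day), 'cancel a reservation' (2 days). Presented like this, a client who initially asked for 'online booking by Friday' can see where the effort goes, and may decide, for example, that cancellation can wait until a later increment.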
There are lots of other stakeholders that can support your software development
activities. In a commercial setting, you might interact with personnel, finance and
payroll departments. In an academic setting, you might deal with people from
careers, library and the registrar’s department. Try to win support for your team from
these other stakeholders. Your project will go more smoothly if these stakeholders
can be convinced to help you, in spite of their own problems, priorities and
pressures. You will want your team to look professional and efficient, so respond
to enquiries promptly and with courtesy.
There might also be peer groups: other teams working on similar projects. What
can you learn from these other groups? I'm not suggesting that you unscrupulously
copy other people's work. But there might be an approach they are taking that you
can apply and learn from.
4.4 Contracts
Contracts are legal agreements between parties. Negotiating large contracts involves
specialist legal advisers. Never sign a contract you don't understand. Always get
knowledgeable and impartial advice. There are two main categories of contract that
govern the procurement of software: fixed-price, fixed-scope contracts; and time and
materials contracts.
are you getting paid for developing software according to your interpretation of the
requirements? No, of course not. You are getting paid for the client’s interpretation
of the requirements. So, you had better understand what the client means. Okay, so
what if you, as a vendor, write the requirements specification? Fine. But who pays
for writing the specification? You? Or, the client?
Finally, as we know, change is going to happen during the project. Things out
there in the real world are going to happen. A new browser version will be released.
A new operating system release will come along. You manage this process by
using change requests. A change request is a documented (and funded) change
to the contract specification. Each time the client changes their mind about the
specification, you have to estimate and cost all the consequent changes.
So now, instead of creating software, you are running a whole little industry:
gathering, estimating, implementing and testing changes to the original specifica-
tion. But despite the problems with fixed-scope contracts, they remain very popular
because of their apparent clarity and simplicity.
To overcome some problems with fixed-price, fixed-scope projects, people use time
and materials contracts to manage client and vendor relationships. Here, there is
no fixed-scope specification. Instead, the software development team charges by the
hour. The client adopts a product owner role, establishing, prioritising and managing
requirements. The team creates features requested by the product owner, iteration by
iteration.
Time and materials contracts avoid the need to manage change requests.
Consequently, effort is focused on developing working code. Good. But it is a bit
challenging convincing clients to buy the software. Initially, they will not know
what they will get, how much it will cost or when they will get it. This requires
considerable trust between the client and vendor. And so, we come full circle, back
to fixed-price, fixed-scope contracts.
Outsourcing is the generic term for buying products or services from third parties.
Corporations often outsource their catering, cleaning and so on. If you are not a
technology organisation, it can be attractive to outsource IT provision and software
development to a specialist third party. There are large international companies that
make a good living from these arrangements.
It is often said you should outsource to organisations like yours: similar in size,
conveniently local. Small companies find it risky to outsource to big companies; they
are too expensive and will 'eat you for breakfast'. And yet, some organisations
find outsourcing attractive, sometimes to far-flung, cheap and often exotic locations.
Digital goods, and the availability of excellent technical skills and computer
networks, make this possible.
You can get excellent value for money by offshoring your software development
needs in this way. But new challenges caused by inconvenient time zones and
significant geographical distances can emerge. There is also an environmental cost
to the extensive travel involved in building trust through long-distance relationships.
There are also contracts in a university or college setting. Some institutions use
learning agreements, between students and teaching staff, to help set expectations
and establish norms of learning and teaching behaviour. Education institutions also
commonly have course, programme or module descriptions for teaching staff and
other stakeholders. These course descriptions often include various forms of aims
or learning objectives. It is a good idea, of course, to familiarise yourself with the
aims and objectives of the courses that you are studying.
There are several ways to improve the quality of our communications. Here, we
focus on audience, narrative, language and process.
4.5.1 Audience
For your communication to be effective, you need to understand your audience and
their expectations. The vocabulary, terminology and jargon need to be appropriate
for them, and, crucially, may not be the same as yours. How much does the audience
know about your subject? If they are experts, don’t spend too much time on the
basics of your topic. If the audience are not specialists in your area, then avoid
using technical jargon.
4.5.2 Narrative
Why are you writing a report? To convey an argument. That argument might
be: ‘I’ve done enough good quality work that I am entitled to a good grade at
University’. Or, it might be: ‘I’ve done a diligent and thorough job of work, so
you should give me a promotion or pay rise’. Okay, so maybe you don’t want to
make those arguments explicit. But you should think carefully about the argument
you do want to convey.
The argument should be developed in logical steps and should be supported by
evidence. You can sometimes adopt a journalistic device and summarise the main
argument at the outset [6].
4.5.3 Language
4.5.4 Process
Productive writers tend to write and then edit their work. Focus, first, on getting
words on paper. Then, focus on revising and editing [2]. By analogy with iterative
software development, try to achieve cycles of writing activity followed by editing.
Write. Edit. Write. Edit. These cycles will help you improve the quality of your
writing. You can't write the finished product first time.
4.6.1 Reports
Non-fiction writing is partly about design concepts, such as clarity and
simplicity, but also a matter of basic principles, such as grammar, punctuation and
paragraph structure [12].
Good scientific writing is truthful, evidence-based, clear and simple. Clear writing is
an indicator of clear thinking. It is wise to follow the guidance given by proponents
of this writing style [8, 11, 12]. As I have already said, write and then edit your
writing.
The paragraph is the basic unit of composition. A paragraph focuses on one topic.
Each paragraph benefits from a topic sentence, which summarises the subject. The
topic sentence is followed by further expansion. A good paragraph concludes with
a summary sentence that reinforces the topic sentence.
Use the active voice. An active voice is more direct, forcible and vigorous than
passive writing. This can be controversial. For academic writing, ‘use the third
person’ is often given as advice. But you can see that ‘I used an object-oriented
design method’ is clearer than ‘an object-oriented design method was used in this
project’. Use active voice where you can.
Make statements positive. Avoid hesitating, evasive and non-committal language.
You might say ‘the performance did not fluctuate as user numbers seemed to vary’.
It is better to say ‘performance was constant, despite varying user numbers’.
Remove unnecessary words. Make your writing concise. Making your argument
with the fewest possible words leads to strong, direct writing. These composition
guidelines are to make life easier for the reader [1, 3]. Don’t forget: Write first, to
get the ideas down on paper, and then edit your writing to improve composition and
clarity.
As well as a front sheet, and a table of contents, a report of any length will likely
need an abstract or executive summary. The abstract summarises the content of the
entire report and consequently should be written at the end when you know what
the report actually says. A structured abstract comprises context, goal, methods,
results and conclusions. The context describes the domain or application area of the
report. The abstract then summarises the problem solved or goal of the project. Next,
provide a brief overview of the methods or approach you took. Then, summarise
your findings or results. Finally, summarise your conclusions.
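To make the structure concrete, here is a short, entirely fictional example of a structured abstract: 'Small sports clubs struggle to keep track of equipment loans (context). This project aimed to build a web-based loans tracker for a local hockey club (goal). The system was developed over six two-week iterations using Scrum, with the club secretary acting as product owner (methods). The delivered system supports borrowing, returning and reporting, and reduced unreturned equipment during a four-week trial (results). Incremental delivery with an engaged product owner proved effective for this small project (conclusions).'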
Next, your report needs an introduction. An introduction has two main purposes.
Firstly, provide a brief overview of the motivation or justification for performing
the work. Include some evidence supporting the significance of the problem you
are solving. Use references to sources and provide a bit more detail than given in
the abstract. Secondly, summarise the main outcome, result or conclusion of the
report. You might describe the main results from some experiments and discuss
their implications. Or, you might provide an overview of the features of a system
you have built showing how they solve the problem you confronted.
A report often requires a survey of the literature or field. Present the survey as
if it were a funnel: start off with the broader topics and then gradually focus on the
specific area of your project. A weak literature review will describe each source in
turn. A stronger literature review will organise a collection of sources into themes
and then compare and critically discuss each theme.
Your report needs to describe the approach you took. In medical research, this
is referred to as ‘methods and materials’. You are supposed to provide enough
information that someone could repeat your work and consequently get a similar
outcome. Justify your choice of approach to solving the problem.
The main body of the report focuses on the findings of your work. You might be
advocating a new approach or justifying a technology choice. Here you can weigh
up the advantages and disadvantages of the various options. For a design and build
type project, you will have a series of chapters describing the requirements, design,
implementation and evaluation phases of development. In a more scientifically
oriented project, you might describe the results of a series of experiments you have
conducted.
You will then, likely, discuss or analyse your findings. If you have research
questions, you can answer them here. If not, you discuss your findings in relation
to other published results and describe the implications of your proposals in the
context of the problem you are trying to solve.
Finally, the conclusions section has three main purposes: summary, conclusions and
future work. As reports get bigger, a carefully selected re-statement of ideas
becomes an important means of giving emphasis. Never copy and paste within the
report. We don’t want to read the same sentence or paragraph twice. But, briefly
re-stating the context, aims, methods and findings shows that you understand the
most important elements of the project. You can then describe the lessons you have
learned from the project. What aspects of the project went well? What aspects of the
project proved to be more challenging than expected? At a fundamental level, was
the method you selected appropriate? It is helpful to discuss anything you would
do differently if you were asked to do the whole project over again (but this time
with the benefit of hindsight). At the end of your conclusions, include next steps
for the project or future work. Your report conclusions are important because you
summarise the whole project and describe the implications of your work in the
context of the problem you were trying to solve.
4.6.2 Presentations
There are some circumstances where a presentation can be given without any
supporting materials, for example, where the presentation is informal or very short.
An example is the agile customer demonstration, discussed further in Sect. 13.4. In
a customer demonstration, we focus our attention on demonstrating working code.
Usually, however, presentation slides of some kind are used.
Visually appealing imagery can play an important part in engaging with your
audience [5]. Choose imagery that is relevant to your topic and that reinforces your
argument. For important presentations, you can obtain photographs and diagrams
from commercial or open access visual media repositories [10]. Sometimes, a
presentation containing only images, and no words, can work well. Your imagery
needs to complement and emphasise the narrative you are telling. You want to select
imagery that reinforces, rather than distracts from, your message.
Avoid using too many words in your presentation. Do you want the audience to
be reading, or listening to the presenter? Commercial trainers follow the five-by-five
rule: no more than five rows of text on each slide, with fewer than five words in
each row. These days, presentations with even fewer words are common. Use words
for emphasis.
Your presentation needs to engage your audience. Use eye contact and engage the
entire audience. Don’t just look at one person all the time. Make your presentation
flow smoothly, and present your content confidently. Avoid a halting or hesitant
delivery style.
Your audience will be more attentive if they are calm, relaxed and focusing on
the presenter. Watch the body language of your audience, and be alert for signs of
boredom or distraction. Is the audience being distracted by noise from elsewhere?
Is it too hot (or too cold) in the room? Create an atmosphere where the audience is
listening.
For a presentation of any significance, you will want to practise, especially when
you present as a team. Make sure you stick to the agreed duration. Check for
consistent presentation materials and content, and practise handing over from one
presenter to another. Make a plan for when things don't work or don't go according to
plan. When travelling abroad to give presentations, I used to carry paper copies of
materials in case projectors or electronics failed.
Various editable online writing platforms have become popular to support communi-
cations in software development teams. Tools and platforms fall in and out of favour.
At the time of writing, Slack is popular with development teams because it enables
instant messaging, content sharing and archiving [7]. Our research suggests that
while Slack is popular within development teams, in contrast, managers, executives
and client-facing relationship managers prefer face-to-face interaction, or audio- and
videoconferencing infrastructure [4].
Content management systems enable sharing of audio-visual or multimedia
content as well as written material. Diagrams, videos and recordings of workshops
can be hosted online as part of a project repository. Such resources are typically
hosted behind firewalls on secure intranets to avoid public disclosure.
4.6.4 Videos
4.7 Exercises
Working through these exercises will help you acquire skills for managing stake-
holders, as discussed in this chapter. First do the exercises, then reflect on what you
have learned. Finally, look at the advice in Sect. 4.8.
In this chapter, I have explored the relationship between your team and the outside
world. In a work environment, you might be trying to satisfy your boss or a
client; or you might be in a university, trying to satisfy academic supervisors. In
each case, you want your team to produce the best it is capable of and to get the
recognition you deserve. This is achieved by managing your relationships with
these outside stakeholders. Keep your relationships professional, and keep interested
parties informed of your progress towards goals.
Communication skills play an important part in managing relationships with others.
Use appropriate means of communication, and make sure your communications
are right for your audience. Simplicity of message, clarity and presentation quality
are key objectives.
References
1. Clark, R.P.: Writing Tools: 50 Essential Strategies for Every Writer. Little Brown Book Group,
reprint edn. (2010)
2. Elbow, P.: Writing With Power: Techniques for Mastering the Writing Process, 2nd edn. Oxford
University Press, New York (1998)
3. Purdue University: Welcome to the Purdue University Online Writing Lab (OWL) (2019).
https://owl.english.purdue.edu/. Accessed 9 Oct 2019
4. Rahy, S., Bass, J.: Information flows at inter-team boundaries in agile information systems
development. In: Themistocleous, M., Rupino da Cunha, P. (eds.) Information Systems:
EMCIS 2018. Lecture Notes in Business Information Processing, vol. 341, pp. 489–502.
Springer, Limassol, Cyprus (2019). https://doi.org/10.1007/978-3-030-11395-7_38
5. Reynolds, G.: Presentation Zen: Simple Ideas on Presentation Design and Delivery, 2nd edn.
New Riders, Berkeley (2011)
6. Schimel, J.: Writing Science: How to Write Papers That Get Cited and Proposals That Get
Funded. OUP USA, Oxford; New York (2011)
7. Slack: Where work happens (2019). https://slack.com/intl/en-gb/
8. Strunk, W., Jr., White, E.B.: The Elements of Style, 4th edn. Longman, Boston (1999)
9. Truss, L.: Eats, Shoots and Leaves. Fourth Estate, London (2009)
10. Wikimedia Commons: Wikimedia Commons (2019). https://commons.wikimedia.org
11. Strunk, W., Jr.: The Elements of Style. http://www.bartleby.com/141/ (1918). Accessed 12
Sept 2014
12. Zinsser, W.: On Writing Well: The Classic Guide to Writing Nonfiction. Harper Collins
Publishers, New York, 25th Anniversary edn. (2006)
Chapter 5
Ethics
5.1 Introduction
Ethics is about doing the right thing. For some people, ethics is about being
a professional: being seen to act like a professional, doing good work, being reliable,
seeing a project through to successful completion and so on. For others, ethics
is about protecting stakeholder interests and those of the wider public.
In recent years, technology sector influence has grown significantly. We are
seeing increasing calls for the technology sector to be held accountable in areas
less related to professionalism and more associated with issues like justice, equality
and fairness.
These days, you have to try really quite hard to get sufficiently ‘off-grid’ to not
benefit from somebody’s software. Software can bring us benefits in virtually all
walks of life. But as software technologies become more pervasive, concerns grow
that some aspects are not really serving some stakeholders well or fairly.
The non-hierarchical and open access ethos of early Internet technologies has
set public expectations that Internet services are provided with no financial cost.
Many of these services, such as Internet search, mapping and social media, are
highly valued by users and becoming difficult to avoid. Of course, these services
are not actually free to provide. It’s just that the transaction may not be obvious
to everyone. The emergence of ‘free’ services, as John Honeyball, from PC Pro
magazine [9], succinctly puts it, means that ‘you are the product’ [5]. The ‘free’
services are actually designed to obtain data about users, for example, so that they
can be more effectively targeted by advertisers.
This process has, perhaps surprisingly, created some of the largest commercial
organisations ever seen. For some, this has become known as surveillance capital-
ism [22]. Many of these organisations have been penalised by regulators or legal
authorities for their lack of transparency or for monopolistic (anti-trust) behaviour.
Some argue that the penalties imposed have been trivial considering the size and
revenues of the organisations punished in this way.
Researchers have found that search engines reinforce ethnic [12] and gender
stereotypes. The near monopoly domination of Internet search engines like Google,
which are motivated by sales of online advertising, does not offer a level playing
field for ideas, identities and activities. In 2011, a Google search for ‘girls’ produced
innocuous listings relating to fashion and health. In contrast, a search for ‘black
girls’ produced a list dominated by pornographic websites. By 2012, the search
algorithm had been changed, and the results listing produced by a search for ‘black
girls’ had changed to something innocuous, whereas a search for ‘Asian girls’ still
produced a listing dominated by pornographic sites.
Platforms have created new markets in sectors such as ride hailing (Uber), temporary
overnight accommodation (Airbnb) and even creative work (Amazon MTurk,
Upwork, RentACoder and so on). These platforms can provide good experiences
for consumers and service providers alike. But there have also been critiques [20].
Work has been commissioned but has gone unpaid. Platform workers have been
barred without explanation or recourse to appeal. There have been concerns that
platform workers spend a lot of time searching for jobs that match their skill set and
then creating proposals. The platform workers are then rewarded with relatively
small jobs on low rates of pay.
As the public comes to rely on IT ever more, when large systems fail, it attracts
attention. IT outages in the financial services sector, for example, have denied
millions of customers access to their bank accounts. In some cases, outages have
prevented customers from obtaining cash or companies paying salaries to their staff.
An IT outage at one major airline grounded flights and stranded around 75,000
passengers.
Outages affecting millions of people can attract significant publicity, adversely
affecting company reputations and share prices (the value of the company). Financial
regulators have questioned executives in the banking sector, and senior
executives have left their jobs.
The consequences of failure in safety-critical applications are even more severe.
A fault in the user interface design of a Canadian computer-controlled radiation
therapy machine allowed operators to accidentally administer fatal overdoses.
Several patients died, and the equipment manufacturer no longer exists.
At the time of writing, anti-stall software has been implicated in two commercial
airliner crashes that killed over 300 people. The Boeing 737 MAX
8 aircraft had software designed to push the aircraft nose down, to reduce the risk
of a stall. Evidence to a US congressional hearing revealed that this software relied on
a single sensor, a so-called single point of failure. Fears have been expressed that
sensor failure could cause the anti-stall software to push the aircraft into a dive.
Some unethical behaviour can be viewed as borderline. Such professional lapses can
result in disciplinary action by regulators or employers, but they are narrow in scope
and have limited impact on an organisation's customers, consumers, users or
reputation.
Criminal action has resulted in jail terms for IT staff members. There are six
main categories of unethical behaviour on software engineering projects [15]:
• Lying
• Computer fraud and unauthorised access
• Information theft
• Espionage
• Sabotage
• Subversion of project goals
Lying is almost never admitted to in software engineering projects. In fact, the
word is almost always avoided at all costs. However, it has a long tradition in
the technology sector. Developers exaggerate progress, project managers ‘sanitise’
status reporting and sales people advertise software benefiting from features that
have yet to be implemented. In some cases, there is a fine line between an optimistic
account and reality. Sometimes there appears to be outright fabrication.
Agile methods can help combat various forms of lying by creating a culture of
transparency and openness. Estimation effort is targeted on short-term increments
rather than attempting to create detailed estimates for far-off features. Poor-quality
estimates are quickly exposed, within weeks, rather than months. Daily coordination
meetings help ensure transparency on project status. Optimistic assessments of
progress are quickly exposed. Finally, time and materials contracts (see Sect. 4.4)
make it more attractive to sell the effort needed to create new features than to sell
features that don't actually exist.
An important area of public concern is the criminal use of software-intensive
infrastructure. Various forms of bank fraud and monetary theft are a serious threat.
There have been high-profile cases of unauthorised access to computer
systems operated by public, government and even military authorities. Perpetrators
can be cyber joyriders, or sometimes there is the suspicion of corporate or
governmental actors. Criminal techniques for credit card fraud can include:
• Cracking a server—obtaining card details from databases
• Phishing—enticing victims to hand over credit card details to a fake website
• Spear phishing—targeting high-net-worth individuals to obtain card details
• Pharming—creating a fake website for a well-known financial services provider
• Spyware—malicious software that captures details from victims
Social engineering is the use of various forms of trickery or deception to persuade
people to divulge confidential information. Culprits can seek to gather a range of
personal information, with each data item being used to obtain other, more sensitive,
items. An important mitigating strategy is to educate users that IT support staff will
never ask for passwords.
Information theft can include sensitive corporate intelligence or development
artefacts, such as source code. Sensitive information might include client lists,
employee records or pricing details, which could give competitors an advantage.
Source code or design artefact thefts are also forms of information theft. As with
other digital products, the rightful owner still has the original, but the culprit has
misappropriated a copy. Good software system security measures can detect unusual
usage patterns, such as bulk file downloads.
Open-source software takes an alternative approach. In open-source, the software
is a consequence of the expertise and process used during creation. Hence, the
business model is either based on specialist skills used to create the source code
or on providing consulting or support services around the code.
Espionage, including industrial espionage, is the gathering of confidential material
from a foreign country or a competitor company. Diplomats have been expelled as a result
of allegations of state-sponsored cyber-intelligence gathering. There have been
numerous cases of employees moving company and taking corporate intelligence
with them. Non-compete contracts are a common tactic that prohibit employees
from working in a specific business domain for a period of time.
There is a choice of professional bodies that support and encourage professionals
in the computing, IT and software sectors. These bodies offer services to their
members and advocate for the wider discipline. They include:
• Association for Computing Machinery (ACM) [2]
• British Computer Society (BCS), the Chartered Institute for IT [4]
• Institution of Engineering and Technology (IET) [21]
• Institute of Electrical and Electronics Engineers, Computer Society (IEEE CS)
[10]
These bodies have members from around the world and often have member
groups, such as branches and specialist groups, organised around geographies and
technical specialisms, to create opportunities for practitioners to meet, network and
exchange ideas about the field. Many of these bodies organise conferences and
journals to publish the latest research in the field.
The International Federation for Information Processing (IFIP) [11] also supports
the discipline but is not a membership body for practitioners. In contrast, IFIP
comprises professional bodies from around the world; the BCS and ACM, for
example, are members of IFIP. IFIP also has working groups covering many
technical specialisms, along with conferences and journals.
By becoming members of the professional bodies, IT professionals agree to
uphold certain standards of practice. Often this involves making commitments
around honesty and integrity. A member breaching the standards could be expelled.
BCS, the Chartered Institute for IT, has created a code of conduct for members. The
six-page code has a specific section on public interest which states [3]:
'you shall:
• have due regard for public health, privacy, security and wellbeing of others and the
environment;
• have due regard for the legitimate rights of third parties;
• conduct your professional activities without discrimination on the grounds of sex,
sexual orientation, marital status, nationality, colour, race, ethnic origin, religion, age
or disability, or of any other condition or requirement; and
• promote equal access to the benefits of IT and seek to promote the inclusion of all sectors
in society wherever opportunities arise ...'
This implies a duty of care, by IT professionals, towards the wider public. There are
also sections about competency [3]:
You shall:
• only undertake to do work or provide a service that is within your professional
competence.
• NOT claim any level of competence that you do not possess.
The ACM Code of Ethics, like that of the BCS, has a commitment to the public
interest but also makes the case for professional responsibility in handling personal
information:
‘a computing professional should become conversant in the various definitions and forms of
privacy and should understand the rights and responsibilities associated with the collection
and use of personal information.’
At the start of this chapter, I alluded to some of the problems that have heightened
public concerns about the software sector: data breaches, service outages, misuse of
data and so on. The negative impacts on members of the public I mentioned have
happened despite the existence of professional bodies and their professional codes.
There is increased awareness, among professional bodies, of public disapproval
of ethical lapses in the technology sector. Several professional bodies now provide
help desks or contact points for practitioners facing an ethical dilemma.
5.5 Activism
Software engineers have used various forms of activism to address ethical transgressions.
Some employees have organised petitions to gather support for strengthening
ethical positions.
5.5.1 Whistle-Blowing
Whistle-blowing is the act of exposing ethical wrong-doing with the aim of halting
the behaviour. Whistle-blowing needs to be motivated by a commitment to the
public good. Whistle-blowers must carefully evaluate the wrongs they seek to
expose and choose a suitable outlet.
Whistle-blowing carries risks for the whistle-blower. There is a danger that senior
management will be unwilling or unable to tackle the bad behaviour and instead
focus on 'shooting the messenger'. Legal authorities, in the current climate, are
likely to be supportive of those who expose financial or sexual misconduct.
5.5.2 Unions
Trade unions have not been popular in the technology sector. Trade unions are
membership bodies committed to employment protection for their members. They
offer legal support for their members in certain employment disputes and can offer
a means for employees to work together to address employment-related and wider
concerns.
There have been persistent stories about poor working conditions in the technol-
ogy sector. For example, allegations that Amazon delivery drivers are unable to find
time for toilet breaks have led to some public relations controversies [14].
Trade unions have, in recent years, enjoyed some success in tackling unfair
practices employed by technology companies. For example, the App Drivers and
Couriers Union successfully challenged Uber in the UK Supreme Court [1].
Subsequently, the company announced plans to pay minimum wage, holiday pay
and pensions [7]. Trade unions can play an important role in providing employees with
a voice, a forum to discuss concerns and workplace advice.
The Skills Framework for the Information Age ‘describes the skills and competen-
cies required by professionals in roles involved in information and communication
technologies, digital transformation and software engineering’ [16].
The framework comprises seven levels, as shown in Table 5.1. Each level, in turn,
is defined in terms of responsibilities, autonomy, influence, complexity, knowledge
and business skills, as shown in Table 5.2, for Level 1. Consequently, each skill has
a rich description of competencies and responsibilities.
Table 5.2 Level 1 dimensions in skills framework for the information age [16]
Autonomy: Works under supervision. Uses little discretion. Is expected to seek
guidance in unexpected situations.
Influence: Minimal influence. May work alone or interact with immediate colleagues.
Complexity: Performs routine activities in a structured environment. Requires
assistance in resolving unexpected problems.
Knowledge: Has a basic generic knowledge appropriate to area of work. Applies newly
acquired knowledge to develop new skills.
Business skills: Has sufficient communication skills for effective dialogue with others.
Demonstrates an organised approach to work. Uses basic systems and tools,
applications and processes. Contributes to identifying own development
opportunities. Follows code of conduct, ethics and organisational standards. Is aware
of health and safety issues. Understands and applies basic security practice.
While professional body certifications and memberships can help with career
development, there are a wide range of commercial certifications and massive open
online courses (MOOCs) that can help with more specific skills. Some commercial
certifications are well respected but are often focused on specific product versions
and tend to be expensive. While MOOCs from reputable providers can be of high
quality and up to date, the dropout rates are very high due in part to the online
delivery format.
5.7 Exercises
Completing these exercises will help you apply the skills in ethics you are acquiring
from this chapter. Remember, it is best if you don’t look at the hints, tips and
solutions chapter, at this stage. I suggest you do the exercises, then look at the advice
in Sect. 5.8.
identify their own needs and create plans to update existing skills and acquire
new knowledge.
Professional bodies will encourage you to become aware of the legal and
regulatory framework within which you work. They want their members
to stay on the right side of the law to avoid damaging their professional
reputation. You should seek out the opportunities professional bodies provide
to keep up to date with the legal or regulatory landscape and changes.
Software and digital technologies have been enthusiastically adopted in many walks
of life and have the potential to bring significant benefits. However, there are
also risks that technology can amplify unfairness, inequality and disadvantage.
Marginalised groups can find their undervalued status even further undermined by
the introduction of new software systems. The rich and powerful are disproportion-
ately empowered to exploit technology to their advantage. Our responsibility, as
software professionals, is to educate ourselves to understand these risks and where
possible initiate mitigation.
References
13. No GCP for CBP: Google must stand against human rights abuses: #NoGCPfor-
CBP (2019). https://medium.com/@no.gcp.for.cbp/google-must-stand-against-human-rights-
abuses-nogcpforcbp-88c60e1fc35e
14. O’Neil, L.: Amazon’s denial of workers urinating in bottles puts the pee in
PR fiasco (2021). http://www.theguardian.com/lifeandstyle/2021/mar/25/amazon-bottles-pee-
tweet-warehouse-workers
15. Rost, J., Glass, R.L.: The Dark Side of Software Engineering: Evil on Computing Projects.
John Wiley & Sons, Hoboken (2013)
16. SFIA Foundation: The global skills and competency framework for a digital world (2003).
https://sfia-online.org/en
17. SFIA Foundation: Self-assessment guidelines (2003). https://sfia-online.org/en/tools-and-
resources/using-sfia/sfia-assessment/self-assessment-guidelines
18. SFIA Foundation: SFIA (2018). https://www.sfia-online.org/en
19. Shane, S., Wakabayashi, D.: ‘The Business of War’: Google Employees Protest Work for
the Pentagon. The New York Times (2018). https://www.nytimes.com/2018/04/04/technology/
google-letter-ceo-pentagon-project.html
20. Srnicek, N.: Platform Capitalism. Polity Press, Cambridge (2016)
21. The IET: IET – Home (2019). https://www.theiet.org/
22. Zuboff, P.S.: The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power, main edn. Profile Books, London (2019)
Chapter 6
Tabby Cat Project, Getting Started
Abstract In this chapter, we consider forming a team to create the Tabby Cat
case study project. This project will create an opportunity to apply the ideas from
the chapters in Part I of the book. We will apply the self-organising team, scrum
master and product owner roles to the Tabby Cat project. We will also explore
managing stakeholders and professional issues. Tabby Cat is software for displaying
activity from an online source code repository. You can download information about
commits on the repository and display the data using various filters and searches.
6.1 Introduction
This case study allows us to summarise and apply the most important ideas we have
covered in Part I. Here, you can learn more about agile roles, the self-organising
team, managing stakeholders and professional issues.
This case study is based on software developed by Red Ocelot Ltd. [1]. In
Chap. 12, you can learn about the case study requirements, design and implementa-
tion.
Your aim is to form a team and create the Tabby Cat product. Tabby Cat is a skeleton
software service for obtaining and displaying activity on a GitHub repository. Tabby
Cat can connect to any public GitHub repository and extract data on commits, issues
and metrics.
Once the data is extracted from GitHub, a listing can be produced. This listing
can help understand the focus of developer activity on the target repository.
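As a taste of what is involved, the sketch below shows one way a team might pull commit activity from a public GitHub repository using the GitHub REST API. This is not the Red Ocelot implementation; the choice of Python, the requests library and the example repository name are assumptions made purely for illustration.

import requests

def fetch_commits(owner, repo, per_page=10):
    """Return (author, date, message) tuples for recent commits on a public repository."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    # Unauthenticated requests are rate-limited by GitHub, which is fine for a small demo.
    response = requests.get(url, params={"per_page": per_page}, timeout=10)
    response.raise_for_status()
    results = []
    for item in response.json():
        commit = item["commit"]
        results.append((commit["author"]["name"],
                        commit["author"]["date"],
                        commit["message"].splitlines()[0]))
    return results

if __name__ == "__main__":
    # 'octocat/Hello-World' is a public demonstration repository; substitute your own target.
    for author, date, message in fetch_commits("octocat", "Hello-World"):
        print(f"{date}  {author}: {message}")

The kind of listing this produces is the raw material that Tabby Cat's filters and searches then organise for display.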
I have recommended that you create and update a learning journal when you
do the exercises in each chapter (see Exercise 2.1). Now is a good time to
reflect on your journal:
• Re-read your learning journal from the chapter exercises in Part I of the
book.
• Think about what went well when you did the exercises.
• Think about what didn’t go so well.
• Make some notes, in your learning journal, about the strengths and
weaknesses of your work in these areas.
• Create some actions or set some targets for your future learning.
Team members can be selected or assigned to you. Even if team members are
friends, you will likely want to learn more about their likes and dislikes in terms
of project work. I recommend you start by conducting a skills inventory of the
members of your group. Consequently, your starting point should be to read Chap. 2
and complete Exercise 2.2.
Now complete Exercise 2.3, to learn more about the other members of your team.
It is especially important in diverse teams to develop empathy and trust with other
team members. This is best achieved by understanding each other’s backgrounds
and experiences.
Team Diversity
Software teams with a diverse membership are more likely to perform well.
Diversity within the team brings different perspectives and complementary
ideas. Attracting team members with diverse skills should be a high priority.
There’s no such thing as a Sprint Zero. But, it is a useful metaphor for starting
your first iteration. Sprint zero is where you work together to prepare for software
development on the Tabby Cat project.
Now is a good time to read Chap. 3, if you haven’t already. If you have not been
assigned a scrum master, you need to choose one. As you learned in Chap. 3, the
scrum master facilitates ceremonies within the team.
First, your team can practise sprint planning, as discussed in Exercise 3.2. The
sprint planning process is described in Chap. 13. Then, your scrum master can
organise and facilitate daily stand-ups, drawing on Exercise 3.3. You may not have
any software to demonstrate at the end of Sprint Zero, so you may not want to run
a customer demonstration (if you do, though, you can look at Exercise 3.4). But,
you should probably conduct a Sprint Zero retrospective. There is more information
about conducting a retrospective in Exercise 3.5.
Next, you need a product owner. The product owner could be a real customer or
perhaps a supervisor or an academic running your course. If the product owner is
not obvious, then you need to create a proxy product owner role. Choose the person
with the most knowledge of the Tabby Cat project domain.
Finally, work with your product owner to undertake requirements gathering
workshops, such as those described in Exercises 3.6 to 3.8. You will learn more
about requirements, when you read Chap. 7. Specific requirements for the Tabby
Cat project are discussed in Chap. 12.
Now read Chap. 4, if you haven’t already. Someone in your team can focus on
working through Exercises 4.2 to 4.6. This person can think about how the team
will record important decisions, such as design decisions, and how you will report
to stakeholders on your activities.
The Tabby Cat project may not have serious ethical dilemmas, but read through
Chap. 5 in order to consider ethics issues that might arise. In particular, think about
the skills you have in the team. Perform Exercise 5.5, and consider any skills gaps
you identify. Is there any training members of the team can undertake to address
missing skills?
The Tabby Cat project will create a skeleton software system for connecting to a
public GitHub repository, extracting source code activities and making a display.
In this chapter, we have explored a range of tactics to help you form a team
to work on the Tabby Cat project. If you have applied the knowledge and skills
described, you will be working as a self-organising team with a scrum master and a
product owner. You will also have completed a Sprint Zero and practised running a
few team meetings.
In Part II of the book, we will explore the technical skills you need for an
agile project. I'll explore requirements in Chap. 7, high-level design or architecture
in Chap. 8, design in Chap. 9, development in Chap. 10 and security in Chap. 11.
Discussion of the technical side of the Tabby Cat project will continue in Chap. 12.
Reference
While Part I of the book focused on people, Part II of the book is about product.
We have to acquire skills in defining the needs our system is intended to fulfil and the
techniques for creating a software solution.
First, in Chap. 7, there is a discussion of requirements gathering and management
for incremental delivery. You will learn about distinguishing functional and non-
functional requirements. Specifically, you will learn about employing use cases and
user stories for capturing and discussing requirements.
Next, Chap. 8 explores approaches to high-level architectural styles, such as
client-server and layered architectures. You can learn about some of the most
important design principles, such as the SOLID approach.
Then Chap. 9 considers lower-level system design, most notably object-oriented
modelling and how to derive a design from a domain model. You can learn about
design patterns, such as object factories and the model-view-controller.
Incremental development issues are discussed in Chap. 10. You can learn about
the artefacts development teams create while building software systems. This will
cover topics like Kanban boards, backlogs and burndown charts.
In contrast, Chap. 11 looks at security issues and the concept of a secure-by-
design agile development process. We’ll look at creating abuse user stories to model
potential threats and guidance on secure implementations.
In Chap. 12, the ideas from Part II are applied to the Tabby Cat case study. I
explore the technical skills needed to read activity data from an online software
source code repository and display the information with various filter options.
As I have emphasised, the overall design of this book is around people in Part I,
product in Part II and process in Part III. These parts of the book are stand-
alone, more or less. So, if your main interest is in the social aspects of software
development, for instance, then you might want to skip back to Part I. On the other
hand, if your main interest is creating a systematic software development process,
then you might want to skip ahead to Part III. Some more advanced topics, such
as large-scale agile, cloud deployment and continuous integration, are described in
Part IV.
Chapter 7
Requirements
Abstract Our customers, clients, users or bosses give us requirements that define
the needs our software must fulfil. We need to understand when to use outline
requirements, for longer-term planning, and when we need full detail, for the
requirements we are going to implement now. Hence, we adopt an incremental
approach to managing requirements. We often analyse requirements using user
stories and use cases. User stories are great for helping our customers prioritise and
communicate about their software needs. Developers, however, find use cases useful
for elaborating requirements in more detail.
7.1 Introduction
one to use on your project. You can learn more about use cases in Sect. 7.4 and user
stories in Sect. 7.5.
Functional requirements are statements that describe the services the software
should provide. This might involve things like how the software will respond to
particular sets of inputs, how data should be transformed and what the software
does in particular situations. Obviously, the name functional requirement is derived
from the functions the software performs.
I will talk more about feature-driven development, in Sect. 9.2; this is about
developing one particular service at a time. A feature is some client-valued function
that the software must perform. We tend to favour descriptions of functionality from
a perspective outside the software. We are usually interested in externally visible
behaviour.
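To make the distinction concrete, for the fictional car rental system used later in this chapter, a functional requirement might read: 'The system shall record the return date and mileage when a customer returns a vehicle.' A non-functional requirement, by contrast, constrains how the system operates, for example: 'The system shall respond to rental searches within two seconds.' These wordings are illustrative only; your client's requirements will need to be captured in their own terms.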
Caution!
We have a dilemma about how to handle requirements on agile projects. On the one
hand, we can’t build the software without complete and detailed information about
what the system is supposed to do. On the other hand, it takes a long time to create
a detailed specification of requirements for the whole system, and we find some of
the requirements changing as we go along.
So, we need more detail about what we are going to build now, but we can live
with less detail now, about what we are going to build later. But, hang on. The non-
functional requirements are cross-cutting. Non-functional requirements affect the
whole system. So, we need to be very careful here.
We do need full detail about the non-functional requirements at the outset. We
need to ensure our initial software designs take into account the constraints under
which our system will operate. But it is also true that defining detailed specifications
for functions we won’t be implementing until later in the project is not necessary (so
long as they aren’t going to dramatically change our understanding of the constraints
on the software).
So, we arrive at a point where we need detail about the non-functional require-
ments from the start but that we can adopt a, sort of, moving window approach to
functional requirements. We can develop an outline, overview or fuzzy description
of the functionality of the whole system. Then, for the high-priority features we are
going to work on first, we need to obtain all the detail.
Imprecise requirements are bad news, when we build software. If requirements are
ambiguous, the development team may interpret the requirements differently from
users or clients. This means we end up building something that does not really meet
the needs of the client. And that is the bad news. We end up building a poor-quality
product, poor quality because the software does not meet the need (not poor quality
in the sense that there are defects in our code). So, when the time comes to build
functionality, we need the detail about what it is meant to do.
In large-scale projects, with multiple cooperating teams, ambiguous require-
ments can be interpreted differently by different teams. This can lead to confusion
between the teams and even source code defects. This is discussed further in
Chap. 18.
Attention!
A use case diagram shows the users of the system. We give the users a special name:
actors. The actors are shown as stick figures and are usually organised on the left
or right edge of the use case diagram. In Fig. 7.1a, you can see an actor called
Customer. There are two use cases in Fig. 7.1a: Rent and Return. The oval shapes
represent the use cases. The words Rent and Return are actually use case titles.
Finally, the connecting lines in Fig. 7.1a show us that the Customer actor can
perform both use cases, Rent and Return.
Fig. 7.1 Two simple use case examples. (a) Simplified use case diagram. (b) Car rental desk use
case diagram
The use case shown in Fig. 7.1b is getting slightly more realistic. There
are two actors, Agent and Customer, and three use cases, Rent, Return
and Valet Vehicle. Notice that there is no connecting line between the
Valet Vehicle use case and the Customer. This tells us that the Customer
actor, in this illustrative car rental system, is not required to valet their own vehicle
when they return it. The use cases that are in scope for a project are often indicated
by including a system boundary box on the use case model.
Each use case is described in more detail in a table. The table templates vary from
place to place; I show an example in Table 7.1. It is customary to complete the use
case table in full.
The use case title is the name of the use case. The name is often quite succinct,
but it needs to be unique. Each use case title corresponds to the one shown in the
use case diagram, of course.
The primary actors are listed next. A primary actor initiates the use case. As
with the use case title, it is important to make sure the actors in the use cases
correspond to the actors shown in use case diagram. Sometimes, people add a row
to list secondary actors. Secondary actors are required to complete a use case but do
not initiate the use case.
The actors are followed by a goal for the use case. The goal describes the purpose
of the use case from the user’s perspective. What is it that the user is trying to achieve
when they perform the use case?
The scope represents a boundary for the use case. What is included or not
included in the use case? Preconditions must be true before the use case runs.
Postconditions must be true after the use case has completed.
The main success scenario gives the steps that form an interaction scenario in
which nothing goes wrong. Each of the numbered steps reflects a stage in a user
interaction with the system.
Finally, the extensions describe the things that can go wrong, or happen unexpectedly,
in each step of the main success scenario. Each numbered step in the extensions
corresponds to a numbered step in the main success scenario. That is, extension
step 2 is a non-successful variation on step 2 of the main success scenario.
Extensions must be something the system can actually detect for itself [1]. Also,
there is no point in describing an extension the system can’t actually handle.
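As an illustration (not taken from a real system, and not necessarily matching the exact template in Table 7.1), the Rent use case from Fig. 7.1b might be documented like this. Title: Rent. Primary actor: Customer. Goal: obtain a vehicle for an agreed period. Scope: the rental desk system. Preconditions: the customer holds a valid driving licence and a vehicle is available. Postconditions: the vehicle is allocated to the customer and payment is authorised. Main success scenario: (1) the customer requests a vehicle for given dates; (2) the system offers available vehicles; (3) the customer selects a vehicle; (4) the system records the booking and takes payment. Extensions: (2) no vehicle is available, so the system offers alternative dates; (4) payment is declined, so the booking is not confirmed.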
A user story is also written from the perspective of a person who actually uses the
software. But, whereas use cases are described as being semi-formal, a user story
is informal because it is written in simple (natural, non-technical) language. In a
way, we can think of a user story as being a handle, or variable name, representing
some collection of functions that the software will perform. Let's look at a couple
of simple, fictitious, examples:
User Story 1
As a <holidaymaker>, I want to <book a flight>, in order to <enjoy
a holiday>.
Notice that User Story 1 has three parts: the user, in this case a <holidaymaker>,
followed by an action, <book a flight>, followed by an objective or purpose to
<enjoy a holiday>. Actually, there are lots of different templates for user stories,
but this is the one I tend to use. . .
User Story 2
As an <actor>, I want to <perform an action>, in order to <gain
some value>.
User Story 3
As a <user>, I want to <book a flight>, in order to <get a flight>.
We try to avoid using the generic name <user> in our user stories. Why? Because
a user is not a specific enough description of the person or thing interacting with
our software. We’ll explore this idea in more detail in Sect. 7.7, when we talk about
personas.
But to make things clearer, let’s imagine that our travel booking system in User
Story 1 might also have another user story, like this:
User Story 4
As a <business traveller>, I want to <book a flight>, in order to
<have a business trip>.
In both User Story 1 and User Story 4, someone wants to book a flight. So you
might think it would be a simplification to merge them both into User Story 3. But
perhaps, the <business traveller> in User Story 4 is going to be invoiced through
their company, whereas the <holidaymaker> in User Story 1 has to pay online with
a credit card. These extra details are not yet obvious from the user stories we've
presented. But this illustrates the benefit of using specific user segments in our user
stories.
So, how do we show these extra details (e.g. card payment or corporate billing)
in a user story, then? We often add acceptance test criteria to the user story. We’ll
discuss the skills you need to perform testing in Chap. 16.
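For example, illustrative acceptance test criteria for User Story 4 might include: the booking is invoiced to the traveller's registered company account rather than a personal card; the invoice references the company's purchase order number; and a booking confirmation is emailed to both the traveller and the company travel administrator. Criteria like these capture the detail that the one-line user story deliberately leaves out.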
Once you have established a series of user stories, it is a good idea to plan out
a user journey through the features [3]. The trap we are trying to avoid is creating
increments that contain valuable features but do not provide a useful end-to-end
journey for the user. We are going to try and build a matrix of user stories,
organised according to their order in a user journey and their criticality, as shown in
Fig. 7.2.
Fig. 7.2 User stories organised by their order in the user journey and by criticality; seldom-used
features appear lower on the map
7.7 Personas
Personas are fictional characters that you create to represent user segments or types
of actor interacting with your system. The idea is to help you think about using your
system from someone else's perspective. Personas arise from your research into
typical user behaviour. What are their goals, objectives and motivations in using
your system? By developing different personas, you can articulate the different
needs that user groups have.
The persona comprises a photo or cartoon image to represent this user as an
individual. You can then write fictional details about the persona’s age, gender,
ethnicity, education, lifestyle, interests, values, goals, needs, limitations, desires,
attitudes and patterns of behaviour, as appropriate for your application software.
In an online travel booking system, for example, you might distinguish between
'frequent fliers' and 'vacationers'. Frequent fliers tend to be business travellers.
This implies solo travel, metropolitan destinations, short-notice trips, late changes
of plans and corporate billing. However, we might assume that vacationers are more
likely to travel in groups; favour rural, beach or mountain destinations; make fewer,
less frequent trips; and accept online credit card payment. We can use personas
to tease out more details about these different user groups and their needs.
7.8 Exercises
You should start by creating a learning journal for Part II Product, if you haven’t
already. In the learning journal, keep notes on the things you learn. Use the learning
journal to plan your future skills development activities.
Don’t forget: it is better not to look at the hints, tips and solutions chapter, at this
stage. First, do an exercise. Next, reflect on that exercise. Then, look at the hints,
tips and advice in Sect. 7.9.
Exercise 7.4 (Use Case Diagram Exercise 3: Flight Travel Booking System)
The Flight Travel Booking System (FTBS) provides online services to
travellers for flight and hotel reservations using a reservations transaction
handling system such as Amadeus or Sabre. Travellers can search, reserve,
book and cancel flights. Traveller cancellations can be performed up to 24 h
before departure. Frequent fliers can cancel reservations with no penalty
within 6 h of departure.
Exercise Tasks Your objective is to analyse the scenario above and build a
use case model by doing the following tasks:
1. Identify and name the actors of the system.
2. For each actor in the system, identify and name the use cases for the actor.
3. Draw a simple use case diagram for the system.
[Use case diagram figures: a university assessment system (actors Student, Lecturer and Administrator, with use cases such as register, transfer, upload marks, ratify marks and archive), a library system (actors Borrower, Librarian and Senior Librarian, with use cases such as purchase, search, borrow, return and reserve) and a flight booking system (actors Traveller and Frequent Flier, with use cases such as search, book and cancel)]
References
1. Cockburn, A.: Writing Effective Use Cases. Addison-Wesley Professional, Upper Saddle River
(2000)
2. Patton, J.: It’s all in how you slice it. Better Softw. Mag. 2005(01) (2005). https://www.
stickyminds.com/better-software-magazine/its-all-how-you-slice-it
3. Patton, J.: User Story Mapping: Discover the Whole Story, Build the Right Product, 1st edn.
O’Reilly Media, Sebastopol (2014)
4. Rahy, S., Bass, J.M.: Managing non-functional requirements in agile software development. IET
Softw., 1–13 (2021). https://doi.org/10.1049/sfw2.12037
5. Wikipedia: List of Unified Modeling Language tools (2019). https://en.wikipedia.org/wiki/List_
of_Unified_Modeling_Language_tools. Page Version ID: 909970969
Chapter 8
Architecture
Abstract In this chapter, we explore software structuring skills that help achieve
software requirements and manage change. Organising software structure can
simplify communications and enable team members to work on different parts
of the project at the same time. Software features are independent end-to-end
fragments of the functionality of the system. In feature-driven development, end-
to-end fragments are worked on by different team members, in parallel. Other
structures, or architectures, can also reduce dependencies between one part of the
system and another. Structures discussed include client-server, pipe and filter and
layered architectures as well as design patterns such as the model-view controller.
8.1 Introduction
Software architecture concerns the overall structure and organisation of the system.
On the one hand, architecture is a process, the creative and design activities involved
in making an architecture or system structure. In this view, architecture is a set of
high-level system design activities.
On the other hand, architecture is one or more outputs or deliverables, a set
of architecture design models that describe how the system is organised as a set
of communicating components. In this view, architecture is a set of development
artefacts, skeleton software systems, drawings or reports used to convey the desired
system structuring.
Architectural design happens early in the development process. It overlaps with
requirements gathering and often needs to be revisited later during development or
production as a refactoring activity. Architectural design requires consultation with
stakeholders and is needed to:
• Provide a software infrastructure to meet non-functional requirements
• Enable everyone to clearly picture the overall organisation of the system
• Simplify software development collaboration between more than one person or
team
8.2.1 Refactoring
8.2.2 Rework
Rework is not the same as refactoring. Rework is repeating the same work over
again, in the worst case, because it was poorly done in the first place. Nobody likes
rework. Managers hate rework because it is a needless cost. Self-organising team
members dislike rework because it is a sign of poor-quality craft.
Experienced teams are often working on applications that are similar to others
that they have built before. Hence, architecture tends to be inherited and refined from
previous efforts. Obviously, everyone wants to be sure the inherited architecture is
good. Hence, some effort to evaluate architecture quality is needed for working or
live systems.
But what are we to do if we are learning with an inexperienced team or in a new
application domain? We don’t have any reliable architecture to inherit from previous
efforts. Well, think about the trade-offs. Too much upfront design might be a waste
of resources. But with too little design, we might end up refactoring every iteration.
And that could result in excessive rework.
For sure, you mustn’t attempt to create a fully articulated, detailed architecture
design. If we try to create detailed architecture designs, they may turn out to be
useless. We would need to consider features that will not be implemented for months
ahead. Things change. Stuff happens. Planned features never get built. The days are
gone when we can afford to design architectures for features that are never going to
be implemented.
Better that we try to understand what features are going to have a big impact on the
architecture. Then, develop an outline architecture, with a release plan, or roadmap
for re-architecting when significant feature enhancements are required. That way,
those enhancements can be dropped (or replaced) if the features turn out to be
superseded.
This planned refactoring approach allows the team to consider, externalise and
explain the need for architecture re-design at significant stages of the project. Planned
refactoring allows you to start delivering working code using a simple architecture
to start with. But planned refactoring also requires that you think carefully about the
implications of non-functional requirements and features on architecture.
8.3.1 Client-Server
[Figure: client-server architecture, with multiple clients connecting over the Internet to multiple servers]
8.3.2 Repository
For some applications, it is fine if the various components manage their own data
stores. But in very data-intensive applications, where a consistent view of shared
data is required by all components, then a repository architectural style can be
attractive. In the repository architecture, components do not interact with each other
directly. Rather, all interactions happen through repository data transfers, as shown
in Fig. 8.2.
Components do not need to be aware of each other, supporting separation of
concerns. Changes to repository data made by one component are available to
other components. The centralised storage model simplifies handling of services
like backup and data archiving.
A drawback with this approach is that the repository is an obvious single point
of failure for the whole system. Any corruption of repository data affects all the
components.
You can mitigate risk by creating a distributed repository, with data shared
across multiple servers, but that introduces new technical problems such as ensuring
consistency of information within the repository. In some technologies, such as
Apache Kafka [1], availability is ensured using distributed data structures and
redundant layers.
[Fig. 8.2 Repository architecture: components interact only through a shared data repository]
8.3.3 Pipe and Filter
The pipe and filter architectural style comprises a chain of transformation compo-
nents that each process input data to produce some output, as shown in Fig. 8.3a. The
chain is often sequential, leading to a batch processing model. More sophisticated
implementations can perform transformations in parallel, on different data items, in
a more complex data-flow model.
The main challenge is to organise the process into a set of discrete processing
stages, each of which is responsible for a specific transformation. Incremental develop-
ment is supported by starting with a few simple transformations. Further processing
stages can be added as the software matures. Conventionally, the pipe and filter style
used a batch model, processing one item at a time. More recently, continuous streams
of data tend to be processed.
The pipe and filter architectural style does have disadvantages, which include:
1. Unsuitability for interactive systems.
2. Input parsing and output unparsing are required at each stage.
3. Agreed standard input and output data formats are needed.
Despite these shortcomings, pipe and filter architectures are often used in appli-
cations such as computer language translators and compilers. You can implement a
skeleton pipe and filter architecture in Exercise 8.2. Have a go at the exercise first,
but I’ve put an illustrative solution on GitHub [2].
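To make the style concrete, here is a minimal Java sketch of a pipe and filter chain. This is an illustration only, not the GitHub solution; the Filter and Pipeline names, and the use of simple String data, are assumptions made for brevity:

import java.util.List;
import java.util.function.UnaryOperator;

// A filter is simply a transformation from input data to output data.
interface Filter extends UnaryOperator<String> { }

// A pipeline chains filters together and applies them in sequence.
class Pipeline {
    private final List<Filter> filters;

    Pipeline(List<Filter> filters) {
        this.filters = filters;
    }

    String run(String input) {
        String data = input;
        for (Filter filter : filters) {
            data = filter.apply(data);  // the output of one stage feeds the next
        }
        return data;
    }
}

public class PipeAndFilterDemo {
    public static void main(String[] args) {
        Pipeline pipeline = new Pipeline(List.<Filter>of(
                s -> s.trim(),                  // stage 1: remove leading/trailing whitespace
                s -> s.toLowerCase(),           // stage 2: normalise case
                s -> s.replaceAll("\\s+", " ")  // stage 3: collapse repeated spaces
        ));
        System.out.println(pipeline.run("  Hello   PIPE and FILTER  "));
    }
}

Each stage only needs to agree on the input and output data format, and further filters can be appended to the chain as the software matures.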
Fig. 8.3 Example architectural styles. (a) Pipe and filter architecture. (b) Layered architecture
8.3.4 Layered
When using the layered architectural style, related functionality is grouped into a
series of levels, as shown in Fig. 8.3b. Each layer provides an agreed set of services
to the layer above. In contrast with the unidirectional pipe and filter architectural
style, data flows are bidirectional. Data can flow down through the layers, as well as
up. Lower levels provide services to the next layer up, in the system.
The layered architecture requires discipline to ensure that all team members
adhere to the model. The maintainability benefits of the layers are lost if service
calls jump over a layer and access underlying services. On the other hand, there is a
performance cost to passing data through multiple layers for each request. You can
implement a skeleton layered architecture in Exercise 8.2.
8.3.5 Clean Architecture
In the clean architecture approach [7], there is a recognition that the application
needs protecting from web interfaces and user interface frameworks in much the
same way. Consequently, instead of having clients at the top and databases at the
bottom, as we do in the n-tier architecture, we form an onion-ring perspective with
all the interfaces around the outside, as shown in Fig. 8.4.
[Fig. 8.4 Clean architecture: concentric rings with entities at the centre, surrounded by use cases, then controllers and gateways, with external interfaces (numbered 1–7) around the outside]
Using this model, we have entities in the centre, surrounded by use cases. Then
we have a ring for our gateways and controllers. Finally, as I mentioned, the web,
databases, devices and other external interfaces form a ring around the outside.
The idea is that we may want to swap relational database management system
technology in the future. We should not have to re-write the whole application if
we want to do that.
Thus, referring to Fig. 8.4, we can imagine a scenario where a user presses a
button to request some information stored in a database. When the button is pressed,
our user interface (1) calls a controller (2). The controller calls a use case (3) which
uses an entity (4) and then (5) calls a database gateway (6). Finally, the gateway calls
the database (7) to search for the requested data. The requested data might then be
passed back in through the rings to an entity and back out through rings to the user
interface.
Entities in the architectural style are abstract enterprise logic. The use cases
encapsulate application specific functionality and business rules. The controllers and
gateways provide managed interfaces to the outside world such as drivers, databases
and the web. This is a useful architectural style for business information systems and
will be applied to the Tabby Cat project in Chap. 12.
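The button-press scenario above can be sketched in Java as follows. This is a minimal illustration of the ring dependencies, not a complete implementation; the Flight, FlightGateway, GetFlightUseCase and FlightController names are illustrative assumptions:

// Entity: enterprise logic at the centre of the rings.
class Flight {
    private final String flightNumber;
    Flight(String flightNumber) { this.flightNumber = flightNumber; }
    String getFlightNumber() { return flightNumber; }
}

// The use case depends only on an abstraction of the gateway, not on a database.
interface FlightGateway {
    Flight findByNumber(String flightNumber);   // implemented in an outer ring
}

class GetFlightUseCase {
    private final FlightGateway gateway;
    GetFlightUseCase(FlightGateway gateway) { this.gateway = gateway; }
    Flight execute(String flightNumber) {
        return gateway.findByNumber(flightNumber);
    }
}

// The controller sits in the interface adapters ring, called by the user interface.
class FlightController {
    private final GetFlightUseCase useCase;
    FlightController(GetFlightUseCase useCase) { this.useCase = useCase; }
    String handleRequest(String flightNumber) {
        return useCase.execute(flightNumber).getFlightNumber();
    }
}

public class CleanArchitectureDemo {
    public static void main(String[] args) {
        FlightGateway inMemoryGateway = number -> new Flight(number); // stand-in for a database gateway
        FlightController controller = new FlightController(new GetFlightUseCase(inMemoryGateway));
        System.out.println(controller.handleRequest("BA123"));
    }
}

Notice that the use case depends only on the FlightGateway abstraction, so the database technology in the outermost ring can be swapped without rewriting the business logic.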
It is worth pausing here to consider three general design principles, which are good
practice regardless of the implementation technology being used:
• KISS
• DRY
• YAGNI
We will also look at two more detailed sets of object-oriented design principles (GRASP
and SOLID) that help to simplify system maintenance.
KISS is an acronym for Keep It Simple, Stupid. The acronym reminds us to avoid
unnecessary complexity in our designs. Our design need contain only enough
complexity to achieve our requirements, and no more.
Software engineers have the habit of predicting future needs of clients and imple-
menting software features in anticipation of those future requirements. This is not
a good practice because sometimes we invest effort in preparing for future features
that never come. This results in bloated software source code.
Instead, only the functionality needed now should be implemented; this is the essence
of YAGNI (You Aren't Gonna Need It). Building just what is needed improves
productivity against the requirements that have actually been prioritised and also
helps keep things simple, so that future changes are easier to accommodate.
8.5.4 GRASP
GRASP is Larman's set of General Responsibility Assignment Software Patterns [5].
One of these patterns, Information Expert, assigns a responsibility to the class that
holds the data needed to fulfil it. Simply put, classes contain operations that need to
be performed on the data they encapsulate.
In general, in object-oriented design, we seek to minimise coupling and maximise
cohesion. That is, we want to minimise coupling between classes and maximise
cohesion within a class. When we loosely couple different classes, we try to
minimise their dependency upon one another. This helps minimise propagation of
change through our system, when we make modifications. The contents of cohesive
classes are strongly related and highly focused.
Object-oriented programming languages support polymorphism, in which a
single interface is used for entities of different types. For example, a method defined
on a parent class can be overridden in its subclasses; the specific implementation
executed is selected automatically at runtime.
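For example, a minimal Java sketch (the Notification classes are illustrative only):

class Notification {
    void send(String message) {
        System.out.println("Generic notification: " + message);
    }
}

class EmailNotification extends Notification {
    @Override
    void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

class SmsNotification extends Notification {
    @Override
    void send(String message) {
        System.out.println("Texting: " + message);
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // The same interface (send) is used for entities of different types;
        // the implementation executed is selected at runtime.
        Notification[] notifications = { new EmailNotification(), new SmsNotification() };
        for (Notification n : notifications) {
            n.send("Your flight is booked");
        }
    }
}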
We try to improve the maintainability of our system by using stable interfaces
around aspects of the system we think are likely to change. Hence, we protect our
system from variation. The interface minimises the effect of later changes rippling
through our system.
A pure fabrication class, according to the GRASP principles, does not directly
correspond to a concept in the problem domain but rather provides a service to other
classes in the system.
8.5.5 SOLID
The SOLID acronym was introduced around 2004 by Michael Feathers, to help
you remember good principles of object-oriented design [6]. The SOLID principles
have some overlap with Larman’s GRASP patterns [5]. The SOLID acronym [11]
is derived from:
• Single responsibility
• Open-closed
• Liskov substitution
• Interface segregation
• Dependency inversion
The single-responsibility principle dates back to the days of structured program-
ming. Simply put, every class should have only one responsibility. Consequently,
there can only be one reason to change a class. This is another way of expressing
high cohesion within a class.
The open-closed principle is a restatement of the Protected Variations principles
from GRASP, mentioned in Sect. 8.5.4. We want to achieve a design in which classes
are open for extension but closed for modification. We can use generalisations, such
as inheritance or delegate functions, to extend classes.
The Liskov substitution principle is related to another idea in object-oriented
software design, called design by contract. The idea is that child classes, which
inherit properties from parents, can be substituted for their parents. For example, if the
class Fast Car is a subtype of Car, then a Fast Car object can be used anywhere
that a Car object is used.
This principle imposes some restrictions on what we can do in child class
interfaces, regardless of what the programming language actually allows. For
instance, preconditions cannot be strengthened in the subtype, and postconditions
cannot be weakened in the subtype.
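A minimal Java sketch of the Car and Fast Car example might look like this (the topSpeed() method is an illustrative assumption):

class Car {
    // Postcondition: returns a non-negative top speed in km/h.
    int topSpeed() {
        return 180;
    }
}

class FastCar extends Car {
    // A FastCar can be substituted wherever a Car is expected: it does not
    // strengthen preconditions or weaken the postcondition above.
    @Override
    int topSpeed() {
        return 300;
    }
}

public class LiskovDemo {
    static void printSpeed(Car car) {          // written against the parent type
        System.out.println(car.topSpeed());
    }

    public static void main(String[] args) {
        printSpeed(new Car());      // 180
        printSpeed(new FastCar());  // 300 -- substitution works unchanged
    }
}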
Interface segregation is one way to achieve high cohesion in interface design
(see Coupling and Cohesion in Sect. 8.5.4). Interfaces are developed specifically
for each client, such that no client is forced to depend on methods it does
not use. The interface segregation principle encourages us to develop role-based
interfaces. This decouples different clients to simplify software maintenance and
evolution.
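As a small illustration in Java (the Searchable and Bookable role interfaces are assumed names, not a prescribed design):

// Role-based interfaces: each client depends only on the methods it uses.
interface Searchable {
    String search(String keywords);
}

interface Bookable {
    void book(String flightNumber);
}

// The booking engine implements both roles...
class FlightBookingEngine implements Searchable, Bookable {
    public String search(String keywords) { return "FL123"; }
    public void book(String flightNumber) { System.out.println("Booked " + flightNumber); }
}

// ...but a read-only price comparison client is only given the Searchable role,
// so it is never forced to depend on booking methods it does not use.
public class InterfaceSegregationDemo {
    public static void main(String[] args) {
        Searchable comparisonSite = new FlightBookingEngine();
        System.out.println(comparisonSite.search("Manchester to Berlin"));
    }
}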
The dependency inversion principle suggests that our code depends on abstrac-
tions, not on concrete details [6]. Hence, we introduce interfaces or abstract classes
as a level of indirection between components that would otherwise be rather tightly
coupled. This idea is illustrated in Fig. 8.5. In Fig. 8.5a, the layers are rather tightly
coupled, so changes in one layer ripple through to another. In contrast, in Fig. 8.5b,
changes to the implementation of the concrete application logic layer, for instance,
have less impact on the server-side presentation layer.
Fig. 8.5 Dependency inversion pattern. (a) Conventional layer pattern: the server-side presentation layer depends directly on the application logic layer, which depends directly on the persistence layer. (b) Dependency inversion pattern: the layers depend only on application logic and persistence interfaces
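A minimal Java sketch of Fig. 8.5b might look like the following; the persistence layer is omitted for brevity and the class names are illustrative assumptions:

// The higher-level layer depends on this abstraction, not on a concrete class.
interface ApplicationLogic {
    String bookFlight(String flightNumber);
}

// The concrete application logic layer implements the interface; it can be
// swapped out without changing the presentation layer above it.
class ConcreteApplicationLogic implements ApplicationLogic {
    public String bookFlight(String flightNumber) {
        return "Booking confirmed for " + flightNumber;
    }
}

class ServerSidePresentationLayer {
    private final ApplicationLogic logic;   // dependency on the abstraction only

    ServerSidePresentationLayer(ApplicationLogic logic) {
        this.logic = logic;
    }

    String renderBookingPage(String flightNumber) {
        return "<p>" + logic.bookFlight(flightNumber) + "</p>";
    }
}

public class DependencyInversionDemo {
    public static void main(String[] args) {
        ServerSidePresentationLayer page =
                new ServerSidePresentationLayer(new ConcreteApplicationLogic());
        System.out.println(page.renderBookingPage("BA123"));
    }
}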
8.7 Exercises
Now complete these exercises on the material from Chap. 8. This will help you
consolidate your skills in architecture. Remember, don’t look at the hints, tips and
solutions chapter, just yet. Have a go at the exercises, then look at the advice in
Sect. 8.8.
We are now ready to move from high-level architectural design to more detailed
design concerns.
References
Chapter 9
Design
9.1 Introduction
Chapter 7 explored user stories, use cases and use case diagrams for modelling
interactions between our proposed system and the outside world. Now I can talk
about models from structural and behavioural perspectives.
Class diagrams are used to create structural models that visualise the organisation
of a system or the current environment. We can develop our ideas about the
components that make up a system and their relationships with each other. We can
use our model development to discuss the design of the overall system architecture,
as described in Chap. 8.
Class diagrams are used to develop object-oriented systems. The diagrams show
the classes in the system and their associations. A class is a generalisation of
an object instance that exists in the system. Also, an association represents a
relationship between two classes.
When you develop class diagrams during early stages of the software engineering
process, objects represent something that exists in the real world. In a car rental
application, this might include cars, rental agreements, invoices, payments and so
on.
Where do the class diagrams come from? Well, from requirements (use cases or
user stories), of course. But how? The trick, I learned from some very clever and
experienced architects, is to look for nouns and verbs. What? I know! Nouns and
verbs? What are they? I’m not very good at English grammar, so perhaps I’d better
explain.
Nouns are words that describe a person, place, thing, quality or idea. In software
design, when we see nouns in our requirements, we are thinking of things that might
appear in the system we are developing or in its application domain.
For example, if we think about banking, the noun account might be imple-
mented as a bank account in our software. Similarly, in an online travel booking
system, the noun ticket might be implemented as a passenger ticket in our
software. In the English language, there are more nouns than any other kind of
word.
In contrast, verbs describe actions. As kids, we called them doing words. In software
engineering, verbs that appear in our requirements might end up being implemented
as methods or operations.
For example, if we think about banking, the verbs open or close might be
implemented as operations on a bank account in our software. Thinking of an online
travel booking system, the verbs purchase or cancel might be implemented as
operations on a passenger ticket in our software.
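As a minimal Java illustration of the noun and verb heuristic (the attribute and method details are assumptions, not a complete design):

// The noun 'account' becomes a class; the verbs 'open' and 'close' become methods.
class BankAccount {
    private boolean open;
    private double balance;

    void open(double initialDeposit) {
        this.open = true;
        this.balance = initialDeposit;
    }

    void close() {
        this.open = false;
    }

    double getBalance() {
        return balance;
    }
}

public class NounVerbDemo {
    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.open(100.0);
        System.out.println(account.getBalance());
        account.close();
    }
}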
These models of the real world, as it exists before our system is implemented, are
often called conceptual models [6] or are prepared for domain analysis [7].
Figure 9.1a shows two simple classes, named car rental customer and
hire agreement, and a one-to-one association. This tells us that each customer
can have only one (and actually must have exactly one) rental agreement. Can you
hire more than one car at a time? Well, you could, I suppose. For example, if you
need a hire car and then are going to need a van to move something big or heavy,
that means allowing the rental car to sit unused while you use the van, which seems
extravagant to me. But, it is certainly difficult to actually drive two cars at the same
time, so maybe this one-to-one mapping relationship is okay.
Figure 9.1b shows a more detailed class representation with attributes (encap-
sulated data) and methods (operations the class can perform). This class shows the
conventional representation of a class in the UML, with three boxes: the name box
at the top, the attributes box in the middle and methods at the bottom.
Things get more useful in Fig. 9.2, where the class diagram shows a general-
isation or inheritance relationship. A Corporate Car Hire Agreement inherits
methods and attributes from Car Hire Agreement. Consequently, we think of the
Corporate Car Hire Agreement as a specialisation of Car Hire Agreement.
We can use class diagrams to model other relationships, such as composi-
tion. A car might be made up of engine, transmission, body, wheels and
fuel tank components, as shown in Fig. 9.3.
Fig. 9.1 Simple Car Rental Classes. (a) Simple Classes and an Association. (b) Car Hire Agreement Class (Incomplete), showing the class name box at the top; attributes such as carHired, hireDate, returnDate, location and notes in the middle; and methods such as setHireDate(), getHireDate(), createAgreement() and archiveAgreement() at the bottom
At an early stage of the design process, class diagrams are used to model real-
world entities that will be implemented in the software. This is in contrast with
requirements modelling, where our focus is on the As Is context. Our focus, during
design, is on the To Be structure of the system. The goal is to identify and name
classes and their associations and then to find and name attributes and operations
which will be implemented as methods.
Then, as the design process progresses, the class diagrams are annotated with further
details, such as attribute data types and method call and return parameter data types.
So the goal is to make detailed decisions about the class diagrams such that they can
be implemented in software.
[Class diagram figures for the car hire example: Corporate Car Hire Agreement and Individual Car Hire Agreement inheriting attributes (hire date, return date, location, notes) and methods (setHireDate(), getHireDate(), createAgreement(), archiveAgreement()) from Car Hire Agreement, and a Car class (maker, range, model, notes) with operations such as createHireAgreement(), getLocation() and getLocalTs&Cs() and associations to Location and HireAgreement]
Sequence diagrams are used to model the interactions between actors and objects
within the system. This is modelling dynamic behaviour. Generally, a sequence
diagram corresponds to a specific use case. The actors and objects involved are
listed along the top of the diagram. The interactions are shown by using annotated
arrows. The diagram in Fig. 9.4 shows the interactions involved in a car rental desk
receptionist creating a hire agreement for a customer.
Design patterns capture a best-practice solution to common problems. This means
developers do not have to struggle to find a new solution to a well-known old problem
every time they build a new software system. Patterns enable design reuse.
In [5], the catalogue, for each pattern, contains a:
• Name, a meaningful identifier
• Problem description
• Solution description, a template for a design solution
• Consequences, results and trade-offs of applying the pattern
The elements included in design patterns vary from catalogue to catalogue.
The correct way to instantiate the Singleton object is to use the getInstance()
method, as shown in Fig. 9.7.
Hence, we have ensured only one instance of the Singleton is ever created.
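A minimal Java sketch in the spirit of Fig. 9.7 might look like this (the synchronisation detail shown here is one common choice, not the only way to implement the pattern):

public class Singleton {
    private static Singleton instance;   // the single, shared instance

    private Singleton() {
        // The private constructor prevents direct instantiation with 'new'.
    }

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();  // created lazily, on first use only
        }
        return instance;
    }
}

Client code then calls Singleton.getInstance() rather than new Singleton(), so every caller shares the same object.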
Fig. 9.8 Model View Controller Design Pattern (Adapted from [5]), showing a browser client connected over the Internet to the model, view and controller components
Fig. 9.9 Factory Pattern for Creating Cars (Adapted from [5]): a Car interface declaring model() : String, several concrete car classes that implement it, and a factory whose getCar() method creates the concrete cars, exercised from CarFactoryDemo.main()
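Following the structure of Fig. 9.9, a minimal Java sketch could be (the concrete Hatchback and Saloon classes and the CarFactory name are illustrative assumptions):

interface Car {
    String model();
}

class Hatchback implements Car {
    public String model() { return "Hatchback"; }
}

class Saloon implements Car {
    public String model() { return "Saloon"; }
}

// The factory hides which concrete class is instantiated.
class CarFactory {
    Car getCar(String type) {
        switch (type) {
            case "hatchback": return new Hatchback();
            case "saloon":    return new Saloon();
            default: throw new IllegalArgumentException("Unknown car type: " + type);
        }
    }
}

public class CarFactoryDemo {
    public static void main(String[] args) {
        CarFactory factory = new CarFactory();
        Car car = factory.getCar("saloon");
        System.out.println(car.model());
    }
}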
If you work in the commercial sector and join an existing project team, you
probably won’t get much say in the project technology stack. The company
will probably have already selected technologies. You can focus on familiarising
yourself with the chosen technology stack.
9.9 Exercises
Now for some exercises on software design from Chap. 9. You should work through
these exercises to sharpen your design skills. Once you are done, have a look at the
hints, tips and solutions in Sect. 9.10.
[Class diagram figures for the exercises: a Person hierarchy (firstName, lastName and date-of-birth accessors) specialised into Student (student id) and Staff (staff id, contract number); university classes such as Programme, Module (lecturerName, year, marks) and Option Module (transferOption()); and a library Holding class (title, keywords, type) with operations purchase(), register(), archive(), search(), borrow() and return()]
In Chap. 8, the idea of creating or employing an overall architectural style for the
system was discussed.
Once an architecture is in place, we can focus on developing designs for specific
software features. We can use static and dynamic system models, created using the
UML, to explore and discuss our design ideas. We can then use models to record
and disseminate our design decisions.
We also use reusable design patterns to solve recurring problems that appear
during object-oriented design and implementation. Well-known patterns, such as
Singleton, Model-View-Controller and Object Factory, need to become
part of your regular software development toolkit. I encourage you to think about
using them as part of your software designs.
In Chap. 10, we explore approaches to incremental agile implementation. We will
look more carefully at the artefacts we create during the development of our project
working code.
References
Chapter 10
Development
10.1 Introduction
Development artefacts are the things produced by self-organising teams during the
software development process. Obviously, producing working code is the whole
point of software development. Hence, software source code and release artefacts
spring to mind. But a large number of other artefacts are also produced [2]. Software
source code is discussed further in Sect. 10.4.2.
Some artefacts are produced to enable communication between different stake-
holders in the development process. Some people call these boundary objects.
Boundary objects enable a dialogue between people with different outlooks, per-
spectives or backgrounds. Boundary objects might include reference architectures
and models such as class diagrams which can stimulate a dialogue with knowledge-
able (or influential) people.
Planning artefacts are produced before the design phase of an increment begins.
That means during the preceding increment. Hang on! ‘Where do planning artefacts
come from for the first increment?' I hear you say. Well, some people use a device:
call it increment 0, or Sprint 0, as I mentioned in Sect. 6.4. Increment 0 is a setup
phase for the project. An advantage of calling the project setup phase increment 0 is
that it is time-bound.
In the Rational Unified Process, the setup stage is called the inception phase. But
the focus of the inception phase is to create a detailed and fully costed requirements
specification. We don’t do that in agile. But we do need a place to work and
computers to work on, and we need some other artefacts before we can get started.
In the scrum method, the product owner elicits and prioritises requirements in the
form of a product backlog. The product backlog is a prioritised list of user stories,
highest priority at the top. As the development project unfolds, the product owner
re-prioritises requirements, keeping the most important user stories at the top of the
list.
A test plan is a strategy or policy that defines how everyone in the development
project is going to handle testing to achieve required levels of code quality. You can
present the test plan how you like, such as a report, wiki or presentation. But the test
plan must be available online and followed by everyone.
The test plan describes the testing to be performed at each stage of development.
Usually, planning considers the desired level of test coverage (the proportion of code
pathways exercised by tests). Unit testing is often performed by developers themselves and
must be completed before code is integrated with code produced by others. Some
people advocate manual testing, which is certainly better than no testing at all. But
automated testing is really the way to go.
Once your tested code has been integrated with the code produced in previous
increments, you need to check that the old code still works properly. This is regression
testing: re-running existing tests to confirm that previously working behaviour is preserved
when new code is added. Usually you need automated test tools to do regression testing.
Are unit and regression tests the only tests you need? I hope not. What about load
testing? Integration testing? User acceptance testing? Your test plan should describe
your overall test policy. There is more about test automation in Chap. 16.
10.3 Iteration Artefacts
Iteration artefacts are produced, well, during each iteration, as the name suggests.
Iteration planning, which will be discussed further in Sect. 13.2, includes the
estimation of work items. User stories can be estimated. But, more precise estimates
can be derived from breaking user stories down into technical tasks and estimating
each of those. A T-shirt sizing, or planning poker, approach can be used.
A burn down chart illustrates project progress during an iteration. The y-axis
represents story points and the x-axis the number of days in the iteration. The
example, shown in Fig. 10.1, shows a 14-day iteration. Stories are only shown on
the burn down chart when they are actually completed. That means the burn down
process does not really get started during the first few days of the iteration shown in
Fig. 10.1.
[Figs. 10.1 and 10.2 Burn down charts: remaining story points (y-axis, 0 to 300) plotted against the days of a 14-day iteration (x-axis)]
The example, shown in Fig. 10.2, also shows a 14-day iteration. But, notice how
in this example the graph curve actually goes up on Day 9. This suggests that
something happened to increase the number of story points during the iteration.
Perhaps a new user story was introduced, which is not normally encouraged. Or
perhaps a spike occurred: some problem or challenge emerged and something was
re-estimated. Notice also that the curve does not actually get to zero at the end of
the iteration. This suggests the team were unable to successfully implement and test
some stories.
10.4 Feature Artefacts
Feature artefacts are produced for each feature. Not every feature needs every
artefact. Consequently, you choose the ones you need.
10.4.1 Prototypes
Some people seem to think you don’t need prototypes or mock-ups in incremental
development. The need for a prototype, they say, is made redundant by a minimum
viable product that is delivered early and enables feedback from customers or users.
That view is probably true if, by prototype, you mean some elaborately coded
simulation of your system.
But, I think it is prudent under some circumstances to think of producing
mock-ups, particularly user interface mock-ups, to get approval before coding
really begins. The idea is that you create a low-fidelity, low-cost visualisation of
something. This might be because you are working on a new type of application and
have little experience to draw on. Or, it might be because you are working with a
new customer and you are uncertain of their expectations.
A prototype is a good example of a boundary object. You are creating a low-
fidelity interpretation and saying to stakeholders, ‘what do you think of this?’ or
'how does this look? Is this what you want?' This process enables you to gather
feedback on a single aspect of the system you are developing. But, it also has the
benefit that you are, sort of, winning commitment for your work from the customer.
It is hard for them to say later that they don’t like the look of something, when they
have approved a prototype.
Working code is what software development is all about. Working code is derived
from source code. All the other stuff is about enabling source code development to
take place.
Many developers feel under pressure from deadlines and hence rushed into
creating messy code. However, messy code actually slows you down, and what you
need is elegant code. Elegant code is easily readable by people and machines. What
makes elegant code? Robert Martin has created a compelling list [6], including:
• Meaningful names.
• Functions (methods) should only do one thing, and do it well.
• Good and elegant code does not need comments.
• Good formatting improves communication.
• Use objects to hide data and expose operations.
Unit tests are used to verify specific components of our software system. Usually we
test classes (such as the constructors that instantiate objects) and methods (checking
calling and return parameters) as well as attributes of classes (have variables been
initialised and so on). As with regression testing, you could in theory do unit testing
manually. But seriously, it is just so much more efficient to automate this stuff. I’ll
talk more about test automation in Chap. 16.
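As a hedged illustration, here is what a couple of automated unit tests might look like using JUnit 5, one widely used Java option; the cut-down CarHireAgreement class under test is my own simplification of the class sketched in Chap. 9:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Minimal class under test, loosely based on the Car Hire Agreement class of Chap. 9.
class CarHireAgreement {
    private String hireDate;
    CarHireAgreement(String hireDate) { this.hireDate = hireDate; }
    String getHireDate() { return hireDate; }
    void setHireDate(String hireDate) { this.hireDate = hireDate; }
}

class CarHireAgreementTest {

    @Test
    void constructorInitialisesHireDate() {
        // Verify the constructor instantiates the object with its attribute set.
        CarHireAgreement agreement = new CarHireAgreement("2024-05-01");
        assertEquals("2024-05-01", agreement.getHireDate());
    }

    @Test
    void hireDateCanBeUpdated() {
        // Verify the calling and return behaviour of a method.
        CarHireAgreement agreement = new CarHireAgreement("2024-05-01");
        agreement.setHireDate("2024-06-01");
        assertEquals("2024-06-01", agreement.getHireDate());
    }
}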
10.4.4 Issues
We use the word issues as a collective term for defects, feature requests, feature
enhancements and so on. Generally, teams supporting live systems carefully keep
track of issues. The defects and change requests are triaged to decide which give the
most benefit for the lowest cost or least effort. Most software organisations don’t
have the resources to fix issues that are expensive but only bring small value. In
safety critical software, a zero defect policy is desirable, but expensive to achieve.
Some vendors give clients a chance to vote on issues to help decide which are
the most important. Teams might provide customers with a list of known issues for
each release. That makes it look like you have a handle on things, even if you don’t
have the resources to fix everything. Obviously, defects that have a big impact and
are inexpensive to fix need dealing with urgently. Otherwise, you will get a bad
reputation for shipping poor quality software.
When development teams are supporting a live product, choosing between
putting effort into new features, feature enhancements or defects requires careful
balancing. To resolve issues, some teams set aside effort during each increment.
Other teams choose to periodically dedicate an entire iteration to feature enhance-
ments and defects (perhaps every second or third iteration).
Defects are deviations from expected behaviour. In common parlance, defects
are bugs. Strictly speaking, a defect is not a software development artefact. No
one manufactures defects (unless you are working on software-implemented fault
injection, but that is a bit of a niche application area). The artefact is the defect
record we create, keep and manage. We use tracking tools (such as Bugzilla [3] or
Jira [1]) to implement an issue database.
Feature enhancements are not defects; they are requested improvements to our
software. Feature enhancements have to be prioritised, usually by a product owner
or someone else with a good understanding of user needs. We decompose feature
enhancements into technical tasks and estimate them, at the start of an increment.
These enhancements can then be included in the development iteration just like new
features.
10.5 Release Artefacts
Once source code is written and thoroughly tested, it can be released. In scrum,
product owners decide when code is suitable for release to customers. Then, source
code has to be packaged, ready for release. There are benefits to automating the
packaging and release process, as discussed in Chap. 21.
In Java web applications, a .war file must be created containing all the Java Server
Pages, Java Servlets, XML, Java classes and so on that make up a release. The .war
file becomes executable when deployed to a folder accessible by a servlet container
or web server.
Some places use a containerisation approach to releases. Containers, such as
Docker [4], offer a standardised deployment platform which can then be deployed
or replicated onto different server instances. Docker uses operating-system-level
virtualisation to isolate and bundle software applications, libraries and configuration
files. Multiple containers can run on an operating system instance and hence
are lighter weight than virtual machines. On larger-scale systems, orchestration
software, such as Kubernetes [5], is used to manage containerised deployments.
We discuss cloud deployment further in Chap. 19.
The code binaries are what gets deployed into a production environment to provide
a service for users. Novices and learners often experiment by running binaries on
a local machine. Mobile or embedded application binaries must be executed in a
simulator or downloaded onto the device. Similarly, web application binaries must
be uploaded to a web server.
When new features are integrated into the main software trunk, we need to check that
the existing features have not been adversely affected. During regression testing, we
re-run tests to re-evaluate previously tested software features.
Regression testing is where automation pays off. Re-running automated test
suites takes little effort, just machine time. It gives you (and others) more confidence
in your software when you re-run a full test suite and everything passes okay. There
is more about test automation in Chap. 16.
10.6 Exercises
Now it is time to tackle some exercises on software development from Chap. 10. You
can have a go at these exercises to sharpen your development skills. When you are
finished, review the hints, tips and solutions in Sect. 10.7.
[Exercise figure: a burn down chart with remaining story points (0 to 300) plotted against days 1–14 of an iteration]
References
Chapter 11
Security
Abstract This chapter will introduce some basic techniques around cyber-security.
A life cycle approach will be adopted, starting with security analysis, requirements
and design and then moving on to security implementation and evaluation. Finally,
we’ll explore an agile secure-by-design process. The chapter will include checklists
around security good practice, and some testing tools will be introduced.
11.1 Introduction
Strategy and metrics define the overall direction and measures used to assess
compliance with security needs. Education and guidance aim to enhance knowledge
and skills for personnel involved in software development projects. Threat assess-
ment is used to identify and characterise potential attacks. Security requirements
promote the inclusion of functionality or countermeasures to address security
concerns. Requirements-driven testing uses abuse stories which describe, from the
perspective of an attacker, how a system is misused. Security testing uses tools to
discover vulnerabilities in the runtime environment. We can address these themes
by taking a life cycle perspective.
The expected behaviour of the system under development and its operating context
are called the security environment. This environment determines the likely threats
our application will face. Internet connections offer access to many, potentially
malicious, actors. Consequently, our web applications and software services must
be designed to defend themselves appropriately.
11.3 Security Requirements
There are two main types of security requirements: those designed to help create
countermeasures, and abuse (or attacker) stories. We can also use attack personas to
help understand potential attackers.
The threat model is used to develop a set of requirements which can be prioritised
and managed. The requirements are used to identify design tasks intended to
mitigate security threats, for example, the authentication user story shown in
Fig. 11.1.
This user story can then be used to create a set of test criteria [2], such as:
• User logs on successfully,
• User fails log on because of invalid credentials,
• User forgets credentials,
• User is not registered.
Abuse (or attacker) stories are used like conventional user stories during the
development process. Abuse stories are developed and prioritised prior to each
iteration. Then the abuse story is used to define work tasks and acceptance criteria,
which in turn help to influence the evaluation process for potentially shippable code
at the end of the iteration.
Personas are synthetic biographies of fictitious users of a future product used during
requirements gathering, as mentioned in Sect. 7.7. A set of security personas (or
anti-personas) can be developed to help team members get inside the mindset of
potential attackers.
Consider the fictitious personas of Mary, Paul and Joan.
• Mary is a semi-professional fraudster,
– She targets large (>$10k) attacks,
– She is not a coder,
• Paul is a member of a hacker club,
– He has little financial acumen,
– He wants to deface sites or leave some other calling card,
• Joan is on a low income,
– She has little technical competence,
– She wants to maximise social security claims.
These personas can help team members understand specific types of attack and
justify appropriate countermeasures.
Risk management is about identifying, ranking and mitigating risks. The objective
here is to prioritise the important risks and requirements in terms of severity and
likelihood; risk exposure is, in essence, the product of severity and likelihood.
We can adopt a qualitative approach to working out the likelihood, such as having
a scale of five criteria:
• Frequent: occurs often or in quick succession (once per month),
• Likely: occurs on multiple occasions,
• Occasional: occurs from time to time (twice a year),
• Remote: can occur but is not likely,
• Rare: is not frequently experienced (once in 3 years).
We can then create a matrix showing the relationship between criticality and
likelihood, as shown in Fig. 11.2.
A risk register is a document listing risks and their potential severity and
estimated likelihood. The risk register is mainly used to describe mitigations for
each identified risk. The risk register is created and then regularly reviewed.
For example, large and long-running projects require multiple cooperating self-
organising development teams, as described in Chap. 18. In these multi-team
projects, a risk register review during each sprint is desirable, to assess any potential
adverse impact of inter-team dependencies during each iteration.
Security patterns, like design patterns in general, seek to capture good practice in
architecture design. Patterns provide a route for non-expert users to benefit from
specialist expertise. There are several readily available security pattern catalogues,
such as [3, 10] or [15]. To illustrate the concept, I’ll briefly describe just three pat-
terns, the demilitarised zone, authorisation enforcer and controlled object factory.
The demilitarised zone (DMZ) is a security pattern that advocates a gateway
network layer between a private intranet and the public internet. Hosts in the DMZ
are permitted only limited access to hosts on the internal network. Firewalls are
used to prevent unauthorised access from the internet to the DMZ and also from
the DMZ to the intranet. Consequently, the DMZ provides an additional layer of
internal network protection from external attack.
[Figure: risk-rating workflow — business feature description (business analyst), enumeration of potential attacks (penetration tester or security champion), risk rating of potential attacks (risk analyst) and prioritisation based on risk (consensus)]
Abuse stories and other security work items can be displayed on Kanban boards and are
the subject of discussion in stand-up meetings and retrospectives.
For web applications, the Open Web Application Security Project (OWASP) Top
Ten lists and describes the most common and serious software security risks [8].
For each risk listed, OWASP identifies:
• Threat agents, types of entity that carry out attacks,
• Attack scenarios, pathway used to perform attack,
• Impacts, potential consequences of attack,
• Prevention, advice on thwarting mode of attack,
• Resources, references to useful information about attack resilience.
The OWASP list and resources are very comprehensive, and web application
developers need to familiarise themselves with this material [8].
11.5.3 Authentication
Thinking about the authentication use case in Fig. 11.1, there is an OWASP cheat
sheet that discusses the design and implementation of this user story [7].
For example, in one poor-quality pseudo-code implementation, it seems like a
good idea to check if the user exists in the database before checking their password,
as shown in Fig. 11.4.
But, using this approach, the execution time varies slightly between valid and
invalid usernames. A malicious attacker can use this information to determine if a
username exists in the data store. A better approach is to simultaneously check both
username and password, as shown in Fig. 11.5.
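In the spirit of Fig. 11.5, here is a minimal, self-contained Java sketch. The in-memory user store and toy hash function are illustrative assumptions only; a real implementation must use a vetted password hashing library (bcrypt, scrypt or Argon2) and a constant-time comparison:

import java.util.Map;
import java.util.Objects;

public class AuthenticationDemo {

    // Illustrative in-memory store of username -> password hash. A real system
    // would use a database and a proper, slow password hashing algorithm.
    private static final Map<String, String> USER_STORE =
            Map.of("alice", hash("correct horse battery staple"));

    // Placeholder for a real password hash function (bcrypt, scrypt, Argon2, ...).
    private static String hash(String password) {
        return "h:" + password.hashCode();
    }

    static boolean authenticate(String username, String password) {
        String storedHash = USER_STORE.get(username);   // may be null

        // Always compute and compare a hash, even for unknown usernames, so that
        // valid and invalid usernames take roughly the same time to process.
        String comparisonHash = (storedHash != null) ? storedHash : hash("dummy-password");
        boolean matches = Objects.equals(comparisonHash, hash(password));

        // A single generic failure path gives no clue whether the username
        // or the password was wrong.
        return storedHash != null && matches;
    }

    public static void main(String[] args) {
        System.out.println(authenticate("alice", "correct horse battery staple")); // true
        System.out.println(authenticate("alice", "wrong"));                         // false
        System.out.println(authenticate("mallory", "anything"));                    // false
    }
}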
The correct response, when authentication fails, is ‘Login failed; Invalid user ID
or password’. The intention here is to give no clue as to the failure cause. Following
a similar pattern, for password recovery, the correct response is ‘If that email address
is in our database, we will send you an email to reset your password’. Again, the
purpose is to avoid giving away information about the existence or, otherwise, of
valid user email addresses. Further, it is argued that multi-factor authentication
reduces the chances of account compromise by 99.9% [14].
11.6 Security Evaluation
There are two main approaches to evaluating security: using reviews or testing.
Security inspections are best conducted during each iteration. The reviews can check
people, processes and policies, as well as technology decisions and architectural
designs (and not just source code implementations). This means reviews are
flexible, don’t require any support technology and can be applied early in the
increment development process. However, reviews are time-consuming and require
skilled and experienced reviewers.
Source code quality and security testing tools can help give assurance about our
application. SonarQube can be used to analyse code in several languages and can
identify numerous code quality issues and potential security weaknesses [13]. SonarQube
can access software directly from your online source code repository. A small and
simple configuration file must be added to your repository so that SonarQube can
understand your environment.
Another useful tool is BDD-Security [4], a security testing framework that uses
behaviour-driven development concepts. BDD-Security integrates with Selenium
(WebDriver) [11] to perform runtime tests on web applications and APIs.
11.7.1 Roles
11.7.2 Artefacts
11.7.3 Ceremonies
11.8 Exercises
Here are some exercises you can try to learn more about the topics covered in
Chap. 11. Have a go at each exercise and then look at the hints and tips in Sect. 11.9.
References
1. Apvrille, A., Pourzandi, M.: Secure software development by example. IEEE Secur. Priv. 3(4),
10–17 (Jul 2005). https://doi.org/10.1109/MSP.2005.103
2. Bell, L., Brunton-Spall, M., Smith, R., Bird, J.: Agile Application Security: Enabling Security
in a Continuous Delivery Pipeline. O’Reilly (Sep 2017)
3. Fernandez-Buglioni, E.: Security Patterns in Practice: Designing Secure Architectures Using
Software Patterns, 1st edn. Wiley (Jun 2013)
4. IriusRisk: BDD-Security. IriusRisk (Mar 2021). https://github.com/iriusrisk/bdd-security
5. Mead, N.R., Woody, C.C.: Cyber Security Engineering: A Practical Approach for Systems and
Software Assurance, 1st edn. Addison-Wesley Professional (Oct 2016)
6. OWASP Foundation: Abuse case cheat sheet (2021). https://cheatsheetseries.owasp.org/
cheatsheets/Abuse_Case_Cheat_Sheet.html
7. OWASP Foundation: Authentication cheat sheet (2021). https://cheatsheetseries.owasp.org/
cheatsheets/Authentication_Cheat_Sheet.html
8. OWASP Foundation: OWASP Top Ten web application security risks (2021). https://owasp.org/
www-project-top-ten/
9. OWASP Project: SAMM agile guidance (2021). https://owaspsamm.org/guidance/agile/#
General
10. Schumacher, M., Fernandez-Buglioni, E., Hybertson, D., Buschmann, F., Sommerlad, P.:
Security Patterns: Integrating Security and Systems Engineering, 1st edn. Wiley (Jul 2013)
11. Software Freedom Conservancy: Seleniumhq browser automation (2021). https://www.
selenium.dev/
12. SonarCloud: Automatic code review, testing, inspection & auditing (2021). https://sonarcloud.
io/
13. SonarSource: Sonarqube (2021). https://www.sonarqube.org/
14. Weinert, A.: Your Pa$$word doesn’t matter (Jul 2019). https://techcommunity.microsoft.com/
t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984
15. Yskout, K., Heyman, T., Scandariato, R., Joosen, W.: A System of Security Patterns. No. CW-
469 in Department of Computer Science, Katholieke Universiteit Leuven (December 2006).
https://www.researchgate.net/publication/242679421_A_system_of_security_patterns
Chapter 12
Tabby Cat Project: Getting Building
Abstract In this chapter, we start building the Tabby Cat project. We will use this
project to apply the ideas from the chapters in Part II of the book. We describe
requirements in the form of user stories, as we did in Chap. 7. We select an
architectural style from those described in Chap. 8. Finally, we employ object-
oriented design patterns, like those in Chap. 9. As we said in Chap. 6, Tabby Cat
is software for displaying source code repository developer activity. We want to
obtain activity data from a public repository, extract important information using
searching and filtering and display the results.
12.1 Introduction
In this chapter, we want to explore the technical aspects of the Tabby Cat software.
We start by creating requirements using techniques from Chap. 7 (Requirements). Next,
we select an architectural style drawing on Chap. 8 (Architecture). We then employ design
patterns and practices from Chap. 9 (Design). Finally, our implementation uses software
source code techniques from Chap. 10 (Development) and practices from Chap. 11 (Security).
The Tabby Cat project, as I’ve mentioned elsewhere, has been provided by Red
Ocelot Ltd., our software start-up company [9].
12.2 Requirements
First, we can establish some high-level epic user stories for the Tabby Cat project.
We can then decompose our epics into more specific user stories for the Tabby Cat
software:
• As a developer, I want to select a public repository, in order to learn about the
activity history,
• As a developer, I want to download the activity history, in order to learn about
the activity history,
• As a developer, I want to sort the activity history, in order to identify specific
activities in the repository,
• As a developer, I want to search the activity history, in order to identify specific
activities in the repository,
• As a developer, I want to display repository metrics, in order to identify specific
properties of the repository.
The Tabby Cat software should implement these user stories while supporting
possible future functional extensions later.
For the Tabby Cat project, at this stage, we do not need to concern ourselves too
much with non-functional requirements. Our purpose is to build confidence and gain
experience of building a functional solution. This is not a safety-critical application.
Data privacy is not a big issue, since we have chosen to use public source code
repositories. Consequently, anything in the repository is already in the public domain.
The application doesn’t need to support many users (to quantify what we mean
by ‘many’, let’s say a few tens of users, not hundreds).
However, we might want to add new functionality to the Tabby Cat project later.
Consequently, future enhancement is a priority for this project. We plan to employ
good practices to ensure extensibility. We will also use organisational structures and
design patterns that enable future enhancement.
Finally, if you fork or clone this software, check limitations on the application
programming interface (API) used to collect repository data. Quite often, open-
access (free) APIs impose limits on the number of requests you can make. They
don’t want people running large numbers of requests against their servers. Check the
terms of use, and avoid accidentally running too many requests during development
and testing.
12.3 Architecture
[Fig. 12.1 Overview of the Tabby Cat system, showing the numbered steps for handling a request against the selected source code repository]
We have selected the clean architectural style popularised by Bob Martin [5].
This style comprises four main elements: entities, use cases, interface adapters and
frameworks and drivers, as described in Sect. 8.3.5.
Entities provide the system with enterprise business logic. The entities comprise
relatively slow-changing functionality. These are plain objects that represent the
business domain of your system.
Use cases are where you provide the business rules of your application. The use
cases are pure business logic and don’t know how results will be presented.
Interface adapters retrieve and store data. A novel feature of this architectural
style is an attempt to provide consistent management of network interfaces and
databases. The interface adapters translate between use cases and specific drivers
and frameworks for presenting data.
Frameworks and drivers comprise the database drivers and graphical user
interface libraries we select for our application.
An important idea in this architectural style is that entities and use cases are
independent of frameworks, user interfaces and databases. In simple terms, entities
and use cases comprise business logic. And, interface adapters and frameworks and
drivers comprise implementation detail. Consequently, a simplified architecture of
our system is shown in Fig. 12.2.
12.3 Architecture 181
UseCases Entities
HTTPRequestHandler Gateway
The next question we need to ask ourselves is: how will the overall architectural
style we've adopted influence our technology choices? We know we're making a
software-as-a-service style web application; some approaches might include:
• Monolithic web application that serves HTML,
• Monolithic web services that expose a REST API with a stand-alone client (i.e.
client-server)
• Micro-services, where many small web services are aggregated to create a single
coherent API which is consumed by a stand-alone client(s).
The monolithic web application is superficially simple, but quickly becomes
difficult to maintain. There is a risk that code for the user interface becomes mixed
with code for application functionality, with a resulting lack of clarity.
In contrast, the second option of monolithic web services exposing an interface
to a stand-alone client is a little more complicated to implement, but neatly separates
the user interface and application logic. In principle, we can separately deploy the
server-side presentation layer from the REST services, if we want to.
Decomposing the services into micro-services would allow each micro-service
to be deployed independently. Independent deployment of services is useful if you
are supporting very large user populations or where services vary considerably in
their processing complexity (and hence hardware requirements). But micro-services
add complexity to achieve these benefits.
Although our requirements are quite simple, we have chosen the client-server
architectural style to illustrate this commonly used approach. The RESTful services
will be designed around the specific types of repository information we want to
collect. Consequently, looking at the user stories, in Sect. 12.2, we can see we want
to collect information about commits, issues and metrics.
12.3.2 Client-Server
We propose a stand-alone web API that serves RESTful requests over HTTP. There
is no user interface component to our RESTful API; it simply accepts HTTP
requests in the form of an HTTP verb and URI and responds with an HTTP
status code and accompanying payload (in the form of JSON).
We need some way of serving out client code (HTML/JS/CSS) to the user’s
web browser, though. To this end, we employ a popular (open-source) web server,
nginx [6]. Our nginx web server has two purposes. Firstly, it processes incoming
requests to our domain and sends the relevant static files back. Secondly, it acts as
a reverse proxy, forwarding requests to the back-end RESTful API server. Using
a reverse proxy allows the client code to remain unaware of the back-end server
location; the client can simply send requests to its own origin, and the nginx web
server will proxy them to wherever they need to go.
12.4 Design
Now, make sure you have read Chap. 9 and completed Exercises 9.2 to 9.6.
Looking more carefully at Fig. 12.1, we can decompose this into the following
challenges our system needs to resolve:
• Step 1
– Accepting and processing incoming HTTP requests,
– Converting incoming HTTP requests into an internal format for use,
– Mapping the external URI to internal business logic,
– Processing the request,
• Step 2
– Querying an external HTTP API,
• Step 3
– Mapping the response from the external HTTP API into an internal format,
• Step 4
– Returning a response to the requester.
We then identify the components we will need in our system. We are following
the clean architecture style we mentioned in Sect. 12.3.1. The main components we
identify are:
• An HTTP server component, which accepts HTTP requests and sends HTTP
responses,
• A controller component, which converts HTTP requests into an internal format
and maps to internal business logic,
• Use case components which represent our core business logic,
• Some entities which provide a more meaningful internal representation,
• An internal HTTP request handler, for querying the GitHub API,
• A gateway component, for converting between external data (database, GitHub
API) and internal data (entities).
From our requirements, we identify four entities that we will need to model:
Commit, Issue, Metrics and Source Repository. The Commit entity might aggregate
other entities like author, etc. The Issue entity, as the name suggests, is for issues
recorded in the repository we are investigating. The Metrics entity is for repository
metrics. Finally, there will be a Source Repository entity for managing a handle on
the external repository.
In the first instance, we envisage that these entities will be simple objects that
just contain data. Our use cases, for this initial iteration, are also simple retrieval
operations i.e. GetSomething. So we identify the following:
• GetCommits, get a list of commits for a given repository,
• GetIssues, get a list of issues for a given repository,
• GetSourceRepository, get a source repository from the system database,
• GetSourceRepositoryMetrics, get the metrics for a given repository.
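To make this concrete, here is a minimal sketch, in Java, of one entity and one use
case. The field and method names are illustrative; the actual Tabby Cat source code
[2] may differ in its details.

import java.util.List;

// Entity: a plain object representing a single commit in the business domain.
public class Commit {
    private final String sha;
    private final String author;
    private final String message;

    public Commit(String sha, String author, String message) {
        this.sha = sha;
        this.author = author;
        this.message = message;
    }

    public String getSha() { return sha; }
    public String getAuthor() { return author; }
    public String getMessage() { return message; }
}

// Use case: pure business logic, with no knowledge of HTTP, databases or GitHub.
interface GetCommits {
    List<Commit> forRepository(String owner, String name);
}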
Hence, we can flesh out our simple design into something a bit more complete,
as shown in Fig. 12.3.
After creating our initial design, now is a good time to consider if there are
any problems that can be solved using common object-oriented design patterns [4].
Looking at the diagram in Fig. 12.3, we need to make external HTTP requests from
within our software. We could embed HTTP requests directly into our source code,
but these calls could end up scattered across several gateways. Also, making HTTP
requests is quite a common task. Hence, there are a couple of third-party libraries
that can help us with this, such as [1] and [3]. We have selected the OKHttp library
[3]. To avoid coupling our code to this external code (that we don’t control) and to
simplify the overall interaction, we employ the façade pattern. We’ll discuss this in
more detail when we consider the implementation, Fig. 12.6.
A popular way to structure GUIs is to use the MVC architectural style, as mentioned
in Sect. 9.6.2 and shown in Fig. 12.4a. Our initial design follows this pattern; hence,
we have some models, some views and some controllers. We also need to make
external API calls to our back-end service. It would be tiresome to have to do this
every time, so we will need a wrapper to encapsulate the HTTP request logic. This
way, if our API changes for some reason, there is only one place we need to change it.
Our front-end models can loosely map to the back-end models. However, we
might make some minor changes. For instance, we want our view to update from
a list of commits; therefore, we might create a Commits (plural) model, instead of
just a Commit (singular) model. Similarly, we want to list all repositories (Repos),
but we also want to select a specific repository (Repo). Consequently, we identify
four models, Commits, Repo, Repos and Issues, as shown in Fig. 12.4. Based
on how we may want to display the data to the user, the repository metrics have been
incorporated into the Repo model.
Fig. 12.4 Tabby Cat model-view-controller design. (a) Simple MVC. (b) Tabby Cat MVC design
Now we have our models, we need to think about what views we want to display,
and from our requirements, we will need:
• A view to list available repositories,
• A view for adding a new repository,
• A view for showing the repository details (with a child view which lists commits
and issues).
We therefore identify five views, as shown in Fig. 12.4:
• RepoList, lists available repositories,
• AddRepo, a form for adding repositories to the system,
• RepoDetails, the entry point into a repository, listing the name, owner and
metrics as well as providing functionality to select developer activity,
• CommitList, a list of commits for a given repository,
• IssueList, a list of issues for a given repository.
Given the relative simplicity of this application, we don’t imagine we’ll need
more than a single controller to handle our model/view interaction.
Looking at our MVC design, shown in Fig. 12.4, we would like to maintain a
unidirectional data flow. First, the user interacts with the view. Then, the controller
updates the relevant model. Finally, the model updates the view. However, we'd
like to keep this as loosely coupled as possible. Therefore, we use the observer
pattern [4] where our models are the Subject and our views are the Observer. Our
controller will bind each view to the relevant model that it needs to observe. That
way, whenever our model updates, it will iterate through all its observers updating
them with its new state.
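As an illustration of how this fits together, here is a minimal sketch of the pattern
using Java syntax (the Tabby Cat front end is written in JavaScript, and the names
below are illustrative): the model acts as the subject, keeping a list of observing
views and notifying them whenever its state changes.

import java.util.ArrayList;
import java.util.List;

// Observer: anything that wants to be told when the model changes.
interface View {
    void update(Object state);
}

// Subject: the model holds its observers and notifies them whenever it changes.
class CommitsModel {
    private final List<View> observers = new ArrayList<>();
    private List<String> commits = new ArrayList<>();

    void subscribe(View view) {
        observers.add(view);
    }

    void setCommits(List<String> newCommits) {
        commits = newCommits;
        notifyObservers();
    }

    private void notifyObservers() {
        for (View view : observers) {
            view.update(commits);
        }
    }
}

The controller simply calls subscribe() to bind a view, such as the commit list, to
the model it needs to observe.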
12.5 Development
Make sure you have read Chap. 10 and completed Exercises 10.2 to 10.5. We
can now see a representation of our overall architectural implementation shown in
Fig. 12.5.
In the Tabby Cat project, we have decided to build a web-based service, serving
a RESTful API over HTTP. We are familiar with both Java and JavaScript. Let’s
consider some further Java and Node.js design issues:
• Node.js is single threaded, but, due to the runtime environment and ‘event loop’
model it uses, can offer significant performance per resource cost for high I/O-
based applications (think of a web server dealing with lots of small requests)
[7].
• Java on the other hand is multi-threaded, spawning a new thread (with accom-
panying memory) for each new request that comes in. This means individual
requests can be complex, but the total number of I/O requests is limited (based
on available resources in the runtime environment).
• Node.js has no static types out of the box, though they can be added via TypeScript;
however, this introduces another layer of complexity.
• Java is statically typed without any additional overhead.
This list is by no means exhaustive. These are simply examples of the issues
you might consider. I encourage you to look at empirical research resources that
experimentally compare different languages. Be wary of online discussions that are
based on opinion, instead of fact.
We have only considered Java and Node.js in this discussion; it could be worth
looking at languages such as C# or Python and seeing how they compare. As with most
things, it’s about trade-offs—Java might be ‘good enough’ and we already know the
language, but it might be that Python is just perfect for the job and might therefore
be worth the initial investment in learning. On the other hand, maybe a HackCamp
or Hackathon setting is not ideal for learning a new language. Using our experience
and an evaluation of the available technologies and our skill sets, we have decided
to choose a Java-based server-side application for implementing a REST API over
HTTP.
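To give a flavour of what this looks like, here is a minimal sketch of a Spring Boot
application exposing a single commits endpoint. The route, class and method names
are illustrative rather than the actual implementation (which lives in the Tabby Cat
repository [2]), and it assumes the GetCommits use case and Commit entity sketched
in Sect. 12.4 are available as Spring beans.

import java.util.List;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class VCSExplorerApplication {
    public static void main(String[] args) {
        SpringApplication.run(VCSExplorerApplication.class, args);
    }
}

// Controller: converts incoming HTTP requests into calls on our use cases.
// A GetCommits implementation must be registered as a Spring bean for this to start.
@RestController
@RequestMapping("/api/repos")
class VCSExplorerController {
    private final GetCommits getCommits;

    VCSExplorerController(GetCommits getCommits) {
        this.getCommits = getCommits;
    }

    // GET /api/repos/{owner}/{name}/commits returns the list of commits as JSON
    @GetMapping("/{owner}/{name}/commits")
    List<Commit> commits(@PathVariable("owner") String owner,
                         @PathVariable("name") String name) {
        return getCommits.forRepository(owner, name);
    }
}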
One other issue we should consider is how we integrate Tabby Cat source code
with third-party libraries. We want to use an external library to simplify accessing
the GitHub API and making activity history requests. As mentioned earlier, we
have chosen to use the OKHttp3 library for this [3]. We could just embed calls
to this library within our own code, but this can add complexity when it comes to
future source code maintenance. Consequently, it is good practice to use a façade
pattern [4] to hide the complexity of the OKHttp3 library, as shown in Fig. 12.6.
We have provided a generic request handler, HttpRequestHandler, and a specific
OkHttpRequestHandler implementation, as shown in Fig. 12.6.
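As an illustration, a minimal sketch of this façade might look like the following;
only a single GET operation is assumed here, and the real Tabby Cat code [2] is
more complete.

import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

// Generic request handler: the rest of our code depends on this small interface,
// not on OkHttp directly.
interface HttpRequestHandler {
    String get(String url) throws IOException;
}

// Facade over the third-party OkHttp3 library [3].
class OkHttpRequestHandler implements HttpRequestHandler {
    private final OkHttpClient client = new OkHttpClient();

    @Override
    public String get(String url) throws IOException {
        Request request = new Request.Builder().url(url).build();
        try (Response response = client.newCall(request).execute()) {
            return response.body().string();
        }
    }
}

If we later swap OkHttp for another HTTP library, only OkHttpRequestHandler needs
to change; the gateways continue to depend on HttpRequestHandler.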
An important issue facing developers is how to organise the source code files
for the project. This is particularly true when a team of developers is involved.
Fig. 12.7 Tabby Cat observer pattern. (a) Observer pattern. (b) Observer pattern, Tabby Cat
implementation
Fig. 12.9 Tabby Cat source code organisation (Adapted from [5]). (a) Layered code organisation.
(b) Component code organisation
12.6 Security
Our focus here has been to build a functional system. We’ve already observed, in
Sect. 12.2, that non-functional requirements are not at the forefront of our minds,
right now. Consequently, our security requirements are not exceptionally stringent,
beyond the concerns of any internet-connected application or software service.
Take this opportunity to read Chap. 11 and complete Exercises 11.2 and 11.3.
Now is a good time to review the Open Web Application Security Project (OWASP)
Top Ten list that describes the most common and serious web application software
security risks [8].
References
1. Apache Software Foundation: Apache HttpComponents – HttpClient overview (Feb 2022). https://hc.apache.org/httpcomponents-client-5.1.x/
2. Bass, J., Monaghan, B.: Tabby Cat GitHub Explorer. Red Ocelot Ltd (Jan 2022). https://github.com/julianbass/github-explorer
3. Block, Inc.: Overview – OkHttp (2022). https://square.github.io/okhttp/
4. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Harlow, England (2005)
5. Martin, R.C.: Clean Architecture: A Craftsman's Guide to Software Structure and Design, 1st edn. Addison-Wesley (Sep 2017)
6. Nginx, Inc.: nginx (Jan 2022). https://nginx.org/en/
7. OpenJS Foundation: Node.js (Jan 2022). https://nodejs.org/en/
8. OWASP Foundation: OWASP Top Ten Web Application Security Risks (2021). https://owasp.org/www-project-top-ten/
9. Red Ocelot Ltd: Enhancing digital agility (2022). https://www.redocelot.com
Part III
Process, Tools and Automation
Part III of the book focuses on process. We want to learn how to create a systematic
and repeatable software development process, for creating worthwhile products.
Each of the chapters in Part III has exercises.
First, in Chap. 13, the coordination activities and meetings in a typical business
information system development process are described. You can learn about
coordination meetings and some engineering practices like pair programming and
test-driven development.
In contrast, Chap. 14 investigates the benefits of lean software development. I’ll
explore key ideas around value, waste and speed in a software development process.
Version control helps you create a revision history of your software and provides
a means for sharing code with others in your team. Version control is discussed in
Chap. 15.
Testing helps identify defects in your code and is considered in Chap. 16. From
a process perspective, we are most interested in test automation.
In Chap. 17, the ideas from all the chapters in Part III are applied to the Tabby Cat
case study. I explore the process and automation skills needed to read information
from a selected GitHub repository using an API and display the activity data.
As I’ve said, the overall design of this book is around Part I on people, Part II on
product and Part III on process. These parts of the book are stand-alone, more or
less. So, if your main interest is in the people aspects of software development, for
instance, then you might want to skip back to Part I. However, if your main interest
is in the technicalities of developing a product, you could skip back to Part II. To further
support you in learning the skills you need, there are some more advanced topics, in
Part IV.
Chapter 13
Agile Ceremonies
13.1 Introduction
Planning is essentially about deciding what to work on (and consequently what not
to work on) in the coming (hopefully short) time window. We want to plan for
a short iteration, as a way of mitigating the risk of change (in the environment,
in customer needs or wishes, in the teams and so on). Planning involves mapping
customer priorities to estimates of our production capacity.
Iteration planning is conducted at the start of each iteration, as shown in Fig. 13.1,
and comprises four tasks:
• prioritisation of requirements,
• breaking up of requirements into technical tasks,
• estimation of technical tasks and consequently requirements,
• work item assignment within the team.
Fig. 13.1 Repeated 2-4 week iterations take the highest-priority requirements from the product requirements backlog, via an iteration requirements backlog, to shippable software
13.2.1 Prioritisation
The product owner, not the development team, prioritises requirements for imple-
mentation. There is one exception, which is where team members notice some
technical dependency between tasks. That is, some high-priority requirement,
selected by the product owner, depends on some lower-priority requirement being
implemented first. In this situation, the team can advise the product owner that the
higher-priority requirement can be implemented but will not work. The product
owner can then decide if they want to increase the priority of the lower-priority
requirement.
Our first task during iteration planning, with a prioritised requirements backlog, is
to get consensus on what a user story comprises. So, we break each feature up into
a set of technical tasks. Does this user story require any front-end interface screens?
Does this feature need to use data storage? What business logic operations are part
of this feature? We need to divide the user story into all its constituent technical
tasks. By creating a list of smaller work items, we can more confidently estimate the
effort required to implement each and hence the overall feature.
We can summarise the process as follows:
• Select highest-priority user story from the backlog,
• Discuss the purpose and scope of the user story,
13.2.3 Estimation
We need to know how many features we can fit into an iteration. That is a difficult
question to answer. Not least because team members may have different perceptions
of what is required to implement a feature. The two techniques worth mentioning
are story points and T-shirt sizing. Both approaches tend to use the planning poker
technique.
Story points are a relative measure of the size or complexity of user stories. The
integers used to approximate size are taken from a Fibonacci number sequence: 1,
2, 3, 5, 8 and 13. Using this number sequence, the estimates for larger sizes are
less precise; consequently, there is no need to differentiate between sizes 9 and 10.
Instead, it is sufficient to distinguish between 8 and 13.
Larger story point sizes, depending upon the business domain of the application
under development, could indicate that the user story is in fact an epic that needs
to be further decomposed into user stories. If large user stories cannot logically be
decomposed, then story point sizes like 20, 40 or 100 might be considered. However,
we do need each user story to fit into an iteration; consequently, epics do have an
upper size limit (or iteration durations must be lengthened).
Planning poker is commonly used to allocate story points to technical tasks. To
perform planning poker, the team members collectively:
• Take each technical task in turn; the first round of voting starts,
• Discuss each technical task, if necessary,
• Write down (secretly) their estimates for the work item,
• When everyone has finished writing, team members reveal their votes for the
tasks,
• Look at the story points assigned and see if there is close consensus (in novice
teams or a new application domain, close consensus is unlikely),
• If there is consensus, on the story point allocation, move on to the next technical
task,
• If there is no consensus, constructively discuss the highest and lowest story point
estimates and try to understand why someone thought it was a larger or smaller
task,
• Following this discussion, move into a second round of voting
• Continue rounds of voting and discussion until consensus emerges around the
story point value for a task,
• Then, move on to the next technical task or user story.
The planning poker approach to estimation is consensus-based and draws upon
all the team expertise available. This approach fosters discussion, which is a
valuable source of learning for novice or less experienced members. Teams tend
to improve estimation accuracy over time.
A simple and easy way to estimate tasks is to agree a small set of categories and fit
the features into the agreed groups. So, we can think of tasks or features as being
small, medium, large and extra large. We think of this estimation process as fitting
tasks in a (small) group of size categories. How many size categories are reasonable
for your context? Three, four or five? How accurate (or precise) do you expect your
estimation to be? More than five size categories require considerable effort for a
novice team.
If a task is bigger than the usual range of categories, we take further action,
as we did with story point estimation. For example, if a task is extra, extra large
(which is an epic user story), it requires further analysis to break it down into a
more manageable size.
We can use a similar planning poker process as we used with story point
estimation. Everyone secretly writes their size estimate for a technical task on a
sticky note. The sticky notes are all revealed simultaneously (so no one can change
their score, when they see other people’s estimates). Team members can then see
the variation in size estimates within the group. Teams usually discuss the thinking
behind the largest and smallest estimates. After the discussion, team members are
invited to offer a revised size estimate. Everyone writes a second estimate on a
sticky note, which is then shared again. After a few rounds of discussion and voting,
consensus is achieved on the size of that specific technical task.
After requirements have been estimated, we now have an idea of how many tasks can
fit into an iteration. We can now decide who, in the team, is going to tackle each
task. A defining characteristic of a self-organising team is that people volunteer for
tasks. Sometimes, people pick up a task because it is similar to others they have
successfully completed in the past. On the other hand, sometimes people pick up
tasks to learn something new. The aspiration for the team, achieved in experienced
groups, is that anyone is capable of doing any task.
You might imagine relying on volunteers to pick up tasks means that there are
tasks no one wants that don’t get taken up. But practitioners say this is unusual.
More commonly, self-organising teams develop a sense of shared commitment to
group outcomes. So, unpopular tasks do tend to get shared around the group over
successive iterations.
Some groups like to add a fourth question: Am I going to create any blockers
that might impede others? This fourth question is typically useful in larger projects
where there are dependencies between the code produced by different team members.
Coordination meetings are usually held in front of a visual (often physical) display
of project status. The idea is to make visual the team’s efforts towards project goals.
Kanban boards were mentioned in Sect. 10.2.1.
The Kanban board originates in the world of advanced manufacturing and just-
in-time production emanating from the Japanese car industry. In its simplest form,
it consists of three columns: To Do, Doing and Done. The requirements, features or,
more likely, technical tasks identified in iteration planning are added to the board
using sticky notes. Each sticky note represents a technical task. All the tasks start off
in the To Do column. As the project progresses, the sticky notes all work their way
over to the Done column. The sticky notes are usually moved during coordination
meetings, as the status of an item changes. This gives a visually appealing sense of
project progress. Each team member can see their effort as part of the wider range
of team activities.
Online tools, such as Trello [1], can be used to support virtual teams using
Kanban boards. This retains the visual illustration of project progress while enabling
remote working. Tasks or user stories modelled using online Kanban boards can be
embellished with acceptance test criteria and links to definitions of done.
4. Review any requirements that you were unable to implement for any reason,
5. Collect and carefully record any feedback from the product owner or client.
An important benefit of the incremental development approach is the idea
of getting feedback at intermediate stages of the project development process.
Customers, clients or users (whichever most appropriately describes your situation)
need to be able to see progress towards project completion and influence the
direction of travel. If you are serious about software development, you genuinely
want reassurance that the code you are writing is fit for purpose and meets the needs
that have been identified.
It is the customer demonstration that offers both sides the opportunity for this
feedback. You get reassurance from the customers that you are on the right track.
And customers see evidence of progress towards the completed system. You do this
by demonstrating each of the features, in turn, that have been implemented during
the last iteration.
Demonstrate how each feature works and the defensive programming measures
you have implemented. So, for example, where your software requires user input,
you will show how the programme responds if the wrong type of information is
provided. You may also demonstrate how the working software has benefited from
testing and other quality assurance measures.
13.4.1 Retrospectives
you can develop a set of actions. Actions are practical steps you can take to address
improvement areas.
their way around an existing code base. In either situation, the learner controls
the keyboard. You don’t learn much by merely watching an experienced hand.
Obviously, the mentor adopts a warm, constructive and supportive demeanour.
There are some specialist ceremonies that development teams use, usually when
things are not going well. Sometimes our initial estimates of a user story turn out
to be wrong. Perhaps, as a development team, we misunderstood the requirement.
Or, maybe implementation of the story turns out to be much more complicated than
expected. Often, pair programming is enough to get us out of such a fix. However
occasionally, a more dramatic solution is called for.
13.7.1 Spikes
A spike is where the estimate, for a requirement or technical task under develop-
ment, proves to be inaccurate. This usually means some new, previously hidden,
complexity associated with a requirement has emerged. We can mark the task on
our Kanban board, re-estimate the effort required and re-prioritise. The advantage
of treating the story as a spike is that we can remain committed to other stories in
the sprint and do not get distracted with the troublesome one.
We might choose to park the story for a future sprint. But, this is undesirable,
because we have failed to meet our commitment to the product owner or client. In
consultation with our product owner, we might decide that the spike is too important
to just park for a future sprint.
Having identified a story as a spike, we might also consider adding resources to
that story. In this situation, quite a lot of teams use pair programming to resolve
the issue. While, in the Extreme Programming method, it is recommended that
developers use pair programming all the time [4], some practitioners prefer to use
pair programming only under specific circumstances. A common, special case, use
of pair programming is to address spikes.
Alternatively, we might reduce the priority of some other activity, so that we
can resolve the spike. There are some other approaches to resolving spikes, such as
swarm programming or even pulling in additional specialist support from outside
the team.
In swarm programming, more than two developers work together. This can be useful
if team members want to work together to tackle some new task or technology
that no one has used before or where progress for the whole team is blocked by
one particular problem that needs to be solved. The idea is that the swarm makes
development quicker and comes up with higher-quality solutions than an individual
or pair would.
Usually swarm programming is used to tackle specific issues. For example, some
teams use swarm programming to address a high-priority spike. Alternatively, a
swarm might be used to achieve consensus on an architectural style.
Mob programming takes the ideas of pair and swarm programming to the extreme.
In mob programming, the whole team works together all the time. The team is
co-located, working together at one computer performing all requirements, design
and development activities. So, in mob programming, the team has workshops
for defining stories, working with customers and designing, testing and deploying
software.
13.8 Exercises
Now create a learning journal for Part III Process. You can use the learning journal
to make notes on the things you learn from this part of the book. The journal should
include a section for each book chapter. You can also use the learning journal for
planning your future skills development.
Don’t look at the hints, tips and solutions chapter, at this stage. First, complete
an exercise (but still, do not look at the hints or tips). Next, reflect on the exercise.
Then look at the hints, tips and advice in Sect. 13.9.
In agile methods, the specific meetings teams use to develop software are often
called ceremonies. I have described ceremonies used to start and finish iterations.
This includes estimation and work allocation during iteration planning and kick-off.
Demonstrations of working code provide opportunities for feedback, and retrospectives
provide an important forum for team learning. I've also discussed ceremonies used
during the iterations themselves, such as coordination meetings, pair programming
and test-driven development. Kanban boards, whether physical or online, provide
visibility to team members of project progress. Next, in Chap. 14, we’ll explore the
principles and ideas behind lean software development.
References
Chapter 14
Lean
Abstract This chapter will introduce the concept of lean software development.
The lean approach treats each user story or work item as an artefact flowing through
a development process. Lean focuses on concepts such as value, waste, speed,
people, knowledge and quality. We take a holistic view of the development life cycle,
concentrating on maximising the efficient flow of work items. We also touch on the
influential lean start-up model, an approach to starting a technology company using
revenue (rather than investment) to support growth. There are many useful ideas in
lean which we can apply alongside agile methods. Some teams view adopting lean
as a natural progression once they have become proficient at agile.
14.1 Introduction
I devoted the whole of Part I to the topic of people, their roles, membership of
self-organising teams and managing other stakeholders in the process. In short,
software development is a team sport. We need technical expertise, and we need
an environment in which we can work together.
Management’s objective is to coach and mentor staff members so that they
acquire the skills and behaviours we need. Managers help people to develop. The
model of management is a servant-leader approach. Managers provide the resources
team members need to complete work items and offer support with learning new
skills. Managers remove impediments that create inefficiencies in the development
process.
Many lean organisations provide time for professional development. Perhaps half a
day or one day per week is set aside for personal projects that can help team members
acquire new skills and knowledge that can, in turn, help the organisation grow and
improve. These projects can be used to learn entirely new techniques or to study
exciting new technologies.
Just like in agile methods, the lean approach is dependent on the self-organising
team. The self-organising team takes responsibility for delivering good-quality
software. Team members assign themselves work items and commit to continuous
improvement of quality. Over time, the team develops a collective responsibility for
delivering good-quality code, on time.
Sometimes a team member may be keen to take on a stretch task, a work item
that creates an opportunity to learn new skills. At other times, team members may
be happy to exercise the skills they currently have. The point is that selecting work
items empowers team members to have more control over their activities during the
work day.
A common goal, for advocates of lean, is to strive for perfection. We can think
of quality assurance in two senses: prevention and detection of errors. Pair pro-
gramming (see Sect. 13.5) and code reviews can help with prevention. Test-driven
development (see Sect. 13.6) and test automation can help with detection. We’ll talk
more about automated testing in Chap. 16.
Our goal must be to use sensible coding standards and best practices [3], as
discussed in Sect. 10.4.2, and to be alert for code smells [2] that might indicate
future maintainability problems. We use appropriate folder and package structures
to logically organise our source code. Source code is split into subsystems (perhaps
layers or other moving parts) depending upon the architecture style we have
selected; see Chap. 8. Further, in larger systems, some sensible organisation of
functionality into groups might also be required.
Using good naming conventions helps with readability. We carefully choose
meaningful names that convey the purpose of the source code element. Use naming
styles that are consistent with language conventions and apply them uniformly. Removing
dead code helps achieve simplicity. We avoid unused imports, variables, methods
and classes. Any redundant code must be refactored.
We try to automate as much as possible. We like automated testing: unit, regres-
sion and acceptance testing. We also like version control as explained in Chap. 15.
Frequent merging of branches helps to minimise and resolve inconsistencies. We
will discuss continuous integration and DevOps in Chap. 21. Perhaps DevOps is
too much for a student or novice project, but it’s a good idea for mature commercial
teams to consider. Automation helps us to apply policies consistently and repeatably.
Manual processes are error prone and tend to get forgotten when teams are under
pressure from tight deadlines.
Software tools to review code quality, such as SonarCloud [6], help us to identify
potential problems early. We can run a quality test each time code is pushed to our
main trunk in version control. A dashboard in SonarCloud then helps us identify
issues and even gives advice on mitigation.
14.2 Value
value in our processes. We would certainly like to eliminate any of our activities that
do not produce value at all.
I like to think of value more broadly than money and identify other sources of value.
For example, disaster recovery software is used to manage the logistics of delivering
emergency aid and relief. We might consider the value of disaster recovery software
in terms of the number of lives saved. Environmental mitigation software might be
valued in terms of the number of habitats saved or restored. What better examples of
value can there be? Consequently, it is legitimate to think beyond monetary value,
whether you are developing software in a commercial context or for a third-sector or
non-governmental organisation.
We need a set of (value) criteria for work items to move from one part of our value
stream to the next. This might take the form of a checklist or some other set of
criteria. For example, code ready for merging into the main trunk must have passed
all unit tests and locally integrate with the code in the trunk without creating errors.
Code ready for review must have passed unit tests and integration as well as static
quality assurance tests and regression tests. Finally, in this example scenario, code
might be ready for deployment only if:
• Unit tests have all been passed,
• Code reviews have been completed and any actions addressed,
• The full suite of security tests has been performed,
• Code quality tests have passed.
Some people use informal names to distinguish these stages of completion. Code
ready for deployment is done, done, done. Code ready for review is done, done.
Code ready for merging into a branch is simply done.
14.3 Waste
Imagine starting a stopwatch the moment an idea for a new software feature is
identified. Then imagine stopping the stopwatch, the moment you get paid for that
feature. Our goal is to minimise that time interval. What can you do to remove
any activity that does not add value in that time interval? In manufacturing, seven
sources of waste have been identified. These seven wastes have been translated into
the software development context [4]; see Table 14.1.
Let’s briefly consider each of these forms of waste.
Our overall objective is to get worthwhile features deployed and used by paying
clients as efficiently as possible. Any incomplete work in the system or under
development is a source of waste from that perspective. We can explore some
examples of partially done work.
Documentation without code. Design documents and requirements specifications
that have yet to be implemented represent a source of waste. These documents
should be prepared when they are needed, not any earlier.
Code not checked into trunk. Code sitting in personal repositories that has yet to
be checked in to the main repository is not adding value to the development process.
We check in code frequently.
Untested code. Code can be tested at development time. Acceptance testing and
code reviews should be conducted promptly. Untested code is not adding value to
our product.
14.3.3 Rework
14.3.4 Hand-Offs
Hand-offs, where an incomplete work item is passed onto someone else, result in
lost tacit knowledge about the task. This lost tacit knowledge must either be re-
learned by the work item recipient or, perhaps worse still, they proceed without
the benefit of the tacit knowledge potentially resulting in defects. It is healthy to
minimise hand-offs, which is an important justification for self-organising teams
comprising people with the full range of required skills.
14.3.6 Delays
Delays and waiting time are obviously undesirable in an agile development process.
Some of the most significant waiting times occur before we even start development,
such as:
1. Waiting for project approval,
2. Waiting for people to be assigned to the project,
3. Waiting for assigned people to become available.
A common problem faced by developers is waiting for sufficient information to
be able to develop code. This can be because insufficient effort went into user story
elaboration or because clients assume it is enough to describe desirable features
only in broad terms. Scrum masters are supposed to remove impediments, such as
waiting for information, but disengaged clients can undermine agile processes.
14.3.7 Defects
We try to minimise defects in our code. The longer a defect exists in our code,
the more expensive it is to fix. Further, if a defect reaches customers, it damages our
reputation for quality as well. We use frequent automated unit and acceptance testing
as well as code reviews to try to catch defects early, ideally during the development
cycle.
14.4 Speed
Short delivery cycles increase learning. You are forced to find ways to simplify
installation and product upgrades, because you plan to do those activities frequently.
Your quality assurance processes are designed to be performed within iterations, not
after iterations have finished.
Analysis of queueing theory suggests that to reduce average cycle times, we
should ‘even out the arrival of work’, ‘minimise number of things in process’,
‘minimise size of things in process’, ‘establish a regular cadence’, ‘limit work to
capacity’ and ‘use pull scheduling’.
Even out the arrival of work. It is difficult to control project approval processes
or sales of bespoke software. However, it is undesirable if requests are queued
for months at a time. We strive to maintain a steady flow of work. Allowing big
product backlogs to build up is not ideal.
Minimise number of things in process. I’ve already suggested that task switching
is inefficient. Work-in-progress (WIP) limits are used to make process bottle-
necks more visible. This enables more precise matching of resources to demand.
Minimise size of things in process. It is a difficult discipline, but reducing the size
of work items is a good tactic for reducing average cycle times. Try to split work
items up so that as many as possible are small.
Establish a regular cadence. Iterations provide a valuable insight into the produc-
tivity of teams. You learn how much can be accomplished and build confidence
in estimation. This means it is easier to make promises to clients and then honour
them.
Limit work to capacity. Working over capacity means people work long hours and
consequently get tired and careless. Short-term over-capacity working can be
useful, even desirable. But as a long-term strategy, it is not wise.
Use pull scheduling. Work items can be pulled from a backlog into development
and production, as a consequence of some external demand. This is usually
established through prioritisation of the product backlog. We pull high-priority
items first. The point is to pull according to customer need.
In summary, problems in our development process slow down our cycle times.
Tackling these inefficiencies one by one, using a continuous improvement process,
helps us to streamline our software production processes.
As mentioned, WIP limits provide a mechanism for controlling the number of work
items being processed. Establishing WIP limits is a policy decision that can help us
manage the flow of items through our work processes.
The WIP limit is derived from the capacity of the team to perform a particular
task. When looked at from this perspective, what is the point of giving a team more
work than they have the capacity to perform? The WIP limit provides a mechanism
for making the team’s capacity more visible.
As part of creating a WIP limit, it might be desirable to create a buffer (the buffer
might be shown on a Kanban board, for example) for items blocked by the WIP
limit. The buffer can be useful for accommodating small fluctuations in arrival rate
of work items. The buffer also has an important role in making visual an unhealthy
build-up of work items. Having identified a work item build-up in our
buffer, we can add resources to clear the backlog or analyse our processes to better
understand our work item flow. For example, we might use a developer swarm, as a
temporary fix, to empty the buffer; see Sect. 13.7.2.
A major challenge for teams seeking to reduce cycle times is variability in the
size and complexity of work items. Building new features is fun and attractive;
enhancing the source code in existing features is less so. New features tend to be
large work items. Feature enhancements vary in size.
Refactoring to simplify our code base is important. Refactoring, as we’ve said, is
making changes without affecting programme outputs. Refactoring helps with the
maintainability and readability of source code, but is difficult to estimate. Often
refactoring is overlooked by product owners when they prioritise work.
The effort, required for defect fixing, is difficult to estimate; by the time you’ve
figured out the problem (the time-consuming and difficult part), implementing the
solution is often relatively straightforward. Consequently, some teams don’t perform
estimation on maintenance tasks; they view it as waste [1].
Teams estimate the effort needed to create new features, but don’t waste
their time estimating defect fixes and minor feature enhancements. Each iteration
comprises a blend of new features and maintenance tasks. Team members and
product owners collaborate to achieve the right blend over time.
In larger-scale projects (see Chap. 18), some teams are solely dedicated to
maintenance tasks: bug fixing and minor feature enhancements. But it seems rather
uninspiring to be limited to maintenance tasks, if another team gets to build all the
new features.
An approach I like, which is dependent upon the number and size of work items
arriving, is for teams to perform a maintenance iteration from time to time. Perhaps
every third iteration is focused on defect fixing and minor feature enhancements.
The other iterations are (largely) focused on new feature development. This way,
everyone gets to share the full range of work items.
Lean thinking has also been influential among technology entrepreneurs, notably
through the work of Eric Ries [5]. This approach advocates a fierce focus on
experimenting and monitoring customer reaction. The idea is to get early feedback
from developing products and business models with minimum investment, by using
prototypes or mock-ups to assess market reaction. The goal is to generate revenues
and continue experimenting to maximise income. Consequently, this model focuses
on attracting paying customers, rather than obtaining investment in an untested idea.
Three important concepts of the lean start-up ethos are bootstrapping, minimum viable
product and pivot.
14.5.1 Bootstrapping
The lean start-up model focuses on generating revenue, early in the business
development process. The approach advocates testing ideas, through revenue gener-
ation, before making substantial investments. Some people call this bootstrapping,
because it is an attempt to pull the business up by its own bootstraps. The bootstrap
approach is a reaction to the focus on raising investment that was popular during the
.com (pronounced ‘Dot Com’) bubble earlier in the century.
The idea of a minimum viable product is to make tangible the essence of a solution,
with the least possible investment of time and resources. Then, the minimum viable
product can be used to test concept viability with potential customers. The minimum
viable product’s purpose is to support short cycles of evaluation with each new
feature.
The definition of essence, in the minimum viable product, is the central chal-
lenge. What set of features are needed to make the solution work? And, by
implication, what features are not necessary? We need to identify only the essential
features, because we don’t want to invest time and resources on superfluous features.
The minimum viable product typically includes end-to-end information flows
and hence requires simple interfaces to cooperating subsystems. Sometimes it is
helpful to think of the minimum viable product as a skeleton of the system or core
solution.
14.5.3 Pivot
If our minimum viable product is not energising potential customers, we may decide
our solution idea is not as promising as we hoped. This may cause us to pivot, or
change direction, towards a variation of our solution idea. In a sense, the minimum
viable product failure has worked perfectly. We have not invested heavily, or ‘bet
the house’, on an idea that is not going to work.
The pivot may be a rather dramatic change of direction. The solution may
serve a different market or perform a different function, than our original idea.
A new minimum viable product needs to be constructed and further experiments
performed. Many technology start-ups have gone through the experience of a pivot
towards a different idea to their original concept.
14.6 Exercises
Now, for some exercises, you can try to learn more about the topics covered in
Chap. 14. Complete an exercise, and then you can look at the hints and tips in
Sect. 14.7.
In this chapter, we have explored a set of seven lean principles: eliminate waste,
build quality in, create knowledge, defer commitment, deliver fast, respect people
and optimise the whole. We have investigated in more detail the lean
concepts of value, waste, speed, people, knowledge and quality. Value and value
stream mapping invite us to understand inefficiencies in our processes. Waste is
anything that does not add value to our product, including delays and superfluous
activities. When you tirelessly work to eliminate waste, you will likely achieve
much faster delivery of value. When we say speed, we really mean the need to
minimise development cycle times. This is a perspective on maximising work flow
through our process. Lean proponents advocate a systematic and scientific approach
to knowledge gathering. Experiments are conducted to test hypotheses aimed at improving both the product and the development process.
References
Chapter 15
Version Control
Abstract Version control software tools provide content management services for
source code. They offer a searchable change history and allow us to archive and
restore code fragments, as we add new features to our software. Version control
gives us a historic database of our system as we develop. We can use version control
locally, when we are working alone. Moreover, version control really comes into its
own when we work with others in a group. For team working, we can use shared
source code repositories. In this chapter, we will explore simple version control use
cases, such as staging and committing files. Then we will explore shared repository
techniques like cloning, checkout, merging and so on. These techniques will enable
you to share the new features you create with others. In turn, you will be able to
learn how to incorporate their features into your code.
15.1 Introduction
Version control is about solving three main problems: creating change records,
storing the changes we make to our evolving software as well as sharing and
integrating code with others. I suggest you start by learning how to manage changes
in your own code first. You can then learn how to share your own software and
download code written by others.
A version control system is used to record a copy of files as you make changes over
time. We most often think of version control being used to record the evolution of
computer software source code. But version control can actually be used for any
computer files. Indeed, I used a version control system to keep a record of changes
and create a backup file archive during the development of this book.
As the software you create becomes more complex, using a version control
system is a very wise thing to do. Version control can help you to protect yourself
against lost files or revert to an earlier version when new ideas or features that
you add to software don’t work out. Version control allows you to revert files to
a previous state. You can also use version control to easily revert an entire project
to a previous state. You can use version control on your local computer, as shown
in Fig. 15.1. You can store a snapshot of your system, as it evolves, keeping track of
changes as you go.
You have to initialise the version control database, and you have to remember to
store and document the snapshots as you go. But in return for this discipline, you
get much more control and access to a range of features you don’t get if you simply
archive files to a backup storage device or a cloud server.
Let’s try this out for ourselves. I’ve chosen to use a version control system called git
[1]. Others are available. First make sure git is installed on your computer. Open a
command window and type this:
C:\folder>git --version
If git is installed, you will see a version number. If not, you will need to download
and install git in a manner appropriate for your operating system [5]. I have chosen
to use a command window for these exercises. I prefer to see exactly what is
happening, which can sometimes be obscured by graphical environments. Assuming
git is installed and running, create a folder and initialise a git repository on that
folder, like this:
C:\folder>mkdir git-test
C:\folder>cd git-test
C:\folder\git-test>git init
C:\folder\git-test>dir
. . . nothing seems to have changed. That is because git has created a hidden folder in
the current directory. You can see this, in MS Windows 10, for example, by typing
this:
C:\folder\git-test>dir /a
You will see a hidden folder, as shown in Fig. 15.2. Git uses that hidden folder
to keep copies of your files as you make changes. We don’t need to worry ourselves
about the internals of how git does this.
Now, let’s work through an exercise of creating a change history for some of our
own code. We can create some example files to be archived using our version control
system. Use a text editor to create three files:
• MyFirstFile.txt,
• MySecondFile.txt,
• AFileIDoNotCareAbout.txt
You can put a sentence text into each file, as follows:
• MyFirstFile.txt, ‘here is some text’,
• MySecondFile.txt, ‘this is some other text’,
• AFileIDoNotCareAbout.txt, ‘some unimportant text’.
Before we do anything else, what does the git version control system think is
happening? We can run the git status command, like this:
C:\folder\git-test>git status
The output of the git status command is shown in Fig. 15.3. Notice that the
files we created are listed in red. Git even tells us what we need to do, if we want to
include the files in our version control repository.
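We use the git add command to stage the two files we care about, leaving the third
file alone, like this:
C:\folder\git-test>git add MyFirstFile.txt
C:\folder\git-test>git add MySecondFile.txt
C:\folder\git-test>git status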
Fig. 15.4 Using the ‘git status’ command to show two staged files
Staging Files
The output of the git status command is a bit different this time, as shown in
Fig. 15.4. We can now see the two files we added are shown in green. Technically,
these files are staged, which means they are ready to be put in the version control
repository. The staged files are not in the version control repository, yet. They are
only ready to be put into the repository.
Staging allows us to prepare some files to go into version control and ignore some
others. This way we can separately track changes we make for different purposes.
We are not forced to put everything in version control at the same time.
To put the staged files into the version control repository, we must perform a
git commit operation. The git commit is followed, in this example, by the -m
option to accept a message parameter. The -m option is followed by the text string
"files created with initial text" of the message.
The output can be seen in Fig. 15.5. Notice that the two files have been created
(in the version control repository) and that the message from the git commit
command is reproduced.
We can run the git status command again and see what git ‘thinks’ is
happening. The git status is shown in Fig. 15.6.
Notice that git is not tracking the file AFileIDoNotCareAbout.txt because we
didn’t use the git add command on that file. That file is being ignored by git. Files
that are not staged are not added to the version control repository.
Now you can use the text editor to add some new text to the first file and do a
new git commit. Let’s also delete the unimportant file, like this:
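The exact commands might look something like this (the commit message is just an
example, and del is the MS Windows command for deleting a file):
C:\folder\git-test>git add MyFirstFile.txt
C:\folder\git-test>git commit -m "added more text to the first file"
C:\folder\git-test>del AFileIDoNotCareAbout.txt
C:\folder\git-test>git status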
After running the final git status command, you will see something like the
output in Fig. 15.7.
Let’s illustrate how version control can help you recover from mistakes [6]. We’ll
deliberately add some erroneous text to our second text file, using a text editor, to
illustrate the idea. In our moment of madness, we figure the text in the second file is
fine, so let’s commit that, like this:
Oh no! Now, let’s imagine we realise we have committed a file with errors. No
matter. We can just, in this example scenario, revert to our earlier commit.
Now, we can use the git log command to view the commit history of our work
so far. Have a look at Fig. 15.8. You can see the three commit messages and that each
commit has a unique reference number. A simple, and perhaps rather crude, way to
remove the text we just added to the second text file is to use the git revert
command. The git revert offers several options, but in this example, we will
simply throw away the last commit, like this:
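One way to do this is shown below; the --no-edit option simply accepts the commit
message that git proposes for the revert:
C:\folder\git-test>git revert --no-edit HEAD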
Now when we look at the output from the git log command, we can see that
a new commit has been added that reverses our previous commit, as shown in
Fig. 15.9, and removes the erroneous text we had added to our second file.
Having learned some skills about using git to create a change history for some of
our own code, we can now learn the skills we need to share code with others.
While using a local version control system to manage your content makes sense,
adding a remote server offers a higher level of reliability. In the previous section, we
created a local git repository, so we don’t need to do that again. What we need to do
though is:
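In outline, we create an empty repository on GitHub, link it to our local repository
as a remote named origin and push our existing commits. On the command line, that
looks something like this (substitute your own GitHub username and repository name;
the URL below is illustrative):
C:\folder\git-test>git remote add origin https://github.com/your-username/git-test.git
C:\folder\git-test>git push -u origin master
C:\folder\git-test>git status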
Fig. 15.9 Git log output (partial) after reverting the third commit
If everything worked, the final git status command should show you that the
local repository is up to date and that it is linked to origin, which is the name we
gave to the remote repository on GitHub. The GitHub approach to authentication has
changed over time. At the time of writing, access tokens are needed. You create an
access token, copy it and then use it, in place of a password, through your command
line. Don’t worry; there are instructions about how to set this up, online [4].
Now each time you make changes, you have to remember to stage any files you
have changed, commit the changes and push the changes to your remote server, like
this:
C:\folder\git-test>git add -A
C:\folder\git-test>git commit -m "describe changes"
C:\folder\git-test>git push
As we have seen, modern version control systems, like git, support integration with
remote servers. By using a combination of local and remote version control systems,
known as distributed version control, you can share source code within a team in an
orderly manner. In this way, a team member can work on a specific feature locally,
which can then be shared with other members of the team using the remote server,
as shown in Fig. 15.10.
Fig. 15.10 Distributed version control: Computer A and Computer B each hold local files and a local version database (versions 1 to 3), synchronised with the version database on a shared server computer
We now have to get to grips with another new idea: branching. Branches are separate
lines of development within your project. You can think of branches as development
topics, features or versions. Branches can live for a long time, or can be rather
temporary, depending on your project needs.
For example, say you release version 1.1 of your project to some customers. But
now you want to work on version 1.2. So you can create branches for versions 1.1
and 1.2; you can then decide to leave version 1.1 alone, in case you need to go back
and investigate any bug fixes there later. You can safely work on version 1.2 without
affecting version 1.1. Branches to support releases (such as versions 1.1 and 1.2 for
example) tend to live on for a long time.
Alternatively, maybe you want to add a new feature to your software, but you are
not sure exactly how it is going to work. Then you need to create a new branch for
the new feature, which you can work on separately, without disrupting the rest of
the code base, as shown in Fig. 15.11. Branches for new features tend to get merged
back into the main code base (often known as the trunk).
Version control systems, such as git, provide a lot of features for creating,
merging and managing branches. But don’t get too carried away. We don’t want
dozens of branches in our projects. We’ll soon lose track of what on earth is going
on. Here is an example of creating a branch, pushing to the remote repository and
switching between branches locally:
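A sketch of the commands; the branch name feature1 and the file name MyThirdFile.txt match the ones shown in the figures, and the third file is created with your text editor:
C:\folder\git-test>git checkout -b feature1
(create MyThirdFile.txt using your text editor)
C:\folder\git-test>git add MyThirdFile.txt
C:\folder\git-test>git commit -m "Third file"
C:\folder\git-test>git push --set-upstream origin feature1
C:\folder\git-test>dir
C:\folder\git-test>git checkout master
C:\folder\git-test>dir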
The first directory listing is shown in Fig. 15.12. You can see the third file that
you added; mine has the unimaginative title of MyThirdFile.txt. Now, for this
example, when you switch back to the master branch, what happens to that third file?
Remember that the idea of branches is to allow separate lines of development to be
performed without interfering with each other.
The second directory listing is shown in Fig. 15.13. As you might have guessed,
that third file has disappeared. The third file exists in the feature1 branch and not
in the master branch.
In this simple example, the git merge feature1 command works without
any problem. We can then remove the branch that is no longer needed with the
git branch -d feature1 command.
The git merge, in this simple example, went well because we were simply
adding a new file into the code trunk on the master branch. Things get a bit more
complicated when the changes being merged affect the same file. Thankfully,
git has been well-designed with many tools to support merging. For example, you
can explore the changes made to files by using the git diff command.
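A minimal sketch of the local merge workflow, including a git diff to inspect the differences between the two branches before merging:
C:\folder\git-test>git checkout master
C:\folder\git-test>git diff master feature1
C:\folder\git-test>git merge feature1
C:\folder\git-test>git branch -d feature1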
So far, the merging we have been doing is local. It is quite useful to be able
to create local branches, merge and delete them. But, what about picking up code
written by someone else? What we really want to have is an orderly way to allow
different team members to create features and then, once the features are working
and tested, integrate them into a shared trunk. A simple way to achieve this is to use
feature branches and the remote GitHub server.
One person can create a new branch; let’s call it feature1. That person
downloads the main trunk from the GitHub repository and works on feature1
locally. In the meantime, another person can create a new branch; let’s call it
feature2. That person also downloads the main trunk from the GitHub repository
and works on feature2 locally to them.
After completing their work, the first person merges their feature1 code with
the main trunk. That main trunk now includes feature1. Sometime later, the
second person merges their feature2 branch with the new main trunk on the
GitHub repository. The main trunk now includes both feature1 and feature2.
However, while they are working on the branches locally, feature1 and feature2
are kept separate from each other.
While the feature branch technique is conceptually simple and elegant, it can
encourage people to work independently for long periods of time. Imagine you work
on feature1 or feature2 for a few hours or even a day or two, not much harm
is done when you merge that feature with the main trunk. But, what happens if you
work on a feature for several weeks or months? Maybe there will have been major
changes to the main trunk that make merging a complicated nightmare. To avoid
this, some people suggest working locally on new features, but keep them in the
same main trunk branch and keep merging frequently. As you add new files, they
get added to the main trunk every day.
Caution!
As I’ve mentioned, branches are very useful where we want to keep a frozen
copy of a source code release. Let’s say we complete version 2.1 of our software
system and we want to start work on version 2.2. We can create a new branch,
dedicated to the version 2.1 release. We can then version control that branch,
with defect fixes and feature enhancements, without worrying about new features
being added on the main branch. When version 2.2 is released, we can create a
new branch for that release as well. We can repeat this cycle for each new release,
knowing that we have a separate version control history for each release.
15.6 Exercises
C:\folder\git-test>git init
C:\...>git add -A
C:\...>git commit -m "Describe operations performed"
C:\folder\git-test>git add -A
C:\...>git commit -m "Describe operations performed"
Now you realise, for the purposes of this exercise, that the last commit was
an error. You want to undo the change you made to that file and have just
committed. No problem.
The git revert command can be used to undo the last commit. Use
the git revert --help command to see the other capabilities of the
git revert command.
You can then use git push to back up your local files to the
remote server. The -u option saves you from having to type the
<REMOTENAME> <BRANCHNAME> every time you do a git push.
With luck, you should now have your local code backed up on the remote
server. You can use your browser and GitHub credentials to search your online
repository and see what is there.
Our new branch, in this example called feature1, has been created and we
are now ‘in’ the new branch. We can create and modify files without affecting
the contents of any other branches. You should now add some new content
within the feature1 branch. This new content can now be committed and
pushed to your remote repository on GitHub.
C:\folder\git-test>git add -A
C:\...>git commit -m "Third file"
C:\...>git push --set-upstream origin feature1
You should now have a backup of the new branch of your local repository
on your remote GitHub repository.
References
Abstract Testing is used to identify defects and provide reassurance about the
quality of our software. We always test our code before we share it with others. In
this chapter, we will learn the skills needed to create unit, regression, user experience
and acceptance tests. We will also learn the skills needed to create simple test
automations. We always try to automate the things we do, so that they are reliably
repeatable. This saves time in the long run and reduces the chances that we will
forget or cut corners later, when we are under pressure from short deadlines. We
will also explore some techniques for performance and security testing.
16.1 Introduction
value, you quickly realise that testing all combinations of inputs and preconditions
is not possible in even quite simple systems.
However, we can and must use testing to check for defects and improve the
quality of our software. We just need to be selective about what we test, making
sure we test all the important pathways through our code. We touched upon testing
artefacts used during the development process, in Chap. 10.
Test planning is about establishing the process and resources needed to test an
application to an appropriate level of test coverage. The test plan defines what is
to be tested and how test results will be recorded, as mentioned in Sect. 10.2. The
testing schedule, and how testing is integrated into an incremental delivery model,
will also need to be defined.
Testing levels determine the focus of our testing: whether on the contents of
small software components or classes on the one hand or on the integration of
components into larger systems on the other hand. Common testing levels include
unit, integration, systems and acceptance testing.
Unit testing is a feature development artefact, as mentioned in Sect. 10.4. In unit test-
ing, we isolate and test individual elements of our code, such as method signatures,
return parameters and data transformations. Unit tests are often created and executed
as we develop. Unit tests are usually written by source code developers, sometimes
with the support of test specialists that are members of our self-organising teams
during the development iteration.
Integration testing is used to verify each new interface we add to our system.
Interfaces to new subsystems are built incrementally and must be tested to expose
any defects. Interface testing is done after unit testing and prior to offering up the
code to other software components that depend on the new services provided.
Prior to integration testing, we can use stubs and drivers to simulate interfaces.
A stub provides a dummy implementation of an interface. The stub complies with
the interface requirements without executing any code to fulfil the call. A simple
dummy response is provided instead. Conversely, a driver makes dummy calls to an
interface. We can implement all the source code to fulfil the interface and use the
driver to simulate calls to our interface code.
During integration testing, we remove the stubs and drivers and perform tests
to ensure everything works as expected. We normally perform one integration at a
time, to ensure each interface is working before we move on to the next.
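As a minimal sketch (the PaymentService interface and all names here are invented for illustration), a stub and a driver for a single interface might look like this:
// An interface that the component under test depends on (hypothetical example)
interface PaymentService {
    boolean authorise(String cardNumber, double amount);
}

// Stub: satisfies the interface with a dummy response, so callers can be
// tested before the real payment implementation exists
class PaymentServiceStub implements PaymentService {
    @Override
    public boolean authorise(String cardNumber, double amount) {
        return true; // always succeeds; no real processing is performed
    }
}

// Driver: makes dummy calls to an implementation, so the interface can be
// exercised before the real calling code exists
class PaymentServiceDriver {
    static void exercise(PaymentService service) {
        System.out.println("Small payment: " + service.authorise("4111111111111111", 9.99));
        System.out.println("Large payment: " + service.authorise("4111111111111111", 10000.00));
    }
}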
We use system testing to evaluate the holistic functioning of our system. During
system testing, we determine if components and subsystems cooperate in the
way we expect and transform data appropriately across interfaces. System testing
routinely involves software built by different individuals or teams. System testing
may also involve evaluating newly developed software interacting with third-party
software.
We can perform non-functional as well as functional system testing. Perfor-
mance, load testing and security testing are best performed on the entire system prior
to release. If a third-party test team is employed, they are more likely to perform
system testing than unit or integration testing.
Acceptance testing is used to ensure that each feature or increment meets the
customer's needs. Conventionally, in sequential or plan-based software development models,
acceptance testing was performed towards the end of the development life cycle.
But in incremental development, acceptance test criteria are developed earlier in
the process and then applied as features or increments are completed. Acceptance
testing is performed before customer demonstrations, so that the test results can
inform judgements about software quality.
To achieve the objectives of evaluating our systems at the different stages of devel-
opment, we can employ five main testing techniques: regression, user experience,
performance (load), security and A/B testing.
The users are then observed performing the requested task using the software
system under investigation. Observation might be informal, with researchers mak-
ing notes and timing activities. For more detailed studies with more resources
available, careful observation and video recording might be used. For a more
rigorous, research-oriented, approach, specialist eye-tracking apparatus can be used
to precisely characterise participant behaviour while using the software.
Once users have completed the scenarios, it is typical to ask them questions
about their impression of the experience. This qualitative data can help decide if the
software is going to be enthusiastically adopted by potential customers. Once the
software goes live, we employ A/B testing to evaluate the new features we release,
as discussed in Sect. 16.4.5.
Performance testing is used to check that system response times are acceptable when
subjected to an expected number of user requests. Tools like Apache JMeter [1] can
be used to load-test applications, while Selenium [6] has features for testing web
applications by simulating button presses and web-form filling. Performance testing
gives reassurance that an application performs with adequate response times, when
under anticipated load.
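As a sketch of the kind of web-form simulation Selenium supports (the page URL, element ids and credentials below are hypothetical), a small WebDriver program might look like this:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginFormCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // assumes a local chromedriver installation
        try {
            driver.get("https://example.org/login");                     // hypothetical page
            driver.findElement(By.id("username")).sendKeys("testuser");  // fill the form
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();           // simulate the button press
            System.out.println("Title after login: " + driver.getTitle());
        } finally {
            driver.quit(); // always close the browser session
        }
    }
}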
However, stress testing is often performed to explore the load under which the
response time falls below acceptable levels. In stress testing, load (in terms of
number of users or data throughput) is steadily increased to understand the limits
of acceptable performance. We want to know how much additional load our
system can withstand before response times become unacceptable.
We talked about security in more detail in Chap. 11. It is good practice to include
security testing in a test pipeline. General code quality testing tools, such as
SonarCloud, perform some security tests that can identify deficiencies in our code
[7]. In addition, we can use more specialist security testing tools such as Gauntlt [8]
and OWASP ZAP [4].
These tools are designed to be used as part of an automated test and build
pipeline. This is known as DevOps in which source code development, the Dev,
and deployment or operations, the Ops, are integrated using automation. Indeed,
the extension of the phrase DevOps to DevSecOps derives from the need to include
security testing in the continuous integration or continuous deployment pipeline.
We’ll come back to these ideas in Chap. 21.
In A/B testing, we deploy two alternative versions of our functionality to live users
and measure their responses. This approach is usually used in web-based or browser-
based applications. Our objective is to measure user behaviour in real time on our
live web-hosted products. At runtime, users are assigned to one version or the other
of our experimental features. For example, a new landing page might be deployed,
and we can use A/B testing to see which version attracts the higher number of site
registrations.
We use metrics such as the number of click-throughs, abandonment rates,
conversion rates and revenue per customer to evaluate the different versions. Essentially,
each roll-out of a new feature is treated as an experiment to measure impact on
desirable customer behaviour.
An A/B testing framework is usually used to manage our experiments. The
framework allows us to define metrics, deploy software features and measure usage.
Such frameworks offer a dashboard for managing experiments and analysing test
results.
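As a minimal sketch of how a framework might assign users to one variant or the other (the 50/50 split and the user id format are assumptions for illustration):
public class VariantAssigner {
    // Deterministically assign a user to variant 'A' or 'B' based on their id,
    // so that the same user always sees the same version of the feature
    static char assignVariant(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100); // stable bucket in 0..99
        return bucket < 50 ? 'A' : 'B';                     // assumed 50/50 split
    }

    public static void main(String[] args) {
        System.out.println("user-42 sees variant " + assignVariant("user-42"));
    }
}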
Bearing in mind that we can’t test everything, we nevertheless ensure that the main
features and data pathways have been tested before we ship product. To achieve this
goal, automation of our testing processes becomes essential.
Automating unit testing is attractive because we want to re-run tests at later stages
in the development process.
The term XUnit is used to describe a standard set of unit testing frameworks
that have emerged for providing a consistent approach to automated unit testing
regardless of the programming language used. So, for example, we can find SUnit
(for Smalltalk), JUnit (for Java), RUnit (for R) and so on.
The XUnit approaches comprise several common elements:
• Test runner, an executable programme for running tests,
• Test case, used to define specific test conditions,
• Test fixtures, a set of preconditions needed to run a test,
• Assertions, a function that verifies the behaviour or state of the unit under test,
• Test execution, the execution of an individual test,
• Test result formatter, produces test results in a common output format,
• Test suites, a mechanism for running collections of tests in any order.
Fig. 16.1 Class diagram for the example application: a CarTest class with a main() method and a Car class with make and model attributes and getMake(), setMake(String), getModel() and setModel(String) methods
To illustrate a basic use of JUnit, look at the small application shown in Fig. 16.1,
which is available to download from GitHub [2].
Having created the application classes and methods, we can create a test. Simply
right-click on the project in the Eclipse package explorer, hover your cursor over
the new menu item, and select a new JUnit Test Case. I usually collect my tests
into a JUnit package, although on larger projects it might make more sense to have
a dedicated Test Package for each package in the source code.
The test simply instantiates the class and then tests the values of variables using
getter methods, as shown here:
package junit;
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import car.FastCar;

public class FastCarTest {
    @Test
    public void test() {
        FastCar fastCar = new FastCar("BMW", "320M", 180);
        assertTrue(fastCar.getMake().equals("BMW"));
        assertTrue(fastCar.getModel().equals("320M"));
        assertTrue(fastCar.getSpeed() == 180);
    }
}
The test can be executed once the test case has been completed. To execute, right-
click on the project in the Eclipse package explorer, and select the Run As...
menu. You can then select a JUnit Test.
The JUnit result formatter provided by Eclipse shows the number of tests
executed and their results. A green bar denotes all the tests in this run have passed,
while a red bar indicates failed tests.
For this simple example, I have used the assertTrue method to check the values
of attributes. However, in the JUnit environment, assertions come in a variety of
flavours:
• assertEquals(boolean expected,boolean actual): to check if two
primitives/objects are equal.
• assertTrue(boolean condition): to check if a condition is true.
• assertFalse(boolean condition): to check if a condition is false.
• assertNull(Object obj): to check if an object is null.
• assertNotNull(Object obj): to check if an object is not null.
Consequently, these test methods can be used to test different variable values as
required by the application. Finally, a test suite can be created, which executes all
the unit tests, as follows:
package junit;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;
@RunWith(Suite.class)
@SuiteClasses({ CarTest.class, FamilyCarTest.class, FastCarTest.class })
public class AllTests {
}
(Figure: a test cycle with 'Create Code', 'Refine Code' and 'Refactor' steps and Fail/Pass test outcomes)
The test suite automates executing multiple tests, simplifying the process of
running every test against a specific method or class.
A user story description for acceptance testing typically comprises:
• Title: a user story title,
• Narrative: a brief feature description, typically structured as 'As a <role>, I want <feature>, so that <benefit>',
• Acceptance criteria: a set of scenarios describing the behaviours of the user story, typically structured as 'Given <context>, when <event>, then <outcome>'.
The acceptance criteria are parsed by software tools to create tests. For example,
in the Cucumber tool set, the Gherkin natural language parser is used to extract test
cases from such user story descriptions [5].
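As an illustration, a hypothetical Gherkin scenario (the feature, steps and values are invented for this sketch) might read:
Feature: Commit filtering
  Scenario: Filter the commit list by author
    Given a repository containing commits from several authors
    When the user filters the commit list by author "alice"
    Then only commits by "alice" are displayed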
16.6 Exercises
These exercises can help you develop your testing skills. Have a go at an exercise
and then check for hints, tips and advice in Sect. 16.7.
References
Abstract In this chapter, we consider the process issues and software tools required
to create the Tabby Cat case study project. This project will create an opportunity
to apply the ideas from the chapters in Part III of the book. As stated in Chaps. 6
and 12, Tabby Cat is software for displaying source code repository activity. We
want to obtain data from a public repository and display activities using various
searches and filters.
17.1 Introduction
This case study allows us to summarise and apply the most important ideas we have
covered in Part III. Here, you can learn more about agile process and software tool
issues. The three main sets of skills I want to focus on are ceremonies from Chap. 13,
version control from Chap. 15 and test automation from Chap. 16.
The Tabby Cat project has been kindly provided by Red Ocelot Ltd., our software
start-up company [2]. The Tabby Cat project source code is available on GitHub [1].
I have recommended that you create and update a learning journal when you
do the exercises in each chapter; see Exercise 13.1. Now is a good time to
reflect on your journal notes for each exercise in the Part III chapters.
• Re-read your learning journal from the chapter exercises in Part III of the
book,
• Think about what went well when you did the exercises,
• Think about what didn’t go so well,
• Make some notes, in your learning journal, about the strengths and
weaknesses of your work in these areas,
• Create some actions or set some targets for your future learning.
First, make sure you read Chap. 13 and work through Exercises 13.2 to 13.5. These
ceremonies are how we collaborate in agile projects. We try to empower team
members to fulfil our goal using their own creativity and skill. We think this is better
than relying on a project manager who has all the creativity and tells everyone what
to do.
Before we start development work, we need a prioritised list of requirements.
Each sprint starts with a sprint planning activity, described in Sect. 13.2. Essentially,
sprint planning is where we decide on the requirements to be tackled during
this sprint and which team member is going to work on each. We divide each
requirement (use case or user story) into technical tasks, and, in a self-organising
team, members step forward to pick up each task.
Daily stand-up meetings, as described in Sect. 13.3, allow everyone in the team
to keep track of progress. Remember that the daily stand-up consists of everyone
taking turns to answer three questions: ‘what have you done since the last meeting?’,
‘what are you working on now?’ and ‘are there any impediments stopping you
from making progress?'. Separate meetings are called to deal with any challenges
surfaced during the daily stand-ups.
At the end of the sprint, we have a customer demonstration. As mentioned in
Sect. 13.4, this is where we demonstrate working code created during the sprint.
The purpose of the customer demonstration is to collect feedback on our work. We
prefer to find out sooner, rather than later, if we are going off on the wrong track.
We conduct a sprint retrospective, after the customer demonstration, as discussed
in Sect. 13.4.1. During the retrospective, we can reflect on our successes during the
sprint and look for opportunities to improve. Usually, we aim to have two or three
action areas for improvement in each sprint.
Now read Chap. 14 and consider how you can apply lean thinking to your project.
Exercises 14.2 to 14.7 should provide some ideas you can apply to your work.
Perhaps most important is to focus on the flow of features through your development
process. This mindset might help you identify waste to eliminate, thereby increasing
the value of your work.
Now, you should have read Chap. 15 and worked through Exercises 15.2 to 15.7.
Version control is going to play a critical role in the Tabby Cat project. We want to
use version control to be able to manage source code changes during the project,
but also to provide a straightforward mechanism for sharing source code within the
team using a shared remote repository.
There is an overhead in learning the skills required to keep your working source
code in version control. First, working source code must be synchronised with your
local repository; then, your local repository must be kept synchronised with an archive
in a remote repository. But, if you can get into the habit of using these software tools
regularly, it will save you a lot of time in the long run.
Aim to synchronise your code locally every few minutes (by performing local
commits). It is much better to commit in small increments, rather than commit the
result of hours (or, worse, days) worth of effort. Frequent commits make it easier to
do a roll-back, if it is ever needed, and easier to track any defects introduced into the
source code.
First, make sure you read Chap. 16 and work through Exercises 16.2 and 16.3. It is
obviously good practice to unit test, as you create your source code. Depending on
the language you adopt, XUnit style tests (JUnit for Java and so on) seem the way
to go.
You will also need to perform regression testing on existing source code, when
new features are added in each increment. This is where automated testing really
comes into its own, because the automated tests are already there for the existing
features and can be re-run to check nothing has broken when new features are added.
It is likely also to be a good idea to create some integration tests. Integration
tests are used to check the interfaces between the moving parts of your system. Re-
running integration tests at the end of each increment provides increased confidence
that interfaces have not been accidentally broken. It is very easy to perform an
enhancement on an existing interface and forget to update all the existing clients
of that interface.
In Part IV, I will explore some more advanced skills that will be useful for you
to acquire as your expertise in agile software development grows.
References
1. Bass, J., Monaghan, B.: Tabby Cat GitHub Explorer. Red Ocelot Ltd, London (2022). https://
github.com/julianbass/github-explorer
2. Red Ocelot Ltd: Enhancing digital agility (2022). https://www.redocelot.com
Part IV
Advanced Skills
Part IV of the book deals with four more advanced topics: large-scale agile, cloud
deployment, technical debt, evolution and legacy, and DevOps. The skills in these
chapters become more relevant once your agile team is functioning, you have
shipped your first releases and you have a nice agile process cadence.
Chapter 18 explores large-scale agile development projects. A key feature of
large-scale agile is the need to coordinate multiple self-organising teams. The trade-
off here is that teams sacrifice some autonomy in order to work towards a common
software solution. The specialist roles, activities and ceremonies needed for big
projects are discussed in Chap. 18.
Cloud-hosted application deployment, described in Chap. 19, is very attractive if
you don’t already have access to your own servers. Cloud-hosting can scale to the
needs of a varying size customer base and can minimise capital investment costs
for start-ups and new entrants. We explore some key issues, such as scaling, multi-
tenancy and containerisation, for software-as-a-service deployment.
Finally, Chap. 20 explores legacy systems, technical debt and software evolution.
Technical debt builds up as a natural consequence of incremental development.
Periodic refactoring is desirable to reduce technical debt. Legacy systems often
exhibit extreme forms of technical debt.
Continuous integration and continuous deployment, often called DevOps, are
useful when using incremental development. DevOps ensures the seamless delivery
of new features into your production deployed product. Automation helps to ensure
consistency and quality in your deployment pipeline. Tools to help with continuous
integration and continuous deployment are described in Chap. 21.
Abstract Large-scale agile development is required where time scales are short and
the scope of work is, well, large. Large-scale development focuses on cooperating
teams. We have a dilemma; on the one hand, we want to empower teams to be
self-organising and innovative. However, on the other hand, teams must cooperate
to work together on the same product. This chapter introduces the more advanced
topics around cooperating teams. We will discuss conventional approaches, such as
the scrum-of-scrums approach, where dependencies between cooperating teams are
resolved by scrum masters meeting and thrashing out release roadmaps and resolv-
ing impediments. We will also introduce the Spotify culture of squads, chapters,
tribes and guilds. Spotify engineering culture is based on self-organising teams,
known as squads. A tribe is a collection of collaborating squads organised around
specific products. Chapters focus on skills development within a tribe, while guilds
are communities of practice for sharing knowledge about areas of specialism across
different tribes. We will explore these techniques
for managing scale, distance and governance.
18.1 Introduction
When the number of requirements to be fulfilled is large, and time scales are short,
then we need to use large-scale agile techniques. Rather than have one large team,
we prefer to have a number of smaller cooperating teams. There are specific issues
around coordinating cooperating teams. We will consider specialist artefacts and
activities within roles on agile projects.
On large-scale projects, there is also a tension between consistency of approach
between teams and the ability of teams to be creative and innovative. Different
organisations will choose their own point on this spectrum. Some will focus on
nurturing highly creative and innovative autonomous teams, while others will look
to compare teams and aim for consistency. Such organisations might ask questions
like:
• How can we learn from the most productive team(s) and spread best practice to
others?
• Why do some teams maintain consistent velocities, while others seem to fail
sprints from time to time?
• How do we achieve consistent software quality standards?
There are risks with both extremes. Highly autonomous teams might be inno-
vative but end up having to waste time doing rework to fit in with other teams,
because of uncoordinated decision-making or wrong assumptions. On the other
hand, enforcing too much consistency can stifle team innovation and undermine
motivation. Clearly, a balance needs to be struck. The secondary roles, in Sects. 18.4
and 18.5, help cooperating teams to achieve this balance between team autonomy
and team consistency.
18.2 Distance
Geographical distance is, well obviously, the physical distance between team
members. Engaging other team members in the same city is considered easier than
if team members are in different continents. While offshore development refers to
engaging developers from remote distances, near-shore development refers to team
members from within the same continent. Large geographical distance means longer
flights to facilitate face-to-face meetings and more time on video conferencing
platforms.
Temporal distance refers to the issue of time zone differences between team
members. Significant time zone differences impact the ability to hold online
meetings during the working day. Someone is going to have to meet other team
members outside normal working hours, either during the late night or early
morning. Consequently, some organisations group people into teams with others
that live directly north or south, rather than east or west. From this perspective,
forming groups with team members from Brazil and North America, on the one
hand, or from Africa and Europe, on the other hand, is more attractive, to minimise
time zone differences. There are lots of out-of-hour work meetings for teams in
South Asia working with clients in the USA.
Cultural distance, perhaps the more controversial concept here, refers to differences
between community, family and social attitudes and values in different societies.
Some observers identify differences in how hierarchical or deferential some organ-
isations are, compared to others. Also, team members may be from different
cultures where attitudes to issues such as gender equality, religion or sexuality
vary considerably. Cultural awareness becomes a more sensitive issue in teams
comprising members from distant cultures. Some large organisations offer team
members training in this area.
Amelioration steps for each risk might include approaches to identify, analyse, plan
responses, monitor and control the risk.
On many large-scale projects, the risk register will be reviewed by senior
executives during each iteration. The potential impact of one or other teams failing
to deliver planned work will be assessed.
Many organisations will have team members dedicated to spreading best practice
from team to team. These individuals aim to have an extensive repertoire of tech-
niques that can help teams work effectively. There are several ways of conducting
sprint planning or sprint retrospectives (these ceremonies are described in more
detail in Chap. 13), for example. An agile coach can help teams try different
approaches and see what works best for them. You might be thinking that the agile
coach sounds just like a scrum master. Well, it’s true. Scrum masters are supposed
(Figure: the scrum of scrums comprises the scrum masters and product owners from each team)
to help teams try new ideas and improve their effective use of agile methods. The
agile coach, however, supports several (sometimes many) teams, whereas the scrum
master usually only supports one team.
Product owners create and prioritise user stories and decide when code is ready
to be deployed and shipped. In large-scale software development programmes,
the activities of identifying business needs and creating requirements are time-
consuming and difficult. Product owners have been found to create several specific
new activities, to cope with agile scaling [2].
Someone needs to develop the vision as well as create and negotiate a business case
to senior executives in a large organisation. This usually requires involvement at
the most senior board level. Senior executives, such as the chief executive, chief
information or chief technology officer, may ‘own’ the project, but they are unlikely
to have time to attend to all the project details. Hence, a product sponsor, who
‘owns’ the project, creates a product owner team which then deals with all the details
associated with running the project [1].
In really big projects, with many teams working together for a long period of
time, risk becomes an important factor to monitor. What happens if one team fails
to deliver? Will the work of all the other teams be disrupted? Large companies
are forever reorganising themselves. What happens if a team you depend on gets
redeployed to another (more urgent or important) project? The risk assessor keeps a
list of risks, their likelihood and impact severity. This list is reviewed, every sprint or
two, and kept up to date along with proposed mitigating actions. See the discussion
in Sects. 11.3.4 and 18.3.1.
18.5.3 Governor
Someone needs to make sure all the project teams comply with corporate quality
standards and technical policies. Self-organising teams have to relinquish some
autonomy when they cooperate with other teams to create a product. Usually some
central architecture board or design authority determines policies and approaches,
which teams then comply with.
There are, of course, variations between organisations and business sectors. Some-
times, in large projects, product owners are called product managers (particularly
in consumer-facing businesses). There also seems to be a trend towards technical
product owners supporting one team, within a specific business domain. The
technical product owners will be used to focusing on a set of technologies or a
technology stack, within their business domain. They support a more senior product
owner who is more broadly product focused, supporting several teams, but who is
technology agnostic.
The product owner prioritises the development of new features and services. This
can be informed by maintaining good customer relationships in business-to-business
domains or through market trend analysis. Horizon scanning market trends and
competitor performance helps identify new business opportunities that can be
fulfilled by new features and software services.
The music streaming service Spotify developed their own software innovation
culture comprising teams organised into squads, tribes, chapters and guilds [3].
Spotify was launched in 2008 and had continuous growth for 10 years, at times being
one of the most downloaded mobile device apps. The Spotify engineering model
uses a matrix management structure, as shown in Fig. 18.2, comprising squads,
chapters and tribes. In addition, guilds are communities of practice that disseminate
innovations across the organisation.
18.6.1 Squads
The squad is the basic unit of software development in Spotify. A squad is similar to
a scrum team, consisting of five to seven people. But Spotify squads have more
autonomy over choosing a software development method, such as Scrum, XP,
Kanban, lean, Scrumban and so on.
Squads have a long-term focus on an aspect of the product, such as a mobile
device front-end, payment solutions or back-end services. This long-term commit-
ment to a mission helps the squad gain deep expertise on their product area. In
Fig. 18.2 The Spotify matrix structure: squads within a tribe each have a product owner deciding 'what' to build, while chapters, led by a chapter lead, cut across squads and focus on 'how' the work is done
Spotify, they think of squads as mini-start-ups with a particular focus. Five key
features of squads are:
• Product owner, each squad has a product owner,
• Agile coach, each squad has an agile coach to help resolve impediments and
support continuous improvement,
• Self-organising team, influencing planning and work assignment,
• Squads own their own process and continuous improvement,
• Squads have a clear mission, with backlog stories focused on that mission.
The product owner assigned to each squad prioritises work considering business
value and technical issues. Squads are strongly influenced by the principles of the
lean start-up [4]. The minimum viable product concept means releasing early and
often, while extensive use of metrics and A/B testing provides validated learning to
find out what works and what doesn't. The approach of squads is summarised by the
informal slogan 'Think it, build it, ship it, tweak it'.
Squad members can spend 10% of their time on hack days, where people can
try out new ideas and experiment with new technologies. Some squads choose to
do one hack day every other week; other squads save up the hack days into a hack
week. Hack days are a fun way to try out new tools and techniques.
18.6.2 Chapters
Chapters comprise people within a tribe, with a similar competency or skill set and
focus on personal growth and development. The chapter lead is line manager for
chapter members, responsible for training, performance reviews, salary setting and
so on. Chapters meet regularly to disseminate best practice within a skill area, such
as testing, web development or back-end services.
18.6.3 Tribes
Tribes are formed where multiple co-located squads work together on the same
product. Tribes are typically limited to around 100 people; this limit is intended to
discourage the formation of excessive bureaucracy, layers of management or
organisational politics.
In Spotify, they think of tribes as an incubator for their squads. The squads within
a tribe are assigned adjacent office space, often sharing a physical lounge space
to encourage collaboration. Tribes have a leader who is responsible for creating a
desirable habitat for the squads.
18.6.4 Guilds
Guilds, on the other hand, are communities of practice that operate across the
whole organisation [8]. As mentioned, guilds are communities of practice for
disseminating innovations. Guilds are self-managing and formed around a particular
set of interests. Guilds have a coordinator and organise activities for members.
Anyone can join any guild; as shown in Fig. 18.3, guilds span squads, tribes and
chapters. Successful communities of practice have a good topic, a passionate leader,
a proper agenda, decision-making authority, openness, tool support, a suitable
rhythm and cross-site participation.
Fig. 18.3 A guild spans squads, chapters and tribes across the organisation
customer problem. Alignment is the extent to which organisation strategy and goals
are proudly understood and undertaken by having focused squad interactions [5].
Software structure or software architecture can play an important role in fostering
or impeding collaboration between teams [6]. In a simple model, each squad
focuses on one layer of an architecture. Interfaces, which are published and version
controlled, define the boundaries between architectural layers, but also between
teams. As projects grow larger, this simple approach breaks down, because there
is too much work for one squad to handle an entire layer of the architecture.
Abdallah Salameh worked with a Scandinavian FinTech company to tailor their
Spotify model to foster architectural alignment [5]. Dr Salameh helped to form an
Architectural Ownership Team comprising chapter leads and led by an enterprise
architect. This change represents a decentralisation of architectural decision-making
from enterprise architects to chapter leads while also freeing up time for enterprise
architects to focus on over-arching enterprise aspects of architectural thinking.
We found that this approach strengthened the autonomy of squads by aligning
architectural decision-making and helped to share architectural knowledge among
squads [5].
Several frameworks have been developed to support large-scale agile. There are two frameworks which we need to mention:
large-scale scrum and the scaled agile framework.
Large-scale scrum (LeSS) offers two different frameworks, one for up to eight
scrum teams and another for huge development programmes [9]. LeSS is a
formalisation and perhaps scaled-up version of scrum of scrums. They refer to
one team scrum and use a single product owner and single product backlog with
multiple teams. The teams contribute to a single potentially shippable product, with
a common definition of done across the teams, at the end of a single sprint.
Scrum meetings or daily stand-ups are conducted separately within each team.
Sprint reviews, in contrast, are conducted using a bazaar or science fair concept
in a large room with multiple areas. In each area, team members show and discuss
working code they have developed.
Emphasis in LeSS is placed on teams working together, at the sprint planning
phase, to determine which teams will pick up which features. During the single
sprint, team members are encouraged to talk, use open spaces, travel to other teams,
communicate using code and develop communities to share ideas and interact across
teams.
18.7.2 SAFe
Scaled agile framework (SAFe) is a framework for scaling agile across the enterprise
[7]. SAFe operates at a team, programme, large solution and portfolio level and
inherits principles from Scrum, XP and lean approaches as well as DevOps [7].
SAFe also adds layers for handling large-scale projects as well as techniques for
managing a collection of products.
SAFe has a large set of training programmes, for roles in different levels of the
organisation. Practitioners can become certified, to provide recognition for their
skills and knowledge. Accredited consultants are available to help organisations
adopt the approach.
In some ways, SAFe reminds me of the limitations faced by the Rational Unified
Process (RUP). RUP, like SAFe, has many good ideas. But, the framework has
become elaborate and burdensome to implement. SAFe attempts to cover every
eventuality by providing advice and practices at every level of the organisation.
And anyway, the whole attempt to implement or impose an elaborate framework
seems to undermine the whole philosophy of agile. Agile is supposed to be about
empowering teams to find their own solutions. Implementing an elaborate set of
rules or practices seems to go against that ethos, in my opinion.
References
1. Bass, J.M., Haxby, A.: Tailoring product ownership in large-scale agile projects: managing scale,
distance, and governance. IEEE Softw. 36(2), 58–63 (2019). https://doi.org/10.1109/MS.2018.
2885524
2. Bass, J.M., Beecham, S., Razzak, M.A., Canna, C.N., Noll, J.: An empirical study of the
product owner role in scrum. In: Proceedings of the 40th International Conference on Software
Engineering: Companion Proceedings, pp. 123–124. ICSE ’18, ACM, New York (2018). https://
doi.org/10.1145/3183440.3195066
3. Kniberg, H., Ivarsson, A.: Scaling Agile @ Spotify with Tribes, Squads, Chapters & Guilds.
Crisp AB (2012). https://blog.crisp.se/wp-content/uploads/2012/11/SpotifyScaling.pdf
4. Reis, E.: The Lean Startup: How Constant Innovation Creates Radically Successful Businesses.
Portfolio Penguin, London (2011)
5. Salameh, A., Bass, J.: Influential factors of aligning spotify squads in mission-critical and
offshore projects – a longitudinal embedded case study. In: Kuhrmann, M., Schneider, K., Pfahl,
D., Amasaki, S., Ciolkowski, M., Hebig, R., Tell, P., Klünder, J., Küpper, S. (eds.) Product-
Focused Software Process Improvement. Lecture Notes in Computer Science, vol. 11271,
pp. 199–215. Springer International Publishing, New York City (2018). https://doi.org/10.1007/
978-3-030-03673-7_15
6. Salameh, A., Bass, J.M.: An architecture governance approach for agile development by
tailoring the spotify model. AI Soc (2021). https://doi.org/10.1007/s00146-021-01240-x
7. Scaled Agile Inc: SAFe 5.0 framework (2021). https://www.scaledagileframework.com/
8. Smite, D., Moe, N.B., Levinta, G., Floryan, M.: Spotify guilds: how to succeed with knowledge
sharing in large-scale agile organizations. IEEE Softw. 36(2), 51–57 (2019). https://doi.org/10.
1109/MS.2018.2886178
9. The LeSS Company: Overview (2021). https://less.works/
Chapter 19
Cloud Deployment
Abstract Many software applications and services are now deployed on remote
servers and accessed using internet technologies. We want to learn more about
such routes to application deployment. We discuss some architectural issues that
developers of cloud-hosted applications must face, including scalability, multi-
tenancy, automated customer on-boarding and automated source code deployment.
19.1 Introduction
Cloud services are provided by remote servers accessed using internet technology.
This is a rental model of hardware, storage and platforms. The cost of entry, the
start-up cost, is significantly lower than in-house server provision. There is no need
to create an air-conditioned server room with emergency backup power supply.
There is no need to purchase racks of computers and create dedicated high-speed
internet connections. Instead, an online dashboard is used to select and instantiate
the compute or storage resources that you need.
Usually, cloud services are provided on a shared-resource pay-as-you-go basis.
Multiple virtual machines, belonging to different clients, are executed on a single
hardware processor. This pay-as-you-go model can be particularly attractive, if your
compute or storage demands fluctuate significantly, the idea being you only pay for
the services you use, when you use them.
However, over time, the cost of renting cloud-hosted servers will likely exceed
the cost of in-house provision. For example, at the time of writing in August 2021,
a small virtual machine from DigitalOcean costs US $5 for 1 month [1]. You can
purchase a hobbyist single-board computer, such as a Raspberry Pi Model 4b, for
around US $35 [12]. I’ve used both to experiment with installing and executing a
small Jenkins server, for instance. For the first couple of months, renting a virtual
machine from DigitalOcean would be cheaper. But if you plan to experiment over
a longer period, say 6 months, then buying hardware might be a better option.
Needless to say, if you need to build a commercial server room, with air conditioning
and backup power, the DigitalOcean option is cheaper for much longer.
But bear in mind, there could be other benefits to using DigitalOcean virtual
machines, of course. The cloud-hosted virtual machines have the potential to be
used in a production environment, which is not wise with a hobbyist setup. Also,
many cloud vendors provide software tools to support production deployment. In
addition, the skills acquired when you instantiate a service with cloud providers can
be in demand from potential employers.
We are interested in cloud services from two standpoints. On the one hand,
we are likely to be consumers of cloud-hosted services. On the other hand, we
might be interested to deploy the software we create to cloud platforms. From both
perspectives, it is useful to find out a bit more about these technologies.
19.2.1 Infrastructure-as-a-Service
Many cloud providers offer raw virtual machines on a pay-as-you-go basis. Virtual
machines can be started (and stopped) at short notice, often more or less instan-
taneously. Providers often offer virtual machines with a range of specifications in
terms of processor power, memory and network traffic. This might range from
simple single-core processors with modest memory allocations to much more
powerful multi-core processors with significant allocations of working storage.
Persistent storage is usually available separately from the virtual machines used
for computation. Connecting persistent storage to the processor is usually a (fairly)
straightforward configuration step. Providers offer data archiving capabilities, for
an extra cost.
19.2.2 Platform-as-a-Service
might be higher. But initially, you can avoid the cost of integrating your own
software systems to establish the platform capability. Instead, you can use the cloud
platform whenever you need it.
For example, it is sometimes attractive to be able to deploy, configure and
security harden a powerful production database server within an hour. This is a
much quicker response than is possible if it is necessary to purchase, install and
configure server hardware as well as operating system and database software.
19.2.3 Software-as-a-Service
Cloud-hosted software services, such as Dropbox [3], Google Drive [6], Office 365
[11] and Google Docs [7], have become very popular. Such services include social
media, business services and entertainment platforms. Service consumers need not
be concerned with application installation, deployment or maintenance since this is
all performed (and charged for) by the service provider.
Serverless computing is a misnomer really, because the code still runs on a server, of
course. However, the server (deployment, management and maintenance) is pretty
much invisible to the developer. Essentially, all operational aspects of the service
are outsourced to the serverless compute provider. The aim with this approach is to
allow developers to focus on creating their application and not have to worry about
deployment issues at all.
There are several issues we must consider if we design applications for cloud-hosted
deployment. We learned about the concept of architectural styles in Chap. 8 and
object-oriented design patterns in Chap. 9. We can now consider applying these
pattern concepts to cloud-hosted software services [4].
19.3.1 Scalability
of users declines, resources can be released, reducing hosting rental costs. This
is known as scalability or sometimes elasticity. Some cloud providers offer a
range of services to support scalability. This might include measures to provide
metrics for things like processor utilisation or number of incoming user requests.
These metrics can be used to create thresholds that, in turn, trigger creation of
new virtual machine instances. Increasing or decreasing the number of virtual
machines supporting deployed services makes assumptions, for example, about state
management. Certainly, scalability is much simpler for stateless services.
19.3.2 Multi-Tenancy
Cloud-hosted applications and services need to support multiple users sharing the
same functionality. The data for each user needs to be kept separate, while the
services provided are generally similar to each other. Several architectural styles
are available providing different levels of tenant isolation. An authorisation system
is usually implemented which identifies each user and provides access to their data
(and no one else’s).
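As a minimal sketch of one common style of tenant isolation, in which rows for different tenants share a table and every query is scoped by a tenant identifier (the table, column and class names are hypothetical):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class InvoiceRepository {
    // Every query is scoped by the authenticated tenant's id, so one tenant
    // can never read another tenant's rows
    public List<String> findInvoiceIds(Connection conn, String tenantId) throws SQLException {
        String sql = "SELECT invoice_id FROM invoices WHERE tenant_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, tenantId);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> ids = new ArrayList<>();
                while (rs.next()) {
                    ids.add(rs.getString("invoice_id"));
                }
                return ids;
            }
        }
    }
}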
Cloud applications usually try to avoid manual operations when adding new users to
the system. We aim to eliminate manual processes from the on-boarding activities
that inhibit application scalability and increase costs. Hence, our emphasis is on
patterns for automated customer on-boarding.
We try to reduce any sources of friction during the on-boarding process, initially
collecting minimum information and making it easy for potential customers to get
started. However, our automated processes will probably need to capture means of
payment when we add new customers to the system. Our goal is to maximise the
number of conversions from visitors to customers.
at a higher price. Finally, there is often a more expensive enterprise-level gold tier
for corporate clients.
For larger business information systems, we often use an ‘n’-tier architectural style
in which the presentation layer is separated from the business logic layer which
in turn is distinct from the persistence layer. The layered architectural style was
introduced in Sect. 8.3.4. The ‘n’-tier architectural style is useful for reducing
coupling between the layers of large-scale systems. For example, in principle, the
storage technology can be replaced in the persistence layer, without affecting the
rest of the system, assuming a good persistence layer interface has been defined.
In the cloud deployment context, we can take the ‘n’-tier architectural style to
another level of sophistication by deploying the different layers to separate virtual
machines. Back-end persistence and business logic layers can be implemented
behind the demilitarised zone, to improve data security. Hence, executing layers
on different servers can offer improved resilience and allow us to more easily adjust
compute resources to achieve performance targets.
19.5 Containerisation
References
11. Microsoft: Microsoft 365 | Secure, integrated Office 365 apps + Teams (2022). https://www.
microsoft.com/en-gb/microsoft-365
12. Raspberry Pi Foundation: Buy a Raspberry Pi 4 Model B. https://www.raspberrypi.org/products/
raspberry-pi-4-model-b/
Chapter 20
Technical Debt, Software Evolution
and Legacy
Abstract Most of this book has been concerned with developing new systems or
features. When students learn software development, it is usually on new projects
starting afresh, without any previously existing source code. In contrast, most
commercial software development effort is directed towards sustaining live systems
that have existing user communities. Live systems already have paying customers.
We need those customers to pay us for the development effort. In this chapter, we
concentrate on the needs of live systems and how teams can support their evolution.
20.1 Introduction
In this chapter, I discuss several concepts and techniques useful for managing
live systems. Legacy systems are an extreme form of live software services that
have usually not benefited from sufficient investment in evolution or maintenance.
I’ll come on to discussing legacy systems shortly.
In software projects, the phrase technical debt is a metaphor for monetary debt
[2]. Small amounts of technical debt are not a bad thing and are actually a natural
consequence of a healthy incremental software development process. Pressure to
deliver new features quickly means deficiencies are introduced into our system
design. Choosing a quick and easy solution now creates an implied cost of rework
later, hence the monetary debt metaphor.
Over time, design rationalisation or tidying up becomes desirable to repay this
debt. As development work continues, and more new features are added, the need
for refactoring and even re-engineering becomes a high priority. Periodically, in a
healthy project, the team will create opportunities to focus on significantly reducing
technical debt.
Balancing investment in repaying technical debt with the need for new features and
functionality is important. Typically, projects do not have sufficient resources to
deal with all technical debt as it arises. Indeed, a project with no technical debt is
likely not creating enhancements quickly enough to satisfy customers. On the other
hand, excessive technical debt impedes the prompt addition of new features and
functionality.
[Figure: development iterations over time, showing the balance shifting from incremental development of new features towards feature enhancements and defect removal]
As time goes on, teams increase the frequency of iterations focused on defects and feature enhancements.
20.2.2 Refactoring
Refactoring is the process of making changes to software that do not affect the
external behaviour [1]. Refactoring is intended to simplify the design, improving flexibility and maintainability without changing the behaviour of the software, and to repay technical debt previously accrued.
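As a tiny, invented example of refactoring, the function below is reworked by extracting a clearly named helper; the external behaviour is unchanged.

# Before: the discount rule is buried inside one longer function.
def invoice_total(items, is_member):
    total = 0.0
    for price, quantity in items:
        total += price * quantity
    if is_member:
        total = total * 0.9  # members receive a 10% discount
    return total


# After: the same external behaviour, with the discount rule extracted and named.
def apply_member_discount(total, is_member):
    return total * 0.9 if is_member else total


def invoice_total_refactored(items, is_member):
    subtotal = sum(price * quantity for price, quantity in items)
    return apply_member_discount(subtotal, is_member)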
20.3.1 Wrappering
Wrappering is the process of surrounding an existing system with new layers that can be more readily enhanced in the future. For example, a new user interface can be added to a legacy application, as shown in Fig. 20.2a, while, using a web service wrapper, a thin-client (web-based) front-end and a mobile device application can be added to an existing installed application, as shown in Fig. 20.2b. This allows us to expose the application functionality and data without re-developing the core legacy system. This can reduce short-term costs and get the enhanced solution to clients sooner.
Fig. 20.3 (a) New application wrapper to a legacy database; (b) legacy application wrapper to a new database
In other circumstances, with a problematic persistence layer, a new back-end storage infrastructure can be used to improve resilience or performance [4]. There are basically two approaches to wrappering data storage, as shown in Fig. 20.3. We can either provide a wrapper to a legacy database and build a new application on top of the wrapper, as shown in Fig. 20.3a. Or, alternatively, we can build a new database, migrate the legacy data and then create a wrapper from the legacy application to the modern database, as shown in Fig. 20.3b.
Wrappering can be cost-effective, compared to an entire re-development effort.
The existing system is treated as a black-box, with minimum intervention. But new
features are added using a more modern technology stack. A significant challenge
is that legacy systems are often monolithic and hence difficult to decompose into
logical subsystems. For example, the legacy persistence layer, shown in Fig. 20.3a,
may not exist in a monolithic legacy system.
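A minimal sketch of the wrappering idea in Python follows; the LegacyBillingSystem and BillingService names are invented for illustration. The legacy system is treated as a black box and hidden behind a modern interface that new front-ends can call.

class LegacyBillingSystem:
    """Stands in for an existing system we do not want to modify."""

    def RUN_BILLING(self, cust_ref):
        return f"BILL:{cust_ref}:OK"  # legacy-style response string


class BillingService:
    """Wrapper exposing the legacy functionality through a cleaner interface."""

    def __init__(self, legacy):
        self._legacy = legacy

    def bill_customer(self, customer_id: str) -> dict:
        raw = self._legacy.RUN_BILLING(customer_id)
        status = raw.split(":")[-1]
        return {"customer_id": customer_id, "status": status}


# A new web or mobile front-end calls BillingService and never touches the
# legacy system directly.
service = BillingService(LegacyBillingSystem())
print(service.bill_customer("C-42"))  # {'customer_id': 'C-42', 'status': 'OK'}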
20.3.2 Re-engineering
Eventually, after many releases, a live software system will need a significant
upgrade. The high-level architectural style may need to be reworked. A refresh
of the entire implementation technology stack may be desirable. There may come
a point where the system has, to a significant extent, to be re-designed and re-implemented. This re-engineering effort is intended to deliver existing, enhanced and new services, but with a much improved internal structure and implementation.
20.4 Legacy Systems
Legacy systems provide important services, but are built using out-of-date technologies. It is important to emphasise that legacy systems fulfil significant needs and that we rely on the services they provide. Their drawback is that they are implemented using old technologies.
There are some technologies in computing that get old surprisingly quickly. I’ve
spoken to practitioners working on a large-scale thin-client system, a database-
driven web application for a big multinational enterprise. The web application
is implemented using a framework and has just been deployed. The team is
considering the next new application. A new and better web framework is now
available. Consequently, the team choose not to employ the same web development
framework used in their current system for their next application.
A less pejorative way of looking at legacy systems is to think of them as heritage. In the UK's built environment, we have many heritage sites: castles and palaces that are carefully preserved and maintained by large and well-funded institutions. Tourists from home and abroad (when we are not in the grip of a virulent pandemic) visit to enjoy the spectacle of such historical relics. Thinking of software as heritage helps us understand the need to nurture and evolve such systems, rather than allowing them to decay into obsolescence. Investment is needed to support our heritage systems; failure to invest will result in higher costs later.
We see legacy systems as suffering from an extreme form of technical debt [3]. For historical reasons, the legacy system has not benefited from the (perhaps substantial) investment in evolution and maintenance that it requires.
Most commercial software development effort goes into sustaining live systems that
have active users, rather than into creating new products. This chapter has focused
on the evolution of live systems.
Technical debt is a useful way of thinking about investment decisions into
software evolution. Technical debt builds up during incremental development when
speed of delivery takes priority over elegant solutions. As new features are added,
internal complexity builds up.
Periodic refactoring is used to reassert simple design. Refactoring is used to
improve maintainability and flexibility without changing behaviour. Refactoring
reorganises internal structure to facilitate future enhancement and is used to repay
technical debt.
As systems age, the need for more far-reaching re-design arises. The implemen-
tation technology stack can become stale and needs to be modernised. Demand
for substantial new functionality may impose the need for significant restructuring.
Wrappering and re-engineering techniques can be used to address these needs.
References
1. Fowler, M., Beck, K., Brant, J., Opdyke, W., Roberts, D.: Refactoring: Improving the Design of
Existing Code, 1st edn. Addison Wesley, Reading (1999)
2. Kruchten, P., Nord, R., Ozkaya, I.: Managing Technical Debt: Reducing Friction in Software
Development, 1st edn. Addison-Wesley, Reading (2019)
3. Monaghan, B.D., Bass, J.M.: Redefining legacy: a technical debt perspective. In: Morisio, M.,
Torchiano, M., Jedlitschka, A. (eds.) Product-Focused Software Process Improvement, pp. 254–
269. Lecture Notes in Computer Science. Springer, Berlin (2020). https://doi.org/10.1007/978-
3-030-64148-1_16
4. Tripathy, P., Naik, K.: Software Evolution and Maintenance: A Practitioner’s Approach, 1st edn.
Wiley, London (2015)
Chapter 21
DevOps
Abstract We like to automate testing because it makes maintaining high standards of quality faster and more repeatable. In a similar way, it makes sense to automate
the build process. The idea is that we want to build an executable version of our code,
run all our tests and, assuming all goes well, deposit the resulting release onto a
server for execution. We want to get into the habit of making frequent improvements
to our code, and doing all these steps by hand means we might forget or cut corners.
So automating the build process means we remember to do all the steps needed,
every time we release (which might be every 30 min or so, on some projects).
21.1 Introduction
One important idea emerging from the continuous integration, continuous delivery
and DevOps communities is the benefit of automating build and deployment
processes as much as possible [4]. Automation offers repeatable processes that can
be evolved and refined over time. With automated processes, there is less temptation to take shortcuts, such as skipping certain testing and quality checks, when teams are working to tight deadlines.
However, build, test and deployment pipelines are an expression of an organisa-
tional commitment to high-quality efficient delivery processes. Significant organisa-
tional and cultural changes are needed to make these automated approaches a reality.
DevOps is a compound of development (Dev) and operations (Ops) representing
a set of practices, software tools and organisational culture to integrate product
development and IT teams.
[Figure: separate development, test (quality assurance) and IT operations teams, contrasted with cross-functional teams (Team 1, Team 2, Team 3) that each combine development, quality assurance and IT operations]
Hand-offs from development to test and from test to operations can easily get out of control. In some organisations, these hand-offs can stretch to days or even weeks.
[Figure: a simple pipeline of Test, Review and Deploy stages; in the more sophisticated pipeline, the developer creates a new branch, creates feature code and commits in the new branch, and the Git commit triggers a test and deploy pipeline with Unit Test, Acceptance Test, Review and Deploy stages]
Fig. 21.4 More sophisticated test pipeline triggered from a version control commit
The Jenkins File distinguishes between stages and steps. A stage is a group of
tasks that perform a conceptually distinct function. The stages in the Jenkins File
shown are build, test and deploy, whereas a step is a single task telling Jenkins what
to do.
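The Jenkins File itself is not reproduced here, but a minimal declarative pipeline with build, test and deploy stages might look like the sketch below; the shell script names are placeholders I have invented, not scripts from any particular project.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './scripts/build.sh'      // compile and package the application
            }
        }
        stage('Test') {
            steps {
                sh './scripts/run_tests.sh'  // run the automated test suite
            }
        }
        stage('Deploy') {
            steps {
                sh './scripts/deploy.sh'     // copy the release onto the target server
            }
        }
    }
}

Each stage groups related steps, and each sh step is a single task telling Jenkins what to do.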
I talked about testing and test automation in more detail in Chap. 16, of course.
But the issue here is building automated testing into the continuous integration and continuous delivery pipeline.
The first stage is unit testing of new features under development. Initially, developers
test their own code. Once the new features are merged into the main trunk,
acceptance testing on the new features will need to be performed.
The other aspect of testing is to ensure that the new features have not introduced any problems with existing features. Hence the need to test legacy features after new features have been integrated; this is regression testing, as discussed in Sect. 16.4.1.
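As a small, self-contained illustration of the kind of automated tests such a pipeline can run on every commit, here is a Python (pytest-style) sketch; the Cart class and both tests are invented for this example.

class Cart:
    """A deliberately simple, invented class under test."""

    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_empty_cart_total_is_zero():
    # Regression test: guards behaviour that customers already rely on.
    assert Cart().total() == 0.0


def test_add_item_increases_total():
    # Unit test for a feature under development.
    cart = Cart()
    cart.add_item("book", price=10.0)
    assert cart.total() == 10.0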
Continuous integration is the practice of frequently merging new code into the main
trunk of the project source code repository. Continuous integration encourages a
different view on the branches discussed in Sect. 15.5.1. A conventional view of
new feature development envisages long-lived feature branches. Feature branches
are helpful because they keep the new feature code separate from the main trunk,
while the feature is under development.
However, from a lean perspective, source code sitting in a feature branch is a
form of waste; see Sect. 14.3. The feature branch code does not add value to the
project until it is integrated into the main trunk. Delaying the integration of new
code into the main trunk increases the likelihood of merge conflicts. Taking this
view, feature branches are best avoided. Instead, features are developed on the main
trunk.
In continuous delivery, the aim is to keep the code on the main trunk in a releasable state at all times. Following a review process, the latest production version can be released to customers.
In continuous deployment, we take this one step further. The idea is to automate
the entire process and release the latest executable code into a production environ-
ment after each commit. This high level of automation is intended to accelerate the
delivery of new features to customers and to attract feedback more quickly as a
consequence.
[Figure: a deployment pipeline extended with security testing, comprising Acceptance Test, Penetration Test, Vulnerability Test, Review and Deploy stages]
Appendix A
Research Methods
The material described in this book has benefited from collaborations in commercial software development projects and original research in the software development sector. I benefited from an opportunity to work with Add Energy Ltd. (advising on their AimHi, AssetC and Asset Voice products), Invisible Systems Ltd. and Red Ocelot Ltd. and learned much from these activities.
Several of the chapters in this book have benefited from empirical research inves-
tigating the activities of practitioners engaged in software development projects.
More specifically, Chap. 3 draws on [2–4, 11], while Chap. 7 benefits from [13]. In
turn, Chap. 10 includes findings from [5]. Chapter 21 draws on evidence from [8, 9].
Chapter 18 benefits from [14, 15], and Chap. 20 includes findings from [10].
Interview respondents included middle managers, agile coaches and development team members, such as software developers, testers and scrum masters.
A semi-structured interview guide was used during practitioner interviews. The
interviews included open-ended questions to elicit topics from respondents not
considered by the interviewer. Interviews, which typically lasted around 50 min,
were recorded and transcribed.
Data analysis was informed by grounded theory [7]. Interviews were analysed
using a Glaserian grounded theory approach [6]. Open coding, memoing, constant
comparison and saturation techniques were used to extract topics, concepts and
themes from the interview transcript data [1].
Coding, in this context, does not mean writing software; rather, coding is the research process of identifying the topics described in the source data. For this
research, a sentence-by-sentence approach was adopted. The large volume of data
made it attractive to use a qualitative analysis tool [12] to record and manage the
coding process.
Memos are short essays recording the scope and content of topics and categories
identified from the data. Memos include interview quotes and contrast the differing
experiences and perspectives of respondents. Some memos build upon contempo-
raneous field notes taken during observations of practices or interviews. The memo
writing is used to clarify, refine and sharpen categories. The memos are revised and
enhanced as new transcript data is added.
Using the constant comparison technique, the researcher iterates back and forth
between data collection and analysis. We use constant comparison to compare
events or respondent perspectives that apply to each category we have identified.
A.3.4 Saturation
In the early stages of the research, interviews with each new company or project
team cause reappraisal of the topics and categories which have previously been
identified. New events, incidents, artefacts, development practices and stakeholders
are discovered at each new research site.
As the study progresses and the number of interview respondents increases,
the richness and detail of the grounded theory are enhanced as a consequence.
Gradually, each new research site and practitioner interview results in fewer new
discoveries and has less impact on the categorisation. The evidence from new
interviews is increasingly consistent with the topics and categories previously
identified.
Saturation has occurred when new research sites or interviews don’t cause
significant refinement to the topics and categories already identified.
References
1. Adolph, S., Hall, W., Kruchten, P.: Using grounded theory to study the experience of software
development. Empirical Softw. Eng. 16(4), 487–513 (2011). https://doi.org/10.1007/s10664-
010-9152-6
2. Bass, J.M., Haxby, A.: Tailoring product ownership in large-scale agile projects: managing
scale, distance, and governance. IEEE Softw. 36(2), 58–63 (2019). https://doi.org/10.1109/
MS.2018.2885524
3. Bass, J.: Scrum master activities: process tailoring in large enterprise projects. In: 2014 IEEE
9th International Conference on Global Software Engineering (ICGSE), pp. 6–15 (2014).
https://doi.org/10.1109/ICGSE.2014.24
4. Bass, J.M.: How product owner teams scale agile methods to large distributed enterprises.
Empirical Softw. Eng. 20(6), 1525–1557 (2015). https://doi.org/10.1007/s10664-014-9322-z
5. Bass, J.M.: Artefacts and agile method tailoring in large-scale offshore software development
programmes. Inform. Softw. Technol. 75, 1–16 (2016). https://doi.org/10.1016/j.infsof.2016.
03.001
6. Glaser, B.G.: Doing Grounded Theory: Issues and Discussions. Sociology Press, Mill Valley
(1998)
7. Glaser, B., Strauss, A.L.: Discovery of Grounded Theory: Strategies for Qualitative Research.
Aldine Transaction (1999)
8. Macarthy, R.W., Bass, J.M.: An empirical taxonomy of DevOps in practice. In: Euromicro 46th Conference on Software Engineering and Advanced Applications (SEAA), pp. 221–228. IEEE, Piscataway (2020)
9. Macarthy, R.W., Bass, J.M.: The role of skillset in the determination of DevOps implementation strategy. In: Joint 15th International Conference on Software and System Processes (ICSSP) and 16th ACM/IEEE International Conference on Global Software Engineering (ICGSE), pp. 50–60. IEEE, Piscataway (2021)
10. Monaghan, B.D., Bass, J.M.: Redefining legacy: A technical debt perspective. In: Morisio,
M., Torchiano, M., Jedlitschka, A. (eds.) Product-Focused Software Process Improvement, pp.
254–269. Lecture Notes in Computer Science. Springer, Berlin (2020). https://doi.org/10.1007/
978-3-030-64148-1_16
11. Noll, J., Razzak, M.A., Bass, J.M., Beecham, S.: A study of the scrum master’s role. In:
Product-Focused Software Process Improvement, pp. 307–323. Lecture Notes in Computer
Science. Springer, Cham (2017)
12. QSR International: NVivo 11 for Windows Help—Welcome (2019). http://help-nv11.
qsrinternational.com/desktop/welcome/welcome.htm
13. Rahy, S., Bass, J.M.: Managing non-functional requirements in agile software development.
IET Softw. (2021). https://doi.org/10.1049/sfw2.12037
14. Salameh, A., Bass, J.: Influential factors of aligning spotify squads in mission-critical and
offshore projects—a longitudinal embedded case study. In: Kuhrmann, M., Schneider, K.,
Pfahl, D., Amasaki, S., Ciolkowski, M., Hebig, R., Tell, P., Klünder, J., Küpper, S. (eds.)
Product-Focused Software Process Improvement. Lecture Notes in Computer Science, vol.
11271, pp. 199–215. Springer, Berlin (2018). https://doi.org/10.1007/978-3-030-03673-7_15
15. Salameh, A., Bass, J.M.: An architecture governance approach for agile development
by tailoring the spotify model. AI & Society (2021). https://doi.org/10.1007/s00146-021-
01240-x
Appendix B
Further Reading
Having finished reading this book and worked through all the exercises, you may want to read further. Here are my top 20(ish) agile software engineering books that I think everyone should read.
If you are interested in exploring some of these more specialist topics in further detail, then I recommend:
• DevOps
– Accelerate is interesting and based on analysis of a large practitioner survey
[7].
• Security
– Bell et al. place security in an agile development context [2].
• Requirements
– A practical approach to requirements is in [3] and also for organising
requirements [15].
• Large-scale agile
– Team Topologies focuses on organising for flow [22].
• Legacy
– Michael Feathers takes a practical approach to dealing with legacy code [6]; for technical debt, see [12].
For those interested in pursuing research, for example, by doing a PhD in Software
Engineering, I recommend:
• Research (in general)
– Phillips and Pugh take a practical approach to advice for PhD students [16],
– Mark Reed’s book on Research Impact is good [18],
– For case study research, try [26],
– While for mixed-method research, [5] is good,
– I’ve used a grounded theory approach; have a look at [10] or [4],
– Zinsser’s book on non-fiction writing will be useful for many [27].
• Software engineering research (specifically)
– For case study research, check out Runeson et al. [19],
– For experimental methods, try [25].
References
1. Beck, K., Andres, C.: Extreme Programming Explained, 2nd edn. Addison Wesley, Boston
(2004)
2. Bell, L., Brunton-Spall, M., Smith, R., Bird, J.: Agile Application Security: Enabling Security
in a Continuous Delivery Pipeline. O’Reilly (2017)
3. Cohn, M.: User Stories Applied: For Agile Software Development. Addison Wesley, Reading
(2004)
4. Corbin, J.M., Strauss, A.C.: Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory, 3rd edn. Sage Publications (2008)
5. Creswell, J.W., Creswell, J.D.: Research Design: Qualitative, Quantitative, and Mixed Methods
Approaches, 5th edn. SAGE Publications (2018)
6. Feathers, M.: Working Effectively with Legacy Code, 1st edn. Prentice Hall, Englewood (2004)
7. Forsgren, N., Humble, J.: Accelerate: The Science of Lean Software and Devops: Building and
Scaling High Performing Technology Organizations. Trade Select, illustrated edn. (2018)
8. Fowler, M., Beck, K., Brant, J., Opdyke, W., Roberts, D.: Refactoring: Improving the Design
of Existing Code, 1st edn. Addison Wesley, Reading (1999)
9. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable
Object-Oriented Software. Addison-Wesley, Harlow (2005)
10. Glaser, B., Strauss, A.L.: Discovery of Grounded Theory: Strategies for Qualitative Research.
Aldine Transaction (1999)
11. Gothelf, J., Seiden, J.: Lean UX: Designing Great Products with Agile Teams, 2nd revised edn.
O’Reilly (2016)
12. Kruchten, P., Nord, R., Ozkaya, I.: Managing Technical Debt: Reducing Friction in Software
Development, 1st edn. Addison-Wesley, Reading (2019)
13. Loeliger, J., McCullough, M.: Version Control with Git: Powerful tools and techniques for
collaborative software development, 2nd edn. O’Reilly Media (2012)
14. Martin, R.: Clean Code: A Handbook of Agile Software Craftsmanship, 1st edn. Prentice Hall,
Upper Saddle River (2008)
15. Patton, J.: User Story Mapping: Discover the Whole Story, Build the Right Product, 1st edn.
O’Reilly Media, Sebastopol (2014)
16. Phillips, E., Pugh, D.S.: How To Get A Phd: A Handbook for Students and Their Supervisors,
6th edn. Open University Press (2015)
17. Pressman, R.S., Maxim, B.R.: Software Engineering: A Practitioner’s Approach, 8th edn.
McGraw-Hill Education, New York (2015)
18. Reed, M.S.: The Research Impact Handbook. Fast Track Impact (2016)
19. Runeson, P., Höst, M., Rainer, A., Regnell, B.: Case Study Research in Software Engineering:
Guidelines and Examples. Wiley-Blackwell (2012)
20. Schwaber, K.: Agile Project Management with Scrum, 1st edn. Microsoft Press, Redmond
(2004)
21. Sharp, H., Preece, J., Rogers, Y.: Interaction Design: Beyond Human-Computer Interaction,
5th edn. Wiley, London (2019)
22. Skelton, M., Pais, M., Malan, R.: Team Topologies: Organizing Business and Technology
Teams for Fast Flow, illustrated edn. It Revolution Press (2019)
23. Sommerville, I.: Software Engineering, 10th edn. Pearson Education, Harlow (2015)
24. McConnell, S.: Code Complete: A Practical Handbook of Software Construction, 2nd edn. Microsoft Press, Redmond (2004)
25. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation
in Software Engineering. Springer, Berlin (2012)
26. Yin, R.K.: Case Study Research: Design and Methods, 4th edn. Sage Publications (2009)
27. Zinsser, W.: On Writing Well: The Classic Guide to Writing Nonfiction, 25th anniversary edn.
HarperCollins Publishers, New York (2006)
Index
A Artefacts
Abstraction, 114 feature, 153
Activism, 73 iteration, 151
Activities planning, 149
large scale, 274, 275 release, 155
product owner:market trends, 277
technical architect, 276
technical product owner, 277 B
Agile coach, 274 Behaviour-driven Development, 259
Agile principles, 15 Blogs, 58
collective code ownership, 16 Branching, 237
sustainable pace, 15 Branching exercise, 247
Agile Build automation, 299
security, 172 Build pipelines, 297
security artefacts, 172 Burn down chart, 151
security ceremonies, 173 Burn down chart exercise, 158, 161
security roles, 172 Business bootstrapping, 221
Algorithms and inequality, 68
Architecture
abstraction, 114 C
agile, 112 Catastrophe, confessing, 50
clean, 118 Class diagrams, 131
client-server, 115 derivation, 131
design principles, 120 detailed, 133
design styles, 115 domain models, 132
implementation, 124 exercises, 142, 143
layered, 118 high-level, 133
pipe and filter, 117 nouns, 131
planned refactoring, 113 verbs, 132
refactoring, 113 Clean architecture, 118
reference, 120 Client-server, 115
repository, 116 Cloud patterns, 285
rework, 113 customer on-boarding, 286
standards, 274 deployment, 287
Git branches (create and merge) exercise, 242 Learning journal exercise, 23, 26, 30, 42, 44,
Git repository, local, exercise, 241, 245 47, 59, 61, 77–79, 99, 104, 124, 126,
Git repository, remote, exercise, 242 141, 145, 146, 157, 159, 173, 174,
Governor, 276 206–208, 222, 224, 241, 244, 245,
Groom, 38 260–262
Grounded theory, 306 Learning timeline exercise, 24
Group behaviours exercise, 25, 30 Legacy systems, 295
Groups and teams, 13
M
H Managing
HackCamp, Software, 6 outwards, 51
Handling requests exercise, 223, 225 upwards, 49
Minimum viable product, 221
Model-driven engineering, 141
I Modelling, system, 130
Incremental requirements, 91
Infrastructure-as-a-service, 284
N
Integration Testing Exercise, 261, 262
Non-fiction writing, 63
Intellectual property exercise, 78
Non-fiction writing exercise, 60
Intermediary, 40
Non-functional requirements, 90
Issues, 155
Iteration backlog, 151
Iteration planning, 195
O
estimation, 197
OWASP Top Ten, 170
prioritisation, 196
OWASP top ten review exercise, 173, 174
task assignment, 199
Iteration planning exercise, 42
P
People, 212
K Personal learning timeline exercise, 29
Kanban board exercise, 157, 159 Personas, 98
Kanban boards, 150 Pipe and filter, 117
Keep It Simple, Stupid (KISS), 120 Pipes and filters exercise, 124, 126
Knowledge, 213 Pivot, 222
Knowledge gathering exercise, 223 Planned refactoring, 113
Platform-as-a-service, 284
Platforms and fake markets, 68
L Presentations, 57
Large-scale artefacts, 273 content, 57
Large-scale scrum (LeSS), 281 delivery, 58
Layered architecture, 118 rehearsal, 58
Layered architecture exercise, 125, 126 review exercise, 59
Lean types, 57
definition of done, 215 Prioritisation, 196
knowledge, 213 Prioritiser, 38
quality, 214 Product backlog, 150
speed, 218 Product owner, 37
value, 214 Product owner activities
value stream mapping, 215 communicator, 39
waste, 216 groom, 38
Lean principles, 211 intermediary, 40
Lean start-up, 221 large-scale, 275
316 Index
Sustainable pace, 15 U
Unions, 74
Unit Testing Exercise, 261, 262
T Unreasonable demands, 50
Tabby cat Use case descriptions, 95
architecture, 179 Use case diagram exercise, 99, 100
design, 182 Use case diagram, exercise 1, 99
development, 186 Use case diagram exercise solutions, 104, 105
implementation, 191 Use case exercise, 101, 102, 146
requirements, 177 Use case exercise solutions, 105, 106, 146
security, 191 Use cases, 94
Tabby Cat Project, x diagrams, 94
Tabby cat project, 5, 83, 177, 265 User stories, 96
Task assignment, 199 User story estimates, 151
Team activities User story estimation exercise, 158
champion, 19 User story exercise, 103
co-ordinator, 18 User story exercise solutions, 108
mentor, 18 User story mapping, 97
promoter, 19 User story mapping exercise, 103
terminator, 19
translator, 19
Teams V
building performance, 13 Value, 214
forming, 16 Value, non-monetary, 215
self-organising, 11 Value stream mapping, 215
Technical architect, 41, 276 Value stream mapping exercise, 222, 224
Technical debt, 292 Version control
agile, 292 branching, 237
Technical product owner, 277 commit, 231
Technology stack, selection, 138 exercises, 241
Test integration, 301 file staging, 231
Test plan exercise, 157, 160 local repository, 228
Test remote repository, 234
A/B, 256 source code sharing, 236
acceptance, 253 undo changes, 233
behaviour-driven development, 259 Video production exercise, 60
integration, 252 Videos, 59
levels, 252 Virtual teams
performance, 255 launch, 21
plan, 150 performance, 22
planning, 252 preparation, 21
regression, 254 principles, 20
security, 172, 255
system, 253
techniques, 253 W
unit test automation, 256 Waste, 216
unit testing, 252 Whistle-blowing, 74
user experience, 254 Wikis, 58
Test-driven development, 204, 259 Work in progress limits, 219
Testing Work item variability, 220
regression, 156
unit, 154
Training, 77 Y
Traveller, 39 You Aren’t Gonna Need It (YAGNI), 121