Software Engineering Book
Chapter 1
Software Engineering Introduction
The establishment and use of sound engineering principles in order to obtain economical
software that is reliable and works efficiently on real machines.
Software engineering is a systematic and disciplined approach towards the development, operation and maintenance of software.
Software engineering is an engineering branch associated with the development of software products using well-defined scientific principles, methods and procedures.
Characteristics of software
Software should achieve good quality in design and meet all the specifications of the customer.
Software does not wear out, i.e., it does not degrade physically the way hardware does.
Software systems may be inherently simple or complex.
Software must be efficient, i.e., able to use system resources in an effective and efficient manner.
Software must have integrity, i.e., it must prevent unauthorized access to the software or data.
Software engineering - Layered technology
1. Quality focus
The characteristics of good quality software are:
Integrity, i.e., providing security so that unauthorized users cannot access information or data.
2. Process
The processes that deal with the technical and management of software development
are collectively called software processes.
There are two distinct categories of software process- i) Technical processes and ii)
Management processes.
The technical processes specify all engineering activities, and the management processes specify how to plan, monitor and control the technical processes so that cost, timing, quality and productivity goals are met.
3. Methods
Methods provide the answers to all the 'how-to' questions that arise during the process.
4. Tools
Software engineering tools provide automated support for software development.
The tools are integrated, i.e., the information created by one tool can be used by another tool.
For example, Microsoft Publisher can be used as a web design tool.
CMM
The Capability Maturity Model (CMM) is for improving software processes during software
development. The Software Engineering Institute's (SEI) CMM provides a well-known
benchmark for judging the quality level of a company's development processes. CMM and
its levels are a good benchmark for studying the processes and practices
followed by a company. It allows a company to judge its maturity in handling complex
software development tasks and achieving its software quality goals.
To understand CMM, one must understand the word 'mature' in the context of software
development and delivering a product to a client. Being mature means a company is able
to execute its software processes in a controlled, predictable way.
The model describes a five-level path of increasing maturity which reflects a company's
capability of executing software processes for development, including the production version.
The results of such an evaluation indicate the likely performance of the company if it
is awarded software development work. CMM is similar to ISO 9001, which specifies an
effective quality system for software development and maintenance.
The main difference between CMM and ISO 9001 lies in their respective aims. ISO 9001
specifies a minimal acceptable quality level for software processes, while CMM establishes a
framework for continuous software process improvement. CMM is used to assess a company
against a scale of five process maturity levels based on certain Key Process Areas (KPAs).
The five levels are Initial, Managed, Defined, Quantitatively managed and Optimizing (IMDQO).
Software Process: A set of activities, methods, practices and transformations that people
use to develop and maintain software and its associated products (SEI-CMM).
Actual Process: The actual process is what you do, with all omissions, mistakes and
oversights, in developing software.
Capability:
– The range of expected results that can be obtained by following software processes.
– A means of predicting the most likely outcome to be expected from the next software project.
Maturity:
– The extent to which all software processes are defined, measured, monitored, controlled and implemented.
Evaluation of the maturity level of a company is done by the SEI using a software
capability evaluation questionnaire. It includes analysis of processes, interviews, scrutiny of
documents and manuals, and study of project management and company processes.
Umbrella Activities
While developing software, it is important to write the correct code. It is equally important to
take measurements to track the development of the application and make the necessary
changes for any improvement that might be needed. These framework activities in software
engineering fall under the category of umbrella activities.
Risk Management
Risk management, as the name suggests, involves analyzing the potential risks of the
application in terms of quality or outcome. These risks may create a drastic impact on the
outcome and the functionality of the application.
Completing a Full-Stack Developer certification can help reduce risk (my
recommendation). Such a course helps software engineers become better at their work, thus
reducing the risk associated with software development. Risk refers to the financial losses that
are likely to occur if a project deviates from its planned execution, resulting in poor quality or
complete failure.
Testing the quality of the software is necessary. It helps to determine how the application will
perform when used or launched in the market. Once quality testing is complete, the client is
assured that everything in the application went as planned. This is known as the software
quality framework in software engineering.
Evaluation of errors is done at every step of the generic process framework for software
engineering. This helps to eliminate a major blunder at the end of the process.
The software engineers check their work for technical and quality issues and find technical
bugs or glitches. Step-wise error analyses help in the smooth functioning of software
development.
The software configuration process helps in managing the application when changes are
necessary.
Measurement
The software engineers collect relevant data that is analyzed to measure the quality and
development of the application.
Chapter 2
SDLC Process Activities
The System Development Life Cycle, "SDLC" for short, is a multistep, iterative process,
structured in a methodical way. This process is used to model or provide a framework for
technical and non-technical process activities to deliver a quality system which meets or
exceeds a business’ expectations or manage decision-making progression.
Phase 1: SRS
It is a skilled process. It needs experienced software engineers to gather
information about customer expectations without any mistakes. The SRS is a document which
contains all specifics about what the proposed software system is going to deliver to the
customer. It also specifies what cannot be done in Business Process Automation (BPA) by
the software.
Information gathering from the client is an important part of this phase (its core function). Usually
it starts with a meeting between two teams, one from the client side and the other from the software
vendor. In this first meeting a broad understanding of the proposed software system is developed
on both sides. The vendor team learns about the stakeholders, the end users and the basic level of
knowledge of the client team members. The client team learns about the project manager and other
team members and their skill sets. Subsequent meetings are between smaller and more specific
groups. The SRS team from the vendor side then starts interviews, questionnaires and study of
original company documents, including minutes of meetings.
Phase 2: Planning
In the planning phase, project goals are determined and a high-level plan for the project
requirements is established. Planning is the most fundamental and critical organizational phase.
The three primary activities involved in the planning phase are as follows:
Feasibility assessment
Phase 3: Analysis
In the analysis phase, end user business requirements are analyzed by a team headed by a
project manager. Project goals are converted into the defined system functions that the
organization intends to develop. The three primary activities involved in the analysis phase
are as follows:
Phase 4: Design
In the design phase, we describe the desired features and operations of the system. This phase
includes business rules, pseudo-code, screen interfaces/layouts, and other necessary
documentation. The two primary activities involved in the design phase are as follows:
Designing of IT infrastructure
To avoid any crash, malfunction, or lack of performance, the IT infrastructure should have
solid foundations. In this phase, the specialist recommends the kinds of clients and servers
needed on a cost and time basis, and technical feasibility of the system. If the client does not
have such infrastructure, the project leader may suggest a cloud solution. Also, in this phase,
the organization creates interfaces for user interaction. Other than that, data models/structure
and entity relationship diagrams (ERDs) are also created in the same phase.
Phase 5: Development (Implementation)
In the development phase, all the documents from the previous phase are transformed into the
actual system. The two primary activities involved in the development phase are as follows:
Establishing IT infrastructure
In the design phase, only the blueprint of the IT infrastructure is provided, whereas in this
phase the organization actually purchases and installs the respective software and hardware in
order to support the IT infrastructure. If a cloud solution is desired, appropriate cloud
infrastructure may be provisioned. Following this, the creation of the database and actual
code can begin to complete the system on the basis of given specifications.
Phase 6: Testing
In the testing phase, all the pieces of code are integrated and deployed in the testing
environment. Testers then follow Software Testing Life Cycle activities to check the system
for errors, bugs, and defects, and to verify whether the system's functionalities work as
expected. The two primary activities involved in the testing phase are as follows:
Writing test cases and creating use cases
Testing is a critical part of software development life cycle. To provide quality software, an
organization must perform testing in a systematic way. Once test cases are written, the tester
executes them and compares the expected result with an actual result in order to verify the
system and ensure it operates correctly. Writing test cases and executing them manually is an
intensive task for any organization, one that can contribute to the success of the business if
executed properly.
Phase 7: Deployment
During this next phase, the system is deployed to a real-life (the client’s) environment where
the actual user begins to operate the system. All data and components are then placed in the
production environment. This phase is also referred to as ‘delivery.’
Phase 8: Maintenance
In the maintenance phase, any necessary enhancements, corrections and changes will be
made to make sure the system continues to work and stay updated to meet the business goals
of the client. Customer satisfaction is of paramount importance. It is necessary to maintain
and upgrade the system from time to time so it can adapt to future needs. The three primary
activities involved in the maintenance phase are as follows:
System maintenance
When selecting the best SDLC approach for your organization or company, it's important to
remember that one solution may not fit every scenario or business. Certain projects may run
best with a Waterfall approach, while others would benefit from the flexibility in the agile or
iterative models.
Before deploying an SDLC approach for your teams and staff, consider contacting a
knowledgeable IT consultant at Innovative Architects for advice. Our experts have seen how
the different models function best in different industries and corporate environments. We are
adept at finding a good fit for any situation.
Exercise
Waterfall model is the pioneer of the SDLC processes. In fact, it was the first model which
was widely used in the software industry. It is divided into phases and output of one phase
becomes the input of the next phase. It is mandatory for a phase to be completed before the
next phase starts. In short, there is no overlapping of phases in Waterfall model.
In Waterfall, development of one phase starts only when the previous phase is complete.
Because of this nature, each phase of the Waterfall model is quite precise and well defined. Since
the phases fall from a higher level to a lower level, like a waterfall, it is named the Waterfall
model.
1. Requirements analysis and specification: The aim of the requirement analysis and
specification phase is to understand the exact requirements of the customer and document
them properly. This phase consists of two different activities.
Requirement gathering and analysis: First, all the requirements regarding the
software are gathered from the customer and then the gathered requirements are
analyzed. The goal of the analysis part is to remove incompleteness (an incomplete
requirement is one in which some parts of the actual requirements have been omitted)
and inconsistencies (inconsistent requirement is one in which some part of the
requirement contradicts with some other part).
2. Design: The aim of this phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming language,
like Python, Java etc.
3. Implementation (Coding) and Unit testing: In the coding phase, software design is
translated into source code using suitable programming language. Thus each designed
module is coded. The aim of the unit testing phase is to check whether each module is
working properly.
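For instance, here is a minimal sketch of unit testing one designed module with Python's built-in unittest framework; the fare-calculation module and its numbers are hypothetical, not from the text.

```python
import unittest

# Hypothetical module under test: a fare calculator coded as one design module.
def calculate_fare(distance_km, rate_per_km=2.5):
    """Return the fare for a journey; negative distances are invalid."""
    if distance_km < 0:
        raise ValueError("distance_km must be non-negative")
    return distance_km * rate_per_km

class TestCalculateFare(unittest.TestCase):
    def test_typical_distance(self):
        self.assertEqual(calculate_fare(10), 25.0)

    def test_zero_distance(self):
        self.assertEqual(calculate_fare(0), 0.0)

    def test_negative_distance_rejected(self):
        with self.assertRaises(ValueError):
            calculate_fare(-5)

if __name__ == "__main__":
    unittest.main()
```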
4. Integration and System testing: Integration of the different modules is undertaken soon
after they have been coded and unit tested. Integration of various modules is carried out
incrementally over a number of steps. During each integration step, previously planned
modules are added to the partially integrated system and the resultant system is tested.
Finally, after all the modules have been successfully integrated and tested, the full working
system is obtained and system testing is carried out on this.
System testing consists of three different kinds of testing activities as described below:
Acceptance testing: After the software has been delivered, the customers
perform the acceptance testing to determine whether to accept the delivered
software or to reject it.
5. Deployment: After the system is integrated and tested, it is deployed at the customer
location and on their infrastructure. This phase is also known as roll-out. The first one or two
weeks after deployment are very important, because the end users begin to use the system in a
real-world environment and give their reactions.
6. Maintenance: Maintenance is the most important phase of a software life cycle. The
effort spent on maintenance is 60% of the total effort spent to develop a full software. There
are basically three types of maintenance:
Corrective Maintenance: This type of maintenance is carried out to correct
errors that were not discovered during the product development phase.
The Waterfall model works best when:
The application is small.
The tools and technology used are stable and not dynamic.
For smaller projects, Waterfall model works well and yields appropriate results.
Since the phases are rigid and precise and are completed one at a time, the model is easy to
maintain. Therefore, this model can be used as the basis for other iterative models.
For bigger and more complex projects, this model is not suitable, as the risk factor is higher.
Since the testing is done at a later stage, it does not allow identifying the challenges
and risks in the earlier phase so the risk mitigation strategy is difficult to prepare.
Conclusion:
In the Waterfall model, it is very important to take sign-off on the deliverables of each
phase. Although most projects today are moving to Agile and Prototype models, the
Waterfall model still holds good for smaller projects. If requirements are straightforward and
testable, the Waterfall model will yield the best results.
Questions:
Evolutionary models are iterative (Repeat phases for refinement and improvement)
type models.
The Prototyping Model is one of the most popularly used Software Development Life Cycle
Models (SDLC models). This model is used when the customers do not know the exact
project requirements beforehand. In this model, a prototype of the end product, providing the
core functionality of the final product, is first developed and tested. It is then refined as per
customer feedback and any changes in requirements.
2. Evolutionary Prototyping
In this method, the prototype developed initially is incrementally refined on the basis of
customer feedback till it finally gets accepted. In comparison to Rapid Throwaway
Prototyping, it offers a better approach which saves time as well as effort. This is because
developing a prototype from scratch for every iteration of the process can sometimes be very
frustrating for the developers.
The prototype model does not require detailed knowledge of inputs, outputs, processes,
operating-system adaptability and full machine interaction in advance.
The development process is the best platform for the user to understand the system.
It identifies the missing functionality easily. It also identifies the confusing or difficult
functions.
Client involvement is greater, and their feedback is not always taken into account by the developer.
The prototype may be thrown away if the users are confused by it.
In the spiral model, if a risk is found during risk analysis, alternate solutions are
suggested and implemented.
Figure 4.2.1.3 Boehm’s risk based spiral model for software development
In the spiral model, the software is produced early in the life cycle process.
There are two states: 1. the 'awaiting changes' state and 2. the 'under development' state. This
is shown in Figure 4.2.1.4.
Figure 4.2.1.4 Various states of concurrent model. Each block of the project undergoes
same method.
The communication activity is completed in the first iteration and exits into the 'awaiting
changes' state.
The modeling activity completes its initial communication and then goes to the ‘under
development’ state.
If the customer specifies a change in the requirement, then the modeling activity
moves from the ‘under development’ state into the ‘awaiting change’ state.
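As a rough illustration of these state transitions, here is a toy Python sketch; the class and state names are hypothetical, since the concurrent model prescribes no particular implementation.

```python
from enum import Enum

# The two states described above for an activity in the concurrent model.
class ActivityState(Enum):
    UNDER_DEVELOPMENT = "under development"
    AWAITING_CHANGES = "awaiting changes"

class ModelingActivity:
    def __init__(self):
        self.state = ActivityState.UNDER_DEVELOPMENT

    def customer_requests_change(self):
        # A requirement change moves the activity between states.
        if self.state is ActivityState.UNDER_DEVELOPMENT:
            self.state = ActivityState.AWAITING_CHANGES

activity = ModelingActivity()
activity.customer_requests_change()
print(activity.state)  # ActivityState.AWAITING_CHANGES
```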
It needs better communication between the team members. This may not be achieved
all the time.
Questions
Component based software engineering uses almost similar kind of methods, tools, and
principles as used in software engineering. However, there are certain differences. The prime
difference is that the CBSE distinguishes the process of “component development” from that
of “system development with components” by focusing on questions related to components.
The main idea behind this is the re-usability. That means, systems are built from pre-existing
components. However, there are certain consequences of using such an approach. Some of
the consequences are mentioned below.
1. The development processes of component-based systems are different from the development
processes of the components.
2. A new, separate process is introduced for finding and evaluating the components.
This model goes through the SDLC phases in a slightly modified manner compared to the normal
SDLC phases. These are described below.
It includes analyzing the solution to meet the requirements. The available components are
checked to see whether they can fulfil the requirements.
Similar to the above phase, this depends entirely upon the availability of components. A
particular component model should be able to integrate with the potential components (those
which may be used).
The system is built by integrating the components. The concept of "glue code" is
used to specify the connections.
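To make the idea of glue code concrete, here is a minimal Python sketch with entirely hypothetical component names: two pre-existing components with mismatched interfaces are connected by a thin adapter, without modifying either component.

```python
class PaymentComponent:
    """Pre-existing third-party component: expects amounts in cents."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0

class OrderComponent:
    """Pre-existing in-house component: works with amounts in rupees."""
    def total(self) -> float:
        return 499.99

class PaymentGlue:
    """Glue code: converts units and maps one interface onto the other."""
    def __init__(self, payment: PaymentComponent):
        self._payment = payment

    def pay_order(self, order: OrderComponent) -> bool:
        cents = round(order.total() * 100)  # adapt rupees -> cents
        return self._payment.charge(cents)

if __name__ == "__main__":
    glue = PaymentGlue(PaymentComponent())
    print(glue.pay_order(OrderComponent()))  # True
```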
System Integration
The application components along with the standard infrastructure components of the
component framework are integrated. This is often called component deployment.
Standard techniques should be used. For example, locating an error is a specific problem in the
component-based approach. Here, components are of the "black box" type and may come from
different vendors. A component may show an error due to the malfunctioning of another component
(side effects).
Operation support and maintenance
It is similar to the integration process. A system deploys a new modified component. In most
of the cases, an existing component is modified. However, a new version of the same
component can also be integrated.
The execution of the components should be supported by a run-time environment. The run-
time environment should be standardized. This includes both general and domain-specific
run-time services. General services include object creation, life cycle management, object-
persistence support, etc.
Many development organizations are not adapted to the basic principles of component-based
software engineering (CBSE). In such cases, a component-based approach cannot be used for the
development processes. The approach relies on the reusability of existing components. This reduces
the implementation efforts significantly. However, it increases the efforts for system
verification. This has to be adjusted according to the development process. By studying the
case study of various industries, it is concluded that achieving a complete separation of the
development process is very difficult. Moreover, it puts a lot of load on the architectural
issues and system and component verification.
Questions
a. Rigorously Tested
b. Re-Usable
c. Documented
d. Combines Components
Incremental Model
In the incremental model, the whole requirement is divided into various builds. Multiple
development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles
are divided up into smaller, more easily managed modules. Incremental model is a type of
software development model like V-model, Agile model etc.
In this model, each module passes through the requirements, design, implementation
and testing phases. A working version of software is produced during the first module, so
you have working software early on during the software life cycle. Each subsequent release
of the module adds function to the previous release. The process continues till the complete
system is achieved.
Figure 4.2.1.6 Shows life cycle of different builds which may be done simultaneously
Generates working software quickly and early during the software life cycle.
This model is more flexible – less costly to change scope and requirements.
Easier to manage risk because risky pieces are identified and handled during their own
iteration.
Needs a clear and complete definition of the whole system before it can be broken
down and built incrementally.
This model can be used when the requirements of the complete system are clearly
defined and understood before any development work starts.
Major requirements must be defined; however, some details can evolve and change
with time.
Such models are used where requirements are clear and can be implemented phase-wise.
From the figure it is clear that the requirements are divided into Build 1,
Build 2, …, Build N, and the various builds are delivered accordingly.
Mostly, such a model is used in web applications and by product-based companies.
The initial activity starts with the communication between customer and developer.
Planning depends upon the initial requirements and then the requirements are divided into
groups.
1) Business Modeling
2) Data modeling
3) Process modeling
4) Application generation
5) Testing and turnover
1) Business Modeling
Business modeling consists of the flow of information between various functions in the
project.
For example, what type of information is produced by every function and which are the
functions to handle that information.
It is necessary to perform complete business analysis to get the essential business
information.
2) Data modeling
The information in the business modeling phase is refined into the set of objects and it is
essential for the business. The objects can be any form, for example JSON,XML etc.
The attributes of each object are identified and define the relationship between objects.
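As a small illustration, the sketch below shows a business object, its attributes and a one-to-many relationship rendered as JSON; the Customer/Order objects are illustrative assumptions, not part of the RAD method itself.

```python
import json

# A hypothetical business object with attributes and a relationship
# to another object, as might be produced in the data modeling phase.
customer = {
    "id": 101,
    "name": "A. Kumar",
    "orders": [  # relationship: one customer has many orders
        {"order_id": 5001, "amount": 1250.00},
        {"order_id": 5002, "amount": 320.50},
    ],
}

# Serialize the object model to JSON for exchange between phases/tools.
print(json.dumps(customer, indent=2))
```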
3) Process modeling
The data objects defined in the data modeling phase are transformed to achieve the information
flow needed to implement the business model.
The process description is created for adding, modifying, deleting or retrieving a data
object.
4) Application generation
The prototypes are independently tested after each iteration so that the overall testing time
is reduced.
The data flow and the interfaces between all the components are fully tested. Hence, most
of the programming components are already tested.
This model is flexible if any changes are required and can be extended as well.
Reviews are taken from the clients at the starting of the development; hence, there are
lesser chances to miss the requirements.
This model is not a good choice for long term and large projects.
Questions
1.What are the different phases in RAD Model? What is RAD model?
Answer :
5. Testing and Turnover: Many of the programming components have already been tested,
since RAD emphasizes reuse. This reduces overall testing time. But new components must be
tested and all interfaces must be fully exercised.
Chapter 5
Agile Methods
Pair Programming
The figure shows the general model of agile methodology, in which iteration is important for
the release of different versions.
Agile teams, committed to frequent, regular, high-quality production, find themselves striving
to find ways to keep short-term and long-term productivity as high as possible. Proponents of
pair programming ("pairing") claim that it boosts long-term productivity by substantially
improving the quality of the code. But it is fair to say that for a number of reasons, pairing is
by far the most controversial and least universally-embraced of the agile programmer
practices.
Pairing Mechanics
Pairing involves having two programmers working at a single workstation. One programmer
"drives," operating the keyboard, while the other "navigates," watching, learning, asking,
talking, and making suggestions. In theory, the driver focuses on the code at hand: the syntax,
semantics, and algorithm. The navigator focuses less on that, and more on a level of
abstraction higher: the test they are trying to get to pass, the technical task to be delivered
next, the time elapsed since all the tests were run, the time elapsed since the last repository
commit, and the quality of the overall design. The theory is that pairing results in better
designs, fewer bugs, and much better spread of knowledge across a development team, and
therefore more functionality per unit time, measured over the long term.
Spreading Knowledge
Certainly as a mentoring mechanism, pairing is hard to beat. If pairs switch off regularly (as
they should), pairing spreads several kinds of knowledge throughout the team with great
efficiency: codebase knowledge, design and architectural knowledge, feature and
problem domain knowledge, language knowledge, development platform knowledge,
framework and tool knowledge, refactoring knowledge, and testing knowledge. There
is not much debate that pairing spreads these kinds of knowledge better than traditional code
reviews and less formal methods. So what productivity penalty, if any, do you pay for
spreading knowledge so well?
Every time we change code without refactoring it, rot worsens and spreads. Code rot
frustrates us, costs us time, and unduly shortens the lifespan of useful systems. In an agile
context, it can mean the difference between meeting or not meeting an iteration deadline.
Refactoring code ruthlessly prevents rot, keeping the code easy to maintain and extend. This
extensibility is the reason to refactor and the measure of its success. But note that it is only
"safe" to refactor the code this extensively if we have extensive unit test suites of the kind we
get if we work Test-First. Without being able to run those tests after each little step in a
refactoring, we run the risk of introducing bugs. If you are doing true Test-Driven
Development (TDD), in which the design evolves continuously, then you have no choice
about regular refactoring, since that's how you evolve the design.
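A tiny illustration of this idea, using a hypothetical discount function: the duplicated logic is refactored away, and a unit test run after the change confirms behavior is unchanged.

```python
# Before: duplicated discount logic in both branches (code rot waiting to happen).
def price_with_discount_old(price, is_member):
    if is_member:
        return price - price * 0.10
    else:
        return price - price * 0.0

# After: the duplication is factored out; behavior is unchanged.
def price_with_discount(price, is_member):
    rate = 0.10 if is_member else 0.0
    return price - price * rate

# The unit test that makes the refactoring safe to perform.
def test_discount_behavior_unchanged():
    for price in (0.0, 10.0, 99.99):
        for member in (True, False):
            assert price_with_discount(price, member) == \
                price_with_discount_old(price, member)

test_discount_behavior_unchanged()
print("refactoring preserved behavior")
```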
Research results and anecdotal reports seem to show that short-term productivity might
decrease modestly (about 15%), but because the code produced is so much better, long-term
productivity goes up. And certainly it depends on how you measure productivity, and over
what term. In an agile context, productivity is often measured in running, tested features
actually delivered per iteration and per release. If a team measures productivity in lines of
code per week, they may indeed find that pairing causes this to drop (and if that means fewer
lines of code per running, tested feature, that's a good thing!).
Proponents of pairing claim that if you measure productivity across a long enough term to
include staff being hired and leaving, pairing starts to show even more value. In many
mainstream projects, expertise tends to accumulate in "islands of knowledge." Individual
programmers tend to know lots of important things that the other programmers do not know
as well. If any of these islands leaves the team, the project may be delayed badly or worse.
Part of the theory of pairing is that by spreading many kinds of knowledge so widely within a
team, management reduces their exposure to this constant threat of staff turnover. In Extreme
Programming, they speak of the Truck Number: the number of team members that would
need to be hit by a truck to kill the project. Extreme Programming projects strive to keep the
Truck Number as close as possible to the total team size. If someone leaves, there are usually
several others to take his or her place. It is not that there is no specialization, but certainly
everyone knows more about all of what is going on. If you measure productivity in terms of
features delivered over several releases by such a team, it should be higher than if pairing
does not occur.
Pairing Strategies
In by-the-book Extreme Programming, all production code is written by pairs. Many non-XP
agile teams do not use pairing at all. But there is lots of middle ground between no pairing
and everyone pairing all the time. Try using pairing when mentoring new hires, for extremely
high-risk tasks, at the start of a new project when the design is new, when adopting a new
technology, or on a rotating monthly or weekly basis. Programmers who prefer to pair might
be allowed to, while those who do not are allowed not to. The decision to use code reviews
instead of any pairing at all is popular, but we don't know of any reason not to at least
experiment with pairing. There is no reasonable evidence that it hurts a team or a project, and
there is increasing evidence that it is a helpful best practice.
Scrum Team
An agile team in a Scrum environment often still includes people with traditional software
engineering titles such as programmer, designer, tester, or architect.
But on a Scrum team, everyone on the project works together to complete the set of work
they have collectively committed to complete within a sprint, regardless of their official title
or preferred job tasks.
Because of this, Scrum teams develop a deep form of camaraderie and a feeling that “we're
all in this together.”
When becoming a Scrum team member, those who in the past fulfilled specific traditional
roles tend to retain some of the aspects of their prior role but also add new traits and skills as
well. New roles in a Scrum team are the ScrumMaster or product owner.
A typical Scrum team is three to nine people. Rather than scaling by having a large team,
Scrum projects scale through having teams of teams. Scrum has been used on projects with
over 1,000 people. A natural consideration should, of course, be whether you can get by with
fewer people.
Although it's not the only thing necessary to scale Scrum, one well-known technique is the
use of a “Scrum of Scrums” meeting. With this approach, each Scrum team proceeds as
normal, but each team identifies one person who attends the Scrum of Scrums meeting to
coordinate the work of multiple Scrum teams.
These meetings are analogous to the daily Scrum meeting, but do not necessarily happen
every day. In many organizations, having a Scrum of Scrums meeting twice a week is
sufficient.
Like other Agile methods, XP is a software development methodology broken down into
work sprints§. Agile frameworks follow an iterative process—you complete and review the
framework after every sprint, refine it for maximum efficiency, and adjust to changing
requirements. Similar to other Agile methods, XP’s design allows developers to respond to
customer stories, adapt, and change in real-time. But XP is much more disciplined, using
frequent code reviews and unit testing to make changes quickly. It’s also highly creative
and collaborative, prioritizing teamwork during all development stages.
§ A sprint is a short, time-boxed period when a scrum team works to complete a set amount
of work. Sprints are at the very heart of scrum and agile methodologies, and getting sprints
right will help your agile team ship better software with fewer headaches. Agile is a set of
principles and scrum is a framework for getting it done.
“With scrum, a product is built in a series of iterations called sprints that break down big,
complex projects into bite-sized pieces," said Megan Cook, Group Product Manager for Jira
Software at Atlassian.
5 values of extreme programming
Extreme programming is value driven. Instead of using external motivators, XP allows your
team to work in a less complicated way (focusing on simplicity and collaboration over
complex designs), all based on these five values.
1. Simplicity
Before starting any extreme programming work, first ask yourself: What is the simplest thing
that also works? The “that works” part is a key differentiator—the simplest thing is not
always practical or effective. In XP, your focus is on getting the most important work done
first. This means you’re looking for a simple project that you know you can accomplish.
2. Communication
XP relies on quick reactivity and effective communication. In order to work, the team needs
to be open and honest with one another. When problems arise, you are expected to speak up.
The reason for this is that other team members will often already have a solution. And if they
don’t, you will come up with one faster as a group than you would alone.
3. Feedback
Like other Agile methodologies, XP incorporates user stories and feedback directly into the
process. XP’s focus is producing work quickly and simply, then sharing it to get almost
immediate feedback. As such, developers are in almost constant contact with customers
throughout the process. In XP, you launch frequent releases to gain insights early and often.
When you receive feedback, you will adapt the process to incorporate it (instead of the
project). For example, if the feedback reveals unnecessary lag time, you'd adjust your
process to have a pair of developers improve the lag time (latency) instead of adjusting the project
as a whole.
4. Courage
XP requires a certain amount of courage. You’re always expected to give honest updates on
your progress, which can get pretty vulnerable. If you miss a deadline in XP, your team lead
likely won’t want to discuss why. Instead, you’d tell them you missed the deadline, hold
yourself accountable, and get back to work.
If you're a team lead, your responsibility at the beginning of the XP process is to set the
expectation for success and define "done." There is often little planning for failure because
the team focuses on success. However, this can be scary, because things won’t always go as
planned. But if things change during the XP process, your team is expected to adapt and
change with it.
5. Respect
Considering how highly XP prioritizes communication and honesty, it makes sense that
respect would be important. In order for teams to communicate and collaborate effectively,
they need to be able to disagree. But there are ways to do that kindly. Respect is a good
foundation that leads to kindness and trust—even in the presence of a whole lot of honesty.
For extreme programming, the expectations are:
A recognition that everyone on the team brings something valuable to the project.
1. Planning
In the planning stages of XP, you’re determining if the project is viable and the best fit for
XP. To do this, you’ll look at:
User stories to see if they match the simplicity value and check in to ensure that the
customer is available for the process. If the user story is more complex, or it’s made
by an anonymous customer, it likely won’t work for XP.
The business value and priority of the project to make sure that this falls in line with
“getting the most important work done first.”
What stage of development you’re in. XP is best for early stage development, and
won’t work as well for later iterations.
Once you’ve confirmed the project is viable for XP, create a release schedule—but keep in
mind that you should be releasing early and often to gain feedback. To do this:
Break the project down into iterations and create a plan for each one.
Share updates as they happen, which empowers your team to be honest and
transparent.
Share real-time updates that help the team identify, adapt, and make changes more
quickly.
Use a project management tool to create a Kanban board or timeline to track your
progress in real time.
2. Managing
One of the key elements of XP is the physical space. XP purists recommend using an open
workspace where all team members work in one open room. Because XP is so collaborative,
you’ll benefit from having a space where you can physically come together. But that’s not
always practical in this day and age. If you work on a remote team, consider using
a platform that encourages asynchronous work for remote collaboration. This way, all
members can continue to work on the project together, even if they’re not physically together.
As in other Agile methods, use daily standup meetings to check in and encourage constant,
open communication. You'll want to use both a weekly cycle and a quarterly cycle. During
your quarterly cycle, you and your team will review stories that will guide your work. You’ll
also study your XP process, looking for gaps (gap analysis) or opportunities to make changes.
Then, you’ll work in weekly cycles, which each start with a customer meeting. The customer
chooses the user story they want programmers to work on that week.
As a manager or team lead, your focus will be on maintaining work progress, measuring the
pace, shifting team members around to address bugs or issues as they arise, or changing the
XP process to fit your current project and iteration. Remember, the goal of XP is to be
flexible and take action, so your work will be highly focused on the team’s current work and
reactive to any changes.
3. Designing
When you’re just starting out with extreme programming, begin with the simplest possible
design, knowing that later iterations will make them more complex. Do not add in early
functionality at this stage to keep it as bare bones as possible (almost like prototyping).
CRCs are useful for stimulating the process and spotting potential problems. Regardless of
how you design, you’ll want to use a system that reduces potential bottlenecks. To do this, be
sure you’re proactively looking for risks. As soon as a potential threat emerges, assign one to
two team members to find a solution in the event that the threat takes place.
4. Coding
One of the more unique aspects of XP is that you’ll stay in constant contact with the customer
throughout the coding process. This partnership allows you to test and incorporate feedback
within each iteration, instead of waiting until the end of a sprint. But coding rules are fairly
strict in XP. Some of these rules include:
Using unit tests to nail down requirements and develop all aspects of the project.
Using continuous integration to add new code and immediately test it.
Allowing only one pair to update code at any given time, to reduce errors.
Collective code ownership—any member of the team can change your code at any
time.
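As a minimal sketch of the test-first style these rules imply: the test is written before the code it specifies, then the simplest code that passes is added. The `slugify` function is a hypothetical user-story requirement, not from the text.

```python
# Test written first: it pins down the requirement before any code exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  XP  Rocks ") == "xp-rocks"

# Simplest implementation that makes the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()
print("all tests pass")
```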
5. Testing
You should be testing throughout the extreme programming process. All code will need to
pass unit tests before it’s released. If you discover bugs during these tests, you’ll create new,
additional tests to fix them. Later on, you’ll configure the same user story you’ve been
working on into an acceptance test. During this test, the customer reviews the results to see
how well you translated the user story into the product.
XP may be the right approach if you:
Manage a smaller team. Because of its highly collaborative nature, XP works best on
smaller teams of under 10 people.
Are in constant contact with your customers. XP incorporates customer requirements
throughout the development process, and even relies on them for testing and approval.
Have an adaptable team that can embrace change (without hard feelings). By its very
nature, extreme programming will often require your whole team to toss out their hard
work. There are also rules that allow other team members to make changes at any
time, which doesn’t work if your team members mig ht take that personally.
Are well versed in the technical aspects of coding. XP isn’t for beginners. You need
to be able to work and make changes quickly.
Question
Iteration
Re-factoring
Milestone reviews
Chapter 3
The UML diagrams are made from elements of object-oriented concepts. Software
developers use UML to create successful models and designs for properly functioning
systems. This simplifies the software development process. After developers finish writing
the code, they draw UML diagrams to document the different workflows and activities and to
delegate roles. This helps them make informed decisions about which systems to develop and
how to do so efficiently.
UML contains 13 types of diagrams that software developers and other professionals draw
and use. These diagrams fall into two groups:
Structural diagrams
Structural diagrams show a system's structure, implementation levels, individual parts and
how these components interact to create a functional system. There are six kinds of structure
diagrams, including:
1. Class diagrams: Class diagrams are the foundation of any object-oriented
software system; they depict classes and the sets of relationships between classes. The
classes within a system or operation depict its structure, which helps software
engineers and developers identify the relationships between objects.
3. Composite structure diagrams: This diagram shows the internal structure of a class
and how it communicates with other parts of the system.
5. Object diagrams: Object diagrams show the relationships between the objects in a
system, along with their attributes.
6. Package diagrams: Packages are the various levels that may contribute to a system's
architecture, and package diagrams help engineers and developers organize their
UML diagrams into groups that make them easier to understand.
Behavioral diagrams
Behavioral diagrams show how a proper system should function. Explore these seven kinds
of behavioral diagrams:
1. Use case diagrams: A use case diagram shows the parts of the system or
functionality of a system and how they relate to each other. It gives developers a clear
understanding of how things function without needing to look into implementation
details.
2. Activity diagrams: These diagrams depict the flow of control in a system and may be
useful as a reference to follow when executing a use case diagram. Activity diagrams
can also depict the causes of a particular event.
3. Sequence diagram: This diagram shows how objects communicate with each other
sequentially. It's commonly used by software developers to document and understand
the requirements needed to establish new systems or gain more knowledge about
existing systems.
6. State machine diagrams: These diagrams describe how objects behave in different
ways in their present state.
7. Time diagram: Time diagrams are a type of sequence diagram used to show the
behavior of objects over time. Developers use them to illustrate duration constraints
and the time to control changes in objects' behavior.
Class diagram
A class diagram
UML allows different levels of detail on both the attributes and the methods of a class:
Method()
Method(parameter1, parameter2, …)
Method(parameter1, parameter2 = initialValue, …)
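As a rough mapping of these levels of method detail into code, here is a hypothetical Account class in Python; the class and its methods are illustrative only.

```python
class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner      # attribute
        self.balance = balance  # attribute with an initial value

    # Method() -- no parameters shown
    def close(self):
        self.balance = 0.0

    # Method(parameter1, parameter2) -- parameters listed
    def transfer(self, target: "Account", amount: float):
        self.withdraw(amount)
        target.balance += amount

    # Method(parameter1, parameter2 = initialValue) -- default value shown
    def withdraw(self, amount: float, fee: float = 0.0):
        self.balance -= amount + fee
```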
Relationships:
1) Dependency
2) Association
3) Generalization
Dependencies
If you change one object's interface, you need to change the dependent object.
(Figure: a dependency example showing User/LogIn, Amount Request and Resources.)
Associations
Associations imply that a relationship must be preserved for some time (from 0.01 ms to forever).
Example: AccountCollection (1) to Accounts (0..*), i.e., one AccountCollection is associated with zero or more Accounts.
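A small sketch contrasting the two relationships in code, with hypothetical classes: a dependency uses another class only transiently, while an association holds a reference that is preserved over time (as in the AccountCollection example above).

```python
class Account:
    def __init__(self, number: str):
        self.number = number

class StatementPrinter:
    # Dependency: Account is used only transiently, as a parameter.
    # Changing Account's interface forces a change here.
    def print_statement(self, account: Account):
        print(f"Statement for account {account.number}")

class AccountCollection:
    # Association: the relationship is preserved over time; the
    # collection holds references to zero or more Accounts (0..*).
    def __init__(self):
        self.accounts = []

    def add(self, account: Account):
        self.accounts.append(account)
```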
What is a collaboration diagram?
A collaboration diagram, also known as a communication diagram, is an illustration of the
relationships and interactions among software objects in the Unified Modeling Language
(UML). Developers can use these diagrams to portray the dynamic behavior of a
particular use case and define the role of each object.
To create a collaboration diagram, first identify the structural elements required to carry out
the functionality of an interaction. Then build a model using the relationships between those
elements. Several vendors offer software for creating and editing collaboration diagrams.
1. Objects. These are shown as rectangles with naming labels inside. The naming label
follows the convention of object name: class name. If an object has a property or state
that specifically influences the collaboration, this should also be noted.
2. Actors. These are instances that invoke the interaction in the diagram. Each actor has a
name and a role, with one actor initiating the entire use case.
3. Links. These connect objects with actors and are depicted using a solid line between two
elements. Each link is an instance where messages can be sent.
4. Messages between objects. These are shown as a labeled arrow placed near a link. These
messages are communications between objects that convey information about the activity
and can include the sequence number.
The most important objects are placed in the center of the diagram, with all other
participating objects branching off. After all objects are placed, links and messages should be
added in between.
Use Case Diagram
A use case diagram is a behavioural UML diagram type that is frequently used to analyze various
systems. Use case diagrams enable you to visualize the different types of roles (users) in a system
and how those roles interact with the system.
To identify functions and how users interact with them – The primary purpose of
use case diagrams.
For a high-level view of the system – Especially useful when presenting to managers
or stakeholders. You can highlight the roles(users) that interact with the system and
the functionality provided by the system without going deep into inner workings of
the system.
To identify internal and external factors – This might sound simple but in large
complex projects a system can be identified as an external role in another use case.
Actor
An actor in a use case diagram is any entity that performs a role in a given system. This could be
a person, an organization or an external system, and it is usually drawn as the stick figure shown below.
Use Case
A use case represents a function or an action within the system. It is drawn as an oval and named
with the function.
System
The system is used to define the scope of the use case and is drawn as a rectangle. This is an
optional element, but it is useful when you are visualizing large systems. For example, you can
create all the use cases and then use the system object to define the scope covered by your
project. Or you can even use it to show the different areas covered in different releases.
Package
The package is another optional element that is extremely useful in complex diagrams. Similar to
class diagrams (from OOP), packages are used to group use cases together. They are drawn like the
image shown here.
When it comes to analyzing the requirements of a system (for the SRS), use case diagrams are
second to none. A use case diagram mainly consists of actors, use cases and relationships.
Actors
Relationships
Example 1
In this use case scenario, a food delivery mobile application wants to expand to include more
food and drink establishments, even if some establishments have a limited menu.
Deliver the Good dishes, a food delivery service, wants to grow the number of offered
establishments and aims to include coffee shops and convenience stores. The software
developers need to determine how the newly featured establishments fit within the current
software parameters and what user thresholds might prompt the software to move to the next
stage. The team runs use cases like:
UC1: A customer searches for a specific brand item not found in the area or the selected
establishment. For example, in the Jio Grocery app, a customer wants to find Unibic
products.
UC2: A customer with a low bill total receives a minimum-purchase prompt (or a message
showing how much more must be purchased to avoid delivery charges).
As the details given here show, the proposed system is an improvement over an existing
e-commerce web app. The use cases described above can be applied to all e-retailer web apps.
Example 2
In this use case example, Air India wants to refresh its online booking system, offering more
complex fare options, ancillary revenue options and additional optional services, like
online check-in.
Air India software engineers design a refreshed fare reservation page, complete with tiered
fare selection and extra options like lounge access, free flight change or cancellation, and
complimentary checked bags. It also allows customers (account holders) to pay by credit card,
debit card, online payment platforms or Air India loyalty program miles. The software
engineers conduct several use cases to establish how the booking flow works and identify
potential concerns. They run use cases that include:
Use Case: Book a ticket between a starting point and a destination.
Primary Actor: A customer who wishes to travel.
Precondition: The customer must be logged in.
Main success scenario:
Exception scenario:
https://creately.com/guides/sequence-diagram-tutorial/
Sequence diagrams, commonly used by developers, model the interactions between objects in
a single use case. They illustrate how the different parts of a system interact with each other
to carry out a function, and the order in which the interactions occur when a particular use
case is executed.
In simpler words, a sequence diagram shows how different parts (objects) of a system work
together to perform a function, and in what order.
Sequence diagrams are commonly used in software development to illustrate the behavior of
a system or to help developers design and understand complex systems. They can be used to
model both simple and complex interactions between objects, making them a useful tool for
software development teams.
A sequence diagram is structured to represent a timeline that begins at the top and descends
gradually to mark the sequence of interactions. Each object has a column, and the messages
exchanged between objects are represented by arrows.
Activation Bars
The activation bar is the box placed on the lifeline. It is used to indicate that an object is
active (or instantiated) during an interaction between two objects. The length of the rectangle
indicates the duration for which the object remains active.
In a sequence diagram, an interaction between two objects occurs when one object sends a
message to another. The use of the activation bar on the lifelines of the Message Caller (the
object that sends the message) and the Message Receiver (the object that receives the
message) indicates that both are active/ are instantiated during the exchange of the message.
Message Arrows
An arrow from the Message Caller to the Message Receiver specifies a message in a
sequence diagram. A message can flow in any direction: from left to right, right to left, or
back to the Message Caller itself. While you can describe the message being sent from one
object to the other on the arrow, different arrowheads let you indicate the type of message
being sent.
The message arrow comes with a description, known as a message signature. The format
for this message signature is:
attribute = message_name(arguments): return_type
All parts except the message_name are optional.
Synchronous message
As shown in the activation bars example, a synchronous message is used when the sender
waits for the receiver to process the message and return before carrying on with another
message. The arrowhead used to indicate this type of message is a solid one, like the one
below.
Asynchronous message
An asynchronous message is used when the message caller does not wait for the receiver to
process the message and return before sending other messages to other objects within the
system. The arrowhead used to show this type of message is a line arrow as shown in the
example below.
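To make the distinction concrete in code, here is a small Python sketch: a plain function call behaves like a synchronous message (the caller waits for the return), while dispatching the call to a thread approximates an asynchronous message. The `process` function and its workload are hypothetical.

```python
import threading
import time

def process(msg: str) -> str:
    time.sleep(0.1)  # simulate work done by the message receiver
    return f"processed {msg}"

# Synchronous message: the caller blocks until the receiver returns.
result = process("order-1")
print(result)

# Asynchronous message: the caller sends and moves on immediately.
t = threading.Thread(target=process, args=("order-2",))
t.start()
print("caller continues without waiting")
t.join()  # only needed here so the script exits cleanly
```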
Return message
A return message is used to indicate that the message receiver is done processing the message
and is returning control over to the message caller. Return messages are optional notation
pieces, for an activation bar that is triggered by a synchronous message always implies a
return message.
Tip: You can avoid cluttering up your diagrams by minimizing the use of return messages
since the return value can be specified in the initial message arrow itself.
Participant creation message
Objects do not necessarily live for the entire duration of the sequence of events. Objects or
participants can be created according to the message that is being sent.
The dropped participant box notation can be used when you need to show that the particular
participant did not exist until the create call was sent. If the created participant does
something immediately after its creation, you should add an activation box right below the
participant box.
Likewise, participants when no longer needed can also be deleted from a sequence
diagram. This is done by adding an ‘X’ at the end of the lifeline of the said participant.
Reflexive message
A reflexive message occurs when an object sends a message to itself, drawn as an arrow that
starts and ends on the same lifeline.
Comment
UML diagrams generally permit the annotation of comments in all UML diagram types. The
comment object is a rectangle with a folded-over corner, as shown below. The comment can
be connected to the related element with a dashed line.
Then, before you start drawing the sequence diagram or decide what interactions should be
included in it, you need to draw the use case diagram and ready a comprehensive description
of what the particular use case does.
From the above use case diagram example of ‘Create New Online Library Account’, we will
focus on the use case named ‘Create New User Account’ to draw our sequence diagram
example.
Before drawing the sequence diagram, it’s necessary to identify the objects or actors that
would be involved in creating a new user account. These would be:
Librarian
Email system
Once you identify the objects, it is then important to write a detailed description of what the
use case does. From this description, you can easily figure out the interactions (that should go
in the sequence diagram) that would occur between the objects above, once the use case is
executed.
Here are the steps that occur in the use case named ‘Create New Library User Account’.
The librarian requests the system to create a new online library account.
The user's details are checked using the user credentials database.
From each of these steps, you can easily specify what messages should be exchanged
between the objects in the sequence diagram. Once it’s clear, you can go ahead and start
drawing the sequence diagram.
The sequence diagram below shows how the objects in the online library management system
interact with each other to perform the function ‘Create New Library User Account’.
Stakeholders have many issues to manage, so it's important to communicate with clarity and
brevity. Activity diagrams help people on the business and development sides of an
organization come together to understand the same process and behavior. One uses a set of
specialized symbols—including those used for starting, ending, merging, or receiving steps in
the flow—to make an activity diagram, which I will cover in more depth within this activity
diagram guide.
These activity diagram shapes and symbols are some of the most common types you'll find in UML
diagrams.
Join symbol / synchronization bar: combines two concurrent activities and re-introduces them
to a flow where only one activity occurs at a time. Represented with a thick vertical or
horizontal line.
Chapter 4
SRS and System design
Learning objectives
A good SRS (Software Requirements Specification) is valuable for implementing a quality
project.
Structure and components of SRS.
The different activities in the process for good SRS.
Attributes of an SRS and main components in SRS document.
Use of use case and DFD for specifying functional requirements.
Definition
According to IEEE:
(i) A condition or capability needed by a user to solve a problem or achieve an objective.
One must understand that we are talking about the capability of the software system which is
proposed to be developed. Obviously, the form and structure of SRS documents may vary
depending on the SDLC model used for developing the proposed system.
Requirement Gathering: From this step onwards, the software development team works to
carry the project forward. The team holds discussions with various stakeholders from the
problem domain and tries to bring out as much information as possible on their requirements.
The requirements are contemplated and segregated into user requirements, system requirements and functional
requirements. The requirements are collected using a number of practices as given -
Software Development Life Cycle
As we said before, the SRS also depends on the SDLC model proposed for the system development.
Therefore, it is difficult to define a generic structure which can be applied to all kinds of
projects. Nevertheless, it is possible to define a structure outline and the essential components of
an SRS. Any SRS document should address the following requirements, which can be considered as
components of the SRS.
1. Functionality.
2. Performance.
5. Security.
(a) For each function it describes what should be the output for given input data.
(b) For each function it describes the type of input data and its source.
(c) For each function it describes the input data, its source, its unit of measurement and its valid
range.
(d) It specifies all operations to be performed on input data to obtain output data, for example
the parameters affected by the operation, the mathematical equations or logical operations, and the
validation of input and output data.
(e) It describes the system response to abnormal conditions or invalid input data which, if
processed at all, would produce invalid output data. For example, a railway reservation system
should not book a ticket for valid input data if no seat is available. As another example, an order
processing system should not process a valid order if the item is not available in inventory.
All these requirements should be specified in measurable terms. For example, the developer
should not state that “System response should be quick”. Instead it should read as
“Response time should be less than one second 96% of the time”.
4. GUI (Graphical User Interface):- GUI requirements are becoming important. All interactions
of the software system with the end-user, other software or hardware should be specified.
5. Security :- Specify any requirements regarding security or privacy issues surrounding use
of the product or protection of the data used or created by the product. Define any user
identity authentication requirements. Refer to any external policies or regulations containing
security issues that affect the product. Define any security or privacy certifications that must
be satisfied. Ensure that all required compliance items are mentioned in the document.
1. Correctness: User review is used to verify the accuracy of the requirements stated in the
SRS. The SRS is said to be correct if it covers all the needs that are truly expected from the
system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All significant requirements, whether relating to functionality, performance, design
constraints, attributes, or external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all
available categories of situations.
Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions
of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflicts with other requirements. There are three types of possible conflict in
the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in
another as textual.
(b) One condition may state that all lights shall be green while another states that all lights
shall be blue.
(2). There may be a logical or temporal conflict between two specified actions. For
example,
(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.
(b) One condition may state that "A" must always follow "B," while another requires that
"A" and "B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different terms
for that object. For example, a program's request for user input may be called a "prompt" in
one requirement and a "cue" in another. The use of standard terminology and descriptions
promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. In case a term is
used with multiple meanings, the SRS should define the intended meaning so that it is
clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if
each requirement in it has an identifier to indicate either the significance or stability of that
particular requirement.
Typically, all requirements are not equally important. Some requirements may be essential,
especially for life-critical applications, while others may be desirable. Each element should
be identified to make these differences clear and explicit. Another way to rank requirements
is to distinguish classes of items as essential, conditional, and optional.
8. Traceability: The SRS is traceable if the origin of each of its requirements is clear and if
it facilitates the referencing of each requirement in future development or enhancement
documentation.
1. Backward Traceability: This depends upon each requirement explicitly referencing its
source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name
or reference number.
The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design documents are modified, it is necessary
to be able to ascertain the complete set of requirements that may be affected by those
modifications.
10. Testability: An SRS should be written in such a way that it is simple to generate test
cases and test plans from the document.
11. Understandable by the customer: An end user may be an expert in his/her specific
domain but might not be trained in computer science. Hence, the use of formal notations
and symbols should be avoided as far as possible. The language should be kept
simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the
details should be explained explicitly, whereas for a feasibility study less detail can be
used. Hence, the level of abstraction varies according to the purpose of the SRS.
Concise: The SRS report should be concise and at the same time, unambiguous, consistent,
and complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Conceptual integrity: The SRS should show conceptual integrity so that the reader can easily
understand it.
Response to undesired events: The SRS should characterize acceptable responses to
unwanted events. These are called system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have
been met in an implementation.
Entity/Relationship Modelling
Entity/Relationship models consist of:-
Entities.
Relationships.
Attributes.
E/R Diagrams.
Database Design
Example
One-to-many relationship
For example, one Lecturer teaches many Courses, while each Course is taught by only one
Lecturer.
Entities :-
Entities represent objects or things of interest.
Entities have types, such as Lecturer, and instances of those types; for example, Ashwin
Mehta or Sidraa Khan are instances of Lecturer.
Chapter 7
Metrics of Software Engineering
A computer program is an implementation of an algorithm considered to be a collection of
tokens which can be classified as either operators or operands. Halstead’s metrics are
included in a number of current commercial tools that count software lines of code. By
counting the tokens and determining which are operators and which are operands, the
following base measures can be collected :
n1 = the number of distinct operators, n2 = the number of distinct operands,
N1 = the total number of operator occurrences, N2 = the total number of operand occurrences.
Halstead Program Length – The total number of operator occurrences and the total
number of operand occurrences.
N = N1 + N2
And the estimated program length is N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to estimate program length:
NJ = log2(n1!) + log2(n2!)
NB = n1 * log2(n2) + n2 * log2(n1)
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2(n)) / 2
Halstead Vocabulary – The total number of unique operators and unique operands.
n = n1 + n2
Program Volume – Proportional to program size, this represents the size, in bits, of the space
necessary for storing the program. This parameter depends on the specific algorithm
implementation. The properties V, N, and the number of lines in the code are shown to be
linearly connected and equally valid for measuring relative program size.
V = N * log2(n), i.e. length times the base-2 logarithm of the vocabulary.
The unit of measurement of volume is the common unit for size, "bits". It is the actual size
of a program if a uniform binary encoding for the vocabulary is used. The estimated number
of delivered bugs is B = V / 3000.
Potential Minimum Volume – The potential minimum volume V* is defined as the
volume of the most succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
Program Level – To rank programming languages by the level of abstraction they provide,
the Program Level (L) is considered. The higher the level of a language, the less effort it
takes to develop a program using that language.
L = V* / V
The value of L ranges between zero and one, with L=1 representing a program written at
the highest possible level (i.e., with minimum size).
And the estimated program level is L^ = (2 * n2) / (n1 * N2)
Program Difficulty – This parameter shows how difficult to handle the program is.
D = (n1 / 2) * (N2 / n2)
D = 1 / L
As the volume of the implementation of a program increases, the program level decreases
and the difficulty increases. Thus, programming practices such as redundant usage of op-
erands, or the failure to use higher-level control constructs will tend to increase the
volume as well as the difficulty.
Programming Effort – Measures the amount of mental activity needed to translate the
existing algorithm into implementation in the specified program language.
E = V / L = D * V = Difficulty * Volume
Language Level – Shows the level of the programming language used to implement the
algorithm. The same algorithm demands additional effort if it is written in a low-level
programming language. For example, it is easier to program in Pascal than in Assembler.
lambda = L * V* = L^2 * V
int sort(int x[], int n)
{
    int i, j, save, im1;
    /* This function sorts array x in ascending order */
    if (n < 2) return 1;
    for (i = 2; i <= n; i++)
    {
        im1 = i - 1;
        for (j = 1; j <= im1; j++)
            if (x[i] < x[j])
            {
                save = x[i];
                x[i] = x[j];
                x[j] = save;
            }
    }
    return 0;
}
Explanation –
Operator   Occurrences   Operand   Occurrences
int        4             sort      1
()         5             x         7
,          4             n         3
[]         7             i         8
if         2             j         7
<          2             save      3
;          11            im1       3
for        2             2         2
=          6             1         3
-          1             0         1
<=         2
++         2
return     2
{}         3
n1 = 14    N1 = 53       n2 = 10   N2 = 38
Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x: the array holding the integers to be sorted; it is used both as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds (T = E / S, computed with E = V / L^ and the Stroud number S = 18)
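As a cross-check, the base counts from the table can be plugged directly into the formulas
above. The short C program below is an illustrative sketch (not part of the original example);
it reproduces the hand-computed values up to rounding:

#include <stdio.h>
#include <math.h>

/* Recomputes the Halstead measures for the sort example from its
   four base counts plus n2*. Compile with -lm. */
int main(void)
{
    double n1 = 14, n2 = 10;   /* unique operators, unique operands    */
    double N1 = 53, N2 = 38;   /* total operator/operand occurrences   */
    double n2s = 3;            /* unique input/output parameters (n2*) */

    double N  = N1 + N2;                       /* program length    */
    double n  = n1 + n2;                       /* vocabulary        */
    double V  = N * log2(n);                   /* volume in bits    */
    double Vs = (2 + n2s) * log2(2 + n2s);     /* potential volume  */
    double L  = Vs / V;                        /* program level     */
    double D  = 1.0 / L;                       /* difficulty        */
    double Lh = (2 * n2) / (n1 * N2);          /* estimated level   */
    double T  = (V / Lh) / 18.0;               /* time, Stroud S=18 */

    printf("N=%.0f n=%.0f V=%.2f V*=%.2f L=%.3f D=%.2f L^=%.3f T=%.0f s\n",
           N, n, V, Vs, L, D, Lh, T);
    return 0;
}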
Cyclomatic complexity
Cyclomatic complexity of a code section is the quantitative measure of the number of
linearly independent paths in it. It is a software metric used to indicate the complexity of a
program. It is computed using the Control Flow Graph of the program. The nodes in the
graph indicate the smallest group of commands of a program, and a directed edge in it
connects the two nodes i.e. if second command might immediately follow the first
command.
For example, if source code contains no control flow statement then its cyclomatic
complexity will be 1 and source code contains a single path in it. Similarly, if the source
code contains one if condition then cyclomatic complexity will be 2 because there will be
two paths one for true and the other for false.
Mathematically, the control flow graph of a structured program is a directed graph in which
an edge joins two basic blocks of the program when control may pass from the first to the second.
So, cyclomatic complexity M would be defined as,
M = E – N + 2P
where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
Steps that should be followed in calculating cyclomatic complexity and designing test cases
are: construct the control flow graph of the code, compute its cyclomatic complexity M,
identify M linearly independent paths, and design one test case for each path. Consider the
following code:
A = 10
IF B > C THEN
A=B
ELSE
A=C
ENDIF
Print A
Print B
Print C
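For this sample, treating each straight-line run of statements as one basic block, the control
flow graph has four nodes (the block ending with the IF decision, the two branch blocks
A = B and A = C, and the block of Print statements where the branches rejoin) and four
edges, in one connected component. Hence M = E – N + 2P = 4 – 4 + 2 = 2, which also
matches the rule of thumb "number of decisions + 1". There are therefore two linearly
independent paths, and at least two test cases are needed: one with B > C and one with
B <= C.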
Determining the independent path executions thus proves to be very helpful for developers
and testers.
It can ensure that every path has been tested at least once.
It thus helps to focus more on uncovered paths.
Code coverage can be improved.
Risk associated with the program can be evaluated.
These metrics, when used early in the program, help in reducing the risks.
Advantages of Cyclomatic Complexity:
It can be used as a quality metric; it gives the relative complexity of various designs.
It is faster to compute than Halstead's metrics.
It is used to measure the minimum effort and best areas of concentration for testing.
It is able to guide the testing process.
It is easy to apply.
Disadvantages of Cyclomatic Complexity:
It is a measure of the program's control complexity and not its data complexity.
It treats nested and non-nested conditional structures alike, even though nested structures
are harder to understand.
In the case of simple comparisons and decision structures, it may give a misleading figure.
Lines of code
line of code (LOC) is any line of text in a code that is not a comment or blank line,
and also header lines, in any case of the number of statements or fragments of statements on
the line. LOC clearly consists of all lines containing the declaration of any variable, and
executable and non-executable statements. As Lines of Code (LOC) only counts the volume
of code, you can only use it to compare or estimate projects that use the same language and
are coded using the same coding standards.
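As a small illustration (a made-up fragment, not from any real project), consider the
following C listing:

#include <stdio.h>

/* greet the user */
int main(void)
{
    printf("hello\n");
    return 0;
}

Of these 8 physical lines, the blank line and the comment line are excluded, so the fragment
measures 6 LOC: the header line, the function header, the two braces, and the two executable
statements.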
Features :
Variations such as “source lines of code” (SLOC) are used to describe the size of a codebase.
LOC figures are frequently cited in arguments about productivity and cost.
They are used in assessing a project’s performance or efficiency.
Advantages :
It is the most used metric in cost estimation.
Its alternatives have many problems of their own compared to this metric.
It makes effort estimation very easy.
Disadvantages :
Very difficult to estimate the LOC of the final program from the problem specification.
It correlates poorly with quality and efficiency of code.
It doesn’t consider complexity.
Research has shown a rough correlation between LOC and the overall cost and length of
developing a project/ product in Software Development, and between LOC and the number
of defects. This means the lower your LOC measurement is, the better off you probably are
in the development of your product.
Introduction
There can be various methods to calculate function points; you can define your own custom
method based on your specific requirements. But "why re-invent the wheel?" when you already
have a tried and tested method given by IFPUG, based on their experience and case studies.
The method used to calculate function points is known as FPA (Function Point Analysis). I
would define it in a single line as "a method of quantifying the size and complexity of a
software system in terms of the functions that the system delivers to the user". Let's start
learning how to calculate the function points.
Functionalities
The following functionalities are counted while counting the function points of the system.
Data Functionality
o Internal Logical Files (ILF)
o External Interface Files (EIF)
Transaction Functionality
o External Inputs (EI)
o External Outputs (EO)
o External Queries (EQ)
Now, logically, if you divide your software application into parts, each part will always map
to one or more of the 5 functionalities mentioned above. A software application cannot be
built without using at least one of the functionalities above.
We need to understand a system first with respect to the function points; for that, consider an
application model as below for measuring the function points.
c. A few other terms, RET and DET, must be understood here as well to determine the
function points.
d. A RET (record element type) is a user-recognizable subgroup of data elements within
an ILF or EIF.
e. A DET (data element type) is a unique, user-recognizable, non-repeated field either
maintained in an ILF or retrieved from an ILF or EIF.
3. Identify the transaction functionalities (EI, EO, EQ)
a. All the three Transactional functionalities are "elementary processes"
b. An Elementary Process is the smallest unit of activity that is meaningful to the user(s).
c. The elementary process must be self-contained and leave the business of the application in a
consistent state.
d. An EI (External Input) is an elementary process of the application which processes data
that enters from outside the boundary of the application. It maintains one or more ILFs.
e. An EO (External Output) is an elementary process that generates data that exits the
boundary of the application (i.e. presents information to the user) through processing logic
or the retrieval of data from an ILF or EIF. The processing logic contains mathematical
calculations, derived data etc.
f. An EQ (External Query) is an elementary process that results in the retrieval of data that is
sent outside the application boundary (i.e. presents information to the user) through the
retrieval of data from an ILF or EIF. The processing logic should not contain any
mathematical formula, derived data etc.
4. Using the above data we can calculate the UFP (Unadjusted Function Points)
a. After all the basic data & transactional functionalities of the system have been defined we
can use the following set of tables below to calculate the total UFP.
b. Now for each type of Functionality determine the UFP's based on the below table.
c. For EI's, EO's & EQ's determine the FTR's and DET's and based on that determine the
Complexity and hence the Number of UFP's it contributes. We have to calculate this
for all the EI's, EO's & EQ's.
(For each functionality type, the IFPUG complexity tables - for example the External Inputs
(EI) table - assign a Low, Average or High weight from the FTR and DET counts.)
UFP = Sum of the complexities of all the EI's, EO's, EQ's, ILF's and EIF's
5. Next comes the calculation of the VAF (Value Adjustment Factor), which is based on the
TDI (Total Degree of Influence of the 14 General System Characteristics).
a. TDI = Sum of (DI of the 14 General System Characteristics), where DI stands for Degree
of Influence.
b. These 14 GSC are
1. Data Communications
2. Distributed Data Processing
3. Performance
4. Heavily Used Configuration
5. Transaction Rate
6. Online Data Entry
7. End-User Efficiency
8. Online Update
9. Complex Processing
10. Reusability
11. Installation Ease
12. Operational Ease
13. Multiple Sites
14. Facilitate Change
FP = UFP * VAF
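Under the standard IFPUG model, each GSC is rated on a Degree of Influence scale from 0
(no influence) to 5 (strong influence), so TDI ranges from 0 to 70, and
VAF = 0.65 + (0.01 * TDI)
giving a VAF between 0.65 and 1.35. As a purely illustrative example: if UFP = 250 and
TDI = 30, then VAF = 0.65 + 0.30 = 0.95 and FP = 250 * 0.95 = 237.5.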
8. Now these FP's can be used to determine the size of the software; they can also be used to
quote the price of the software and to estimate the time and effort required to complete it.
9. Effort in Person-Months = FP divided by the number of FP's per person-month (using your
organization's or an industry benchmark).
10. Schedule in Months = 3.0 * (Effort in Person-Months)^(1/3)
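Continuing the illustrative numbers above, and assuming a benchmark of 10 FP per
person-month: Effort = 237.5 / 10 = 23.75 person-months, and Schedule =
3.0 * (23.75)^(1/3), which is approximately 3.0 * 2.87 = 8.6 months.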
Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem, we
divide and conquer: the problem is divided into smaller pieces so that each piece can be handled
separately.
For software design, the goal is to divide the problem into manageable pieces.
These pieces cannot be entirely independent of each other as they together form the system. They
have to cooperate and communicate to solve the problem. This communication adds complexity.
Abstraction
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the
function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis for
Object Oriented design approaches.
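The short C sketch below (with illustrative names, not taken from the text) shows both kinds of
abstraction side by side: callers of sort_ascending depend only on what it does, not on the
algorithm inside, and users of Stack depend only on its operations, not on its layout.

#include <stdio.h>

/* Functional abstraction: callers know WHAT this does, not HOW.
   The algorithm inside could be replaced without affecting callers. */
static void sort_ascending(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

/* Data abstraction: users of Stack see only these operations; in a real
   design the struct layout would be hidden in a separate source file. */
typedef struct { int data[16]; int top; } Stack;
static void stack_push(Stack *s, int v) { s->data[s->top++] = v; }
static int  stack_pop(Stack *s)         { return s->data[--s->top]; }

int main(void)
{
    int a[] = { 3, 1, 2 };
    Stack s = { {0}, 0 };
    sort_ascending(a, 3);
    stack_push(&s, a[0]);
    printf("%d %d\n", a[0], stack_pop(&s));  /* prints: 1 1 */
    return 0;
}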
Modularity
Modularity refers to the division of software into separate modules which are differently named
and addressed and are integrated later to obtain the completely functional software. It is the
only property that allows a program to be intellectually manageable. Single large programs are
difficult to understand and read due to a large number of reference variables, control paths, global
variables, etc. The desirable properties of a modular system are:
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
Advantages of Modularity
There are several advantages of Modularity:
o It encourages the creation of commonly used routines to be placed in the library and used
by other programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides a framework for complete testing and is more accessible to test.
o It produces well-designed programs.
Disadvantages of Modularity
There are several disadvantages of Modularity:
o Execution time may be, but is not certainly, longer.
o Storage size may be, but is not necessarily, increased.
o More linkage is required, run-time may be longer, more source lines must be
written, and more documentation has to be done.
Modular Design
Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss a different section of
modular design in detail in this section:
The use of information hiding as a design criterion for modular systems provides the most
significant benefits when modifications are required during testing and later during software
maintenance. This is because, as most data and procedures are hidden from other parts of the
software, inadvertent errors introduced during modifications are less likely to propagate to
different locations within the software.
Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, to change. Structured design methods help developers to deal with the
size and complexity of programs. Analysts generate instructions for the developers about how code
should be composed and how pieces of code should fit together to form a program.
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main com-
ponents and then decomposing them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and
moves up the hierarchy. This approach is suitable in the case of an existing system.
1. No Direct Coupling: Two modules exhibit no direct coupling when they are independent of
each other and do not communicate directly.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite
data items such as structure, objects, etc. When the module passes non-global data structure or
entire structure to another module, they are said to be stamp coupled. For example, passing
structure variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the order of instruction execution in another.
5. External Coupling: External coupling arises when two modules share an externally
imposed data format, communication protocol, or device interface. This is related to
communication with external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content coupling exists between two modules if they share code,
e.g., a branch from one module into another module.
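The compact C sketch below (illustrative names only) contrasts three of the coupling types
described above, namely data, stamp, and common coupling:

#include <stdio.h>

int shared_total = 0;                         /* global data item */

/* Data coupling: only elementary data items are passed. */
int add(int a, int b) { return a + b; }

/* Stamp coupling: a composite structure is passed as a whole. */
struct order { int id; double amount; };
double order_amount(struct order o) { return o.amount; }

/* Common coupling: both functions communicate via shared_total. */
void reset_total(void)   { shared_total = 0; }
void add_to_total(int v) { shared_total += v; }

int main(void)
{
    struct order o = { 1, 99.5 };
    reset_total();
    add_to_total(add(2, 3));                  /* shared_total becomes 5 */
    printf("%d %.1f\n", shared_total, order_amount(o));
    return 0;
}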
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low
cohesion."
Types of Modules Cohesion
5. Temporal Cohesion: When a module includes functions that are associated by the
fact that all of them must be executed in the same time span, the module is said to exhibit
temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform similar operations, for example error handling, data input and data output,
etc.
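The short C sketch below (hypothetical functions) contrasts temporal and logical cohesion:

#include <stdio.h>

static void open_log(void)     { puts("log opened");    }
static void load_config(void)  { puts("config loaded"); }
static void read_input(void)   { puts("reading");       }
static void write_output(void) { puts("writing");       }

/* Temporal cohesion: unrelated actions grouped only because they
   all must execute at the same time (program startup). */
static void init_system(void)
{
    open_log();
    load_config();
}

/* Logical cohesion: similar operations (all I/O) bundled into one
   module and selected by a flag. */
static void do_io(int op)
{
    if (op == 1) read_input();
    else         write_output();
}

int main(void) { init_system(); do_io(1); return 0; }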
Coupling vs. Cohesion
Coupling shows the relationships between modules. Cohesion shows the relationships within
the module.
Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They show
end-to-end processing; that is, the flow of processing from where data enters the system to where
it leaves the system can be traced.
Data-flow design is an integral part of several design methods, and most CASE tools support data-
flow diagram creation. Different notations may use different icons to represent data-flow diagram
entities, but their meanings are similar.
A data dictionary lists the purpose of all data items and the definition of all composite data
elements in terms of their component data items. For example, a data dictionary entry may
specify that the data item grossPay consists of the components regularPay and overtimePay.
For the smallest units of data elements, the data dictionary lists their name and their type.
A data dictionary plays a significant role in any software development process because of the
following reasons:
A data dictionary provides a standard language for all relevant information for use by
engineers working on a project. A consistent vocabulary for data items is essential since, in large
projects, different engineers of the project tend to use different terms to refer to the same
data, which unnecessarily causes confusion.
The data dictionary provides the analyst with a means to determine the definition of vari-
ous data structures in terms of their component elements.
Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to
the user without knowledge of its internal design.
Pseudo-code
Pseudo-code notations can be used in both the preliminary and detailed design phases.
Using pseudo-code, the designer describes system characteristics using short, concise, English-
language phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
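For instance, a designer might sketch an order check in this style (an illustrative fragment
using the keywords named above):

If total > credit limit Then
    reject the order
Else
    While unpriced items remain Do
        add the item price to total
    End
    accept the order
End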
Coding
Coding is the process of transforming the design of a system into a computer language format.
This coding phase of software development is concerned with translating the design
specification into source code. It is necessary to write source code and internal documentation so
that conformance of the code to its specification can be easily verified.
Coding is done by coders or programmers, who may be different people from the designers. The
goal is not only to reduce the effort and cost of the coding phase, but also to cut the cost of later
stages. The cost of testing and maintenance can be significantly reduced with efficient coding.
Goals of Coding
1. To translate the design of the system into a computer language
format: Coding is the process of transforming the design of a system into a computer
language format which can be executed by a computer and which performs the tasks
specified by the design produced during the design phase.
2. To reduce the cost of later phases: The cost of testing and maintenance can be
significantly reduced with efficient coding.
3. Making the program more readable: The program should be easy to read and
understand. Having readability and understandability as a clear objective of the coding
activity can itself help in producing more maintainable software.
For implementing our design into code, we require a high-level programming language. A
programming language should have the following characteristics:
Readability: A good high-level language will allow programs to be written in ways that
resemble a quite-English description of the underlying functions. The coding may be done in an
essentially self-documenting way.
Portability: High-level languages, being virtually machine-independent, make it easy to
develop portable software.
Generality: Most high-level languages allow the writing of a vast collection of programs, thus
relieving the programmer of the need to become an expert in many diverse languages.
Brevity: A language should have the ability to implement an algorithm with a small amount of
code. Programs written in high-level languages are often significantly shorter than their low-level
equivalents.
Cost: The ultimate cost of a programming language is a function of many of its characteristics.
Modularity: It is desirable that programs can be developed in the language as several separately
compiled modules, with the appropriate structure for ensuring self-consistency among these
modules.
Widely available: Language should be widely available, and it should be feasible to provide
translators for all the major machines and all the primary operating systems.
A coding standard lists several rules to be followed during coding, such as the way variables are to
be named, the way the code is to be laid out, error return conventions, etc.
Coding Standards
General coding standards refer to how the developer writes code, so here we will discuss some
essential standards regardless of the programming language being used.
2. Inline comments: Inline comments describing the functioning of a subroutine or key
aspects of the algorithm shall be frequently used.
3. Rules for limiting the use of globals: These rules specify what types of data can be
declared global and what cannot.
The following are some representative coding guidelines recommended by many software
development organizations.
1. Line Length: It is considered a good practice to keep the length of source code lines at or
below 80 characters. Lines longer than this may not be visible properly on some terminals and
tools. Some printers will truncate lines longer than 80 columns.
2. Spacing: The appropriate use of spaces within a line of code can improve readability.
Example:
Bad:
cost=price+(price*sales_tax)
fprintf(stdout ,"The total cost is %5.2f\n",cost);
Good:
cost = price + (price * sales_tax);
fprintf(stdout, "The total cost is %5.2f\n", cost);
4. The length of any function should not exceed 10 source lines: A very
lengthy function is generally very difficult to understand, as it probably carries out many
different functions. For the same reason, lengthy functions are likely to have a
disproportionately larger number of bugs.
5. Do not use goto statements: Use of goto statements makes a program unstructured
and very tough to understand.
7. Error Messages: Error handling is an essential aspect of computer programming. This does
not only include adding the necessary logic to test for and handle errors but also involves making
error messages meaningful.
Programming Style
Programming style refers to the technique used in writing the source code for a computer program.
Most programming styles are designed to help programmers quickly read and understand the
program as well as avoid making errors. (Older programming styles also focused on conserving
screen space.) A good coding style can overcome the many deficiencies of a first programming
language, while poor style can defeat the intent of an excellent language.
The goal of good programming style is to provide understandable, straightforward, elegant code.
The programming style used in a particular program may be derived from the coding standards or
code conventions of a company or other computing organization, as well as from the preferences
of the actual programmer.
2. Naming: In a program, you are required to name modules, processes, variables, and so
on. Care should be taken that the naming style is not cryptic or nonrepresentative.
3. Control Constructs: It is desirable to use single-entry, single-exit constructs as much as
possible.
4. Information hiding: The information contained in the data structures should be hidden from
the rest of the system where possible. Information hiding can decrease the coupling between
modules and make the system more maintainable.
5. Nesting: Deep nesting of loops and conditions greatly harms the static and dynamic behavior
of a program. It also makes the program logic difficult to understand, so it is desirable to avoid
deep nesting.
6. User-defined types: Make heavy use of user-defined data types like enum, class,
structure, and union. These data types make your program code easy to write and easy to
understand.
7. Module size: The module size should be uniform. The size of the module should not be too
big or too small. If the module size is too large, it is not generally functionally cohesive. If the
module size is too small, it leads to unnecessary overheads.
9. Side-effects: When a module is invoked, it sometimes has a side effect of modifying the
program state. Such side effects should be avoided wherever possible.
Software testing provides an independent and objective view of the software and gives assurance
of the fitness of the software. It involves testing all components under the required services to
confirm whether they satisfy the specified requirements or not. The process also provides the
client with information about the quality of the software.
Testing is mandatory because it would be a dangerous situation if the software were to fail at any
time due to lack of testing. So, without testing, software cannot be deployed to the end user.
What is Testing
Testing is a group of techniques to determine the correctness of the application under a
predefined script, but testing cannot find all the defects of an application. The main intent of
testing is to detect failures of the application so that failures can be discovered and corrected.
Testing does not demonstrate that a product functions properly under all conditions, only that it
is not working under some specific conditions.
Testing furnishes a comparison of the behavior and state of the software against mechanisms by
which a problem can be recognized. These mechanisms may include past versions of the same
product, comparable products, interfaces of expected purpose, relevant standards, or other
criteria, but are not limited to these.
Testing includes an examination of the code as well as the execution of the code in various
environments and conditions, along with an examination of all aspects of the code. In the current
scenario of software development, a testing team may be separate from the development team, so
that information derived from testing can be used to correct the process of software development.
The success of software depends upon its acceptance by its targeted audience, an easy graphical
user interface, strong functionality, load handling, etc. For example, the audience of banking
software is totally different from the audience of a video game. Therefore, when an organization
develops a software product, it can assess whether the software product will be beneficial to its
purchasers and other audiences.
Manual Testing
Manual testing is a software testing process in which test cases are executed manually, without
using any automated tool. All test cases are executed by the tester manually from the end user's
perspective. It ensures whether the application is working as mentioned in the requirement
document or not. Test cases are planned and implemented to cover almost 100 percent of the
software application. Test case reports are also generated manually.
Manual testing is one of the most fundamental testing processes, as it can find both visible and
hidden defects of the software. The difference between the expected output and the output given
by the software is defined as a defect. The developer fixes the defects and hands the software
back to the tester for retesting.
Manual testing is mandatory for every newly developed software before automated testing.
This testing requires great effort and time, but it gives good assurance of largely bug-free
software. Manual testing requires knowledge of manual testing techniques but not of any
automated testing tool.
Manual testing is essential because one of the software testing fundamentals is "100% automation
is not possible."
There are various methods used for manual testing. Each method is used according to its testing
criteria. Types of manual testing are given below:
1. White Box Testing
2. Black Box Testing
3. Unit Testing
4. System Testing
5. Integration Testing
6. Acceptance Testing
o All test cases are executed manually using black box testing and white box testing.
o If bugs occur, then the testing team informs the development team.
o The development team fixes the bugs and hands the software to the testing team for retesting.
o The tester interacts with the software as a real user, so that usability and user interface
issues can be discovered.
o It helps produce software that is close to a hundred percent bug-free.
o It is cost-effective.
o It is easy to learn for new testers.
o Testers develop test cases based on their skills and experience; there is no evidence of
whether they have covered all functions or not.
o Test cases cannot be reused; separate test cases need to be developed for each new
software.
o It does not provide testing on all aspects of the software.
o Since two teams work together, sometimes it is difficult to understand each other's
motives, which can mislead the process.
Appium
TestLink
Postman
Firebug
Mantis
Automation Testing
When test case suites are executed by using automated testing tools, this is known as
Automation Testing. The testing process is done by using special automation tools to control the
execution of test cases and compare the actual result with the expected result. Automation testing
requires a considerable investment of resources and money.
Generally, repetitive actions are tested in automated testing such as regression tests. The testing
tools used in automation testing are used not only for regression testing but also for automated GUI
interaction, data set up generation, defect logging, and product installation.
The goal of automation testing is to reduce the number of test cases to be run manually, not to
eliminate manual testing altogether. Test suites can be recorded by using the automation tools,
and the tester can replay these suites as per the requirement. Automated testing suites do not
require any human intervention.
o A tester can test the response of the software when the execution of the same operation is
repeated several times.
o Automation Testing provides re-usability of test cases for testing different versions of the
same software.
o Automation testing is reliable, as it eliminates hidden errors by executing test cases again in
the same way.
o Automation Testing is comprehensive, as test cases can cover each and every feature of the
application.
o It does not require many human resources; instead of people writing test cases and testing
them manually, an automation testing engineer is needed to run them.
o The cost of automation testing is less than that of manual testing because it requires only a
few human resources.
o Debugging is mandatory; if a subtle error goes unresolved, it can lead to fatal results.
The term 'white box' is used because of the internal perspective of the system. The names clear
box, white box, and transparent box denote the ability to see through the software's outer shell
into its inner workings.
Test cases for white box testing are derived from the design phase of the software development
lifecycle. Data flow testing, control flow testing, path testing, branch testing, and statement and
decision coverage are all techniques used in white box testing as guidelines to create error-free
software.
White box testing follows some working steps to make testing manageable and to make it easy
to understand what the next task is. There are some basic steps to perform white box testing.
o This step involves the study of the code at runtime to examine the resource utilization, the
areas of the code that are not accessed, the time taken by various methods and operations, and
so on.
o In this step, testing of internal subroutines takes place, checking whether internal subroutines
such as non-public methods and interfaces are able to handle all types of data appropriately or
not.
o This step focuses on testing control statements like loops and conditional statements to
check their efficiency and accuracy for different data inputs.
o In the last step, white box testing includes security testing to check all possible security
loopholes by looking at how the code handles security.
o This testing is more thorough than other testing approaches, as it covers all code paths.
o It can be started early in the SDLC, even without a GUI.
o White box testing needs professional programmers who have detailed knowledge and
understanding of the programming language and the implementation.
In this method, the tester selects a function, gives it input values to examine its functionality, and
checks whether the function is giving the expected output or not. If the function produces correct
output, it passes testing; otherwise, it fails. The test team reports the result to the development
team and then tests the next function. After completing the testing of all functions, if severe
problems remain, the software is given back to the development team for correction.
o In the second step, the tester creates a positive test scenario and an adverse test scenario
by selecting valid and invalid input values to check that the software is processing them cor-
rectly or incorrectly.
o In the third step, the tester develops various test cases such as decision table, all pairs test,
equivalent division, error estimation, cause-effect graph, etc.
o The fourth phase includes the execution of all test cases.
o In the fifth step, the tester compares the expected output with the actual output.
o In the sixth and final step, if there is any flaw in the software, then it is corrected and tested
again.
Test procedure
The test procedure of black box testing is a kind of process in which the tester knows what the
software is supposed to do but not how it does it, and develops test cases to check the accuracy
of the software's functionality.
It does not require programming knowledge of the software. All test cases are designed by
considering the input and output of a particular function. A tester knows about the definite output
of a particular input, but not about how the result is arising. There are various techniques used in
black box testing for testing like decision table technique, boundary value analysis technique, state
transition, All-pair testing, cause-effect graph technique, equivalence partitioning technique, error
guessing technique, use case technique and user story technique. All these techniques have been
explained in detail within the tutorial.
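For instance, if an input field accepts ages from 18 to 60, equivalence partitioning yields three
classes (below 18, 18 to 60, and above 60), and one representative value from each class is
tested, while boundary value analysis tests the values at and around the edges: 17, 18, 19 and
59, 60, 61.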
Test cases
Test cases are created considering the specification of the requirements. These test cases are
generally created from working descriptions of the software, including requirements, design
parameters, and other specifications. For testing, the test designer selects both positive test
scenarios, by taking valid input values, and adverse test scenarios, by taking invalid input values,
to determine the correct output. Test cases are mainly designed for functional testing but can
also be used for non-functional testing. Test cases are designed by the testing team; there is no
involvement of the development team of the software.
EasyLeave
1 OBJECTIVE: This project is aimed at developing a web-based Leave Management Tool,
which is of importance to either an organization or a college. Easy Leave is an Intranet-based
application that can be accessed throughout the organization or a specified group/department.
This system can be used to automate the workflow of leave applications and their approvals.
The periodic crediting of leave is also automated. There are features like notifications,
cancellation of leave, automatic approval of leave, report generators, etc., in this tool.
Functional components of the project: There are registered people in the system. Some are
leave approvers. An approver can also be a requestor. In an organization, the hierarchy could
be Engineers/Managers/Business Managers/Managing Director etc. In a college, it could be
Lecturer/Professor/Head of the Department/Dean/Principal etc.
RESOURCE :-
List of Module:
Objective:
Functional Requirements: