Software Engineering
Unit 1
Introduction to Software Engineering
Importance of Software Engineering as a Discipline
Software Applications
Software Crisis
Software Processes & Characteristics
Waterfall Model:
Prototype and Prototyping Model:
Evolutionary Model:
Spiral Model
Software Requirements Analysis & Specifications:
Requirements Engineering:
Functional and Non-Functional Requirements:
User Requirements:
System Requirements:
Requirement Elicitation Techniques: FAST, QFD, and Use Case Approach
Data Flow Diagrams
Levels in Data Flow Diagrams (DFD)
Requirements Analysis Using Data Flow Diagrams (DFD):
Data Dictionary
Components of Data Dictionary:
Data Dictionary Notations tables :
Features of Data Dictionary :
Uses of Data Dictionary :
Importance of Data Dictionary:
Entity-Relationship (ER) Diagrams:
Requirements Documentation:
Software Requirements Specification (SRS) - Nature and Characteristics:
Requirement Management:
IEEE Std 830-1998 - Recommended Practice for Software Requirements
Specifications:
Unit 2
Software Project Planning:
Project size estimation
Size Estimation in Software Development: Lines of Code and Function Count:
Cost Estimation Models in Software Development:
COCOMO (Constructive Cost Model):
Putnam Resource Allocation Model
Validating Software Estimates:
Risk Management in Software Development:
Software Design: Cohesion and Coupling
Function-Oriented Design in Software Engineering:
Object-Oriented Design (OOD) in Software Engineering:
User Interface Design
Unit 1
Introduction to Software Engineering
Software Engineering is a systematic approach to the design, development,
maintenance, and documentation of software. It encompasses a set of methods,
tools, and processes to create high-quality software efficiently.
Key Concepts:
5. Implementation: Writing code and building the software based on the design
specifications. It involves programming, coding, and unit testing.
3. Quality Assurance: Ensuring that software is of high quality and free of defects
is a continuous challenge.
Various tools and techniques are used in software engineering, such as version
control systems (e.g., Git), integrated development environments (IDEs),
modeling tools, and project management software.
Career Opportunities:
2. Quality Assurance:
Disciplined engineering practices help in rectifying defects, reducing errors, and ensuring reliability.
3. Cost-Efficiency:
4. Predictable Timelines:
6. Risk Management:
7. Reusability:
8. Scalability:
9. Documentation:
Software engineering adheres to industry standards and best practices. This
consistency across the discipline fosters a common understanding of how to
develop and maintain software systems, promoting professionalism and quality.
12. Innovation:
Software Applications
Definition:
1. Desktop Applications:
2. Mobile Applications:
These applications are developed for smartphones and tablets. They can be categorized into two major platforms:
iOS Apps: Designed for Apple devices like iPhones and iPads.
3. Web Applications:
These are accessed through web browsers and run on remote servers.
Users can interact with web applications through a web page. Examples
include email services (e.g., Gmail), social media platforms (e.g., Facebook),
and online shopping websites (e.g., Amazon).
4. Enterprise Applications:
These are software solutions designed for business and organizational use.
Enterprise applications often include Customer Relationship Management
(CRM) software, Enterprise Resource Planning (ERP) systems, and project
management tools.
5. Gaming Applications:
6. Utility Applications:
1. User Interface:
Most applications have a graphical user interface (GUI) that allows users to interact with the software.
2. Functionality:
3. Platform Compatibility:
4. Connectivity:
5. Data Storage:
Applications may store data locally on the device or in remote servers,
depending on their design and purpose.
Development Process:
App Stores:
Many applications are distributed through app stores specific to their platforms
(e.g., Apple App Store, Google Play Store, Microsoft Store). These platforms
provide a centralized marketplace for users to discover and download apps.
Monetization:
Security:
Software applications have become an integral part of daily life, serving diverse
purposes from productivity and communication to entertainment and business
operations. Their development and continuous improvement contribute significantly
to the digital world's evolution and functionality.
Software Crisis
Definition:
The software crisis refers to a period in the early history of software development
when the industry faced significant challenges and difficulties in producing
software that met the desired quality, cost, and delivery targets. It was a time
when software projects often ran over budget, exceeded timelines, and resulted
in systems that were error-prone and unreliable.
4. Quality Issues: Software systems produced during this period often had
numerous defects, making them unreliable and requiring frequent updates and
maintenance.
Solutions to the Software Crisis:
1. Definition:
A software process is a structured set of activities and methods used to develop and maintain software. Common process models include the Waterfall Model, Prototyping Model, Spiral Model, and Iterative models. Each SDLC model prescribes a series of phases and activities to guide the development process.
1. Systematic Approach:
2. Repeatability:
3. Quality Assurance:
4. Project Management:
5. Flexibility:
While processes provide structure, they can be adapted to fit the needs of
different projects. Agile methodologies, for example, prioritize flexibility and
adaptability in response to changing requirements.
6. Documentation:
Software processes emphasize the importance of documentation. This
includes requirement documents, design specifications, code documentation,
and test plans to ensure that the project's progress and outcomes are well-
documented.
7. Risk Management:
8. Iterative Improvement:
9. Communication:
Waterfall Model:
Description:
The Waterfall Model is a traditional and linear software development life cycle
model. It is often considered a classic approach, where the project is divided into
distinct phases, and each phase must be completed before the next one begins.
It follows a sequential, top-down flow where the output of one phase becomes
the input for the next.
Phases:
1. Requirements Gathering: This is the initial phase where the project's
requirements are gathered, documented, and analyzed. It involves interactions
with stakeholders to understand their needs.
2. System Design: In this phase, the system architecture is designed based on the
gathered requirements. This includes defining system components, their
relationships, and a high-level design.
3. Implementation: The actual coding and development of the software take place
in this phase. Programmers write code according to the system design
specifications.
Characteristics:
Sequential: The phases in the Waterfall Model proceed sequentially, and each
phase depends on the deliverables of the previous one.
Advantages:
Suitable for projects with stable, well-defined requirements.
Disadvantages:
Testing and user feedback often occur late in the project, which may lead to
costly defects.
Prototype and Prototyping Model:
A prototype is a working model of a software system or a part of it. It is created to
provide a tangible representation of the software's functionality and features
before the final system is developed. Prototypes can be of various types,
including:
Prototyping Model:
Phases:
5. Testing: The final software is tested to ensure quality and compliance with the
requirements.
Characteristics:
Iterative: Prototyping involves multiple iterations, allowing for refinements and
improvements based on user feedback.
Risk Management: It reduces the risk of delivering a final product that doesn't
meet user needs.
Advantages of Prototyping:
Disadvantages of Prototyping:
Evolutionary Model:
Description:
The Evolutionary Model develops software incrementally, making it well suited to changing requirements and dynamic environments. The model allows for the early delivery of a basic working system and then iteratively enhances the software.
Phases:
Characteristics:
Adaptable: Suited for projects with changing requirements and high uncertainty.
Early Delivery: The model allows for the early delivery of a basic working
system, which can provide value to users.
Reduced Risk: Iterative nature helps identify issues early and allows for course
corrections.
Disadvantages:
Potential Scope Creep: Iterative enhancements may lead to expanding the
project scope beyond the original plan.
Spiral Model
The Spiral Model is a software development and project management approach that
combines iterative development with elements of the Waterfall model. It was first
introduced by Barry Boehm in 1986 and is especially suitable for large and complex
projects. The Spiral Model is characterized by a series of cycles, or "spirals," each of
which represents a phase in the software development process. Here are the key
components and principles of the Spiral Model:
1. Phases:
The Spiral Model divides the software development process into several
phases, each of which represents a complete cycle of the model. The typical
phases include Planning, Risk Analysis, Engineering (or Development), and
Evaluation (or Testing).
3. Risk Analysis:
The Risk Analysis phase is a unique feature of the Spiral Model. It involves
identifying and assessing project risks, such as technical, schedule, and cost
risks. The goal is to make informed decisions about whether to proceed with
the project based on risk analysis.
4. Prototyping:
5. Flexibility:
6. Customer Involvement:
7. Documentation:
8. Monitoring and Control:
The project is continually monitored, and control mechanisms are in place to
manage risks and resources. This ensures that the project remains on track
and aligned with its goals.
Advantages:
Effective for large and complex projects where risks and uncertainties are high.
Disadvantages:
The potential for project scope creep or endless iteration if not properly controlled.
The Spiral Model is a robust approach for projects that require risk management,
flexibility, and a focus on iterative development. It is particularly useful in domains
where requirements are complex, evolving, or not well-understood initially. However,
it does require a disciplined approach to risk assessment and management to be
effective.
Software Requirements Analysis & Specifications:
1. Software Requirements Analysis:
Definition:
Key Activities:
Conflicts, ambiguities, and contradictions in the gathered requirements are addressed during this phase.
Challenges:
2. Software Specifications:
Definition:
Key Components:
3. User Interface (UI) Specifications: For software with a graphical user interface,
these specifications outline the layout, design, and behavior of the user interface
elements.
Importance:
Clear and detailed specifications serve as a common reference point for all
stakeholders, including designers, developers, testers, and users. They help
ensure that the software is built as per the requirements and can be tested
effectively.
Documentation Standards:
Tools:
Traceability:
Requirements Engineering:
Definition:
2. Documentation: Capturing and recording requirements in a structured and
comprehensible format. Requirement documents can take the form of textual
descriptions, diagrams, or use cases.
5. Validation: Validating requirements to ensure that they align with the project's
goals, are achievable within budget and time constraints, and meet the needs of
stakeholders.
Ambiguity and Incompleteness: Requirements are often stated vaguely or
may not cover all necessary aspects.
Managing Scope: Defining the project's scope and ensuring it does not expand
beyond the original intent can be complex.
Traceability:
Functional and Non-Functional Requirements:
1. Functional Requirements:
What the System Does: Functional requirements describe what the system
does in response to specific inputs or under certain conditions.
Specific and Testable: They are typically specific, well-defined, and testable,
allowing for validation and verification.
User-Centric: Often, functional requirements focus on user interactions and
system behavior from the user's perspective.
Interactions and Use Cases: They often include use cases, scenarios, and user
stories that describe how the system functions in real-world situations.
2. Non-Functional Requirements:
Definition:
Qualities and Constraints: They define the qualities or constraints that the
software must adhere to, such as response times, data storage, and security
measures.
2. Reliability: Availability, fault tolerance, and error handling, e.g., the system
should have 99.9% uptime.
3. Usability: User interface design, accessibility, and user satisfaction, e.g., the
system should be intuitive for novice users.
4. Security: Authentication, authorization, and data protection, e.g., user
passwords must be stored securely.
5. Scalability: The system's ability to handle increased load or data, e.g., the
system should scale to accommodate ten times the current user base.
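Non-functional targets like these are only useful when they translate into measurable figures. As a small illustration (a sketch, not tied to any particular system), the 99.9% uptime target above can be converted into an allowed annual downtime budget:

```python
# Sketch: turning a measurable non-functional requirement
# ("99.9% uptime") into a concrete, testable downtime budget.

HOURS_PER_YEAR = 365 * 24  # 8760 hours

def allowed_downtime_hours(uptime_percent: float) -> float:
    """Maximum downtime per year permitted by an uptime target."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

# 99.9% uptime allows roughly 8.76 hours of downtime per year.
print(round(allowed_downtime_hours(99.9), 2))
```

A figure like 8.76 hours per year gives testers and operations staff a criterion they can actually verify, unlike a vague "the system should be reliable".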
Importance:
Both functional and non-functional requirements are vital in ensuring that software
meets the needs and expectations of users, performs well, and complies with quality
and performance standards. Balancing and satisfying both types of requirements is
crucial for successful software development and user satisfaction.
User Requirements:
User requirements, also known as user needs or user stories, are a critical
component of software development. These requirements describe what the users,
stakeholders, or customers expect from a software system. User requirements are
typically expressed in non-technical language to ensure clear communication
between developers and end-users.
Key Characteristics of User Requirements:
4. User Stories: User requirements are often framed as user stories, which are
short, narrative descriptions that explain a specific user's need and the expected
outcome. User stories typically follow the "As a [user], I want [feature] so that
[benefit]" format.
4. As a mobile app user, I expect the application to load within two seconds and
respond quickly to my interactions to provide a smooth and responsive user
experience.
5. As a healthcare provider, I require the system to comply with all relevant data
security and privacy regulations to safeguard patient information.
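The "As a [user], I want [feature] so that [benefit]" template above can be captured directly in code. A minimal sketch (the class and field names here are hypothetical, chosen only for illustration):

```python
# Hypothetical sketch: a user story in the standard
# "As a [user], I want [feature] so that [benefit]" format.
from dataclasses import dataclass

@dataclass
class UserStory:
    user: str
    feature: str
    benefit: str

    def __str__(self) -> str:
        return f"As a {self.user}, I want {self.feature} so that {self.benefit}."

story = UserStory(
    user="mobile app user",
    feature="the application to load within two seconds",
    benefit="I get a smooth and responsive experience",
)
print(story)
```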
Validation: User requirements serve as a basis for validating the final product to
ensure it meets user expectations.
Reducing Development Risk: A clear understanding of user needs helps
mitigate the risk of building features that users do not value or neglecting
essential functionality.
System Requirements:
System requirements, also known as technical requirements or software
requirements specifications, describe the technical and operational characteristics
that a software system must possess to meet the user requirements and function
effectively. These requirements provide guidance to the development and testing
teams on how to design, build, and maintain the software.
Key Characteristics of System Requirements:
6. Integration: They specify how the software will integrate with other systems or
components, if applicable.
2. The software shall be built using Java and utilize the Spring framework for web
development.
4. The system should support a minimum of 500 concurrent users without a
significant degradation in performance.
5. Data backup must be performed every day at midnight and stored securely for a
minimum of one year.
6. The software should integrate with the company's single sign-on (SSO) system
for user authentication.
Documentation: They provide a clear reference for the development and testing
teams and are essential for future maintenance and updates.
Requirement Elicitation Techniques:
Requirement elicitation is the process of gathering requirements from stakeholders. Let us discuss three widely used techniques in detail: Function Analysis System Technique (FAST), Quality Function Deployment (QFD), and the Use Case Approach.
1. Function Analysis System Technique (FAST):
Definition:
Use Cases:
2. Create the House of Quality: This is a visual matrix that correlates customer
needs with specific product features, indicating the strength and nature of the
relationship.
Use Cases:
The Use Case Approach is a technique used to capture and describe the
interactions between an actor (usually a user) and a software system. Use cases
provide a clear understanding of system functionality from a user's perspective.
1. Identify Actors: Identify the different actors or users who interact with the
software system. Actors can be individuals, other systems, or entities.
2. Define Use Cases: Describe specific use cases, which are scenarios of
interactions between actors and the system. Each use case represents a
discrete piece of functionality.
3. Use Case Diagrams: Create use case diagrams to visualize the relationships
between actors and use cases.
4. Detail Scenarios: Write detailed descriptions of each use case, including the
steps involved, preconditions, postconditions, and any exceptions.
5. Validate and Refine: Use cases are reviewed and refined to ensure they
accurately represent user needs and system functionality.
Use Cases:
The Use Case Approach is a standard technique in software requirements
engineering. It provides a user-centric view of system functionality, making it a
valuable tool for software development teams.
Data Flow Diagrams
Some guidelines for constructing DFDs:
1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flowchart. Arrows in a flowchart represent the order of events; arrows in a DFD represent flowing data. A DFD does not imply any order of events.
4. Do not become bogged down with details. Defer error conditions and error
handling until the end of the analysis.
Standard symbols for DFDs are derived from electric circuit diagram notation and are shown in fig:
Circle: A circle (bubble) shows a process that transforms data inputs into data
outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data
store.
Data Store: A set of parallel lines shows a place for the collection of data items. A
data store indicates that the data is stored which can be used at a later stage or by
the other processes in a different order. The data store can have an element or group
of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system
inputs or sink of system outputs.
Levels in Data Flow Diagrams (DFD)
DFDs can represent a system at different levels of abstraction. We see primarily three levels in the data flow diagram, which are: 0-level DFD, 1-level DFD, and 2-level DFD.
0-level DFD
It is also known as the fundamental system model or context diagram. It represents the entire software requirement as a single bubble, with input and output data denoted by incoming and outgoing arrows. The system is then decomposed and described as a DFD with multiple bubbles. Parts of the system represented by each of these bubbles are then decomposed and documented as more and more detailed DFDs. This process may be repeated at as many levels as necessary until the program at hand is well understood. It is essential to preserve the number of inputs and outputs between levels; this concept is called leveling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one output y, then the expanded DFD that represents "A" should have exactly two external inputs and one external output, as shown in fig:
The Level-0 DFD, also called context diagram of the result management system is
shown in fig. As the bubbles are decomposed into less and less abstract bubbles, the
corresponding data flow may also be needed to be decomposed.
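The leveling rule above can be checked mechanically: the external inputs and outputs of the expanded DFD must match the data flows into and out of the parent bubble. A small sketch (the function name is hypothetical):

```python
# Hypothetical sketch: checking DeMarco's leveling (balancing) rule —
# the external inputs/outputs of an expanded DFD must match the
# data flows into and out of the parent bubble.

def is_balanced(parent_inputs, parent_outputs, expanded_inputs, expanded_outputs):
    return (set(parent_inputs) == set(expanded_inputs)
            and set(parent_outputs) == set(expanded_outputs))

# Bubble "A" has two inputs x1, x2 and one output y, as in the text.
print(is_balanced({"x1", "x2"}, {"y"}, {"x1", "x2"}, {"y"}))  # balanced
print(is_balanced({"x1", "x2"}, {"y"}, {"x1"}, {"y"}))        # input x2 lost
```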
1-level DFD
In 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this level, we highlight the main objectives of the system and break down the high-level process of the 0-level DFD into subprocesses.
2-Level DFD
2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan or record specific, necessary detail about the system's functioning.
Requirements Analysis Using Data Flow Diagrams
(DFD):
Data Flow Diagrams (DFDs) are a visual modeling technique used in software
engineering to represent the flow of data and processes within a system. They are
also a valuable tool for analyzing system requirements. Here's how you can perform
requirements analysis using DFDs:
1. Identify Key Stakeholders:
Before you start with DFDs, identify the key stakeholders who will be involved in
the requirements analysis. This typically includes end-users, business analysts,
and subject matter experts.
2. Gather Initial Requirements:
Begin by gathering the initial set of high-level requirements. These can be in the form of user stories, business use cases, or textual descriptions of what the system should do.
3. Create a Context Diagram:
The first step in using DFDs is to create a context diagram. This diagram shows the system as a single process or entity and its interactions with external entities (e.g., users, other systems, data sources). This provides an overview of the system's boundaries and external interfaces.
4. Decompose the System into Processes:
Once you have the context diagram, you can start decomposing the system into more detailed processes. Each process represents a specific function or task within the system.
5. Identify Data Flows:
As you decompose processes, identify the data flows between them. Data flows represent the transfer of data from one process to another. This helps in understanding how data is shared and processed within the system.
6. Identify Data Stores:
Data stores are repositories where data is stored within the system. Identify the data stores and their relationships with processes and data flows.
7. Describe Data Transformations:
For each process, describe the data transformations that occur. What happens to the data as it moves from input to output within a process? This helps in understanding how data is processed or transformed.
8. Analyze Process Logic:
For each process, analyze the logic and rules governing it. What conditions trigger the process? What are the expected outcomes? This analysis helps in capturing detailed process requirements.
9. Identify Constraints and Rules:
DFDs can also capture constraints and business rules that apply to the system.
These can include validation rules, security requirements, and any other specific
constraints.
Data Dictionary
A Data Dictionary is a major component in the structured analysis model of the system. It lists all the data items appearing in a DFD. A data dictionary in software engineering is a file or a set of files that includes a database's metadata (records about other objects in the database), such as data ownership, relationships of the data to other objects, and other descriptive data.
Example of a data dictionary entry: GrossPay = regular pay + overtime pay
CASE tools are used to maintain the data dictionary, as they capture the data items appearing in a DFD automatically to generate the data dictionary.
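The GrossPay entry above can be modeled as a lookup table that expands a composite item into its elementary components. A minimal sketch (names are hypothetical):

```python
# Minimal sketch of a data dictionary: each entry defines a composite
# data item in terms of its components (GrossPay = regular pay + overtime pay,
# where "=" means "composed of" and "+" means "and").

data_dictionary = {
    "GrossPay": ["RegularPay", "OvertimePay"],
}

def resolve(item, dictionary):
    """Expand a data item into its elementary components."""
    if item not in dictionary:
        return [item]  # elementary item, not defined in terms of others
    parts = []
    for component in dictionary[item]:
        parts.extend(resolve(component, dictionary))
    return parts

print(resolve("GrossPay", data_dictionary))
```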
Aliases: Alternative names used for a data item.
Data Dictionary Notations:
Notation  Meaning
=         is composed of
+         and
[ | ]     either/or (select one alternative)
{ }       iteration (repetition)
( )       optional
* *       comment
Features of Data Dictionary:
It is important for creating an ordered list of a subset of the data items.
It is important for creating an ordered list of the complete set of data items.
It helps in finding a specific data item from the list.
It provides developers with standard terminology for all data.
Data Quality: A data dictionary can help improve data quality by providing a
single source of truth for data definitions, allowing users to easily verify the
accuracy and completeness of data.
Data Security: A data dictionary records data ownership and access rules, helping ensure that unauthorized users do not access or modify the data.
Types of Relationships:
Many-to-One (N:1): Many instances in Entity A are associated with one instance
in Entity B.
Benefits of ER Diagrams:
Normalization: Supports the process of database normalization to reduce data
redundancy and improve data integrity.
Requirements Documentation:
Requirements documentation is a critical aspect of the software development
process. It involves capturing, organizing, and presenting detailed information about
the software's functional and non-functional requirements, as well as any additional
information necessary for understanding and implementing the project. Effective
requirements documentation is essential for ensuring that the software meets user
expectations, aligns with stakeholder needs, and serves as a reference throughout
the development and testing phases.
Key Elements of Requirements Documentation:
1. Introduction:
2. Scope Statement:
The scope statement defines the boundaries of the project and specifies
what is included and excluded. It helps in managing project expectations.
3. Functional Requirements:
This section outlines the specific functions, features, and capabilities that the
software must provide. It includes detailed descriptions of how the system
should behave in response to various inputs or under specific conditions.
4. Non-Functional Requirements:
Non-functional requirements describe the quality attributes of the software,
such as performance, security, usability, scalability, and reliability. They
specify how the system should perform rather than what it should do.
5. User Requirements:
6. System Requirements:
7. Constraints and Assumptions:
Constraints and assumptions refer to factors that limit or impact the project. This section outlines any restrictions or assumptions made during the requirement-gathering process.
9. Data Models:
11. Dependencies:
12. Verification and Validation:
This section outlines the methods and criteria used to verify and validate the
requirements, ensuring that they are complete, accurate, and testable.
Software Requirements Specification (SRS) - Nature and Characteristics:
A Software Requirements Specification (SRS) is a comprehensive description of the software to be developed. It ensures that stakeholders and developers have a clear understanding of what is expected.
An SRS includes both functional requirements (what the system should do)
and non-functional requirements (how the system should perform). This
encompasses features, user interactions, performance criteria, security
requirements, and more.
The SRS uses clear and concise language to ensure that there is no room
for misinterpretation. Ambiguities and contradictions are eliminated during
the development of the document.
4. User-Centric:
The SRS focuses on meeting the needs and expectations of end-users and
stakeholders. It ensures that the software serves its intended purpose
effectively.
5. Traceability:
6. Structured Format:
7. Feasibility Analysis:
The SRS often includes a feasibility analysis that examines whether the
project can be realistically completed within budget and time constraints.
This analysis may assess technical, operational, and economic feasibility.
Verification and validation criteria are specified to ensure that each
requirement is testable and that there is a method to verify that it has been
met.
In some industries, the SRS may include information related to legal and
regulatory compliance to ensure that the software adheres to relevant
standards and guidelines.
The SRS may evolve throughout the project as requirements change or new
insights are gained. It is important to maintain version control and document
changes carefully.
The SRS aligns with the overarching goals and objectives of the project,
ensuring that the software supports the business or organizational strategy.
The SRS serves as the basis for project planning, including resource
allocation, scheduling, and budgeting.
Ensuring the SRS is accurate and complete is essential for maintaining the
quality and reliability of the final software product.
Requirement Management:
Requirement management is a critical process in software development and project
management. It involves the systematic and structured handling of requirements
throughout the project lifecycle. Effective requirement management ensures that
requirements are captured, documented, tracked, and maintained to meet the needs
and expectations of stakeholders and deliver a successful project. Here are the key
aspects and practices of requirement management:
1. Requirement Elicitation:
2. Requirement Analysis:
3. Requirement Documentation:
4. Requirement Prioritization:
5. Requirement Traceability:
Traceability ensures that each requirement is linked to its source and that there
is a mechanism to track changes and updates throughout the project.
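A traceability record can be as simple as a table mapping each requirement to its source and to the test cases that verify it. A hypothetical sketch (all identifiers are made up for illustration):

```python
# Hypothetical sketch of a requirement traceability matrix: each
# requirement is linked to its source and to verifying test cases.

traceability = {
    "REQ-001": {"source": "Stakeholder interview", "tests": ["TC-01", "TC-02"]},
    "REQ-002": {"source": "Use case UC-3", "tests": []},
}

# Requirements with no linked test case cannot yet be verified.
untested = [req for req, links in traceability.items() if not links["tests"]]
print(untested)
```

Scanning the matrix for unlinked requirements is one concrete way traceability "tracks changes and updates" in practice.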
6. Change Control:
Projects are dynamic, and requirements may change. A formal change control
process is established to evaluate and manage requested changes, assessing
their impact on the project scope, schedule, and budget.
7. Version Control:
Version control maintains a history of requirement documents and their revisions, preventing confusion and errors.
8. Requirement Validation:
Requirements are validated to ensure that they are accurate, complete, and
aligned with stakeholder needs. Validation often involves reviews, inspections,
and walkthroughs.
9. Requirement Verification:
Requirements are verified to confirm that the delivered software satisfies them. Requirement documents are maintained and updated as necessary to accommodate changes in the software or evolving stakeholder needs.
Requirement management is an ongoing process that ensures that a project stays on
track, delivers what stakeholders expect, and manages change effectively. It is an
integral part of project management, quality assurance, and the software
development lifecycle.
IEEE Std 830-1998 - Recommended Practice for Software Requirements Specifications:
The IEEE (Institute of Electrical and Electronics Engineers) has developed standards
for various aspects of software engineering, including the Software Requirements
Specification (SRS). The standard that pertains to SRS is IEEE Std 830-1998, titled
"IEEE Recommended Practice for Software Requirements Specifications."
2. Structure and Format:
IEEE Std 830-1998 outlines a specific format and structure for SRS documents,
including sections and subsections that should be included in the document. This
structured approach helps ensure consistency and completeness.
3. Content Guidelines:
IEEE Std 830-1998 recommends a clear and unambiguous language and style
to avoid misunderstandings and ambiguities in the requirements.
5. Requirements Attributes:
The standard suggests using attributes for each requirement to provide
additional information, such as the source of the requirement, its priority, and its
verification method.
6. Traceability:
7. Appendices:
The standard allows for the inclusion of appendices, which can provide
supplementary information, such as data dictionaries, use case descriptions, and
diagrams.
8. Review and Verification:
IEEE Std 830-1998 recommends that the SRS undergo reviews and verification
to ensure its accuracy and completeness.
9. Change Control:
10. Examples:
The standard provides examples and templates to help illustrate how to structure and format an SRS document effectively.
11. References:
IEEE Std 830-1998 may reference other IEEE standards and guidelines that are relevant to software requirements engineering.
It's important to note that standards may evolve over time, and there may be more
recent versions or related standards that update or complement IEEE Std 830-1998.
Therefore, it's advisable to check for the latest version of the standard and any
supplementary standards that may provide additional guidance on SRS creation and
management.
Following the IEEE Std 830-1998 or other relevant IEEE standards can help software
development teams create well-structured and comprehensive Software
Requirements Specifications, contributing to successful project outcomes and
effective communication with stakeholders.
Unit 2
Software Project Planning:
Software project planning is the process of defining the scope, objectives, and
approach for a software development project. It involves the creation of a detailed
plan that outlines the project's tasks, timelines, resource allocation, and budget.
Effective project planning is crucial for delivering software projects on time, within
budget, and meeting stakeholder expectations. Here are the key aspects and steps
involved in software project planning:
1. Project Initiation:
Define the project's purpose, objectives, and scope. Identify the key
stakeholders, project team members, and their roles. Determine the feasibility of
the project and its alignment with organizational goals.
2. Requirements Analysis:
Gather and analyze the project requirements, including functional and non-
functional requirements. Ensure a clear understanding of what the software
should achieve and the needs of the end-users.
3. Scope Definition:
Clearly define the scope of the project, specifying what is included and excluded.
This helps in managing expectations and avoiding scope creep.
5. Task Estimation:
Estimate the effort, time, and resources required for each task or activity. Use
estimation techniques like expert judgment, historical data, and parametric
modeling.
6. Resource Allocation:
Assign team members, tools, and budget to the planned tasks, matching skills
and availability to task requirements.
7. Project Scheduling:
Develop a project schedule that includes task sequences, dependencies, and
durations. Use project management software to create Gantt charts or other
scheduling tools.
8. Risk Management:
Identify potential risks that may impact the project's success. Develop a risk
management plan to mitigate and manage these risks.
9. Quality Planning:
Define the quality standards and processes that will be followed throughout the
project to ensure the software meets quality requirements.
Effective software project planning is an iterative process, with adjustments and
refinements made as the project progresses. It's essential to maintain clear
communication and coordination among team members and stakeholders and to
adapt the plan as necessary to achieve project success. Project managers and
teams often use project management methodologies and tools to facilitate the
planning and execution of software projects.
Project size estimation
Project size estimation predicts how much effort, time, and resources
will be needed to build the project. Various measures are used in project size
estimation. Some of these are:
Lines of Code
Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of
lines of source code in a project. The units of LOC are:
LOC (a single line of code)
KLOC (thousands of lines of code)
The size is estimated by comparing it with the existing systems of the same kind. The
experts use it to predict the required size of various components of software and then
add them to get the total size.
It’s tough to estimate LOC by analyzing the problem definition. Only after the whole
code has been developed can accurate LOC be estimated. This statistic is of little
utility to project managers because project planning must be completed before
development activity can begin.
Two separate source files having a similar number of lines may not require the same
effort. A file with complicated logic would take longer to create than one with simple
logic. Proper estimation may not be attainable based on LOC.
LOC also varies greatly from one programmer to the next. A seasoned
programmer can write the same logic in fewer lines than a novice coder, so the
metric says little about the effort actually involved.
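To make the counting rules concrete, here is a minimal sketch of a LOC counter. The convention of skipping blank and comment-only lines is one common choice, not a universal standard, and the function name is illustrative:

```python
def count_loc(source: str) -> int:
    """Count non-blank, non-comment lines in Python source text."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

code = """
# compute a factorial
def fact(n):
    if n <= 1:
        return 1
    return n * fact(n - 1)
"""
print(count_loc(code))  # → 4
```

Note that a different convention (for example, counting logical statements rather than physical lines) would give a different number for the same file, which is exactly the normalization problem described above.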
Advantages:
Simple to use.
Disadvantages:
It is difficult to estimate the size using this technique in the early stages of the
project.
When platforms and languages are different, LOC cannot be used to normalize.
2. Number of Entities in ER Diagram: In this technique, the size of the project is
estimated from the number of entities in the system's entity-relationship diagram.
Disadvantages:
No fixed standards exist. Some entities contribute more to project size than
others.
Just like FPA, it is less used in the cost estimation model. Hence, it must be
converted to LOC.
3. Total Number of Processes in Detailed DFD: In this technique, the size is
estimated by counting the processes in the detailed data flow diagram.
Advantages:
Each major process can be decomposed into smaller processes. This will
increase the accuracy of the estimation.
Disadvantages:
Studying similar kinds of processes to estimate size takes additional time and
effort.
The construction of a DFD is not required for all software projects.
4. Function Point Analysis: In this method, the number and type of functions
supported by the software are utilized to find FPC(function point count). The steps in
function point analysis are:
Count the number of functions of each proposed type: Find the number of
functions belonging to the following types:
External Inputs: Data entering the system from users or other applications
that changes the system's data.
External Outputs: Data leaving the system, such as reports and messages
sent to users.
External Inquiries: They lead to data retrieval from the system but don't
change the system.
Internal Files: Logical files maintained within the system. Log files are not
included here.
External Interface Files: These are logical files for other applications which
are used by our system.
Assign a weight to each function according to its complexity:

Function Type        Simple  Average  Complex
External Inputs         3       4        6
External Outputs        4       5        7
External Inquiries      3       4        6
Find Total Degree of Influence: Use the '14 general characteristics' of a system
to find the degree of influence of each of them. The sum of all 14 degrees of
influence will give the TDI. The range of TDI is 0 to 70. The 14 general
characteristics are: Data Communications, Distributed Data Processing,
Performance, Heavily Used Configuration, Transaction Rate, On-Line Data Entry,
End-User Efficiency, Online Update, Complex Processing, Reusability, Installation
Ease, Operational Ease, Multiple Sites, and Facilitate Change. Each of the above
characteristics is evaluated on a scale of 0-5.
Find the Function Point Count: Use the formula FPC = UFP × (0.65 + 0.01 × TDI),
where UFP is the unadjusted function point count obtained by summing the
weighted function counts.
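The calculation can be sketched in code. The weights below are the average-complexity values from the standard FPA tables; the function name and sample counts are illustrative:

```python
# Average-complexity weights from the standard FPA tables; the function
# name and the sample counts below are illustrative.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_point_count(counts: dict, tdi: int) -> float:
    """FPC = UFP * (0.65 + 0.01 * TDI), where 0 <= TDI <= 70."""
    ufp = sum(WEIGHTS[k] * n for k, n in counts.items())   # unadjusted FP
    vaf = 0.65 + 0.01 * tdi                                # 0.65 .. 1.35
    return ufp * vaf

counts = {"EI": 10, "EO": 5, "EQ": 4, "ILF": 3, "EIF": 2}
print(function_point_count(counts, tdi=35))  # UFP = 125, VAF = 1.0
```

With TDI = 35 the value adjustment factor is exactly 1.0, so the adjusted count equals the unadjusted count of 125.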
Advantages:
Disadvantages:
Many cost estimation models like COCOMO use LOC and hence FPC must be
converted to LOC.
1. Lines of Code (LOC):
Definition: Lines of code (LOC) is a metric that measures the number of lines or
statements in the source code of a software application. It's a simple and widely used
method for estimating the size of a software project.
Pros:
Useful for Cost Estimation: It can be used to estimate project costs and
schedule.
Cons:
Not Always Accurate: LOC may not accurately represent the complexity or
functionality of the software.
Dependent on Coding Style: The number of lines of code can vary based on
the coding style, making it subjective.
2. Function Points (FP):
Definition: Function points measure the size of a software system in terms of the
functionality it delivers to users, independent of the programming language.
Pros:
Effective for Comparisons: It's useful for comparing the complexity and size of
different software projects.
Cons:
Key Steps in Estimating Size with Function Points:
1. Identify User Inputs: Determine the types and quantity of user inputs (external
inputs, external outputs, and external inquiries).
2. Identify User Outputs: Identify the types and quantity of user outputs.
3. Identify User Inquiries: Determine the types and quantity of user inquiries.
4. Identify Logical Files: Determine the logical files used by the application.
5. Identify External Interface Files: Determine the files shared with or referenced
by other applications.
6. Calculate Function Points: Calculate the function points based on the identified
elements and their weights according to the Function Point Analysis guidelines.
Both LOC and FP have their merits and are often used in conjunction for more
accurate size estimation. The choice of which method to use may depend on project
characteristics and goals. FP is particularly useful for assessing the functionality of a
software system, while LOC is more closely related to coding effort and project
management. Accurate size estimation is crucial for effective project planning, cost
estimation, and resource allocation in software development.
Cost Estimation Models in Software Development:
Several models and techniques are used to estimate the cost of software projects:
Function Point Analysis estimates the size of a software project based on the
functionality it provides to users. The function points are then converted into
effort and cost estimates using established conversion factors.
3. Use Case Points (UCP):
UCP estimates effort from the number and complexity of the actors and use
cases in the system, adjusted by technical and environmental factors.
4. Estimation by Analogy:
This method involves comparing the current project with previous similar
projects and using historical data to estimate costs. It's based on the
assumption that past project experiences can be applied to the current
project.
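Estimation by analogy can be sketched as a simple scaling of a past project's effort by relative size; the function name and the manual adjustment factor are illustrative:

```python
def analogous_estimate(past_effort_pm: float, past_size_kloc: float,
                       new_size_kloc: float, adjustment: float = 1.0) -> float:
    """Scale a past project's effort by relative size; 'adjustment'
    accounts for known differences in team, domain, or tooling."""
    return past_effort_pm * (new_size_kloc / past_size_kloc) * adjustment

# A past 20-KLOC project took 60 person-months; the new one is 30 KLOC.
print(analogous_estimate(60, 20, 30))  # → 90.0 person-months
```

The adjustment factor is where historical data gets corrected for differences between the projects, which is the judgment step the text describes.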
6. Expert Judgment:
One or more experienced practitioners estimate the cost based on their
knowledge of similar projects.
7. Parametric Models:
Parametric models, such as COCOMO, compute cost from measurable
project parameters (e.g., size in KLOC) using mathematically derived
relationships.
8. Bottom-Up Estimation:
Each component or task is estimated individually, and the estimates are
summed to obtain the total project cost.
9. Top-Down Estimation:
Top-down estimation starts with an overall project estimate and then breaks
it down into smaller components. It's useful for high-level cost estimation
before detailed project planning.
10. Expert Estimation with Delphi Technique:
The Delphi Technique involves gathering estimates from experts and using a
systematic approach to achieve consensus on project cost estimates. It's
often used in situations where there is uncertainty or limited historical data.
The choice of a cost estimation model or technique depends on the project's nature,
available data, and the level of detail required. It's common to use multiple models
and compare their estimates to ensure accuracy. Additionally, ongoing monitoring
and refinement of cost estimates are essential as the project progresses and more
information becomes available.
COCOMO (Constructive Cost Model):
COCOMO, proposed by Barry Boehm, estimates effort, schedule, and cost from
project size. It exists in three increasingly detailed forms: Basic, Intermediate, and
Detailed.
1. Basic COCOMO:
Basic COCOMO gives a rough early estimate based on size and one of three
development modes:
Organic Mode: Suitable for relatively small and straightforward projects with
experienced developers. Effort is primarily based on LOC.
Semidetached Mode: An intermediate mode for projects of moderate size and
complexity, developed by teams with a mix of experience levels.
Embedded Mode: Suitable for large and complex projects, often involving real-
time or mission-critical systems. LOC, innovation, complexity, and other factors
are considered.
2. Intermediate COCOMO:
Intermediate COCOMO is a more detailed version of the model. It provides a
framework for estimating effort, project duration, and cost based on a range of cost
drivers and factors, including project attributes, product attributes, hardware
attributes, and personnel attributes. The formula for Intermediate COCOMO is:
Effort (E) = a * (KLOC)^b * EAF
Where:
KLOC is the estimated size of the software in thousands of lines of code.
a and b are constants determined by the development mode.
EAF (Effort Adjustment Factor) is the product of the ratings of the cost drivers.
Intermediate COCOMO allows for a more nuanced and tailored estimation, taking
into account the specific characteristics of the project. It considers factors such as
personnel capability, development flexibility, and the use of modern tools and
techniques.
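As an illustration, the Intermediate COCOMO formula can be computed as follows. The (a, b) coefficients are the values commonly tabulated for the three development modes; the sample inputs are made up:

```python
# (a, b) coefficients as commonly tabulated for Intermediate COCOMO;
# the mode names match the three development modes described above.
COEFFS = {"organic": (3.2, 1.05),
          "semidetached": (3.0, 1.12),
          "embedded": (2.8, 1.20)}

def cocomo_effort(kloc: float, mode: str, eaf: float = 1.0) -> float:
    """Effort in person-months: E = a * (KLOC)^b * EAF."""
    a, b = COEFFS[mode]
    return a * (kloc ** b) * eaf

# Illustrative 32-KLOC organic project with a slightly unfavorable EAF.
print(round(cocomo_effort(32, "organic", eaf=1.1), 1))
```

Because the exponent b grows with project complexity, an embedded project of the same size always costs more effort than an organic one, which matches the mode descriptions above.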
3. Detailed COCOMO:
Detailed COCOMO is the most comprehensive version of the model, and it offers a
highly detailed estimation process. It takes into account additional factors like
software reuse, documentation, and quality control. This version of COCOMO is
particularly suitable for very large and complex projects.
Key Advantages of COCOMO:
It offers a range of cost drivers and parameters for a more accurate estimation.
It relies heavily on lines of code (LOC) as a primary input, which may not
accurately capture the complexity and functionality of modern software.
COCOMO estimates are based on historical data, and the accuracy of the model
depends on the relevance of that data to the current project.
COCOMO remains a valuable tool for initial software project cost estimation and
serves as a foundation for more advanced models and techniques in the field of
software engineering. It can be particularly useful for comparing different project
scenarios and making informed decisions about project planning and resource
allocation.
Putnam Resource Allocation Model
The Putnam model estimates effort and schedule from project size and
productivity. Its key parameters are:
1. Project Size (S): The size of the software project is a critical factor in the
Putnam model. It is often measured in thousands of source lines of code (KLOC)
or function points (FP), depending on the context of the project.
2. Effort per Size (E/S): The Putnam model assumes that the effort required to
complete a project is proportional to its size. This factor represents the effort
required for each unit of project size (e.g., person-months per KLOC).
1. Effort (E): The effort required for the project is calculated as follows:
E=S/P
Where:
S is the project size.
P is the productivity.
2. Schedule (T): The project schedule is estimated by dividing the effort by the
number of available resources:
T=E/R
Where:
E is the effort.
R is the number of available resources.
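The simplified relationships above (E = S / P, T = E / R) can be sketched directly; size is assumed to be in KLOC and productivity in KLOC per person-month, and the function name is illustrative:

```python
def putnam_simple(size_kloc: float, productivity: float,
                  resources: int) -> tuple:
    """Simplified form from the notes: E = S / P, T = E / R.
    Productivity is assumed to be in KLOC per person-month."""
    effort = size_kloc / productivity    # person-months
    schedule = effort / resources        # months
    return effort, schedule

effort, schedule = putnam_simple(size_kloc=100, productivity=0.5, resources=10)
print(effort, schedule)  # → 200.0 20.0
```

Note that this is the linear simplification presented here; the full Putnam model uses a nonlinear Rayleigh-curve relationship between effort and schedule.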
Advantages:
Suitable for large and complex projects where resource allocation is a critical
factor.
Limitations:
Assumes a linear relationship between size, effort, and productivity, which may
not hold true for all types of projects.
Does not account for variations in productivity that can occur during different
project phases.
The Putnam Resource Allocation Model is a valuable tool for estimating the effort
and schedule for software development projects, particularly for projects with well-
established productivity rates and resource constraints. However, as with any
estimation model, its accuracy depends on the quality of the data and the
applicability of its assumptions to the specific project at hand.
Validating Software Estimates:
Validating software estimates helps confirm that effort, cost, and schedule
projections are realistic, reducing the risk of project overruns and ensuring the
successful completion of software projects.
Here are some key methods and best practices for validating software estimates:
Compare current project estimates with historical data from similar projects.
This analysis can reveal patterns and trends that help in validating the
accuracy of the estimates.
2. Expert Judgment:
3. Peer Review:
4. Prototyping:
5. Benchmarking:
6. Analogous Estimation:
Compare the current project with past projects that are similar in nature.
Analogous estimation involves adjusting past project data to account for
differences and validating the current estimates.
8. Simulation and Monte Carlo Analysis:
Employ simulation techniques to test the sensitivity of the estimates to
various parameters and uncertainties. Monte Carlo analysis, in particular,
can provide a range of possible outcomes to assess the reliability of the
estimates.
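A rough sketch of a Monte Carlo check on an effort estimate, assuming a triangular distribution over optimistic, most-likely, and pessimistic values (the function name and sample figures are illustrative):

```python
import random

def monte_carlo_effort(low: float, likely: float, high: float,
                       trials: int = 10_000, seed: int = 42) -> tuple:
    """Sample a triangular effort distribution; return the mean estimate
    and an approximate 90th-percentile (pessimistic) value."""
    rng = random.Random(seed)
    samples = sorted(rng.triangular(low, high, likely) for _ in range(trials))
    mean = sum(samples) / trials
    p90 = samples[int(0.9 * trials)]
    return mean, p90

mean, p90 = monte_carlo_effort(low=80, likely=100, high=160)
# mean is near (80 + 100 + 160) / 3; p90 exposes the pessimistic tail
```

Reporting a percentile alongside the mean is what turns a single-point estimate into a range that stakeholders can plan contingencies around.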
9. Contingency Planning:
Develop contingency plans that account for potential deviations from the
estimates. This proactive approach helps in managing risks and mitigating
the impact of uncertainties.
10. Monitoring and Tracking:
As the project progresses, track actual effort, costs, and schedule against
the initial estimates. Continuous monitoring and adjustment help in validating
and refining the estimates.
Risk Management in Software Development:
Risk management is the process of identifying, assessing, and controlling risks
throughout a software project. It is essential for maintaining project control,
managing risks, and ensuring successful project delivery.
1. Risk Identification:
Identify potential risks that may affect the project. Risks can be technical
(e.g., software bugs), external (e.g., changes in requirements), or operational
(e.g., resource constraints).
2. Risk Assessment:
Evaluate the potential impact and probability of each identified risk. High-
impact, high-probability risks require more attention than low-impact, low-
probability risks.
3. Risk Prioritization:
Prioritize risks based on their severity and the level of impact they may have
on the project. This helps in allocating resources and focus to the most
critical risks.
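Prioritization is often done by computing risk exposure as probability × impact and sorting on it; a minimal sketch with made-up risks:

```python
# Risk exposure = probability x impact; higher exposure gets attention first.
risks = [
    {"name": "Key developer leaves", "prob": 0.3, "impact": 8},
    {"name": "Requirements change late", "prob": 0.6, "impact": 6},
    {"name": "Third-party API deprecated", "prob": 0.1, "impact": 9},
]

prioritized = sorted(risks, key=lambda r: r["prob"] * r["impact"], reverse=True)
for risk in prioritized:
    print(risk["name"], round(risk["prob"] * risk["impact"], 2))
```

Note how a high-impact but unlikely risk can rank below a moderate-impact risk that is much more probable, which is exactly the trade-off described above.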
4. Risk Mitigation Planning:
Develop mitigation plans for the high-priority risks. These plans should
outline specific actions to reduce the likelihood or impact of the risks.
Mitigation strategies can include code reviews, testing, contingency planning,
and resource allocation.
5. Risk Monitoring:
Continuously monitor the identified risks and their status throughout the
project's lifecycle. Regularly review the effectiveness of mitigation measures
and adjust them as necessary.
7. Risk Documentation:
Maintain a risk register or risk log that documents each identified risk, its
assessment, mitigation plans, and tracking information. This serves as a
reference for the project team.
8. Communication:
Communicate risk status regularly to stakeholders so that everyone
understands the project's current risk exposure.
9. Risk Reviews:
Conduct periodic risk reviews to reassess the project's risk landscape. New
risks may emerge, and the significance of existing risks may change as the
project progresses.
Effective risk management is an iterative and ongoing process that adapts to the
evolving nature of software development projects. It is a proactive approach that can
significantly contribute to project success by reducing the likelihood of negative
outcomes and enhancing project predictability.
Software Design: Cohesion and Coupling
1. Cohesion:
Cohesion refers to the degree to which the elements within a module belong
together and work toward a single purpose. High cohesion is desirable. There are
different levels of cohesion:
Logical Cohesion: In this case, the elements within a module are grouped
together because they share a logical relationship. For example, a module that
contains file I/O functions may exhibit logical cohesion.
Functional Cohesion: This is the highest level of cohesion. Elements within a
module are grouped together because they collectively perform a single, well-
defined function or task. Modules with functional cohesion are easier to
understand, maintain, and reuse.
Aim for achieving functional cohesion in your software design, as it results in more
modular and maintainable code.
2. Coupling:
Coupling refers to the degree of interconnectedness between modules or
components within a software system. It measures how one module relies on the
functionality of another. Low coupling is generally preferred as it leads to more
flexible and maintainable systems.
There are different levels of coupling:
Tight Coupling: In this scenario, modules are highly dependent on each other
and are closely connected. Changes in one module can have a significant impact
on others. Tight coupling reduces the system's flexibility and maintainability.
Loose Coupling: In loose coupling, modules are less dependent on each other,
and changes in one module are less likely to affect others. This results in more
flexibility and ease of maintenance.
Reducing coupling and achieving loose coupling is a crucial design goal in software
development. One way to achieve this is by using well-defined interfaces and
ensuring that modules communicate through those interfaces rather than directly with
each other.
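A small sketch of loose coupling through an interface; the `Storage` abstraction and the class names are illustrative:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Interface that callers depend on, instead of a concrete module."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class ReportService:
    # Depends only on the Storage interface: swapping in a file- or
    # database-backed implementation requires no change here.
    def __init__(self, storage: Storage):
        self._storage = storage
    def archive(self, name, body):
        self._storage.save(name, body)

service = ReportService(InMemoryStorage())
service.archive("q1", "Q1 report body")
```

Because `ReportService` communicates only through the interface, a change inside `InMemoryStorage` cannot ripple into it, which is the flexibility that loose coupling buys.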
In summary, cohesion and coupling are fundamental principles in software design
that impact the quality and maintainability of software systems. High cohesion and
low coupling are desirable design characteristics that lead to more modular,
understandable, and flexible software architectures.
Function-Oriented Design in Software Engineering:
Function-oriented design structures a system as a set of functions that transform
inputs into outputs. Its key characteristics include:
1. Modularity: Function-oriented design promotes modularity, where the software
system is divided into separate, self-contained modules, each responsible for a
specific function. These modules can be designed, developed, tested, and
maintained independently.
7. Reuse: Modular functions can be reused across the system or in other projects,
promoting code reusability and reducing redundant development efforts.
9. Testing and Debugging: Smaller, modular functions are easier to test and
debug, which simplifies the software development and maintenance process.
Function-oriented design is often used for systems where the primary focus is on
data processing, algorithmic operations, and structured procedures. It is well-suited
for scientific and engineering applications, data processing systems, and embedded
software.
While function-oriented design offers benefits in terms of modularity and
maintainability, it may not be the ideal choice for all types of software systems,
particularly those with complex user interfaces or object-oriented requirements. In
such cases, other design paradigms like object-oriented design may be more
appropriate.
Overall, function-oriented design remains a valuable approach in software
engineering, particularly when designing systems where a clear separation of
functionality into modules is crucial for achieving efficiency and maintainability.
Object-Oriented Design (OOD) in Software Engineering:
OOD models a system as a collection of interacting objects. Its key concepts
include:
1. Objects:
Objects are runtime entities that combine state (attributes) and behavior
(methods). Each object is an instance of a class.
2. Classes:
Classes serve as blueprints or templates for creating objects. They define the
structure and behavior of objects of a certain type. Classes can inherit attributes
and methods from other classes, fostering reusability and hierarchical
organization.
3. Encapsulation:
Encapsulation is the practice of bundling data and methods that operate on the
data within a class, making data private and providing controlled access through
methods (getters and setters). This ensures data integrity and modularity.
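A short sketch of encapsulation; the `BankAccount` class and its methods are illustrative:

```python
class BankAccount:
    """Balance is private; access goes through controlled methods."""
    def __init__(self, opening_balance: float = 0.0):
        self.__balance = opening_balance   # name-mangled, not touched directly

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    @property
    def balance(self) -> float:            # read-only accessor
        return self.__balance

acct = BankAccount(100.0)
acct.deposit(50.0)
print(acct.balance)  # → 150.0
```

The validation inside `deposit` is what "controlled access" buys: no caller can push the balance into an invalid state by writing to it directly.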
4. Inheritance:
Inheritance allows a class to derive attributes and methods from a parent class,
so common behavior is defined once and specialized in subclasses.
5. Polymorphism:
Polymorphism lets objects of different classes respond to the same message
(method call) in class-specific ways, enabling flexible, extensible code.
6. Abstraction:
Abstraction hides implementation details and exposes only the essential features
of an object through a well-defined interface.
7. Modularity:
OOD encourages modular design, where complex systems are broken down into
smaller, more manageable components (objects and classes). Each module
encapsulates a specific piece of functionality.
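A brief sketch showing inheritance and polymorphism working together; the class names are illustrative:

```python
import math

class Shape:
    def area(self) -> float:           # abstract-style base behavior
        raise NotImplementedError

class Rectangle(Shape):                # inheritance: Rectangle is-a Shape
    def __init__(self, w: float, h: float):
        self.w, self.h = w, h
    def area(self) -> float:
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return math.pi * self.r ** 2

# Polymorphism: one call site works for every Shape subclass.
shapes = [Rectangle(3, 4), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # → [12, 3.14]
```

Adding a new shape requires only a new subclass; the loop over `shapes` never changes, which is the modularity and reuse benefit described above.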
8. Reusability:
Objects and classes can be reused in different contexts, fostering code reuse
and reducing redundancy. Libraries, frameworks, and design patterns are
examples of reusable components in OOD.
It's important to note that Object-Oriented Design is just one of several design
paradigms in software engineering. The choice of design paradigm depends on the
nature of the project, the problem domain, and the specific requirements.
User Interface Design
A good user interface should be:
Attractive
Simple to use
Clear to understand
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user interface
consists of four framework activities.
1. User, task, environmental analysis, and modeling: Initially, the focus is on the
profile of the users who will interact with the system, i.e., their understanding,
skills, knowledge, and type. Based on their profiles, users are divided into
categories, and requirements are gathered from each category. Based on these
requirements, the developer understands how to develop the interface. Once all
the requirements are gathered, a detailed analysis is conducted. In the analysis
part, the tasks that the user performs to establish the goals of the system are
identified, described, and elaborated. The analysis of the user environment
focuses on the physical work environment. Among the questions to be asked
are:
Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
Are there special human factors considerations driven by environmental
factors?
2. Interface Design: The goal of this phase is to define the set of interface objects
and actions, i.e., the control mechanisms that enable the user to perform desired
tasks, and to indicate how these control mechanisms affect the system. Specify
the action sequence of tasks and subtasks, also called a user scenario, and
indicate the state of the system when the user performs a particular task. Always
follow the three golden rules stated by Theo Mandel. Design issues such as
response time, command and action structure, error handling, and help facilities
are considered as the design model is refined. This phase serves as the
foundation for the implementation phase.
3. Interface Construction and Implementation: The implementation activity begins
with the creation of a prototype that enables usage scenarios to be evaluated,
and continues with development tools to complete the construction.
4. Interface Validation: This phase focuses on testing the interface. The interface
should perform tasks correctly, handle a variety of tasks, and meet all the user's
requirements. It should be easy to use and easy to learn, and users should
accept it as a useful tool in their work.