
DevOps Record


CONTINUOUS INTEGRATION AND CONTINUOUS DELIVERY USING DevOps

(Skill Oriented Course)

B.Tech III Year I Semester

CO# | Course Outcomes | Blooms Taxonomy Level
CO1 | Understand the why, what and how of DevOps adoption | Understand
CO2 | Attain literacy on DevOps | Understand
CO3 | Align capabilities required in the team | Apply
CO4 | Create an automated CICD Pipeline using a stack of tools | Apply

Exercise 1: Get an understanding of the stages in software development lifecycle,
the process models, values and principles of agility and the need for agile software
development. This will enable you to work in projects following an agile approach
to software development.

Software engineering and agility:

Software Engineering is the process of designing, developing, testing, and maintaining software. It is a systematic and disciplined approach to software
development that aims to create high-quality, reliable, and maintainable software.

SDLC (Software Development Life Cycle):

 SDLC is a process followed for software building within a software organization. SDLC consists of a precise plan that describes how to
develop, maintain, replace, and enhance specific software.
 The life cycle defines a method for improving the quality of software and
the all-around development process.

Phases of SDLC:

 Planning & communication
 Requirements analysis
 Design
 Development
 Testing
 Deployment & Maintenance

1.PLANNING: Planning for the quality assurance requirements and identification of the risks associated with the project is done at this stage.

2.REQUIREMENT ANALYSIS: Once the requirement analysis is done, the next stage is to clearly represent and document the software requirements.

3.DESIGN: The design phase is a critical step in developing the conceptual blueprint of a software project.

4.DEVELOPMENT: The actual development phase is where the development team members divide the project into software modules and start development of the software.

5.TESTING: After the development of the product, testing of the software is necessary to ensure its smooth execution.

6.DEPLOYMENT & MAINTENANCE: The deployment phase is the final step in the software development life cycle and delivers the final product to the customer.

Life cycle models in Software Development:

 Waterfall model
 Incremental model
 Iterative model
 Spiral model
WATERFALL MODEL

 The waterfall model is a software development model used in the context of large, complex projects, typically in the field of information technology.
 It is characterized by a structured, sequential approach to project
management and software development.
 The waterfall model is useful in situations where the project requirements
are well-defined and the project goals are clear.
INCREMENTAL MODEL:

 Here, we develop software in small parts and test each piece individually
with users for feedback.
 Each increment in the incremental development process adds a new feature.
An example can be creating an MVP featuring only the core function and
adding new features based on user feedback.
ITERATIVE MODEL:

 The iterative development process follows a mix of the Waterfall and
agile development approaches.
 The only difference is that we develop a product version with all features
and functionalities and release it in the market for user feedback.
 Then, based on the received feedback, we can upgrade the product features.
SPIRAL MODEL:

 Spiral planning emphasizes risk assessment and divides the entire development process into phases.
 Therefore, it can help you more accurately plan and budget your project.
 Also, it is possible to involve customers in the exploration and review steps
of each cycle.
AGILE SOFTWARE DEVELOPMENT

 Agile Software Development is a software development methodology that values flexibility, collaboration, and customer satisfaction. It is based on the
Agile Manifesto, a set of principles for software development that prioritize
individuals and interactions, working software, customer collaboration, and
responding to change.
 Agile Software Development is an iterative and incremental approach to
software development that emphasizes the importance of delivering a
working product quickly and frequently.

PHASES IN AGILE SOFTWARE DEVELOPMENT

 Requirements gathering
 Design the requirements
 Construction
 Testing
 Deployment
 Feedback
AGILE TESTING METHODS

 Scrum
 eXtreme Programming(XP)

Extreme Programming (XP)

 Extreme Programming (XP) is an Agile software development methodology that focuses on delivering high-quality software through
frequent and continuous feedback, collaboration, and adaptation.
 XP emphasizes a close working relationship between the development
team, the customer, and stakeholders, with an emphasis on rapid, iterative
development and deployment.

SCRUM

 SCRUM is an agile development process. There are three roles in it, and
their responsibilities are:
 Scrum Master: The Scrum Master sets up the team, arranges the meetings
and removes obstacles from the process.
 Product Owner: The Product Owner creates the product backlog, prioritizes
the backlog and is responsible for the delivery of functionality at each
iteration.
 Scrum Team: The team manages and organizes its own work to
complete the sprint or cycle, focusing primarily on ways to manage tasks in
team-based development conditions.
ADVANTAGES OF AGILE

 Flexibility
 Focus on Customer Value
 Faster Delivery
 Software Quality
 Customer Satisfaction
 Efficient

CONCLUSION

 Software engineering is a dynamic field: it is always changing and evolving, and it is essential for creating reliable and efficient software
systems.
 Agile development is a flexible and iterative approach to software
development that can help teams deliver high-quality software that meets
customer needs.
Exercise 2: Get a working knowledge of using extreme automation through
XP programming practices of test first development, refactoring and
automating test case writing.

AGILE PROGRAMMING - The meaning of Agile is versatility.

Agile is a way of managing and completing projects, especially in software development, that emphasizes flexibility, collaboration, and customer satisfaction.
Agile is a software development methodology that focuses on delivering working
software quickly, and adapting to change easily. It is based on the principles of
collaboration, customer feedback, and the "three C's" of card, conversation, and
confirmation.

Agile methods break tasks into smaller iterations. The project scope and
requirements are laid down at the beginning of the development process. It is an
iterative software development approach where value is provided to users in small
increments rather than through a single large launch.

There are many different forms of the agile development method, including
scrum, crystal, extreme programming (XP), and feature-driven development
(FDD).

EXTREME PROGRAMMING
EXTREME PROGRAMMING (XP) is an Agile software development
methodology that focuses on delivering high-quality software through frequent and
continuous feedback, collaboration, and adaptation. XP emphasizes a close working
relationship between the development team, the customer, and stakeholders, with
an emphasis on rapid, iterative development and deployment.

PHASES OF EXTREME PROGRAMMING:


 Planning: During this phase, clients define their needs in concise
descriptions known as user stories. The main task is to set the goals of the
entire project and of each iterative cycle.
o The team calculates the effort required for each story and schedules
releases according to priority and effort.

 Design: At this stage of the project the team must define the main features
of the future code.
o The team creates only the essential design needed for current user
stories, using a common analogy or story to help everyone
understand the overall system architecture and keep the design
straightforward and clear.
o Extreme Programming developers often share responsibilities at the
stage of designing. Each developer is responsible for the design of a
certain part of the code.

 Coding: Extreme Programming developers believe that good code must
be simple. Extreme Programming (XP) promotes pair programming, i.e.
developers work together at one workstation, enhancing code quality and
knowledge sharing.
o They write tests before coding to ensure functionality from the start
(TDD), and frequently integrate their code into a shared repository
with automated tests to catch issues early.

 Testing: XP gives great importance to testing, which consists of both unit tests
and acceptance tests.
o Unit tests, which are automated, check if specific features work
correctly.
o Acceptance tests, conducted by customers, ensure that the overall
system meets initial requirements.
o This continuous testing ensures the software’s quality and alignment
with customer needs.

 Listening: In the listening phase, the team gathers regular feedback from customers to ensure
the product meets their needs and to adapt to any changes.

VALUES OF EXTREME PROGRAMMING

 Communication: Most projects fail because of poor communication. So
implement practices that force communication in a positive way.
 Simplicity: Develop the simplest product that meets the customer’s needs.
 Feedback: Developers must obtain and value feedback from the customer, from
the system, and from each other. This matches the standard Agile value:
customer collaboration over contract negotiation.
 Courage: Be prepared to make hard decisions that support the other
principles and practices.
 Respect: Every member’s input or opinion is appreciated, which promotes a
collective way of working among people who are supportive within a
certain group.

CORE PRACTICES IN EXTREME PROGRAMMING


The core practices of Extreme Programming (XP) are a set of guidelines that
emphasize collaboration, simplicity, feedback, and flexibility to ensure high-quality
software development. These practices help teams to deliver functional software
frequently while adapting to changing requirements and maintaining a sustainable
work pace.
1.THE PLANNING GAME:
 XP follows a planning game, where the customer and the development
team collaborate to prioritize and plan development tasks.
 This approach helps to ensure that the team is working on the most
important features and delivers value to the customer.
 The business side then decides: (1) the order of stories to implement, and
(2) when and how often to produce a production release of the system.

 Collaboration: The planning game involves regular interaction between developers and customers to decide which features to
implement first.
 Prioritization: Customers prioritize user stories based on business
value, while developers estimate technical effort and complexity.
 Iteration Planning: The team works in short iterations (usually 1-2
weeks), where a set of features are agreed upon and completed by
the end of the iteration.
 Adjustability: This practice allows for flexibility in the project, as
priorities and plans can change based on the customer’s evolving
needs.

2.SIMPLE DESIGN:
 Use the simplest possible design that gets the job done.
 Design in XP is not a one-time activity; it is an “all-the-time” activity.
 There are design steps in release planning and iteration planning, and
teams engage in quick design sessions and design revisions through
refactoring through the course of the entire project.

 Focus on Current Needs: Design only for the current requirements, avoiding features or complexity that aren’t immediately necessary.
 Avoid Over-Engineering: It discourages designing for future or
speculative needs, which often leads to unnecessary complexity and
harder-to-maintain code.
 Easier to Maintain: Simple designs are easier to maintain, modify, and
understand, making it easier for other team members to work on the code.
 Encourages Refactoring: As requirements evolve, the code can be
refactored to accommodate changes without becoming overly complex.

3.METAPHOR:
 The XP Metaphor is a central concept in Extreme Programming (XP), an
Agile software development methodology.
 It provides a simple, concrete idea or image used to help understand
complex or abstract concepts and guide the development process.
 The Extreme programming (XP) Metaphor helps the development team
focus on delivering the most important features and functionality to the
customer, and it provides a framework for prioritizing and managing the
development process.
 Example: A team working on a project to develop a new mobile app might use the
metaphor of a “digital assistant” to guide their development efforts.

 Shared Vision: The metaphor serves as a simple story or analogy that the team uses to understand and discuss the system's structure and
design.
 Improves Communication: A shared metaphor simplifies
communication between developers and non-technical stakeholders,
aligning everyone’s understanding of the system.
 Guides Design: The metaphor provides guidance on how to structure
and organize the system, making it easier to reason about the
architecture.
 Simplifies Complex Systems: By relating the system to something
familiar, metaphors help simplify the complexity and allow the team to
make better decisions.

4.CONTINUOUS TESTING:
 The XP model gives high importance to testing and considers it to be
the primary factor in developing fault-free software.
 XP teams focus on validation of the software at all times.
 Programmers develop software by writing tests first, and then code that
fulfills the requirements reflected in the tests.

 Tests Drive Development: Testing is integrated throughout the development process, ensuring that each piece of code is tested as soon
as it’s written.
 Catch Bugs Early: By testing continuously, defects are caught early in
the development cycle, reducing the cost and effort required to fix them.
 Ensures Functionality: Continuous testing ensures that the software
remains functional as new features are added, preventing regression.
 Supports Refactoring: With a comprehensive set of tests, developers
can refactor the code confidently, knowing that the tests will catch any
unintentional changes in behaviour.

5.REFACTORING:
 XP encourages continuous refactoring: developers regularly restructure
existing code to keep it clean and simple, without changing its external
behaviour.
 Refactoring, in addition to improving the code, keeps the design easy to
understand as the system grows.

 Improves Code Structure: Refactoring focuses on restructuring existing code without changing its external behavior, making it more
efficient and readable.
 Reduces Technical Debt: By continually cleaning up the code,
developers prevent technical debt from accumulating, which could slow
down future development.
 Supports Simple Design: It reinforces the idea of simple design by
allowing code to evolve as new needs arise, without making the system
overly complicated.
 Enhances Code Quality: Regular refactoring leads to cleaner, more
maintainable code, making it easier to introduce new features and fix
bugs.

6.PAIR PROGRAMMING:
 XP encourages pair programming where two developers work together
at the same workstation.
 This approach helps in knowledge sharing, reduces errors, and improves
code quality.
 Pairing, in addition to providing better code and tests, also serves to
communicate knowledge throughout the team.

 Real-Time Code Review: Pair programming involves two developers working together at one workstation. One writes the code (the driver),
while the other reviews it in real-time (the observer).
 Higher Code Quality: Immediate feedback and review reduce errors,
leading to higher code quality and fewer bugs.
 Knowledge Sharing: It promotes continuous knowledge sharing
between developers, improving team collaboration and cohesion.
 Faster Problem Solving: Two developers working together can often
solve problems more quickly and come up with better solutions.

7.COLLECTIVE CODE OWNERSHIP:
 In XP, there is no individual ownership of code. Instead, the entire team
is responsible for the codebase.
 This approach ensures that all team members have a sense of ownership
and responsibility towards the code.

 Shared Responsibility: All team members are responsible for the entire
codebase. Any developer can modify any part of the code, ensuring no
one is the sole owner of a particular section.
 Increases Flexibility: Developers can work on different parts of the
project as needed, allowing the team to respond quickly to changes or
issues.
 Fewer Bottlenecks: Since no one person "owns" the code, work is not
blocked if a specific developer is unavailable.
 Encourages Knowledge Sharing: This practice encourages everyone
to be familiar with the whole codebase, fostering team collaboration and
knowledge transfer.

8.CONTINUOUS INTEGRATION:
 In XP, developers integrate their code into a shared repository several
times a day. This helps to detect and resolve integration issues early on
in the development process.
 Although integration is critical to shipping good working code, without
continuous integration the team is not practiced at it, and often it is
delegated to people not familiar with the whole system.
 Code freezes mean long time periods when the programmers could be
working on important shippable features, but those features must be held
back.

 Frequent Code Integration: Developers integrate their code into a shared repository multiple times a day, ensuring that changes from
different team members don’t conflict.
 Immediate Feedback: Continuous integration tools automatically run
tests when new code is integrated, providing immediate feedback if the
new code breaks anything.
 Fewer Integration Issues: Frequent integration reduces the likelihood
of major integration problems at the end of an iteration or project.
 Working Build: The goal is to always have a working, deployable
build of the software, so the team can quickly respond to feedback or
new requirements.
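To make this concrete, a developer practicing continuous integration might repeat a loop like the following several times a day (a minimal sketch; the branch name and the Maven build command are assumptions, chosen to match the calculator project used later in this record):

    git pull origin main            # bring in teammates' latest changes
    mvn test                        # run the automated test suite locally
    git add .
    git commit -m "Add input validation"
    git push origin main            # share the change; the CI server builds and tests it again
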

9.40-HOUR WEEK:
 Programmers go home on time.
 In crunch mode, up to one week of overtime is allowed.
 Multiple consecutive weeks of overtime are treated as a sign that
something is very wrong with the process and/or schedule.

 Sustainable Pace: XP encourages working at a sustainable pace, usually 40 hours per week, to avoid developer burnout and maintain
productivity over the long term.
 Prevents Burnout: By avoiding excessive overtime, developers stay
fresh and motivated, improving overall quality and creativity.
 Higher Quality Work: Fatigued developers are more prone to
mistakes, so a sustainable work pace results in fewer errors and higher-
quality code.
 Work-Life Balance: A 40-hour workweek encourages a healthy work-
life balance, which helps retain talent and improve team morale.

10.ON-SITE CUSTOMER:
 XP requires an on-site customer who works closely with the
development team throughout the project.
 This approach helps to ensure that the customer’s needs are understood
and met, and also facilitates communication and feedback.
 For initiatives with lots of customers, a customer representative (i.e.
Product Manager) will be designated for Development team access.

 Immediate Clarification: Having a customer representative onsite ensures that developers can get immediate answers to questions about
requirements, reducing misunderstandings.
 Faster Feedback: An onsite customer can provide instant feedback on
features as they are developed, allowing for faster adjustments and
iterations.
 Prioritize Features: The customer can quickly adjust priorities based
on real-time business needs, ensuring the development aligns with
current goals.
 Reduces Delays: By removing the need to wait for customer input,
decisions can be made faster, leading to quicker development cycles.

11.CODING STANDARDS:
 Everyone codes to the same standards.
 The specifics of the standard are not important; what is important is that
all of the code looks familiar, in support of collective ownership.

 Consistent Style: XP encourages teams to agree on a set of coding standards to ensure consistency throughout the codebase.
 Improves Readability: Uniform coding style makes the code easier to
read and understand for everyone on the team, which is especially
important for collective ownership.
 Enhances Maintainability: A standard approach to writing code
ensures that it remains maintainable, even as different developers
contribute.
 Reduces Conflicts: With predefined standards, there’s less chance of
conflicts or misunderstandings regarding how code should be written or
formatted.

12.TEST DRIVEN DEVELOPMENT:
 Test-driven development (TDD) is a technique that's used in Extreme
Programming (XP) to ensure that code is verified and validated as it's
developed.
 In TDD, automated tests are written before the actual code, and then the
code is written to pass the tests. This process helps to detect errors and
bugs early, and can lead to better code design and architecture.

 Tests First: In TDD, developers write tests before writing the actual
code. The tests define the desired behavior, and the code is written to
pass those tests.
 Immediate Feedback: By running the tests frequently, developers get
immediate feedback on whether their code works as expected.
 Improves Design: Writing tests first encourages developers to think
carefully about the design of their code, resulting in simpler, more
modular designs.
 Reduces Bugs: TDD ensures that the code is thoroughly tested from the
start, leading to fewer bugs and more reliable software.
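To make the test-first cycle concrete, here is a minimal sketch in Java with JUnit 4 (the pom.xml in Exercise 6 already declares JUnit 4). The names Calculator, add, and CalculatorTest are hypothetical, chosen to match the calculator project used later in this record. The test is written first, fails, and then the simplest production code is written to make it pass:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: the test is written first; it fails until Calculator.add exists.
    public class CalculatorTest {
        @Test
        public void addReturnsSumOfTwoNumbers() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));
        }
    }

    // Step 2: the simplest production code that makes the test pass.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }
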

ADVANTAGES OF EXTREME PROGRAMMING:


 No unnecessary programming work.
 Close contact with the customer.
 Stable software through continuous testing.
 Error avoidance through pair programming.
 No overtime, teams work at their own pace.
 Changes can be made at short notice.
 Code is clear and comprehensible at all times.

DISADVANTAGES OF EXTREME PROGRAMMING:


 Additional work.
 Customer must participate in the process.
 Relatively large time investment.
 Relatively high costs.
 Requires version management.
 Requires self-discipline to practice.

CONCLUSION:
 Extreme Programming is not a complete template for the entire delivery
organization.
 Rather, XP is a set of best practices for managing the development team
and its interface to the customer.
 As a process it gives the team the ability to grow, change and adapt as
they encounter different applications and business needs.
Exercise 3: It is important to comprehend the need to automate the software
development lifecycle stages through DevOps. Gain an understanding of the
capabilities required to implement DevOps, continuous integration and
continuous delivery practices.

Continuous Integration and Continuous Delivery Using DevOps

DevOps is a software development methodology that promotes collaboration and communication between development and operations teams. DevOps aims to
automate the process of building, testing, and deploying software.

DevOps Lifecycle

1. Continuous Development

2. Continuous Integration

3. Continuous Testing

4. Continuous Monitoring

5. Continuous Feedback

6. Continuous Deployment

7. Continuous Operations

1. Continuous Development
 Continuous development involves writing, designing, and reviewing code.
This phase aims to produce high-quality code efficiently
2. Continuous Integration
 Continuous integration focuses on automating the process of merging code
changes into a central repository. This ensures that new code integrates
seamlessly with existing code.

3. Continuous Testing
 Continuous testing involves running automated tests to detect bugs and
ensure that the software meets quality standards. This ensures that any new
code changes do not introduce errors.

4. Continuous Monitoring
 Continuous monitoring involves collecting and analyzing data about the
software’s performance and availability. This helps identify potential issues
early on.

5. Continuous Feedback
 Continuous feedback involves gathering feedback from stakeholders
throughout the development process. This feedback is used to improve the
software and make it better meet user needs.

6. Continuous Deployment
 Continuous deployment involves automatically deploying new code
changes to production environments. This ensures that new features are
released to users quickly and efficiently.

7. Continuous Operations
 Continuous operations involve managing and maintaining the software in
production environments. This includes tasks such as monitoring,
troubleshooting, and scaling.
Continuous Development

1. Agile Development
 Agile development is a popular methodology that emphasizes iterative
development and collaboration. It allows for quick adjustments to meet
changing user needs.

2. Code Reviews
 Code reviews ensure that code meets quality standards and adheres to best
practices. It helps identify potential bugs and vulnerabilities.

3. Version Control
 Version control systems allow teams to track changes made to code, revert
to previous versions, and collaborate on projects efficiently.

4. Automated Testing
 Automated testing is integrated into the development process to identify
bugs and ensure that the software meets quality standards early on.

Continuous Integration

1. Automated Builds
 Automated builds streamline the process of compiling, packaging, and
deploying software. This ensures that new code changes are tested and
integrated quickly.

2. Automated Testing
 Automated tests are run on every code change to detect bugs and ensure
that the software meets quality standards. This helps identify issues early
on.
3. Code Merging
 Continuous integration focuses on merging code changes into a central
repository frequently. This ensures that new code integrates seamlessly with
existing code.

4. Feedback Loops
 Continuous integration relies on feedback loops to provide information
about the status of the build and any potential issues. This allows teams to
quickly identify and fix problems.

Continuous Testing

1. Unit Testing
 Unit testing verifies the functionality of individual units of code, such as
functions or classes. This ensures that each component works as expected.

2. Integration Testing
 Integration testing verifies that different components of the software work
together as expected. This ensures that the software functions as a whole.

3. System Testing
 System testing verifies that the software meets all functional and non-
functional requirements. This ensures that the software meets user needs
and operates as expected.
Continuous Monitoring, Continuous Feedback

1. Performance Monitoring
 Performance monitoring tracks key metrics such as response times, CPU
usage, and memory consumption to identify potential performance
bottlenecks.

2. User Feedback
 User feedback is gathered through surveys, reviews, and analytics to
understand user needs and identify areas for improvement.

3. Analytics & Reporting
 Analytics and reporting tools provide insights into user behavior,
performance trends, and potential issues. This helps teams make data-driven
decisions.

Continuous Operations & Continuous Deployment

1. Deployment Pipelines
 Deployment pipelines automate the process of deploying software to
production environments. This ensures that new code changes are released
quickly and efficiently.

2. Infrastructure as Code
 Infrastructure as code allows teams to manage infrastructure resources
using code. This enables automated provisioning and configuration of
servers and other resources.

3. Continuous Delivery
 Continuous delivery automates the preparation and validation of new
code changes so that they are always ready to be released to production.
This ensures that new features can be delivered to users quickly and
efficiently.

Continuous Integration (CI) and Continuous Delivery (CD)

Continuous Integration (CI) | Continuous Delivery (CD)
Focuses on automating the process of merging code changes into a central repository. | Focuses on automating the process of deploying new code changes to production environments.
Aims to improve the quality and reliability of software by detecting and fixing bugs early on. | Aims to deliver new features and improvements to users quickly and efficiently.
Includes practices such as automated builds, automated testing, and code reviews. | Includes practices such as deployment pipelines, infrastructure as code, and continuous monitoring.
DevOps Tools

Puppet
An open-source configuration management tool that automates the provisioning and management of infrastructure resources.

Ansible
An open-source automation tool that simplifies the process of managing and deploying software applications.

Docker
A containerization platform that allows developers to package and run applications in isolated environments.

Nagios
An open-source monitoring tool that helps identify and resolve performance and availability issues.

Jenkins
An open-source continuous integration and continuous delivery tool that helps automate the build, test, and deployment process.

Git
A version control system that allows teams to track changes made to code, revert to previous versions, and collaborate on projects efficiently.

Selenium
An open-source automated testing framework that enables developers to write and execute automated tests for web applications.

SonarQube
An open-source platform for code quality management that helps identify potential bugs, vulnerabilities, and code smells.
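Many of these tools can be tried out quickly. For example, Jenkins publishes an official Docker image, so a local Jenkins server can be started with a single command (a sketch, assuming Docker is installed; 8080 is the web UI port and 50000 the agent port, the Jenkins defaults):

    docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

The Jenkins UI is then available at localhost:8080, mirroring the localhost:9000 pattern used for SonarQube in Exercise 5.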
Exercise 4: Configure the web application and version control using Git:
Git commands and version control operations

Git:

 Git is an open-source distributed version control system. It is designed to
handle minor to major projects with high speed and efficiency. It was
developed to coordinate the work among developers. Version control
allows us to track changes and work together with our team members in
the same workspace.
 Git is the foundation of many services like GitHub and GitLab, but we can
use Git without any other Git service. Git can be used privately and
publicly.
 Git was created by Linus Torvalds in 2005 to develop the Linux kernel. It
is also an important distributed version-control tool for DevOps.
 Git is easy to learn, and has fast performance. It is superior to other SCM
tools like Subversion, CVS, Perforce, and ClearCase.

Key Features of Git:

 Open Source: Git is an open-source tool. It is released under the GPL (General Public License) license.
 Scalable: Git is scalable, which means that when the number of users increases,
Git can easily handle such situations.
 Distributed System: Each developer has a full copy of the entire repository
history, which means you can work offline and still have access to the
project’s history.
 Branching and Merging: You can create branches to work on different
features or bug fixes in isolation. Once the work is complete, you can merge
the branch back into the main codebase.
 Lightweight: Git is fast and uses minimal storage, as it only stores changes
made to the files rather than the entire file.
 Integrity: Git uses a SHA-1 hashing algorithm to ensure that the data is
secure and has not been tampered with.

Advantages of Git:

 Distributed System: Everyone has the complete project history on their computer, so you can work offline and there’s no single point of failure.
 Fast: Git handles large projects quickly and efficiently.
 Branching and Merging: You can create branches to work on new
features and merge them easily, without affecting the main project.
 Collaboration: Git makes it easy for teams to work together, review each
other’s code, and manage projects.
 Change Tracking: Git keeps a record of all changes, so you can go back to
previous versions if needed.
 Free and Open Source: Git is free to use and has a large community for
support.

Disadvantages of Git

 Learning Curve: Git can be complex to learn, especially for beginners.
 Complex Commands: Some commands and operations can be difficult to
understand and remember.
 Merge Conflicts: Resolving conflicts when merging branches can be
challenging.
 Storage: The local repository can take up significant space on your
computer.
 Setup and Configuration: Initial setup and configuration can be time-
consuming.
 Not Ideal for Large Binary Files: Git is less efficient at handling large
binary files (e.g., videos, images).
GitHub:

GitHub is a web-based platform built on top of Git. It provides a graphical interface to manage Git repositories, along with additional features like project management,
collaboration tools, and hosting services. GitHub is one of the most popular
platforms for hosting open-source projects.

Key Features of GitHub:

 Version Control: GitHub’s core functionality is based on Git, which allows you to keep track of changes in your code over time. This means you
can always revert to a previous version if something goes wrong, compare
different versions, and understand the history of your project.
 Repositories: A repository (or repo) is a central place where all the files for
a project are stored. Each repository can hold multiple files and folders, and
it tracks the history of every change made. Repositories can be public
(accessible to everyone) or private (restricted access).
 Branches: Branches are a crucial feature in GitHub that enable parallel
development. You can create a branch to work on a new feature or fix a bug
without affecting the main codebase. Once your changes are ready, you can
merge the branch back into the main branch.
 Pull Requests: Pull requests are a way to propose changes to a repository.
When you submit a pull request, you’re asking the project maintainers to
review and merge your changes into the main codebase. This feature
promotes collaboration and ensures code quality through peer review.
 Issues and Project Management: GitHub provides tools to track bugs,
enhancements, and other tasks through the Issues feature. You can create
issues, assign them to team members, and track their progress. GitHub also
offers project boards for more advanced project management.
 Actions and Automation: GitHub Actions allow you to automate
workflows, such as running tests or deploying code, directly from your
repository. This feature enhances productivity and ensures consistency
across development processes.
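A typical pull-request flow combines these features with ordinary Git commands (a minimal sketch; the branch name, remote, and commit message are hypothetical):

    git checkout -b fix-typo             # create and switch to a topic branch
    git commit -am "Fix typo in README"  # commit the change on the branch
    git push -u origin fix-typo          # publish the branch to GitHub

Then, in the GitHub web interface, open a pull request from fix-typo into the main branch, request a review, and merge once the checks pass.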

Advantages of GitHub:

 Collaboration: GitHub simplifies teamwork with tools like pull requests, code reviews, and project boards, making it easy to work together and
manage projects.
 Community and Open Source: It hosts millions of open-source projects,
offering extensive resources and opportunities for developers to contribute
and learn.
 Integration: GitHub integrates seamlessly with CI/CD pipelines, third-
party tools, and IDEs, streamlining development workflows.
 User-Friendly Interface: GitHub’s GUI and web interface make version
control more accessible, even for those less familiar with Git’s command-
line tools.
 Security and Access Control: It offers private repositories and branch
protection, ensuring that your code is secure and managed efficiently.

Disadvantages of GitHub:

 Cost for Private Repositories: While public repositories are free, advanced features and private repositories require a paid plan, which might
be limiting for small teams or individual developers.
 Dependency on Git: To fully utilize GitHub, you need a good
understanding of Git, which has a steep learning curve. Beginners may find
it challenging to manage branches, resolve conflicts, and use Git’s full
capabilities.
 Basic Project Management Tools: GitHub’s project management features,
like issues and project boards, are somewhat basic compared to dedicated
tools like Jira, which may be insufficient for complex projects.
 Performance with Large Repositories: Large repositories can experience
slow performance, especially when cloning or pulling changes, making
them harder to manage over time.
 Security Risks: For open-source projects, code is publicly visible, which
can be a security concern if sensitive information is accidentally included.
Additionally, as a cloud-based platform, there is always some risk of data
breaches.
Commands in Git:

S. No. | Command | Syntax | Description
1 | Initialize Repo | git init | Initializes a new Git repository in the current directory.
2 | Clone Repository | git clone <repo-url> | Clones an existing remote repository to your local machine.
3 | Add Files | git add <file> | Adds a file to the staging area. Use git add . to add all changed files.
4 | Commit Changes | git commit -m "message" | Records changes in the repository with a descriptive message.
5 | View Status | git status | Shows the current state of the working directory and staging area.
6 | View History | git log | Displays the commit history. Use git log --oneline for a brief view.
7 | Create Branch | git branch <branch-name> | Creates a new branch.
8 | Switch Branch | git checkout <branch-name> | Switches to an existing branch. Use git switch <branch-name> as an alternative in newer versions.
9 | Merge Branch | git merge <branch-name> | Merges the specified branch into the current branch.
10 | Delete Branch | git branch -d <branch-name> | Deletes a branch that has been merged. Use -D to force delete a branch.
11 | Pull Changes | git pull | Fetches and integrates changes from a remote repository to the local branch.
12 | Push Changes | git push | Pushes local changes to the remote repository.
13 | Check Differences | git diff | Shows changes between the working directory and the staging area or between commits.
14 | Stash Changes | git stash | Temporarily stores uncommitted changes for later use.
15 | Apply Stash | git stash apply | Applies the most recent stashed changes without removing them from the stash list.
16 | Show Remote URLs | git remote -v | Displays the remote repository URLs linked to your local repository.
17 | Add Remote | git remote add <name> <url> | Adds a new remote repository.
18 | Remove Remote | git remote rm <name> | Removes a remote repository.
19 | Reset Changes | git reset <commit> | Resets the current branch to a specific commit, discarding commits after it.
20 | Revert Commit | git revert <commit> | Creates a new commit that undoes changes from a specified commit.
21 | Rebase Branch | git rebase <branch-name> | Reapplies commits from the current branch on top of another branch.
22 | Squash Commits | git rebase -i <commit> | Combines multiple commits into one. Use interactive rebase to squash commits.
23 | View Remote Branches | git branch -r | Displays remote branches associated with the repository.
24 | Track Remote Branch | git checkout -t <remote>/<branch> | Tracks a remote branch and creates a local branch to follow it.
25 | Remove File | git rm <file> | Removes a file from the working directory and the staging area.
26 | Show Commit Details | git show <commit> | Shows details of a specific commit including changes and metadata.
27 | Amend Commit | git commit --amend | Modifies the most recent commit (e.g., to edit the message or add more changes).
28 | Fetch from Remote | git fetch <remote> | Downloads objects and refs from a remote repository without merging.
29 | Tag a Commit | git tag <tag-name> <commit> | Creates a tag for a specific commit, typically used for marking release versions.
30 | Push Tags | git push origin <tag-name> | Pushes the specified tag to the remote repository.
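A typical first-time workflow chains several of the commands above (a minimal sketch; the file names, branch name, and repository URL are hypothetical, and on older Git versions the default branch is master rather than main):

    git init                                   # create a new local repository
    git add index.html                         # stage a file
    git commit -m "Initial commit"             # record the first snapshot
    git branch feature-login                   # create a branch for a new feature
    git checkout feature-login                 # switch to it
    git add .                                  # (after editing files) stage the changes
    git commit -m "Add login form"             # commit them on the branch
    git checkout main                          # return to the main branch
    git merge feature-login                    # merge the feature in
    git remote add origin https://github.com/user/demo.git
    git push -u origin main                    # publish to the remote repository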
Exercise-05: Configure a static code analyzer which will perform static
analysis of the web application code and identify the coding practices
that are not appropriate. Configure the profiles and dashboard of the
static code analysis tool.

What is Software Testing?

Software Testing is a part of the software development lifecycle; its aim is to ensure that the code to be deployed is of high quality, with no bugs and no
logical errors.

Testing Methods:

1.Dynamic Testing Method

2.Static Testing Method

What is Dynamic Testing?

1. It happens during the execution of code.
2. It can help identify small defects because it also looks at how the code
integrates with other databases and servers.

Tools that are used for Dynamic Testing

1. Selenium

2. Catlan

3. Casper.js
What is Static Testing?

It is a method of debugging by examining source code before the program is run, that is, testing the code without actually executing it. It does so by
analysing the code against a preset collection of coding rules and ensuring
that it conforms to the guidelines.

There are many tools which help in static testing:

1. SonarQube

2. Lint

3. PMD

Static Testing Techniques:

Static Testing is a necessary software testing technique comprising two approaches: Review and Static Analysis.

Review

Reviews are a necessary feature of Static Testing. They enable testers to identify defects and issues in documentation, such as requirements and
design. The importance of reviews lies in detecting the sources of failure at
the earliest stage.
Static Code Analysis

Static analysis, also called static code analysis, is a method used to analyze
software source code. It checks the code for correctness and reveals a wide
variety of information, such as the structure of the models used, data and
control flow, syntax accuracy, and more.

Reasons to Use Static Code Analysis:

1. Find errors earlier in development

2. Detects overcomplexity in code

3. Find Security Errors

4. Enforces Best Coding Practices

5. Automated and integrates with Jenkins

6. Can create project specific rules.

Tool on Static Code Analysis:

SonarQube

What is SonarQube?

1. SonarQube is an open-source software quality platform.

2. SonarQube is used for continuous inspection of code quality. It performs static code analysis, which means it analyses source code without executing
it.

3. It is used to detect bugs, code smells, security vulnerabilities, and technical debt. SonarQube supports a wide range of programming languages and
integrates with various build tools, CI/CD pipelines, and version control
systems.
SonarQube Features:

1. Supports 25+ languages: Java, C/C++, Objective-C, C#, PHP, Flex, Groovy, JavaScript, Python, PL/SQL, COBOL, etc. (note that some of them are commercial).

2. Can also be used in Android development.

3. Offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, potential bugs, comments, design, and architecture.

4. Records metrics history and provides evolution graphs (“time machine”) and differential views.

5. Provides fully automated analyses: integrates with Maven, Ant, Gradle, and continuous integration tools (Atlassian Bamboo, Jenkins, Hudson, etc.).

6. Integrates with the Eclipse development environment.

7. Integrates with external tools: JIRA, Mantis, LDAP, Fortify, etc.

8. Is expandable with the use of plugins.


Technical Debt

Technical debt is the implied cost of additional rework that occurs when an
easy but inefficient solution is chosen at an early stage. In future, the easy
code may restrict scalability.

Technical debt is caused by the 7 deadly sins of the developer:

 Duplications: SonarQube has a copy/paste detection engine to find duplications.

 Bad distribution of complexity: cyclomatic complexity.

 Spaghetti design: bad naming, lack of patterns, over-abstraction.

 Lack of unit tests.

 Potential bugs.

 No coding standards.

 Not enough comments, too many comments, or incorrect comments.

SonarQube Architecture
SonarQube Architecture can be classified into four components:
1. Sonar Scanner:

 Purpose: The Sonar Scanner is a tool that collects source code and
sends it to the SonarQube server for analysis.

 Functionality: It integrates with various build systems like Maven, Gradle, Jenkins, etc. The scanner reads the source code, gathers
metrics, and prepares the data for analysis.

2. Source Code:

 Purpose: This is the actual codebase that you want to analyze for
quality and security.

 Functionality: The source code is the input for the Sonar Scanner. It
includes all the files and directories that make up your project. The
quality of this code is what SonarQube aims to measure and improve.

3. Sonar Analyzer:

 Purpose: The Sonar Analyzer examines the code collected by the scanner
and evaluates its quality and security.

 Functionality: It uses a set of predefined rules to analyze the code for bugs, vulnerabilities, code smells, and duplications. The analyzer
processes the data collected by the Sonar Scanner and generates a
detailed report.
4. SonarQube Database:

 Purpose: The database stores all the analysis results and configuration
settings.

 Functionality: After the results are processed (in the Sonar Analyzer), the report or the data will be stored in the database. SonarQube
ships with a default database.

 You can also integrate your own database with SonarQube.

 SonarQube supports various databases like MS SQL Server, Oracle,
etc.

Prerequisites: To set up SonarQube, you’ll need to ensure the following prerequisites are met:

1. Java: Oracle JRE 17 or OpenJDK 17 to run both the server and the
scanners.

2. Hardware requirements: At least 2GB of RAM for the SonarQube server and 1GB of free RAM for the operating system.

3. Database: PostgreSQL, Microsoft SQL Server, or Oracle (supported databases).

4. Operating System: SonarQube supports 64-bit systems on both the server and scanner sides.

Installation process of SonarQube

The installation of SonarQube includes the following steps:

1. Verify whether your system has Java installed. If not, install it by using the
following steps.

Installing JDK:

Go to the URL:

https://www.oracle.com/in/java/technologies/downloads/#jdk17-windows

 After clicking the above link, download the Windows x64 MSI Installer.

 After double-clicking the MSI installer file, we get the welcome screen of
the installation wizard for Java SE Development Kit 17.0.6; click the Next
button, then Next again, and it will be installed.

JDK 17 is now successfully installed in the system; next we need to set up the environment variables.

Add Java Path to Environment Variables:

 Go to the Java folder, which contains bin under java-17, and copy that
path.

 Open the Start menu and search for environment variables.

 Click the Edit the system environment variables result.

 Under the Advanced tab in the System Properties window, click
Environment Variables.

 Click the New button under the System variables section to add a
new system environment variable.

 Enter JAVA_PATH as the variable name and the path to the Java
directory as the variable value.

 Click OK to save the new system variable.

 Select the Path variable under the System variables section in the
Environment Variables window.

 Click the Edit button to edit the variable.

 Paste the path of the bin folder under java-17 in the Java folder.

 Click OK to save the Path variable.

 Click OK in the Environment Variables window to save the changes
to the system variables.

 Verify the Java version: in the command prompt, use the following
command to verify the installation by checking the current version of the
JDK: “java --version”
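The same variable can also be set from an administrator command prompt using the setx command used elsewhere in this record (a sketch; adjust the path to match your actual JDK installation directory):

    setx JAVA_PATH "C:\Program Files\Java\jdk-17.0.6"

Note that setx only affects command prompt windows opened after the change; an already-open window must be reopened to see the new value.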

Download and Install SonarQube for Windows

 Go to the SonarQube official website to download the latest version of
SonarQube for Windows.

 Click on Download Community Edition.

 After the download completes, extract the archive into the C: drive, unzip
the folder, and go to SonarQube -> bin -> windows-x86-64 -> click on
Start Sonar.

We can set the default Java executable by setting the environment variable SONAR_JAVA_PATH.

Open the command prompt and type the command:

setx SONAR_JAVA_PATH "C:\Program Files\Java_home\bin\java.exe"

We have downloaded and extracted SonarQube into the C:\ drive in Windows. Go to the C:\sonarqube-9.9.0.65466\bin\windows-x86-64 path; there is a StartSonar
Windows batch file. Right-click on it and run as administrator.

After clicking on it, SonarQube starts up in the console window.

Access SonarQube on Windows

Go to any browser and type localhost:9000 (or IP:9000).

Log in with SonarQube's default username and password, login: admin and password: admin, and click on the Login button. Once logged in, it will ask you to change the
password; update your password as per your requirement.

Once the password is updated, you will navigate to the SonarQube Dashboard as
shown below.

SonarQube Dashboard
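Once the server is running, a Maven project can be analyzed from its project directory using the sonar-maven-plugin (which is declared in the pom.xml used in Exercise 6). A minimal sketch, assuming a user token generated from the dashboard under My Account > Security:

    mvn clean verify sonar:sonar -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<your-token>

(On recent SonarQube versions the property is -Dsonar.token instead of -Dsonar.login.) When the build finishes, the analysis results, quality profiles, and project dashboard appear at localhost:9000.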
Exercise 6: Write a build script to build the application using a build
automation tool like Maven. Create a folder structure that will run the
build script and invoke the various software development build stages.
This script should invoke the static analysis tool and unit test cases and
deploy the application to a web application server like Tomcat.

What is MAVEN:

Maven is a powerful project management tool that is based on the POM (project object model). It is used for project build, dependency management, and
documentation. It simplifies the build process like ANT, but it is far more
advanced than ANT. In short, Maven is a tool that can be used for building
and managing any Java-based project. Maven makes the day-to-day work of
Java developers easier and generally helps with the comprehension of any
Java-based project.

What MAVEN does:

 Maven does a lot of helpful tasks:

 We can easily build a project using Maven.

 We can add JARs and other dependencies of the project easily with
the help of Maven.

 Maven provides project information (log documents, dependency lists,
unit test reports, etc.).

 Maven is very helpful for a project when updating the central repository
of JARs and other dependencies.

 With the help of Maven we can build any number of projects into
output types like JAR, WAR, etc. without doing any scripting.

 Using Maven we can easily integrate our project with a source control
system (such as Subversion or Git).
How MAVEN works:

Core Concepts of Maven:

POM Files: Project Object Model (POM) files are XML files that contain
information related to the project and configuration information such as
dependencies, source directory, plugins, goals, etc. used by Maven to build the
project. When you execute a Maven command, you give Maven a
POM file to execute the commands against. Maven reads the pom.xml file to
accomplish its configuration and operations.

Dependencies and Repositories: Dependencies are external Java libraries required by the project, and repositories are directories of packaged JAR files.
The local repository is just a directory on your machine's hard drive. If the
dependencies are not found in the local Maven repository, Maven downloads
them from a central Maven repository and puts them in your local repository.

Build Life Cycles, Phases and Goals: A build life cycle consists of a
sequence of build phases, and each build phase consists of a sequence of
goals. A Maven command is the name of a build lifecycle, phase or goal. If a
lifecycle is requested by a Maven command, all build phases
in that life cycle are executed. If a build phase is requested, all
build phases before it in the defined sequence are executed too, as the
example below shows.
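For example, the phases of Maven's standard default lifecycle run in a fixed order, so requesting a later phase implicitly runs all earlier ones:

    mvn validate   # check the project is correct and all information is available
    mvn compile    # compile the source code (runs validate first)
    mvn test       # run unit tests (runs validate and compile first)
    mvn package    # bundle the compiled code into a JAR/WAR (runs all earlier phases)
    mvn install    # copy the package into the local repository
    mvn deploy     # copy the final package to a remote repository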

Build Profiles: Build profiles are sets of configuration values which allow you
to build your project using different configurations. For example, you may
need to build your project for your local computer, for development, and for test.
To enable different builds, you can add different build profiles to your POM
files using the profiles element; they are triggered in a variety of ways.

Build Plugins: Build plugins are used to perform specific goals. You can add
a plugin to the POM file. Maven has some standard plugins you can use, and
you can also implement your own in Java.

Installation process of MAVEN

The installation of Maven includes the following steps:

1. Verify whether your system has Java installed. If not, install it by
using the following steps.

Installing JDK:

Go to the URL: http://www.oracle.com/technetwork/java/javase/downloads/index.html

Then click on Download under the JDK section.

2. Choose the appropriate option as per the machine you are installing
the JDK on, viz. a 32-bit machine or a 64-bit machine.

3. Once the download is over, start the .exe file by double-clicking on it.

4. Now click Next through the wizard.

Once done with the installation of Java, we need to set the environment
variables in order to use the JDK with Eclipse.

Go to My Computer from the Start menu -> Go to Properties -> Go to
Advanced System Settings -> From the popup -> Go to Environment
Variables.

Click on Add.

Name the environment variable as Path.

Place the value of the JDK installation folder on your machine up to the
folder’s bin directory. For ex: C:\Program Files\Java\jdk1.7.0_05\bin

Create another environment variable, name it JAVA_HOME, and set the
value as C:\Program Files\Java\jdk1.7.0_05, i.e. excluding the bin directory.

Once we have created the environment variables, we are done with the
process of JDK installation.

Download MAVEN using the following steps:

Step 1: Download Maven Zip File and Extract

 Visit the Maven download page and download the version of Maven
you want to install. The Files section contains the archives of the
latest version. Access earlier versions using the archives link in the
Previous Releases section.

 Click on the appropriate link to download the binary zip archive of
the latest version of Maven. As of this writing, this is version 3.8.4.

 Since there is no installation process, extract the Maven archive to a
directory of your choice once the download is complete. For this, we
are using C:\Program Files\Maven\apache-maven-3.8.4.
Step 2: Add MAVEN_HOME System Variable

 Open the Start menu and search for environment variables.

 Click the Edit the system environment variables result.

 Under the Advanced tab in the System Properties window, click
Environment Variables.

 Click the New button under the System variables section to add a
new system environment variable.

 Enter MAVEN_HOME as the variable name and the path to the
Maven directory as the variable value. Click OK to save the new
system variable.
Step 3: Add MAVEN_HOME Directory in PATH Variable

Select the Path variable under the System variables section in the
Environment Variables window. Click the Edit button to edit the variable.

Click the New button in the Edit environment variable window.

Enter %MAVEN_HOME%\bin in the new field. Click OK to save changes
to the Path variable.

Note: Not adding the path to the Maven home directory to the Path variable
causes the 'mvn' is not recognized as an internal or external command,
operable program or batch file error when using the mvn command.

Click OK in the Environment Variables window to save the changes to the
system variables.

Step 4: Verify Maven Installation

In the command prompt, use the following command to verify the
installation by checking the current version of Maven: “mvn -version”
Maven pom.xml file:

POM, meaning Project Object Model, is key to operating Maven. Maven
reads the pom.xml file to accomplish its configuration and operations. It is an
XML file that contains information related to the project and configuration
information such as dependencies, source directory, plugins, goals, etc. used
by Maven to build the project.

The Sample of POM.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.javatpoint.application1</groupId>
  <artifactId>my-application1</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Code that we need for this particular project:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>ETA</groupId>
  <artifactId>Calculator</artifactId>
  <packaging>war</packaging>
  <version>0.0.1-SNAPSHOT</version>
  <name>calculator</name>
  <url>http://calculator</url>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
    </dependency>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>3.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>3.6.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

  <build>
    <finalName>calculator</finalName>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.1.1</version>
        <configuration>
          <archive>
            <manifestEntries>
              <version>${project.version}</version>
            </manifestEntries>
          </archive>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.sonarsource.scanner.maven</groupId>
        <artifactId>sonar-maven-plugin</artifactId>
        <version>3.2</version>
      </plugin>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.7.9</version>
        <executions>
          <execution>
            <id>default-prepare-agent</id>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
          </execution>
          <execution>
            <id>default-report</id>
            <phase>prepare-package</phase>
            <goals>
              <goal>report</goal>
            </goals>
          </execution>
          <execution>
            <id>default-check</id>
            <goals>
              <goal>check</goal>
            </goals>
            <configuration>
              <rules>
                <!-- implementation is needed only for Maven 2 -->
                <rule implementation="org.jacoco.maven.RuleConfiguration">
                  <element>BUNDLE</element>
                  <limits>
                    <!-- implementation is needed only for Maven 2 -->
                    <limit implementation="org.jacoco.report.check.Limit">
                      <counter>COMPLEXITY</counter>
                      <value>COVEREDRATIO</value>
                      <minimum>0.10</minimum>
                    </limit>
                  </limits>
                </rule>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <profiles>
    <profile>
      <id>ut</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <includes>
                <include>**/Calculaterut.java</include>
              </includes>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
    <profile>
      <id>it</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <includes>
                <include>**/CalculatorIT.java</include>
              </includes>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
    <profile>
      <id>pt</id>
      <build>
        <plugins>
          <plugin>
            <groupId>com.lazerycode.jmeter</groupId>
            <artifactId>jmeter-maven-plugin</artifactId>
            <version>2.4.0</version>
            <executions>
              <execution>
                <id>jmeter-tests</id>
                <phase>test</phase>
                <goals>
                  <goal>jmeter</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>


Elements used for creating the pom.xml file:

1. project: the root element of the pom.xml file.

2. modelVersion: the version of the POM model you are using. Use version 4.0.0 for Maven 2 and Maven 3.

3. groupId: the id of the project group. It is unique, and most often you will use a group ID similar to the root Java package name of the project, for example com.project.loggerapi.

4. artifactId: the name of the project you are building, for example LoggerApi.

5. version: the version number of the project. If your project has been released in different versions, it is useful to give the version of your project.

Other elements of the pom.xml file:

1. dependencies: defines the list of dependencies of the project.

2. dependency: defines a single dependency and is used inside the dependencies tag. Each dependency is described by its groupId, artifactId and version.

3. name: gives a name to the Maven project.

4. scope: defines the scope of a dependency, which can be compile, runtime, test, provided, system, etc.

5. packaging: used to package the project into an output type such as JAR or WAR.

Maven Repository: Maven repositories are directories of packaged JAR files with some metadata. The metadata consists of the POM files related to the projects each packaged JAR file belongs to, including what external dependencies each packaged JAR has. This metadata enables Maven to download the dependencies of your dependencies recursively, until all dependencies are downloaded and placed on your local machine. Maven has three types of repository:

 Local repository

 Central repository

 Remote repository

Maven searches for dependencies in these repositories: first in the local repository, then in the central repository, and finally in the remote repository if one is specified in the POM.

Local repository: A local repository is a directory on the developer's machine. This repository contains all the dependencies Maven downloads. Maven only needs to download a dependency once, even if multiple projects depend on it. By default, the Maven local repository is the user_home/.m2 directory, for example C:\Users\asingh\.m2.

Central repository: The central Maven repository is created by the Maven community. Maven looks in this central repository for any dependencies needed but not found in your local repository, and then downloads them into your local repository.

Remote repository: A remote repository is a repository on a web server from which Maven can download dependencies. It is often used for hosting projects internal to an organization. Maven downloads these dependencies into your local repository as well.
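For reference, the local repository location can be overridden in settings.xml, and an extra remote repository can be declared in pom.xml; a minimal sketch (the path and URL below are placeholders, not values this project requires):

<!-- settings.xml (in user_home\.m2): override the local repository location -->
<settings>
  <localRepository>D:\maven-repo</localRepository>
</settings>

<!-- pom.xml: declare an additional remote repository for Maven to search -->
<repositories>
  <repository>
    <id>company-internal</id>
    <url>https://repo.example.com/maven2</url>
  </repository>
</repositories>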

Practical Application Of Maven

When working on a Java project that contains a lot of dependencies, builds and requirements, handling all of those things manually is difficult and tiresome, so a tool that can do this work is very helpful. Maven is such a build-management tool: it can add dependencies, manage the classpath of the project, generate WAR and JAR files automatically, and much more.

Pros and Cons of using Maven

Cons:

 Maven requires a Maven installation on the system and a Maven plugin for the IDE.

 If the Maven coordinates for an existing dependency are not available, that dependency cannot be added using Maven.

Pros:

 Maven can add all the dependencies required for the project automatically by reading the POM file.

 One can easily build the project into a JAR, WAR, etc. as per requirements using Maven.

 Maven makes it easy to start a project in different environments, and one does not need to handle the dependency injection, builds, processing, etc.

 Adding a new dependency is very easy: one just has to add the dependency code to the POM file, as shown below.
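For example, pulling in a library is just one more dependency block in pom.xml (the artifact below is only an illustration; any published library is added the same way):

<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.12.0</version>
</dependency>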

When should someone use Maven:

One can use the Maven build tool in the following conditions:

 When there are a lot of dependencies in the project. It is easy to handle those dependencies using Maven.

 When dependency versions update frequently. One only has to update the version in the POM file to update a dependency.

 Continuous builds, integration, and testing can be easily handled by using Maven.

 When one needs an easy way to generate documentation from the source code, compile source code, and package compiled code into JAR or ZIP files; the corresponding commands are shown below.
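Each of these tasks maps to a standard Maven command run from the project directory:

mvn clean      (remove the previous build output, i.e. the target directory)
mvn compile    (compile the source code)
mvn test       (run the unit tests)
mvn package    (package the compiled code into a JAR or WAR, per the packaging element)
mvn site       (generate project documentation and reports)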
Exercise 7: Configure the Jenkins tool with the required paths, path
variables, users and pipeline views.

 Jenkins is an open-source automation server that enables developers to build, test, and deploy applications efficiently.

 It supports continuous integration and continuous delivery by automating various stages of the software development lifecycle.

 Jenkins is highly extensible through plugins, allowing it to integrate with numerous tools and technologies, making it a versatile choice for automating tasks such as building code, running tests, and deploying software.

 Its flexibility and ease of use have made Jenkins a leading tool in DevOps practices for enhancing productivity and maintaining high software quality.

Pipeline:

 Jenkins pipelines are used to define and automate the steps in the software development process, from building to testing and deploying applications.

 Jenkins pipelines are scripts that define the entire CI/CD workflow. A pipeline is a series of automated steps or stages (e.g., build, test, deploy) that can be written as code.

Stage: A block that contains a series of steps is known as a stage.

Step: A task that says what to do. The steps are defined inside a stage.
The Steps and Stages of Pipeline:

Development: A developer writes code and commits it to a version control system like Git. This is the starting point where changes are pushed to the repository.

Commit: This stage represents the integration of the newly committed code into the shared codebase. Once the code is committed, Jenkins automatically triggers the pipeline.

Build: Jenkins takes the code from the repository and compiles it into a build (e.g., a binary or package). If the build is successful, the process moves to the next stage; this ensures that the code can be compiled or assembled correctly.

Test: The newly built code is tested in this stage. Automated unit, integration, or functional tests are run to verify the code's correctness and functionality. Jenkins can integrate with various testing frameworks to automate this process.

Stage: In this stage, the built and tested code is prepared for deployment in a staging environment. It is often a replica of the production environment where the code is further validated and checked for any issues before being released.

Deploy: After passing tests and staging validation, the code is deployed to the development, QA (Quality Assurance), or other pre-production environments. It can also involve deployment to a production environment if it is a continuous deployment setup. This is where the final checks take place before releasing the software.

Production: Once all stages are complete, the code is moved to the production environment, where it becomes live and accessible to end-users.

The overall process depicted is a Continuous Integration/Continuous Delivery (CI/CD) workflow, where each stage ensures that changes are built, tested, and deployed in a consistent and automated way, improving software delivery speed and reliability. A sketch of such a pipeline follows.
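Put together, these stages translate naturally into a Declarative pipeline; a minimal sketch, assuming a Maven project in Git (the shell commands and the deploy.sh script are placeholders for your own build and deployment steps):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // compile and package the committed code
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                // run the automated tests and collect the JUnit reports
                sh 'mvn test'
                junit 'target/surefire-reports/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                // hand the artifact to the staging/production environment;
                // deploy.sh is a placeholder for your own deployment script
                sh './deploy.sh staging'
            }
        }
    }
}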

To create a pipeline in Jenkins:

 Log into Jenkins.

 In the Dashboard, select New Item.

 Type an item name and select Pipeline from the list of item types.
Click OK.

 In the Pipeline configuration page, click the Pipeline tab.

 Type your Pipeline code in the text area, as shown below.

 Click Save.

How to Generate Jenkins Pipeline Scripts?

To access the pipeline script generator, simply navigate to the /pipeline-syntax path on your Jenkins instance:

http://<your-jenkins-ip>:<port>/pipeline-syntax/

Alternatively, you can find the syntax generator path within your pipeline job
configuration, as shown below.
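For instance, selecting the git sample step in the generator and filling in a repository produces a one-line step that can be pasted into a pipeline; the output below is illustrative (the URL is the demo repository used later in this document):

git branch: 'main', url: 'https://github.com/devopscube/pipeline-as-code-demo'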
// Example of a Jenkins pipeline script

pipeline {
    agent any
    stages {
        stage("Build") {
            steps {
                // Just print Hello, Pipeline! to the console
                echo "Hello, Pipeline!"
                // Compile a Java file. This requires JDK configuration in Jenkins
                sh "javac HelloWorld.java"
                // Execute the compiled Java binary called HelloWorld. This also requires the JDK
                sh "java HelloWorld"
                // Execute the Apache Maven commands clean and package. This requires Apache Maven configuration in Jenkins
                sh "mvn clean package"
                // List the files in the current directory by executing a default shell command
                sh "ls -ltr"
            }
        }
        // ...and further stages if you want to define them
    } // End of stages
} // End of pipeline

It's easy to see the structure of a Jenkins pipeline from this sample script. Note that some commands, like java, javac, and mvn, are not available by default; they need to be installed and configured through Jenkins. Therefore:

A Jenkins pipeline is the way to execute a Jenkins job sequentially in a defined way, by codifying it and structuring it inside multiple blocks that can include multiple steps containing tasks.

How to build Jenkins pipelines:

There are three steps to build Jenkins pipelines:

1. Installation of JDK.
2. Installation of Jenkins.
3. Setup of Jenkins pipelines.

**The steps in this document should apply to any version of Windows.**

**Installation of JDK**

Step-1: Jenkins only runs on JDK 8 or JDK 11.

Step-2: Open the downloaded file and start the setup.

Step-3: Click on NEXT.

Step-4:

• We need to make a couple of changes here: turn the JAVA_HOME variable ON and JavaSoft (Oracle) registry keys OFF.

• Changing the installation location is **OPTIONAL**.

• Then click on NEXT.


Before Modification

After Modification

Step-5:

 Click on INSTALL. It will take just a few seconds.

 Then click on FINISH.

 The installation process of the JDK is DONE.

Step-6:

• The installation of the JDK is completed; however, the configuration is not completed yet. We need to modify our environment variables JAVA_HOME and Path.

• Search for EDIT THE SYSTEM ENVIRONMENT VARIABLES and click on it.

• Click on Environment Variables in the Edit the system environment variables window, and select JAVA_HOME.

• The first thing we want to do is remove the backslash at the end of the variable value.

• Then click on OK.

Step-7:

• Click on Path.

• We need to change the path shown below to reference the JAVA_HOME environment variable.

(a) Before Modification

(b) After Modification

 Then click on OK.

 After completion of the installation process, open the command prompt and check whether Java exists on our system.

 With the command java -version, we can see the Java version; both checks are shown below.
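A quick check from the command prompt (version numbers will vary):

java -version          (prints the installed Java version)
echo %JAVA_HOME%       (prints the JDK path configured above)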


**JDK installation process is completed**

**Installation Process of Jenkins**

Step-1:

• Click on Install.

Step-2: Select destination folder

• Select the destination folder to store your Jenkins installation and click on Next to continue.

• Here we need to change the destination folder: modify the folder from the Program Files folder to the tools folder.

(a) Before Modification

(b) After Modification


Step 3: Service logon credentials

• Now we will install and run Jenkins as a local or domain user.

• Enter the account name and password.

• Then click on Test Credentials; you will get a green tick mark beside the box.

• Then click on Next to continue.

Step-4:

• Port selection: specify the port on which Jenkins will be running, and use the Test Port button to validate whether the specified port is free on your machine. If the port is free, it will show a green tick mark as shown below.

• Then click on Next.
Step-5:

• The installation process checks for Java on your machine and prefills the dialog with the Java home directory. If the needed Java version is not installed on your machine, you will be prompted to install it. (This is the path of the JDK.)

• Once your Java home directory has been selected, click on Next to continue.

Step-6:

• Here we need to turn "Start Service" OFF.

Step-7:

 Click on the Install button to start the installation of Jenkins. It will take a few seconds to install.

Step-8:

• Click on Finish.

Step-9:

• Now we have a few more steps to do in order to configure Jenkins.

• We need to go back to where we installed it, which is C >> tools >> Jenkins.

 After clicking on Jenkins, you will be able to see the following. The first thing we want to modify is the Jenkins XML file.

 Right-click on the Jenkins XML file, then click on Edit.

 We want to control the Jenkins home directory, where all the data will live.

(a) Before Modification

(b) After Modification

For the executable, we want to change this to the JAVA_HOME environment variable. You can choose not to use the JAVA_HOME environment variable and instead point to a specific version of Java.

(a) Before Modification

(b) After Modification
 Now we change the arguments; for that, we need to modify the locations according to our setup, including the web root.

• Now we change the location of the pid file.

• All the changes we made to this file are primarily locations, i.e. exactly where we want our data to live. All of that is now controlled within the Jenkins XML file.

• After changing the location of the pid file, save the file and close it.
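After these edits, the relevant entries of the Jenkins XML file might look roughly like the sketch below; the exact elements depend on the Jenkins version, and the C:\tools paths are simply the locations chosen in this walkthrough:

<service>
  <id>jenkins</id>
  <!-- Jenkins home directory, where all the data will live -->
  <env name="JENKINS_HOME" value="C:\tools\jenkins\data"/>
  <!-- executable resolved through the JAVA_HOME environment variable -->
  <executable>%JAVA_HOME%\bin\java.exe</executable>
  <arguments>-Xrs -Xmx256m -jar "C:\tools\Jenkins\jenkins.war" --httpPort=8080 --webroot="C:\tools\jenkins\web"</arguments>
  <pidfile>C:\tools\jenkins\jenkins.pid</pidfile>
</service>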

• We have set everything up, but when we open the jenkins.err file, it shows that Jenkins has failed to create a temporary file.

• So, one thing that we need to do before starting the service is to create a "tmp" folder inside C >> tools >> Jenkins.

• Later, open "Services" and then start Jenkins, as shown below.

• Browse to localhost:8080 on your system.

• Enter the password; the command below shows where to find it.
• Then click on Install suggested plugins.

• Give it just a minute to download all the plugins.

• Later, enter the credentials asked for, and then click on Save and Continue.

• Click on Save and Finish.

• Jenkins is now installed; click on "Start Using Jenkins".

• Now you are ready to use Jenkins.

**Now we are ready to set up the Jenkins Pipeline**

Creating & Building a Jenkins Pipeline Job:


Follow the steps given below to create and build our pipeline as code.

Step 1: Go to Jenkins home and select “New Item”

Step 2: Give a name, select "Pipeline" and click OK.


Step 3: Scroll down to the Pipeline section, copy the whole pipeline code in
the script section and save it.

Step 4: Now, click “Build Now” and wait for the build to start.

During the job execution, you can monitor the progress of each stage in the
stage view. Below is a screenshot of a successfully completed job.
Additionally, you can access the job logs by clicking on the blue icon.
If you have installed the Blue Ocean plugin, you can enjoy a user-friendly
interface to view your job status and logs. Simply click on “Open in Blue
Ocean” on the left to access the job in the Blue Ocean view, as depicted
below.

Executing Jenkins Pipeline from Github (Jenkinsfile)

We will look at how to execute a pipeline script available in an SCM system like GitHub.

Step 1: Create a GitHub repo with our pipeline code in a file named Jenkinsfile. Or you can use this GitHub repo for testing: https://github.com/devopscube/pipeline-as-code-demo

Step 2: Follow the same steps we used for creating a pipeline job, but instead of entering the code directly into the script block, select the "Pipeline script from SCM" option and fill in the details as shown below.

1. Definition: Pipeline script from SCM

2. Repository URL: https://github.com/devopscube/pipeline-as-code-demo

3. Script Path: Jenkinsfile
Step 3: Save the configuration and run the build. You should see a
successful build.
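For reference, a Jenkinsfile at the root of such a repository can be as small as the sketch below (the stages and echo messages are illustrative; the demo repository's actual Jenkinsfile may differ):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the application...'
            }
        }
        stage('Test') {
            steps {
                echo 'Running the tests...'
            }
        }
    }
}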

Conclusion: In summary, Jenkins provides an essential tool for automating the software development lifecycle, from building and testing to deploying applications. By following the outlined steps for installing Jenkins and setting up pipelines, developers can streamline their CI/CD processes, improve code quality, and boost productivity. Jenkins' flexibility through plugins and integration with various tools makes it an indispensable asset in modern DevOps environments.
Exercise 8: Configure the Jenkins pipeline to call the build script jobs and
configure to run it whenever there is a change made to an application in the
version control system. Make a change to the background color of the
landing page of the web application and check if the configured pipeline
runs.
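One way to satisfy this exercise is to let the pipeline poll the version control system and rebuild on every new commit; a minimal sketch, assuming the web application lives in a Git repository (the URL, branch and build command are placeholders):

pipeline {
    agent any
    // check the repository every five minutes and run the pipeline when a change
    // (such as the landing-page background-color edit) has been committed
    triggers {
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/<your-account>/<your-webapp>.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
    }
}

Alternatively, a webhook from the repository host can notify Jenkins of changes instead of polling.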

This chapter covers all recommended aspects of Jenkins Pipeline functionality, including how to:

 get started with Pipeline — covers how to define a Jenkins Pipeline (i.e. your Pipeline) through Blue Ocean, through the classic UI or in SCM,

 create and use a Jenkinsfile — covers use-case scenarios on how to craft and construct your Jenkinsfile,

 work with branches and pull requests,

 use Docker with Pipeline — covers how Jenkins can invoke Docker containers on agents/nodes (from a Jenkinsfile) to build your Pipeline projects,

 extend Pipeline with shared libraries,

 use different development tools to facilitate the creation of your Pipeline, and

 work with Pipeline syntax — this page is a comprehensive reference of all Declarative Pipeline syntax.
What is Jenkins Pipeline?

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins
which supports implementing and integrating continuous delivery
pipelines into Jenkins.

A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.

Pipeline provides an extensible set of tools for modelling simple-to-complex delivery pipelines "as code" via the Pipeline domain-specific language (DSL) syntax.

The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project's source control repository. This is the foundation of "Pipeline-as-code": treating the CD pipeline as a part of the application to be versioned and reviewed like any other code.

Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:

 Automatically creates a Pipeline build process for all branches and pull requests.

 Code review/iteration on the Pipeline (along with the remaining source code).

 Audit trail for the Pipeline.

 Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
While the syntax for defining a Pipeline, either in the web UI or with
a Jenkinsfile is the same, it is generally considered best practice to define the
Pipeline in a Jenkinsfile and check that in to source control.

Declarative versus Scripted Pipeline syntax

A Jenkinsfile can be written using two types of syntax: Declarative and Scripted.

Declarative and Scripted Pipelines are constructed fundamentally differently. Declarative Pipeline is a more recent feature of Jenkins Pipeline which:

 provides richer syntactical features over Scripted Pipeline syntax, and


 is designed to make writing and reading Pipeline code easier.
Many of the individual syntactical components (or "steps") written into
a Jenkinsfile, however, are common to both Declarative and Scripted
Pipeline. Read more about how these two types of syntax differ in pipeline
concepts and pipeline syntax overview below.

Why Pipeline?

Jenkins is, fundamentally, an automation engine which supports a number of automation patterns. Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive CD pipelines. By modelling a series of related tasks, users can take advantage of the many features of Pipeline:

 Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
 Durable: Pipelines can survive both planned and unplanned restarts
of the Jenkins controller.
 Pausable: Pipelines can optionally stop and wait for human input or
approval before continuing the Pipeline run.
 Versatile: Pipelines support complex real-world CD requirements,
including the ability to fork/join, loop, and perform work in parallel.
 Extensible: The Pipeline plugin supports custom extensions to its
DSL and multiple options for integration with other plugins.
While Jenkins has always allowed rudimentary forms of chaining Freestyle
Jobs together to perform sequential tasks, Pipeline makes this concept a first-
class citizen in Jenkins.

Building on the core Jenkins value of extensibility, Pipeline is also extensible both by users with Pipeline Shared Libraries and by plugin developers.
The flowchart below is an example of one CD scenario easily modelled in
Jenkins Pipeline:

Pipeline concepts

The following concepts are key aspects of Jenkins Pipeline, which tie in
closely to Pipeline syntax (see the overview below).

Pipeline: A Pipeline is a user-defined model of a CD pipeline. A Pipeline's code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it. Also, a pipeline block is a key part of Declarative Pipeline syntax.

Node: A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline. Also, a node block is a key part of Scripted Pipeline syntax.

Stage: A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress.

Step: A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make, use the sh step: sh 'make'. When a plugin extends the Pipeline DSL, that typically means the plugin has implemented a new step.
Pipeline syntax overview

The following Pipeline code skeletons illustrate the fundamental differences between Declarative Pipeline syntax and Scripted Pipeline syntax.

Be aware that both stages and steps (above) are common elements of both
Declarative and Scripted Pipeline syntax.

Declarative Pipeline fundamentals

In Declarative Pipeline syntax, the pipeline block defines all the work done
throughout your entire Pipeline.

Jenkinsfile (Declarative Pipeline)

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // ...
            }
        }
        stage('Test') {
            steps {
                // ...
            }
        }
        stage('Deploy') {
            steps {
                // ...
            }
        }
    }
}
1. Execute this Pipeline or any of its stages, on any available agent.

2. Defines the "Build" stage.

3. Perform some steps related to the "Build" stage.

4. Defines the "Test" stage.

5. Perform some steps related to the "Test" stage.

6. Defines the "Deploy" stage.

7. Perform some steps related to the "Deploy" stage.

Scripted Pipeline fundamentals

In Scripted Pipeline syntax, one or more node blocks do the core work
throughout the entire Pipeline. Although this is not a mandatory requirement
of Scripted Pipeline syntax, confining your Pipeline’s work inside of
a node block does two things:

1. Schedules the steps contained within the block to run by adding an item to the Jenkins queue. As soon as an executor is free on a node, the steps will run.
2. Creates a workspace (a directory specific to that particular Pipeline)
where work can be done on files checked out from source control.
Caution: Depending on your Jenkins configuration, some
workspaces may not get automatically cleaned up after a period of
inactivity. See tickets and discussion linked from JENKINS-2111 for
more information.
Jenkinsfile (Scripted Pipeline)

node {
    stage('Build') {
        // ...
    }
    stage('Test') {
        // ...
    }
    stage('Deploy') {
        // ...
    }
}

1. Execute this Pipeline or any of its stages, on any available agent.

Defines the "Build" stage. stage blocks are optional in Scripted Pipeline
syntax. However, implementing stage blocks in a Scripted Pipeline
2.
provides clearer visualization of each stage's subset of tasks/steps in the
Jenkins UI.

3. Perform some steps related to the "Build" stage.

4. Defines the "Test" stage.

5. Perform some steps related to the "Test" stage.

6. Defines the "Deploy" stage.

7. Perform some steps related to the "Deploy" stage.

Pipeline example

Here is an example of a Jenkinsfile using Declarative Pipeline syntax (on the Jenkins documentation site, a Scripted syntax equivalent can be viewed via the Toggle Scripted Pipeline link):
Jenkinsfile (Declarative Pipeline)

pipeline {
    agent any
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                sh 'make publish'
            }
        }
    }
}


1. pipeline is Declarative Pipeline-specific syntax that defines a "block" containing all content and instructions for executing the entire Pipeline.

2. agent is Declarative Pipeline-specific syntax that instructs Jenkins to allocate an executor (on a node) and workspace for the entire Pipeline.

3. stage is a syntax block that describes a stage of this Pipeline. Read more about stage blocks in Declarative Pipeline syntax on the Pipeline syntax page. As mentioned above, stage blocks are optional in Scripted Pipeline syntax.

4. steps is Declarative Pipeline-specific syntax that describes the steps to be run in this stage.

5. sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command.

6. junit is another Pipeline step (provided by the JUnit plugin) for aggregating test reports.

7. sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command.
