DevOps Record
SDLC Models:
Waterfall model
Incremental model
Iterative model
Spiral model
INCREMENTAL MODEL
Here, we develop software in small parts and test each piece individually
with users for feedback.
Each increment in the incremental development process adds a new feature.
An example is creating an MVP (minimum viable product) featuring only the
core functionality and then adding new features based on user feedback.
ITERATIVE MODEL:
The iterative development process also follows a mix of the Waterfall and
Agile development approaches.
The difference is that we develop a product version with all features
and functionalities and release it to the market for user feedback.
Then, based on the feedback received, we upgrade the product features.
SPIRAL MODEL:
Requirements gathering
Design the requirements
Construction
Testing
Deployment
Feedback
AGILE TESTING METHODS
Scrum
eXtreme Programming (XP)
SCRUM
SCRUM is an agile development process. There are three roles in it, and
their responsibilities are:
Scrum Master: The Scrum Master sets up the team, arranges the meetings
and removes obstacles from the process.
Product Owner: The Product Owner creates the product backlog, prioritizes
the backlog and is responsible for the delivery of functionality in each
iteration.
Scrum Team: The team manages and organizes its own work to complete the
sprint or cycle, focusing primarily on ways to manage tasks in team-based
development conditions.
ADVANTAGES OF AGILE
Flexibility
Focus on Customer Value
Faster Delivery
Software Quality
Customer Satisfaction
Efficiency
CONCLUSION
Agile methods break tasks into smaller iterations. The project scope and
requirements are laid down at the beginning of the development process. It is an
iterative software development approach where value is provided to users in small
increments rather than through a single large launch.
There are many different forms of the agile development method, including
scrum, crystal, extreme programming (XP), and feature-driven development
(FDD).
EXTREME PROGRAMMING
EXTREME PROGRAMMING (XP) is an Agile software development
methodology that focuses on delivering high-quality software through frequent and
continuous feedback, collaboration, and adaptation. XP emphasizes a close working
relationship between the development team, the customer, and stakeholders, with
an emphasis on rapid, iterative development and deployment.
Design: At this stage of the project the team must define the main features
of the future code.
o The team creates only the essential design needed for current user
stories, using a common analogy or story to help everyone
understand the overall system architecture and keep the design
straightforward and clear.
o Extreme Programming developers often share responsibilities at the
design stage. Each developer is responsible for the design of a
certain part of the code.
Testing: XP places great importance on testing, which consists of both unit
tests and acceptance tests (a short unit-test sketch follows below).
o Unit tests, which are automated, check if specific features work
correctly.
o Acceptance tests, conducted by customers, ensure that the overall
system meets the initial requirements.
o This continuous testing ensures the software’s quality and alignment
with customer needs.
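A minimal sketch of such an automated unit test, assuming a hypothetical
Calculator class with an add method; JUnit 4 is used here because it is the test
dependency declared in the Maven examples later in this record.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Unit test for a single feature of a hypothetical Calculator class.
    // In XP these automated tests run continuously, while acceptance tests
    // are defined with the customer against the user stories.
    public class CalculatorTest {

        @Test
        public void addReturnsSumOfTwoNumbers() {
            Calculator calculator = new Calculator();
            assertEquals(5, calculator.add(2, 3));
        }
    }

    // Minimal class under test, shown only to keep the sketch self-contained.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

An acceptance test, by contrast, would exercise the same feature end to end
against the customer's user story, typically through the application's interface
rather than a single class.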
2.SIMPLE DESIGN:
Use the simplest possible design that gets the job done.
In XP, design is not a one-time activity; it is an “all-the-time” activity.
There are design steps in:
Release planning
Iteration planning
In addition, teams engage in quick design sessions and design revisions through
refactoring throughout the course of the entire project.
3.METAPHOR:
The XP Metaphor is a central concept in Extreme Programming (XP), an
Agile software development methodology.
It provides a simple, concrete idea or image used to help understand
complex or abstract concepts and guide the development process.
The Extreme programming (XP) Metaphor helps the development team
focus on delivering the most important features and functionality to the
customer, and it provides a framework for prioritizing and managing the
development process.
Example:
A team working on a project to develop a new mobile app might use the
metaphor of a “digital assistant” to guide their development efforts.
4.CONTINUOUS TESTING:
The XP model gives high importance to testing and considers it to be
the primary factor in developing fault-free software.
XP teams focus on validation of the software at all times.
Programmers develop software by writing tests first, and then code that
fulfills the requirements reflected in the tests.
5.REFACTORING:
Refactoring is the practice of improving the internal structure of existing
code without changing its external behaviour.
XP teams refactor continuously, removing duplication and simplifying the
design so that the code stays easy to understand and change.
Because the code is covered by automated tests, developers can refactor with
confidence that behaviour has not changed (a small sketch follows below).
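A small, hypothetical Java sketch of such a refactoring: a duplicated discount
rule is extracted into one helper method, so the structure improves while the
external behaviour stays the same.

    // Before refactoring: the discount rule is duplicated in two places.
    class OrderBefore {
        double totalWithDiscount(double amount) {
            return amount - (amount * 0.10);
        }

        double shippingWithDiscount(double shipping) {
            return shipping - (shipping * 0.10);
        }
    }

    // After refactoring: the rule lives in one well-named method.
    // External behaviour is unchanged; the design is simpler and easier to test.
    class OrderAfter {
        double totalWithDiscount(double amount) {
            return applyDiscount(amount);
        }

        double shippingWithDiscount(double shipping) {
            return applyDiscount(shipping);
        }

        private double applyDiscount(double value) {
            return value - (value * 0.10);
        }
    }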
6.PAIR PROGRAMMING:
XP encourages pair programming where two developers work together
at the same workstation.
This approach helps in knowledge sharing, reduces errors, and improves
code quality.
Pairing, in addition to providing better code and tests, also serves to
communicate knowledge throughout the team.
7.COLLECTIVE CODE OWNERSHIP:
Shared Responsibility: All team members are responsible for the entire
codebase. Any developer can modify any part of the code, ensuring no
one is the sole owner of a particular section.
Increases Flexibility: Developers can work on different parts of the
project as needed, allowing the team to respond quickly to changes or
issues.
Fewer Bottlenecks: Since no one person "owns" the code, work is not
blocked if a specific developer is unavailable.
Encourages Knowledge Sharing: This practice encourages everyone
to be familiar with the whole codebase, fostering team collaboration and
knowledge transfer.
8.CONTINUOUS INTEGRATION:
In XP, developers integrate their code into a shared repository several
times a day. This helps to detect and resolve integration issues early on
in the development process.
When teams integrate infrequently, integration, although critical to shipping
good working code, is something the team is not practiced at, and it is often
delegated to people who are not familiar with the whole system.
Code freezes then mean long periods when programmers could be working on
important shippable features, but those features must be held back.
9.40-HOUR WEEK:
Programmers go home on time.
In crunch mode, up to one week of overtime is allowed.
Multiple consecutive weeks of overtime are treated as a sign that
something is very wrong with the process and/or schedule.
10.ON-SITE CUSTOMER:
XP requires an on-site customer who works closely with the
development team throughout the project.
This approach helps to ensure that the customer’s needs are understood
and met, and also facilitates communication and feedback.
For initiatives with many customers, a customer representative (e.g. a
Product Manager) is designated for development team access.
11.CODING STANDARDS:
Everyone codes to the same standards.
The specifics of the standard are not important; what is important is that
all of the code looks familiar, in support of collective ownership.
TEST-DRIVEN DEVELOPMENT (TDD):
Tests First: In TDD, developers write tests before writing the actual
code. The tests define the desired behavior, and the code is written to
pass those tests (a short sketch of this cycle follows after this list).
Immediate Feedback: By running the tests frequently, developers get
immediate feedback on whether their code works as expected.
Improves Design: Writing tests first encourages developers to think
carefully about the design of their code, resulting in simpler, more
modular designs.
Reduces Bugs: TDD ensures that the code is thoroughly tested from the
start, leading to fewer bugs and more reliable software.
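A short sketch of the test-first cycle, assuming a hypothetical
StringUtils.reverse helper: the test is written first (and initially fails), then
just enough code is written to make it pass, and the code is refactored while the
test stays green.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1 (red): write the test first. It fails until reverse() exists
    // and returns the expected result.
    public class StringUtilsTest {

        @Test
        public void reverseReturnsCharactersInOppositeOrder() {
            assertEquals("cba", StringUtils.reverse("abc"));
            assertEquals("", StringUtils.reverse(""));
        }
    }

    // Step 2 (green): write the simplest code that makes the test pass.
    // Step 3 (refactor): clean up while keeping the test green.
    class StringUtils {
        static String reverse(String input) {
            return new StringBuilder(input).reverse().toString();
        }
    }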
CONCLUSION:
Extreme Programming is not a complete template for the entire delivery
organization.
Rather, XP is a set of best practices for managing the development team
and its interface to the customer.
As a process it gives the team the ability to grow, change and adapt as
they encounter different applications and business needs.
Exercise 3: It is important to comprehend the need to automate the software
development lifecycle stages through DevOps. Gain an understanding of the
capabilities required to implement DevOps, continuous integration and
continuous delivery practices.
DevOps Lifecycle
1. Continuous Development
2. Continuous Integration
3. Continuous Testing
4. Continuous Monitoring
5. Continuous Feedback
6. Continuous Deployment
7. Continuous Operations
1. Continuous Development
Continuous development involves writing, designing, and reviewing code.
This phase aims to produce high-quality code efficiently
2. Continuous Integration
Continuous integration focuses on automating the process of merging code
changes into a central repository. This ensures that new code integrates
seamlessly with existing code.
3. Continuous Testing
Continuous testing involves running automated tests to detect bugs and
ensure that the software meets quality standards. This ensures that any new
code changes do not introduce errors.
4. Continuous Monitoring
Continuous monitoring involves collecting and analyzing data about the
software’s performance and availability. This helps identify potential issues
early on.
5. Continuous Feedback
Continuous feedback involves gathering feedback from stakeholders
throughout the development process. This feedback is used to improve the
software and make it better meet user needs.
6. Continuous Deployment
Continuous deployment involves automatically deploying new code
changes to production environments. This ensures that new features are
released to users quickly and efficiently.
7. Continuous Operations
Continuous operations involve managing and maintaining the software in
production environments. This includes tasks such as monitoring,
troubleshooting, and scaling.
1. Agile Development
Agile development is a popular methodology that emphasizes iterative
development and collaboration. It allows for quick adjustments to meet
changing user needs.
2. Code Reviews
Code reviews ensure that code meets quality standards and adheres to best
practices. They help identify potential bugs and vulnerabilities.
3. Version Control
Version control systems allow teams to track changes made to code, revert
to previous versions, and collaborate on projects efficiently.
4. Automated Testing
Automated testing is integrated into the development process to identify
bugs and ensure that the software meets quality standards early on.
Continuous Integration
2. Automated Testing
Automated tests are run on every code change to detect bugs and ensure
that the software meets quality standards. This helps identify issues early
on.
3. Code Merging
Continuous integration focuses on merging code changes into a central
repository frequently. This ensures that new code integrates seamlessly with
existing code.
4. Feedback Loops
Continuous integration relies on feedback loops to provide information
about the status of the build and any potential issues. This allows teams to
quickly identify and fix problems.
Continuous Testing
1. Unit Testing
Unit testing verifies the functionality of individual units of code, such as
functions or classes. This ensures that each component works as expected
(a sketch contrasting unit and integration tests follows this list).
2. Integration Testing
Integration testing verifies that different components of the software work
together as expected. This ensures that the software functions as a whole.
3. System Testing
System testing verifies that the software meets all functional and non-
functional requirements. This ensures that the software meets user needs
and operates as expected.
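A hypothetical sketch contrasting the first two levels: the unit test exercises
one class in isolation, while the integration test checks that two components
work together (JUnit 4 style, matching the dependency used elsewhere in this
record).

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Component 1: pure calculation logic.
    class TaxCalculator {
        double taxFor(double amount) {
            return amount * 0.18;
        }
    }

    // Component 2: uses TaxCalculator to build an invoice total.
    class InvoiceService {
        private final TaxCalculator taxCalculator = new TaxCalculator();

        double totalFor(double amount) {
            return amount + taxCalculator.taxFor(amount);
        }
    }

    public class TestingLevelsTest {

        // Unit test: verifies a single unit of code in isolation.
        @Test
        public void taxCalculatorComputesEighteenPercent() {
            assertEquals(18.0, new TaxCalculator().taxFor(100.0), 0.001);
        }

        // Integration test: verifies that InvoiceService and TaxCalculator work together.
        @Test
        public void invoiceServiceAddsTaxToAmount() {
            assertEquals(118.0, new InvoiceService().totalFor(100.0), 0.001);
        }
    }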
Continuous Monitoring, Continuous Feedback
1. Performance Monitoring
Performance monitoring tracks key metrics such as response times, CPU
usage, and memory consumption to identify potential performance
bottlenecks.
2. User Feedback
User feedback is gathered through surveys, reviews, and analytics to
understand user needs and identify areas for improvement.
1. Deployment Pipelines
Deployment pipelines automate the process of deploying software to
production environments. This ensures that new code changes are released
quickly and efficiently.
2. Infrastructure as Code
Infrastructure as code allows teams to manage infrastructure resources
using code. This enables automated provisioning and configuration of
servers and other resources.
3. Continuous Delivery
Continuous delivery automates the process of preparing and validating new
code changes so that they are always ready to be released to production. This
ensures that new features can be released to users quickly and reliably.
Continuous integration aims to improve the quality and reliability of software
by detecting and fixing bugs early on, whereas continuous delivery aims to
deliver new features and improvements to users quickly and efficiently.
Puppet
Ansible
Nagios
Jenkins
An open-source continuous integration and continuous delivery tool that
helps automate the build, test, and deployment process.
Git
A version control system that allows teams to track changes made to code,
revert to previous versions, and collaborate on projects efficiently.
Selenium
SonarQube
Git:
Advantages of Git:
Disadvantages of Git
Advantages of GitHub:
Disadvantages of GitHub:
Testing Methods:
What is Dynamic Testing?
1. It happens during the execution of the code.
2. It can help identify defects because it also looks at how the code
integrates with other databases and servers.
Tools used for dynamic testing (a minimal Selenium sketch follows this list):
1. Selenium
2. Katalon
3. CasperJS
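A minimal dynamic-testing sketch using the Selenium Java bindings (the
selenium-java dependency also appears in the Maven example later in this record).
The URL and expected page title are purely illustrative, and a chromedriver
binary is assumed to be available on the machine.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Dynamic testing: the application is exercised while it runs,
    // here by driving a real browser against a deployed page.
    public class DynamicTestExample {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();   // assumes chromedriver is installed
            try {
                driver.get("http://localhost:8080/calculator/");  // illustrative URL
                String title = driver.getTitle();
                if (!"Calculator".equals(title)) {                // illustrative expected title
                    throw new AssertionError("Unexpected page title: " + title);
                }
                System.out.println("Dynamic check passed, title = " + title);
            } finally {
                driver.quit();   // always release the browser
            }
        }
    }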
What is Static Testing?
Tools used for static testing:
1. SonarQube
2. Lint
3. PMD
Review: a static testing technique in which the code and related documents are
examined manually to find defects.
Static analysis, also called static code analysis, is a method used to analyze
software source code, checking it for correctness and revealing a wide variety
of information such as the structure of the models used, data and control flow,
syntax accuracy and more (an illustrative fragment follows below).
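A hypothetical Java fragment showing the kinds of issues static analysis tools
typically report without executing the code: an unused variable, a resource that
is never closed, and a possible null dereference.

    import java.io.FileReader;
    import java.io.IOException;

    public class StaticAnalysisExample {

        static int firstLineLength(String path) throws IOException {
            String unused = "never read";              // flagged: unused local variable
            FileReader reader = new FileReader(path);  // flagged: resource is never closed
            String line = readLine(reader);
            return line.length();                      // flagged: possible null dereference
        }

        private static String readLine(FileReader reader) throws IOException {
            int c = reader.read();
            return c == -1 ? null : String.valueOf((char) c);
        }
    }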
SonarQube
What is SonarQube?
SonarQube is an open-source platform for the continuous inspection of code
quality. Among other things, it measures technical debt, which directly
translates to the implied cost of additional rework that can occur if, at an
early stage, an easy but not efficient solution is chosen. In future, the easy
code may restrict scalability. A common source of such debt is having no
coding standards.
SonarQube Architecture
SonarQube Architecture can be classified in four components:
1. Sonar Scanner:
Purpose: The Sonar Scanner is a tool that collects source code and
sends it to the SonarQube server for analysis.
2. Source Code:
Purpose: This is the actual codebase that you want to analyze for
quality and security.
Functionality: The source code is the input for the Sonar Scanner. It
includes all the files and directories that make up your project. The
quality of this code is what SonarQube aims to measure and improve.
3. Sonar Analyzer:
Purpose: The analyzer examines the submitted source code against the
configured rules and quality profiles, computing metrics and detecting
issues such as bugs, vulnerabilities and code smells.
4. Database:
Purpose: The database stores all the analysis results and configuration
settings.
1. Java: Oracle JRE 17 or OpenJDK 17 to run both the server and the
scanners.
1. Verify whether your system has Java installed. If not, install it using the
following steps.
Installing JDK:
Go to the URL:
https://www.oracle.com/in/java/technologies/downloads/#jdk17-windows
Double-click the MSI installer file to open the installation wizard for
Java SE Development Kit 17.0.6, then click Next, click Next again, and it
will be installed.
Go to the Java folder, open the java-17 directory that contains bin, and
copy that path.
Click the New button under the System variables section to add a
new system environment variable.
Enter JAVA_PATH as the variable name and the copied Java path as the
variable value.
Click OK to save the new system variable.
Select the Path variable under the System variables section in the
Environment Variables window.
Open http://localhost:9000 (or http://<IP>:9000) in a browser.
SonarQube Dashboard
Exercise 6: Write a build script to build the application using a build
automation tool like Maven. Create a folder structure that will run the
build script and invoke the various software development build stages.
This script should invoke the static analysis tool and unit test cases and
deploy the application to a web application server like Tomcat.
What is MAVEN:
Maven is a build automation and project management tool, used primarily for
Java projects.
We can add JARs and other dependencies of the project easily with the
help of Maven.
With the help of Maven we can build any number of projects into
output types like JAR, WAR etc. without doing any scripting.
Using Maven we can easily integrate our project with a source control
system (such as Subversion or Git).
How MAVEN works:
POM Files: Project Object Model (POM) files are XML files that contain
information about the project and configuration details such as
dependencies, source directory, plugins, goals etc. used by Maven to build the
project. When you execute a Maven command, you give Maven a POM file to
execute the command against. Maven reads the pom.xml file to accomplish its
configuration and operations.
Build Life Cycles, Phases and Goals: A build life cycle consists of a
sequence of build phases, and each build phase consists of a sequence of
goals. A Maven command is the name of a build life cycle, phase or goal. If a
life cycle is requested, all build phases in that life cycle are executed. If a
build phase is requested, all build phases before it in the defined sequence
are executed too.
Build Profiles: Build profiles are sets of configuration values which allow you
to build your project using different configurations. For example, you may
need to build your project for your local computer, for development and for
test. To enable different builds you can add different build profiles to your
POM file using its profiles element; they are triggered in a variety of ways.
Build Plugins: Build plugins are used to perform a specific goal. You can add
a plugin to the POM file. Maven has some standard plugins you can use, and
you can also implement your own in Java (a minimal sketch follows below).
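To illustrate the last point, a minimal sketch of a custom plugin goal written in
Java. The package, goal name and message parameter are hypothetical; such a class
would live in a project with <packaging>maven-plugin</packaging> and the
maven-plugin-api and maven-plugin-annotations dependencies.

    package com.example.buildinfo;   // hypothetical package

    import org.apache.maven.plugin.AbstractMojo;
    import org.apache.maven.plugin.MojoExecutionException;
    import org.apache.maven.plugins.annotations.LifecyclePhase;
    import org.apache.maven.plugins.annotations.Mojo;
    import org.apache.maven.plugins.annotations.Parameter;

    // Defines a goal named "greet", bound by default to the validate phase.
    @Mojo(name = "greet", defaultPhase = LifecyclePhase.VALIDATE)
    public class GreetingMojo extends AbstractMojo {

        // Configurable from the plugin's <configuration> block or -Dgreet.message=...
        @Parameter(property = "greet.message", defaultValue = "Hello from a custom Maven plugin")
        private String message;

        @Override
        public void execute() throws MojoExecutionException {
            // getLog() writes to the normal Maven build output.
            getLog().info(message);
        }
    }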
1. Verify whether your system has Java installed. If not, install it using
the following steps.
Installing JDK:
Go to the
URL:http://www.oracle.com/technetwork/java/javase/ downloads/index.html
3. Once the download is over, start the .exe file by double-clicking on it.
4. Now click on Next repeatedly until the installation completes.
Once done with the installation of Java, we need to set the environment
variables in order to use the JDK with Eclipse.
Go to MyComputer from the startup menu->Go to Properties->Go to
Advanced System Settings->From the Popup ->Go to Environment
Variables.
Click on Add.
Enter the path of the JDK installation folder on your machine, up to the
folder’s bin directory. For example: C:\Program
Files\Java\jdk1.7.0_05\bin
Once we have created the Environment variables, we are done with the
process of JDK installation.
Visit the Maven download page and download the version of Maven
you want to install. The Files section contains the archives of the
latest version. Access earlier versions using the archives link in the
Previous Releases section.
Select the Path variable under the System variables section in the
Environment Variables window. Click the Edit button to edit the variable.
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.javatpoint.application1</groupId>
<artifactId>my-application1</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.8.2</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
<project
xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>ETA</groupId>
<artifactId>Calculator</artifactId>
<packaging>war</packaging>
<version>0.0.1-SNAPSHOT</version>
<name>calculator</name>
<url>http://calculator</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>3.6.0</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<finalName>calculator</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.1.1</version>
<configuration>
<archive>
<manifestEntries>
<version>${project.version}</version>
</manifestEntries>
</archive>
</configuration>
</plugin>
<plugin>
<groupId>org.sonarsource.scanner.maven</groupId>
<artifactId>sonar-maven-plugin</artifactId>
<version>3.2</version>
</plugin>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.9</version>
<executions>
<execution>
<id>default-prepare-agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>default-report</id>
<phase>prepare-package</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
<execution>
<id>default-check</id>
<goals>
<goal>check</goal>
</goals>
<configuration>
<rules>
<!-- implementation is needed only for Maven 2-->
<rule
implementation="org.jacoco.maven.RuleConfiguration">
<element>BUNDLE</element>
<limits>
<!-- implementation is needed only for Maven 2 -->
<limit
implementation="org.jacoco.report.check.Limit">
<counter>COMPLEXITY</counter>
<value>COVEREDRATIO</value>
<minimum>0.10</minimum>
</limit>
</limits>
</rule>
</rules>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>ut</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<includes>
<include>**/Calculaterut.java</include>
</includes>
</configuration>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>it</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<includes>
<include>**/CalculatorIT.java</include>
</includes>
</configuration>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>pt</id>
<build>
<plugins>
<plugin>
<groupId>com.lazerycode.jmeter</groupId>
<artifactId>jmeter-maven-plugin</artifactId>
<version>2.4.0</version>
<executions>
<execution>
<id>jmeter-tests</id>
<phase>test</phase>
<goals>
<goal>jmeter</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
groupId: the id of the project’s group. It is unique, and most often you will
use a group ID that is similar to the root Java package name of the project,
like the groupId com.project.loggerapi.
artifactId: gives the name of the project you are building. In our example
the name of the project is LoggerApi.
version: contains the version number of the project. If your project has been
released in different versions, it is useful to give the version of your
project.
Other elements of the pom.xml file:
scope: defines the scope for this Maven dependency, which can be compile,
runtime, test, provided, system etc.
Local repository
Central repository
Remote repository
Cons:
Maven requires a Maven installation on the system in order to work, and a
Maven plugin for the IDE.
Pros:
Maven can add all the dependencies required for the project
automatically by reading the pom file.
One can easily build their project to jar, war etc. as per their
requirements using Maven.
Adding a new dependency is very easy. One has to just write the
dependency code in the pom file.
One can use the Maven build tool in the following condition:
When there are a lot of dependencies for the project, it is easy
to handle those dependencies using Maven.
Pipeline:
Jenkins pipelines are used to define and automate the steps in the
software development process, from building to testing and deploying
applications.
Jenkins pipelines are scripts that define the entire CI/CD workflow. A
pipeline is a series of automated steps or stages (e.g., build, test, deploy)
that can be written as code.
Step: A task that says what to do. The steps are defined inside a stage.
The Steps and Stages of Pipeline:
Commit: This stage represents the integration of the newly committed code
into the shared codebase. Once the code is committed, Jenkins automatically
triggers the pipeline.
Build: Jenkins takes the code from the repository and compiles it into a
build (e.g., a binary or package). If the build is successful, the process
moves to the next stage; this ensures that the code can be compiled or
assembled correctly.
Test: The newly built code is tested in this stage. Automated unit,
integration, or functional tests are run to verify the code’s correctness and
functionality. Jenkins can integrate with various testing frameworks to
automate this process.
Stage: In this stage, the built and tested code is prepared for deployment in a
staging environment. It is often a replica of the production environment
where the code is further validated and checked for any issues before being
released.
Deploy: After passing tests and staging validation, the code is deployed to
the development, QA (Quality Assurance), or other pre-production
environments. It can also involve deployment to a production environment if
it's a continuous deployment setup. This is where the final checks take place
before releasing the software.
Production: Once all stages are complete, the code is moved to the
production environment where it becomes live and accessible to end-users.
Type an item name and select Pipeline from the list of item types.
Click OK.
Click Save.
http://<your-jenkins-ip>:<port>/pipeline-syntax/
Alternatively, you can find the syntax generator path within your pipeline job
configuration, as shown below.
// Example of Jenkins pipeline script
pipeline {
    agent any
    stages {
        stage("Build") {
            steps {
                // Compile the Java source file. This requires JDK configuration from Jenkins
                // (on a Windows agent, use bat instead of sh)
                sh 'javac HelloWorld.java'
                // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
                sh 'java HelloWorld'
                // Executes the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
                sh 'mvn clean package'
            } // End of steps
        } // End of stage
    } // End of stages
} // End of pipeline
It's easy to see the structure of a Jenkins pipeline from this sample script.
Note that some commands, like java, javac, and mvn, are not available by
default, and they need to be installed and configured through Jenkins.
Therefore:
1. Installation of the JDK.
2. Installation of Jenkins.
3. Set up Jenkins pipelines.
The steps in this document should apply to any version of Windows.
Installation of JDK
Step-1: Jenkins only runs on JDK 8 or JDK 11.
After Modification
Step-5:
• First, remove the backslash at the end of the variable value.
• Then click on OK.
Step-7:
• Click on Path.
• We need to change the path shown below to reference the Java home
environment variable.
(a) Before Modification
• Click on Install.
Step-4:
• Port selection: specify the port on which Jenkins will run, and use the
Test Port button to validate whether the specified port is free on your
machine. If the port is free, a green tick mark is shown as below; then
click on Next.
• Then click on Next.
Step-5:
• The installation process checks for Java on your machine and prefills
the dialog with the Java home directory. If the needed Java version is
not installed on your machine, you will be prompted to install it.(This
is the path of JDK)
• Once your Java home directory has been selected, click on Next to
continue.
Step-6:
Step-8:
• Click on Finish.
Step-9:
After clicking on Jenkins, you will be able to see the following. The first
thing we want to modify is the Jenkins XML file.
Right-click on the Jenkins XML file, then click on Edit.
Set the Jenkins home directory to control where all the data will live.
• After changing the location of the pid file, save the file and close it.
• We have set up everything, but when we open the jenkins.err file, it
shows that Jenkins has failed to create a temporary file.
• So one thing we need to do before starting the service is create a
“tmp” folder inside c >> tools >> jenkins.
• Later, open “Services” and then start Jenkins, as shown
below.
• Browse to localhost:8080 on your system.
• Enter the password.
• Then enter the requested credentials and click on Save and
Continue.
• Click on Save and Finish.
Step 4: Now, click “Build Now” and wait for the build to start.
During the job execution, you can monitor the progress of each stage in the
stage view. Below is a screenshot of a successfully completed job.
Additionally, you can access the job logs by clicking on the blue icon.
If you have installed the Blue Ocean plugin, you can enjoy a user-friendly
interface to view your job status and logs. Simply click on “Open in Blue
Ocean” on the left to access the job in the Blue Ocean view, as depicted
below.
Step 2: Follow the same steps we used for creating a pipeline job. But
instead of entering the code directly into the script block, select the “Pipeline
script from SCM” option and fill in the details as shown below.
1. Definition: Pipeline script from SCM
Step 3: Save the configuration and run the build. You should see a
successful build.
Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins
which supports implementing and integrating continuous delivery
pipelines into Jenkins.
Why Pipeline?
Pipeline concepts
The following concepts are key aspects of Jenkins Pipeline, which tie in
closely to Pipeline syntax (see the overview below).
Be aware that both stages and steps (above) are common elements of both
Declarative and Scripted Pipeline syntax.
In Declarative Pipeline syntax, the pipeline block defines all the work done
throughout your entire Pipeline.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // build steps go here
            }
        }
        stage('Test') {
            steps {
                // test steps go here
            }
        }
        stage('Deploy') {
            steps {
                // deploy steps go here
            }
        }
    }
}
1. Execute this Pipeline or any of its stages, on any available agent.
In Scripted Pipeline syntax, one or more node blocks do the core work
throughout the entire Pipeline. Although this is not a mandatory requirement
of Scripted Pipeline syntax, confining your Pipeline’s work inside of
a node block does two things:
node {
    stage('Build') { /* build steps */ }
    stage('Test') { /* test steps */ }
    stage('Deploy') { /* deploy steps */ }
}
Defines the "Build" stage. stage blocks are optional in Scripted Pipeline
syntax. However, implementing stage blocks in a Scripted Pipeline
2.
provides clearer visualization of each stage's subset of tasks/steps in the
Jenkins UI.
Pipeline example
pipeline {
    agent any
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps { sh 'make' }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml'
            }
        }
        stage('Deploy') {
            steps { sh 'make publish' }
        }
    }
}
3. stage is a syntax block that describes a stage of this Pipeline. Read more
about stage blocks in Declarative Pipeline syntax on the Pipeline Syntax
page. As mentioned above, stage blocks are optional in Scripted Pipeline
syntax.