Everything About DevOps
Continuous Deployment
Automation
Continuous Monitoring
Security
Puppet
Nagios
Docker
ELK (Elasticsearch, Logstash, Kibana)
Q6) What is Git and explain the difference between Git and SVN?
Git is a source code management (SCM) tool which handles small as well as large projects with efficiency. It
is basically used to store our repositories on a remote server such as GitHub.
Git vs SVN
Git is a decentralized version control tool, whereas SVN is a centralized version control tool.
Push and pull operations are fast in Git, whereas they are slower in SVN.
Git belongs to the 3rd generation of version control tools, whereas SVN belongs to the 2nd generation.
Git commits can be done offline too, whereas SVN commits can be done only online.
Good performance
Idempotent
Very Easy to learn
Declarative not procedural
Roles: a set of tasks for accomplishing a certain role.
Playbooks: map hosts to roles.
Downloads
You can’t see anything with name docker.tar.gz
RHEL 6.5+
CentOS 6+
Gentoo
ArchLinux
openSUSE 12.3+
CRUX 3.0+
Cloud:
Amazon EC2
A SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the commit hash).
Q30) Explain the difference between git pull and git fetch?
Git pull command basically pulls any new changes or commits from a branch from your central repository
and updates your target branch in your local repository.
Git fetch is also used for the same purpose, but it is slightly different from git pull. When you trigger a git
fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository.
If we want to reflect these changes in the target branch, git fetch must be followed by a git merge. Our
target branch will only be updated after merging the target branch and the fetched branch. To make it easy
to remember: git pull = git fetch + git merge.
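As a quick illustrative sketch (assuming a remote named origin and a branch named master), the two workflows look like this:
git fetch origin          # download new commits into origin/master; the working tree is untouched
git merge origin/master   # now update the local target branch with the fetched commits
git pull origin master    # does both of the above in a single step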
Q31) How do we know in Git if a branch has already been merged into master?
git branch --merged
The above command lists the branches that have been merged into the current branch.
In this task branching model each task is implemented on its own branch with the task key included in the
branch name. It is quite easy to see which code implements which task, just look for the task key in the
branch name.
Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form
a Release branch. Creating this release branch starts the next release cycle, so no new features can be
added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go
in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version
number. In addition, it should be merged back into develop branch, which may have progressed since the
release was initiated earlier.
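A hedged sketch of that release flow with plain git commands (the branch and version names are only illustrative):
git checkout -b release/1.2 develop      # cut the release branch from develop
git checkout master
git merge release/1.2                    # ship the release
git tag -a v1.2 -m "Release 1.2"
git checkout develop
git merge release/1.2                    # carry any release-branch fixes back into develop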
For every code commit, an automatic build and report notification is generated.
To notify developers about build success or failure, it can be integrated with an LDAP mail server.
It supports continuous integration in agile and test-driven development environments.
With simple steps, Maven release projects can also be automated.
CVS
Subversion
Git
Mercurial
Perforce
Clearcase
RTC
After that enter a name for the job (it can be anything) and select free-style job.
Then click OK to create new job in Jenkins dashboard.
The next page enables you to configure your job, and it’s done.
Q49) I want a file that consists of the last 10 lines of some other file. How?
tail -n 10 sourcefile > newfile (redirect into a different file; redirecting back onto the same file would truncate it before tail reads it)
# prints 1 and then every prime number between 3 and 300
echo "1"
i=3
j=300
flag=0
while [ $i -le $j ]
do
temp=`echo $i`
while [ $temp -gt 2 ]
do
temp=`expr $temp - 1`
n=`expr $i % $temp`
if [ $n -eq 0 ]
then
flag=1
fi
done
if [ $flag -eq 0 ]
then
echo $i
else
flag=0
fi
i=`expr $i + 1`
done
Q54) How to pass the parameters to the script and how can I get those parameters?
Scriptname.sh parameter1 parameter2
I will use $* to get the parameters.
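A minimal sketch of such a script (the file name scriptname.sh and the parameter values are only placeholders):
#!/bin/bash
# scriptname.sh - print the positional parameters passed on the command line
echo "First parameter : $1"
echo "Second parameter: $2"
echo "All parameters  : $*"
# usage: ./scriptname.sh parameter1 parameter2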
Q55) What is the default file permissions for the file and how can I modify it?
Default file permissions are: rw-r--r-- (644).
New files are created with 666 minus the umask; with the common umask of 022 that gives 644. If I want to change the default file permissions, I need to set a different umask, for example umask 027.
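For example, a small hedged demonstration of how the umask shapes new-file permissions:
umask        # show the current mask, commonly 0022, which yields 644 (rw-r--r--) files
umask 027    # from now on new files are created as 640 (rw-r-----)
touch demo_file
ls -l demo_file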
Q57) How you automate the whole build and release process?
Check out a set of source code files.
Compile the code and report on progress along the way.
Run automated unit tests against successful compiles.
Create an installer.
Publish the installer to a download site, and notify teams that the installer is available.
Run the installer to create an installed executable.
Run automated tests against the executable.
Report the results of the tests.
Accessing the Jenkins console log through Docker: docker logs <docker-container-name>
Accessing the Jenkins home directory: docker exec -it <docker-container-name> bash
Q61) Did you ever participate in prod deployments? If yes, what is the procedure?
Yes, I have participated. In my view we need to follow these steps:
Preparation & planning: what kind of system/technology is supposed to run on what kind of machine
The specifications regarding the clustering of systems
How all these stand-alone boxes are going to talk to each other in a foolproof manner
The production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
It should have all system configurations, IP addresses, system specifications, and installation instructions.
It needs to be updated as and when any change is made to the production environment of the system.
Q62) My application is not coming up for some reason. How can you bring it up?
We need to follow these steps:
Check the network connection
Check whether the web server is receiving users' requests
Check the logs
Security for every enterprise user: end users and privileged users, internal and external.
Protection across enterprise resources: cloud and on-prem apps, VPNs, endpoints, servers, privilege elevation, and more.
Reduced cost and complexity with an integrated identity platform.
Q66) I want to copy the artifacts from one location to another location in the cloud.
How?
Create two S3 buckets, one to use as the source and the other as the destination, and then create the
appropriate policies, for example:
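A minimal sketch with the AWS CLI (the bucket and object names are hypothetical):
# copy everything from the source bucket to the destination bucket
aws s3 sync s3://source-artifact-bucket s3://destination-artifact-bucket
# or copy a single artifact
aws s3 cp s3://source-artifact-bucket/app-1.0.war s3://destination-artifact-bucket/app-1.0.war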
Q68) How can you avoid the waiting time for triggered jobs in Jenkins?
First, check the slave nodes' capacity; if they are fully loaded, then add a slave node using the following
process.
Open Source
Agentless
Improved efficiency, reduced cost
Less maintenance
Easy-to-understand YAML files
Cons:
Q72) How do you get the inventory variables defined for a host?
We need to use the following command:
ansible -m debug -a "var=hostvars['hostname']" localhost (where hostname is the inventory host, e.g. 10.92.62.215)
Q75) I want to change the default port number of Apache Tomcat. How?
Go to the Tomcat folder and navigate to the conf folder; there you will find the server.xml file. Change the
Connector port attribute to whatever you want.
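For instance, a hedged one-liner that switches the default 8080 connector to 9090 (the /opt/tomcat path is only an assumption about where Tomcat is installed):
sed -i.bak 's/port="8080"/port="9090"/' /opt/tomcat/conf/server.xml   # edit the Connector port, keeping a .bak backup
/opt/tomcat/bin/shutdown.sh && /opt/tomcat/bin/startup.sh             # restart Tomcat so the new port takes effect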
Q77) How will you run a Jenkins job from the command line?
We can use the Jenkins CLI, or trigger the job's remote build endpoint with curl.
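Two hedged sketches of triggering a job from the command line (the server URL, job name, and credentials are placeholders):
# using the Jenkins CLI jar
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080/ -auth user:apitoken build my-job
# or hitting the job's build endpoint with curl
curl -X POST --user user:apitoken "http://jenkins.example.com:8080/job/my-job/build"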
Q80) How you will do code commit and code deploy in cloud?
Create a deployment environment
Get a copy of the sample code
Create your pipeline
Activate your pipeline
Commit a change and update the app (see the sketch below).
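For example, with AWS CodeCommit the commit-and-update step looks roughly like this (the repository URL, region, and file name are placeholders):
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo
git add index.html
git commit -m "Update the sample application"
git push origin master    # the pipeline detects the change and redeploys automatically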
Q83) What are the areas where version control can be introduced to get an efficient
DevOps practice?
A clearly fundamental area of version control is source code management, where every engineer's code
should be pushed to a common repository for maintaining build and release in CI/CD pipelines.
Another area can be version control for administrators, when they use Infrastructure as Code (IaC)
tools and practices for maintaining the environment configuration.
Another area of version control can be artifact management, using repositories like Nexus
and DockerHub.
Format: {{ foo.bar }}
The vars inside the braces are replaced by Ansible at run time when using the template module.
Q87) What is the need for organizing playbooks as roles? Is it necessary?
Organizing playbooks as roles gives greater clarity and reusability to any plays. Consider a task
where a MySQL installation should be done after the removal of an Oracle DB, and another requirement
where MySQL must be installed after a Java installation: in both cases we need to install MySQL,
but without roles we would need to write playbooks separately for both use cases, whereas once the
MySQL installation role is created it can be reused any number of times by invoking it with logic in
site.yaml.
No, it is not necessary to create roles for every situation, but creating roles is the best practice in Ansible.
Q90) What are the different modes a container can be run in?
A Docker container can be run in two modes.
Attached: where it runs in the foreground of the system you are running it on; it provides a terminal
inside the container when the -t option is used with it, and every log is redirected to the stdout screen.
Detached: this mode is usually used in production, where the container is detached as a background
process and every output inside the container is redirected to log files
inside /var/lib/docker/logs/<container-id>/<container-id>.json, which can be viewed with the docker logs command.
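A short hedged sketch of the two modes (the image names are only examples):
docker run -it ubuntu bash    # attached: interactive terminal, logs go straight to stdout
docker run -d nginx           # detached: runs in the background as a daemonized container
docker logs <container-id>    # read the detached container's redirected output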
docker inspect options: --size, -s (display total file sizes if the type is container); --type (return JSON for a specified type).
Q92) What command can be used to check the resource utilization of docker containers?
The docker stats command can be used to check the resource utilization of any docker container; it gives
output analogous to the top command in Linux, and it forms the basis for container resource monitoring
tools like cAdvisor, which gets its output from the docker stats command.
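For example (the container name is a placeholder):
docker stats                          # live CPU, memory, network and block I/O for all running containers
docker stats --no-stream mycontainer  # a one-shot snapshot for a single container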
Q93) How to execute some task (or) play on localhost only while executing
playbooks on different hosts in Ansible?
In Ansible there is a module called delegate_to; in this module section provide the particular host (or) hosts
where your tasks (or) task need to be run.
tasks:
- name: "Elasticsearch Hitting"
  uri: url='_search?q=status:new' headers='{"Content-type":"application/json"}' method=GET return_content=yes
  register: output
  delegate_to: 127.0.0.1
tasks:
- set_fact:
    fact_time: "Fact: {{ lookup('pipe', 'date +%H:%M:%S') }}"
- debug: var=fact_time
- command: sleep 2
- debug: var=fact_time
tasks:
- name: lookups in variables vs. lookups in facts
  hosts: localhost
  vars:
    var_time: "Var: {{ lookup('pipe', 'date +%H:%M:%S') }}"
Even though the lookup for the date has been used in both cases, where it is used in vars it changes
every time it is evaluated within the playbook lifetime. A fact, however, always remains the same once
the lookup is done.
Q95) What is a lookup in Ansible and what lookup plugins are supported by Ansible?
Lookup plugins allow access to data in Ansible from outside sources. These plugins are evaluated
on the Ansible control machine and can include reading the filesystem, but also contacting external
data stores and services.
The format is {{ lookup('<plugin>', '<source (or) connection_string>') }}
Some of the lookup plugins supported by Ansible are:
file
pipe
redis
jinja templates
etcd kv store
Q96) How can you delete the docker images stored on your local machine,
and how can you do it for all the images at once?
The command docker rmi <image-id> can be used to delete a docker image from the local machine,
though some images may need to be force-removed because the image may be used by
some other container (or) another image. To delete all images you can use the combination of commands docker rmi
$(docker images -q), where docker images gives the docker image names and the -q switch returns only the
IDs of the docker images.
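A minimal hedged sketch of both cases (the image ID is illustrative):
docker rmi 3f5c2e8d9a1b              # remove one image by ID
docker rmi -f $(docker images -q)    # force-remove every image on the local machine at once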
Q97) What are the folders in a Jenkins installation and their uses?
JENKINS_HOME – which will be /$JENKINS_USER/.jenkins – is the root folder of any Jenkins
installation, and it contains subfolders, each for a different purpose.
jobs/ – this folder contains all the information about every job configured in the
Jenkins instance.
Inside jobs/, there is a folder created for each job, and inside those folders there are
build folders according to each build number; each build has its own log files, which we
see in the Jenkins web console.
plugins/ – where all your plugins are listed.
workspace/ – this is present to hold all the workspace files, like your source code pulled from
SCM.
Q100) What are microservices, and how do they power efficient DevOps practices?
In traditional architecture, every application is a monolith: the application is developed by one group
of developers, deployed as a single application on multiple machines, and exposed to the outside world
using load balancers. Microservices means breaking your application into small pieces, where each piece
serves a distinct function needed to complete a single transaction; by breaking it down, developers can
also be formed into groups, and each piece of the application may follow different guidelines for an
efficient development phase, because agile development should be phased up a bit, and each service uses
REST APIs (or) message queues to communicate with other services.
So the build and release of one non-robust version does not affect the whole architecture; instead, only
some functionality is lost. That provides the assurance of efficient and faster CI/CD pipelines and DevOps practices.
Declarative Pipeline provides richer syntactical features over Scripted Pipeline syntax, and is designed to
make writing and reading Pipeline code easier.
Q102) What are labels in Jenkins and where can they be utilized?
A CI/CD solution needs to be centralized, so every application in the organization can
be built by a single CI/CD server; an organization may have various kinds of applications like Java,
C#, .NET and so on, and with a microservices approach your programming stack is loosely coupled for
the project. So you can put a label on every node and select the option to only build jobs whose
label expressions match that node; when a build is scheduled with the label of a node in it, it
waits for the next executor in that node to become available, even though there are free executors in
other nodes.
Q104) What are callback plugins in Ansible? Give some examples of
callback plugins.
Callback plugins enable adding new behaviors to Ansible when responding to events. By default,
callback plugins control most of the output you see when running the command-line programs,
but they can also be used to add additional output, integrate with other tools, and
marshal the events to a storage backend. So whenever a play is executed, the events it produces
are printed onto the stdout screen, and a callback plugin can push those events into any
storage backend for log processing.
Example callback plugins are ansible-logstash, where every playbook execution is picked up by Logstash in
JSON format and can be integrated with any other backend source like Elasticsearch.
Q109) What are microservices, and how do they power productive DevOps
practices?
In traditional architecture, every application is a monolith developed by one group of developers,
deployed as a single application on multiple machines, and exposed to the outside world using load
balancers. Microservices means breaking your application into small pieces, where each piece serves a
distinct function needed to complete a single transaction; by breaking it down, developers can be formed
into groups, and each piece of the application may follow different guidelines for an efficient
development phase, because agile development should be phased up a bit, and each service uses REST APIs
(or) message queues to communicate with other services.
So the build and release of one non-robust version does not affect the whole architecture; instead, only
some functionality is lost. That provides the assurance of efficient and faster CI/CD pipelines and DevOps practices.
Q110) What are the ways a pipeline can be created in Jenkins?
There are two ways a pipeline can be created in Jenkins:
Scripted pipelines:
more like a programming approach.
Declarative pipelines:
a DSL approach specifically for creating Jenkins pipelines.
The pipeline should be created in a Jenkinsfile, and the location can either be in SCM or on the local
system.
Declarative and Scripted Pipelines are constructed fundamentally differently. Declarative
Pipeline is a more recent feature of Jenkins Pipeline which:
provides richer syntactical features over Scripted Pipeline syntax, and is designed to
make writing and reading Pipeline code easier.
Q111) What are labels in Jenkins and where can they be used?
As a CI/CD solution needs to be centralized, every application in the organization can be
built by a single CI/CD server; an organization may have various kinds of applications like Java, C#, .NET
and so on, and with a microservices approach your programming stack is loosely coupled for the
project. So you can put a label on every node and select the option to only build jobs whose label
expressions match that node; when a build is scheduled with the label of a node in it, it waits
for the next executor in that node to become available, even though there are free executors in other nodes.
It provides a modern UI to identify each stage of the pipeline, better pinpointing of issues, and a rich
Pipeline editor for beginners.
Q113) What are callback plugins in Ansible? Give some examples of
callback plugins.
Callback plugins enable adding new behaviors to Ansible when responding to events. By default,
callback plugins control the greater part of the output you see when running the command-line
programs, but they can also be used to add additional output, integrate with other tools, and
marshal the events to a storage backend. So whenever a play is executed, the events it produces
are printed onto the stdout screen, and a callback plugin can push those events into any
storage backend for log handling.
Example callback plugins are ansible-logstash, where every playbook execution is picked up by Logstash in
JSON format and can be integrated with any other backend source like Elasticsearch.
Q115) Why does almost every tool in DevOps have some DSL
(Domain Specific Language)?
DevOps is a culture created to address the needs of the agile process, where the development rate is
faster, so deployment should match its speed, and that needs the operations team to coordinate and work
with the dev team. Everything could be automated with plain scripts, but that feels more like an
operations-only approach and it gives a messy organization of pipelines: the more use cases, the more
scripts need to be written. So the use cases that are sufficient to cover the needs of
agile are taken, tools are built around them, and customization can happen on top of the tool
using its DSL to automate the DevOps practice and infrastructure management.
Q116) Which clouds can be integrated with Jenkins and what are the
use cases?
Jenkins can be integrated with different cloud providers for different use cases like dynamic Jenkins slaves
and deploying to cloud environments.
Q117) What are Docker volumes and what sort of volume should be used to
achieve persistent storage?
Docker volumes are the filesystem mount points created by the user for a container, and a volume can be
used by many containers. There are different sorts of volume mounts available: empty dir, host
mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud, (or) even NFS and CIFS filesystems. A volume
should be mounted to one of these external drives to achieve persistent storage, because the
lifetime of files inside a container lasts only as long as the container is present; if the container is deleted, the
data would be lost.
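A hedged sketch of using a volume for persistent storage (the volume, image, and path names are illustrative):
docker volume create app-data                           # named volume managed by docker
docker run -d -v app-data:/var/lib/mysql mysql:5.7      # data survives even if this container is removed
docker run -d -v /mnt/nfs/share:/data busybox           # or bind-mount an external/NFS path into the container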
Q118) Which artifact stores can be integrated with Jenkins?
Any sort of artifact repository can be integrated with Jenkins, using either shell commands (or) dedicated
plugins; some of them are Nexus and JFrog.
Q119) What are some of the testing tools that can be integrated with
Jenkins? Mention their plugins.
Sonar plugin – can be used to integrate testing of code quality in your source code.
Gerrit code review trigger – Gerrit is an open-source code review tool; whenever a code change is
approved after review, a build can be triggered.
Trigger Build Remotely – you can have remote scripts on any machine (or) even AWS Lambda functions
(or) make a POST request to trigger builds in Jenkins.
Schedule Jobs – jobs can also be scheduled like cron jobs.
Poll SCM for changes – Jenkins looks for any changes in SCM at the given interval;
if there is a change, a build can be triggered.
Upstream and downstream jobs – a build can be triggered by another job that was executed
previously.
Q128) Which are the top DevOps tools, and which tools have you worked on?
Discover the trending top DevOps tools, including Git. Well, if you are still thinking of DevOps as a
tool, you are wrong! DevOps is not a tool or software; it is a culture that you can adopt for
continuous improvement, and by practicing it you can easily coordinate the work among your team.
Automated Testing
Automated Deployment
Q131) What does configuration management mean in terms of infrastructure, and
what are some popular tools used?
In software engineering, software configuration management is the task of tracking and controlling
configuration changes in the infrastructure. It is done for deploying, configuring and
maintaining servers. Popular tools include Puppet, Chef, and Ansible.
Q132) How will you approach a project that needs to implement DevOps?
As the application is developed and deployed, we need to monitor its performance. Monitoring is also
really important because it may help uncover defects which might not have been detected
earlier.
Q138) What are the specific skills required for a DevOps engineer?
While tech abilities are a must, strong DevOps engineers also possess the ability to collaborate, multi-
task, and always put the customer first; these are critical skills every DevOps engineer needs for success.
They also need tools for an efficient DevOps workflow: a daily workflow based on DevOps ideas allows team members
to deliver content faster, stay flexible enough to both experiment and deliver value, and helps every part of
the organization adopt a learning mentality.
Q146) Can we copy Jenkins job from one server to other server?
Yes, we can do that using one of the following ways (see the sketch below):
We can copy Jenkins jobs from one server to another by copying the corresponding job folder.
We can make a copy of an existing job by cloning the job directory with a different name.
We can rename an existing job by renaming the directory.
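A hedged sketch of the first approach (hosts and paths are placeholders, and JENKINS_HOME may differ on your install):
# copy one job's folder from the old master to the new one
rsync -av /var/lib/jenkins/jobs/my-job/ jenkins2.example.com:/var/lib/jenkins/jobs/my-job/
# then reload the configuration from disk on the target server (Manage Jenkins > Reload Configuration from Disk)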
It is open source
For the third point, explain the various technologies we can use to ease deployments; for development,
explain taking small features through development and how that helps with testing and issue fixing.
In a nutshell, Agile is a set of rules for the development of software, but DevOps focuses on
the development as well as the operation of the developed software in various environments.
Highly scalable
Wide availability of virtual and cloud infrastructure from both internal and external providers;
Increased usage of data center automation and configuration management tools;
Increased focus on test automation and continuous integration methods;
Best practices on critical issues.
Q168) Why are configuration management processes and tools important?
Talk about multiple software builds, releases, revisions, and versions for each piece of software or testware
that is being developed. Move on to explain the need for storing and maintaining data, keeping track of
development builds, and simplified troubleshooting. Don't forget to mention the key CM tools that can be
used to achieve these objectives. Talk about how tools like Puppet, Ansible, and Chef help in
automating software deployment and configuration on several servers.
Q169) Which are some of the most popular DevOps tools?
The most popular DevOps tools include:
Selenium
Puppet
Chef
Git
Jenkins
Ansible
Vagrant is a tool that can create and manage environments for testing and developing software.
IT Operations development
Q176) What Are The Advantages Of Devops With Respect To the Technical And
Business Perspective?
Technical benefits
Software delivery is continuous.
Reduces Complexity in problems.
Faster approach to resolve problems
Manpower is reduced.
Business benefits
High rate of delivering its features
Stable operating environments
More time gained to Add values.
Enabling faster feature time to market
Q177) What Are The Core Operations Of the Devops In Terms Of the Development
And Infrastructure ?
The core operations of DevOps
Application development
Code developing
Code coverage
Unit testing
Packaging
Q179) What is the most important thing DevOps helps us achieve?
The most important thing DevOps helps us achieve is getting changes into production as quickly
as possible while minimizing risks in software quality assurance and compliance. This is the primary
objective of DevOps.
For example, clear communication and better working relationships between teams, i.e. both the Ops
team and the Dev team collaborate to deliver good quality software, which in turn leads to higher
customer satisfaction.
Q180) How can you make sure a new service is ready for the product launch?
Backup System
Recovery plans
Load Balancing
Monitoring
Centralized logging
Developers develop the code, and this source code is managed by version control system tools like
Git.
Developers send this code to the Git repository, and any changes made in the code are committed to this
repository.
Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven.
Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins
releases this code onto the test environment, where testing is done using tools like Selenium.
Once the code is tested, Jenkins sends it for deployment onto the production server (even the production
server is provisioned and maintained by tools like Puppet).
After deployment it is continuously monitored by tools like Nagios.
Docker containers provide a testing environment to test the built features.
You can summarize by saying that the Agile software development methodology focuses on the development
of software, but DevOps, on the other hand, is responsible for the development as well as the deployment of the
software in the safest and most reliable way possible.
Maven 2 project
Amazon EC2
HTML publisher
Copy artifact
Join
Green Balls
Q192) What testing is necessary to ensure a new service is ready for production?
Continuous testing
Q211) What is the difference between an Ansible playbook and roles?
Roles
Roles are a restructured, reusable unit of a play; a set of tasks to accomplish a specific role.
Example: common, webservers.
Playbooks
Playbooks contain plays and map hosts to roles. Example: site.yml, fooservers.yml, webservers.yml.
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
You can not find anything with name docker.tar.gz
Fedora 20+
RHEL 6.5+
CentOS 6+
Gentoo
ArchLinux
openSUSE 12.3+
CRUX 3.0+
Cloud:
Amazon EC2
Google Compute Engine
Microsoft Asur
Rackspace
Since native support is not available, Docker does not work on Windows or Mac for production; however, even on Windows
you can use it for testing purposes.
Q231) Can you list the main difference between Agile and DevOps?
Agile:
Agile is about agile software development.
DevOps:
DevOps is about software deployment and management.
DevOps does not replace Agile or Lean. By removing waste, removing hand-offs, and improving
processes, it enables rapid and continuous product delivery.
Q235) What is the main difference between Linux and Unix operating systems?
Unix:
It belongs to the multitasking, multiuser operating system family.
These are often used on web servers and workstations.
It was originally derived from AT & T Unix, which was started by the Bell Labs Research Center in the 1970s
by Ken Thompson, Dennis Ritchie, and many others.
The two operating systems are relatively similar, but Linux is open source.
Linux:
Linux is familiar to almost every programmer.
It is also used on personal computers.
Q236) How can we ensure a new service is ready for the product
launch?
Backup system
Recovery plans
Load balancing
Monitoring
Centralized logging
Highly scalable setup
Q240) The top 10 capabilities of a person in a DevOps position should be:
The best in system administration
Virtualization experience
Good technical skills
Great scripting skills
Good development skills
Experience with automation tools such as Chef
People management
Customer service
Real-time cloud movements
Who’s worried about who
Your next example could be Netflix. This streaming video-on-demand company follows similar
practices, with fully automated processes and systems. Consider the user base of these two companies:
Facebook has 2 billion users, while Netflix provides online content to more than 100 million users worldwide.
These are good examples of reduced lead time between bug detection and bug fixes, of runtime monitoring and
continuous delivery, and of an overall reduction in human costs.
Git information
Jenkins
Ansible
Q246) Is there a difference between Agile and DevOps? If yes, please explain.
As a DevOps engineer, interview questions like this are very much expected. Start by explaining the clear
overlap between DevOps and Agile. Although the functioning of DevOps is often associated with agile
methodologies, there is a clear difference between the two. Agile principles are related to the
development of the software product. On the other hand, DevOps deals with development plus operations, ensuring quick
turnaround times, minimal errors, and reliability by delivering the software continuously.
Jenkins
Selenium
Puppet
Chef
Ansible
Nagios
Docker
Monit
Q251) What are the main roles of a DevOps engineer in terms of development and
infrastructure?
DevOps Engineer’s major work roles
Application Development
Developing code
Code coverage
Unit testing
Packaging
With infrastructure:
Continuous integration
Continuous testing
Continuous delivery
Provisioning
Configuration
Orchestration
Deployment
Q252) What are the advantages of DevOps regarding technical and business
perspective?
Technical advantages:
Continuous software delivery.
Reduced complexity of problems.
Faster approach to solving problems.
Reduced manpower requirements.
Business benefits:
Higher rate of delivering features
Vagrant is a tool for creating and managing environments for developing and testing software.
Answer: The configuration of any servers, toolchain, or application stack required for an
organization can be written as a more descriptive level of code, and that code can be used for
provisioning and managing infrastructure elements like virtual machines, software, and network elements.
It differs from scripts written in any language in that scripts are a series of static coded steps,
whereas here version control can be used to track environment changes. Example tools are Ansible and Terraform.
Q2. What are the areas the version control can be introduced to get efficient DevOps practice?
Answer: Obviously the main area of version control is source code management, where every developer's
code should be pushed to the common repository for maintaining build and release in CI/CD
pipelines. Another area can be version control for administrators when they use Infrastructure as Code
(IaC) tools and practices for maintaining the environment configuration. Another area of version control
can be artifact management, using repositories like Nexus & DockerHub.
Answer: Ansible is an agentless configuration management tool, whereas Puppet or Chef needs an agent to
be run on the agent node, and Chef or Puppet is based on a pull model, where your cookbook or manifest for
Chef and Puppet respectively is pulled from the master by the agent. Ansible uses SSH to
communicate, and it gives data-driven instructions to the nodes that need to be managed, more like RPC
execution; Ansible uses YAML scripting, whereas Puppet (or) Chef is built with Ruby and uses its own DSL.
Answer:
roles/
  common/
    tasks/
    handlers/
    files/
    templates/
    vars/
    defaults/
    meta/
  webservers/
    tasks/
    defaults/
    meta/
Where common is the role name; under tasks/ there will be tasks (or) plays present; handlers/ holds the handlers
for any tasks; files/ holds static files for copying (or) moving to remote systems; templates/ provides
Jinja-based templates; and vars/ holds common vars used by playbooks.
Answer: Jinja2 templating is the Python standard for templating; think of it like a sed editor for Ansible.
It is used when there is a need for dynamic alteration of a config file for an application, for example
mapping a MySQL application to the IP address of the machine where it is running; that cannot be
static, it needs to be altered dynamically at runtime.
Format: {{ foo.bar }}
The vars within the {{ }} braces are replaced by Ansible while running, using the template module.
Q7. What is the need for organizing playbooks as roles? Is it necessary?
Answer: Organizing playbooks as roles gives more readability and reusability to any plays. Consider
a task where a MySQL installation should be done after the removal of an Oracle DB, and another requirement
where MySQL needs to be installed after a Java installation: in both cases we need to install MySQL,
but without roles we need to write playbooks separately for both use cases, whereas once the MySQL installation
role is created it can be utilised any number of times by invoking it using logic in site.yaml.
No, it is not necessary to create roles for every scenario, but creating roles is a best practice in
Ansible.
Answer: The Docker engine contacts the docker daemon inside the machine and creates the runtime
environment and process for any container; docker-compose links several containers to form a stack,
used for creating application stacks like LAMP, WAMP, and XAMPP.
Q10. What are the different modes a container can be run in?
Answer: A Docker container can be run in two modes.
Attached: where it will be run in the foreground of the system you are running it on; it provides a terminal
inside the container when the -t option is used with it, and every log will be redirected to the stdout screen.
Detached: this mode is usually run in production, where the container is detached as a background
process and every output inside the container will be redirected to log files inside
/var/lib/docker/logs/<container-id>/<container-id>.json, which can be viewed by the docker logs command.
Answer: docker inspect <container-id> will give output in JSON format, which contains details like the
IP address of the container inside the docker virtual bridge, volume mount information, and every other
host- (or) container-specific detail such as the underlying file driver used and the log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID...]
Options:
--format, -f  Format the output using the given Go template
--size, -s    Display total file sizes if the type is container
--type        Return JSON for specified type
Q12. What is the command that can be used to check the resource utilization by docker containers?
Answer: The docker stats command can be used to check the resource utilization of any docker container; it
gives output analogous to the top command in Linux, and it forms the base for container resource monitoring
tools like cAdvisor, which gets its output from the docker stats command.
docker stats [OPTIONS] [CONTAINER...]
Options:
--all, -a     Show all containers (default shows just running)
--format      Pretty-print images using a Go template
--no-stream   Disable streaming stats and only pull the first result
--no-trunc    Do not truncate output
Q13. What is the major difference between continuous deployment and continuous delivery?
Answer: Continuous deployment is fully automated, and deploying to production needs no manual
intervention; in continuous delivery, the deployment to production has some manual intervention for
change management in the organization, for better management, and it needs to be approved by a manager
or higher authority before being deployed in production. According to your application's risk factor
for the organization, the continuous deployment (or) delivery approach will be chosen.
Q14. How to execute some task (or) play on localhost only while executing playbooks on different hosts in
Ansible?
Answer: In Ansible there is a module called delegate_to; in this module section provide the particular host
(or) hosts where your tasks (or) task need to be run.
tasks:
- name: "Elasticsearch Hitting"
  uri: url='_search?q=status:new' headers='{"Content-type":"application/json"}' method=GET return_content=yes
  register: output
  delegate_to: 127.0.0.1
Answer: A set_fact sets the value for a fact at one time and it remains static, even though the value is
quite dynamic, whereas vars keep on changing as the value of the variable changes.
tasks:
- set_fact:
    fact_time: "Fact: {{ lookup('pipe', 'date +%H:%M:%S') }}"
- debug: var=fact_time
- command: sleep 2
- debug: var=fact_time
tasks:
- name: lookups in variables vs. lookups in facts
  hosts: localhost
  vars:
    var_time: "Var: {{ lookup('pipe', 'date +%H:%M:%S') }}"
Even though the lookup for the date has been used in both cases, where it is used in vars it alters from
time to time every time it is executed within the playbook lifetime. But a fact always remains the same
once the lookup is done.
Q16. What is the lookup in Ansible and what are the lookup plugins supported by Ansible?
Answer: Lookup plugins allow access to data in Ansible from outside sources. These plugins are evaluated
on the Ansible control machine, and can include reading the filesystem but also contacting external
datastores and services.
Format: {{ lookup('<plugin>', '<source (or) connection_string>') }}
Some of the lookup plugins supported by Ansible are: file, pipe, redis, jinja templates, etcd kv store.
Q17. How can you delete the docker images stored on your local machine and how can you do it for all the
images at once?
Answer: The command docker rmi <image-id> can be used to delete a docker image from the local
machine, whereas some images may need to be forced because the image may be used by some other
container (or) another image; to delete all images you can use the combination of commands docker rmi
$(docker images -q), where docker images gives the docker image names, and to get only the IDs of the
docker images we use the -q switch with the docker images command.
Q18. What are the folders in the Jenkins installation and their uses?
Answer: JENKINS_HOME – which will be /$JENKINS_USER/.jenkins it is the root folder of any Jenkins
installation and it contains subfolders each for different purposes. jobs/ – Folder contains all the information
about all the jobs configured in the Jenkins instance. Inside jobs/, you will have the folder created for each
job and inside those folders, you will have build folders according to each build numbers each build will have
its log files, which we see in Jenkins web console. Plugins/ – where all your plugins will be listed.
Workspace/ – this will be present to hold all the workspace files like your source code pulled from SCM.
Answer: Jenkins can be configured in two ways.
Web: there is an option called Configure System; in that section you can make all configuration changes.
Manual on filesystem: every change can also be done directly in the Jenkins config.xml file under the
Jenkins installation directory; after you make changes on the filesystem, you need to restart Jenkins.
You can do it directly from the terminal, (or) use Reload Configuration from Disk under the Manage Jenkins
menu, or hit the /restart endpoint directly.
Answer: DevOps focuses purely on automating your infrastructure and providing changes over the
pipeline for different stages: each CI/CD pipeline will have stages like build, test, sanity
test, UAT, and deployment to the prod environment, and at each stage different tools and
technology stacks are used, so there needs to be a way to integrate the different tools to complete the
toolchain. That is where HTTP APIs come in: every tool communicates with other tools
using an API, and a user can also use an SDK, like Boto for Python, to contact
AWS APIs for automation based on events; nowadays it is not batch processing anymore, it is mostly
event-driven pipelines.
Q21. What are Microservices, and how they power efficient DevOps practices?
Answer: Where In traditional architecture , every application is monolith application means that anything is
developed by a group of developers , where it has been deployed as a single application in multiple
machines and exposed to outer world using loadbalancers , where the microservices means breaking down
your application into small pieces , where each piece serves the different functionality needed to complete a
single transaction and by breaking down , developers can also be formed to groups and each piece of
application may follow different guidelines for efficient development phase , because of agile development
should be phased up a bit and every service uses REST API (or) Message
Queues to communicate between other service. So build and release of a non-robust version may not affect
whole architecture , instead some functionality is lost , that provides the assurance for efficient and faster
CI/CD pipelines and DevOps Practices
Q22. What are the ways that a pipeline can be created in Jenkins?
Answer: There are two ways a pipeline can be created in Jenkins.
Scripted pipelines: more like a programming approach.
Declarative pipelines: a DSL approach specifically for creating Jenkins pipelines.
The pipeline should be created in a Jenkinsfile, and the location can either be in SCM or on the local system.
Declarative and Scripted Pipelines are constructed fundamentally differently. Declarative Pipeline is a more
recent feature of Jenkins Pipeline which: Provides richer syntactical features over Scripted Pipeline syntax,
and is designed to make writing and reading Pipeline code easier.
Q23. What are the Labels in Jenkins & where it can be utilised?
Answer: As with CI/CD solution needs to be centralized , where every application in the organization can be
built by a single CI/CD server , so in organization there may be different kinds of application like java ,
c#,.NET and etc , as with microservices approach your programming stack is loosely coupled for the project
, so you can have Labels in each node and select the option Only built jobs while label matching this node ,
so when a build is scheduled with the label of the node present in it , it waits for next executor in that node to
be available , eventhough there are other executors in nodes.
Answer: Blue Ocean rethinks the user experience of Jenkins. Designed from the ground up for Jenkins
Pipeline, but still compatible with freestyle jobs, Blue Ocean reduces clutter and increases clarity for every
member of the team. It provides sophisticated UI to identify each stage of the pipeline and better
pinpointing for issues and very rich Pipeline editor for beginners.
Q25. What is the callback plugins in ansible, give some examples of some callback plugins?
Answer: Callback plugins enable adding new behaviors to Ansible when responding to events. By default,
callback plugins control most of the output you see when running the command line programs, but can also
be used to add additional output, integrate with other tools and marshall the events to a storage backend.
So whenever an play is executed and after it produces some events , that events are printed onto Stdout
screen ,so callback plugin can be put into any storage backend for log processing. Example callback
plugins are ansible-logstash, where every playbook execution is fetched by logstash in the JSON format and
can be integrated any other backend source like elasticsearch.
Answer: As with scripting languages , the basic shell scripting is used for build steps in Jenkins pipelines
and python scripts can be used with any other tools like Ansible , terraform as a wrapper script for some
other complex decision solving tasks in any automation as python is more superior in complex logic
derivation than shell scripts and ruby scripts can also be used as build steps in Jenkins.
Q27. What is continuous monitoring and why is monitoring very critical in DevOps?
Answer: DevOps brings every organization the capability of much shorter build and release cycles with the
concept of CI/CD, where every change is reflected into production environments quickly, so it needs to be
tightly monitored to get customer feedback. So the concept of continuous monitoring is used to
evaluate each application's performance in real time (at least near real time), where each application is
built with compatible application performance monitoring agents and granular metrics are
collected, like JVM stats; even function-level metrics inside the application can also be poured out in real
time to agents, which in turn feed a backend storage, and that can be used by monitoring teams in
dashboards and alerts to continuously monitor the application.
Answer: Many continuous monitoring tools are available in the market, used for different kinds
of applications and deployment models. Docker containers can be monitored by the cAdvisor agent, which can be
used with Elasticsearch to store metrics, (or) you can use the TICK stack (Telegraf,
InfluxDB, Chronograf, Kapacitor) for monitoring every system in NRT (near real time), and you can use
Logstash (or) Beats to collect logs from systems, which in turn can use Elasticsearch as the storage backend
and Kibana (or) Grafana as the visualizer. System monitoring can be done by Nagios and Icinga.
Answer: Group of Virtual machines with Docker Engine can be clustered and maintained as a single system
and the resources also being shared by the containers and docker swarm master schedules the docker
container in any of the machines under the cluster according to resource availability Docker swarm init can
be used to initiate docker swarm cluster and docker swarm join with the master IP from client joins the node
into the swarm cluster.
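A hedged sketch of forming such a cluster (the IP address and token are placeholders):
docker swarm init --advertise-addr 192.168.1.10                    # run on the manager node
docker swarm join --token <worker-token> 192.168.1.10:2377         # run on each worker node to join the cluster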
Answer: Docker images can be created in two ways, broadly.
Dockerfile: the most used method, where a base image can be specified, files can be copied into the image,
and installation and configuration can be done using a declarative file, which is given to the docker
build command to produce a new docker image.
Docker commit: where a Docker image is spun up as a Docker container, every command executed
inside the container forms a read-only layer, and after all changes are done you can use docker commit
<container-id> to save it as an image; this method is not suitable for CI/CD pipelines.
FROM python:2
MAINTAINER janakiraman
Answer: Vault files are encrypted files which contain any variables used by Ansible playbooks; the
vault-encrypted files can be decrypted only with the vault password, so while running a playbook, if any vault
file is used for a variable inside the playbook, you need to use the --ask-vault-pass command-line argument while
running the playbook.
Answer: Docker is a containerization technology, which is an advancement over virtualization. In
virtualization, an application needs to be installed on a machine, so the OS has to be spun up, and
spinning up a virtual machine takes a lot of time; it carves space out of the physical hardware, and the
hypervisor layer wastes a large amount of resources just to run virtual machines. After a VM is provisioned,
every application needs to be installed, and installation requires all dependencies, and sometimes
dependencies may be missed even if you double-check, and migration of applications from machine to machine
is painful. Docker, by contrast, shares the underlying OS resources; the docker engine is lightweight, and
every application can be packaged with its dependencies once, tested, and it works the same everywhere.
Migration of an application or spinning up a new application is made easy, because you just need to
install docker on the other machine, and docker image pull and run does all the magic of spinning it up
in seconds.
Answer: .NET applications needs Windows nodes to built , where Jenkins can use Jenkins windows slave
plugin can be used to connect windows node as a Jenkins slave , where it uses DCOM connector for
Jenkins master to slave connection (or) you can use Jenkins JNLP connector and the Build tools and SCM
tools used for the pipeline of .NET application needs to be installed in the Windows slave and MSBuild build
tool can be used to build .NET application and can be Deployed into Windows host by using Powershell
wrapper inside Ansible playbooks.
Q36. How can you make a High available Jenkins master-master solution without using any Jenkins plugin?
Answer: Where Jenkins stores all the build information in the JENKINS_HOME directory , which can be
mapped to any NFS (or) SAN storage drivers , common file systems and when the node is down , can
implement a monitoring solution using Nagios to check alive , if down can trigger an ansible playbook (or)
python script to create a new Jenkins master in different node and reload at runtime, if there is already a
passive Jenkins master in another instance kept silent with same JENKINS_HOME Network file store.
Answer: A Jenkinsfile starts with the pipeline directive; inside the pipeline directive will be the agent directive,
which specifies where the build should be run, and the next directive is stages, which contains a
list of stage directives, and each stage directive contains different steps. There are several optional
directives like options, which provides custom plugins used by the projects (or) any other triggering
mechanisms used, and the environment directive to provide all env variables.
Sample Jenkinsfile:
pipeline {
  agent any
  stages {
    stage('Dockerbuild') {
      steps {
        sh "sudo docker build . -t pyapp:v1"
      }
    }
  }
}
Answer: The centralized nature of cloud computing provides DevOps automation with a standard and
centralized platform for testing, deployment, and production. Most cloud providers even offer DevOps
technologies like CI tools and deployment tools as a service, like CodeBuild, CodePipeline, and CodeDeploy in
AWS, which makes DevOps practice easier and even faster.
Q39. What is Orchestration of containers and what are the different tools used for orchestration?
Answer: When deploying into production, you cannot use a single machine for production as it is not robust
for any deployment , so when an application is containerized, the stack of applications maybe run at single
docker host in development environment to check application functionality, while when we arrive into
production servers, that it is not the case, where you should deploy your applications into multiple nodes
and stack should be connected between nodes , so to ensure network connectivity between different
containers , you need to have shell scripts (or) ansible playbooks between different nodes ,and another
disadvantage is using this tools , you cannot run an efficient stack, where an application is taking up more
resources in one node , but another sits idle most time , so deployment strategy also needs to be planned
out according to resources and load-balancing of this applications also be configured, so to clear out all this
obstacles , there came a concept called orchestration , where your docker containers is orchestrated
between different nodes in the cluster based on resources available according to scheduling strategy and
everything should be given as DSL specific files not like scripts .There are Different Orchestration tools
available in market which are Kubernetes,Swarm,Apache Mesos.
Q40. What is ansible tower?
Answer: Ansible is developed by Redhat , which provides IT automation and configuration management
purposes. Ansible Tower is the extended management layer created to manage playbooks organization
using roles and execution and can even chain different number of playbooks to form workflows. Ansible
tower dashboard provides NOC-style UI to look into the status of all ansible playbooks and hosts status.
Q41. What are the programming language applications that can be built by Jenkins?
Answer: Jenkins is a CI/CD tool that does not depend on any programming language for building applications; if there
is a build tool for a language, that is enough to build it, and even if a plugin for the build tool is not available,
you can use any scripting to replace your build stage, like shell, PowerShell, or Python scripts, to build an
application in any language.
Q42. Why does almost every tool in DevOps have some DSL (Domain Specific Language)?
Answer: DevOps is a culture developed to address the needs of agile methodology, where the development
rate is faster, so deployment should match its speed, and that needs the operations team to co-ordinate and
work with the dev team; everything can be automated using scripts, but that feels more like an
operations-team approach and it gives a messy organization of pipelines: the more use cases, the more scripts
need to be written. So several use cases that are adequate to cover the needs of agile are taken, tools are
created around them, and customization can happen over the tool using a DSL to automate the DevOps
practice and infra management.
Q43. What are the clouds that can be integrated with Jenkins and what are the use cases?
Answer: Jenkins can be integrated with different cloud providers for different use cases like dynamic
Jenkins slaves and deploying to cloud environments. Some of the clouds that can be integrated are
AWS, Azure, Google Cloud, and OpenStack.
Q44. What are Docker volumes and what type of volume should be used to achieve persistent storage?
Answer: Docker volumes are filesystem mount points created by the user for a container, and a volume can be shared by many containers. There are different types of volume mounts available: emptyDir, host mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud volumes, or even NFS and CIFS filesystems. A volume should be mounted on some external storage to achieve persistent storage, because the lifetime of files inside a container is only as long as the container exists; if the container is deleted, the data is lost.
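A minimal sketch of persistent storage with a named Docker volume (the volume name "appdata" and the mysql image/credentials are placeholder examples):
docker volume create appdata                                                               # volume managed by Docker, independent of any container
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v appdata:/var/lib/mysql mysql:8   # data is written to the volume
docker rm -f db                                                                            # delete the container...
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret -v appdata:/var/lib/mysql mysql:8  # ...and reattach the same data to a new one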
Q45. Which artifact repositories can be integrated with Jenkins?
Answer: Any kind of artifact repository can be integrated with Jenkins, using either shell commands or dedicated plugins; some of them are Nexus and JFrog Artifactory.
Q46. What are some of the testing tools that can be integrated with Jenkins? Mention their plugins.
Answer: Plugins are available for checking the quality of your source code. Performance plugin – can be used to integrate JMeter performance testing. JUnit – to publish unit test reports. Selenium plugin – can be used to integrate with Selenium for automation testing.
Answer: Builds can be run manually or triggered automatically by different sources, such as:
Webhooks – webhooks are API calls from the SCM, sent whenever code is committed into the repository (or for specific events on specific branches).
Gerrit code review trigger – Gerrit is an open-source code review tool; whenever a code change is approved after review, a build can be triggered.
Trigger Build Remotely – you can have remote scripts on any machine (or even AWS Lambda functions) make a POST request to trigger builds in Jenkins.
Schedule Jobs – jobs can also be scheduled like cron jobs.
Poll SCM for changes – Jenkins looks for changes in the SCM at a given interval; if there is a change, a build is triggered.
Upstream and Downstream Jobs – a build can be triggered by another job that was executed previously.
Answer: Docker images can be version controlled using tags; you can assign a tag to any image using the docker tag <image-id> command. If you push to a Docker registry without tagging, the default tag latest is assigned; if an image with the latest tag is already present, it is demoted to an untagged image and the newly pushed image becomes latest.
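A minimal sketch of image tagging (the repository name "myrepo/myapp" is a placeholder; <image-id> is any local image ID):
docker tag <image-id> myrepo/myapp:1.0     # assign an explicit version tag
docker push myrepo/myapp:1.0
docker tag <image-id> myrepo/myapp         # no tag given, so :latest is assumed
docker push myrepo/myapp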
Answer: It adds a timestamp to every line of the build's console output.
Answer: You can run a build on the master in Jenkins, but it is not advisable, because the master already has the responsibility of scheduling builds and collecting build outputs into the JENKINS_HOME directory. If we run a build on the Jenkins master, it additionally needs the build tools and a workspace for the source code, which puts a performance overload on the system; and if the Jenkins master crashes, it increases the downtime of your build and release cycle.
Q51. Why DevOps?
Answer: DevOps is the market trend now; it follows a systematic approach for getting the application live to market. DevOps is all about tools which help in building the development platform as well as the production platform. Product companies are now looking at a "code as a service" concept, in which development skills are used to create a production architecture with almost no downtime.
Answer: Ansible is a configuration management tool which is agentless. It works with key-based or password-based SSH authentication. Since it is agentless, we have complete control over manipulating data. Ansible is also used for architecture provisioning, as it has modules which can talk to the major cloud platforms. I have mainly used it for AWS provisioning and application/system configuration changes.
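For instance, a minimal agentless run over SSH might look like the following (the inventory file name, the host group, and the package are assumptions for illustration):
ansible all -i inventory.ini -m ping                                          # verify SSH connectivity, no agent needed on the targets
ansible web -i inventory.ini -m yum -a "name=httpd state=present" --become   # ad-hoc configuration change
ansible-playbook -i inventory.ini site.yml                                    # run a full playbook for provisioning/config management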
Q53. Why do you think a Version control system is necessary for DevOps team?
Answer: An application is all about code; if the UI is not behaving as expected, there could be a bug in the code. In order to track code updates, versioning is a must.
By any chance if bug breaks the application, we should be able to revert it to the working codebase.
Versioning helps to achieve this.
Also, by keeping a track of code commits by individuals, it is very easy to find the source of the bug in the
code.
Answer: Basically, the following roles are prominent in DevOps, depending upon the skill set.
1. Architect
2. Version Control Personnel
Q56. Suppose you are put into a project where you have to implement DevOps culture, what will be your
approach?
Answer: Before thinking of DevOps, there should be a clear-cut idea of what needs to be implemented, and it should be defined by the senior architect.
Even though it looks simple, the background work is not that easy, because a shopping cart must be:
– 99.99% live
– Easy and fast processing of shopping items
– Easy and fast payment system
– Quick reporting to the shopkeeper
– Quick inventory management
Answer: Of course it is possible if we bring agility into every phase of development and deployment. The release, testing, and deployment automation should be accurately fine-tuned.
Answer: Agile is an iterative process which finalizes the application by fulfilling a checklist. For any process, there should be a set of checklists in order to standardize the code as well as the build and deployment process. The list depends on the architecture of the application and the business model.
Q59. Why scripting using Bash, Python or any other language is a must for a DevOps team?
Answer: Even though we have numerous tools in DevOps, there will be certain custom requirements for a project. In such cases, we have to make use of scripting and then integrate it with the tools.
Answer: Applications started to use the Agile methodology, where they are built and deployed iteratively. Docker helps in deploying the same binaries with their dependencies across different environments in a matter of seconds.
Answer: The command-line tool, which is the docker binary; it communicates with the Docker daemon through the Docker API.
Answer: The docker user has root-like access, and we should restrict access just as we would protect root.
Answer: The "docker kill" command kills a running container immediately; the "docker stop" command stops a container gracefully.
Answer: An image is a collection of files and their metadata; basically those files are the root filesystem of the container. An image is made up of layers; each layer can be modified to create a new image.
Q74. What are the differences between containers and images?
Answer: No, we can't make changes to an image directly. We can make changes in a Dockerfile or to an existing container to create a new layered image.
Answer: Image tags are variants of a Docker image. "latest" is the default tag of an image.
Q80. What is a Dockerfile?
Answer: A Dockerfile is a series of instructions used to build a Docker image. The docker build command is used to build an image from a Dockerfile.
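As a minimal sketch (the file app.sh and the tag myapp:1.0 are placeholder names), a Dockerfile and the build command could look like this:
cat > Dockerfile <<'EOF'
FROM alpine:3.19
COPY app.sh /app.sh
RUN chmod +x /app.sh
CMD ["/app.sh"]
EOF
docker build -t myapp:1.0 .   # each instruction above becomes a layer in the resulting image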
Q81.How to build a docker file?
Answer: The docker history command lists all the layers in an image, along with the image creation date, size, and the command used to create each layer.
Answer: They define the default command to be executed when a container starts.
Q85. What is Ansible?
Answer: Ansible is simple and lightweight; it needs only SSH and Python as dependencies. It does not require an agent to be installed.
Answer: Ansible "modules" are small, predefined pieces of code that perform an action, e.g. copy a file or start a service.
Q88.What are Ansible Tasks ?
Answer: Tasks are nothing but ansible modules with the arguments
Answer: Handlers are triggered when there is a change of state, e.g. restart a service when a property file has changed.
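A minimal playbook sketch showing a task that notifies a handler (the host group, file paths, and the service name "myapp" are placeholder assumptions):
cat > site.yml <<'EOF'
---
- hosts: web
  become: true
  tasks:
    - name: Deploy application properties      # a task = a module plus its arguments
      copy:
        src: app.properties
        dest: /etc/myapp/app.properties
      notify: restart myapp                    # fires the handler only when the file actually changes
  handlers:
    - name: restart myapp
      service:
        name: myapp
        state: restarted
EOF
ansible-playbook -i inventory.ini site.yml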
Q91. What is YAML?
Answer: YAML (originally "Yet Another Markup Language") is a way of storing data in a structured, human-readable text format, similar to JSON.
Q93.What is MAVEN ?
Answer: Maven is a Java build tool, so you must have Java installed to proceed.
Answer: Validate checks whether the information provided is correct and everything necessary is available.
Answer: It tests the source code using a suitable testing framework.
Q98.What is docker-compose?
Answer: CI is nothing but giving immediate feedback to the developer by testing and analyzing the code.
Answer: Continuous delivery is a continuation of CI which aims at automatically delivering the software all the way to pre-production.
Answer: Continuous deployment is the next step after CI and CD, where the tested software is provided to end customers after some validation and change-management activities.
Q102.What is git?
Answer: git commit records changes done to files in the local repository.
Answer: git push updates the remote repository with the commits made locally.
Answer: git pull downloads the changes from the remote repository and merges them with the files in your local repository.
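A minimal local-to-remote workflow tying these commands together (the repository URL and the branch name "main" are assumptions):
git clone https://github.com/example/project.git
cd project
echo "change" >> README.md
git add README.md
git commit -m "Update README"     # record the change in the local repository
git push origin main              # publish local commits to the remote
git pull origin main              # later: fetch and merge new remote commits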
Answer: Start the answer by explaining the general market trend and how releasing small features frequently benefits you compared to releasing big features rarely. Discuss topics such as increased deployment frequency.
Answer: DevOps contains various stages, and each stage can be achieved with various tools. Below are the tools popularly used in DevOps:
Version control: Git, SVN
CI/CD: Jenkins
Configuration management tools: Chef, Puppet, Ansible
Containerization tool: Docker
Also mention any other tools that you worked on that helped you to automate the existing environment.
Answer: A Version Control System records the changes that are made to files or documents over a period of time.
Answer: There are two types of Version Control Systems: Centralized Version Control Systems, and Distributed/Decentralized Version Control Systems (e.g. Git, Bitbucket).
Q113. What is Jenkins? In Jenkins, what programming language is used?
Answer: It is an open-source automation tool used for Continuous Integration and Continuous Delivery. Jenkins is written in the Java programming language.
Answer: DevOps is a practice that emphasizes the collaboration and communication of both software developers and the operations/implementation team. It focuses on delivering software products faster and lowering the failure rate of releases.
Q116. Describe the core operations of DevOps with Infrastructure and with application.
After that, the JSON code is organized into files called templates.
You can implement the templates on AWS DevOps, where they are then managed as stacks.
Finally, the creating, deleting, and updating operations on the stack are done by CloudFormation.
Answer: It is very important to choose a simple language for a DevOps engineer. Python is the most suitable language for DevOps.
Answer: With the help of DevOps, developers can fix bugs and implement new features in less time. DevOps also helps to build clear communication among all the members of a team.
Answer: SSH is used to log into a remote machine and work on the command line; it is also used to establish secure, encrypted communication between two untrusted hosts over an insecure network.
Answer: I will post the code on SourceForge or GitHub to give visibility to everyone. I will also post the checklist from the last revision to make sure that any unsolved issues are resolved.
Q123. How many types of HTTP requests are there?
Q124. If a Linux build server suddenly starts getting slow, what will you check?
Answer: If a Linux build server suddenly starts getting slow, I will check the following three things:
Application-level troubleshooting: issues related to RAM, disk I/O read/write, disk space, etc.
System-level troubleshooting: check the application log file or application server log file, system performance issues, and web server logs (HTTP, Tomcat, JBoss, or WebLogic logs) to see whether the application server response/receive time is the cause of the slowness, plus memory leaks of any application.
Dependent-services troubleshooting: issues related to antivirus, firewall, network, SMTP server response time, etc.
Q126. Give examples of some popular cloud platforms used for DevOps implementation.
Answer: Popular cloud platforms for DevOps implementation are Google Cloud, Amazon Web Services, and Microsoft Azure.
Answer: A Version Control System allows team members to work on any file at any time.
All the previous versions and variants are kept inside the VCS.
With a distributed VCS the complete project history is stored locally, so if the central server breaks down you can use a team member's copy of the project.
You can see the actual changes made to the file's contents.
Answer: A build is a method in which source code is put together to check whether it works as a single unit. During build creation, the source code undergoes compilation, inspection, testing, and deployment.
Answer: Puppet is a configuration management tool which helps you to automate administration tasks.
Answer: Two-factor authentication is a security method in which the user provides two ways of identification
from separate categories.
Answer: It is a pattern which lowers the risk of introducing a new software version into the production environment. The "canary release" is rolled out to a small subset of users in a controlled manner before making it available to the complete user set.
Answer: You need to run continuous testing to make sure the new service is ready for production.
Answer: Vagrant is a tool used to create and manage a virtual version of computing environments for tests
and software development.
Q135. Usefulness of PTR in DNS.
Answer: Chef is a powerful automation platform used for transforming infrastructure into code. With this tool, you can write scripts that are used to automate processes.
Q137. What are the prerequisites for the implementation of DevOps?
Answer: Here are the essential best practices for DevOps implementation:
The speed of delivery, meaning the time taken for any task to get into the production environment.
Track the defects found at the various stages.
It is important to calculate the actual or average time taken to recover in case of a failure in the production environment.
Get feedback from the customer about bug reports, because it also affects the quality of the application.
Answer: SubGit helps you to move from SVN to Git. You can build a writable Git mirror of a local or remote Subversion repository by using SubGit.
Answer: The command /usr/lib/nux/unity_support_test -p gives a detailed output of Unity's requirements, and if they are met, then your video card can run Unity.
Answer: To enable the startup sound, click the control gear and then click on Startup Applications.
In the Startup Application Preferences window, click Add to add an entry.
Then fill in the fields (Name, Command, and Comment) with the command /usr/bin/canberra-gtk-play --id="desktop-login" --description="play login sound". Log out and then log in once you are done. You can use the shortcut key Ctrl+Alt+T to open a terminal.
Q143. Which is the fastest way to open an Ubuntu terminal in a particular directory?
Answer: To open an Ubuntu terminal in a particular directory, you can use a custom keyboard shortcut. To do that, in the command field of a new custom keyboard shortcut, type gnome-terminal --working-directory=/path/to/dir.
Q144. How could you get the current colour of the current screen on the Ubuntu desktop?
Answer: You have to open the background image in The Gimp (image editor) and use the dropper tool to
select the colour on a selected point. It gives you the RGB value of the colour at that point.
Answer: You have to use ALT+F2 and then type "gnome-desktop-item-edit --create-new ~/Desktop"; it will launch the old GUI dialog and create a launcher on your desktop in Ubuntu.
Answer: Memcached is a free, open-source, high-performance, distributed memory object caching system. The primary objective of Memcached is to decrease the response time for data that would otherwise be constructed or recovered from some other source or database. It is used to avoid hitting a SQL database or another source repeatedly to collect data for a concurrent request. Memcached can be used for:
Social Networking -> Profile caching
Content Aggregation -> HTML/page caching
Ad targeting -> Cookie/profile tracking
Relationship -> Session caching
E-commerce -> Session and HTML caching
Location-based services -> Database query scaling
Gaming and entertainment -> Session caching
Memcached helps to:
Make application processes much faster
Simplify the object selection and rejection process
Reduce the number of retrieval requests to the database
Cut down the I/O (input/output) access to the hard disk
Drawbacks of Memcached:
It is not a persistent data store
It is not a database
It is not application-specific
It is unable to cache large objects
Answer: Important features of Memcached include:
CAS Tokens: a CAS token is attached to an object retrieved from the cache; you can use that token to save your updated object.
Callbacks: they simplify the code.
getDelayed: it reduces the time your script spends waiting for results to come back from a server.
Binary protocol: you can use the binary protocol instead of ASCII with the newer client.
Igbinary: previously, a client always had to serialize values with complex data, but with Memcached you can use the igbinary option.
Answer: Yes, it is possible to share a single instance of Memcache between multiple projects. You can run Memcache on more than one server because it is a memory store. You can also configure your client to speak to a particular set of instances, so you can run two different Memcache processes on the same host independently.
Q149. You have multiple Memcache servers, and one of the Memcache servers that holds your data fails. Can you recover the key data from that particular failed server?
Answer: Data won't be removed from the server, but there is a solution for auto-failover, which you can configure for multiple nodes. Fail-over can be triggered during any socket or Memcached server-level errors and not during standard client errors like adding an existing key, etc.
Answer: If you write the code to minimize cache stampedes, it will leave a minimal impact.
Another way is to bring up an instance of Memcached on a new machine using the lost machine's IP address.
The code itself is another option to minimize server outages, as it gives you the liberty to change the Memcached server list with minimal work.
Setting a timeout value is another option that some Memcached clients implement for Memcached server outages. When your Memcached server goes down, the client will keep trying to send a request until the timeout limit is reached.
Answer: When data changes you can update Memcached by:
Clearing the cache proactively: clear the cache when an insert or update is made.
Resetting the cache: this method is similar to the previous one, but instead of deleting the keys and waiting for the next request for the data to refresh the cache, reset the values right after the insert or update.
Answer: The Dogpile effect occurs when a cache expires and a website is hit by multiple requests from clients at the same time. You can use a semaphore lock to prevent this effect: after the value expires, the first process acquires the lock and starts generating the new value.
Answer: Use Memcached as a cache; don't use it as a data store.
Don't use Memcached as the ultimate source of information to run your application; you must always have another data source at hand.
Memcached is basically a key/value store and can't perform a query over the data or iterate over the contents to extract information.
Memcached is not secure; it provides neither encryption nor authentication.
Q154. When a server gets shut down, is the data stored in Memcached still available?
Answer: No. After a server shuts down and restarts, the data stored in Memcached is deleted, because Memcached does not persist data.
Answer: Memcache: it is an extension that allows you to work through handy object-oriented (OOP) and procedural interfaces. It is designed to reduce database load in dynamic web applications.
Memcached: it is an extension that uses the libmemcached library to provide an API for communicating with Memcached servers. It is used to speed up dynamic web applications by reducing database load. It is the more recent API.
Q156. Explain the Blue/Green Deployment Pattern
Answer: The Blue/Green pattern addresses one of the hardest challenges faced in automated deployment. In the Blue/Green deployment approach, you maintain two identical production environments. Only one of them is LIVE at any given point in time, and it is called the Blue environment. When the team has fully prepared the software for release, it conducts the final testing in the other environment, called the Green environment. Once the verification is complete, the traffic is routed to the Green environment.
Answer: Containers are a form of lightweight virtualization and create isolation between processes.
Answer: In DevOps, a post-mortem meeting takes place to discuss what went wrong during the overall process and how to fix those mistakes.
Answer: VMfres is one of the best options to build an IaaS cloud from VirtualBox VMs in less time. But if you want a lightweight PaaS, then Dokku is a better option, because Dokku is essentially a PaaS built out of bash scripts around containers.
Q160. Name two tools you can use for docker networking.
Answer: You can use Kubernetes and Docker swarm tools for docker networking.
Answer: DevOps is used for production, production feedback, IT operations, and software development.
Answer: Pair programming is an engineering practice from the Extreme Programming rules. It is the process where two programmers work on the same system, on the same design/algorithm/code. They play two different roles: one as the "driver" and the other as the "observer". The observer continuously observes the progress of the work to identify problems.
DevOps is the new buzz in the IT world, swiftly spreading through the technical space. Like other new and popular technologies, people have conflicting impressions of what DevOps actually is. The main objective of DevOps is to improve the relationship between the development and IT operations teams by advocating better communication and smoother collaboration between these two units of an enterprise. Corporations now face the necessity of delivering quicker and better releases to meet the ever more persistent demands of users and to decrease the "time to market." DevOps often allows deployments to happen very rapidly.
With the passage of time, the need for DevOps is continuously increasing. These are the main areas it is implemented in:
Agile development is used as an alternative to Waterfall development practice. In Agile, the development process is more iterative and incremental; there is more testing and feedback at every stage of development, as opposed to only the last stage in Waterfall. Scrum is used to manage complex software and product development using iterative and incremental practices. Scrum has three roles:
Product owner
Scrum master
Team
Q6). Name a few most famous DevOps tools?
Puppet
Chef
Ansible
Git
Nagios
Docker
Jenkins
Q7). Can we consider DevOps as an agile practice?
Yes, DevOps is considered an agile practice, where development is driven by the rapidly changing demands of the business to stay closer to corporate needs and requirements.
DevOps specialists work very closely with Agile development teams to ensure they have the environment necessary to support functions such as automated testing, continuous integration, and continuous delivery. DevOps specialists must be in continuous contact with the developers and make all required parts of the environment work flawlessly.
You can respond to this question by saying, "Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by leaving 'big-bang' testing to the end of the cycle, such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent, good-quality releases."
SSH is a Secure Shell which gives users a secure, encrypted mechanism to safely log into systems and transfer files. It aids in logging into a remote machine and working on the command line, and it provides encrypted, protected end-to-end communication between two hosts communicating over an insecure network.
Q12). What are the benefits of DevOps when seen from the Technical and Business viewpoint?
DevOps is developer friendly because it fixes bugs and implements new features smoothly and quickly. It also provides the much-needed clarity of communication among team members.
Q14). What measures would you take to handle revision (version) control?
To manage revision control successfully, post your code on SourceForge or GitHub so that everyone on the team can view it, and so that viewers can give suggestions for improvement.
GET
HEAD
PUT
POST
PATCH
DELETE
TRACE
CONNECT
OPTIONS
Q16). Explain the DevOps Toolchain.
Code
Build
Test
Package
Release
Configure
Monitor
Q17). Elucidate the core operations of DevOps concerning development and Infrastructure.
Unit testing
Packaging
Code coverage
Code developing
Configuration
Orchestration
Provisioning
Deployment
Q18). Why do you think there is a need for Continuous Integration of Development & Testing?
Continuous Integration of Development and Testing enhances the quality of software and highly deducts the
time which is taken to deliver it, by replacing the old-school practice of testing only after completing all the
development process.
Feature Branching
Task Branching
Release Branching
Q20). What is the motive of GIT tools in DevOps?
The major components of DevOps are continuous integration, continuous delivery, and continuous monitoring.
Q22). What steps should be taken when Linux-based-server suddenly gets slow?
When a Linux-based-server suddenly becomes slow, then you should focus on three things primarily:
Cloud platforms that can be used for the successful DevOps implementation are given as:
Google Cloud
VCS is a software application that helps software developers to work together and maintain the complete
history of their work.
Q25). What are the significant benefits of VCS (Version Control System)?
Git bisect helps you to find the commit which introduced a bug, using binary search. Here is the basic syntax: git bisect <subcommand> <options>
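A typical bisect session might look like this (the tag v1.0 is a placeholder for a known-good revision):
git bisect start
git bisect bad                 # the current commit is broken
git bisect good v1.0           # this older revision is known to work
# git now checks out a commit halfway in between; test it, then mark it:
git bisect good                # or: git bisect bad
# repeat until git prints the first bad commit, then clean up:
git bisect reset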
Q27). What do you understand by the term build?
A build is a method in the source code where the source code is put together to check how it works as a
single unit. In the complete process, the source code will undergo compilation, testing, inspection, and
deployment.
Q28). As per your experience, what is the most important thing that DevOps helps to achieve?
The most important thing that DevOps helps us to achieve is to get the changes in a product quickly while
minimizing risks related to software quality and compliance. Other than this, there are more benefits of
DevOps that include better communication, better collaboration among team members, etc.
Q29). Discuss one use case where DevOps can be implemented in the real-life.
Etsy is a Company that focuses on vintage, handmade, and uniquely manufactured items. There are
millions of Etsy users who are selling products online. At this stage, Etsy decided to follow a more agile
approach. DevOps helped Etsy with a continuous delivery pipeline and fully automated deployment lifecycle.
Q30). Explain your understanding of both the software development side and technical operations side of an
organization you have worked in the past recently.
The answer to this question may vary from person to person. Here, you should discuss the experience of
how flexible you were in your last Company.
A pattern is something commonly followed by others. If a pattern adopted by others does not work for your organization and you continue to blindly follow it, you are essentially adopting an anti-pattern.
It is a version control system that tracks changes to a file and allows you to revert to any particular changes.
Q33). In Git, how do you revert a commit that has already been made public?
Remove or fix the bad change in a new commit and push it to the remote repository. This is the most natural way to fix an error. To do this, use the command: git commit -m "commit message"
Alternatively, create a new commit that undoes all the changes that were made in the bad commit: git revert <commit-id>
Q34). What is the process to squash the last N commits into a single commit?
There are two options to squash the last N commits into a single commit.
To write a new commit message from scratch, use the following command: git reset --soft HEAD~N && git commit
To reuse the existing messages, extract them first and pass them to the new commit: git reset --soft HEAD~N && git commit --edit -m "$(git log --format=%B --reverse HEAD..HEAD@{1})"
Q35). What is Git rebase and how can it be used to resolve conflicts in a feature branch before merging?
Git rebase is a command used to replay the commits of the branch you are currently working on onto another branch. It moves all local commits to the top of that branch's history. It effectively replays the changes of the feature branch at the tip of master, allowing conflicts to be resolved in the process. The feature branch can then be merged into the master branch with relative ease, sometimes as a fast-forward operation.
Q36). How can you configure a git repository to run code sanity checking tools right before making commits
and preventing them if the test fails?
A sanity or smoke test determines whether it is reasonable to continue testing. Configuring a Git repository to run code sanity-checking tools right before commits, and preventing the commit if the check fails, is easy. It can be done with a simple pre-commit hook (saved as .git/hooks/pre-commit and made executable), for example:
#!/bin/sh
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
# pass if nothing relevant is staged or everything is gofmt-clean
if [ -z "$files" ] || [ -z "$(gofmt -l $files)" ]; then
exit 0
fi
echo "some .go files are not fmt'd"
exit 1
Q37). How to find a list of files that are changed in a certain manner?
To get a list of files that were changed or modified in a particular commit, you can use the following command: git diff-tree -r {hash}
Q38). How to set up a script every time a repository receives new commits from a push?
There are three techniques to set up a script every time a repository receives new commits from Push.
These are the pre-receive hook, post-receive hook, and update hook, etc.
Q39). Write commands to know in Git if a branch is merged to the master or not.
Here are the commands to know in Git if a branch has been merged into master or not. To list branches that have been merged into the current branch, use: git branch --merged
To list branches that have not been merged into the current branch, use: git branch --no-merged
It is a development practice that requires developers to integrate code into a shared repository multiple
times a day. Each check-in is verified with an automated build allowing teams to detect problems early.
Q41). Why is continuous integration necessary for the development and testing team?
It improves the quality of the software and reduces the overall time to delivery once development is complete. It allows the development team to find and fix bugs at an early stage, because developers merge their changes into the shared repository multiple times a day and automated tests run against every change.
Automate the deployment, and everyone should be able to check the result of the latest build.
Q43). What is the process to copy Jenkins from one server to another?
There are multiple ways to copy Jenkins from one server to another. Let us discuss them below:
You can move the job from one Jenkin installation to another by simply copying the corresponding job
directory.
Make a copy of the existing job and save it with a different name in the job directory.
Rename the existing job and make necessary changes as per the requirement.
Q44). How to create a file and take backups in Jenkins?
For taking backup in Jenkins, you just need to copy the directory and save it with a different name.
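For example, a rough backup sketch (assuming the default JENKINS_HOME of /var/lib/jenkins; adjust the path for your installation):
tar czf jenkins-backup-$(date +%F).tar.gz -C /var/lib/jenkins .   # archive jobs, config, and plugins
# restore by unpacking the archive into a stopped Jenkins home:
# tar xzf jenkins-backup-<date>.tar.gz -C /var/lib/jenkins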
Go to the Jenkins page at the top, select the “new job” option, and choose “Build a free-style software
project.”
Choose the preferable script that can be used to make the build.
Collect the information for the build and notify people about the build results.
Q46). Name a few useful plugins in Jenkins.
Amazon EC2
HTML publisher
Copy artifact
Join
Green Balls
Q47). How will you secure Jenkins?
Here are a few steps you should follow to secure the Jenkins:
Make sure that global security option is on and Jenkins is integrated with the company’s user directory with
appropriate login details.
Make sure that the project matrix is enabled for the fine tune access.
Automate the process of setting privileges in Jenkins with custom version-controlled scripts.
Limit the physical access to Jenkins data/folders.
Run the security audits periodically.
Jenkins is one of the popular tools used extensively in DevOps and hands-on training in Jenkins can make
you an expert in the DevOps domain.
It is the process of automating the manual testing of an application under test (AUT). It involves the use of testing tools that let you create test scripts which can be executed repeatedly without any manual intervention.
With continuous testing, all changes to the code can be tested automatically. It avoids the problem created
by the big-bang approach at the end of the cycle like release delays or quality issues etc. In this way,
continuous testing assures frequent and quality releases.
Policy analysis
Risk assessment
Requirements traceability
Test optimization
Advanced analytics
Service virtualization
Q53). Which testing tool is just the best as per your experience?
Selenium testing tool is just the best as per my experience. Here are a few benefits which makes it suitable
for the workplace.
It is an open source free testing tool with a large user base and helping communities.
It is compatible with multiple browsers and operating systems.
It supports multiple programming languages with regular development and distributed testing.
Q54). What are the different testing types supported by the Selenium?
Two-factor authentication in DevOps is a security method where the user is provided with two identification
methods from different categories.
Q56). Which type of testing should be performed to make sure that a new service is ready for production?
It is continuous testing that makes sure that a new service is ready for production.
It is a configuration management tool in DevOps that helps you in automating administration tasks.
It is a pattern that reduces the risk of introducing a new version of the software into the production
environment. It is made available in a controlled manner to the subset of users before releasing to the
complete set of users.
PTR means pointer record that is required for a reverse DNS lookup.
It is a DevOps tool that is used for creating and managing virtual environments for testing and developing
software programs.
Q61). What are the prerequisites for the successful implementation of DevOps?
Q62). What are the best practices to follow for DevOps success?
The speed of delivery, i.e. the time taken for a task to get into the production environment.
Focus on different types of defects in the build.
Check the average time taken to recover in case of failure.
The total number of reported bugs by customers impacting the quality of an application.
A SubGit tool helps in migrating from SVN to Git. It allows you to build a writable Git mirror of a remote or
local subversion repository.
Splunk
Icinga 2
Wireshark
Nagios
OpenNMS
Q65). How to check either your video card can run Unity or not?
Here is the command to check either your video card can run unity or not: /usr/lib/linux/unity_support_test-p
It will give you a depth of unity’s requirements. If they are met, your video card can run Unity.
To enable the start-up sounds in Ubuntu, you should follow these steps:
For this purpose, you can use a custom keyboard shortcut.
To do that, in the command field of a new custom keyboard shortcut, type gnome-terminal --working-directory=/path/to/dir.
Q68). How to get the current color of the screen on the Ubuntu desktop?
You should open the background image and use a dropper tool to select the color at a specific point. It will
give you the RGB value for that color at a specific point.
To create a launcher on an Ubuntu desktop, you should do the following:
Press ALT+F2, then type "gnome-desktop-item-edit --create-new ~/Desktop"; it will launch the old GUI dialog and create a launcher on your desktop.
Q70). What is Memcached in DevOps?
It is an open source, high speed, distributed memory object. Its primary objective is enhancing the response
time of data that can otherwise be constructed or recovered from another source of database. It avoids the
need for operating SQL database repetitively to fetch data for a concurrent request.
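As an illustration, the Memcached text protocol can be exercised directly with netcat (assuming a Memcached instance is listening on localhost:11211; the key "greeting" is a placeholder):
printf 'set greeting 0 900 5\r\nhello\r\nget greeting\r\nquit\r\n' | nc localhost 11211
# "set <key> <flags> <ttl> <bytes>" stores the value; "get <key>" returns it until the TTL expires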
It is not application-specific.
It is not able to cache large objects.
Q73). What are the features of Memcached?
A few highlighted features of Memcached can be given as:
Q75). If you have multiple Memcached servers and one of the Memcached servers fails, then what will happen?
Even if one of the Memcached servers fails, the data won't be lost; it can be recovered by configuring the client for multiple nodes.
If one of the server instances fails, it will put a huge load on the database server. To avoid this, the code should be written in such a way that it minimizes cache stampedes and leaves a minimal impact on the database server.
You can bring up an instance of Memcached on a new machine with the failed machine's IP address.
You can modify the Memcached server list to minimize server outages.
Set a timeout value for Memcached server outages. If the server goes down, the client will keep retrying the request until the timeout value is reached.
Q77). How to update Memcached when data changes?
To update the Memcached in case of data changes, you can use these two techniques:
Dogpile effect refers to the event when the cache expires, and website hits by multiple requests together at
the same time. The semaphore lock can minimize this effect. When the cache expires, the first process
acquires the lock and generates new value as required.
These two colors are used to represent tough deployment challenges for a software project. The live
environment is the Blue environment. When the team prepares the next release of the software, it conducts
the final stage of testing in the Green environment.
A post mortem meeting discusses what went wrong and what steps to be taken to avoid failures.
Q83). Name two tools that can be used for Docket networking.
Asset management refers to any system that monitors and maintains things of a group or unit. Configuration
Management is the process of identifying, controlling, and managing configuration items in support of
change management.
An HTTP protocol works like any other protocol in a client-server architecture. The client initiates a request,
and the server responds to it.
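For example, using curl (example.com stands in for any real endpoint):
curl -i -X GET https://example.com/                         # client sends a request; -i prints the server's status line and headers
curl -i -X POST -d 'name=devops' https://example.com/api    # same request/response cycle with a different method and a body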
A resource is a piece of infrastructure and its desired state, e.g. packages should be installed, services should be running, files should be generated, etc.
The answer is pretty direct. A recipe is a collection of resources, and a Cookbook is a collection of recipes
and other information.
Playbooks are Ansible’s orchestration, configuration, and deployment languages. They are written in
human-readable basic text language.
Q98). How can you check the complete list of Ansible variables?
You can use this command to see the complete list of Ansible variables: ansible -m setup hostname
It is a DevOps tool for continuous monitoring of systems, business processes, or application services, etc.
Plugins are scripts that are run from a command line to check the status of Host or Service.
Development Cycle: DevOps shortens the development cycle from initial design
to production.
Full Automation: DevOps helps to achieve full automation from testing, to build, release
and deployment.
Deployment Rollback: In DevOps, we plan for any failure in deployment rollback due to a
bug in code or issue in production. This gives confidence in releasing feature without
worrying about downtime for rollback.
Defect Detection: With DevOps approach, we can catch defects much earlier than
releasing to production. It improves the quality of the software.
which teams become more productive and more innovative.
Based on the Jira tasks, developers checking code into GIT version control system.
During the build process, automated tests run to validate the code checked in by
a developer.
Jenkins automatically picks the libraries from Artifactory and deploys it to Production.
During Production deployment, Docker images are used to deploy same code on
multiple hosts.
Once the code is deployed to Production, we use monitoring tools like Nagios to check the health of the production servers.
Splunk based alerts inform the admins of any issues or exceptions in production.
Agile is a set of values and principles about how to develop software in a systematic way.
Whereas DevOps is a way to quickly, easily, and repeatably move that software into production infrastructure, in a safe and simple way.
Most important aspect of DevOps is to get the changes into production as quickly as
possible while minimizing risks in software quality assurance and compliance. This is the
primary objective of DevOps.
Git
Jenkins, Bamboo
Selenium
Puppet, BitBucket
Chef
Ansible, Artifactory
Nagios
Docker
Monit
Collectd/Collect
Code is deployed by adopting continuous delivery best practices. Which means that
checked in code is built automatically and then artifacts are published to repository servers.
On the application servers there are deployment triggers, usually timed by using cron jobs.
Gradle is an open-source build automation system that builds upon the concepts of Apache
Ant and Apache Maven. Gradle has a proper programming language instead of XML
configuration file and the language is called ‘Groovy’.
Gradle uses a directed acyclic graph ("DAG") to determine the order in which tasks can
be run.
Gradle was designed for multi-project builds, which can grow to be quite large. It supports
incremental builds by intelligently determining which parts of the build tree are up to date,
any task dependent only on those parts does not need to be re-executed.
Deep API: Using this API, developers can monitor and customize its configuration
and execution behaviors.
Scalability: Gradle can easily increase productivity, from simple and single project
builds to huge enterprise multi-project builds. Multi-project builds: Gradle
supports multi-project builds and also partial builds.
First build integration tool − Gradle completely supports ANT tasks, Maven and Ivy
repository infrastructure for publishing and retrieving dependencies. It also provides a
converter for turning a Maven pom.xml to Gradle script.
Free open source − Gradle is an open source project, and licensed under the
Apache Software License (ASL).
Groovy: Gradle's build scripts are written in Groovy, not XML. But unlike other
approaches this is not for simply exposing the raw scripting power of a dynamic
language. The whole design of Gradle is oriented towards being used as a
language, not as a rigid framework.
There isn't a great support for multi-project builds in Ant and Maven. Developers end
up doing a lot of coding to support multi-project builds.
Also having some build-by-convention is nice and makes build scripts more concise. With
Maven, it takes build by convention too far, and customizing your build process becomes a
hack.
Maven also promotes every project publishing an artifact. Maven does not support
subprojects to be built and versioned together.
But with Gradle developers can have the flexibility of Ant and build by convention of
Maven.
Groovy is easier and clean to code than XML. In Gradle, developers can define
dependencies between projects on the local file system without the need to publish
artifacts to repository.
The following is a summary of the major differences between Gradle and Apache Maven:
Flexibility: Google chose Gradle as the official build tool for Android; not because build
scripts are code, but because Gradle is modeled in a way that is extensible in the most
fundamental ways.
Both Gradle and Maven provide convention over configuration. However, Maven provides a
very rigid model that makes customization tedious and sometimes impossible.
While this can make it easier to understand any given Maven build, it also makes it
unsuitable for many automation problems. Gradle, on the other hand, is built with
an empowered and responsible user in mind.
Performance
Both Gradle and Maven employ some form of parallel project building and parallel
dependency resolution. The biggest differences are Gradle's mechanisms for work
avoidance and incrementality. The following features make Gradle much faster than Maven:
Incrementality: Gradle avoids work by tracking the inputs and outputs of tasks and only
running what is necessary.
Build Cache: reuses the build outputs of any other Gradle build with the
same inputs.
Gradle Daemon: a long-lived process that keeps build information "hot" in memory.
User Experience
Maven has very good support for various IDEs. Gradle's IDE support continues to
improve quickly but is not yet as good as Maven's.
Although IDEs are important, a large number of users prefer to execute build operations
through a command-line interface. Gradle provides a modern CLI that has discoverability
features like `gradle tasks`, as well as improved logging and command-line completion.
Dependency Management
Both build systems provide built-in capability to resolve dependencies from configurable
repositories. Both are able to cache dependencies locally and download them in parallel.
As a library consumer, Maven allows one to override a dependency, but only by version.
Gradle provides customizable dependency selection and substitution rules that can be
declared once and handle unwanted dependencies project-wide. This substitution
mechanism enables Gradle to build multiple source projects together to create composite
builds.
Maven has few, built-in dependency scopes, which forces awkward module architectures in
common scenarios like using test fixtures or code generation. There is no separation
between unit and integration tests, for example. Gradle allows custom dependency scopes,
which provides better-modeled and faster builds.
Gradle builds a script file for handling projects and tasks. Every Gradle build represents
one or more projects.
The wrapper is a batch script on Windows, and a shell script for other operating systems.
The build script file is named build.gradle. It is generally written in the Gradle scripting language.
In order to add a dependency to your project, you need to declare it under a dependency configuration (for example compile) in the dependencies block of the build.gradle file. Dependency configurations reference the external dependencies, which Gradle needs to download from the web. Some key features of these configurations are:
1. Compile: The dependencies required to compile the production source of the project.
2. Runtime: The dependencies required by the production classes at runtime.
3. Test compile: The dependencies required to compile the test source of the project.
4. Test runtime: The dependencies required to run the tests; by default this includes the runtime dependencies.
Gradle runs on the Java Virtual Machine (JVM) and uses several
supporting libraries that require a non-trivial initialization time.
As a result, it can sometimes seem a little slow to start. The solution to this
problem is the Gradle Daemon: a long-lived background process that
executes your builds much more quickly than would otherwise be the case.
We accomplish this by avoiding the expensive bootstrapping process as
well as leveraging caching, by keeping data about your project in memory.
Running Gradle builds with the Daemon is no different from running them without it.
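A few illustrative commands (assuming Gradle is installed and on the PATH):
gradle --status                                        # list running daemon processes
gradle build                                           # a warm daemon makes repeated builds start much faster
echo "org.gradle.daemon=true" >> gradle.properties     # pin the setting per project (it is on by default in recent versions)
gradle --stop                                          # stop all daemons when finished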
Software projects rarely work in isolation. In most cases, a project relies on reusable
functionality in the form of libraries or is broken up into individual components to compose a
modularized system.
Gradle has built-in support for dependency management and lives up to the task of
fulfilling typical scenarios encountered in modern software projects.
1. It has good UX
2. It is very powerful
Multi-project builds helps with modularization. It allows a person to concentrate on one area
of work in a larger project, while Gradle takes care of dependencies from other parts of the
project
A multi-project build in Gradle consists of one root project, and one or more subprojects
that may also have subprojects.
While each subproject could configure itself in complete isolation of the other subprojects, it
is common that subprojects share common traits.
The Gradle build life cycle consists of the following three phases:
-Initialization phase: In this phase the project layer or objects are organized.
-Configuration phase: In this phase all the tasks are made available for the current build and
a dependency graph is created.
-Execution phase: In this phase the selected tasks are executed.
The Java plugin adds Java compilation along with testing and bundling capabilities to the
project. It is introduced in the way of a SourceSet which act as a group of source files
compiled and executed together.
Compile:
The dependencies required to compile the production source of the project.
Runtime:
The dependencies required by the production classes at runtime.
Test Compile:
The dependencies required to compile the test source of the project.
Test Runtime:
The dependencies required to run the tests; by default this includes the runtime dependencies.
Question: What is Groovy?
It can be used as both a programming language and a scripting language for the
Java Platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates
seamlessly with other Java code and libraries.
Groovy uses a curly-bracket syntax similar to Java. Groovy supports closures, multiline
strings, and expressions embedded in strings.
And much of Groovy's power lies in its AST transformations, triggered through annotations.
Increased expressivity (type less to do more)
Closures
Groovy is documented rather poorly. In fact, the core documentation of Groovy is limited and there is no information regarding the complex and run-time errors that happen. Developers are largely on their own and normally have to figure out the explanations of the internal workings by themselves.
Groovy adds the execute method to String to make executing shells fairly easy
println "ls".execute().text
-Application Servers
-Servlet Containers
It is possible but in this case the features are limited. Groovy cannot be made to handle all
the tasks in a manner it has to.
Installing and using Groovy is easy. Groovy does not have complex system requirements. It
is OS independent.
Groovy can perform optimally in every situation. There are many Java-based components in Groovy, which make it even easier to work with Java applications.
A closure in Groovy is an open, anonymous, block of code that can take arguments, return
a value and be assigned to a variable. A closure may reference variables declared in its
surrounding scope. In opposition to the formal definition of a closure, Closure in the Groovy
language can also contain free variables which are defined outside of its surrounding
scope.
When a parameter list is specified, the -> character is required and serves to separate the
arguments from the closure body. The statements portion consists of 0, 1, or many Groovy
statements.
Through this class programmers can add properties, constructors, methods and operations
in the task. It is a powerful option available in the Groovy.
By default this class cannot be inherited and users need to call explicitly. The command for
this is “ExpandoMetaClass.enableGlobally()”.
For using groovy, you need to have enough knowledge of Java. Knowledge of Java
is important because half of groovy is based on Java.
It might take you some time to get used to the usual syntax and default typing.
class Test {
    static void main(String[] args) { println('Hello World'); }
}
-Default imports
In Groovy all these packages and classes are imported by default, i.e. Developers do not
have to use an explicit import statement to use them:
java.io.*
java.lang.*
java.math.BigDecimal
java.math.BigInteger
java.net.*
java.util.*
groovy.lang.*
groovy.util.*
-Multi-methods
In Groovy, the methods which will be invoked are chosen at runtime. This is called runtime
dispatch or multi-methods. It means that the method will be chosen based on the types of
the arguments at runtime. In Java, this is the opposite: methods are chosen at compile
time, based on the declared types.
-Array initializers
In Groovy, the { … } block is reserved for closures. That means that you cannot create
array literals with this syntax:
int[] arraySyntex = { 6, 3, 1}
-ARM blocks
ARM (Automatic Resource Management) block from Java 7 are not supported in Groovy.
Instead, Groovy provides various methods relying on closures, which have the same effect
while being more idiomatic.
-GStrings
As double-quoted string literals are interpreted as GString values, Groovy may fail with
compile error or produce subtly different code if a class with String literal containing a dollar
character is compiled with Groovy and Java compiler.
While typically, Groovy will auto-cast between GString and String if an API declares the type
of a parameter, beware of Java APIs that accept an Object parameter and then check the
actual type.
Singly-quoted literals in Groovy are used for String , and double-quoted result in
String or GString , depending whether there is interpolation in the literal.
assert 'c'.getClass()==String
assert "c".getClass()==String
Groovy will automatically cast a single-character String to char only when assigning to a
variable of type char . When calling methods with arguments of type char we need to either
cast explicitly or make sure the value has been cast in advance.
char a='a'
try {
} catch(MissingMethodException e) {
Groovy supports two styles of casting and in the case of casting to char there are subtle
differences when casting a multi-char strings. The Groovy style cast is more lenient and will
take the first character, while the C-style cast will fail with exception.
try {
} catch(GroovyCastException e) {
-Behaviour of ==
The Groovy programming language comes with great support for writing tests. In addition
to the language features and test integration with state-of-the-art testing libraries and
frameworks.
The Groovy ecosystem has born a rich set of testing libraries and frameworks.
Junit Integrations
Groovy also has excellent built-in support for a range of mocking and stubbing alternatives.
A key reason for this is that it is hard work creating custom hand-crafted mocks using Java.
Writing tests means formulating assumptions by using assertions. In Java this can be done
by using the assert keyword. But Groovy comes with a powerful variant of assert also known
as power assertion statement.
Groovy’s power assert differs from the Java version in its output given the boolean
expression validates to false :
def x = 1
assert x == 2
// Output:
//
// Assertion failed:
// assert x == 2
// ||
// 1 false
The java.lang.AssertionError that is thrown whenever the assertion can not be validated
successfully, contains an extended version of the original exception message. The power
assertion output shows evaluation results from the outer to the inner expression. The power
assertion statements true power unleashes in complex Boolean statements, or statements
with collections or other toString -enabled classes:
def x = [1,2,3,4,5]
// Output:
//
// Assertion failed:
// || |
// || false
// | [1, 2, 3, 4, 5, 6]
// [1, 2, 3, 4, 5, 6]
Question: Can We Use Design Patterns In Groovy?
Design patterns can also be used with Groovy. Here are important points
Some patterns carry over directly (and can make use of normal Groovy syntax
improvements for greater readability)
Some patterns are no longer required because they are built right into the language
or because Groovy supports a better way of achieving the intent of the pattern
some patterns that have to be expressed at the design level in other languages can
be implemented directly in Groovy (due to the way Groovy can blur the distinction
between design and implementation)
Groovy comes with integrated support for converting between Groovy objects and JSON.
The classes dedicated to JSON serialisation and parsing are found in the groovy.json
package.
JsonSlurper is a class that parses JSON text or reader content into Groovy data
structures (objects) such as maps, lists and primitive types like Integer , Double ,
Boolean and String .
The class comes with a bunch of overloaded parse methods plus some special methods
such as parseText , parseFile and others
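A minimal JsonSlurper sketch (the JSON string is purely illustrative):
import groovy.json.JsonSlurper
def result = new JsonSlurper().parseText('{"name": "Groovy", "year": 2003}')
assert result.name == 'Groovy'   // the parsed JSON object behaves like a Groovy map
assert result.year == 2003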
XmlParser and XmlSlurper are used for parsing XML with Groovy. Both have the same
approach to parsing an XML document.
Both come with a bunch of overloaded parse methods plus some special methods such
as parseText , parseFile and others.
XmlSlurper
def text = '''
<list>
<technology>
<name>Groovy</name>
</technology>
</list>
'''
def list = new XmlSlurper().parseText(text)
XmlParser
def text = '''
<list>
<technology>
<name>Groovy</name>
</technology>
</list>
'''
def list = new XmlParser().parseText(text)
Both are based on SAX, so they both have a low memory footprint.
Both can update/transform the XML.
XmlSlurper evaluates the structure lazily, so if you update the XML you'll have to
evaluate the whole tree again.
Question: What is Maven?
Maven is a build automation tool used primarily for Java projects. Maven addresses two
aspects of building software:
First: It describes how software is built.
Second: It describes its dependencies.
Unlike earlier tools like Apache Ant, it uses conventions for the build procedure, and
only exceptions need to be written down.
An XML file (the POM) describes the software project being built, its dependencies on other external
modules and components, the build order, directories, and required plug-ins.
It comes with pre-defined targets for performing certain well-defined tasks such as
compilation of code and its packaging.
Maven dynamically downloads Java libraries and Maven plug-ins from one or more
repositories such as the Maven 2 Central Repository, and stores them in a local cache.
This local cache of downloaded artifacts can also be updated with artifacts created by local
projects. Public repositories can also be updated.
Question: What Are Benefits Of Maven?
One of the biggest benefits of Maven is that its design regards all projects as having
a certain structure and a set of supported task work-flows.
Maven has quick project setup: no complicated build.xml files, just a POM and go.
All developers in a project use the same jar dependencies due to the centralized POM.
Maven gives you a number of reports and metrics for a project "for free".
It reduces the size of source distributions, because jars can be pulled from a
central location.
With Maven there is no need to add jar files manually to the class path.
A build lifecycle is a list of named phases that can be used to give order to goal execution.
One of Maven's standard lifecycles is the default lifecycle, which includes the following
phases, in this order (a sample invocation is shown after the list):
1 validate
2 generate-sources
3 process-sources
4 generate-resources
5 process-resources
6 compile
7 process-test-sources
8 process-test-resources
9 test-compile
10 test
11 package
12 install
13 deploy
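Invoking any phase runs every earlier phase of the same lifecycle. As a small illustration (assuming a standard Maven project):
mvn package        # runs validate through package in one go
mvn clean install  # additionally installs the built artifact into the local repository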
Build tools are programs that automate the creation of executable applications from source
code. Building incorporates compiling, linking and packaging the code into a usable or
executable form.
In small projects, developers will often manually invoke the build process. This is not
practical for larger projects, where it is very hard to keep track of what needs to be built,
in what sequence, and what dependencies there are in the building process. Using an
automation tool like Maven, Gradle or Ant makes the build process more consistent.
Question: What Is Dependency Management Mechanism
In Maven?
For example, if a project needs the Hibernate library, it simply declares Hibernate's
project coordinates in its POM.
Maven will automatically download the dependency and the dependencies that Hibernate
itself needs and store them in the user's local repository.
Maven 2 Central Repository is used by default to search for libraries, but developers can
configure the custom repositories to be used (e.g., company-private repositories) within the
POM.
The Central Repository Search Engine, can be used to find out coordinates for different
open-source libraries and frameworks.
Most of Maven's functionality is in plugins. A plugin provides a set of goals that can
be executed using the following syntax:
mvn [plugin-name]:[goal-name]
For example, a Java project can be compiled with the compiler-plugin's compile-goal by
running mvn compiler:compile . There are Maven plugins for building, testing, source control
management, running a web server, generating Eclipse project files, and much more.
Plugins are introduced and configured in a <plugins>-section of a pom.xml file. Some basic
plugins are included in every project by default, and they have sensible default settings.
Ant Maven
Ant doesn't have formal conventions. Maven has a convention to place source code, compiled code, etc.
A Project Object Model (POM) provides all the configuration for a single project. General
configuration covers the project's name, its owner and its dependencies on other projects.
One can also configure individual phases of the build process, which are implemented
as plugins.
For example, one can configure the compiler-plugin to use Java version 1.5 for compilation,
or specify that the project can be packaged even if some unit tests fail.
Larger projects should be divided into several modules, or sub-projects, each with its own
POM. One can then write a root POM through which one can compile all the modules with a
single command. POMs can also inherit configuration from other POMs. All POMs inherit
from the Super POM by default. The Super POM provides default configuration, such as
default source directories, default plugins, and so on.
-Group ID
-Artifact ID
-Version string. The three together uniquely identify the artifact. All the project
dependencies are specified as artifacts.
In Maven a goal represents a specific task which contributes to the building and managing
of a project.
It may be bound to 1 or many build phases. A goal not bound to any build phase could be
executed outside of the build lifecycle by its direct invocation.
In Maven a build profile is a set of configurations. This set is used to define or override the
default behaviour of the Maven build.
Build profiles help developers customize the build process for different environments.
For example, you can set profiles for Test, UAT, Pre-prod and Prod environments, each with
its own configuration.
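A profile is usually activated from the command line with the -P flag; a minimal sketch, where the profile id "uat" is hypothetical:
mvn clean package -P uat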
There are 6 build phases: Validate, Compile, Test, Package, Install and Deploy.
Target: this folder holds the compiled unit of code as part of the build process.
Source: this folder usually holds the Java source code.
Test: this directory contains all the unit testing code.
Question: What is Linux?
Linux is the best-known and most-used open source operating system. As an operating
system, Linux is a software that sits underneath all of the other software on a computer,
receiving requests from those programs and relaying these requests to the computer’s
hardware.
In many ways, Linux is similar to other operating systems such as Windows, OS X, or iOS
But Linux also is different from other operating systems in many important ways.
First, and perhaps most importantly, Linux is open source software. The code used
to create Linux is free and available to the public to view, edit, and—for users with
the appropriate skills—to contribute to.
Kernel: Linux is a monolithic kernel that is free and open source software that is
responsible for managing hardware resources for the users.
System Utility: System Utility performs specific and individual level tasks.
Question: What Is Difference Between Linux & Unix?
Unix and Linux are similar in many ways, and in fact, Linux was originally created to be
similar to Unix.
Both have similar tools for interfacing with the systems, programming tools, filesystem
layouts, and other key components.
However, Unix is not free. Over the years, a number of different operating systems have
been created that attempted to be “unix-like” or “unix-compatible,” but Linux has been the
most successful, far surpassing its predecessors in popularity.
BASH stands for Bourne Again Shell. BASH is the UNIX shell for the GNU operating
system. So, BASH is the command language interpreter that helps you to enter your input,
and thus you can retrieve information.
In a straightforward language, BASH is a program that will understand the data entered by
the user and execute the command and gives output.
The crontab (short for "cron table") is a list of commands that are scheduled to run at
regular time intervals on a computer system. The crontab command opens the crontab for
editing, and lets you add, remove, or modify scheduled tasks.
The daemon which reads the crontab and executes the commands at the right time
is called cron. It's named after Kronos, the Greek god of time.
Command syntax
crontab [-u user] file
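A few illustrative commands and a sample entry (the backup script path is hypothetical):
crontab -e                          # edit the current user's crontab
crontab -l                          # list the scheduled jobs
30 2 * * * /opt/scripts/backup.sh   # run a backup every day at 02:30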
Unix-like systems typically run numerous daemons, mainly to accommodate requests for
services from other computers on a network, but also to respond to other programs and
to hardware activity.
Examples of actions or conditions that can trigger daemons into activity are a specific time
or date, passage of a specified time interval, a file landing in a particular directory, receipt of
an e-mail or a Web request made through a particular communication line.
It is not necessary that the perpetrator of the action or condition be aware that a daemon
is listening, although programs frequently will perform an action only because they are
aware that they will implicitly arouse a daemon.
Processes are managed by the kernel (i.e., the core of the operating system), which
assigns each a unique process identification number (PID).
-Batch:Batch processes are submitted from a queue of processes and are not associated
with the command line; they are well suited for performing recurring tasks when system
usage is otherwise low.
That is, the entire display screen, or the currently active portion of it, shows
only characters (and no images), and input is usually performed entirely with a keyboard.
A kernel is the lowest level of easily replaceable software that interfaces with the hardware
in your computer.
It is responsible for interfacing all of your applications that are running in “user mode” down
to the physical hardware, and allowing processes, known as servers, to get information
from each other using inter-process communication (IPC).
Microkernel:A microkernel takes the approach of only managing what it has to: CPU,
memory, and IPC. Pretty much everything else in a computer can be seen as an accessory
and can be handled in user mode.
Hybrid Kernel: Hybrid kernels have the ability to pick and choose what they want to run in
user mode and what they want to run in supervisor mode. Because the Linux kernel is
monolithic, it has the largest footprint and the most complexity of the kernel types.
This was a design feature which was under quite a bit of debate in the early days
of Linux, and monolithic kernels still carry some of the design flaws that are inherent
to them.
Partial backup refers to selecting only a portion of file hierarchy or a single partition to back
up.
The root account is the system administrator account. It provides you full access and control of
the system.
Admin can create and maintain user accounts, assign different permission for each account
etc
Question: What Is Difference Between Cron and Anacron?
One of the main differences between cron and anacron jobs is that cron works on
systems that are running continuously,
while anacron is used for systems that are not running continuously.
1. Another difference between the two is that cron jobs can run every minute, but anacron
jobs can run only once a day.
2. Any normal user can schedule cron jobs, but anacron jobs can be scheduled
by the superuser only.
3. Cron should be used when you need to execute a job at a specific time, as given in
the cron entry, whereas anacron should be used when there is no restriction on
timing and the job can be executed at any time.
4. If we think about which one is ideal for servers or desktops, then cron should be used
for servers while anacron should be used for desktops or laptops.
Linux Loader (LILO) is a boot loader for the Linux operating system. It loads Linux into main
memory so that it can begin its operations.
Swap space is the amount of disk space set aside for Linux to temporarily hold data from
some concurrently running programs.
This space is usually used when RAM does not have enough memory to support all
concurrently running programs.
This memory management involves swapping pages of memory to and from physical storage.
There are around six hundred Linux distributions. Let us see some of the important ones:
Ubuntu: It is a well known Linux distribution with a lot of pre-installed apps and
easy to use repositories. It is very easy to use and feels similar to the Mac
operating system.
Linux Mint: It uses the Cinnamon and MATE desktops. It is friendly to users coming from
Windows and is a good choice for newcomers.
Debian: It is one of the most stable, quick and user-friendly Linux distributions.
Fedora: It is less stable but provides the latest versions of software. It uses the
GNOME 3 desktop environment by default.
Arch Linux: Every package has to be installed by you; it is not suitable for
beginners.
There are 3 types of permissions in Linux:
Read: User can read the file and list the directory.
Write: User can write to the file and add or remove files from the directory.
Execute: User can execute the file and can enter the directory.
It is always required to keep a check on memory usage in order to find out whether the
user is able to access the server and whether the resources are adequate. There are roughly 5
methods that determine the total memory used by Linux.
Free command: This is the simplest and easiest command to check
memory usage. For example: ‘$ free –m’; the option ‘m’ displays all the data in MBs.
Vmstat: This command basically lays out the memory usage statistics. For example:
‘$ vmstat –s’
Top command: This command determines the total memory usage as well as
also monitors the RAM usage.
Htop: This command also displays the memory usage along with other details.
chmod +x: adds execute permission to a file, making it executable.
cd: This stands for ‘change directory’. This command is used to change from the present
directory to the directory you want to work in. We just need to type cd followed by the
directory name to access that particular directory.
mkdir: This command is used to create an entirely new directory.
The shell reads this file and carries out the commands as though they have been entered
directly on the command line.
The shell is somewhat unique, in that it is both a powerful command line interface to the
system and a scripting language interpreter.
As we will see, most of the things that can be done on the command line can be done in
scripts, and most of the things that can be done in scripts can be done on the command
line.
We have covered many shell features, but we have focused on those features most often
used directly on the command line.
The shell also provides a set of features usually (but not always) used when writing
programs.
Some of the popular and frequently used system resource generating tools available on the
Linux platform are
vmstat
netstat
iostat
ifstat
mpstat.
These are used for reporting statistics from different system components such as virtual
memory, network connections and interfaces, CPU, input/output devices and more.
dstat is a powerful, flexible and versatile tool for generating Linux system resource
statistics; it is a replacement for all the tools mentioned in the question above.
It comes with extra features, counters and it is highly extensible, users with Python
knowledge can build their own plugins.
Features of dstat:
1. Joins information from vmstat, netstat, iostat, ifstat and mpstat tools
A new process is normally created when an existing process makes an exact copy of itself
in memory.
The child process will have the same environment as its parent, but only the process ID
number is different.
There are two conventional ways used for creating a new process in Linux:
Using fork() and exec() Function – this technique is a little advanced but
offers greater flexibility, speed, together with security.
Because Linux is a multi-user system, meaning different users can be running various
programs on the system, each running instance of a program must be identified
uniquely by the kernel.
And a program is identified by its process ID (PID) as well as its parent process's
ID (PPID); therefore processes can further be categorized into:
Parent processes – these are processes that create other processes during run-
time.
Child processes – these processes are created by other processes during run-time.
Init process is the mother (parent) of all processes on the system, it’s the first program that
is executed when the Linux system boots up; it manages all other processes on the
system. It is started by the kernel itself, so in principle it does not have a parent process.
The init process always has process ID of 1. It functions as an adoptive parent for
all orphaned processes.
# pidof systemd
# pidof top
# pidof httpd
To find the process ID and parent process ID of the current shell, run:
$ echo $$
$ echo $PPID
Question: What Are Different States Of A
Processes In Linux?
During execution, a process changes from one state to another depending on its
environment/circumstances. In Linux, a process has the following possible states:
Running – here it’s either running (it is the current process in the system) or it’s
ready to run (it’s waiting to be assigned to one of the CPUs).
Waiting – in this state, a process is waiting for an event to occur or for a system
resource. Additionally, the kernel also differentiates between two types of waiting
processes; interruptible waiting processes – can be interrupted by signals and
uninterruptible waiting processes – are waiting directly on hardware conditions and
cannot be interrupted by any event/signal.
Stopped – in this state, a process has been stopped, usually by receiving a signal.
Zombie – here, a process is dead; it has been halted but it still has an entry in
the process table.
There are several Linux tools for viewing/listing running processes on the system, the two
traditional and well known are ps and top commands:
1. ps Command
It displays information about a selection of the active processes on the system as shown
below:
#ps
#ps -e | head
top is a powerful tool that offers you a dynamic real-time view of a running system as shown
in the screenshot below:
#top
#glances
$ kill 2308
$ pkill glances
The fundamental way of controlling processes in Linux is by sending signals to them. There
are multiple signals that you can send to a process, to view all the signals run:
$ kill -l
To send a signal to a process, use the kill, pkill or pgrep commands we mentioned earlier
on. But programs can only respond to signals if they are programmed to recognize those
signals.
And most signals are for internal use by the system, or for programmers when they write
code. The following are signals which are useful to a system user:
SIGKILL 9 – this signal immediately terminates (kills) a process and the process
will not perform any clean-up operations.
SIGTERM 15 – this a program termination signal (kill will send this by default).
SIGTSTP 20 – sent to a process by its controlling terminal to request it to stop
(terminal stop); initiated by the user pressing [Ctrl+Z] .
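For example, reusing the PID and process name from the kill/pkill commands shown earlier (both are purely illustrative):
$ kill 2308            # sends the default SIGTERM (15)
$ kill -9 2308         # sends SIGKILL if the process refuses to exit
$ pkill -TSTP glances  # suspends a process by name, like pressing [Ctrl+Z]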
On the Linux system, all active processes have a priority and certain nice value. Processes
with higher priority will normally get more CPU time than lower priority processes.
However, a system user with root privileges can influence this with
the nice and renice commands.
From the output of the top command, the NI shows the process nice value:
$ top
$ renice +8 2687
$ renice +8 2103
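To start a job with a lower priority in the first place, use nice; a small sketch where the tar command is just an example workload:
$ nice -n 10 tar -czf /tmp/home-backup.tar.gz /home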
Git is a version control system for tracking changes in computer files and coordinating work
on those files among multiple people.
It is primarily used for source code management in software development but it can be
used to keep track of changes in any set of files.
As a distributed revision control system it is aimed at speed, data integrity, and support for
distributed, non-linear workflows.
By far, the most widely used modern version control system in the world today is Git. Git is
a mature, actively maintained open source project originally developed in 2005 by Linus
Torvalds. Git is an example of a Distributed Version Control System: in Git, every
developer's working copy of the code is also a repository that can contain the full history of
all changes.
Ease of use
High availability
Collaboration friendly
Any kind of projects from large to small scale can use GIT
The Git repository is stored in the same directory as the project itself, in a subdirectory
called .git. Note differences from central-repository systems like CVS or Subversion:
There is only one .git directory, in the root directory of the project.
Staging is a step before the commit process in git. That is, a commit in git is performed in
two steps:
-Staging and
-Actual commit
As long as a change set is in the staging area, git allows you to edit it as you like
(replace staged files with other versions of staged files, remove changes from staging, etc.)
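A small sketch of the two-step flow (the file name is hypothetical):
git add app.py                        # stage the change
git status                            # inspect what is staged vs. unstaged
git commit -m "Describe the change"   # record the staged snapshot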
Often, when you’ve been working on part of your project, things are in a messy state and
you want to switch branches for a bit to work on something else.
The problem is, you don’t want to do a commit of half-done work just so you can get back to
this point later. The answer to this issue is the git stash command. Stashing takes the dirty
state of your working directory — that is, your modified tracked files and staged changes —
and saves it on a stack of unfinished changes that you can reapply at any time.
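A typical stash sequence might look like this (the branch name is hypothetical):
git stash            # park the dirty working directory
git checkout hotfix  # switch branches and work on something else
git checkout -       # come back to the original branch
git stash pop        # reapply (and drop) the most recent stash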
Given one or more existing commits, revert the changes that the related patches introduce,
and record some new commits that record them. This requires your working tree to be
clean (no modifications from the HEAD commit).
SYNOPSIS: git revert <commit>…
Use the git remote rm command to remove a remote URL from your repository.
In case we do not need a specific stash, we use git stash drop command to remove it from
the list of stashes.
To remove a specific stash we specify as argument in the git stash drop <stashname>
command.
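For example:
git stash list              # show all stashes with their names
git stash drop stash@{1}    # drop one specific stash
git stash drop              # drop the most recent stash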
SVN has a centralized server and repository; Git does not require a centralized
server or repository.
Git does not have the global revision number feature like SVN has.
Git was developed for the Linux kernel by Linus Torvalds and is now maintained by Junio
Hamano; SVN was developed by CollabNet, Inc. Both Git and Apache Subversion (SVN) are
distributed under open source licenses.
GIT pull – It downloads as well as merges the data from the remote repository into the local
working files.
This may also lead to merging conflicts if the user’s local changes are not yet committed.
A fork is a copy of a repository. Forking a repository allows you to freely experiment with
changes without affecting the original project.
A fork is really a Github (not Git) construct to store a clone of the repo in your user account.
As a clone, it will contain all the branches in the main repo at the time you made the fork.
Create Tag:
Fill out the form fields, then click Publish release at the bottom.
After you create your tag on GitHub, you might want to fetch it into your local
repository too: git fetch.
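Tags can also be created and published entirely from the command line; a minimal sketch (the tag name is illustrative):
git tag -a v1.0 -m "Release 1.0"   # create an annotated tag locally
git push origin v1.0               # publish it to GitHub
git fetch --tags                   # fetch tags created on GitHub into the local repo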
Cherry picking in Git means to choose a commit from one branch and apply it onto another.
This is in contrast with other ways such as merge and rebase, which normally apply many
commits onto another branch.
Make sure you are on the branch you want to apply the commit to, then cherry-pick the commit; a minimal sketch follows.
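(The commit hash below is a placeholder.)
git checkout master
git cherry-pick <commit-hash>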
Much of Git is written in C, along with some BASH scripts for UI wrappers and other bits.
Rebasing is the process of moving a branch to a new base commit.The golden rule of
git rebase is to never use it on public branches.
The only way to synchronize the two master branches is to merge them back together,
resulting in an extra merge commit and two sets of commits that contain the same
changes.
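A typical (safe) use on a private feature branch, assuming the branch name shown:
git checkout feature-branch
git rebase master   # replay the feature commits on top of the current master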
Question: What is ‘head’ in git and how many heads can be created in
a repository?
There can be any number of heads in a GIT repository. By default there is one head known
as HEAD in each repository in GIT.
HEAD is a ref (reference) to the currently checked out commit. In normal states, it's actually
a symbolic ref to the branch user has checked out.
if you look at the contents of .git/HEAD you'll see something like "ref: refs/heads/master".
The branch itself is a reference to the commit at the tip of the branch
GIT diff – It shows the changes between commits, commits and working tree.
GIT status – It shows the difference between working directories and index.
GIT stash apply – It is used to bring back the saved changes onto the working
directory.
GIT rm – It removes the files from the staging area and also of the disk.
GIT add – It adds file changes in the existing directory to the index.
GIT reset – It is used to reset the index and as well as the working directory to the
state of the last commit.
GIT checkout – It is used to update the directories of the working tree with
those from another branch without merging.
GIT ls-tree – It represents a tree object, including the mode and the name of
each item.
GIT instaweb – It automatically directs a web browser and runs the web server with
an interface into your local repository.
To commit a repaired merge, run the “GIT commit” command. GIT identifies the position and
sets the parents of the commit correctly.
SubGIT is a tool for smooth and stress-free Subversion to GIT migration, and also a solution
for a company-wide Subversion to GIT migration.
The index is a single, large, binary file under the .git folder, which lists all files in the
current branch, their sha1 checksums, time stamps and the file names. Before completing
a commit, changes are formatted and reviewed in an intermediate area known as the index, also
known as the staging area.
One or more commits can be reverted through the use of git revert. This command, in
essence, creates a new commit with patches that cancel out the changes introduced in
specific commits.
In case the commit that needs to be reverted has already been published or changing the
repository history is not an option, git revert can be used to revert commits. Running the
following command will revert the last two commits:
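One common form of such a command is:
git revert HEAD~2..HEAD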
Alternatively, one can always checkout the state of a particular commit from the past, and
commit it anew.
Squashing multiple commits into a single commit will overwrite history, and should be done
with caution. However, this is useful when working in feature branches.
To squash the last N commits of the current branch, run the following command (with {N}
replaced with the number of commits that you want to squash):
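This is typically done with an interactive rebase, for example:
git rebase -i HEAD~{N}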
Upon running this command, an editor will open with a list of these N commit messages,
one per line.
Each of these lines will begin with the word “pick”. Replacing “pick” with “squash” or “s” will
tell Git to combine the commit with the commit before it.
To combine all N commits into one, set every commit in the list to be squash except the first
one.
Upon exiting the editor, and if no conflict arises, git rebase will allow you to create a new
commit message for the new combined commit.
A conflict arises when more than one commit that has to be merged has some change
in the same place or same line of code.
Git will not be able to predict which change should take precedence. This is a git conflict.
To resolve the conflict in git, edit the files to fix the conflicting changes and then add the
resolved files by running git add .
After that, to commit the repaired merge, run git commit . Git remembers that you are in the
middle of a merge, so it sets the parents of the commit correctly.
To configure a script to run every time a repository receives new commits through push,
one needs to define either a pre-receive, update, or a post-receive hook depending on
when exactly the script needs to be triggered.
Pre-receive hook in the destination repository is invoked when commits are pushed to it.
Any script bound to this hook will be executed before any references are updated.
This is a useful hook to run scripts that help enforce development policies.
Update hook works in a similar manner to pre-receive hook, and is also triggered before
any updates are actually made.
However, the update hook is called once for every commit that has been pushed to the
destination repository.
Finally, post-receive hook in the repository is invoked after the updates have been accepted
into the destination repository.
This is an ideal place to configure simple deployment scripts, invoke some continuous
integration systems, dispatch notification emails to repository maintainers, etc.
Hooks are local to every Git repository and are not versioned. Scripts can either be created
within the hooks directory inside the “.git” directory, or they can be created elsewhere and
links to those scripts can be placed within the directory.
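A minimal post-receive hook sketch that deploys the pushed code (the paths and service name are hypothetical, and the script must be executable):
#!/bin/sh
# .git/hooks/post-receive
GIT_WORK_TREE=/var/www/myapp git checkout -f master   # check out the pushed code
systemctl restart myapp.service                       # restart the application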
In Git each commit is given a unique hash. These hashes can be used to identify the
corresponding commits in various scenarios (such as while trying to checkout a particular
state of the code using the git checkout {hash} command).
Additionally, Git also maintains a number of aliases to certain commits, known as refs.
Also, every tag that you create in the repository effectively becomes a ref (and that is
exactly why you can use tags instead of commit hashes in various git commands).
Git also maintains a number of special aliases that change based on the state of
the repository, such as HEAD, FETCH_HEAD, MERGE_HEAD, etc.
Git also allows commits to be referred as relative to one another. For example, HEAD~1
refers to the commit parent to HEAD, HEAD~2 refers to the grandparent of HEAD, and so
on.
In case of merge commits, where the commit has two parents, ^ can be used to select one
of the two parents, e.g. HEAD^2 can be used to follow the second parent.
And finally, refspecs. These are used to map local and remote branches together.
However, these can be used to refer to commits that reside on remote branches allowing
one to control and manipulate them from a local Git environment.
Question: What Is Conflict In GIT?
A conflict arises when more than one commit that has to be merged has some change
in the same place or same line of code.
Git will not be able to predict which change should take precedence. This is a git conflict. To
resolve the conflict in git, edit the files to fix the conflicting changes and then add the
resolved files by running git add. After that, to commit the repaired merge, run git commit. Git
remembers that you are in the middle of a merge, so it sets the parents of the commit
correctly.
Git hooks are scripts that can run automatically on the occurrence of an event in a Git
repository. These are used for automation of workflow in GIT. Git hooks also help in
customizing the internal behavior of GIT. These are generally used for enforcing a GIT
commit policy.
GIT has very few disadvantages. These are the scenarios when GIT is difficult to use.
Binary Files: If we have a lot of binary (non-text) files in our project, then GIT becomes very
slow, e.g. projects with a lot of images or Word documents.
Steep Learning Curve: It takes some time for a newcomer to learn GIT. Some of the
GIT commands are non-intuitive to a fresher.
Slow remote speed: Sometimes the use of remote repositories is slow due to network
latency. Still, GIT is better than other VCSs in speed.
Question: What is stored inside a commit object in GIT?
Files: List of files that represent the state of a project at a specific point of time
Reference: Any reference to parent commit objects
Question: What Is GIT reset command?
Git reset command is used to reset the current HEAD to a specific state. By default it reverses
the action of the git add command, so we use git reset to undo the changes of git
add.
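For example (the file name is hypothetical):
git reset HEAD config.yml   # unstage the file, keeping the change in the working tree
git reset --soft HEAD~1     # move the branch back one commit, keeping changes staged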
GIT is made very secure since it contains the source code of an organization. All the
objects in a GIT repository are check-summed with a hashing algorithm called SHA-1.
This algorithm is quite strong and fast. It protects source code and other contents of
repository against the possible malicious attacks.
This algorithm also maintains the integrity of GIT repository by protecting the change
history against accidental changes.
Continuous Integration is the process of continuously integrating the code, often
multiple times per day. The purpose is to find problems quickly and deliver fixes more
rapidly.
CI is a best practice for software development. It is done to ensure that after every code
change there is no issue in the software.
The automated build includes compiling computer source code into binary code, packaging the
binary code, and running automated tests.
It enables you to quickly learn what to expect every time you deploy an environment with
much faster results.
This combined with Build Automation can save development teams a significant amount of
hours.
Automated Deployment saves clients from being extensively offline during development
and allows developers to build while “touching” fewer of a clients’ systems.
With an automated system, human error is prevented. In the event of human error,
developers are able to catch it before live deployment – saving time and headache.
You can even automate the contingency plan and make the site rollback to a working or
previous state as if nothing ever happened.
Clearly, this automated feature is super valuable in allowing applications and sites to
continue during fixes.
Different tools for supporting Continuous Integration are Hudson, Jenkins and Bamboo.
Jenkins is the most popular one currently. They provide integration with various version
control systems and build tools.
After the build passes, run automated test cases; if the test cases fail, notify the developer.
Source code repository : To commit code and changes for example git.
Build tool: It builds the application in a particular way, for example Maven or Gradle.
Jenkins is a self-contained, open source automation server used to automate all sorts of
tasks related to building, testing, and delivering or deploying software.
Jenkins is one of the leading open source automation servers available. Jenkins has an
extensible, plugin-based architecture, enabling developers to create 1,400+ plugins to
adapt it to a multitude of build, test and deployment technology integrations.
Jenkins Pipeline (or simply “Pipeline”) is a suite of plugins which supports implementing
and integrating continuous delivery pipelines into Jenkins.
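A minimal declarative Jenkinsfile sketch (the Maven commands are just an example of build and test steps):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn test' }
        }
    }
}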
Maven and Ant are Build Technologies whereas Jenkins is a continuous integration tool.
The Jenkins software enables developers to find and solve defects in a code base rapidly
and to automate testing of their builds.
Jenkins
TeamCity
Travis CI
Go CD
Bamboo
GitLab CI
CircleCI
Codeship
Jenkins supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial,
Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and arbitrary
shell scripts and Windows batch commands.
Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that
span from simple continuous integration to comprehensive continuous delivery pipelines.
By modeling a series of related tasks, users can take advantage of the many features of
Pipeline:
Code: Pipelines are implemented in code and typically checked into source control,
giving teams the ability to edit, review, and iterate upon their delivery pipeline.
Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins
master.
Pausable: Pipelines can optionally stop and wait for human input or approval
before continuing the Pipeline run.
Extensible: The Pipeline plugin supports custom extensions to its DSL and
multiple options for integration with other plugins.
In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes
Pipelines for branches which contain a Jenkinsfile in source control.
Jenkins can be used to perform the typical build server work, such as doing
continuous/official/nightly builds, run tests, or perform some repetitive batch tasks. This is
called “free-style software project” in Jenkins.
Question: How do you configure
automatic builds in Jenkins?
Amazon Web Services provides services that help you practice DevOps at your company
and that are built first for use with AWS.
These tools automate manual tasks, help teams manage complex environments at scale,
and keep engineers in control of the high velocity that is enabled by DevOps
Fully Managed Services: These services can help you take advantage of AWS resources
quicker. You can worry less about setting up, installing, and operating infrastructure on
your own. This lets you focus on your core product.
Built For Scalability: You can manage a single instance or scale to thousands using AWS
services. These services help you make the most of flexible compute resources by
simplifying provisioning, configuration, and scaling.
Programmable: You have the option to use each service via the AWS Command Line
Interface or through APIs and SDKs. You can also model and provision AWS resources
and your entire AWS infrastructure using declarative AWS CloudFormation templates.
Automation: AWS helps you use automation so you can build faster and more efficiently.
Using AWS services, you can automate manual tasks or processes such as deployments,
development & test workflows, container management, and configuration management.
Secure: Use AWS Identity and Access Management (IAM) to set user permissions and
policies. This gives you granular control over who can access your resources and how they
access those resources.
The AWS Developer Tools help you securely store and version your application’s source
code and automatically build, test, and deploy your application to AWS.
An Elastic Load Balancer ensures that the incoming traffic is distributed optimally across
various AWS instances.
A buffer synchronizes different components and makes the architecture more elastic to a
burst of load or traffic.
Without a buffer, the components tend to receive and process requests in an unstable,
uneven way.
The buffer creates an equilibrium between the various components and makes them work at
the same rate, so they can supply faster services.
Amazon S3: with this, one can retrieve the key information that is needed in creating the
cloud architectural design, and the information produced as a result can also be stored in
this component.
Amazon EC2 instance: helpful to run a large distributed system, for example on a Hadoop cluster.
Amazon SQS: this component acts as a mediator between different controllers. It is also used
for buffering requests received by other Amazon components.
Amazon SimpleDB: helps in storing the intermediate status logs and the tasks
executed by the consumers.
Question: How is a Spot instance different from an
On-Demand instance or Reserved Instance?
Spot Instance, On-Demand instance and Reserved Instances are all models for pricing.
Moving along, spot instances provide the ability for customers to purchase compute
capacity with no upfront commitment, at hourly rates usually lower than the On-Demand
rate in each region.
Spot instances are just like bidding, the bidding price is called Spot Price. The Spot Price
fluctuates based on supply and demand for instances, but customers will never pay more
than the maximum price they have specified.
If the Spot Price moves higher than a customer’s maximum price, the customer’s EC2
instance will be shut down automatically.
But the reverse is not true, if the Spot prices come down again, your EC2 instance will not
be launched automatically, one has to do that manually.
In Spot and On demand instance, there is no commitment for the duration from the user
side, however in reserved instances one has to stick to the time period that he has chosen.
Question: What are the best practices for security in Amazon EC2?
There are several best practices to secure Amazon EC2. A few of them are given below:
Use AWS Identity and Access Management (IAM) to control access to your AWS
resources.
Review the rules in your security groups regularly, and ensure that you apply
the principle of least privilege.
AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and
produces software packages that are ready to deploy.
With CodeBuild, you don’t need to provision, manage, and scale your own build servers.
CodeBuild scales continuously and processes multiple builds concurrently, so your builds
are not left waiting in a queue.
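A build for an existing CodeBuild project can be started from the AWS CLI; a small sketch where the project name is hypothetical:
aws codebuild start-build --project-name my-app-build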
Question: What is Amazon Elastic Container Service in AWS Devops?
Amazon Elastic Container Service (ECS) is a highly scalable, high performance container
management service that supports Docker containers and allows you to easily run
applications on a managed cluster of Amazon EC2 instances.
AWS Lambda lets you run code without provisioning or managing servers. With Lambda,
you can run code for virtually any type of application or backend service, all with zero
administration.
Just upload your code and Lambda takes care of everything required to run and scale your
code with high availability.
The platform of Splunk allows you to get visibility into machine data generated from
different networks, servers, devices, and hardware.
It can give insights into the application management, threat visibility, compliance, security,
etc. so it is used to analyze machine data. The data is collected from the forwarder from the
source and forwarded to the indexer. The data is stored locally on a host machine or cloud.
Then on the data stored in the indexer the search head searches, visualizes, analyzes and
performs various other functions.
Deployment servers act like an antivirus policy server for setting up exceptions and groups,
so that you can map and create a different set of data collection policies, each for either
a Windows-based server, a Linux-based server or a Solaris-based server. Splunk has four
important components:
Forwarder – Refers to Splunk instances that forward data to the remote indexers
Indexer – Processes and indexes the incoming data
Search Head – Provides GUI for searching
Deployment Server – Manages the Splunk components like indexer, forwarder, and
search head in the computing environment.
An alert is an action that a saved search triggers on regular intervals set over a time range,
based on the results of the search.
When the alerts are triggered, various actions occur consequently. For instance, sending
an email to a predefined list of people when a search is triggered.
1. Pre-result alerts : Most commonly used alert type and runs in real-time for an all-
time span. These alerts are designed such that whenever a search returns a result,
they are triggered.
2. Scheduled alerts : The second most common- scheduled results are set up to
evaluate the results of a historical search result running over a set time range on a
regular schedule. You can define a time range, schedule and the trigger condition
to an alert.
3. Rolling-window alerts: These are a hybrid of pre-result and scheduled alerts.
Similar to the former, they are based on real-time search but do not trigger each
time the search returns a matching result. They examine all events in real time within
the rolling window and trigger when the specific condition is met by an event in
the window, the way a scheduled alert is triggered by a scheduled search.
1. Sorting Results – Ordering results and (optionally) limiting the number of results.
2. Filtering Results – It takes a set of events or results and filters them into a
smaller set of results.
5. Reporting Results – Filtering out some fields to focus on the ones you need,
or modifying or adding fields to enrich your results or events.
In case the license master is unreachable, then it is just not possible to search the data.
However, the data coming in to the Indexer will not be affected. The data will continue to
flow into your Splunk deployment.
The Indexers will continue to index the data as usual; however, you will get a warning
message on top of your Search Head or web UI saying that you have exceeded the indexing
volume.
You either need to reduce the amount of data coming in or you need to buy a higher
capacity licence. Basically, the candidate is expected to answer that indexing does
not stop; only searching is halted.
Service Port Number
KV store 8191
Hot – It contains newly indexed data and is open for writing. For each index, there are one or more hot buckets.
Frozen – Data rolled from cold. The indexer deletes frozen data by default but
users can also archive it.
Thawed – Data restored from an archive. If you archive frozen data , you can later
return it to the index by thawing (defrosting) it.
Data models are used for creating a structured hierarchical model of data. It can be used
when you have a large amount of unstructured data, and when you want to make use of
that information without using complex search queries.
Create Sales Reports: If you have a sales report, then you can easily create the total
number of successful purchases, below that you can create a child object containing
the list of failed purchases and other views
Set Access Levels: If you want a structured view of users and their various
access levels, you can use a data model
On the other hand with pivots, you have the flexibility to create the front views of your
results and then pick and choose the most appropriate filter for a better view of results.
All of Splunk’s configurations are written in .conf files. There can be multiple copies present
for each of these files, and thus it is important to know the role these files play when a
Splunk instance is running or restarted. To determine the priority among copies of a
configuration file, Splunk software first determines the directory scheme. The directory
schemes are either a) Global or b) App/user. When the context is global (that is, where
there’s no app/user context), directory priority descends in this order:
When the context is app/user, directory priority descends from user to app to system:
Search time field extraction refers to the fields extracted while performing searches.
Whereas, fields extracted when the data comes to the indexer are referred to as Index time
field extraction.
You can set up index time field extraction either at the forwarder level or at
the indexer level.
Another difference is that Search time field extraction’s extracted fields are not part of the
metadata, so they do not consume disk space.
Whereas index time field extraction’s extracted fields are a part of metadata and hence
consume disk space.
Source type is a default field which is used to identify the data structure of an incoming
event. Source type determines how Splunk Enterprise formats the data during the indexing
process.
Source type can be set at the forwarder level for indexer extraction to identify different data
formats.
SOS stands for Splunk on Splunk. It is a Splunk app that provides graphical view of your
Splunk environment performance and issues.
The indexer is a Splunk Enterprise component that creates and manages indexes.
The main functions of an indexer are:
Input: Splunk Enterprise acquires the raw data from various input sources, breaks it into
64K blocks and assigns them some metadata keys. These keys include the host, source and
source type of the data.
Parsing: Also known as event processing; during this stage, Splunk Enterprise analyzes and
transforms the data, breaks data into streams, identifies, parses and sets timestamps, and
performs metadata annotation and transformation of data.
Indexing: In this phase, the parsed events are written to the disk index, including both the
compressed data and the associated index files.
Searching: The ‘Search’ function plays a
major role during this phase as it handles all searching aspects (interactive, scheduled
searches, reports, dashboards, alerts) on the indexed data and stores saved searches,
events, field extractions and views
Stats – This command produces summary statistics of all existing fields in your search
results and stores them as values in new fields.
Eventstats – It is the same as the stats command, except that the aggregation results are
added inline to every event, and only if the aggregation is applicable to that event. It
computes the requested statistics like stats does, but aggregates them back onto the
original raw data.
log4j is a reliable, fast and flexible logging framework (APIs) written in Java, which is
distributed under the Apache Software License.
log4j has been ported to the C, C++, C#, Perl, Python, Ruby, and Eiffel languages.
log4j is highly configurable through external configuration files at runtime. It views the
logging process in terms of levels of priorities and offers mechanisms to direct logging
information to a great variety of destinations.
It supports internationalization.
It uses multiple levels, namely ALL, TRACE, DEBUG, INFO, WARN, ERROR and
FATAL.
The format of the log output can be easily changed by extending the Layout class.
The target of the log output as well as the writing strategy can be altered by
implementations of the Appender interface.
It is fail-stop. However, although it certainly strives to ensure delivery, log4j does not
guarantee that each log statement will be delivered to its destination.
Following are the pros and cons of logging.
Logging is an important component of software development. A well-written logging code
offers quick debugging, easy maintenance, and structured storage of an application's runtime
information.
Logging does have its drawbacks also. It can slow down an application, and if too verbose, it
can cause scrolling blindness.
To alleviate these concerns, log4j is designed to be reliable, fast and extensible. Since logging
is rarely the main focus of an application, the log4j API strives to be simple to understand and
to use.
Question:What Is The Purpose Of Logger Object?
Logger Object − The top-level layer of log4j architecture is the Logger which provides the
Logger object.
The Logger object is responsible for capturing logging information and they are stored in a
namespace hierarchy.
The layout layer of log4j architecture provides objects which are used to format logging
information in different styles. It provides support to appender objects before publishing
logging information.
Layout objects play an important role in publishing logging information in a way that is
human-readable and reusable.
The Appender object is responsible for publishing logging information to various preferred
destinations such as a database, file, console, UNIX Syslog, etc.
This object is used by Layout objects to prepare the final logging information.
The LogManager object manages the logging framework. It is responsible for reading the
initial configuration parameters from a system-wide configuration file or a configuration
class.
Appender can have a threshold level associated with it independent of the logger level.
The Appender ignores any logging messages that have a level lower than the threshold
level.
Docker is a tool designed to make it easier to create, deploy, and run applications by using
containers.
Containers allow a developer to package up an application with all of the parts it needs,
such as libraries and other dependencies, and ship it all out as one package.
By doing so, the developer can rest assured that the application will run on any other Linux
machine regardless of any customized settings that machine might have that could differ
from the machine used for writing and testing the code. In a way, Docker is a bit like a
virtual machine, but unlike a virtual machine, rather than creating a whole virtual operating
system, Docker allows applications to use the same Linux kernel as the system that they're
running on and only requires applications be shipped with things not already running on the
host computer. This gives a significant performance boost and reduces the size of the
application.
Linux containers, in short, contain applications in a way that keep them isolated from the
host system that they run on.
Containers allow a developer to package up an application with all of the parts it needs,
such as libraries and other dependencies, and ship it all out as one package.
And they are designed to make it easier to provide a consistent experience as developers
and system administrators move code from development environments into production in a
fast and replicable way.
For developers, it means that they can focus on writing code without worrying about
the system that it will ultimately be running on.
It also allows them to get a head start by using one of thousands of programs already
designed to run in a Docker container as a part of their application.
For operations staff, Docker gives flexibility and potentially reduces the number of systems
needed because of its small footprint and lower overhead.
Question: What Is Docker Container?
Docker containers include the application and all of its dependencies, but share the kernel
with other containers, running as isolated processes in user space on the host operating
system.
Docker containers are not tied to any specific infrastructure: they run on any computer, on
any infrastructure, and in any cloud.
Now explain how to create a Docker container: Docker containers can be created either by
creating a Docker image and then running it, or by using Docker images that are present
on Docker Hub. Docker containers are basically runtime instances of Docker images.
Docker image is the source of Docker container. In other words, Docker images are used
to create containers.
Images are created with the build command, and they’ll produce a container when started
with run.
Images are stored in a Docker registry such as registry.hub.docker.com. Because they can
become quite large, images are designed to be composed of layers of other images,
allowing a minimal amount of data to be sent when transferring images over the network.
Docker hub is a cloud-based registry service which allows you to link to code repositories,
build your images and test them, stores manually pushed images, and links to Docker
cloud so you can deploy images to your hosts.
It provides a centralized resource for container image discovery, distribution and change
management, user and team collaboration, and workflow automation throughout the
development pipeline.
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single,
virtual Docker host.
Because Docker Swarm serves the standard Docker API, any tool that already communicates
with a Docker daemon can use Swarm to transparently scale to multiple hosts.
I will also suggest you to include some supported tools:
Dokku
Docker Compose
Docker Machine
Jenkins
A Dockerfile is a text document that contains all the commands a user could call on the
command line to assemble an image.
Using docker build users can create an automated build that executes several command-
line instructions in succession.
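A minimal Dockerfile sketch for packaging a pre-built Java application (the base image, jar path and tag are hypothetical):
FROM openjdk:8-jre-alpine
COPY target/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
The image can then be built with docker build -t my-app:1.0 . and run with docker run my-app:1.0.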
Docker containers are easy to deploy in a cloud. It can get more applications running on
the same hardware than other technologies.
We can use a Docker image to create a Docker container by using the below command:
docker run -t -i <image name>
This command will create and start a container. You should also add: if you want to
check the list of all containers with their status on a host, use the below command:
docker ps -a
In order to stop the Docker container you can use the below command:
docker stop <container ID>
To restart the Docker container you can use:
docker restart <container ID>
Question: What is the difference between docker run and docker create?
The primary difference is that using ‘docker create’ creates a container in a stopped
state.
Bonus point: You can use ‘docker create’ and store an output container ID for
later use. The best way to do it is to use ‘docker run’ with --cidfile FILE_NAME, as
running it again won’t allow the file to be overwritten.
Running
Paused
Restarting
Exited
A Docker registry is a service for hosting and distributing images. A Docker repository is a collection of related Docker images, usually different versions of the same application.
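To make the distinction concrete, here is a hedged example using the official registry image; the host port and repository name are illustrative:
docker run -d -p 5000:5000 --name registry registry:2    # the registry: a service that hosts and serves images
docker tag nginx localhost:5000/team/nginx:1.0           # 'team/nginx' is the repository, '1.0' is one image tag in it
docker push localhost:5000/team/nginx:1.0                # push the image into that repository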
A CMD does not execute anything at build time, but specifies the intended command for
the image.
If you would like your container to run the same executable every time, then you should
consider using ENTRYPOINT in combination with CMD.
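For example, in the illustrative Dockerfile below, ENTRYPOINT fixes the executable while CMD only supplies default arguments that can be overridden at run time:
FROM alpine:3.19
ENTRYPOINT ["ping"]              # the executable is always ping
CMD ["-c", "4", "localhost"]     # default arguments, replaced by anything passed to docker run
Running docker run <image> -c 2 8.8.8.8 would keep the ping entrypoint but replace the default arguments.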
As far as the number of containers that can be run, this really depends on your environment. The size of your applications as well as the amount of available resources will all affect the number of containers that can be run in your environment.
Containers, unfortunately, are not magical. They can't create new CPUs from scratch. They do, however, provide a more efficient way of utilizing your resources.
VMware was founded in 1998 by five different IT experts. The company officially launched
its first product, VMware Workstation, in 1999, which was followed by the VMware GSX
Server in 2001. The company has launched many additional products since that time.
VMware's desktop software is compatible with all major OSs, including Linux, Microsoft
Windows, and Mac OS X. VMware provides three different types of desktop software:
VMware Workstation: This application is used to install and run multiple copies or
instances of the same operating systems or different operating systems on a single
physical computer machine.
VMware Fusion: This product was designed for Mac users and provides extra
compatibility with all other VMware products and applications.
VMware Player: This product was launched as freeware by VMware for users who do not have licensed VMware products. This product is intended only for personal use.
VMware's software hypervisors intended for servers are bare-metal embedded hypervisors
that can run directly on the server hardware without the need of an extra primary OS.
VMware’s line of server software includes:
VMware ESXi Server: This server is similar to the ESX Server except that the service console is replaced with a BusyBox installation, and it requires very little disk space to operate.
VMware Server: Freeware software that can be used over existing operating systems
like Linux or Microsoft Windows.
The process of creating virtual versions of physical components, i.e., servers, storage devices, and network devices, on a physical host is called virtualization.
Virtualization lets you run multiple virtual machines on a single physical machine, which is called an ESXi host.
Server virtualization: consolidates physical servers so that multiple operating systems can run on a single server.
This agent is installed on the ESX/ESXi host when you add the host to vCenter.
The VMware kernel (VMkernel) is a proprietary kernel of VMware and is not based on any flavor of the Linux operating system.
VMkernel requires an operating system to boot and manage the kernel; a service console is provided when the VMware kernel is booted.
VMkernel is the virtualization interface between a virtual machine and the ESXi host which stores the VMs.
It is responsible for allocating the available resources of the ESXi host, such as memory, CPU and storage, to the VMs.
It also controls special services such as vMotion, Fault Tolerance, NFS, traffic management and iSCSI.
To access these services, a VMkernel port can be configured on the ESXi server using a standard or distributed vSwitch. Without VMkernel, hosted VMs cannot communicate with the ESXi server.
Hypervisor is a virtualization layer that enables multiple operating systems to share a single
hardware host.
A network of VMs running on a physical server that are connected logically with each other
is called virtual networking.
vSS stands for Virtual Standard Switch and is responsible for the communication of VMs hosted on a single physical host.
It works like a physical switch and automatically detects a VM that wants to communicate with another VM on the same physical server.
A VMkernel adapter provides network connectivity to the ESXi host and handles network traffic for vMotion, IP storage, NAS, Fault Tolerance, and vSAN.
For each type of traffic, such as vMotion or vSAN, a separate VMkernel adapter should be created and configured.
A datastore is a storage location where virtual machine files are stored and accessed. A datastore is based on a file system, either VMFS or NFS.
1. Thick Provisioned Lazy Zeroed: every virtual disk is created in this format by default. Physical space is allocated to the VM when the virtual disk is created. It can't be converted to a thin disk.
2. Thick Provisioned Eager Zeroed: this disk type is used with VMware Fault Tolerance. All required disk space is allocated to the VM at the time of creation. It takes more time to create a virtual disk compared to the other disk formats.
3. Thin Provisioned: provides on-demand allocation of disk space to a VM. As the data grows, the size of the disk grows. Storage capacity utilization can be up to 100% with thin provisioning.
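On an ESXi host, these formats can be requested explicitly when creating a disk with vmkfstools; this is only a sketch, and the datastore path and sizes below are assumptions:
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1_lazy.vmdk         # thick provisioned lazy zeroed
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_eager.vmdk   # thick provisioned eager zeroed (used for Fault Tolerance)
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/vm1/vm1_thin.vmdk                # thin provisioned, grows on demand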
A VMkernel port is used by ESX/ESXi for vMotion, iSCSI and NFS communications. ESXi uses VMkernel as the management network since it doesn't have a service console built in.
In this way, each build is tested continuously, allowing Development teams to get fast
feedback so that they can prevent those problems from progressing to the next stage of
Software delivery life-cycle.
This automated testing process doesn't require any manual intervention.
Continuous Testing allows any change made in the code to be tested immediately.
This avoids the problems created by having “big-bang” testing left to the end of the
development cycle such as release delays and quality issues.
In this way, Continuous Testing facilitates more frequent and good quality releases.
Regression Testing: It is the act of retesting a product around an area where a bug was
fixed.
The verify command also checks whether the given condition is true or false. Irrespective of the condition being true or false, program execution doesn't halt, i.e., any failure during verification would not stop the execution, and all the test steps would be executed.
Summary
DevOps refers to a wide range of tools, processes and practices used by companies to improve their build, deployment, testing and release life cycles.
In order to ace a DevOps interview you need to have a deep understanding of all
of these tools and processes.
Most of the technologies and processes used to implement DevOps are not isolated. Most probably you are already familiar with many of them; all you have to do is prepare for them from a DevOps perspective.
In this guide I have created the largest set of interview questions. Each section in
this guide caters to a specific area of DevOps.