NodeJS and Microservices - QB
Kubernetes supports three types of probes for checking container health. They are:
1. Startup Probe
2. Liveness Probe
3. Readiness Probe
Startup Probe: checks whether the application inside the container has finished starting; until it succeeds, the liveness and readiness probes are disabled.
Liveness Probe: checks whether the container is still healthy; if it fails, the kubelet kills the container and restarts it according to the restart policy.
Readiness Probe: checks whether the container is ready to serve traffic. When the readiness probe fails, the pod’s IP is removed from the endpoint list of the service.
There are three types of actions (check mechanisms) the kubelet can perform on a container, which are:
1. exec: run a command inside the container; an exit code of 0 counts as success.
2. httpGet: send an HTTP GET request; any 2xx or 3xx response counts as success.
3. tcpSocket: attempt to open a TCP connection to the container on a given port.
Configure Probes
Probes have several fields that you can use to more precisely control the behavior of liveness and readiness checks: initialDelaySeconds, periodSeconds, timeoutSeconds, successThreshold, and failureThreshold.
Nginx deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-webserver
  labels:
    app: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
HTTP GET probes have additional fields that can be set to fine-tune the health checks: host, scheme, path, port, and httpHeaders.
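For instance, a liveness probe exercising these httpGet fields might look like this (the path, port, and header values are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    scheme: HTTP
    httpHeaders:
    - name: X-Health-Check
      value: liveness
  initialDelaySeconds: 5
  periodSeconds: 3
```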
2. Depict Docker architecture from the client's perspective, illustrating how
users interact with Docker components and their role in the container
lifecycle.
Answer:
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing the Docker containers. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
There are five major components in the Docker architecture:
a) Docker Daemon listens to Docker API requests and manages Docker objects such as images, containers, networks
and volumes.
b) Docker Client: With the help of the Docker client, users can interact with Docker. The Docker client provides a command-line interface (CLI) that allows users to send commands such as run and stop to a Docker daemon.
c) Docker Host provides a complete environment to execute and run applications. It comprises the Docker daemon, Images, Containers, Networks, and Storage.
d) Docker Registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to use images from Docker Hub by default. You can also run your own private registry.
e) Docker Images are read-only templates that you build from a set of instructions written in a Dockerfile. Images define what you want your packaged application and its dependencies to look like, as well as what processes to run when it’s launched.
A container passes through different stages, known as the lifecycle of a container: created, running, paused, stopped, and deleted.
• The first phase is the created state. The container then moves into the running state when we use the docker run command.
• We can stop or pause the container using the docker stop/pause commands. To put a container back from a stopped state to a running state, we use the docker start command.
• We can delete a running or stopped container using the docker rm command.
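The lifecycle above can be walked through with the Docker CLI; a sketch (the image and container names are illustrative):

```shell
docker create --name web nginx   # container is in the created state
docker start web                 # created -> running
docker pause web                 # running -> paused
docker unpause web               # paused -> running
docker stop web                  # running -> stopped
docker start web                 # stopped -> running again
docker rm -f web                 # -f force-removes even a running container
```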
Answer :
Node.js is a JavaScript-based platform that is mainly used to create I/O-intensive web applications such as
chat apps, multimedia streaming sites, etc. It is built on Google Chrome’s V8 JavaScript engine. A web
application is software that runs on a server and is rendered by a client browser that accesses all of the
application’s resources through the internet.
A typical web application consists of the following components:
• Client: A client refers to the user who interacts with the server by sending out requests.
• Server: The server is in charge of receiving client requests, performing appropriate tasks, and
returning results to the clients. It serves as a bridge between the front-end and the stored data,
allowing clients to perform operations on the data.
• Database: A database is where a web application’s data is stored. Depending on the client’s
request, the data can be created, modified, and deleted.
Node.js Server Architecture: To manage several concurrent clients, Node.js employs a “Single Threaded
Event Loop” design. The JavaScript event-based model and the JavaScript callback mechanism are
employed in the Node.js Processing Model. It employs two fundamental concepts:
1. Asynchronous model
2. Non-blocking of I/O operations
These features enhance the scalability, performance, and throughput of Node.js web applications.
Components of the Node.js Architecture:
• Requests: Depending on the actions that a user needs to perform, the requests to the server can
be either blocking (complex) or non-blocking (simple).
• Node.js Server: The Node.js server accepts user requests, processes them, and returns results to
the users.
• Event Queue: The main use of Event Queue is to store the incoming client requests and pass
them sequentially to the Event Loop.
• Thread Pool: The Thread pool in a Node.js server contains the threads that are available for
performing operations required to process requests.
• Event Loop: Event Loop receives requests from the Event Queue and sends out the responses
to the clients.
• External Resources: In order to handle blocking client requests, external resources are used. They can be of any type (computation, storage, etc.).
Workflow of Node.js Server:
• Users send requests (blocking or non-blocking) to the server for performing operations.
• The requests enter the Event Queue first at the server-side.
• The Event queue passes the requests sequentially to the event loop. The event loop checks the
nature of the request (blocking or non-blocking).
• Event Loop processes the non-blocking requests which do not require external resources and returns the responses to the corresponding clients.
• For blocking requests, a single thread is assigned to the process for completing the task by using
external resources.
• After the completion of the operation, the request is redirected to the Event Loop which delivers
the response back to the client.
Advantages:
• The Node.js server can efficiently handle a high number of requests by employing the use of
Event Queue and Thread Pool.
• There is no need to establish multiple threads because Event Loop processes all requests one at
a time, therefore a single thread is sufficient.
• The entire process of serving requests to a Node.js server consumes less memory and server
resources since the requests are handled one at a time.
Part-B (14 Marks):
4. Explain the Event loop in Node.js and clarify its operational mechanism.
Answer :
Example Program:
console.log("This is the first statement");
setTimeout(function(){
console.log("This is the second statement");
}, 1000);
console.log("This is the third statement");
Output:
This is the first statement
This is the third statement
This is the second statement
Explanation: In the above example, the first console log statement is pushed to the call stack,
and “This is the first statement” is logged on the console, and the task is popped from the
stack. Next, the setTimeout call is pushed to the call stack, the timer task is handed to the operating system, and the timer is set for the task. The setTimeout call is then popped from the stack. Next, the third
console log statement is pushed to the call stack, and “This is the third statement” is logged on
the console and the task is popped from the stack.
When the timer set by the setTimeout function (in this case 1000 ms) runs out, the callback is
sent to the event queue. The event loop on finding the call stack empty takes the task at the top
of the event queue and sends it to the call stack. The callback function for the setTimeout
function runs the instruction and “This is the second statement” is logged on the console and
the task is popped from the stack.
1. Timers: This phase processes timers that have been set using setTimeout() and setInterval().
2. Pending Callbacks: This phase processes I/O callbacks that were deferred to the next loop iteration, such as certain TCP error callbacks.
3. Idle, Prepare: These phases are used only internally by Node.js. Application code cannot schedule callbacks in them directly; the event loop uses them for its own housekeeping between iterations.
4. Poll: This phase is used to check for new I/O events and process any that have been detected.
5. Check: This phase processes any setImmediate() callbacks that have been added to the message queue.
6. Close Callbacks: This phase processes any callbacks that have been added
to the message queue by the close event of a socket. This means that any
code that needs to be executed when a socket is closed is placed in the
message queue and processed during this phase.
Answer :
Callbacks are no different. Just like a string or number, you can pass a function (known as a callback) as an argument. When you do so, you are passing the entire function, not just what the function outputs. This is because functions are first-class data types. That means they can be:
1. Passed as an argument to another function
2. Returned from a function
3. Assigned to a variable
So, with that out of the way, let’s get to the nitty gritty. First, why use a
callback? The best explanation I’ve heard:
For me, the description of the what and why of callbacks is a bit useless without
some example to show where they might be useful. Before I get to a legit
example, let me show you a bare-bones breakdown of setting up a callback.
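The snippet the next paragraph refers to is not included in this extract; here is a minimal reconstruction consistent with its description (the function and variable names are assumptions):

```javascript
// The callback: takes a name and returns a greeting string.
function greeting(name) {
  return "Hello " + name;
}

// A function that receives another function -- the callback -- as an argument.
function processUserInput(callback) {
  const name = "Ada";          // stands in for real user input
  return callback(name);
}

console.log(processUserInput(greeting)); // logs "Hello Ada"
```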
In a typical bare-bones setup, the first function defined is the callback. Its sole purpose is to take in an argument for the parameter called name and return “Hello name“.
“What is the point of all this?” you may ask. “I could do all this with a single function”,
you say. And it’s true. The scenario above would probably be better served by a single
function. I only used this to show you a simple example of how callbacks work. To
understand the real beauty of callbacks, we can use a more real world example. Let’s
assume you want to write some code (perhaps a calculator app) that performs
calculations on two numbers.
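The calculator code described below is likewise missing from this extract; a sketch consistent with the description (the function names are assumptions):

```javascript
// Separate functions, one per type of calculation.
function add(a, b) { return a + b; }
function subtract(a, b) { return a - b; }

// Two numbers plus a callback that decides which calculation runs.
function performCalculation(num1, num2, calculationType) {
  return calculationType(num1, num2);
}

console.log(performCalculation(5, 3, add));      // 8
console.log(performCalculation(5, 3, subtract)); // 2
```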
Rather than having a single function that contains all the possible types of equations one could
run (which would require lots of if/else statements), you can have separate functions for each
type of calculation. Then, you can pass that calculation type as a callback to a function that
performs the calculation. That is precisely the pattern: the function performCalculation accepts three arguments: two numbers and a callback (referred to here as calculationType) that determines what type of calculation happens to those numbers.
Answer :
1. Troubleshooting
Dockerized applications can be complex, comprising multiple containers, each with its
own dependencies. When things go wrong, it's essential to quickly identify and rectify
the issues. Logging and monitoring provide the visibility required to pinpoint
problems, whether it's a failing container, network issues, or resource constraints.
2. Performance Optimization
To keep your applications running efficiently, you need insights into resource
utilization, response times, and other performance metrics. Monitoring tools can help
you fine-tune your Docker environment, ensuring that resources are allocated
effectively and that your applications are performing at their best.
3. Scalability
Docker's lightweight and portable nature makes it an excellent choice for scaling applications. However, managing the scaling process effectively requires careful monitoring to prevent resource bottlenecks and optimize container placement.
4. Security
Logs and monitoring data also help you detect suspicious activity, unauthorized access attempts, and misbehaving containers before they turn into serious incidents.
Docker Logging
Logging in a Docker environment involves capturing and managing the output of
containerized applications, making it accessible for analysis and troubleshooting.
Docker provides several ways to collect logs from your containers, and there are also
third-party solutions available. Let's explore some of the key options for logging in a
Docker environment.
1. The docker logs Command
Docker itself provides the ability to view container logs using the docker logs command. You can retrieve logs for a specific container, making this a straightforward method for inspecting logs on a per-container basis. However, it may not be suitable for large-scale or automated log collection and analysis.
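For example, a few common invocations of docker logs (the container name is illustrative):

```shell
docker logs web                # print everything the container has logged
docker logs -f web             # follow the log stream, like tail -f
docker logs --tail 100 web     # only the last 100 lines
docker logs --since 10m web    # only entries from the last 10 minutes
```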
2. Logging Drivers
Docker supports various logging drivers that allow you to configure where container logs are sent. These include the JSON File driver, the Syslog driver, the Fluentd driver, and the GELF driver, among others. By selecting an appropriate logging driver, you can send logs to different destinations such as files, remote Syslog servers, or centralized log management systems.
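As a sketch, a driver can be selected per container at run time (the syslog server address is illustrative):

```shell
# Send this container's logs to a remote syslog server instead of local JSON files
docker run --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  nginx
```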
3. Fluentd
Fluentd is an open-source data collector that can aggregate container logs and forward them to many different backends. Docker ships with a native Fluentd logging driver, which makes the integration straightforward.
4. ELK Stack
Elasticsearch, Logstash, and Kibana, collectively known as the ELK stack, are popular
tools for log aggregation and analysis. You can use Elasticsearch to store log data,
Logstash to process and enrich the logs, and Kibana to create visualizations and
dashboards. This stack is highly extensible and can be integrated with Docker using
various plugins and configurations.
6. Graylog
Graylog is a centralized log management platform for collecting, indexing, and analyzing log data; it can receive Docker logs via the GELF logging driver.
Best Practices for Docker Logging
1. Use a Consistent Log Format
Maintain a consistent log format across your applications. Using JSON or structured logging formats makes it easier to parse and analyze logs. Standardized logs facilitate automated processing and reduce the time required for troubleshooting.
2. Store Logs Outside the Container
Avoid storing logs within the container itself. Instead, use a centralized logging solution to store and manage logs. Storing logs off the container ensures that log data is preserved even if the container or host fails.
3. Set Log Rotation and Retention Policies
Define log rotation and retention policies to manage log storage efficiently. You can
configure log rotation and retention policies to automatically delete or archive old
logs. This prevents your log storage from becoming overwhelmed with outdated data.
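With the default json-file driver, rotation can be configured globally in the daemon configuration, for example in /etc/docker/daemon.json (the size and file-count values are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With these options, each container keeps at most three log files of 10 MB each; older data is rotated away automatically.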
4. Secure Your Log Data
Protect your log data by applying access controls and encryption. Unauthorized access to logs can expose sensitive information and pose security risks. Ensure that only authorized personnel can access and modify log data.
5. Add Context with Structured Logging
Use structured logging to add context to your log entries. Include important information such as application names, versions, timestamps, and request IDs. This context is invaluable for tracing issues and identifying the source of problems.
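As a sketch, a minimal structured-logging helper in Node.js (the field names and values are illustrative, not any particular library's schema):

```javascript
// Emit one JSON object per log line, carrying context for later correlation.
function logEvent(level, message, context) {
  const entry = {
    level,
    message,
    app: 'orders-service',                 // application name (assumed)
    version: '1.4.2',                      // application version (assumed)
    timestamp: new Date().toISOString(),
    ...context,                            // e.g. a request ID for tracing
  };
  console.log(JSON.stringify(entry));
  return entry;
}

logEvent('info', 'order created', { requestId: 'req-123' });
```

Because each line is self-describing JSON, tools like the ELK stack or Fluentd can parse and filter it without custom regular expressions.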
6. Monitor the Log Collection Process
Monitor the log collection process itself. If log collection fails, it may indicate underlying issues in your logging infrastructure or containers. Set up alerts to be notified of any log collection failures.
7. Correlate Logs Across Services
Collect logs from all parts of your Docker environment and correlate them to get a holistic view of your application's behavior. Correlating logs from different services and components can help you identify and troubleshoot complex issues.
8. Automate Log Analysis
Leverage log analysis tools to automatically detect anomalies and patterns in your log data. Machine learning and AI-based log analysis can help you identify issues before they impact your applications.
9. Visualize and Alert
Use visualization tools to create dashboards that provide real-time insights into your Docker environment's health. Set up alerts to notify you of critical events or unusual behavior, allowing for proactive responses to potential issues.
Docker Monitoring
Monitoring in a Docker environment goes beyond logging. While logs are crucial for
troubleshooting, monitoring provides real-time visibility into your container's
performance and resource utilization. Here are some essential aspects of monitoring in
a Docker environment:
1. Metrics Collection
Collecting metrics is the foundation of Docker monitoring. Metrics can include CPU
and memory usage, network traffic, storage consumption, and more. Docker exposes a
rich set of metrics that you can use to gain insights into your container's health.
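The quickest way to inspect these metrics is the docker stats command:

```shell
docker stats --no-stream   # one-shot snapshot of CPU, memory, network and block I/O per running container
```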
2. Resource Utilization
Monitoring resource utilization helps you ensure that your containers have enough
capacity to handle your applications' workloads. It also enables you to optimize
resource allocation, preventing over-provisioning or resource bottlenecks.
3. Application Performance
Beyond raw resource metrics, monitoring application-level indicators such as request latency, throughput, and error rates helps you verify that containerized services are meeting their performance targets.
4. Auto-Scaling
Monitoring data can drive auto-scaling decisions, adding or removing container instances in response to load so that capacity tracks demand.
6. Event Tracking
Monitoring should also capture and track significant events in your Docker
environment, such as container starts, stops, and resource allocation changes. Event
tracking provides an audit trail and helps in root cause analysis.
Docker Monitoring Tools
1. Prometheus
Prometheus is an open-source monitoring and alerting toolkit built around a time-series database. It scrapes metrics from instrumented targets and is widely used to monitor Docker and Kubernetes environments, often paired with Grafana for dashboards.
3. cAdvisor
cAdvisor (Container Advisor), originally developed at Google, collects per-container resource usage and performance statistics such as CPU, memory, network, and filesystem metrics.
4. Datadog
Datadog is a commercial SaaS monitoring platform whose Docker agent collects container metrics, logs, and traces in one place.
5. Sysdig
Sysdig is a container intelligence platform that offers Docker monitoring and security
capabilities. It provides detailed visibility into your containers, microservices, and
applications, helping you detect and respond to security threats and performance
issues.
Best Practices for Docker Monitoring
1. Define Your Objectives
Clearly define what you want to achieve with monitoring. Determine the key metrics and alerts that are critical to your applications' performance and stability.
2. Collect Relevant Metrics
Collect metrics that are relevant to your applications, including resource usage, application-specific metrics, and business-related KPIs. Avoid collecting excessive data that can lead to information overload.
3. Set Up Alerts
Configure alerts based on your defined objectives. Alerts should be actionable and not
generate noise. Consider using multiple notification channels, such as email, Slack, or
SMS, for different severity levels.
4. Implement Monitoring as Code
Define your monitoring configuration, such as dashboards, alert rules, and collection settings, in version-controlled code so it can be reviewed, reproduced, and rolled back like any other artifact.
5. Monitor the Entire Stack
Monitor not only your applications but also the entire stack, including the underlying infrastructure and the Docker host. This comprehensive view helps you detect issues at any level of your environment.
6. Review and Iterate
Regularly review your monitoring setup to ensure it remains relevant and effective. Update alerting thresholds, metrics, and dashboards as your applications evolve.