Splunk Project
Problem Statement:
You work as a Splunk administrator at GrapeVine Pvt. Ltd. You have been asked to plan a Splunk
deployment and deploy it on AWS, upload the system log data from the forwarder nodes, and
analyze it accordingly.
Assumptions:
1. Nodes:
a. One Cluster Master Node: Install Splunk Enterprise
b. Two Indexer Cluster Nodes: Install Splunk Enterprise
c. Two Forwarder Nodes: Install the Splunk Universal Forwarder
2. Indexes:
a. Two indexes: Demo_Index_1, Demo_Index_2
3. Location of Logs: /var/log/syslog
Tasks to be completed:
1. Launch 5 RHEL instances on AWS.
2. Install Splunk Enterprise on 3 of them and the Splunk Universal Forwarder on the other 2.
3. Name them accordingly:
a. Master Node, Indexer Node 1 & Indexer Node 2 (Splunk Enterprise)
b. Forwarder Node 1 & Forwarder Node 2 (Splunk Universal Forwarder)
4. Enable index clustering on the Master Node and verify that the indexer nodes are connected to
the master node (see the clustering sketch after this list).
5. Enable forwarder management on the Master Node and verify that the forwarder nodes are
connected to the master node (see the deployment-client sketch after this list).
6. Create a new app on the Master Node called "Sending" with an inputs.conf that tells the universal
forwarders to collect the syslogs of the RHEL instances (location: /var/log/syslog, per the
assumptions) and an outputs.conf that tells the forwarders where to send the monitored data
(see the example configs after this list).
*NOTE* Forwarder Node 1 should send data to Indexer Node 1 & Forwarder Node 2 should send
data to Indexer Node 2.
7. Map the newly created app to the two forwarder nodes using a server class called Linux (see the
serverclass.conf sketch after this list).
8. Create a new role on the Master Node called "dev" that inherits capabilities from both the power
and user roles and has access to all indexes (internal & non-internal); see the authorize.conf
sketch after this list.
9. Create two new users, dev1 & dev2, and make sure they have the role "dev" associated with
them.
10. Restart Splunk on all the relevant instances to apply all the changes made.
11. Verify the final inputs.conf & outputs.conf on the forwarders using btool (see the btool
commands after this list).
12. Verify that the specified data is being monitored and indexed.
13. Once you are done with indexing, disable the app using the CLI (see the CLI example after this
list).
14. Input the system log data of the Splunk master node into the main index using the web console.
*NOTE* You should be collecting data from 3 sources: syslogs from Forwarder Nodes 1 & 2, and
syslogs from the master node locally.
15. Now that you have collected all the necessary data, you can start analyzing it.
16. Run a transforming command on all 3 collected data sources to get a count of all process IDs
(pid) on the respective nodes (see the SPL sketch after this list).
17. Save the search as a visualization (pie chart) & a table as a report for all three sources, and edit
the permissions of the reports so they are accessible to all.
18. Schedule the report to run every Monday morning at 7 AM.
19. Create a new dashboard and add the report as a panel in it.
20. Create an alert that notifies you through email when the count of a specific pid rises above its
current count.
21. Create a workflow action for the event menu that Googles "RHEL Process id $pid$" (see the
workflow_actions.conf sketch after this list).
22. Finally, perform a health check on Splunk and create a diag (see the commands after this list).
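Example commands & configs:
The sketches below are illustrative, not definitive: paths assume the default install locations
/opt/splunk (Splunk Enterprise) and /opt/splunkforwarder (Universal Forwarder), and anything in
angle brackets (IPs, secrets, passwords) is a placeholder.

For task 4, a minimal sketch of enabling index clustering from the CLI, assuming a replication
factor of 2 across the two indexers:

    # On the Master Node: make this instance the cluster master
    /opt/splunk/bin/splunk edit cluster-config -mode master -replication_factor 2 -search_factor 2 -secret <cluster_secret>
    /opt/splunk/bin/splunk restart

    # On Indexer Node 1 & Indexer Node 2: join the cluster as peers
    /opt/splunk/bin/splunk edit cluster-config -mode slave -master_uri https://<master-ip>:8089 -replication_port 9100 -secret <cluster_secret>
    /opt/splunk/bin/splunk restart

    # On the Master Node: verify both peers show up
    /opt/splunk/bin/splunk show cluster-status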
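For task 5, the forwarders can point at the Master Node as their deployment server; a minimal
sketch, assuming the default management port 8089:

    # On each Forwarder Node: register with the deployment server
    /opt/splunkforwarder/bin/splunk set deploy-poll <master-ip>:8089
    /opt/splunkforwarder/bin/splunk restart

Once the forwarders phone home, they appear under Settings > Forwarder Management on the
Master Node.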
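For task 6, a sketch of the app placed under /opt/splunk/etc/deployment-apps/Sending/local/ on
the Master Node. Because Forwarder Node 1 must send to Indexer Node 1 and Forwarder Node 2
to Indexer Node 2, one option is two copies of the app differing only in outputs.conf; the IPs and
the index choice below are assumptions:

    # inputs.conf -- monitor the syslog file into Demo_Index_1
    [monitor:///var/log/syslog]
    disabled = false
    index = Demo_Index_1
    sourcetype = syslog

    # outputs.conf -- send all monitored data to Indexer Node 1
    [tcpout]
    defaultGroup = indexer1

    [tcpout:indexer1]
    server = <indexer1-ip>:9997

Each indexer must also be listening for forwarded data, e.g.:

    /opt/splunk/bin/splunk enable listen 9997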
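For task 7, the mapping can be done in Forwarder Management, which writes roughly the following
serverclass.conf on the Master Node (the whitelist pattern is an assumption):

    [serverClass:Linux]
    whitelist.0 = *

    [serverClass:Linux:app:Sending]
    restartSplunkd = true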
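For tasks 8 & 9, a sketch of the role in authorize.conf plus user creation from the CLI; the
passwords are placeholders:

    # authorize.conf -- "dev" inherits power and user, sees all indexes
    [role_dev]
    importRoles = power;user
    srchIndexesAllowed = *;_*
    srchIndexesDefault = main

    # CLI -- create the two users with the dev role
    /opt/splunk/bin/splunk add user dev1 -password <password> -role dev -auth admin:<admin_password>
    /opt/splunk/bin/splunk add user dev2 -password <password> -role dev -auth admin:<admin_password>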
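For task 11, btool prints the merged configuration on each forwarder along with the file each
setting came from:

    /opt/splunkforwarder/bin/splunk btool inputs list --debug
    /opt/splunkforwarder/bin/splunk btool outputs list --debug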
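For task 13, the app can be disabled from the CLI on each forwarder (note that the deployment
server may redeploy it unless the server class is updated as well):

    /opt/splunkforwarder/bin/splunk disable app Sending -auth admin:<admin_password>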
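For task 16, a sketch of a transforming search; the rex pattern assumes the common syslog
"process[pid]:" format, so adjust it to your actual events:

    index=Demo_Index_1 OR index=Demo_Index_2 OR index=main sourcetype=syslog
    | rex "\[(?<pid>\d+)\]"
    | stats count by host, pid

Here stats is the transforming command; saving this search with a pie-chart visualization and a
table covers task 17.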
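For task 21, the workflow action is created under Settings > Fields > Workflow actions, which
corresponds roughly to this workflow_actions.conf sketch (the stanza name is an assumption):

    [google_pid]
    display_location = event_menu
    label = Google RHEL Process id $pid$
    fields = pid
    type = link
    link.method = get
    link.uri = https://www.google.com/search?q=RHEL+Process+id+$pid$
    link.target = blank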
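For task 22, the health check is available in Splunk Web (the health report / Monitoring Console),
and a diag archive is created from the CLI on the Master Node:

    /opt/splunk/bin/splunk diag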