Analysis of NetFlow [v5] in Real Time
Piotr Perzyna
March 2016
SGGW
What is NetFlow?
1. NetFlow is a feature introduced on Cisco routers that provides the ability to collect IP network traffic as it enters or exits an interface.
2. NetFlow has several versions, from 1 to 10, but only v5 and v9 are in common use.
3. The idea was that the first packet of a flow would create a NetFlow switching record. This record would then be used for all later packets of the same flow, until the flow expired. Only the first packet of a flow would require a lookup in the routing table to find the most specific matching route (see the sketch below).
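A minimal sketch (not from the slides) of the flow-cache behaviour described in point 3: the first packet of a 5-tuple creates a record, every later packet of the same flow only updates its counters, and the record is exported once the flow goes idle. All names and the timeout below are illustrative.

import time

# Hypothetical flow cache keyed by the 5-tuple (src, dst, proto, sport, dport).
# Illustrates the behaviour described above, not Cisco's actual implementation.
FLOW_TIMEOUT = 30  # seconds of inactivity before a flow is expired and exported

flow_cache = {}

def handle_packet(src, dst, proto, sport, dport, length):
    key = (src, dst, proto, sport, dport)
    now = time.time()
    rec = flow_cache.get(key)
    if rec is None:
        # First packet of the flow: create the record
        # (this is where the route-table lookup would happen).
        rec = {"first": now, "last": now, "packets": 0, "bytes": 0}
        flow_cache[key] = rec
    # Every packet of the flow just updates the counters.
    rec["last"] = now
    rec["packets"] += 1
    rec["bytes"] += length

def expire_flows():
    """Export and remove flows that have been idle longer than FLOW_TIMEOUT."""
    now = time.time()
    for key, rec in list(flow_cache.items()):
        if now - rec["last"] > FLOW_TIMEOUT:
            print("export flow", key, rec)  # a real exporter would emit a NetFlow record here
            del flow_cache[key]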
Where is NetFlow?
NetFlow v5 content
Bytes   Contents    Description
0-3     srcaddr     Source IP address
4-7     dstaddr     Destination IP address
8-11    nexthop     IP address of next hop router
12-13   input       SNMP index of input interface
14-15   output      SNMP index of output interface
16-19   dPkts       Packets in the flow
20-23   dOctets     Total number of Layer 3 bytes in the packets of the flow
24-27   first       SysUptime at start of flow
28-31   last        SysUptime at the time the last packet of the flow was received
32-33   srcport     TCP/UDP source port number or equivalent
34-35   dstport     TCP/UDP destination port number or equivalent
36      pad1        Unused (zero) bytes
37      tcp_flags   Cumulative OR of TCP flags
38      prot        IP protocol type
39      tos         IP type of service (ToS)
40-41   src_as      Autonomous system number of the source, either origin or peer
42-43   dst_as      Autonomous system number of the destination, either origin or peer
44      src_mask    Source address prefix mask bits
45      dst_mask    Destination address prefix mask bits
46-47   pad2        Unused (zero) bytes
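As a sanity check of this layout, one 48-byte v5 record can be unpacked with Python's struct module. This is an illustrative sketch, not part of the original slides; note that a v5 export datagram starts with a 24-byte header followed by up to 30 such records.

import socket
import struct

# One NetFlow v5 flow record is 48 bytes; field order and widths follow the table above.
V5_RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # network byte order

def _to_ip(n):
    """Convert a 32-bit integer to dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("!I", n))

def parse_v5_record(data):
    """Parse a single 48-byte NetFlow v5 record into a dict."""
    (srcaddr, dstaddr, nexthop, input_if, output_if,
     dpkts, doctets, first, last, srcport, dstport,
     _pad1, tcp_flags, prot, tos, src_as, dst_as,
     src_mask, dst_mask, _pad2) = V5_RECORD.unpack(data)
    return {
        "srcaddr": _to_ip(srcaddr), "dstaddr": _to_ip(dstaddr), "nexthop": _to_ip(nexthop),
        "input": input_if, "output": output_if,
        "dPkts": dpkts, "dOctets": doctets,
        "first": first, "last": last,
        "srcport": srcport, "dstport": dstport,
        "tcp_flags": tcp_flags, "prot": prot, "tos": tos,
        "src_as": src_as, "dst_as": dst_as,
        "src_mask": src_mask, "dst_mask": dst_mask,
    }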
Mikrotik as NetFlow Exporter?!
1. SIA Mikrotīkls, known as MikroTik, is a Latvian manufacturer of computer hardware.
2. Its main product is a Linux-based operating system known as MikroTik RouterOS.
3. It lets you turn any PC (including MIPS and PowerPC machines) into a fully functional router.
4. Remote administration is done using the WinBox program.
Logstash as NetFlow Collector?!
Process Any Data, From Any Source
1. Centralize data processing of all types
2. Normalize varying schema and formats
3. Quickly extend to custom log formats
4. Easily add plugins for custom data sources
The recent Logstash 2.2 release is powered by a new and improved, next-generation pipeline
backbone, enables dynamic watermarking for JDBC input queries, supports compressed HTTP input
requests, and is compatible with the latest versions of Elasticsearch and Beats.
ElasticSearch as storage?!
1. JSON document database
2. Real-Time Data
3. High Availability
4. Full-Text Search
5. RESTful API
6. Massively Distributed
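To illustrate the RESTful API point, stored flows can be queried straight from the index, for example the top source addresses by bytes. This sketch assumes the field names produced by the logstash-codec-netflow codec (netflow.ipv4_src_addr, netflow.in_bytes) and the default Logstash index template (a not_analyzed .raw sub-field for strings in Elasticsearch 2.x); check your own mapping before relying on it.

import json
import urllib.request

# Top source addresses by traffic volume, aggregated over all logstash-* indices.
query = {
    "size": 0,
    "aggs": {
        "top_sources": {
            "terms": {"field": "netflow.ipv4_src_addr.raw", "size": 10},
            "aggs": {"bytes": {"sum": {"field": "netflow.in_bytes"}}},
        }
    },
}

req = urllib.request.Request(
    "http://localhost:9200/logstash-*/_search",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

for bucket in result["aggregations"]["top_sources"]["buckets"]:
    print(bucket["key"], int(bucket["bytes"]["value"]), "bytes")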
Kibana as Analyzer?!
Open Source
Easy Setup
Integration with Elasticsearch
Data visualization platform
GeoIP
Easy to Share
Simple Data Export
Data from Many Sources
Simple laboratory
Our exercise is to create the area highlighted in red.
The configuration in this presentation is intended only for the exercise and to illustrate the setup diagram.
Production use is inadvisable for security reasons.
CookBook logstash?!
1. Create new directories
# mkdir /opt/logstash
# mkdir /opt/logstash/config
2. Download logstash
# wget https://download.elastic.co/logstash/logstash/logstash-2.2.2.tar.gz
3. Unpack logstash to /opt/logstash
# tar -zxvf logstash-2.2.2.tar.gz
4. Download the NetFlow field definitions
# wget https://raw.githubusercontent.com/logstash-plugins/logstash-codec-netflow/master/lib/logstash/codecs/netflow/netflow.yaml
# mv netflow.yaml /opt/logstash/config/netflow.yml
CookBook logstash?!
5. Create configuration /opt/logstash/config/mikrotik.yml
input {
  udp {
    port => 9995
    codec => netflow {
      definitions => "/opt/logstash/config/netflow.yml"
      versions => [5]
    }
  }
}
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM}"
    hosts => "localhost:9200"
  }
}
6. Run
# screen -dmS logstash /opt/logstash/bin/logstash -f /opt/logstash/config/mikrotik.yml
7. Tell the presenter your IP address; NetFlow will start flooding your server :)
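If no router is available yet, a single hand-built NetFlow v5 datagram is enough to test the pipeline end to end. A sketch (not from the slides): a 24-byte v5 header plus one 48-byte record with made-up addresses and counters, sent to the UDP port configured above.

import socket
import struct
import time

COLLECTOR = ("127.0.0.1", 9995)  # host:port where the Logstash udp input listens

def ip(addr):
    return struct.unpack("!I", socket.inet_aton(addr))[0]

uptime_ms = 60000
now = int(time.time())

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes).
header = struct.pack("!HHIIIIBBH", 5, 1, uptime_ms, now, 0, 1, 0, 0, 0)

# One 48-byte flow record matching the v5 layout (made-up test values).
record = struct.pack(
    "!IIIHHIIIIHHBBBBHHBBH",
    ip("10.0.0.1"), ip("10.0.0.2"), ip("0.0.0.0"),  # srcaddr, dstaddr, nexthop
    1, 2,                                           # input, output SNMP indexes
    10, 1500,                                       # dPkts, dOctets
    uptime_ms - 5000, uptime_ms,                    # first, last (SysUptime in ms)
    12345, 80,                                      # srcport, dstport
    0, 0x18, 6, 0,                                  # pad1, tcp_flags, prot (TCP), tos
    0, 0, 24, 24, 0,                                # src_as, dst_as, src_mask, dst_mask, pad2
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(header + record, COLLECTOR)
print("sent 1 NetFlow v5 record to", COLLECTOR)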
CookBook elasticsearch?!
1. Create new directory /opt/elastic
# mkdir /opt/elastic
2. Download Elasticsearch
# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.2.0/elasticsearch-2.2.0.tar.gz
3. Unpack to /opt/elastic
# tar -zxvf elasticsearch-2.2.0.tar.gz
4. Change /opt/elastic/config/elasticsearch.yml
✓ path.data: /opt/elastic/data
✓ path.logs: /var/log/elastic
✓ network.host: 0.0.0.0
✓ http.port: 9200
5. Run Elasticsearch
# screen -dmS elastic /opt/elastic/bin/elasticsearch -Des.insecure.allow.root=true
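A quick way to confirm that Elasticsearch answers on port 9200 and that Logstash is creating the monthly logstash-YYYY.MM index once flows arrive (a standard-library-only sketch; adjust the host if Elasticsearch is remote):

import urllib.request

# Cluster banner: confirms Elasticsearch answers on port 9200.
print(urllib.request.urlopen("http://localhost:9200/").read().decode())

# List indices: a logstash-YYYY.MM index should appear once flows are indexed.
print(urllib.request.urlopen("http://localhost:9200/_cat/indices?v").read().decode())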
CookBook kibana?!
1. Create new directory /opt/kibana
# mkdir /opt/kibana
2. Download kibana
# wget https://download.elastic.co/kibana/kibana/kibana-4.4.1-linux-x64.tar.gz
3. Unpack kibana to /opt/kibana
# tar -zxvf kibana-4.4.1-linux-x64.tar.gz
# mv kibana-4.4.1-linux-x64/* /opt/kibana/
4. Change /opt/kibana/config/kibana.yml
✓ server.port: 5601
✓ server.host: "0.0.0.0"
✓ elasticsearch.url: "http://localhost:9200"
✓ kibana.index: ".kibana"
5. Run kibana
# screen -dmS kibana /opt/kibana/bin/kibana
Log in via a browser at:
http://xxx.xxx.xxx.xxx:5601
and create a fantastic dashboard!
poweroff
Thank you for watching!
