Nov 13, 2015

Process NetFlow with nProbe and Elasticsearch, Logstash, and Kibana - Part 2


Send NetFlow data to ELK, create searches, visualizations, and dashboards in Kibana

If you haven't read part 1, please visit http://blog.sysadmin.live/2015/11/process-netflow-with-nprobe-and.html

In this part, we will build a dashboard in Kibana like the one below

A view of traffic

Install and configure nProbe on win2012elk1


nProbe will listen for NetFlow data on UDP port 2055 and export it to Logstash on TCP port 5544. Make sure you either turn off Windows Firewall or allow inbound UDP port 2055 on win2012elk1.
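If you prefer to keep Windows Firewall on, a rule like the following allows the NetFlow traffic in (run from an elevated Command Prompt; the rule name is arbitrary):

netsh advfirewall firewall add rule name="nProbe NetFlow" dir=in action=allow protocol=UDP localport=2055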

1. Download the latest nProbe for Windows
http://packages.ntop.org/Windows/nProbeWin-x64-7.2.151020.zip
2. Install nProbe
Any IP and port can be entered here; the default service created by the installer will be removed in the next step
3. Open Command Prompt at C:\Program Files\nProbe and run
nprobe.exe /r nprobe
nprobe.exe /c -b 2 -V 9 --collector-port 2055 -i none -n none --json-label --tcp 127.0.0.1:5544

  • nprobe.exe /r nprobe removes the default nProbe service created by the installer
  • nprobe.exe /c: run nProbe on the console; to install nProbe as a service, we can use nprobe.exe /i instead
  • -b 2: turn on verbose mode so we can see what nProbe does
  • -V 9: the NetFlow version that nProbe will export to a collector. We can specify version 5, 9, or 10 (IPFIX). The incoming NetFlow version does not need to be configured.
  • --collector-port 2055: nProbe will listen for incoming NetFlow on UDP port 2055. We can change the port, but the protocol must be UDP
  • -i none: capture on no interface. An interface is only specified in probe mode with a mirrored port
  • -n none: nProbe does not export flows directly to a flow collector like ntopng
  • --json-label: export data in JSON format with human-readable labels. We can also use the -T option to define a template of which fields nProbe will export
  • --tcp 127.0.0.1:5544: the server IP and port that nProbe will export flows to. Since nProbe and ELK are on the same server, we can use 127.0.0.1 (a matching Logstash input is sketched after this list)
For advanced nProbe options, please visit http://www.ntop.org/support/documentation/documentation/
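As a reference point, here is a minimal sketch of the Logstash pipeline these options assume; your configuration from part 1 may differ in details such as the codec or the elasticsearch output syntax:

input {
  tcp {
    port  => 5544
    codec => json        # nProbe sends JSON flow records; json_lines also works on newer versions
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1"]            # Logstash 2.x syntax; 1.x uses host => "127.0.0.1"
    index => "netflow-%{+YYYY.MM.dd}" # daily indices matching the Kibana pattern below
  }
}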

Instead of running nProbe at a command prompt, we can install it as a Windows service
nprobe.exe /i nprobe -V 9 --collector-port 2055 -i none -n none --json-label --tcp 127.0.0.1:5544
In nprobe.exe /i nprobe, nprobe is the service name and can be anything we like. nProbe is now ready to receive NetFlow data.
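Once installed, the service can be started and checked like any other Windows service. Assuming the service name nprobe from the command above:

net start nprobe
sc query nprobe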

Simulate NetFlow data with PRTG NetFlow Generator


Download PRTG NetFlow Generator
https://www.paessler.com/download/netflowgenerator
Run NetFlow Generator and add sample data as below

For your convenience, I have exported the NetFlow Simulation. You can download and open it in NetFlow Generator
https://drive.google.com/file/d/0B7j8G8dcaRWxT2FZTzlBcnFKN2s/view?usp=sharing
Start generating flows, and we should see the first index appear in the Elasticsearch data folder
First Elasticsearch index
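We can also confirm the index from the Elasticsearch side through its REST API. A quick check, assuming Elasticsearch listens on its default port 9200 on our ELK host (run from any machine with curl, or paste the URL into a browser):

curl "http://192.168.1.50:9200/_cat/indices?v"

This should list an index such as netflow-2015.11.13 together with its document count, assuming the netflow-%{+YYYY.MM.dd} index name from the Logstash sketch above.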

Make use of NetFlow data in Kibana


Create an index pattern

After we send data into Elasticsearch, it is now time to use Kibana to see what we have in our index. In our web browser, open
http://192.168.1.50:5601
Let's start by configuring a new index pattern in Kibana
Check the "Use event times to create index names" checkbox
Change the index name to netflow instead of logstash, then click Create
Kibana will retrieve all the fields in our index
Other NetFlow fields
The Discover app is where we can search for events in our network
JSON fields
Fields can be pinned as columns, and a view/search can be saved
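As a quick example of searching in Discover, assuming nProbe's --json-label field names such as IPV4_SRC_ADDR and L4_DST_PORT (check your own field list on the index pattern page), a query like this shows HTTP flows coming from our LAN:

IPV4_SRC_ADDR:192.168.1.* AND L4_DST_PORT:80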

Create our first Kibana visualization

Let's create our first Kibana visualization, or chart. I'll leave other visualizations to your creativity. We want to see the number of flows over time

Create a line chart
Select From a new search
Add a field to the X-Axis
Select Date Histogram in Aggregation
Optional chart settings. Click Apply changes
Welcome to our first Kibana chart
Save the chart to put it on a dashboard
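Under the hood, this chart is just a date_histogram aggregation in Elasticsearch. Here is a minimal sketch of the equivalent query, assuming the Logstash-provided @timestamp field and the netflow- index name from earlier (we will meet Sense in the next part, which makes running such queries easier):

curl -XPOST "http://192.168.1.50:9200/netflow-*/_search?pretty" -d '
{
  "size": 0,
  "aggs": {
    "flows_over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "1m" }
    }
  }
}'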
Create a Kibana dashboard

Go to Dashboard app, click Add Visualization
Select the FLOWS OVER TIME chart
The chart can be resized or moved around

Summary


Voila! We have gone through the steps to get our hands on NetFlow data. Now go ahead and create as many visualizations as you can and throw them onto your dashboard. Read the Kibana documentation or just play with it to see how powerful it is at filtering our data.

In the next parts, I will write about more advanced configurations in the ELK stack, including
Logstash
  • Remove redundant fields like message, version ...
  • Use cidr plugin to match and tag our network
  • Create additional GeoIP fields based on IP
Kibana and Elasticsearch
  • Install and use Sense to work with Elasticsearch
  • Create an index template for NetFlow
  • Mapping in Elasticsearch
  • and more ...
Part 3: Customize nProbe and Logstash configuration
