Part 2: http://blog.sysadmin.live/2015/11/process-netflow-with-nprobe-and_13.html
Part 3: http://blog.sysadmin.live/2015/11/process-netflow-with-nprobe-and_91.html
Map User Location within ELK stack
Install Sense on Kibana
Before we create GeoIP fields in Elasticsearch (ES), let's install Sense on Kibana so that we have a great UI for interacting with Elasticsearch instead of using curl.
Open a Command Prompt and go to
Run
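The exact path and command are not reproduced above. For Kibana 4.x, Sense was typically installed with Kibana's bundled plugin tool; a sketch, assuming Kibana is installed at C:\kibana (adjust the path to your environment):

```shell
cd C:\kibana\bin
kibana plugin --install elastic/sense
```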
Restart Kibana service and open Kibana.
[Screenshot: Sense UI]
Create an index template for NetFlow that sets geoip.location to geo_point type
Before configuring Logstash (LS) to parse and send GeoIP fields to Elasticsearch, we need to create an index template that defines the data types Elasticsearch should assign to incoming fields. If we stuck with the default logstash- index naming, we would not have to worry about creating a template, since one already exists. However, it is likely that we will create more indices with different names in the future, so it is worth getting to know index templates. Let's look at the default logstash template by sending the following request in Sense.
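The request itself is not shown in the original post; on Elasticsearch 1.x/2.x, listing templates is done with the `_template` endpoint:

```
GET _template
```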
This will list all templates in ES
[Screenshot: Default logstash template]
With this template in place, any future index whose name starts with logstash- will use it.
"dynamic": true,
"type": "object",
"properties": {
"location": {
"type": "geo_point"
}
}
}
The geoip block tells ES to map the geoip.location field as the geo_point type, which stores longitude and latitude.
Since each flow has IPV4_SRC_ADDR and IPV4_DST_ADDR, we will need two geoip fields; we can name them src_geoip and dst_geoip. To create a template for NetFlow, open the template file and copy its content into Sense.
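The template file itself is not reproduced here, but a minimal sketch of what such a request could look like in Sense, assuming daily indices named netflow-YYYY.MM.dd (hence the netflow-* pattern) and ES 2.x template syntax:

```
PUT _template/netflow
{
  "template": "netflow-*",
  "mappings": {
    "_default_": {
      "properties": {
        "src_geoip": {
          "dynamic": true,
          "type": "object",
          "properties": {
            "location": { "type": "geo_point" }
          }
        },
        "dst_geoip": {
          "dynamic": true,
          "type": "object",
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}
```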
Send the request in Sense and we should get
[Screenshot: netflow index template was created]
[Screenshot: Verify netflow index template]
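To verify, you can fetch just the netflow template by name:

```
GET _template/netflow
```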
Configure Logstash to generate GeoIP data from IP fields
Now that the ES template is ready, we can configure LS to use the geoip filter plugin to generate GeoIP data. geoip is a plugin bundled with LS, so we just need to change the LS config file. Open
[...]
  ## Create Geo info based on IP ##
  # NetFlow source IP
  geoip {
    source => "IPV4_SRC_ADDR"
    target => "src_geoip"
    fields => ["country_code2", "country_name", "continent_code", "region_name", "real_region_name", "city_name", "postal_code", "timezone", "location"]
  }
  # NetFlow destination IP
  geoip {
    source => "IPV4_DST_ADDR"
    target => "dst_geoip"
    fields => ["country_code2", "country_name", "continent_code", "region_name", "real_region_name", "city_name", "postal_code", "timezone", "location"]
  }
  ## End - Create Geo info based on IP ##
[...]
}
Restart the LS service and go back to Sense. Since the current netflow index was created before the template existed, we need to delete it so that ES creates a new index using the configured template. Issue this request in Sense
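Judging by the index name in the screenshot, the delete request would look like this (substitute the name of your own current daily index):

```
DELETE netflow-2015.11.22
```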
[Screenshot: netflow-2015.11.22 was deleted successfully]
For older netflow indices that were created without the template, you can:
- Re-index the data
- Close the old indices
- Delete the old indices
[Screenshot: geoip fields with correct types in Kibana]
Create tile maps in Kibana
With the GeoIP data in ES, we can now create a tile map in Kibana to plot locations. Go to Kibana's Visualize tab.
- Choose "Tile map"
- Select "From a new search"
- Configure the metrics and bucket settings
- Save the map for SRC_IP by IN_BYTES
- Create another map for DST_IP
- Add the new visualizations to the NetFlow Dashboard
The great thing about the tile map is that we can draw a filter area directly on the map, and then see the traffic for that specific area.
Bonus: For your convenience, I have exported my Kibana objects, which include Searches, Visualizations, and Dashboards, into a JSON file. You can simply import it into Kibana and save some time.
Kibana JSON file
Summary
Getting NetFlow data into the ELK stack is just a start. You need to establish a baseline for your network traffic and identify anomalies. For example, the tile map can give you an idea of the sources of a DDoS attack, and with the THROUGHPUT OVER TIME visualization you can get an email alert when throughput exceeds a certain threshold. That can be done with ElastAlert (https://github.com/yelp/elastalert).
Later I am going to write about how to get real-time IIS logs into the ELK stack with nxlog. Have a great Thanksgiving and stay tuned for my next post.