This document discusses using the ELK stack (Elasticsearch, Logstash, Kibana) for attack monitoring. It provides an overview of each component, describes how to set up ELK and configure Logstash for log collection and parsing. It also demonstrates log forwarding using Logstash Forwarder, and shows how to configure alerts and dashboards in Kibana for attack monitoring. Examples are given for parsing Apache logs and syslog using Grok filters in Logstash.
2. About Us
@prajalkulkarni
-Security Analyst @flipkart.com
-Interested in webapps, mobile, loves scripting in python
-Fan of cricket! and a wannabe guitarist!
@mehimansu
-Security Analyst @flipkart.com
-CTF Player - Team SegFault
-Interested in binaries, fuzzing
4. Today’s workshop agenda
•Overview & Architecture of ELK
•Setting up & configuring ELK
•Logstash forwarder
•Alerting and attack monitoring
5. What does the VM contain?
● Extracted ELK Tar files in /opt/
● java version "1.7.0_76"
● Apache installed
● Logstash-forwarder package
7. Why ELK?
Old School
● grep/sed/awk/cut/sort
● manually analyze the output
ELK
● define endpoints (input/output)
● correlate patterns
● store data (search and visualize)
9. Overview of Elasticsearch
•Open-source search server written in Java
•Used to index any kind of heterogeneous data
•Enables near real-time search over the indexed data
•Exposes a RESTful API over HTTP with JSON output
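For instance, querying the HTTP port of a running node (9200 by default) returns its name, cluster, and version information as JSON:
$curl -XGET 'http://localhost:9200/?pretty'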
10. Overview of Logstash
•Framework for collecting and managing logs
•Created by Jordan Sissel
•Mainly consists of 3 components (a minimal example config follows this list):
● input: receives logs/events and gets them into a format Logstash can process (file, lumberjack).
● filter: a set of conditionals and plugins that act on each event (grok, geoip).
● output: where the processed event/log is sent (elasticsearch, file).
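A minimal sketch of a Logstash config showing all three sections, assuming a syslog file as input; the path, the type name, and the elasticsearch output options are placeholders and vary by Logstash version:
input {
  file { path => "/var/log/syslog" type => "syslog" }
}
filter {
  if [type] == "syslog" {
    grok { match => [ "message", "%{SYSLOGLINE}" ] }
  }
}
output {
  elasticsearch { host => "localhost" }
}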
11. Overview of Kibana
•Powerful front-end dashboard for visualizing information indexed in the Elasticsearch cluster.
•Capable of presenting historical data in the form of graphs, charts, etc.
•Enables real-time search of indexed information.
15. Edit elasticsearch.yml
$sudo nano /opt/elasticsearch/config/elasticsearch.yml
ctrl+w and search for "cluster.name"
Change the cluster name to elastic_yourname
ctrl+x, then Y to save
Now start Elasticsearch: $sudo ./elasticsearch
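After the edit, the relevant line in elasticsearch.yml should read:
cluster.name: elastic_yourname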
17. Terminologies of Elastic Search!
Cluster
● A cluster is a collection of one or more nodes (servers) that together hold your entire data and provide federated indexing and search capabilities across all nodes
● A cluster is identified by a unique name, which by default is "elasticsearch"
18. Terminologies of Elastic Search!
Node
● A node is an Elasticsearch instance (a Java process)
● A node is created when an Elasticsearch instance is started
● A random Marvel character name is allocated to it by default
19. Terminologies of Elastic Search!
Index
● An index is a collection of documents that have somewhat similar characteristics, e.g. customer data or a product catalog
● It is the scope for indexing, search, update, and delete operations against the documents in it
● One can define as many indexes as needed in a single cluster
20. Terminologies of Elastic Search!
Document
● It is the most basic unit of information that can be indexed
● It is expressed as JSON key:value pairs, e.g. '{"user":"nullcon"}'
● Every document gets associated with a type and a unique id.
21. Terminologies of Elastic Search!
Shard
● Every index can be split into multiple shards to be able to distribute data.
● The shard is the atomic part of an index, which can be distributed over the cluster if you
add more nodes.
● By default, 5 primary shards and 1 replica of each shard are created for every index
(diagram: an index split into five primary shards)
● At least 2 nodes are required for replicas to be allocated
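The per-index defaults can be overridden when an index is created; a hedged example (the index name myindex and the counts are only illustrative):
curl -XPUT 'http://localhost:9200/myindex/' -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'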
23. Plugins of Elasticsearch
head
./plugin -install mobz/elasticsearch-head
HQ
./plugin -install royrusso/elasticsearch-HQ
Bigdesk
./plugin -install lukas-vlcek/bigdesk
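Once installed, these site plugins are served by Elasticsearch itself; for example, the head UI is typically reachable at http://localhost:9200/_plugin/head/.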
24. Restful API’s over http -- !help curl
curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'
● VERB - The appropriate HTTP method or verb: GET, POST, PUT, HEAD, or DELETE.
● PROTOCOL - Either http or https (if you have an https proxy in front of Elasticsearch).
● HOST - The hostname of any node in your Elasticsearch cluster, or localhost for a node on your local machine.
● PORT - The port running the Elasticsearch HTTP service, which defaults to 9200.
● PATH - The API endpoint being called (for example /twitter/tweet/1).
● QUERY_STRING - Any optional query-string parameters (for example ?pretty will pretty-print the JSON response to make it easier to read).
● BODY - A JSON-encoded request body (if the request needs one).
25. !help curl
Simple Index Creation with XPUT:
curl -XPUT 'http://localhost:9200/twitter/'
Add data to your created index:
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{"user":"nullcon"}'
Now check the Index status:
curl -XGET 'http://localhost:9200/twitter/?pretty=true'
26. !help curl
Automatic doc creation in an index with XPOST:
curl -XPOST 'http://localhost:9200/twitter/tweet/' -d '{"user":"nullcon"}'
Creating a user profile doc:
curl -XPUT 'http://localhost:9200/twitter/tweet/9' -d '{"user":"admin", "role":"tester", "sex":"male"}'
Searching a doc in an index:
First create 2 docs:
curl -XPOST 'http://localhost:9200/twitter/tester/' -d '{"user":"abcd", "role":"tester", "sex":"male"}'
curl -XPOST 'http://localhost:9200/twitter/tester/' -d '{"user":"abcd", "role":"admin", "sex":"male"}'
curl -XGET 'http://localhost:9200/twitter/_search?q=user:abcd&pretty=true'
27. !help curl
Deleting a doc in an index:
$curl -XDELETE 'http://localhost:9200/twitter/tweet/1'
Cluster health and the significance of the colours (green/yellow/red):
$curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
Start a second node (with its own config) to turn a yellow cluster green:
$./elasticsearch -D es.config=../config/elasticsearch2.yml &
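An abridged, illustrative health response (exact values depend on your cluster; once the second node joins, unassigned_shards drops to 0 and the status turns green):
{
  "cluster_name" : "elastic_yourname",
  "status" : "yellow",
  "number_of_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "unassigned_shards" : 5
}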
29. Setting up Elasticsearch & Kibana
•Start your Elasticsearch server (default port 9200)
$cd /opt/elasticsearch-1.4.2/bin/
•Edit elasticsearch.yml and add the 2 lines below:
● http.cors.enabled: true
● http.cors.allow-origin set to the correct protocol, hostname, and port; for example http://mycompany.com:8080, not http://mycompany.com:8080/kibana.
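The two added lines in elasticsearch.yml would then look like the following (the origin shown is the example above; use wherever your Kibana is served from):
http.cors.enabled: true
http.cors.allow-origin: "http://mycompany.com:8080"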
$sudo ./elasticsearch &
31. Logstash Configuration
Managing events and logs:
● Collect data (input)
● Parse data (filter)
● Enrich data (filter)
● Store data for search and visualization (output)
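Logstash reads its pipeline from a config file passed with the -f flag; a minimal invocation, assuming the tarball was extracted under /opt/ as on the workshop VM (directory and file name are placeholders):
$cd /opt/logstash/bin
$sudo ./logstash -f logstash.conf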
40. Understanding Grok
•Understanding grok nomenclature.
•The syntax for a grok pattern is %{SYNTAX:SEMANTIC}
•SYNTAX is the name of the pattern that will match your text.
● E.g. 1337 will be matched by the NUMBER pattern, 254.254.254.254 will be matched by the IP pattern.
•SEMANTIC is the identifier you give to the piece of text being
matched.
● E.g. 1337 could be the count and 254.254.254.254 could be a client making a request
%{NUMBER:count} %{IP:client}
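As a quick worked example, applying %{NUMBER:count} %{IP:client} to the event line
1337 254.254.254.254
would extract the fields count => 1337 and client => 254.254.254.254.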
41. Playing with grok filters
•GROK Playground: https://grokdebug.herokuapp.com/
•Apache access.log event:
123.249.19.22 - - [01/Feb/2015:14:12:13 +0000] "GET /manager/html HTTP/1.1" 404 448
"-" "Mozilla/3.0 (compatible; Indy Library)"
•Matching grok:
%{IPV4:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?)" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
•Things get even simpler using grok's bundled Apache pattern:
%{COMBINEDAPACHELOG}
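Inside a Logstash config, the same parsing is a short sketch like this (the geoip step assumes the clientip field that COMBINEDAPACHELOG extracts):
filter {
  grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
  geoip { source => "clientip" }
}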