Elastic SIEM Installation Lab


Security information and event management (SIEM) is a subfield of computer security in which software products and services combine security information management (SIM) and security event management (SEM) to provide real-time analysis of security alerts generated by applications and network hardware.

There are several well-known SIEM products available on the market, but the only issue is that they sit at the expensive end of the cost spectrum and are a privilege to have. So in this article, we try to solve that very problem.

ELK, or the Elastic Stack, is a set of open-source tools for centralized logging that make it handy to search, analyze, and visualize logs from different sources.

Here we configure this on Ubuntu, but it can be configured on any Linux or Windows machine with root privileges.

Elastic SIEM Architecture

We’ll be using Beats to ship the logs to Logstash through a pipeline, which then feeds them to Elasticsearch to be visualized through Kibana.

Kibana and Elasticsearch will be deployed behind Nginx, as it offers features like persistent connections and load balancing that make the setup stable in the long run.

PREREQUISITES

  • Java dependencies
  • Filebeat
  • Logstash
  • Elasticsearch
  • Kibana
  • Nginx
  • OpenSSL (as we will be shipping the data through an encrypted channel)

INSTALLATION AND CONFIGURATION PROCESS

To acquire root privileges, use

sudo -s

Java Dependencies

Elasticsearch requires Java on the machine it runs on. To install Java along with the HTTPS transport and wget packages for APT, use the following command:

apt install -y openjdk-11-jdk wget apt-transport-https curl

Installing Elasticsearch

Note: If the following method does not work for you, or in case you need a different version, please refer to Elasticsearch’s official documentation.

Start by importing the Elasticsearch public GPG key into APT:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

Then add the Elastic repository to the sources.list.d directory:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list

Now update the package index and install Elasticsearch:

sudo apt-get update && sudo apt-get install elasticsearch

Configuring Elasticsearch

By default, Elasticsearch listens for traffic on port 9200. We are going to restrict outside access to our Elasticsearch instance so that outside parties cannot access data or shut down the Elasticsearch cluster through the REST API.

In order to do so, we need to make some modifications to the Elasticsearch configuration file, elasticsearch.yml:

nano /etc/elasticsearch/elasticsearch.yml

In the Network section, uncomment the network.host and http.port lines.

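For a localhost-only instance, the uncommented lines would typically look like this (a sketch; adjust the values to your environment):

network.host: localhost
http.port: 9200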

Now, start and enable Elasticsearch services

systemctl start elasticsearch
systemctl enable elasticsearch

Verify the status of Elasticsearch:

systemctl status elasticsearch
curl -X GET "localhost:9200"

As mentioned above, Elasticsearch listens on port 9200, so you can verify it in the browser as well.


Installing logstash

First, let’s confirm OpenSSL is available, and then install Logstash:

openssl version -a
apt install logstash -y

Edit the /etc/hosts file and add the following line

18.224.44.11    elk-master

Where 18.224.44.11 is the IP address of server elk-master.

Now generate an SSL certificate to secure the log data in transit from the Rsyslog and Filebeat clients to the Logstash server.

mkdir -p /etc/logstash/ssl
cd /etc/logstash/
openssl req -subj '/CN=elk-master/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt
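Optionally, you can sanity-check the generated certificate before moving on (run from /etc/logstash, where we just created it):

openssl x509 -in ssl/logstash-forwarder.crt -noout -subject -dates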

Now, we need to create new configuration files for Logstash, also known as pipelines: ‘filebeat-input.conf’ as the input file from Filebeat, ‘syslog-filter.conf’ for system log processing, and ‘output-elasticsearch.conf’ to define the Elasticsearch output.

cd /etc/logstash/
nano conf.d/filebeat-input.conf
input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}

For the system log data processing, we are going to use a filter plugin named ‘grok’. Create a new configuration file ‘syslog-filter.conf’ in the same directory:

nano conf.d/syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
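To illustrate what this filter does: a hypothetical syslog line such as

Feb  3 12:00:01 elk-master sshd[1234]: Failed password for root from 10.0.0.5 port 22 ssh2

would be parsed into syslog_timestamp (‘Feb  3 12:00:01’), syslog_hostname (‘elk-master’), syslog_program (‘sshd’), syslog_pid (‘1234’), and syslog_message (‘Failed password for root from 10.0.0.5 port 22 ssh2’), and the date plugin then uses syslog_timestamp as the event time.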

Last, create a configuration file ‘output-elasticsearch.conf’ for the Elasticsearch output:

nano conf.d/output-elasticsearch.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
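Before starting the service, you can optionally validate the pipeline files; the binary path below assumes a package install, so adjust it if yours differs:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit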

Now start, enable, and verify the status of the Logstash service.

systemctl start logstash
systemctl enable logstash
systemctl status logstash

Installing Kibana

apt install kibana

Now we need to make some changes in the kibana.yml file, similar to what we did for Elasticsearch:

nano /etc/kibana/kibana.yml

Now uncomment the following attributes

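On a 6.x install sitting behind the Nginx reverse proxy we set up below, the uncommented attributes would typically look like this (a sketch; your values may differ):

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"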

Start and enable the Kibana service:

systemctl enable kibana
systemctl start kibana

Install and Configure NGINX

Install Nginx and ‘apache2-utils’:

apt install nginx apache2-utils -y

Create a new virtual host file named kibana:

nano /etc/nginx/sites-available/kibana

Paste the following

server {
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Now we need to create authentication for the Kibana dashboard, activate the Kibana virtual host configuration, and test our Nginx configuration.

After that enable & restart the Nginx service.

sudo htpasswd -c /etc/nginx/.kibana-user elastic
ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
nginx -t
systemctl enable nginx
systemctl restart nginx
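To confirm the reverse proxy and basic authentication are working, you can probe it with curl, which will prompt for the password you set with htpasswd:

curl -I -u elastic http://localhost

A response from Kibana means Nginx is proxying requests through to port 5601.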

Install and Configure Filebeat

apt-get update && apt-get install filebeat

Open the Filebeat configuration file ‘filebeat.yml’:

nano /etc/filebeat/filebeat.yml

Enable the filebeat prospectors by changing the ‘enabled’ line value to ‘true’.

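With the Filebeat 6.x prospector syntax, the relevant part of filebeat.yml would typically look like this (a sketch; the log paths are an assumption):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log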

Next, head to the Elasticsearch output section and add the following lines, then point setup.kibana at your Kibana host:

output.elasticsearch:
  hosts: ["your IP:9200"]
  username: "elastic"
  password: "123"

setup.kibana:
  host: "your IP:5601"

Enable and configure the Elasticsearch module

sudo filebeat modules enable elasticsearch
sudo filebeat setup
sudo service filebeat start

Now copy the Logstash certificate file, logstash-forwarder.crt, to the /etc/filebeat directory and restart Filebeat:

cp /etc/logstash/ssl/logstash-forwarder.crt /etc/filebeat/
sudo service filebeat restart
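For Filebeat to actually ship logs over the encrypted channel to the Beats input on port 5443 that we configured earlier, its Logstash output section would look roughly like this (a sketch, using the elk-master hostname from /etc/hosts; note that Filebeat allows only one output at a time, so the Elasticsearch output above would need to be commented out first):

output.logstash:
  hosts: ["elk-master:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]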

Routing the Logs to Elasticsearch

To route the logs from rsyslog to Logstash, we first need to set up log forwarding between Logstash and Elasticsearch.

To do this, we create a configuration file for Logstash:

cd /etc/logstash/conf.d
nano logstash.conf
input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

# The filter pipeline stays empty here; no formatting is done.
filter { }

# Every single log will be forwarded to Elasticsearch. If you are using another port, you should specify it here.
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "127.0.0.1:9200" ]
    }
  }
}

Now restart the Logstash service and check that it is listening on UDP port 10514:

systemctl restart logstash
netstat -na | grep 10514

Routing from rsyslog to Logstash

Rsyslog can transform logs using templates before forwarding them. To forward logs from rsyslog, head over to the /etc/rsyslog.d directory and create a new file named 70-output.conf:

cd /etc/rsyslog.d
nano 70-output.conf
template(name="json-template"
  type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
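The template alone does not forward anything; the same file also needs a forwarding rule pointing at the Logstash UDP input we configured earlier (a sketch matching the 127.0.0.1:10514 JSON input above):

*.* action(type="omfwd" target="127.0.0.1" port="10514" protocol="udp" template="json-template")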

Restart the rsyslog service, then query Elasticsearch to confirm logs are arriving:

systemctl restart rsyslog
curl -XGET 'http://localhost:9200/logstash-*/_search?q=*&pretty'
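If you want to generate a test event end to end, logger writes straight to syslog (the tag here is arbitrary):

logger -t siem-test "Hello from rsyslog"
curl -XGET 'http://localhost:9200/logstash-*/_search?q=siem-test&pretty'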

And the Elastic SIEM is complete. Now, to visualize the logs and segregate them according to your needs, create index patterns in Kibana, just as in other SIEM software available on the market.

As the Elastic Stack is open source and free to use, the possibilities are nearly endless.

Take their threat-hunting module, endpoint security module, or even machine learning to automate the majority of tasks, but more on that in future posts.
