This section describes how to install and configure logging and visualization for the oudsm Helm Chart deployment.
The ELK stack consists of Elasticsearch, Logstash, and Kibana. Using ELK, you can gain real-time insights from the log data generated by your applications.
ELK can be enabled for environments created using the provided Helm charts. The example below demonstrates the installation and configuration of ELK for the oudsm chart.
Create a Kubernetes secret to access the required images on hub.docker.com:
Note: You must first have a user account on hub.docker.com.
$ kubectl create secret docker-registry "dockercred" --docker-server="https://index.docker.io/v1/" --docker-username="<docker_username>" --docker-password=<password> --docker-email=<docker_email_credentials> --namespace=<domain_namespace>
For example:
$ kubectl create secret docker-registry "dockercred" --docker-server="https://index.docker.io/v1/" --docker-username="username" --docker-password=<password> --docker-email=user@example.com --namespace=oudsmns
The output will look similar to the following:
secret/dockercred created
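To confirm the secret exists before continuing, you can optionally query it (this check assumes the oudsmns namespace from the example):
$ kubectl get secret dockercred --namespace oudsmns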
Create a directory on the persistent volume to store the ELK log files:
$ mkdir -p <persistent_volume>/oudsm_elk_data
$ chmod 777 <persistent_volume>/oudsm_elk_data
For example:
$ mkdir -p /scratch/shared/oudsm_elk_data
$ chmod 777 /scratch/shared/oudsm_elk_data
Navigate to the $WORKDIR/kubernetes/helm directory and create a logging-override-values.yaml file with the following:
elk:
  enabled: true
  imagePullSecrets:
    - name: dockercred

elkVolume:
  # If enabled, it will use the persistent volume.
  # If value is false, PV and PVC would not be used and there would not be any mount point available for config.
  enabled: true
  type: filesystem
  filesystem:
    hostPath:
      path: <persistent_volume>/oudsm_elk_data
For example:
elk:
  enabled: true
  imagePullSecrets:
    - name: dockercred

elkVolume:
  # If enabled, it will use the persistent volume.
  # If value is false, PV and PVC would not be used and there would not be any mount point available for config.
  enabled: true
  type: filesystem
  filesystem:
    hostPath:
      path: /scratch/shared/oudsm_elk_data
If using NFS for the persistent volume, change the elkVolume section as follows:
elkVolume:
  # If enabled, it will use the persistent volume.
  # If value is false, PV and PVC would not be used and there would not be any mount point available for config.
  enabled: true
  type: networkstorage
  networkstorage:
    nfs:
      server: myserver
      path: <persistent_volume>/oudsm_elk_data
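Before running the upgrade with the NFS option, you can optionally confirm that the server exports the path. This check assumes the example server name myserver and that the NFS client utilities are installed on the host:
$ showmount -e myserver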
Run the following command to upgrade the OUDSM deployment with the ELK configuration:
$ helm upgrade --namespace <namespace> --values logging-override-values.yaml <release_name> oudsm --reuse-values
For example:
$ helm upgrade --namespace oudsmns --values logging-override-values.yaml oudsm oudsm --reuse-values
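After the upgrade completes, you can optionally confirm that the ELK overrides were applied to the release (this example assumes the oudsm release name shown above):
$ helm get values oudsm --namespace oudsmns
This displays the user-supplied values for the release, which should now include the elk and elkVolume settings.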
Run the following command to verify that the Elasticsearch, Logstash, and Kibana pods are running:
$ kubectl get pods -o wide -n <namespace> | grep 'es\|kibana\|logstash'
For example:
$ kubectl get pods -o wide -n oudsmns | grep 'es\|kibana\|logstash'
The output will look similar to the following:
NAME                              READY   STATUS    RESTARTS   AGE    IP             NODE            NOMINATED NODE   READINESS GATES
oudsm-es-cluster-0                1/1     Running   0          4m5s   10.244.1.124   <worker-node>   <none>           <none>
oudsm-kibana-7bf95b4c45-sfst6     1/1     Running   1          4m5s   10.244.2.137   <worker-node>   <none>           <none>
oudsm-logstash-5bb6bc67bf-l4mdv   1/1     Running   0          4m5s   10.244.2.138   <worker-node>   <none>           <none>
From the above output, identify the Elasticsearch pod, for example: oudsm-es-cluster-0.
Run the port-forward command so that Elasticsearch can be accessed on port 9200:
$ kubectl port-forward oudsm-es-cluster-0 9200:9200 --namespace=<namespace> &
For example:
$ kubectl port-forward oudsm-es-cluster-0 9200:9200 --namespace=oudsmns &
The output will look similar to the following:
[1] 98458
bash-4.2$ Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
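The port-forward runs as a background job in the current shell, as shown by the [1] job number in the output above. When you have finished querying Elasticsearch, it can be stopped with standard shell job control, for example:
$ kill %1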
Verify that Elasticsearch is running by interrogating port 9200:
$ curl http://localhost:9200
The output will look similar to the following:
{
  "name" : "oudsm-es-cluster-0",
  "cluster_name" : "OUD-elk",
  "cluster_uuid" : "TIKKJuK4QdWcOZrEOA1zeQ",
  "version" : {
    "number" : "6.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "65b6179",
    "build_date" : "2019-05-15T20:06:13.172855Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
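You can also optionally check the overall cluster health using the standard Elasticsearch health API:
$ curl "http://localhost:9200/_cluster/health?pretty"
A status of green or yellow in the response indicates the cluster is able to serve requests.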
List the Kibana application service using the following command:
$ kubectl get svc -o wide -n <namespace> | grep kibana
For example:
$ kubectl get svc -o wide -n oudsmns | grep kibana
The output will look similar to the following:
oudsm-kibana NodePort 10.101.248.248 <none> 5601:31195/TCP 7m56s app=kibana
In this example, the port to access the Kibana application via a web browser is 31195.
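To retrieve the NodePort value directly, for example for use in a script, you can use a jsonpath query (assuming the service name and namespace shown above):
$ kubectl get svc oudsm-kibana -n oudsmns -o jsonpath='{.spec.ports[0].nodePort}'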
Access the Kibana console in a browser with: http://${MASTERNODE-HOSTNAME}:${KIBANA-PORT}/app/kibana.
From the Kibana portal navigate to Management > Kibana > Index Patterns.
In the Create Index Pattern page enter * for the Index pattern and click Next Step.
In the Configure settings page, from the Time Filter field name drop down menu select @timestamp and click Create index pattern.
Once the index pattern is created, click on Discover in the navigation menu to view the OUDSM logs.
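In the Discover search bar you can optionally narrow the results with a Lucene query. For example, assuming the default Logstash message field, the following shows only entries containing the string OUDSM:
message:*OUDSM*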